[ { "msg_contents": "I propose the attached patch to shake up the wording in the connection \ntype section of pg_hba.conf.sample a bit. After the hostgssenc part was \nadded on, the whole thing became a bit wordy, and it's also a bit \ninaccurate for example in that the current wording for \"host\" appears to \nsay that it does not apply to GSS-encrypted connections.", "msg_date": "Wed, 28 Apr 2021 07:51:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pg_hba.conf.sample wording improvement" }, { "msg_contents": "On Wed, 2021-04-28 at 07:51 +0200, Peter Eisentraut wrote:\n> I propose the attached patch to shake up the wording in the connection \n> type section of pg_hba.conf.sample a bit. After the hostgssenc part was \n> added on, the whole thing became a bit wordy, and it's also a bit \n> inaccurate for example in that the current wording for \"host\" appears to \n> say that it does not apply to GSS-encrypted connections.\n\n+1\n\nThanks for taking care of things like that.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 28 Apr 2021 08:54:51 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "On 2021-Apr-28, Peter Eisentraut wrote:\n\n> I propose the attached patch to shake up the wording in the connection type\n> section of pg_hba.conf.sample a bit. After the hostgssenc part was added\n> on, the whole thing became a bit wordy, and it's also a bit inaccurate for\n> example in that the current wording for \"host\" appears to say that it does\n> not apply to GSS-encrypted connections.\n\nYeah, that's a clear improvement.\n\nLooking at it now, I wonder how well do the \"hostno\" options work. If I\nsay \"hostnogssenc\", is an SSL-encrypted socket good? If I say\n\"hostnossl\", is a GSS-encrypted socket good? 
If so, how does that make\nsense?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n", "msg_date": "Wed, 28 Apr 2021 10:09:46 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I propose the attached patch to shake up the wording in the connection \n> type section of pg_hba.conf.sample a bit. After the hostgssenc part was \n> added on, the whole thing became a bit wordy, and it's also a bit \n> inaccurate for example in that the current wording for \"host\" appears to \n> say that it does not apply to GSS-encrypted connections.\n\n+1 for revising it in this general way. I notice you omitted \"TCP/IP\"\nfrom the last line though:\n\n+# - \"hostnogssenc\" is a not GSSAPI-encrypted socket\n\nwhich doesn't seem consistent.\n\nAnother thought is to switch the phrase order:\n\n+# - \"local\" is a Unix-domain socket\n+# - \"host\" is a TCP/IP socket (encrypted or not)\n+# - \"hostssl\" is a TCP/IP socket that is SSL-encrypted\n+# - \"hostnossl\" is a TCP/IP socket that is not SSL-encrypted\n+# - \"hostgssenc\" is a TCP/IP socket that is GSSAPI-encrypted\n+# - \"hostnogssenc\" is a TCP/IP socket that is not GSSAPI-encrypted\n\nI'm not wedded to that idea, but it seems to help reduce random\nvariations between the wordings of these lines.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 10:20:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Looking at it now, I wonder how well do the \"hostno\" options work. If I\n> say \"hostnogssenc\", is an SSL-encrypted socket good?
If I say\n> \"hostnossl\", is a GSS-encrypted socket good? If so, how does that make\n> sense?\n\nKind of off-topic for this thread, but I wonder if we should introduce\n\"hostenc\" and \"hostnoenc\" to mean \"encrypted (or not), and I don't care\nby which method\". The addition of GSS has made it painful to express\nthose concepts.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 10:30:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "On 28.04.21 16:20, Tom Lane wrote:\n> Another thought is to switch the phrase order:\n> \n> +# - \"local\" is a Unix-domain socket\n> +# - \"host\" is a TCP/IP socket (encrypted or not)\n> +# - \"hostssl\" is a TCP/IP socket that is SSL-encrypted\n> +# - \"hostnossl\" is a TCP/IP socket that is not SSL-encrypted\n> +# - \"hostgssenc\" is a TCP/IP socket that is GSSAPI-encrypted\n> +# - \"hostnogssenc\" is a TCP/IP socket that is not GSSAPI-encrypted\n> \n> I'm not wedded to that idea, but it seems to help reduce random\n> variations between the wordings of these lines.\n\ndone that way\n\n\n", "msg_date": "Thu, 29 Apr 2021 07:06:05 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "On 28.04.21 16:09, Alvaro Herrera wrote:\n> Looking at it now, I wonder how well do the \"hostno\" options work. If I\n> say \"hostnogssenc\", is an SSL-encrypted socket good? If I say\n> \"hostnossl\", is a GSS-encrypted socket good? If so, how does that make\n> sense?\n\nI think for example if you want to enforce SSL connections, then writing \n\"hostnossl ... reject\" would be sensible. 
That would also reject \nGSS-encrypted connections, but that would be what you want in that scenario.\n\n\n\n", "msg_date": "Thu, 29 Apr 2021 07:07:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "On Thu, Apr 29, 2021 at 7:08 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 28.04.21 16:09, Alvaro Herrera wrote:\n> > Looking at it now, I wonder how well do the \"hostno\" options work. If I\n> > say \"hostnogssenc\", is an SSL-encrypted socket good? If I say\n> > \"hostnossl\", is a GSS-encrypted socket good? If so, how does that make\n> > sense?\n>\n> I think for example if you want to enforce SSL connections, then writing\n> \"hostnossl ... reject\" would be sensible. That would also reject\n> GSS-encrypted connections, but that would be what you want in that scenario.\n\nI'd say the interface has become a lot less well-matching now that we\nhave two separate settings for it. For example right now it's more\ncomplex to say \"reject anything not encrypted\", which I bet is what a\nlot of people would want. They don't particularly care if it's gss\nencrypted or ssl encrypted.\n\nPerhaps what we want to do (obviously not for 14) is to allow you to\nspecify more than one entry in the first column, so you could say\n\"hostssl,hostgssenc\" on the same row? 
That would give some strange\nresults with the \"no\" mappings, but it might work if used right?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:47:08 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Thu, Apr 29, 2021 at 7:08 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > On 28.04.21 16:09, Alvaro Herrera wrote:\n> > > Looking at it now, I wonder how well do the \"hostno\" options work. If I\n> > > say \"hostnogssenc\", is an SSL-encrypted socket good? If I say\n> > > \"hostnossl\", is a GSS-encrypted socket good? If so, how does that make\n> > > sense?\n> >\n> > I think for example if you want to enforce SSL connections, then writing\n> > \"hostnossl ... reject\" would be sensible. That would also reject\n> > GSS-encrypted connections, but that would be what you want in that scenario.\n> \n> I'd say the interface has become a lot less well-matching now that we\n> have two separate settings for it. For example right now it's more\n> complex to say \"reject anything not encrypted\", which I bet is what a\n> lot of people would want. They don't particularly care if it's gss\n> encrypted or ssl encrypted.\n\nI'm not really sure that I agree it's such an issue, particularly since\nyou have to come up with a way to specify the auth method to use somehow\ntoo as we haven't got any fallback mechanism or anything like that.\nWhile you might use cert-based auth or SCRAM for TLS connections, it\nisn't the case that you can use SCRAM with a GSS encrypted connection.\n\n> Perhaps what we want to do (obviously not for 14) is to allow you to\n> specify more than one entry in the first column, so you could say\n> \"hostssl,hostgssenc\" on the same row? 
That would give some strange\n> results with the \"no\" mappings, but it might work if used right?\n\nIn general, I'm not against the idea of giving more options but I'm just\nnot sure that it's a real use-case when you consider that the auth\nmethod also has to be specified. I also don't recall anyone showing up\nasking about how they could specify \"encrypted but I don't care how\".\n\nThanks,\n\nStephen", "msg_date": "Thu, 29 Apr 2021 11:06:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "On Wed, Apr 28, 2021 at 07:51:43AM +0200, Peter Eisentraut wrote:\n> I propose the attached patch to shake up the wording in the connection type\n> section of pg_hba.conf.sample a bit. After the hostgssenc part was added\n> on, the whole thing became a bit wordy, and it's also a bit inaccurate for\n> example in that the current wording for \"host\" appears to say that it does\n> not apply to GSS-encrypted connections.\n\nYes, much better.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:29:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" }, { "msg_contents": "On Thu, Apr 29, 2021 at 5:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Magnus Hagander (magnus@hagander.net) wrote:\n> > On Thu, Apr 29, 2021 at 7:08 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > > On 28.04.21 16:09, Alvaro Herrera wrote:\n> > > > Looking at it now, I wonder how well do the \"hostno\" options work. If I\n> > > > say \"hostnogssenc\", is an SSL-encrypted socket good? If I say\n> > > > \"hostnossl\", is a GSS-encrypted socket good? 
If so, how does that make\n> > > > sense?\n> > >\n> > > I think for example if you want to enforce SSL connections, then writing\n> > > \"hostnossl ... reject\" would be sensible. That would also reject\n> > > GSS-encrypted connections, but that would be what you want in that scenario.\n> >\n> > I'd say the interface has become a lot less well-matching now that we\n> > have two separate settings for it. For example right now it's more\n> > complex to say \"reject anything not encrypted\", which I bet is what a\n> > lot of people would want. They don't particularly care if it's gss\n> > encrypted or ssl encrypted.\n>\n> I'm not really sure that I agree it's such an issue, particularly since\n> you have to come up with a way to specify the auth method to use somehow\n> too as we haven't got any fallback mechanism or anything like that.\n> While you might use cert-based auth or SCRAM for TLS connections, it\n> isn't the case that you can use SCRAM with a GSS encrypted connection.\n>\n> > Perhaps what we want to do (obviously not for 14) is to allow you to\n> > specify more than one entry in the first column, so you could say\n> > \"hostssl,hostgssenc\" on the same row? That would give some strange\n> > results with the \"no\" mappings, but it might work if used right?\n>\n> In general, I'm not against the idea of giving more options but I'm just\n> not sure that it's a real use-case when you consider that the auth\n> method also has to be specified. I also don't recall anyone showing up\n> asking about how they could specify \"encrypted but I don't care how\".\n\nTBH, I bet that is a lot because people don't use gss encryption, or\neven know it exists. 
Which is sad, because it's darn convenient once\nyou have Kerberos set up...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 17 May 2021 22:10:39 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf.sample wording improvement" } ]
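[Editor's note on the thread above: the connection-type keywords under discussion combine with pg_hba.conf's top-to-bottom, first-match-wins evaluation. A rough sketch of the "enforce SSL" idea Peter describes — the databases, users, addresses, and auth methods here are invented placeholders, not part of the patch:

```
# pg_hba.conf is read top to bottom; the first matching record wins.

# Reject any TCP/IP connection that is not SSL-encrypted.  As noted in
# the thread, this also rejects GSS-encrypted connections.
hostnossl  all  all  0.0.0.0/0  reject

# SSL-encrypted TCP/IP connections may authenticate normally.
hostssl    all  all  0.0.0.0/0  scram-sha-256

# Unix-domain sockets are unaffected by the host* keywords.
local      all  all             peer
```

With the wording Tom proposes, each first-column keyword reads as "a TCP/IP socket that is (not) …-encrypted", which is exactly the distinction the rules in such a sketch rely on.]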
[ { "msg_contents": "Hello.\n\n0001: I found some typos in a error message and a comment.\n\nmultirangetypes.c: 1420\n> errmsg(\"range_intersect_agg must be called with a multirange\")));\n\nThis \"range_intersect_agg\" looks like a typo of \"multirange_..\".\n\noperatorcmds.c:303\n> * Look up a join estimator function ny name, and verify that it has the\n\n\"ny\" looks like a typo of \"by\".\n\n\n\n0002: The following messages are substantially same and are uselessly\nsplit into separate messages. I'm not sure any compiler complains\nabout using %zu for int, explicit casting would work in that case.\n\nbe-secure-gssapi.c:351\n>\t(errmsg(\"oversize GSSAPI packet sent by the client (%zu > %zu)\",\n>\t\t\t(size_t) input.length,\n>\t\t\tPQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))));\nbe-secure-gssapi.c:570\n>\t(errmsg(\"oversize GSSAPI packet sent by the client (%zu > %d)\",\n>\t\t\t(size_t) input.length,\n>\t\t\tPQ_GSS_RECV_BUFFER_SIZE)));\n\n\n\n0003: The messages below seems to be a bit unclear. I'm not sure they\nworth doing.\n\nconversioncmds.c: 130\n errmsg(\"encoding conversion function %s returned incorrect result for empty input\",\n\nThis is not wrong at all, but another message just above is saying\nthat \"encoding conversion function %s must return type %s\". Why\naren't we explicit here, like this?\n\n\"encoding conversion function %s must return zero for empty input\"\n\n\ntypecmds.c:4294\n>\tif (requireSuper)\n>\t\tif (!superuser())\n>\t\t\tereport(ERROR,\n>\t\t\t\t\t errmsg(\"must be superuser to alter a type\")));\n\nWhere, requireSuper varies depending on the set of operations but the\ndescription looks like describing general behavior. 
I'm not sure but\nsomething like the following might be better?\n\n+\t\t errmsg(\"must be superuser to perform all operations\")));\n+\t\t errmsg(\"some of the operations require superuser privilege\")));\n\nAny opinions or suggestions?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 28 Apr 2021 17:36:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Some doubious error messages and comments" }, { "msg_contents": "On Wed, Apr 28, 2021 at 05:36:33PM +0900, Kyotaro Horiguchi wrote:\n> 0001: I found some typos in a error message and a comment.\n> \n> multirangetypes.c: 1420\n> > errmsg(\"range_intersect_agg must be called with a multirange\")));\n> \n> This \"range_intersect_agg\" looks like a typo of \"multirange_..\".\n> \n> operatorcmds.c:303\n> > * Look up a join estimator function ny name, and verify that it has the\n> \n> \"ny\" looks like a typo of \"by\".\n\n\"ny name\" shows up a 2nd time.\n\nI have another \"comment typos\" patch - maybe someone will want to push them\ntogether.\n\ncommit 32e979c652c68ca5e3a7f308d677058e0c08547b\nAuthor: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Wed Apr 28 17:23:52 2021 +0900\n\n Fix typos\n \n ny name: 321eed5f0f7563a0cabb3d7a98132856287c1ad1\n multirange: 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n\ndiff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c\nindex 809043c5d1..fbd7d8d062 100644\n--- a/src/backend/commands/operatorcmds.c\n+++ b/src/backend/commands/operatorcmds.c\n@@ -265,7 +265,7 @@ DefineOperator(List *names, List *parameters)\n }\n \n /*\n- * Look up a restriction estimator function ny name, and verify that it has\n+ * Look up a restriction estimator function by name, and verify that it has\n * the correct signature and we have the permissions to attach it to an\n * operator.\n */\n@@ -300,7 +300,7 @@ ValidateRestrictionEstimator(List *restrictionName)\n }\n \n 
/*\n- * Look up a join estimator function ny name, and verify that it has the\n+ * Look up a join estimator function by name, and verify that it has the\n * correct signature and we have the permissions to attach it to an\n * operator.\n */\ndiff --git a/src/backend/utils/adt/multirangetypes.c b/src/backend/utils/adt/multirangetypes.c\nindex 0b81649779..2583ddeedf 100644\n--- a/src/backend/utils/adt/multirangetypes.c\n+++ b/src/backend/utils/adt/multirangetypes.c\n@@ -1417,7 +1417,7 @@ multirange_intersect_agg_transfn(PG_FUNCTION_ARGS)\n \tif (!type_is_multirange(mltrngtypoid))\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_DATATYPE_MISMATCH),\n-\t\t\t\t errmsg(\"range_intersect_agg must be called with a multirange\")));\n+\t\t\t\t errmsg(\"multirange_intersect_agg must be called with a multirange\")));\n \n \ttypcache = multirange_get_typcache(fcinfo, mltrngtypoid);\n \n\ncommit 8247b4034ed4c68241be9fbdec249bc967ceafd4\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Tue Apr 27 07:57:50 2021 -0500\n\n Comment typos: extended stats a4d75c86b and 518442c7f\n\ndiff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c\nindex 9dd30370da..eb9e63f4a8 100644\n--- a/src/backend/parser/parse_utilcmd.c\n+++ b/src/backend/parser/parse_utilcmd.c\n@@ -1943,7 +1943,7 @@ generateClonedExtStatsStmt(RangeVar *heapRel, Oid heapRelid,\n \t * simply append them after simple column references.\n \t *\n \t * XXX Some places during build/estimation treat expressions as if they\n-\t * are before atttibutes, but for the CREATE command that's entirely\n+\t * are before attributes, but for the CREATE command that's entirely\n \t * irrelevant.\n \t */\n \tdatum = SysCacheGetAttr(STATEXTOID, ht_stats,\ndiff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c\nindex 7e11cb9d5f..5e53783ea6 100644\n--- a/src/backend/statistics/extended_stats.c\n+++ b/src/backend/statistics/extended_stats.c\n@@ -1796,7 +1796,7 @@ 
statext_mcv_clauselist_selectivity(PlannerInfo *root, List *clauses, int varReli\n \t\t\t\tcontinue;\n \n \t\t\t/*\n-\t\t\t * Now we know the clause is compatible (we have either atttnums\n+\t\t\t * Now we know the clause is compatible (we have either attnums\n \t\t\t * or expressions extracted from it), and was not estimated yet.\n \t\t\t */\n \n\n\n", "msg_date": "Wed, 28 Apr 2021 08:11:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Some doubious error messages and comments" }, { "msg_contents": "On Wed, Apr 28, 2021 at 08:11:47AM -0500, Justin Pryzby wrote:\n> On Wed, Apr 28, 2021 at 05:36:33PM +0900, Kyotaro Horiguchi wrote:\n>> 0001: I found some typos in a error message and a comment.\n>> \n>> multirangetypes.c: 1420\n>> > errmsg(\"range_intersect_agg must be called with a multirange\")));\n>> \n>> This \"range_intersect_agg\" looks like a typo of \"multirange_..\".\n>> \n>> operatorcmds.c:303\n>> > * Look up a join estimator function ny name, and verify that it has the\n>> \n>> \"ny\" looks like a typo of \"by\".\n> \n> \"ny name\" shows up a 2nd time.\n> \n> I have another \"comment typos\" patch - maybe someone will want to push them\n> together.\n\nThanks. Two of them were already fixed, two of them are correct but\nwent missing so I have applied a fix for these. 
The change in\nmultirange_intersect_agg_transfn() is incorrect as the error refers to\nthe SQL function range_intersect_agg().\n--\nMichael", "msg_date": "Mon, 10 May 2021 15:52:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Some doubious error messages and comments" }, { "msg_contents": "At Mon, 10 May 2021 15:52:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Apr 28, 2021 at 08:11:47AM -0500, Justin Pryzby wrote:\n> > On Wed, Apr 28, 2021 at 05:36:33PM +0900, Kyotaro Horiguchi wrote:\n> >> 0001: I found some typos in a error message and a comment.\n> >> \n> >> multirangetypes.c: 1420\n> >> > errmsg(\"range_intersect_agg must be called with a multirange\")));\n> >> \n> >> This \"range_intersect_agg\" looks like a typo of \"multirange_..\".\n> >> \n> >> operatorcmds.c:303\n> >> > * Look up a join estimator function ny name, and verify that it has the\n> >> \n> >> \"ny\" looks like a typo of \"by\".\n> > \n> > \"ny name\" shows up a 2nd time.\n> > \n> > I have another \"comment typos\" patch - maybe someone will want to push them\n> > together.\n> \n> Thanks. Two of them were already fixed, two of them are correct but\n> went missing so I have applied a fix for these. The change in\n\nThanks.\n\n> multirange_intersect_agg_transfn() is incorrect as the error refers to\n> the SQL function range_intersect_agg().\n\nUh!! It seems to be a never-shown message. Thaks for checking that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 14 May 2021 15:06:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some doubious error messages and comments" } ]
[ { "msg_contents": "Hi,\n\nOne of my customers has an issue with logical replication. As $SUBJECT\nsays, multiple table synchronization happens serially. To be honest, it\ndoesn't do this every time. It happens when the tables are big enough.\n\nThis issue was already described on this thread (from 2017):\nhttps://www.postgresql.org/message-id/flat/CAD21AoC2KJdavS7MFffmSsRc1dn3Vg_0xmuc=UpBrZ-_MUxh-Q@mail.gmail.com\n\nThis thread was closed by a commit (\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6c2003f8a1bbc7c192a2e83ec51581c018aa162f)\nwhich apparently fixed the issue for the OP.\n\nAttached is a small test case where it still happens for me on 12.6, 11.11,\nand 10.16. I can't make it happen on 13.2. I don't know why. It may imply\nbigger tables for 13.2, but why? I simply don't know.\n\nAnyway, the issue at the end of the test case is that synchronizations\nsometimes happen serially. You can see on the process list that one\nwalsender process is waiting in \"idle in transaction\" state:\n\nguillau+ 486822 2227 0 10:44 ? 00:00:00\n/opt/postgresql/12/bin/postgres\nguillau+ 486824 486822 0 10:44 ? 00:00:01 postgres: testcase:\ncheckpointer\nguillau+ 486825 486822 0 10:44 ? 00:00:04 postgres: testcase:\nbackground writer\nguillau+ 486826 486822 1 10:44 ? 00:00:06 postgres: testcase:\nwalwriter\nguillau+ 486827 486822 0 10:44 ? 00:00:00 postgres: testcase:\nautovacuum launcher\nguillau+ 486828 486822 0 10:44 ? 00:00:00 postgres: testcase:\nstats collector\nguillau+ 486829 486822 0 10:44 ? 00:00:00 postgres: testcase:\nlogical replication launcher\nguillau+ 489822 486822 0 10:55 ? 00:00:00 postgres: testcase:\nlogical replication worker for subscription 16436\nguillau+ 489824 486822 10 10:55 ? 00:00:01 postgres: testcase:\nwalsender repuser ::1(38770) idle\nguillau+ 489825 486822 22 10:55 ? 00:00:02 postgres: testcase:\nlogical replication worker for subscription 16436 sync 16416\nguillau+ 489826 486822 8 10:55 ? 
00:00:00 postgres: testcase:\nwalsender repuser ::1(38772) COPY\nguillau+ 489827 486822 0 10:55 ? 00:00:00 postgres: testcase:\nlogical replication worker for subscription 16436 sync 16427\nguillau+ 489828 486822 0 10:55 ? 00:00:00 postgres: testcase:\nwalsender repuser ::1(38774) idle in transaction waiting\n\nAnd the log says (from the start of the subscription):\n\n2021-04-28 10:55:32.337 CEST [489822] LOG: logical replication apply\nworker for subscription \"sub\" has started\n2021-04-28 10:55:32.342 CEST [489824] LOG: duration: 0.426 ms statement:\nSELECT pg_catalog.set_config('search_path', '', false);\n2021-04-28 10:55:32.342 CEST [489824] LOG: received replication command:\nIDENTIFY_SYSTEM\n2021-04-28 10:55:32.342 CEST [489824] LOG: received replication command:\nSTART_REPLICATION SLOT \"sub\" LOGICAL 0/0 (proto_version '1',\npublication_names '\"pub\"')\n2021-04-28 10:55:32.342 CEST [489824] LOG: starting logical decoding for\nslot \"sub\"\n2021-04-28 10:55:32.342 CEST [489824] DETAIL: Streaming transactions\ncommitting after 1/FF5D8130, reading WAL from 1/FF5D80F8.\n2021-04-28 10:55:32.342 CEST [489824] LOG: logical decoding found\nconsistent point at 1/FF5D80F8\n2021-04-28 10:55:32.342 CEST [489824] DETAIL: There are no running\ntransactions.\n2021-04-28 10:55:32.345 CEST [489825] LOG: logical replication table\nsynchronization worker for subscription \"sub\", table \"foo\" has started\n2021-04-28 10:55:32.348 CEST [489826] LOG: duration: 0.315 ms statement:\nSELECT pg_catalog.set_config('search_path', '', false);\n2021-04-28 10:55:32.349 CEST [489826] LOG: duration: 0.041 ms statement:\nBEGIN READ ONLY ISOLATION LEVEL REPEATABLE READ\n2021-04-28 10:55:32.349 CEST [489826] LOG: received replication command:\nCREATE_REPLICATION_SLOT \"sub_16436_sync_16416\" TEMPORARY LOGICAL pgoutput\nUSE_SNAPSHOT\n2021-04-28 10:55:32.355 CEST [489827] LOG: logical replication table\nsynchronization worker for subscription \"sub\", table \"bar\" has started\n2021-04-28 
10:55:32.359 CEST [489828] LOG: duration: 0.431 ms statement:\nSELECT pg_catalog.set_config('search_path', '', false);\n2021-04-28 10:55:32.359 CEST [489828] LOG: duration: 0.048 ms statement:\nBEGIN READ ONLY ISOLATION LEVEL REPEATABLE READ\n2021-04-28 10:55:32.360 CEST [489828] LOG: received replication command:\nCREATE_REPLICATION_SLOT \"sub_16436_sync_16427\" TEMPORARY LOGICAL pgoutput\nUSE_SNAPSHOT\n2021-04-28 10:55:32.407 CEST [489826] LOG: logical decoding found\nconsistent point at 1/FF602880\n2021-04-28 10:55:32.407 CEST [489826] DETAIL: There are no running\ntransactions.\n2021-04-28 10:55:32.409 CEST [489826] LOG: duration: 1.262 ms statement:\nSELECT c.oid, c.relreplident FROM pg_catalog.pg_class c INNER JOIN\npg_catalog.pg_namespace n ON (c.relnamespace = n.oid) WHERE\nn.nspname = 's01' AND c.relname = 'foo' AND c.relkind = 'r'\n2021-04-28 10:55:32.410 CEST [489826] LOG: duration: 1.347 ms statement:\nSELECT a.attname, a.atttypid, a.atttypmod, a.attnum =\nANY(i.indkey) FROM pg_catalog.pg_attribute a LEFT JOIN\npg_catalog.pg_index i ON (i.indexrelid =\npg_get_replica_identity_index(16390)) WHERE a.attnum > 0::pg_catalog.int2\nAND NOT a. attisdropped AND a.attgenerated = '' AND a.attrelid\n= 16390 ORDER BY a.attnum\n2021-04-28 10:55:32.484 CEST [489828] LOG: logical decoding found initial\nstarting point at 1/FF63D168\n2021-04-28 10:55:32.484 CEST [489828] DETAIL: Waiting for transactions\n(approximately 1) older than 503 to end.\n2021-04-28 10:55:33.484 CEST [489828] LOG: process 489828 still waiting\nfor ShareLock on transaction 502 after 1000.103 ms\n2021-04-28 10:55:33.484 CEST [489828] DETAIL: Process holding the lock:\n489825. Wait queue: 489828.\n2021-04-28 10:55:33.496 CEST [489828] LOG: process 489828 still waiting\nfor ShareLock on transaction 502 after 1012.089 ms\n\nAnd it logs the \"still waiting\" message as long as the first table is being\nsynchronized. 
Once this is done, it releases the lock, and the\nsynchronization of the second table starts.\n\nIs there something I didn't understand on the previous thread? I'd like to\nknow why serial synchronization happens, and if there's a way to avoid it.\n\nThanks.\n\nRegards.\n\n\n-- \nGuillaume.", "msg_date": "Wed, 28 Apr 2021 11:12:51 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "\"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "Le mer. 28 avr. 2021 à 11:12, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Hi,\n>\n> One of my customers has an issue with logical replication. As $SUBJECT\n> says, multiple table synchronization happens serially. To be honest, it\n> doesn't do this every time. It happens when the tables are big enough.\n>\n> This issue was already described on this thread (from 2017):\n> https://www.postgresql.org/message-id/flat/CAD21AoC2KJdavS7MFffmSsRc1dn3Vg_0xmuc=UpBrZ-_MUxh-Q@mail.gmail.com\n>\n> This thread was closed by a commit (\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6c2003f8a1bbc7c192a2e83ec51581c018aa162f)\n> which apparently fixed the issue for the OP.\n>\n> Attached is a small test case where it still happens for me on 12.6,\n> 11.11, and 10.16. I can't make it happen on 13.2. I don't know why. It may\n> imply bigger tables for 13.2, but why? I simply don't know.\n>\n>\nActually, it also happens on 13.2.\n\n\n-- \nGuillaume.\n\nLe mer. 28 avr. 2021 à 11:12, Guillaume Lelarge <guillaume@lelarge.info> a écrit :Hi,One of my customers has an issue with logical replication. As $SUBJECT says, multiple table synchronization happens serially. To be honest, it doesn't do this every time. 
It happens when the tables are big enough.This issue was already described on this thread (from 2017): https://www.postgresql.org/message-id/flat/CAD21AoC2KJdavS7MFffmSsRc1dn3Vg_0xmuc=UpBrZ-_MUxh-Q@mail.gmail.comThis thread was closed by a commit (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6c2003f8a1bbc7c192a2e83ec51581c018aa162f) which apparently fixed the issue for the OP.Attached is a small test case where it still happens for me on 12.6, 11.11, and 10.16. I can't make it happen on 13.2. I don't know why. It may imply bigger tables for 13.2, but why? I simply don't know.Actually, it also happens on 13.2.-- Guillaume.", "msg_date": "Wed, 28 Apr 2021 11:37:20 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "Le mer. 28 avr. 2021 à 11:37, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Le mer. 28 avr. 2021 à 11:12, Guillaume Lelarge <guillaume@lelarge.info>\n> a écrit :\n>\n>> Hi,\n>>\n>> One of my customers has an issue with logical replication. As $SUBJECT\n>> says, multiple table synchronization happens serially. To be honest, it\n>> doesn't do this every time. It happens when the tables are big enough.\n>>\n>> This issue was already described on this thread (from 2017):\n>> https://www.postgresql.org/message-id/flat/CAD21AoC2KJdavS7MFffmSsRc1dn3Vg_0xmuc=UpBrZ-_MUxh-Q@mail.gmail.com\n>>\n>> This thread was closed by a commit (\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6c2003f8a1bbc7c192a2e83ec51581c018aa162f)\n>> which apparently fixed the issue for the OP.\n>>\n>> Attached is a small test case where it still happens for me on 12.6,\n>> 11.11, and 10.16. I can't make it happen on 13.2. I don't know why. It may\n>> imply bigger tables for 13.2, but why? I simply don't know.\n>>\n>>\n> Actually, it also happens on 13.2.\n>\n>\nPing? 
:)\n\nIt's been two weeks and no answer as far as I can tell. I may have missed\nsomething on this issue but I don't see what. It's not a big deal, but I\nwould rather understand what's going on.\n\nThanks.\n\n\n-- \nGuillaume.\n\nLe mer. 28 avr. 2021 à 11:37, Guillaume Lelarge <guillaume@lelarge.info> a écrit :Le mer. 28 avr. 2021 à 11:12, Guillaume Lelarge <guillaume@lelarge.info> a écrit :Hi,One of my customers has an issue with logical replication. As $SUBJECT says, multiple table synchronization happens serially. To be honest, it doesn't do this every time. It happens when the tables are big enough.This issue was already described on this thread (from 2017): https://www.postgresql.org/message-id/flat/CAD21AoC2KJdavS7MFffmSsRc1dn3Vg_0xmuc=UpBrZ-_MUxh-Q@mail.gmail.comThis thread was closed by a commit (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6c2003f8a1bbc7c192a2e83ec51581c018aa162f) which apparently fixed the issue for the OP.Attached is a small test case where it still happens for me on 12.6, 11.11, and 10.16. I can't make it happen on 13.2. I don't know why. It may imply bigger tables for 13.2, but why? I simply don't know.Actually, it also happens on 13.2.Ping? :)It's been two weeks and no answer as far as I can tell. I may have missed something on this issue but I don't see what. It's not a big deal, but I would rather understand what's going on.Thanks.-- Guillaume.", "msg_date": "Tue, 11 May 2021 16:32:04 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "On Wed, Apr 28, 2021 at 2:43 PM Guillaume Lelarge\n<guillaume@lelarge.info> wrote:\n>\n> And it logs the \"still waiting\" message as long as the first table is being synchronized. 
Once this is done, it releases the lock, and the synchronization of the second table starts.\n>\n> Is there something I didn't understand on the previous thread?\n>\n\nIt seems from a script that you are creating a subscription on the\nsame node as publication though in a different DB, right? If so, the\nproblem might be that copying the data of the first table creates a\ntransaction which blocks creation of the slot for second table copy.\nThe commit you referred will just fix the problem while reading the\ndata from the publisher not while writing data in the table in the\nsubscriber.\n\n> I'd like to know why serial synchronization happens, and if there's a way to avoid it.\n>\n\nI guess you need to create a subscription on a different node.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 May 2021 15:39:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "Hi,\n\nLe jeu. 20 mai 2021 à 12:09, Amit Kapila <amit.kapila16@gmail.com> a écrit :\n\n> On Wed, Apr 28, 2021 at 2:43 PM Guillaume Lelarge\n> <guillaume@lelarge.info> wrote:\n> >\n> > And it logs the \"still waiting\" message as long as the first table is\n> being synchronized. Once this is done, it releases the lock, and the\n> synchronization of the second table starts.\n> >\n> > Is there something I didn't understand on the previous thread?\n> >\n>\n> It seems from a script that you are creating a subscription on the\n> same node as publication though in a different DB, right?\n\n\nYes, that's right.\n\nIf so, the\n> problem might be that copying the data of the first table creates a\n> transaction which blocks creation of the slot for second table copy.\n>\n\nI don't understand how a transaction could block the creation of a slot.\nCould you explain that to me? 
or do you know where this is explained in the\ndocumentation?\n\nThe commit you referred will just fix the problem while reading the\n> data from the publisher not while writing data in the table in the\n> subscriber.\n>\n> > I'd like to know why serial synchronization happens, and if there's a\n> way to avoid it.\n> >\n>\n> I guess you need to create a subscription on a different node.\n>\n>\nThanks.\n\n\n-- \nGuillaume.", "msg_date": "Thu, 20 May 2021 22:00:34 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "On Fri, May 21, 2021 at 1:30 AM Guillaume Lelarge\n<guillaume@lelarge.info> wrote:\n>\n>\n>> If so, the\n>> problem might be that copying the data of the first table creates a\n>> transaction which blocks creation of the slot for second table copy.\n>\n>\n> I don't understand how a transaction could block the creation of a slot. Could you explain that to me?\n>\n\nDuring the creation of the slot, we need to build the initial snapshot\nwhich is used for decoding WAL. Now, to build the initial snapshot, we\nwait for all running xacts to finish. See functions\nCreateReplicationSlot() and SnapBuildFindSnapshot().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 May 2021 09:13:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "Le ven.
21 mai 2021 à 05:43, Amit Kapila <amit.kapila16@gmail.com> a écrit :\n\n> On Fri, May 21, 2021 at 1:30 AM Guillaume Lelarge\n> <guillaume@lelarge.info> wrote:\n> >\n> >\n> >> If so, the\n> >> problem might be that copying the data of the first table creates a\n> >> transaction which blocks creation of the slot for second table copy.\n> >\n> >\n> > I don't understand how a transaction could block the creation of a slot.\n> Could you explain that to me?\n> >\n>\n> During the creation of the slot\n\n\nDuring the creation of the slot or during the creation of the subscription?\nbecause, in my tests, I create the slot before creating the snapshot.\n\n\n> , we need to build the initial snapshot\n> which is used for decoding WAL. Now, to build the initial snapshot, we\n> wait for all running xacts to finish. See functions\n> CreateReplicationSlot() and SnapBuildFindSnapshot().\n>\n>\nIf we have two workers, both will have a snapshot? they don't share the\nsame snapshot?\n\nAnd if all this is true, I don't see how it could work when the replication\nhappens between two clusters, and couldn't work when it happens with only\none cluster.\n\n\n-- \nGuillaume.", "msg_date": "Fri, 21 May 2021 15:17:43 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" }, { "msg_contents": "On Fri, May 21, 2021 at 6:47 PM Guillaume Lelarge\n<guillaume@lelarge.info> wrote:\n>\n> Le ven. 21 mai 2021 à 05:43, Amit Kapila <amit.kapila16@gmail.com> a écrit :\n>>\n>> On Fri, May 21, 2021 at 1:30 AM Guillaume Lelarge\n>> <guillaume@lelarge.info> wrote:\n>> >\n>> >\n>> >> If so, the\n>> >> problem might be that copying the data of the first table creates a\n>> >> transaction which blocks creation of the slot for second table copy.\n>> >\n>> >\n>> > I don't understand how a transaction could block the creation of a slot. Could you explain that to me?\n>> >\n>>\n>> During the creation of the slot\n>\n>\n> During the creation of the slot or during the creation of the subscription? because, in my tests, I create the slot before creating the snapshot.\n>\n\nBut we do internally create another slot for tablesync via a\ntablesync-worker that does the initial copy.\n\n>>\n>> , we need to build the initial snapshot\n>> which is used for decoding WAL. Now, to build the initial snapshot, we\n>> wait for all running xacts to finish. See functions\n>> CreateReplicationSlot() and SnapBuildFindSnapshot().\n>>\n>\n> If we have two workers, both will have a snapshot?
they don't share the same snapshot?\n>\n\nNo, for initial tablesync, we need to build a full snapshot (see use\nof CRS_USE_SNAPSHOT option in code).\n\n> And if all this is true, I don't see how it could work when the replication happens between two clusters, and couldn't work when it happens with only one cluster.\n>\n\nI think you might want to try this once.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 22 May 2021 09:28:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"Multiple table synchronizations are processed serially\" still\n happens" } ]
[ { "msg_contents": "These look strange to me - the inner parens don't do anything.\nI wouldn't write it with 2x parens for the same reason I wouldn't write it with\n8x parens.\n\ndiff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c\nindex 9f159eb3db..3bbc13c443 100644\n--- a/contrib/amcheck/verify_heapam.c\n+++ b/contrib/amcheck/verify_heapam.c\n@@ -693,7 +693,7 @@ check_tuple_header(HeapCheckContext *ctx)\n \t\t\treport_corruption(ctx,\n \t\t\t\t\t\t\t psprintf(\"tuple data should begin at byte %u, but actually begins at byte %u (1 attribute, has nulls)\",\n \t\t\t\t\t\t\t\t\t expected_hoff, ctx->tuphdr->t_hoff));\n-\t\telse if ((infomask & HEAP_HASNULL))\n+\t\telse if ((infomask & HEAP_HASNULL) != 0)\n \t\t\treport_corruption(ctx,\n \t\t\t\t\t\t\t psprintf(\"tuple data should begin at byte %u, but actually begins at byte %u (%u attributes, has nulls)\",\n \t\t\t\t\t\t\t\t\t expected_hoff, ctx->tuphdr->t_hoff, ctx->natts));\ndiff --git a/contrib/ltree/ltree_io.c b/contrib/ltree/ltree_io.c\nindex 15115cb29f..0dd2838f8b 100644\n--- a/contrib/ltree/ltree_io.c\n+++ b/contrib/ltree/ltree_io.c\n@@ -661,17 +661,17 @@ deparse_lquery(const lquery *in)\n \t\t\t\t}\n \t\t\t\tmemcpy(ptr, curtlevel->name, curtlevel->len);\n \t\t\t\tptr += curtlevel->len;\n-\t\t\t\tif ((curtlevel->flag & LVAR_SUBLEXEME))\n+\t\t\t\tif ((curtlevel->flag & LVAR_SUBLEXEME) != 0)\n \t\t\t\t{\n \t\t\t\t\t*ptr = '%';\n \t\t\t\t\tptr++;\n \t\t\t\t}\n-\t\t\t\tif ((curtlevel->flag & LVAR_INCASE))\n+\t\t\t\tif ((curtlevel->flag & LVAR_INCASE) != 0)\n \t\t\t\t{\n \t\t\t\t\t*ptr = '@';\n \t\t\t\t\tptr++;\n \t\t\t\t}\n-\t\t\t\tif ((curtlevel->flag & LVAR_ANYEND))\n+\t\t\t\tif ((curtlevel->flag & LVAR_ANYEND) != 0)\n \t\t\t\t{\n \t\t\t\t\t*ptr = '*';\n \t\t\t\t\tptr++;\ndiff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\nindex 13396eb7f2..f5a4db5c57 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -2107,7 
+2107,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,\n \t\t\tvmstatus = visibilitymap_get_status(relation,\n \t\t\t\t\t\t\t\t BufferGetBlockNumber(buffer), &vmbuffer);\n \n-\t\tif ((starting_with_empty_page || vmstatus & VISIBILITYMAP_ALL_FROZEN))\n+\t\tif (starting_with_empty_page ||\n+\t\t\t\t(vmstatus & VISIBILITYMAP_ALL_FROZEN) != 0)\n \t\t\tall_frozen_set = true;\n \t}\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 441445927e..28fdd2943b 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -2417,7 +2417,7 @@ PrepareTransaction(void)\n \t * cases, such as a temp table created and dropped all within the\n \t * transaction. That seems to require much more bookkeeping though.\n \t */\n-\tif ((MyXactFlags & XACT_FLAGS_ACCESSEDTEMPNAMESPACE))\n+\tif ((MyXactFlags & XACT_FLAGS_ACCESSEDTEMPNAMESPACE) != 0)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n \t\t\t\t errmsg(\"cannot PREPARE a transaction that has operated on temporary objects\")));\n@@ -5530,7 +5530,7 @@ XactLogCommitRecord(TimestampTz commit_time,\n \t\txl_xinfo.xinfo |= XACT_COMPLETION_UPDATE_RELCACHE_FILE;\n \tif (forceSyncCommit)\n \t\txl_xinfo.xinfo |= XACT_COMPLETION_FORCE_SYNC_COMMIT;\n-\tif ((xactflags & XACT_FLAGS_ACQUIREDACCESSEXCLUSIVELOCK))\n+\tif ((xactflags & XACT_FLAGS_ACQUIREDACCESSEXCLUSIVELOCK) != 0)\n \t\txl_xinfo.xinfo |= XACT_XINFO_HAS_AE_LOCKS;\n \n \t/*\n@@ -5681,7 +5681,7 @@ XactLogAbortRecord(TimestampTz abort_time,\n \n \txlrec.xact_time = abort_time;\n \n-\tif ((xactflags & XACT_FLAGS_ACQUIREDACCESSEXCLUSIVELOCK))\n+\tif ((xactflags & XACT_FLAGS_ACQUIREDACCESSEXCLUSIVELOCK) != 0)\n \t\txl_xinfo.xinfo |= XACT_XINFO_HAS_AE_LOCKS;\n \n \tif (nsubxacts > 0)\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 8e717ada28..f341e6d143 100644\n--- a/src/backend/commands/tablecmds.c\n+++ 
b/src/backend/commands/tablecmds.c\n@@ -16029,7 +16029,7 @@ PreCommit_on_commit_actions(void)\n \t\t\t\t * relations, we can skip truncating ON COMMIT DELETE ROWS\n \t\t\t\t * tables, as they must still be empty.\n \t\t\t\t */\n-\t\t\t\tif ((MyXactFlags & XACT_FLAGS_ACCESSEDTEMPNAMESPACE))\n+\t\t\t\tif ((MyXactFlags & XACT_FLAGS_ACCESSEDTEMPNAMESPACE) != 0)\n \t\t\t\t\toids_to_truncate = lappend_oid(oids_to_truncate, oc->relid);\n \t\t\t\tbreak;\n \t\t\tcase ONCOMMIT_DROP:\ndiff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c\nindex ff3dcc7b18..fe825c6ede 100644\n--- a/src/backend/nodes/nodeFuncs.c\n+++ b/src/backend/nodes/nodeFuncs.c\n@@ -2390,7 +2390,7 @@ query_tree_walker(Query *query,\n \t * don't contain actual expressions. However they do contain OIDs which\n \t * may be needed by dependency walkers etc.\n \t */\n-\tif ((flags & QTW_EXAMINE_SORTGROUP))\n+\tif ((flags & QTW_EXAMINE_SORTGROUP) != 0)\n \t{\n \t\tif (walker((Node *) query->groupClause, context))\n \t\t\treturn true;\n@@ -3328,7 +3328,7 @@ query_tree_mutator(Query *query,\n \t * may be of interest to some mutators.\n \t */\n \n-\tif ((flags & QTW_EXAMINE_SORTGROUP))\n+\tif ((flags & QTW_EXAMINE_SORTGROUP) != 0)\n \t{\n \t\tMUTATE(query->groupClause, query->groupClause, List *);\n \t\tMUTATE(query->windowClause, query->windowClause, List *);\ndiff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c\nindex 7924581cdc..62baf48f8e 100644\n--- a/src/backend/replication/logical/decode.c\n+++ b/src/backend/replication/logical/decode.c\n@@ -337,7 +337,7 @@ DecodeXactOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n \t\t\t\t\tReorderBufferXidSetCatalogChanges(ctx->reorder, xid,\n \t\t\t\t\t\t\t\t\t\t\t\t\t buf->origptr);\n \t\t\t\t}\n-\t\t\t\telse if ((!ctx->fast_forward))\n+\t\t\t\telse if ((!ctx->fast_forward) != 0)\n \t\t\t\t\tReorderBufferImmediateInvalidation(ctx->reorder,\n \t\t\t\t\t\t\t\t\t\t\t\t\t invals->nmsgs,\n 
\t\t\t\t\t\t\t\t\t\t\t\t\t invals->msgs);\n\n\n", "msg_date": "Wed, 28 Apr 2021 13:29:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "strange case of \"if ((a & b))\"" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> These look strange to me - the inner parens don't do anything.\n> I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n> 8x parens.\n\nAgreed, but shouldn't we just drop the excess parens rather than\ndoubling down on useless notation?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 14:40:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Apr 28, 2021 at 01:29:36PM -0500, Justin Pryzby wrote:\n> These look strange to me - the inner parens don't do anything.\n> I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n> 8x parens.\n\n> \t\t\t\t}\n> -\t\t\t\telse if ((!ctx->fast_forward))\n> +\t\t\t\telse if ((!ctx->fast_forward) != 0)\n\nI find this part of the change harder to understand.\n--\nMichael", "msg_date": "Mon, 10 May 2021 15:25:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Apr 28, 2021 at 02:40:09PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> These look strange to me - the inner parens don't do anything.\n>> I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n>> 8x parens.\n> \n> Agreed, but shouldn't we just drop the excess parens rather than\n> doubling down on useless notation?\n\nUsing a notation like ((a & b) != 0) to enforce a boolean check after\nthe bitwise operation is the usual notation I've preferred, FWIW. 
Do\nyou mean something different here?\n--\nMichael", "msg_date": "Mon, 10 May 2021 15:27:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Apr 28, 2021 at 02:40:09PM -0400, Tom Lane wrote:\n>> Agreed, but shouldn't we just drop the excess parens rather than\n>> doubling down on useless notation?\n\n> Using a notation like ((a & b) != 0) to enforce a boolean check after\n> the bitwise operation is the usual notation I've preferred, FWIW. Do\n> you mean something different here?\n\nYeah --- the \"!= 0\" is pointless in the context of an if-test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 02:32:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Apr 28, 2021 at 02:40:09PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > These look strange to me - the inner parens don't do anything.\n> > I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n> > 8x parens.\n> \n> Agreed, but shouldn't we just drop the excess parens rather than\n> doubling down on useless notation?\n\nI believe I got the impression from Michael that there was a style preference\nto write != 0.\n\n0002 is a bonus patch I found in my typos branch. 
I will hold onto it for\nlater if nobody wants to deal with it.", "msg_date": "Thu, 24 Jun 2021 21:31:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Jun 24, 2021 at 09:31:11PM -0500, Justin Pryzby wrote:\n> On Wed, Apr 28, 2021 at 02:40:09PM -0400, Tom Lane wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > These look strange to me - the inner parens don't do anything.\n> > > I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n> > > 8x parens.\n> > \n> > Agreed, but shouldn't we just drop the excess parens rather than\n> > doubling down on useless notation?\n> \n> I believe I got the impression from Michael that there was a style preference\n> to write != 0.\n> \n> 0002 is a bonus patch I found in my typos branch. I will hold onto it for\n> later if nobody wants to deal with it.\n\nI am ready to deal with this patch. Should I apply it to master soon?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 18 Aug 2021 14:02:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Aug 18, 2021 at 02:02:22PM -0400, Bruce Momjian wrote:\n> On Thu, Jun 24, 2021 at 09:31:11PM -0500, Justin Pryzby wrote:\n> > On Wed, Apr 28, 2021 at 02:40:09PM -0400, Tom Lane wrote:\n> > > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > > These look strange to me - the inner parens don't do anything.\n> > > > I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n> > > > 8x parens.\n> > > \n> > > Agreed, but shouldn't we just drop the excess parens rather than\n> > > doubling down on useless notation?\n> > \n> > I believe I got the impression from Michael that there 
was a style preference\n> > to write != 0.\n> > \n> > 0002 is a bonus patch I found in my typos branch. I will hold onto it for\n> > later if nobody wants to deal with it.\n> \n> I am ready to deal with this patch. Should I apply it to master soon?\n\nThanks for looking at it. I suggest not to apply 0002 - I'll resend it on\nanother thread with other, similar cleanups.\n\nHowever, I have another patch to clean up stuff like \"? true : false\", which\nseems related to this patch (but maybe it should be applied separately).\n\ncommit 85952c0e1621a5a491a9422cdee66e733728e3a8\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Fri May 7 08:16:51 2021 -0500\n\n Avoid verbose ternary operator with expressions which are already boolean\n\ndiff --git a/contrib/intarray/_int_tool.c b/contrib/intarray/_int_tool.c\nindex 91690aff51..8ed4d63fc3 100644\n--- a/contrib/intarray/_int_tool.c\n+++ b/contrib/intarray/_int_tool.c\n@@ -41,7 +41,7 @@ inner_int_contains(ArrayType *a, ArrayType *b)\n \t\t\tbreak;\t\t\t\t/* db[j] is not in da */\n \t}\n \n-\treturn (n == nb) ? true : false;\n+\treturn (n == nb);\n }\n \n /* arguments are assumed sorted */\ndiff --git a/contrib/ltree/ltree_gist.c b/contrib/ltree/ltree_gist.c\nindex 6cf181bc53..7c39ed4298 100644\n--- a/contrib/ltree/ltree_gist.c\n+++ b/contrib/ltree/ltree_gist.c\n@@ -137,7 +137,7 @@ ltree_same(PG_FUNCTION_ARGS)\n \t\tPG_RETURN_POINTER(result);\n \n \tif (LTG_ISONENODE(a))\n-\t\t*result = (ISEQ(LTG_NODE(a), LTG_NODE(b))) ? true : false;\n+\t\t*result = ISEQ(LTG_NODE(a), LTG_NODE(b));\n \telse\n \t{\n \t\tint32\t\ti;\ndiff --git a/contrib/sepgsql/selinux.c b/contrib/sepgsql/selinux.c\nindex f11968bcaa..dac3f3ec91 100644\n--- a/contrib/sepgsql/selinux.c\n+++ b/contrib/sepgsql/selinux.c\n@@ -615,7 +615,7 @@ static int\tsepgsql_mode = SEPGSQL_MODE_INTERNAL;\n bool\n sepgsql_is_enabled(void)\n {\n-\treturn (sepgsql_mode != SEPGSQL_MODE_DISABLED ? 
true : false);\n+\treturn sepgsql_mode != SEPGSQL_MODE_DISABLED;\n }\n \n /*\ndiff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c\nindex 06c0586543..2ada1dcbda 100644\n--- a/src/backend/access/gin/gindatapage.c\n+++ b/src/backend/access/gin/gindatapage.c\n@@ -241,7 +241,7 @@ dataIsMoveRight(GinBtree btree, Page page)\n \tif (GinPageIsDeleted(page))\n \t\treturn true;\n \n-\treturn (ginCompareItemPointers(&btree->itemptr, iptr) > 0) ? true : false;\n+\treturn ginCompareItemPointers(&btree->itemptr, iptr) > 0;\n }\n \n /*\ndiff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c\nindex cdd626ff0a..5b054ef4ae 100644\n--- a/src/backend/access/gin/ginutil.c\n+++ b/src/backend/access/gin/ginutil.c\n@@ -100,7 +100,7 @@ initGinState(GinState *state, Relation index)\n \tMemSet(state, 0, sizeof(GinState));\n \n \tstate->index = index;\n-\tstate->oneCol = (origTupdesc->natts == 1) ? true : false;\n+\tstate->oneCol = origTupdesc->natts == 1;\n \tstate->origTupdesc = origTupdesc;\n \n \tfor (i = 0; i < origTupdesc->natts; i++)\ndiff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c\nindex 0683f42c25..a83a2e9952 100644\n--- a/src/backend/access/gist/gist.c\n+++ b/src/backend/access/gist/gist.c\n@@ -231,7 +231,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,\n {\n \tBlockNumber blkno = BufferGetBlockNumber(buffer);\n \tPage\t\tpage = BufferGetPage(buffer);\n-\tbool\t\tis_leaf = (GistPageIsLeaf(page)) ? 
true : false;\n+\tbool\t\tis_leaf = GistPageIsLeaf(page);\n \tXLogRecPtr\trecptr;\n \tint\t\t\ti;\n \tbool\t\tis_split;\ndiff --git a/src/backend/access/gist/gistsplit.c b/src/backend/access/gist/gistsplit.c\nindex 526ed1218e..853ebc387b 100644\n--- a/src/backend/access/gist/gistsplit.c\n+++ b/src/backend/access/gist/gistsplit.c\n@@ -303,9 +303,9 @@ supportSecondarySplit(Relation r, GISTSTATE *giststate, int attno,\n \t\tpenalty2 = gistpenalty(giststate, attno, entry1, false, &entrySR, false);\n \n \t\tif (penalty1 < penalty2)\n-\t\t\tleaveOnLeft = (sv->spl_ldatum_exists) ? true : false;\n+\t\t\tleaveOnLeft = sv->spl_ldatum_exists;\n \t\telse\n-\t\t\tleaveOnLeft = (sv->spl_rdatum_exists) ? true : false;\n+\t\t\tleaveOnLeft = sv->spl_rdatum_exists;\n \t}\n \n \tif (leaveOnLeft == false)\ndiff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c\nindex 0752fb38a9..5e3730201c 100644\n--- a/src/backend/access/hash/hash.c\n+++ b/src/backend/access/hash/hash.c\n@@ -816,7 +816,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,\n \t\t\t\tXLogRecPtr\trecptr;\n \n \t\t\t\txlrec.clear_dead_marking = clear_dead_marking;\n-\t\t\t\txlrec.is_primary_bucket_page = (buf == bucket_buf) ? true : false;\n+\t\t\t\txlrec.is_primary_bucket_page = buf == bucket_buf;\n \n \t\t\t\tXLogBeginInsert();\n \t\t\t\tXLogRegisterData((char *) &xlrec, SizeOfHashDelete);\ndiff --git a/src/backend/access/hash/hashinsert.c b/src/backend/access/hash/hashinsert.c\nindex d254a00b6a..83af8c1f67 100644\n--- a/src/backend/access/hash/hashinsert.c\n+++ b/src/backend/access/hash/hashinsert.c\n@@ -176,7 +176,7 @@ restart_insert:\n \t\t\tLockBuffer(buf, BUFFER_LOCK_UNLOCK);\n \n \t\t\t/* chain to a new overflow page */\n-\t\t\tbuf = _hash_addovflpage(rel, metabuf, buf, (buf == bucket_buf) ? 
true : false);\n+\t\t\tbuf = _hash_addovflpage(rel, metabuf, buf, buf == bucket_buf);\n \t\t\tpage = BufferGetPage(buf);\n \n \t\t\t/* should fit now, given test above */\ndiff --git a/src/backend/access/hash/hashovfl.c b/src/backend/access/hash/hashovfl.c\nindex 404f2b6221..7397b6963f 100644\n--- a/src/backend/access/hash/hashovfl.c\n+++ b/src/backend/access/hash/hashovfl.c\n@@ -953,7 +953,7 @@ readpage:\n \t\t\t\t\t\txl_hash_move_page_contents xlrec;\n \n \t\t\t\t\t\txlrec.ntups = nitups;\n-\t\t\t\t\t\txlrec.is_prim_bucket_same_wrt = (wbuf == bucket_buf) ? true : false;\n+\t\t\t\t\t\txlrec.is_prim_bucket_same_wrt = wbuf == bucket_buf;\n \n \t\t\t\t\t\tXLogBeginInsert();\n \t\t\t\t\t\tXLogRegisterData((char *) &xlrec, SizeOfHashMovePageContents);\ndiff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c\nindex b730025356..eda10386b2 100644\n--- a/src/backend/access/hash/hashpage.c\n+++ b/src/backend/access/hash/hashpage.c\n@@ -1195,7 +1195,7 @@ _hash_splitbucket(Relation rel,\n \t\t\t\t\tall_tups_size = 0;\n \n \t\t\t\t\t/* chain to a new overflow page */\n-\t\t\t\t\tnbuf = _hash_addovflpage(rel, metabuf, nbuf, (nbuf == bucket_nbuf) ? true : false);\n+\t\t\t\t\tnbuf = _hash_addovflpage(rel, metabuf, nbuf, nbuf == bucket_nbuf);\n \t\t\t\t\tnpage = BufferGetPage(nbuf);\n \t\t\t\t\tnopaque = (HashPageOpaque) PageGetSpecialPointer(npage);\n \t\t\t\t}\ndiff --git a/src/backend/access/heap/heapam_visibility.c b/src/backend/access/heap/heapam_visibility.c\nindex d3c57cd16a..b72b03ea25 100644\n--- a/src/backend/access/heap/heapam_visibility.c\n+++ b/src/backend/access/heap/heapam_visibility.c\n@@ -1475,7 +1475,7 @@ HeapTupleIsSurelyDead(HeapTuple htup, GlobalVisState *vistest)\n \t * all relevant hint bits were just set moments ago).\n \t */\n \tif (!HeapTupleHeaderXminCommitted(tuple))\n-\t\treturn HeapTupleHeaderXminInvalid(tuple) ? 
true : false;\n+\t\treturn HeapTupleHeaderXminInvalid(tuple);\n \n \t/*\n \t * If the inserting transaction committed, but any deleting transaction\ndiff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c\nindex e14b9fa573..037d56132e 100644\n--- a/src/backend/access/spgist/spgscan.c\n+++ b/src/backend/access/spgist/spgscan.c\n@@ -860,7 +860,7 @@ redirect:\n \t\t\tpage = BufferGetPage(buffer);\n \t\t\tTestForOldSnapshot(snapshot, index, page);\n \n-\t\t\tisnull = SpGistPageStoresNulls(page) ? true : false;\n+\t\t\tisnull = SpGistPageStoresNulls(page);\n \n \t\t\tif (SpGistPageIsLeaf(page))\n \t\t\t{\ndiff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c\nindex bf619d3a65..23fe1d85fd 100644\n--- a/src/backend/executor/spi.c\n+++ b/src/backend/executor/spi.c\n@@ -1037,7 +1037,7 @@ SPI_modifytuple(Relation rel, HeapTuple tuple, int natts, int *attnum,\n \t\tif (attnum[i] <= 0 || attnum[i] > numberOfAttributes)\n \t\t\tbreak;\n \t\tv[attnum[i] - 1] = Values[i];\n-\t\tn[attnum[i] - 1] = (Nulls && Nulls[i] == 'n') ? true : false;\n+\t\tn[attnum[i] - 1] = Nulls && Nulls[i] == 'n';\n \t}\n \n \tif (i == natts)\t\t\t\t/* no errors in *attnum */\ndiff --git a/src/backend/jit/jit.c b/src/backend/jit/jit.c\nindex 2da300e000..91b8ae6c51 100644\n--- a/src/backend/jit/jit.c\n+++ b/src/backend/jit/jit.c\n@@ -198,7 +198,7 @@ file_exists(const char *name)\n \tAssertArg(name != NULL);\n \n \tif (stat(name, &st) == 0)\n-\t\treturn S_ISDIR(st.st_mode) ? 
false : true;\n+\t\treturn !S_ISDIR(st.st_mode);\n \telse if (!(errno == ENOENT || errno == ENOTDIR))\n \t\tereport(ERROR,\n \t\t\t\t(errcode_for_file_access(),\ndiff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c\nindex 41cbf328c4..3f14b5c1c4 100644\n--- a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -936,7 +936,7 @@ create_seqscan_path(PlannerInfo *root, RelOptInfo *rel,\n \tpathnode->pathtarget = rel->reltarget;\n \tpathnode->param_info = get_baserel_parampathinfo(root, rel,\n \t\t\t\t\t\t\t\t\t\t\t\t\t required_outer);\n-\tpathnode->parallel_aware = parallel_workers > 0 ? true : false;\n+\tpathnode->parallel_aware = parallel_workers > 0;\n \tpathnode->parallel_safe = rel->consider_parallel;\n \tpathnode->parallel_workers = parallel_workers;\n \tpathnode->pathkeys = NIL;\t/* seqscan has unordered result */\n@@ -1057,7 +1057,7 @@ create_bitmap_heap_path(PlannerInfo *root,\n \tpathnode->path.pathtarget = rel->reltarget;\n \tpathnode->path.param_info = get_baserel_parampathinfo(root, rel,\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t required_outer);\n-\tpathnode->path.parallel_aware = parallel_degree > 0 ? true : false;\n+\tpathnode->path.parallel_aware = parallel_degree > 0;\n \tpathnode->path.parallel_safe = rel->consider_parallel;\n \tpathnode->path.parallel_workers = parallel_degree;\n \tpathnode->path.pathkeys = NIL;\t/* always unordered */\ndiff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c\nindex ef118952c7..35b39ece07 100644\n--- a/src/backend/statistics/mcv.c\n+++ b/src/backend/statistics/mcv.c\n@@ -1772,7 +1772,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,\n \t\t\tfor (i = 0; i < mcvlist->nitems; i++)\n \t\t\t{\n \t\t\t\tint\t\t\tj;\n-\t\t\t\tbool\t\tmatch = (expr->useOr ? 
false : true);\n+\t\t\t\tbool\t\tmatch = !expr->useOr;\n \t\t\t\tMCVItem *item = &mcvlist->items[i];\n \n \t\t\t\t/*\ndiff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c\nindex a4be5fe513..41a6d4793c 100644\n--- a/src/backend/storage/file/buffile.c\n+++ b/src/backend/storage/file/buffile.c\n@@ -325,7 +325,7 @@ BufFileOpenShared(SharedFileSet *fileset, const char *name, int mode)\n \n \tfile = makeBufFileCommon(nfiles);\n \tfile->files = files;\n-\tfile->readOnly = (mode == O_RDONLY) ? true : false;\n+\tfile->readOnly = mode == O_RDONLY;\n \tfile->fileset = fileset;\n \tfile->name = pstrdup(name);\n \ndiff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c\nindex d978c8850d..3ae0044dfb 100644\n--- a/src/backend/tsearch/ts_parse.c\n+++ b/src/backend/tsearch/ts_parse.c\n@@ -288,7 +288,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)\n \t\t\t\t}\n \t\t\t}\n \n-\t\t\tld->dictState.isend = (curVal->type == 0) ? true : false;\n+\t\t\tld->dictState.isend = (curVal->type == 0);\n \t\t\tld->dictState.getnext = false;\n \n \t\t\tres = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),\ndiff --git a/src/backend/utils/adt/bool.c b/src/backend/utils/adt/bool.c\nindex fe11d1ae94..cd98f84270 100644\n--- a/src/backend/utils/adt/bool.c\n+++ b/src/backend/utils/adt/bool.c\n@@ -184,7 +184,7 @@ boolrecv(PG_FUNCTION_ARGS)\n \tint\t\t\text;\n \n \text = pq_getmsgbyte(buf);\n-\tPG_RETURN_BOOL((ext != 0) ? true : false);\n+\tPG_RETURN_BOOL(ext != 0);\n }\n \n /*\ndiff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c\nindex 4df8cc5abf..0f5f1208c9 100644\n--- a/src/backend/utils/adt/ruleutils.c\n+++ b/src/backend/utils/adt/ruleutils.c\n@@ -7997,14 +7997,14 @@ isSimpleNode(Node *node, Node *parentNode, int prettyFlags)\n \t\t\t * appears simple since . has top precedence, unless parent is\n \t\t\t * T_FieldSelect itself!\n \t\t\t */\n-\t\t\treturn (IsA(parentNode, FieldSelect) ? 
false : true);\n+\t\t\treturn !IsA(parentNode, FieldSelect);\n \n \t\tcase T_FieldStore:\n \n \t\t\t/*\n \t\t\t * treat like FieldSelect (probably doesn't matter)\n \t\t\t */\n-\t\t\treturn (IsA(parentNode, FieldStore) ? false : true);\n+\t\t\treturn !IsA(parentNode, FieldStore);\n \n \t\tcase T_CoerceToDomain:\n \t\t\t/* maybe simple, check args */\ndiff --git a/src/backend/utils/adt/tsquery_gist.c b/src/backend/utils/adt/tsquery_gist.c\nindex 14d7343afa..906a686914 100644\n--- a/src/backend/utils/adt/tsquery_gist.c\n+++ b/src/backend/utils/adt/tsquery_gist.c\n@@ -109,7 +109,7 @@ gtsquery_same(PG_FUNCTION_ARGS)\n \tTSQuerySign b = PG_GETARG_TSQUERYSIGN(1);\n \tbool\t *result = (bool *) PG_GETARG_POINTER(2);\n \n-\t*result = (a == b) ? true : false;\n+\t*result = (a == b);\n \n \tPG_RETURN_POINTER(result);\n }\ndiff --git a/src/backend/utils/adt/tsquery_util.c b/src/backend/utils/adt/tsquery_util.c\nindex 7f936427b5..3dcc753e98 100644\n--- a/src/backend/utils/adt/tsquery_util.c\n+++ b/src/backend/utils/adt/tsquery_util.c\n@@ -186,7 +186,7 @@ QTNEq(QTNode *a, QTNode *b)\n \tif (!(sign == a->sign && sign == b->sign))\n \t\treturn false;\n \n-\treturn (QTNodeCompare(a, b) == 0) ? true : false;\n+\treturn QTNodeCompare(a, b) == 0;\n }\n \n /*\ndiff --git a/src/backend/utils/adt/xid8funcs.c b/src/backend/utils/adt/xid8funcs.c\nindex cc2b4ac797..6c6786bc39 100644\n--- a/src/backend/utils/adt/xid8funcs.c\n+++ b/src/backend/utils/adt/xid8funcs.c\n@@ -221,7 +221,7 @@ is_visible_fxid(FullTransactionId value, const pg_snapshot *snap)\n \t\tres = bsearch(&value, snap->xip, snap->nxip, sizeof(FullTransactionId),\n \t\t\t\t\t cmp_fxid);\n \t\t/* if found, transaction is still in progress */\n-\t\treturn (res) ? 
false : true;\n+\t\treturn !res;\n \t}\n #endif\n \telse\ndiff --git a/src/backend/utils/fmgr/dfmgr.c b/src/backend/utils/fmgr/dfmgr.c\nindex e8c6cdde97..96fd9d2268 100644\n--- a/src/backend/utils/fmgr/dfmgr.c\n+++ b/src/backend/utils/fmgr/dfmgr.c\n@@ -458,7 +458,7 @@ file_exists(const char *name)\n \tAssertArg(name != NULL);\n \n \tif (stat(name, &st) == 0)\n-\t\treturn S_ISDIR(st.st_mode) ? false : true;\n+\t\treturn !S_ISDIR(st.st_mode);\n \telse if (!(errno == ENOENT || errno == ENOTDIR || errno == EACCES))\n \t\tereport(ERROR,\n \t\t\t\t(errcode_for_file_access(),\n\n\n", "msg_date": "Wed, 18 Aug 2021 13:28:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Aug 18, 2021 at 01:28:56PM -0500, Justin Pryzby wrote:\n> > > 0002 is a bonus patch I found in my typos branch. I will hold onto it for\n> > > later if nobody wants to deal with it.\n> > \n> > I am ready to deal with this patch. Should I apply it to master soon?\n> \n> Thanks for looking at it. I suggest not to apply 0002 - I'll resend it on\n> another thread with other, similar cleanups.\n\nOK.\n\n> However, I have another patch to clean up stuff like \"? true : false\", which\n> seems related to this patch (but maybe it should be applied separately).\n\nYes, that is odd. I think it is related to the confusion that if ()\ncompares non-zero(true) and zero(false), while booleans return only 1/0\n(no other values). 
This explores that:\n\n\thttps://stackoverflow.com/questions/22489517/c-language-boolean-expression-return-value\n\nDo you want me to consider this patch now?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 18 Aug 2021 15:15:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Aug 18, 2021 at 03:15:21PM -0400, Bruce Momjian wrote:\n> On Wed, Aug 18, 2021 at 01:28:56PM -0500, Justin Pryzby wrote:\n> > > > 0002 is a bonus patch I found in my typos branch. I will hold onto it for\n> > > > later if nobody wants to deal with it.\n> > > \n> > > I am ready to deal with this patch. Should I apply it to master soon?\n> > \n> > Thanks for looking at it. I suggest not to apply 0002 - I'll resend it on\n> > another thread with other, similar cleanups.\n> \n> OK.\n> \n> > However, I have another patch to clean up stuff like \"? true : false\", which\n> > seems related to this patch (but maybe it should be applied separately).\n> \n> Yes, that is odd. I think it is related to the confusion that if ()\n> compares non-zero(true) and zero(false), while booleans return only 1/0\n> (no other values). 
This explores that:\n> \n> \thttps://stackoverflow.com/questions/22489517/c-language-boolean-expression-return-value\n> \n> Do you want me to consider this patch now?\n\nYes, please.\nIt may be helpful to dispose of the first patch first.\n\nThanks,\nJustin\n\n\n", "msg_date": "Wed, 18 Aug 2021 14:23:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Aug 19, 2021 at 4:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 02:02:22PM -0400, Bruce Momjian wrote:\n> > On Thu, Jun 24, 2021 at 09:31:11PM -0500, Justin Pryzby wrote:\n> > > On Wed, Apr 28, 2021 at 02:40:09PM -0400, Tom Lane wrote:\n> > > > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > > > These look strange to me - the inner parens don't do anything.\n> > > > > I wouldn't write it with 2x parens for the same reason I wouldn't write it with\n> > > > > 8x parens.\n> > > >\n> > > > Agreed, but shouldn't we just drop the excess parens rather than\n> > > > doubling down on useless notation?\n> > >\n> > > I believe I got the impression from Michael that there was a style preference\n> > > to write != 0.\n> > >\n> > > 0002 is a bonus patch I found in my typos branch. I will hold onto it for\n> > > later if nobody wants to deal with it.\n> >\n> > I am ready to deal with this patch. Should I apply it to master soon?\n>\n> Thanks for looking at it. I suggest not to apply 0002 - I'll resend it on\n> another thread with other, similar cleanups.\n>\n> However, I have another patch to clean up stuff like \"? 
true : false\", which\n> seems related to this patch (but maybe it should be applied separately).\n>\n> commit 85952c0e1621a5a491a9422cdee66e733728e3a8\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri May 7 08:16:51 2021 -0500\n>\n> Avoid verbose ternary operator with expressions which are already boolean\n>\n> diff --git a/contrib/intarray/_int_tool.c b/contrib/intarray/_int_tool.c\n> index 91690aff51..8ed4d63fc3 100644\n> --- a/contrib/intarray/_int_tool.c\n> +++ b/contrib/intarray/_int_tool.c\n> @@ -41,7 +41,7 @@ inner_int_contains(ArrayType *a, ArrayType *b)\n> break; /* db[j] is not in da */\n> }\n>\n> - return (n == nb) ? true : false;\n> + return (n == nb);\n> }\n>\n> /* arguments are assumed sorted */\n> diff --git a/contrib/ltree/ltree_gist.c b/contrib/ltree/ltree_gist.c\n> index 6cf181bc53..7c39ed4298 100644\n> --- a/contrib/ltree/ltree_gist.c\n> +++ b/contrib/ltree/ltree_gist.c\n> @@ -137,7 +137,7 @@ ltree_same(PG_FUNCTION_ARGS)\n> PG_RETURN_POINTER(result);\n>\n> if (LTG_ISONENODE(a))\n> - *result = (ISEQ(LTG_NODE(a), LTG_NODE(b))) ? true : false;\n> + *result = ISEQ(LTG_NODE(a), LTG_NODE(b));\n> else\n> {\n> int32 i;\n> diff --git a/contrib/sepgsql/selinux.c b/contrib/sepgsql/selinux.c\n> index f11968bcaa..dac3f3ec91 100644\n> --- a/contrib/sepgsql/selinux.c\n> +++ b/contrib/sepgsql/selinux.c\n> @@ -615,7 +615,7 @@ static int sepgsql_mode = SEPGSQL_MODE_INTERNAL;\n> bool\n> sepgsql_is_enabled(void)\n> {\n> - return (sepgsql_mode != SEPGSQL_MODE_DISABLED ? true : false);\n> + return sepgsql_mode != SEPGSQL_MODE_DISABLED;\n> }\n>\n> /*\n> diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c\n> index 06c0586543..2ada1dcbda 100644\n> --- a/src/backend/access/gin/gindatapage.c\n> +++ b/src/backend/access/gin/gindatapage.c\n> @@ -241,7 +241,7 @@ dataIsMoveRight(GinBtree btree, Page page)\n> if (GinPageIsDeleted(page))\n> return true;\n>\n> - return (ginCompareItemPointers(&btree->itemptr, iptr) > 0) ? 
true : false;\n> + return ginCompareItemPointers(&btree->itemptr, iptr) > 0;\n> }\n>\n> /*\n> diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c\n> index cdd626ff0a..5b054ef4ae 100644\n> --- a/src/backend/access/gin/ginutil.c\n> +++ b/src/backend/access/gin/ginutil.c\n> @@ -100,7 +100,7 @@ initGinState(GinState *state, Relation index)\n> MemSet(state, 0, sizeof(GinState));\n>\n> state->index = index;\n> - state->oneCol = (origTupdesc->natts == 1) ? true : false;\n> + state->oneCol = origTupdesc->natts == 1;\n> state->origTupdesc = origTupdesc;\n>\n> for (i = 0; i < origTupdesc->natts; i++)\n> diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c\n> index 0683f42c25..a83a2e9952 100644\n> --- a/src/backend/access/gist/gist.c\n> +++ b/src/backend/access/gist/gist.c\n> @@ -231,7 +231,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,\n> {\n> BlockNumber blkno = BufferGetBlockNumber(buffer);\n> Page page = BufferGetPage(buffer);\n> - bool is_leaf = (GistPageIsLeaf(page)) ? true : false;\n> + bool is_leaf = GistPageIsLeaf(page);\n> XLogRecPtr recptr;\n> int i;\n> bool is_split;\n> diff --git a/src/backend/access/gist/gistsplit.c b/src/backend/access/gist/gistsplit.c\n> index 526ed1218e..853ebc387b 100644\n> --- a/src/backend/access/gist/gistsplit.c\n> +++ b/src/backend/access/gist/gistsplit.c\n> @@ -303,9 +303,9 @@ supportSecondarySplit(Relation r, GISTSTATE *giststate, int attno,\n> penalty2 = gistpenalty(giststate, attno, entry1, false, &entrySR, false);\n>\n> if (penalty1 < penalty2)\n> - leaveOnLeft = (sv->spl_ldatum_exists) ? true : false;\n> + leaveOnLeft = sv->spl_ldatum_exists;\n> else\n> - leaveOnLeft = (sv->spl_rdatum_exists) ? 
true : false;\n> + leaveOnLeft = sv->spl_rdatum_exists;\n> }\n>\n> if (leaveOnLeft == false)\n> diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c\n> index 0752fb38a9..5e3730201c 100644\n> --- a/src/backend/access/hash/hash.c\n> +++ b/src/backend/access/hash/hash.c\n> @@ -816,7 +816,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,\n> XLogRecPtr recptr;\n>\n> xlrec.clear_dead_marking = clear_dead_marking;\n> - xlrec.is_primary_bucket_page = (buf == bucket_buf) ? true : false;\n> + xlrec.is_primary_bucket_page = buf == bucket_buf;\n>\n> XLogBeginInsert();\n> XLogRegisterData((char *) &xlrec, SizeOfHashDelete);\n> diff --git a/src/backend/access/hash/hashinsert.c b/src/backend/access/hash/hashinsert.c\n> index d254a00b6a..83af8c1f67 100644\n> --- a/src/backend/access/hash/hashinsert.c\n> +++ b/src/backend/access/hash/hashinsert.c\n> @@ -176,7 +176,7 @@ restart_insert:\n> LockBuffer(buf, BUFFER_LOCK_UNLOCK);\n>\n> /* chain to a new overflow page */\n> - buf = _hash_addovflpage(rel, metabuf, buf, (buf == bucket_buf) ? true : false);\n> + buf = _hash_addovflpage(rel, metabuf, buf, buf == bucket_buf);\n> page = BufferGetPage(buf);\n>\n> /* should fit now, given test above */\n> diff --git a/src/backend/access/hash/hashovfl.c b/src/backend/access/hash/hashovfl.c\n> index 404f2b6221..7397b6963f 100644\n> --- a/src/backend/access/hash/hashovfl.c\n> +++ b/src/backend/access/hash/hashovfl.c\n> @@ -953,7 +953,7 @@ readpage:\n> xl_hash_move_page_contents xlrec;\n>\n> xlrec.ntups = nitups;\n> - xlrec.is_prim_bucket_same_wrt = (wbuf == bucket_buf) ? 
true : false;\n> + xlrec.is_prim_bucket_same_wrt = wbuf == bucket_buf;\n>\n> XLogBeginInsert();\n> XLogRegisterData((char *) &xlrec, SizeOfHashMovePageContents);\n> diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c\n> index b730025356..eda10386b2 100644\n> --- a/src/backend/access/hash/hashpage.c\n> +++ b/src/backend/access/hash/hashpage.c\n> @@ -1195,7 +1195,7 @@ _hash_splitbucket(Relation rel,\n> all_tups_size = 0;\n>\n> /* chain to a new overflow page */\n> - nbuf = _hash_addovflpage(rel, metabuf, nbuf, (nbuf == bucket_nbuf) ? true : false);\n> + nbuf = _hash_addovflpage(rel, metabuf, nbuf, nbuf == bucket_nbuf);\n> npage = BufferGetPage(nbuf);\n> nopaque = (HashPageOpaque) PageGetSpecialPointer(npage);\n> }\n> diff --git a/src/backend/access/heap/heapam_visibility.c b/src/backend/access/heap/heapam_visibility.c\n> index d3c57cd16a..b72b03ea25 100644\n> --- a/src/backend/access/heap/heapam_visibility.c\n> +++ b/src/backend/access/heap/heapam_visibility.c\n> @@ -1475,7 +1475,7 @@ HeapTupleIsSurelyDead(HeapTuple htup, GlobalVisState *vistest)\n> * all relevant hint bits were just set moments ago).\n> */\n> if (!HeapTupleHeaderXminCommitted(tuple))\n> - return HeapTupleHeaderXminInvalid(tuple) ? true : false;\n> + return HeapTupleHeaderXminInvalid(tuple);\n>\n> /*\n> * If the inserting transaction committed, but any deleting transaction\n> diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c\n> index e14b9fa573..037d56132e 100644\n> --- a/src/backend/access/spgist/spgscan.c\n> +++ b/src/backend/access/spgist/spgscan.c\n> @@ -860,7 +860,7 @@ redirect:\n> page = BufferGetPage(buffer);\n> TestForOldSnapshot(snapshot, index, page);\n>\n> - isnull = SpGistPageStoresNulls(page) ? 
true : false;\n> + isnull = SpGistPageStoresNulls(page);\n>\n> if (SpGistPageIsLeaf(page))\n> {\n> diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c\n> index bf619d3a65..23fe1d85fd 100644\n> --- a/src/backend/executor/spi.c\n> +++ b/src/backend/executor/spi.c\n> @@ -1037,7 +1037,7 @@ SPI_modifytuple(Relation rel, HeapTuple tuple, int natts, int *attnum,\n> if (attnum[i] <= 0 || attnum[i] > numberOfAttributes)\n> break;\n> v[attnum[i] - 1] = Values[i];\n> - n[attnum[i] - 1] = (Nulls && Nulls[i] == 'n') ? true : false;\n> + n[attnum[i] - 1] = Nulls && Nulls[i] == 'n';\n> }\n>\n> if (i == natts) /* no errors in *attnum */\n> diff --git a/src/backend/jit/jit.c b/src/backend/jit/jit.c\n> index 2da300e000..91b8ae6c51 100644\n> --- a/src/backend/jit/jit.c\n> +++ b/src/backend/jit/jit.c\n> @@ -198,7 +198,7 @@ file_exists(const char *name)\n> AssertArg(name != NULL);\n>\n> if (stat(name, &st) == 0)\n> - return S_ISDIR(st.st_mode) ? false : true;\n> + return !S_ISDIR(st.st_mode);\n> else if (!(errno == ENOENT || errno == ENOTDIR))\n> ereport(ERROR,\n> (errcode_for_file_access(),\n> diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c\n> index 41cbf328c4..3f14b5c1c4 100644\n> --- a/src/backend/optimizer/util/pathnode.c\n> +++ b/src/backend/optimizer/util/pathnode.c\n> @@ -936,7 +936,7 @@ create_seqscan_path(PlannerInfo *root, RelOptInfo *rel,\n> pathnode->pathtarget = rel->reltarget;\n> pathnode->param_info = get_baserel_parampathinfo(root, rel,\n> required_outer);\n> - pathnode->parallel_aware = parallel_workers > 0 ? 
true : false;\n> + pathnode->parallel_aware = parallel_workers > 0;\n> pathnode->parallel_safe = rel->consider_parallel;\n> pathnode->parallel_workers = parallel_workers;\n> pathnode->pathkeys = NIL; /* seqscan has unordered result */\n> @@ -1057,7 +1057,7 @@ create_bitmap_heap_path(PlannerInfo *root,\n> pathnode->path.pathtarget = rel->reltarget;\n> pathnode->path.param_info = get_baserel_parampathinfo(root, rel,\n> required_outer);\n> - pathnode->path.parallel_aware = parallel_degree > 0 ? true : false;\n> + pathnode->path.parallel_aware = parallel_degree > 0;\n> pathnode->path.parallel_safe = rel->consider_parallel;\n> pathnode->path.parallel_workers = parallel_degree;\n> pathnode->path.pathkeys = NIL; /* always unordered */\n> diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c\n> index ef118952c7..35b39ece07 100644\n> --- a/src/backend/statistics/mcv.c\n> +++ b/src/backend/statistics/mcv.c\n> @@ -1772,7 +1772,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,\n> for (i = 0; i < mcvlist->nitems; i++)\n> {\n> int j;\n> - bool match = (expr->useOr ? false : true);\n> + bool match = !expr->useOr;\n> MCVItem *item = &mcvlist->items[i];\n>\n> /*\n> diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c\n> index a4be5fe513..41a6d4793c 100644\n> --- a/src/backend/storage/file/buffile.c\n> +++ b/src/backend/storage/file/buffile.c\n> @@ -325,7 +325,7 @@ BufFileOpenShared(SharedFileSet *fileset, const char *name, int mode)\n>\n> file = makeBufFileCommon(nfiles);\n> file->files = files;\n> - file->readOnly = (mode == O_RDONLY) ? 
true : false;\n> + file->readOnly = mode == O_RDONLY;\n> file->fileset = fileset;\n> file->name = pstrdup(name);\n>\n> diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c\n> index d978c8850d..3ae0044dfb 100644\n> --- a/src/backend/tsearch/ts_parse.c\n> +++ b/src/backend/tsearch/ts_parse.c\n> @@ -288,7 +288,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)\n> }\n> }\n>\n> - ld->dictState.isend = (curVal->type == 0) ? true : false;\n> + ld->dictState.isend = (curVal->type == 0);\n> ld->dictState.getnext = false;\n>\n> res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),\n> diff --git a/src/backend/utils/adt/bool.c b/src/backend/utils/adt/bool.c\n> index fe11d1ae94..cd98f84270 100644\n> --- a/src/backend/utils/adt/bool.c\n> +++ b/src/backend/utils/adt/bool.c\n> @@ -184,7 +184,7 @@ boolrecv(PG_FUNCTION_ARGS)\n> int ext;\n>\n> ext = pq_getmsgbyte(buf);\n> - PG_RETURN_BOOL((ext != 0) ? true : false);\n> + PG_RETURN_BOOL(ext != 0);\n> }\n>\n> /*\n> diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c\n> index 4df8cc5abf..0f5f1208c9 100644\n> --- a/src/backend/utils/adt/ruleutils.c\n> +++ b/src/backend/utils/adt/ruleutils.c\n> @@ -7997,14 +7997,14 @@ isSimpleNode(Node *node, Node *parentNode, int prettyFlags)\n> * appears simple since . has top precedence, unless parent is\n> * T_FieldSelect itself!\n> */\n> - return (IsA(parentNode, FieldSelect) ? false : true);\n> + return !IsA(parentNode, FieldSelect);\n>\n> case T_FieldStore:\n>\n> /*\n> * treat like FieldSelect (probably doesn't matter)\n> */\n> - return (IsA(parentNode, FieldStore) ? 
false : true);\n> + return !IsA(parentNode, FieldStore);\n>\n> case T_CoerceToDomain:\n> /* maybe simple, check args */\n> diff --git a/src/backend/utils/adt/tsquery_gist.c b/src/backend/utils/adt/tsquery_gist.c\n> index 14d7343afa..906a686914 100644\n> --- a/src/backend/utils/adt/tsquery_gist.c\n> +++ b/src/backend/utils/adt/tsquery_gist.c\n> @@ -109,7 +109,7 @@ gtsquery_same(PG_FUNCTION_ARGS)\n> TSQuerySign b = PG_GETARG_TSQUERYSIGN(1);\n> bool *result = (bool *) PG_GETARG_POINTER(2);\n>\n> - *result = (a == b) ? true : false;\n> + *result = (a == b);\n>\n> PG_RETURN_POINTER(result);\n> }\n> diff --git a/src/backend/utils/adt/tsquery_util.c b/src/backend/utils/adt/tsquery_util.c\n> index 7f936427b5..3dcc753e98 100644\n> --- a/src/backend/utils/adt/tsquery_util.c\n> +++ b/src/backend/utils/adt/tsquery_util.c\n> @@ -186,7 +186,7 @@ QTNEq(QTNode *a, QTNode *b)\n> if (!(sign == a->sign && sign == b->sign))\n> return false;\n>\n> - return (QTNodeCompare(a, b) == 0) ? true : false;\n> + return QTNodeCompare(a, b) == 0;\n> }\n>\n> /*\n> diff --git a/src/backend/utils/adt/xid8funcs.c b/src/backend/utils/adt/xid8funcs.c\n> index cc2b4ac797..6c6786bc39 100644\n> --- a/src/backend/utils/adt/xid8funcs.c\n> +++ b/src/backend/utils/adt/xid8funcs.c\n> @@ -221,7 +221,7 @@ is_visible_fxid(FullTransactionId value, const pg_snapshot *snap)\n> res = bsearch(&value, snap->xip, snap->nxip, sizeof(FullTransactionId),\n> cmp_fxid);\n> /* if found, transaction is still in progress */\n> - return (res) ? false : true;\n> + return !res;\n> }\n> #endif\n> else\n> diff --git a/src/backend/utils/fmgr/dfmgr.c b/src/backend/utils/fmgr/dfmgr.c\n> index e8c6cdde97..96fd9d2268 100644\n> --- a/src/backend/utils/fmgr/dfmgr.c\n> +++ b/src/backend/utils/fmgr/dfmgr.c\n> @@ -458,7 +458,7 @@ file_exists(const char *name)\n> AssertArg(name != NULL);\n>\n> if (stat(name, &st) == 0)\n> - return S_ISDIR(st.st_mode) ? 
false : true;\n> + return !S_ISDIR(st.st_mode);\n> else if (!(errno == ENOENT || errno == ENOTDIR || errno == EACCES))\n> ereport(ERROR,\n> (errcode_for_file_access(),\n>\n>\n\nIf you are inclined to simplify all those ternary statements like\nthat, then you might also be interested in taking a look at lots of\nsimilar code just using normal if/else (not ternary).\n\nTry this hacky regex to expose some candidates (this works for me in\nVisual Studio Code).\n\n\\s*([->*a-zA-Z_]+)\\s*=\\s*(true|false);.*\\n\\s*else\\s*\\n\\s*\\1*\\s*=\\s*(true|false);\n\nIMO many of the examples that regex finds are best left alone (for readability).\n\nOTOH there are still a few left that you might think would be better\nto be simplified. e.g.\n\nif (cube_cmp_v0(b1, b2) == 0)\n*result = true;\nelse\n*result = false;\n\n--------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 19 Aug 2021 10:59:39 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> On Thu, Aug 19, 2021 at 4:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> - state->oneCol = (origTupdesc->natts == 1) ? true : false;\n>> + state->oneCol = origTupdesc->natts == 1;\n\nFWIW, I am definitely not a fan of removing the parentheses in this\ncontext, because readers might wonder if you meant an \"a = b = 1\"\nmultiple-assignment, or even misread it as that and be confused.\nSo I'd prefer\n\n state->oneCol = (origTupdesc->natts == 1);\n\nIn the context of \"return (a == b)\", I'm about neutral on whether\nto keep the parens or not, but I wonder why this patch does some\nof one and some of the other.\n\nI do agree that \"x ? true : false\" is silly in contexts where x\nis guaranteed to yield zero or one. What you need to be careful\nabout is where x might yield other bitpatterns, for example\n\"(flags & SOMEFLAG) ? true : false\". 
Pre-C99, this type of coding\nwas often *necessary*. With C99, it's only necessary if you're\nnot sure that the compiler will cast the result to boolean.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Aug 2021 23:08:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "> On 19 Aug 2021, at 05:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Smith <smithpb2250@gmail.com> writes:\n>> On Thu, Aug 19, 2021 at 4:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> - state->oneCol = (origTupdesc->natts == 1) ? true : false;\n>>> + state->oneCol = origTupdesc->natts == 1;\n> \n> FWIW, I am definitely not a fan of removing the parentheses in this\n> context, because readers might wonder if you meant an \"a = b = 1\"\n> multiple-assignment, or even misread it as that and be confused.\n> So I'd prefer\n> \n> state->oneCol = (origTupdesc->natts == 1);\n\n+1, the parenthesis makes it a lot more readable IMO.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 19 Aug 2021 09:34:35 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Aug 18, 2021 at 11:08:57PM -0400, Tom Lane wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > On Thu, Aug 19, 2021 at 4:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> - state->oneCol = (origTupdesc->natts == 1) ? 
true : false;\n> >> + state->oneCol = origTupdesc->natts == 1;\n> \n> FWIW, I am definitely not a fan of removing the parentheses in this\n> context, because readers might wonder if you meant an \"a = b = 1\"\n> multiple-assignment, or even misread it as that and be confused.\n> So I'd prefer\n> \n> state->oneCol = (origTupdesc->natts == 1);\n\nGood point --- extra parentheses are not always bad.\n> \n> In the context of \"return (a == b)\", I'm about neutral on whether\n> to keep the parens or not, but I wonder why this patch does some\n> of one and some of the other.\n> \n> I do agree that \"x ? true : false\" is silly in contexts where x\n> is guaranteed to yield zero or one. What you need to be careful\n> about is where x might yield other bitpatterns, for example\n> \"(flags & SOMEFLAG) ? true : false\". Pre-C99, this type of coding\n> was often *necessary*. With C99, it's only necessary if you're\n> not sure that the compiler will cast the result to boolean.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 19 Aug 2021 14:26:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Wed, Aug 18, 2021 at 11:08:57PM -0400, Tom Lane wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > On Thu, Aug 19, 2021 at 4:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> - state->oneCol = (origTupdesc->natts == 1) ? 
true : false;\n> >> + state->oneCol = origTupdesc->natts == 1;\n> \n> FWIW, I am definitely not a fan of removing the parentheses in this\n> context, because readers might wonder if you meant an \"a = b = 1\"\n> multiple-assignment, or even misread it as that and be confused.\n> So I'd prefer\n> \n> state->oneCol = (origTupdesc->natts == 1);\n> \n> In the context of \"return (a == b)\", I'm about neutral on whether\n> to keep the parens or not, but I wonder why this patch does some\n> of one and some of the other.\n> \n> I do agree that \"x ? true : false\" is silly in contexts where x\n> is guaranteed to yield zero or one. What you need to be careful\n> about is where x might yield other bitpatterns, for example\n> \"(flags & SOMEFLAG) ? true : false\". Pre-C99, this type of coding\n> was often *necessary*. With C99, it's only necessary if you're\n> not sure that the compiler will cast the result to boolean.\n\nI revised the patch based on these comments. I think my ternary patch already\nexcluded the cases that test something other than a boolean.\n\nPeter: you quoted my patch but didn't comment on it. Your regex finds a lot of\nconditional boolean assignments, but I agree that they're best left alone. My\npatches are to clean up silly cases, not to rewrite things in a way that's\narguably better (but arguably not worth changing and so also not worth arguing\nthat it's better).\n\n-- \nJustin", "msg_date": "Sun, 5 Sep 2021 19:11:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Sun, Sep 05, 2021 at 07:11:10PM -0500, Justin Pryzby wrote:\n> I revised the patch based on these comments. I think my ternary patch already\n> excluded the cases that test something other than a boolean.\n\nIn 0002, everything is a boolean expression except for\nSpGistPageStoresNulls() and GistPageIsLeaf(). 
So that's a good\ncleanup overall.\n\n- pathnode->parallel_aware = parallel_workers > 0 ? true : false;\n+ pathnode->parallel_aware = parallel_workers > 0;\nI also prefer that we keep the parenthesis for such things. That's\nmore readable and easier to reason about.\n--\nMichael", "msg_date": "Tue, 7 Sep 2021 14:59:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Tue, Sep 07, 2021 at 02:59:58PM +0900, Michael Paquier wrote:\n> In 0002, everything is a boolean expression except for\n> SpGistPageStoresNulls() and GistPageIsLeaf(). So that's a good\n> cleanup overall.\n\nI looked again at 0002 yesterday, and that was an improvement\nfor most of those locations, where we already use a boolean as\nexpression, so done mostly as of fd0625c.\n\n> - pathnode->parallel_aware = parallel_workers > 0 ? true : false;\n> + pathnode->parallel_aware = parallel_workers > 0;\n> I also prefer that we keep the parenthesis for such things. That's\n> more readable and easier to reason about.\n\nAdjusted these as well.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 13:28:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "At Thu, 9 Sep 2021 13:28:54 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Sep 07, 2021 at 02:59:58PM +0900, Michael Paquier wrote:\n> > In 0002, everything is a boolean expression except for\n> > SpGistPageStoresNulls() and GistPageIsLeaf(). So that's a good\n> > cleanup overall.\n> \n> I looked again at 0002 yesterday, and that was an improvement\n> for most of those locations, where we already use a boolean as\n> expression, so done mostly as of fd0625c.\n> \n> > - pathnode->parallel_aware = parallel_workers > 0 ?
true : false;\n> > + pathnode->parallel_aware = parallel_workers > 0;\n> > I also prefer that we keep the parenthesis for such things. That's\n> > more readable and easier to reason about.\n> \n> Adjusted these as well.\n\nMaybe I'm missing something, but I can see several instances of the\n\"eval-bool ? true : false\" pattern after fd0625c7a9 that are not in\nthe latest 0002.\n\n./backend/nodes/readfuncs.c:187:#define strtobool(x) ((*(x) == 't') ? true : false)\n./backend/tsearch/wparser_def.c:1859:\treturn (item && (item->flags & A_BINGO)) ? true : false;\n\nThese are candidates to fix.\n\n./backend/tsearch/ts_utils.c:145:\t\t\t\t\tsizeof(char *), pg_qsort_strcmp)) ? true : false;\n\nThis is a part of the following expression.\n\n> \treturn (s->stop && s->len > 0 &&\n> \t\t\tbsearch(&key, s->stop, s->len,\n> \t\t\t\t\tsizeof(char *), pg_qsort_strcmp)) ? true : false;\n\nSo this is also a candidate.\n\nAlso found !f(eval) equivalents.\n\n./backend/access/gist/gistsplit.c:424:\tsv->spl_ldatum_exists = (v->spl_lisnull[attno]) ? false : true;\n./backend/access/gist/gistsplit.c:425:\tsv->spl_rdatum_exists = (v->spl_risnull[attno]) ? false : true;\n./backend/access/gist/gistsplit.c:454:\t\tsv->spl_ldatum_exists = (v->spl_lisnull[attno]) ? false : true;\n./backend/access/gist/gistsplit.c:455:\t\tsv->spl_rdatum_exists = (v->spl_risnull[attno]) ? false : true;\n./backend/commands/tablecmds.c:7466:\t\t\t\t\t newDefault == NULL ? false : true);\n./backend/executor/spi.c:146:\t_SPI_current->atomic = (options & SPI_OPT_NONATOMIC ? false : true);\n./backend/executor/nodeResult.c:198:\tresstate->rs_checkqual = (node->resconstantqual == NULL) ?
false : true;\n./backend/executor/nodeResult.c:263:\tnode->rs_checkqual = (node->resconstantqual == NULL) ? false : true;\n./backend/statistics/mcv.c:1622:\tmemset(matches, (is_or) ? false : true,\n./backend/tsearch/spell.c:1708:\t\t\t\t\t\t? false : true;\n\n./interfaces/ecpg/ecpglib/execute.c:124:\t\t\tstring = string ? false : true;\n./interfaces/ecpg/ecpglib/prepare.c:113:\t\t\tstring = string ? false : true;\n./interfaces/ecpg/ecpglib/data.c:959:\t\t\t\t\t\tstring = string ? false : true;\n(Note: the \"string\" is a bool)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Sep 2021 14:14:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Sep 09, 2021 at 02:14:50PM +0900, Kyotaro Horiguchi wrote:\n> Maybe I'm missing something, but I can see several instances of the\n> \"eval-bool ? true : false\" pattern after fd0625c7a9 that are not in\n> the latest 0002.\n\nYep. There are more of these, and I have just looked at some of them\nas of the patches proposed.
What was sent looked clean enough to\n> progress a bit and be done with it.\n\nWhile reading the decode.c I found the extra parentheses and arrived\nat this thread. The discussion seems to get inactive now but one (0001\npatch) out of two patches Justin proposed [1] is not committed yet and\nthere seems no CF entry for this item (0002 patch already got\ncommitted, fd0625c7a9). 0001 patch can be cleanly applied and looks\ngood to me.\n\nAlso, regarding \"x ? true: false\" pattern where x is guaranteed to\nyield a boolean, I found other examples other than Horiguchi-san\nmentioned[2]. I've attached the patch to remove them.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20210906001110.GF26465%40telsasoft.com\n[2] https://www.postgresql.org/message-id/20210909.141450.11969674682374713.horikyota.ntt%40gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 7 Oct 2021 11:18:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Oct 07, 2021 at 11:18:24AM +0900, Masahiko Sawada wrote:\n> On Sat, Sep 11, 2021 at 2:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Sep 09, 2021 at 02:14:50PM +0900, Kyotaro Horiguchi wrote:\n> > > Maybe I'm missing something, but I can see several instances of the\n> > > \"eval-bool ? true : false\" pattern after fd0625c7a9 that are not in\n> > > the latest 0002.\n> >\n> > Yep. There are more of these, and I have just looked at some of them\n> > as of the patches proposed. 
What was sent looked clean enough to\n> > progress a bit and be done with it.\n> \n> While reading the decode.c I found the extra parentheses and arrived\n> at this thread.\n\nI'm not quite sure how you managed to search for it - well done ;)\n\n> The discussion seems to get inactive now but one (0001\n> patch) out of two patches Justin proposed [1] is not committed yet and\n> there seems no CF entry for this item (0002 patch already got\n> committed, fd0625c7a9). 0001 patch can be cleanly applied and looks\n> good to me.\n\nNote that I also included it here:\nhttps://www.postgresql.org/message-id/20210924215827.GS831@telsasoft.com\n\nMichael seems prefer writing (() != 0) in more cases than other people, so\ndidn't care for that patch.\nhttps://www.postgresql.org/message-id/577206.1620628321@sss.pgh.pa.us\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 6 Oct 2021 21:44:30 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Oct 7, 2021 at 11:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Oct 07, 2021 at 11:18:24AM +0900, Masahiko Sawada wrote:\n> > On Sat, Sep 11, 2021 at 2:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Thu, Sep 09, 2021 at 02:14:50PM +0900, Kyotaro Horiguchi wrote:\n> > > > Maybe I'm missing something, but I can see several instances of the\n> > > > \"eval-bool ? true : false\" pattern after fd0625c7a9 that are not in\n> > > > the latest 0002.\n> > >\n> > > Yep. There are more of these, and I have just looked at some of them\n> > > as of the patches proposed. 
What was sent looked clean enough to\n> > > progress a bit and be done with it.\n> >\n> > While reading the decode.c I found the extra parentheses and arrived\n> > at this thread.\n>\n> I'm not quite sure how you managed to search for it - well done ;)\n\nI could not find the recent thread, though :)\n\n>\n> > The discussion seems to get inactive now but one (0001\n> > patch) out of two patches Justin proposed [1] is not committed yet and\n> > there seems no CF entry for this item (0002 patch already got\n> > committed, fd0625c7a9). 0001 patch can be cleanly applied and looks\n> > good to me.\n>\n> Note that I also included it here:\n> https://www.postgresql.org/message-id/20210924215827.GS831@telsasoft.com\n\nGood. Thank you for the information!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 7 Oct 2021 13:27:34 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Oct 07, 2021 at 01:27:34PM +0900, Masahiko Sawada wrote:\n> On Thu, Oct 7, 2021 at 11:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I'm not quite sure how you managed to search for it - well done ;)\n> \n> I could not find the recent thread, though :)\n\nHm. 
It looks like there are more occurences of \"false : true\" that\ncould be cleaned up, like in nodeResult.c or tablecmds.c.\n--\nMichael", "msg_date": "Thu, 7 Oct 2021 13:36:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Oct 7, 2021 at 1:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 07, 2021 at 01:27:34PM +0900, Masahiko Sawada wrote:\n> > On Thu, Oct 7, 2021 at 11:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> I'm not quite sure how you managed to search for it - well done ;)\n> >\n> > I could not find the recent thread, though :)\n>\n> Hm. It looks like there are more occurences of \"false : true\" that\n> could be cleaned up, like in nodeResult.c or tablecmds.c.\n\nIndeed. I've attached a patch that also deals with \"false : true\" cases.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 7 Oct 2021 15:24:53 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Oct 07, 2021 at 03:24:53PM +0900, Masahiko Sawada wrote:\n> Indeed. I've attached a patch that also deals with \"false : true\" cases.\n\nLooks right. I would be tempted to keep the one in readfuncs.c\nthough, mostly as a matter of style, and I would add a comparison with\nNULL for the return result of bsearch() in ts_utils.c.\n--\nMichael", "msg_date": "Thu, 7 Oct 2021 16:49:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Thu, Oct 07, 2021 at 04:49:10PM +0900, Michael Paquier wrote:\n> Looks right. 
I would be tempted to keep the one in readfuncs.c\n> though, mostly as a matter of style.\n\nI have left this one alone, and applied the rest as of 68f7c4b.\n--\nMichael", "msg_date": "Mon, 11 Oct 2021 09:45:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" }, { "msg_contents": "On Mon, Oct 11, 2021 at 9:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 07, 2021 at 04:49:10PM +0900, Michael Paquier wrote:\n> > Looks right. I would be tempted to keep the one in readfuncs.c\n> > though, mostly as a matter of style.\n>\n> I have left this one alone, and applied the rest as of 68f7c4b.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 11 Oct 2021 10:21:28 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strange case of \"if ((a & b))\"" } ]
[ { "msg_contents": "\n\nHello,\n\ngcc 11.1.0 produces quite a litany of second thoughts when compiling llvmjit_expr.c; is there something in it?\n\n(all 'warning' or 'note')\n\n\nllvmjit_expr.c: In function ‘llvm_compile_expr’:\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1553:33: note: in expansion of macro ‘build_EvalXFunc’\n 1553 | build_EvalXFunc(b, mod, \"ExecEvalSQLValueFunction\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1553:33: note: in expansion of macro ‘build_EvalXFunc’\n 1553 | build_EvalXFunc(b, mod, \"ExecEvalSQLValueFunction\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1559:33: note: in expansion of macro ‘build_EvalXFunc’\n 1559 | build_EvalXFunc(b, mod, \"ExecEvalCurrentOfExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1559:33: note: in expansion of macro ‘build_EvalXFunc’\n 1559 | build_EvalXFunc(b, mod, \"ExecEvalCurrentOfExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1565:33: note: in expansion of macro ‘build_EvalXFunc’\n 1565 | build_EvalXFunc(b, mod, \"ExecEvalNextValueExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1565:33: note: in expansion of macro ‘build_EvalXFunc’\n 1565 | build_EvalXFunc(b, mod, \"ExecEvalNextValueExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1571:33: note: in expansion of macro ‘build_EvalXFunc’\n 1571 | build_EvalXFunc(b, mod, \"ExecEvalArrayExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1571:33: note: in expansion of macro ‘build_EvalXFunc’\n 1571 | build_EvalXFunc(b, mod, \"ExecEvalArrayExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1583:33: note: in expansion of macro ‘build_EvalXFunc’\n 1583 | build_EvalXFunc(b, mod, \"ExecEvalRow\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1583:33: note: in expansion of macro ‘build_EvalXFunc’\n 1583 | build_EvalXFunc(b, mod, \"ExecEvalRow\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1738:33: note: in expansion of macro ‘build_EvalXFunc’\n 1738 | build_EvalXFunc(b, mod, \"ExecEvalMinMax\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1738:33: note: in expansion of macro ‘build_EvalXFunc’\n 1738 | build_EvalXFunc(b, mod, \"ExecEvalMinMax\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1816:33: note: in expansion of macro ‘build_EvalXFunc’\n 1816 | build_EvalXFunc(b, mod, \"ExecEvalConstraintNotNull\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1816:33: note: in expansion of macro ‘build_EvalXFunc’\n 1816 | build_EvalXFunc(b, mod, \"ExecEvalConstraintNotNull\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1822:33: note: in expansion of macro ‘build_EvalXFunc’\n 1822 | build_EvalXFunc(b, mod, \"ExecEvalConstraintCheck\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1822:33: note: in expansion of macro ‘build_EvalXFunc’\n 1822 | build_EvalXFunc(b, mod, \"ExecEvalConstraintCheck\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1834:33: note: in expansion of macro ‘build_EvalXFunc’\n 1834 | build_EvalXFunc(b, mod, \"ExecEvalScalarArrayOp\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1834:33: note: in expansion of macro ‘build_EvalXFunc’\n 1834 | build_EvalXFunc(b, mod, \"ExecEvalScalarArrayOp\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1846:33: note: in expansion of macro ‘build_EvalXFunc’\n 1846 | build_EvalXFunc(b, mod, \"ExecEvalXmlExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1846:33: note: in expansion of macro ‘build_EvalXFunc’\n 1846 | build_EvalXFunc(b, mod, \"ExecEvalXmlExpr\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: warning: ‘build_EvalXFuncInt’ accessing 8 bytes in a region of size 0 [-Wstringop-overflow=]\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1872:33: note: in expansion of macro ‘build_EvalXFunc’\n 1872 | build_EvalXFunc(b, mod, \"ExecEvalGroupingFunc\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:69:9: note: referencing argument 7 of type ‘struct LLVMOpaqueValue **’\n 69 | build_EvalXFuncInt(b, mod, funcname, v_state, op, \\\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 70 | lengthof(((LLVMValueRef[]){__VA_ARGS__})), \\\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n 71 | ((LLVMValueRef[]){__VA_ARGS__}))\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nllvmjit_expr.c:1872:33: note: in expansion of macro ‘build_EvalXFunc’\n 1872 | build_EvalXFunc(b, mod, \"ExecEvalGroupingFunc\",\n | ^~~~~~~~~~~~~~~\nllvmjit_expr.c:2460:1: note: in a call to function ‘build_EvalXFuncInt’\n 2460 | build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,\n | ^~~~~~~~~~~~~~~~~~\n\n\n\nErik Rijkers\n\n\n", "msg_date": "Thu, 29 Apr 2021 00:07:01 +0200 (CEST)", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "gcc 11.1.0 warnings in llvmjit_expr.c" },
{ "msg_contents": "Hi,\n\nOn 2021-04-29 00:07:01 +0200, Erik Rijkers wrote:\n> gcc 11.1.0 produces quite a litany of second thoughts when compiling llvmjit_expr.c; is there something in it?\n\nThey're wrong, but I can see why it warns - the signature choice was\nweird. Pushed the fix to 13 and master.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 5 May 2021 22:30:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gcc 11.1.0 warnings in llvmjit_expr.c" },
{ "msg_contents": "On 2021-05-05 22:30:44 -0700, Andres Freund wrote:\n> On 2021-04-29 00:07:01 +0200, Erik Rijkers wrote:\n> > gcc 11.1.0 produces quite a litany of second thoughts when compiling llvmjit_expr.c; is there something in it?\n> \n> They're wrong, but I can see why it warns - the signature choice was\n> weird. Pushed the fix to 13 and master.\n\nOh, and thanks for the report....", "msg_date": "Wed, 5 May 2021 22:31:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: gcc 11.1.0 warnings in llvmjit_expr.c" } ]
[ { "msg_contents": "Hello Hackers,\n\nI recently had to work on a case where some catalog files were\ncorrupt and/or missing. One of the things we sought to inspect was\npg_filenode.map, but there was no tooling available to do so.\n\nWith the help of Álvaro H.. I've put together a patch to allow pg_filedump\nto do some rudimentary decoding of pg_filenode.map, so that it's at least\nhuman-readable. I had the idea to try to map the OIDs to relnames, but\nthat might get hairy, with TOAST table OIDs possibly changing (for pg_proc,\netc.)\n\nIt seems that Christoph Berg is the primary committer for the pg_filedump\nproject these days so he is cc'd here, but if there's some other place I\nshould be sending this kind of contribution to, please let me know.\nHopefully this will be helpful to others in the future.\n\nMuch appreciated,\n--Richard", "msg_date": "Wed, 28 Apr 2021 17:54:22 -0700", "msg_from": "Richard Yen <richyen3@gmail.com>", "msg_from_op": true, "msg_subject": "Patch to allow pg_filedump to support reading of pg_filenode.map" },
{ "msg_contents": "This is separate from the postgresql server repo.\nhttps://git.postgresql.org/gitweb/?p=pg_filedump.git\n\n+#define RELMAPPER_FILEMAGIC 0x592717\n+char magic_buffer[8];\n\n...\n\n+ if ( (int) magic_buffer & RELMAPPER_FILEMAGIC ) {\n\nThis is doing bitwise arithmetic on a pointer, which seems badly wrong.\nI think it breaks normal use of pg_filedump - unless you happen to get a\nmagic_buffer without those bits set. The segfault seems to confirm that, as\ndoes gcc:\n\npg_filedump.c:2041:8: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]\n 2041 | if ( (int) magic_buffer & RELMAPPER_FILEMAGIC ) {\n\nI think it probably means to do memcmp, instead ??\n\n-- \nJustin", "msg_date": "Wed, 28 Apr 2021 23:42:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Patch to allow pg_filedump to support reading of pg_filenode.map" },
{ "msg_contents": "Thanks for the feedback, Justin. I've gone ahead and switched to use\nmemcmp. I also refactored it to:\n\n1. Don't assume that any file with first 4 bytes matching the\nrelmapper magic number is a pg_relnode.map file\n2. Don't assume the pg_relnode.map file is uncorrupted and intact; perform\na check of the first 4 bytes against the reference magic number\n3. Provide a flag (-m) for users to have their file interpreted as a\npg_relnode.map file\n\nI hope this is more palatable to everyone :)\n\n--Richard\n\n\n\nOn Wed, Apr 28, 2021 at 9:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> This is separate from the postgresql server repo.\n> https://git.postgresql.org/gitweb/?p=pg_filedump.git\n>\n> +#define RELMAPPER_FILEMAGIC 0x592717\n> +char magic_buffer[8];\n>\n> ...\n>\n> + if ( (int) magic_buffer & RELMAPPER_FILEMAGIC ) {\n>\n> This is doing bitwise arithmetic on a pointer, which seems badly wrong.\n> I think it breaks normal use of pg_filedump - unless you happen to get a\n> magic_buffer without those bits set. The segfault seems to confirm that,\n> as\n> does gcc:\n>\n> pg_filedump.c:2041:8: warning: cast from pointer to integer of different\n> size [-Wpointer-to-int-cast]\n> 2041 | if ( (int) magic_buffer & RELMAPPER_FILEMAGIC ) {\n>\n> I think it probably means to do memcmp, instead ??\n>\n> --\n> Justin\n>", "msg_date": "Thu, 29 Apr 2021 11:22:05 -0700", "msg_from": "Richard Yen <richyen3@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch to allow pg_filedump to support reading of pg_filenode.map" },
{ "msg_contents": "I think you should be able to avoid crashing if passed a non-relmapper file.\nMake sure not to loop over more mappings than exist in the relmapper file of\nthe given size.\n\nI guess you should warn if the number of mappings is too large for the file's\nsize. And then \"cap\" the number of mappings to the maximum possible number.\n\n-- \nJustin", "msg_date": "Thu, 29 Apr 2021 14:05:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Patch to allow pg_filedump to support reading of pg_filenode.map" },
{ "msg_contents": "On Thu, Apr 29, 2021 at 12:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I think you should be able to avoid crashing if passed a non-relmapper\n> file.\n> Make sure not to loop over more mappings than exist in the relmapper file\n> of\n> the given size.\n>\n> I guess you should warn if the number of mappings is too large for the\n> file's\n> size. And then \"cap\" the number of mappings to the maximum possible\n> number.\n>\n\nAh, thanks for the tip. That's right -- I can't assume the user's input is\na valid file. Updated patch here.\n\n--Richard\n\n\n\n\n>\n> --\n> Justin\n>", "msg_date": "Thu, 29 Apr 2021 12:39:57 -0700", "msg_from": "Richard Yen <richyen3@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch to allow pg_filedump to support reading of pg_filenode.map" },
{ "msg_contents": "Re: Richard Yen\n> Ah, thanks for the tip. That's right -- I can't assume the user's input is\n> a valid file. Updated patch here.\n\nHi Richard,\n\nsorry for the very late response here.\n\nThanks for the patch which I just merged, and thanks Justin for the\nreviews!\n\nChristoph", "msg_date": "Wed, 29 Sep 2021 18:01:48 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Patch to allow pg_filedump to support reading of pg_filenode.map" },
{ "msg_contents": "\n> On Sep 29, 2021, at 9:01 AM, Christoph Berg <myon@debian.org> wrote:\n> \n> Re: Richard Yen\n>> Ah, thanks for the tip. That's right -- I can't assume the user's input is\n>> a valid file. Updated patch here.\n> \n> Hi Richard,\n> \n> sorry for the very late response here.\n> \n> Thanks for the patch which I just merged, and thanks Justin for the\n> reviews!\n\nThank you! Was a great coding exercise :)\n\n-Richard", "msg_date": "Wed, 29 Sep 2021 09:15:14 -0700", "msg_from": "Richard Yen <richyen3@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch to allow pg_filedump to support reading of pg_filenode.map" } ]
[ { "msg_contents": "Hi\n\nI met an assertion failure at the publisher in lazy_scan_heap() when synchronous running logical replication. Could someone please take a look at it?\n\nHere's what I did to produce the problem.\n\nFirst, use './configure --enable-cassert' to build the PG.\nThen, I created multiple publications at publisher and multiple subscriptions at subscriber.\nThen, set the value of synchronous_standby_names and reload, make them in synchronous commit mode. After that, an assertion failed at publisher when I COMMIT and ROLLBACK transactions concurrently: \n\n>TRAP: FailedAssertion(\"!all_visible_according_to_vm || prunestate.all_visible\", File: \"vacuumlazy.c\", Line: 1347, PID: 1274675)\n\nBTW, in asynchronous mode, the same problem can also happen but in a low frequency.(I tried many times, but the problem happened only 2 times)\nAs for synchronous mode, I found it seems easier to reproduce the problem with setting \"autovacuum_naptime = 1\".\nBut it still can't be 100% to reproduced it. (I tested it 5 times, 3 of them reproduced it.) \n\nThe script and the log are attached. It took about 6min to run it(without problem) on my machine and it could be less than 6min if the server crashed.\n\nRegards\nTang", "msg_date": "Thu, 29 Apr 2021 02:34:21 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "[BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" },
{ "msg_contents": "On Thu, Apr 29, 2021 at 02:34:21AM +0000, tanghy.fnst@fujitsu.com wrote:\n> I met an assertion failure at the publisher in lazy_scan_heap() when\n> synchronous running logical replication. Could someone please take a\n> look at it?\n\nThis assertion is new as of 7136bf34. Peter?\n--\nMichael", "msg_date": "Thu, 29 Apr 2021 11:50:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" },
{ "msg_contents": "On Wed, Apr 28, 2021 at 7:34 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n> >TRAP: FailedAssertion(\"!all_visible_according_to_vm || prunestate.all_visible\", File: \"vacuumlazy.c\", Line: 1347, PID: 1274675)\n>\n> BTW, in asynchronous mode, the same problem can also happen but in a low frequency.(I tried many times, but the problem happened only 2 times)\n> As for synchronous mode, I found it seems easier to reproduce the problem with setting \"autovacuum_naptime = 1\".\n> But it still can't be 100% to reproduced it. (I tested it 5 times, 3 of them reproduced it.)\n\nIs setting all_visible_according_to_vm false as below enough to avoid\nthe assertion failure?\n\ndiff --git a/src/backend/access/heap/vacuumlazy.c\nb/src/backend/access/heap/vacuumlazy.c\nindex c3fc12d76c..76c17e063e 100644\n--- a/src/backend/access/heap/vacuumlazy.c\n+++ b/src/backend/access/heap/vacuumlazy.c\n@@ -1146,6 +1146,7 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams\n*params, bool aggressive)\n {\n ReleaseBuffer(vmbuffer);\n vmbuffer = InvalidBuffer;\n+ all_visible_according_to_vm = false;\n }\n\n /* Remove the collected garbage tuples from table and indexes */\n\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 28 Apr 2021 21:21:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" },
{ "msg_contents": "On Thursday, April 29, 2021 1:22 PM, Peter Geoghegan <pg@bowt.ie> wrote\r\n\r\n>Is setting all_visible_according_to_vm false as below enough to avoid\r\n>the assertion failure?\r\n>\r\n>diff --git a/src/backend/access/heap/vacuumlazy.c\r\n>b/src/backend/access/heap/vacuumlazy.c\r\n>index c3fc12d76c..76c17e063e 100644\r\n>--- a/src/backend/access/heap/vacuumlazy.c\r\n>+++ b/src/backend/access/heap/vacuumlazy.c\r\n>@@ -1146,6 +1146,7 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams\r\n>*params, bool aggressive)\r\n> {\r\n> ReleaseBuffer(vmbuffer);\r\n> vmbuffer = InvalidBuffer;\r\n>+ all_visible_according_to_vm = false;\r\n> }\r\n>\r\n> /* Remove the collected garbage tuples from table and indexes */\r\n\r\nThanks for your reply.\r\nI tried your patch but the problem seems not be fixed.\r\n\r\nRegards,\r\nTang\r\n", "msg_date": "Thu, 29 Apr 2021 05:46:00 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" },
{ "msg_contents": "On Thu, Apr 29, 2021 at 1:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 28, 2021 at 7:34 PM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> > >TRAP: FailedAssertion(\"!all_visible_according_to_vm || prunestate.all_visible\", File: \"vacuumlazy.c\", Line: 1347, PID: 1274675)\n> >\n> > BTW, in asynchronous mode, the same problem can also happen but in a low frequency.(I tried many times, but the problem happened only 2 times)\n> > As for synchronous mode, I found it seems easier to reproduce the problem with setting \"autovacuum_naptime = 1\".\n> > But it still can't be 100% to reproduced it. (I tested it 5 times, 3 of them reproduced it.)\n\nThanks for reporting the issue!\n\n>\n> Is setting all_visible_according_to_vm false as below enough to avoid\n> the assertion failure?\n>\n> diff --git a/src/backend/access/heap/vacuumlazy.c\n> b/src/backend/access/heap/vacuumlazy.c\n> index c3fc12d76c..76c17e063e 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1146,6 +1146,7 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams\n> *params, bool aggressive)\n> {\n> ReleaseBuffer(vmbuffer);\n> vmbuffer = InvalidBuffer;\n> + all_visible_according_to_vm = false;\n> }\n>\n> /* Remove the collected garbage tuples from table and indexes */\n\nSince we set all_visible_according_to_vm before acquiring the buffer\nlock it's likely to happen that the page gets modified and all-visible\nbit is cleared after setting true to all_visible_according_to_vm. This\nassertion can easily be reproduced by adding a delay before the buffer\nlock and invoking autovacuums frequently:\n\ndiff --git a/src/backend/access/heap/vacuumlazy.c\nb/src/backend/access/heap/vacuumlazy.c\nindex c3fc12d76c..76f067a7e4 100644\n--- a/src/backend/access/heap/vacuumlazy.c\n+++ b/src/backend/access/heap/vacuumlazy.c\n@@ -1180,6 +1180,8 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams\n*params, bool aggressive)\n buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno,\n RBM_NORMAL, vacrel->bstrategy);\n\n+ pg_usleep(100000);\n+\n /*\n * We need buffer cleanup lock so that we can prune HOT chains and\n * defragment the page.\n\nSo we should recheck also visibility map bit there but I think we can\nremove this assertion since we already do that in later code and we\ndon’t treat this case as a should-not-happen case:\n\n /*\n * As of PostgreSQL 9.2, the visibility map bit should never be set if\n * the page-level bit is clear.
However, it's possible that the bit\n * got cleared after we checked it and before we took the buffer\n * content lock, so we must recheck before jumping to the conclusion\n * that something bad has happened.\n */\n else if (all_visible_according_to_vm && !PageIsAllVisible(page)\n && VM_ALL_VISIBLE(vacrel->rel, blkno, &vmbuffer))\n {\n elog(WARNING, \"page is not marked all-visible but\nvisibility map bit is set in relation \\\"%s\\\" page %u\",\n vacrel->relname, blkno);\n visibilitymap_clear(vacrel->rel, blkno, vmbuffer,\n VISIBILITYMAP_VALID_BITS);\n }\n\nI've attached a patch removing the assertion.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 6 May 2021 21:40:18 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" }, { "msg_contents": "On Thu, May 6, 2021 at 6:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Since we set all_visible_according_to_vm before acquiring the buffer\n> lock it's likely to happen that the page gets modified and all-visible\n> bit is cleared after setting true to all_visible_according_to_vm. This\n> assertion can easily be reproduced by adding a delay before the buffer\n> lock and invoking autovacuums frequently:\n>\n\nYour analysis seems correct to me that before we acquire the page the\nsomeone can modify it. 
So this assertion is not correct.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 19:08:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" }, { "msg_contents": "Hi,\n\nOn 2021-05-06 21:40:18 +0900, Masahiko Sawada wrote:\n> Since we set all_visible_according_to_vm before acquiring the buffer\n> lock it's likely to happen that the page gets modified and all-visible\n> bit is cleared after setting true to all_visible_according_to_vm. This\n> assertion can easily be reproduced by adding a delay before the buffer\n> lock and invoking autovacuums frequently:\n\n> So we should recheck also visibility map bit there but I think we can\n> remove this assertion since we already do that in later code and we\n> don’t treat this case as a should-not-happen case:\n>\n> /*\n> * As of PostgreSQL 9.2, the visibility map bit should never be set if\n> * the page-level bit is clear. However, it's possible that the bit\n> * got cleared after we checked it and before we took the buffer\n> * content lock, so we must recheck before jumping to the conclusion\n> * that something bad has happened.\n> */\n> else if (all_visible_according_to_vm && !PageIsAllVisible(page)\n> && VM_ALL_VISIBLE(vacrel->rel, blkno, &vmbuffer))\n> {\n> elog(WARNING, \"page is not marked all-visible but\n> visibility map bit is set in relation \\\"%s\\\" page %u\",\n> vacrel->relname, blkno);\n> visibilitymap_clear(vacrel->rel, blkno, vmbuffer,\n> VISIBILITYMAP_VALID_BITS);\n> }\n>\n> I've attached a patch removing the assertion.\n\nI think it'd be a good idea to audit the other uses of\nall_visible_according_to_vm to make sure there's no issues there. Can't\nthis e.g. 
make us miss setting all-visible in\n\n\t\t/*\n\t\t * Handle setting visibility map bit based on what the VM said about\n\t\t * the page before pruning started, and using prunestate\n\t\t */\n\t\tif (!all_visible_according_to_vm && prunestate.all_visible)\n\nPerhaps we should update all_visible_according_to_vm after locking the\nbuffer, if PageIsAllVisible(page) is true?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 12:32:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" }, { "msg_contents": "On Thu, May 6, 2021 at 5:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I've attached a patch removing the assertion.\n\nPushed that just now -- thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 May 2021 13:18:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" }, { "msg_contents": "On Thu, May 6, 2021 at 12:32 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd be a good idea to audit the other uses of\n> all_visible_according_to_vm to make sure there's no issues there. Can't\n> this e.g. make us miss setting all-visible in\n>\n> /*\n> * Handle setting visibility map bit based on what the VM said about\n> * the page before pruning started, and using prunestate\n> */\n> if (!all_visible_according_to_vm && prunestate.all_visible)\n\nI don't think so, because it's the inverse case -- the condition that\nyou quote is concerned with the case where we found the VM all_visible\nbit to not be set earlier, and then found that we could set it now.\n\nThe assertion failed because the VM's all_visible bit was set\ninitially, but concurrently unset by some other backend. 
The\nall_visible_according_to_vm tracking variable became stale, so it\nwasn't correct to expect current information from prunestate to agree\nthat the page is still all_visible.\n\nHigh level philosophical observation: This reminds me of another way\nin which things are too tightly coupled in VACUUM. It's really a pity\nthat the visibility map's all_visible bit serves two purposes -- it\nremembers pages that VACUUM doesn't have to visit (except perhaps if\nit's an aggressive VACUUM), and is also used for index-only scans. If\nit was just used for index-only scans then I don't think it would be\nnecessary for a HOT update to unset a page's all_visible bit. Since a\nHOT chain's members are always versions of the same logical row, there\nis no reason why an index-only scan needs to care which precise\nversion is actually visible to its MVCC snapshot (once we know that\nthere must be exactly one version from each HOT chain).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 May 2021 13:35:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" }, { "msg_contents": "Hi,\n\nOn 2021-05-06 13:35:56 -0700, Peter Geoghegan wrote:\n> On Thu, May 6, 2021 at 12:32 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think it'd be a good idea to audit the other uses of\n> > all_visible_according_to_vm to make sure there's no issues there. Can't\n> > this e.g. 
make us miss setting all-visible in\n> >\n> > /*\n> > * Handle setting visibility map bit based on what the VM said about\n> > * the page before pruning started, and using prunestate\n> > */\n> > if (!all_visible_according_to_vm && prunestate.all_visible)\n> \n> I don't think so, because it's the inverse case -- the condition that\n> you quote is concerned with the case where we found the VM all_visible\n> bit to not be set earlier, and then found that we could set it now.\n\nUh, yes, that is exactly my point. Because all_visible_according_to_vm\nis \"falsely true\", we'll not enter this branch, even though we actually\nwould want to mark it all visible again. If we did update\nall_visible_according_to_vm after acquiring the content lock, we would\nhave entered this branch, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 20:18:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG]\"FailedAssertion\" reported in lazy_scan_heap() when running\n logical replication" } ]
[ { "msg_contents": "For testing unusual situations I'd like to be able to cause a backend to\nterminate due to something like a segfault. Do we currently have this in\ntesting ?\n\nDave Cramer\n\nFor testing unusual situations I'd like to be able to cause a backend to terminate due to something like a segfault. Do we currently have this in testing ?Dave Cramer", "msg_date": "Thu, 29 Apr 2021 06:56:44 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "function for testing that causes the backend to terminate" }, { "msg_contents": "On Thu, Apr 29, 2021 at 4:27 PM Dave Cramer <davecramer@gmail.com> wrote:\n> For testing unusual situations I'd like to be able to cause a backend to terminate due to something like a segfault. Do we currently have this in testing ?\n\nWell, you could use pg_terminate_backend which sends SIGTERM to the\nbackend. However, we don't have a function that sends SIGSEGV yet, you\ncould signal the backend with SIGSEGV directly, if possible.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Apr 2021 16:36:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: function for testing that causes the backend to terminate" }, { "msg_contents": "On 4/29/21 6:56 AM, Dave Cramer wrote:\n> For testing unusual situations I'd like to be able to cause a backend to \n> terminate due to something like a segfault. 
Do we currently have this in \n> testing ?\n\nIf you can run SQL as a superuser from that backend, try:\n\nCOPY (SELECT pg_backend_pid())\n TO PROGRAM 'xargs kill -SIGSEGV';\n\nHTH,\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Thu, 29 Apr 2021 16:16:37 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: function for testing that causes the backend to terminate" }, { "msg_contents": "On Thu, Apr 29, 2021 at 4:36 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 4:27 PM Dave Cramer <davecramer@gmail.com> wrote:\n> > For testing unusual situations I'd like to be able to cause a backend to terminate due to something like a segfault. Do we currently have this in testing ?\n>\n> Well, you could use pg_terminate_backend which sends SIGTERM to the\n> backend. However, we don't have a function that sends SIGSEGV yet, you\n> could signal the backend with SIGSEGV directly, if possible.\n\nAnd, I came across an extension called pg_crash [1], see if that helps.\n\n[1] https://www.cybertec-postgresql.com/en/pg_crash-crashing-postgresql-automatically/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 11:28:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: function for testing that causes the backend to terminate" }, { "msg_contents": "\nOn 4/29/21 4:16 PM, Joe Conway wrote:\n> On 4/29/21 6:56 AM, Dave Cramer wrote:\n>> For testing unusual situations I'd like to be able to cause a backend\n>> to terminate due to something like a segfault. 
Do we currently have\n>> this in testing ?\n>\n> If you can run SQL as a superuser from that backend, try:\n>\n> COPY (SELECT pg_backend_pid())\n>  TO PROGRAM 'xargs kill -SIGSEGV';\n>\n> HTH,\n>\n> Joe\n>\n\n\nor a plperlu function that does 'kill 11, $$;' should do it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 1 May 2021 11:56:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: function for testing that causes the backend to terminate" }, { "msg_contents": "Joe,\n\nThanks,\n\nThis works and I don't have to install anything!\n\nDave Cramer\n\n\nOn Thu, 29 Apr 2021 at 16:16, Joe Conway <mail@joeconway.com> wrote:\n\n> On 4/29/21 6:56 AM, Dave Cramer wrote:\n> > For testing unusual situations I'd like to be able to cause a backend to\n> > terminate due to something like a segfault. Do we currently have this in\n> > testing ?\n>\n> If you can run SQL as a superuser from that backend, try:\n>\n> COPY (SELECT pg_backend_pid())\n> TO PROGRAM 'xargs kill -SIGSEGV';\n>\n> HTH,\n>\n> Joe\n>\n> --\n> Crunchy Data - http://crunchydata.com\n> PostgreSQL Support for Secure Enterprises\n> Consulting, Training, & Open Source Development\n>\n\nJoe,Thanks,This works and I don't have to install anything!Dave CramerOn Thu, 29 Apr 2021 at 16:16, Joe Conway <mail@joeconway.com> wrote:On 4/29/21 6:56 AM, Dave Cramer wrote:\n> For testing unusual situations I'd like to be able to cause a backend to \n> terminate due to something like a segfault. 
Do we currently have this in \n> testing ?\n\nIf you can run SQL as a superuser from that backend, try:\n\nCOPY (SELECT pg_backend_pid())\n  TO PROGRAM 'xargs kill -SIGSEGV';\n\nHTH,\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Mon, 3 May 2021 13:51:29 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: function for testing that causes the backend to terminate" } ]
[ { "msg_contents": "Hi,\n\nThe function quote_identifier has extra post-increment operation as\nhighlighted below,\n\nchar *\nquote_identifier(const char *s)\n{\n char *result = pg_malloc(strlen(s) * 2 + 3);\n char *r = result;\n\n *r++ = '\"';\n while (*s)\n {\n if (*s == '\"')\n *r++ = *s;\n *r++ = *s;\n s++;\n }\n *r++ = '\"';\n **r++ = '\\0';*\n\n return result;\n}\n\nI think *r = '\\0' is enough here. Per precedence table the precedence of\npostfix increment operator is higher. The above statement increments 'r'\npointer address but returns the original un-incremented pointer address,\nwhich is then dereferenced. Correct me if I am wrong here.\n\nIf my understanding is correct then '++' is not needed in the\nabove highlighted statement which is leading to overhead.\n\nFind an attached patch which does the same. This can be backported till v96.\n\nThanks & Regards,\nVaibhav Dalvi\n[image: image.png]", "msg_date": "Thu, 29 Apr 2021 18:35:28 +0530", "msg_from": "Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove post-increment in function quote_identifier of pg_upgrade" }, { "msg_contents": "On Thu, Apr 29, 2021 at 06:35:28PM +0530, Vaibhav Dalvi wrote:\n> Hi,\n> \n> The function quote_identifier has extra post-increment operation as\n> highlighted below,\n> \n> char *\n> quote_identifier(const char *s)\n> {\n> char *result = pg_malloc(strlen(s) * 2 + 3);\n> char *r = result;\n> \n> *r++ = '\"';\n> while (*s)\n> {\n> if (*s == '\"')\n> *r++ = *s;\n> *r++ = *s;\n> s++;\n> }\n> *r++ = '\"';\n> **r++ = '\\0';*\n> \n> return result;\n> }\n> \n> I think *r = '\\0' is enough here. Per precedence table the precedence of\n> postfix increment operator is higher. The above statement increments 'r'\n> pointer address but returns the original un-incremented pointer address,\n> which is then dereferenced. 
Correct me if I am wrong here.\n> \n> If my understanding is correct then '++' is not needed in the\n> above highlighted statement which is leading to overhead.\n\nI don't think the integer increment during pg_upgrade is a meaningful overhead.\nYou could check the compiler's assembly output it may be the same even without\nthe ++.\n\nI'd suggest to leave it as it's currently written, since the idiom on every\nother line is *r++ = ..., it'd be strange to write it differently here, and\ncould end up being confusing or copied+pasted somewhere else.\n\n> Find an attached patch which does the same. This can be backported till v96.\n\nIn any case, think it would not be backpatched, since it's essentially\ncosmetic.\n\n> diff --git a/src/bin/pg_upgrade/util.c b/src/bin/pg_upgrade/util.c\n> index fc20472..dc000d0 100644\n> --- a/src/bin/pg_upgrade/util.c\n> +++ b/src/bin/pg_upgrade/util.c\n> @@ -198,7 +198,7 @@ quote_identifier(const char *s)\n> \t\ts++;\n> \t}\n> \t*r++ = '\"';\n> -\t*r++ = '\\0';\n> +\t*r = '\\0';\n> \n> \treturn result;\n> }\n\n\n", "msg_date": "Thu, 29 Apr 2021 09:37:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove post-increment in function quote_identifier of pg_upgrade" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Apr 29, 2021 at 06:35:28PM +0530, Vaibhav Dalvi wrote:\n>> If my understanding is correct then '++' is not needed in the\n>> above highlighted statement which is leading to overhead.\n\n> I don't think the integer increment during pg_upgrade is a meaningful overhead.\n> You could check the compiler's assembly output it may be the same even without\n> the ++.\n\nYeah: if the increment actually costs something, I'd expect the compiler\nto optimize it away. 
But on a lot of machine architectures, a pointer\npost-increment is basically free anyhow.\n\n> I'd suggest to leave it as it's currently written, since the idiom on every\n> other line is *r++ = ..., it'd be strange to write it differently here, and\n> could end up being confusing or copied+pasted somewhere else.\n\nI agree --- cosmetically, this change isn't an improvement.\n\n(On the other hand, if it were written the other way already, I'd also\nargue to leave it like that. Basically, this sort of change is just not\nworth troubling over. It doesn't improve things meaningfully and it\ncreates back-patching hazards.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Apr 2021 10:51:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove post-increment in function quote_identifier of pg_upgrade" }, { "msg_contents": "On 2021-Apr-29, Tom Lane wrote:\n\n> (On the other hand, if it were written the other way already, I'd also\n> argue to leave it like that. Basically, this sort of change is just not\n> worth troubling over. It doesn't improve things meaningfully and it\n> creates back-patching hazards.)\n\nThis argument applies equally well to the patch at \nhttp://postgr.es/m/CAAJ_b94M_1YoybQpNjmD+ZFZkUT2OpoP5xnFiWM+X=xh-nX23Q@mail.gmail.com\nso if we reject this one, we should reject that one too.\nCC'ed patch author.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:16:53 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove post-increment in function quote_identifier of pg_upgrade" } ]
[ { "msg_contents": "Hi,\n\nWhile debugging one of the logical decoding issues, I found that xid was\nnot included in restoring reorder buffer changes from disk log messages.\nAttached a patch for it. I felt including the XID will be helpful in\ndebugging. Thoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 29 Apr 2021 21:45:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Included xid in restoring reorder buffer changes from disk log\n message" }, { "msg_contents": "On Thu, Apr 29, 2021 at 9:45 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> While debugging one of the logical decoding issues, I found that xid was not included in restoring reorder buffer changes from disk log messages. Attached a patch for it. I felt including the XID will be helpful in debugging. Thoughts?\n>\n\nIt makes sense to include xid in the debug message, earlier I thought\nthat will it be a good idea to also print the toplevel_xid. But I\nthink it is for debug purposes and only we have the xid we can fetch\nthe other parameters so maybe adding xid is good enough.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:53:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Included xid in restoring reorder buffer changes from disk log\n message" }, { "msg_contents": "On Fri, 30 Apr 2021 at 11:53, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 9:45 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While debugging one of the logical decoding issues, I found that xid was not included in restoring reorder buffer changes from disk log messages. Attached a patch for it. I felt including the XID will be helpful in debugging. Thoughts?\n> >\n>\n> It makes sense to include xid in the debug message, earlier I thought\n> that will it be a good idea to also print the toplevel_xid. 
But I\n> think it is for debug purposes and only we have the xid we can fetch\n> the other parameters so maybe adding xid is good enough.\n\nWhile having a look at the reorderbuffer code, I noticed that this\nchanges were still not committed.\nHere is a rebased version of the patch.\n\nRegards,\nVignesh", "msg_date": "Fri, 6 Oct 2023 14:58:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Included xid in restoring reorder buffer changes from disk log\n message" }, { "msg_contents": "At Fri, 6 Oct 2023 14:58:13 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> On Fri, 30 Apr 2021 at 11:53, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > It makes sense to include xid in the debug message, earlier I thought\n> > that will it be a good idea to also print the toplevel_xid. But I\n> > think it is for debug purposes and only we have the xid we can fetch\n> > the other parameters so maybe adding xid is good enough.\n\n+1\n\n> While having a look at the reorderbuffer code, I noticed that this\n> changes were still not committed.\n> Here is a rebased version of the patch.\n\nWhile this patch makes the following change on the de-serializing side;\n\n-\t\t\telog(DEBUG2, \"restored %u/%u changes from disk\",\n+\t\t\telog(DEBUG2, \"restored %u/%u changes of XID %u from disk\",\n\nthe counter part ReorderBufferSerializeTXN() has the following\nmessage.\n\n> \telog(DEBUG2, \"spill %u changes in XID %u to disk\",\n>\t\t (uint32) txn->nentries_mem, txn->xid);\n\nIt might be preferable for those two messages to have a corresponding\nappearance.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Oct 2023 10:29:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Included xid in restoring reorder buffer changes from disk log\n message" }, { "msg_contents": "On Tue, 10 Oct 2023 at 06:59, Kyotaro Horiguchi 
<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 6 Oct 2023 14:58:13 +0530, vignesh C <vignesh21@gmail.com> wrote in\n> > On Fri, 30 Apr 2021 at 11:53, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > It makes sense to include xid in the debug message, earlier I thought\n> > > that will it be a good idea to also print the toplevel_xid. But I\n> > > think it is for debug purposes and only we have the xid we can fetch\n> > > the other parameters so maybe adding xid is good enough.\n>\n> +1\n>\n> > While having a look at the reorderbuffer code, I noticed that this\n> > changes were still not committed.\n> > Here is a rebased version of the patch.\n>\n> While this patch makes the following change on the de-serializing side;\n>\n> - elog(DEBUG2, \"restored %u/%u changes from disk\",\n> + elog(DEBUG2, \"restored %u/%u changes of XID %u from disk\",\n>\n> the counter part ReorderBufferSerializeTXN() has the following\n> message.\n>\n> > elog(DEBUG2, \"spill %u changes in XID %u to disk\",\n> > (uint32) txn->nentries_mem, txn->xid);\n>\n> It might be preferable for those two messages to have a corresponding\n> appearance.\n\nWe cannot include nentries in ReorderBufferSerializeTXN as the number\nof entries will not be known at that time. Modified to keep it\nconsistent by changing it to \"... changes in XID ...\". Attached v3\npatch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Sat, 14 Oct 2023 08:07:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Included xid in restoring reorder buffer changes from disk log\n message" } ]
[ { "msg_contents": "\nThe notion of TID is based on pages and line pointers, which makes\nsense for heapam, but that's not likely to make sense for a custom\ntable AM.\n\nThe obvious answer is to make a simple mapping between a TID and\nwhatever makes sense to the AM (for the sake of discussion, let's say a\nplain row number).\n\nThe most natural thing would be to say that we have 48 bits, so it can\njust be a 48-bit number. Of course, there are some restrictions on\nvalid values that complicate this:\n\n * InvalidBlockNumber of 0xFFFFFFFF. Not a problem.\n * InvalidOffsetNumber of 0. Not a problem.\n * MaxOffsetNumber of 2048. Does this limit really apply to table AMs?\nIt just seems like it's used when scanning heap or index pages for\nstack-allocated arrays. For a table AM it appears to waste 5 bits.\n * ginpostinglist.c:itemptr_to_uint64() seems to think 2047 is the max\noffset number. Is this a bug?\n\nAs a table AM author, I'd like to know what the real limits are so that\nI can use whatever bits are available to map from TID to row number and\nback, without worrying that something will break in the future. A few\npossibilities:\n\n 1. Keep MaxOffsetNumber as 2048 and fix itemptr_to_uint64().\n 2. Change MaxOffsetNumber to 2047. This seems likely to break\nextensions that rely on it.\n 3. Define MaxOffsetNumber as 65536 and introduce a new\nMaxItemsPerPage as 2048 for the stack-allocated arrays. 
We'd still need\nto fix itemptr_to_uint64().\n\nThoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 00:46:21 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 30 Apr 2021, 09:46 Jeff Davis, <pgsql@j-davis.com> wrote:\n\n>\n> The notion of TID is based on pages and line pointers, which makes\n> sense for heapam, but that's not likely to make sense for a custom\n> table AM.\n>\n> The obvious answer is to make a simple mapping between a TID and\n> whatever makes sense to the AM (for the sake of discussion, let's say a\n> plain row number).\n>\n> The most natural thing would be to say that we have 48 bits, so it can\n> just be a 48-bit number. Of course, there are some restrictions on\n> valid values that complicate this:\n>\n> * InvalidBlockNumber of 0xFFFFFFFF. Not a problem.\n> * InvalidOffsetNumber of 0. Not a problem.\n> * MaxOffsetNumber of 2048. Does this limit really apply to table AMs?\n>\n\nMaxOffsetNumber is not per se 2048. It is BLCKSZ / sizeof(ItemIdData),\nwhich is only 2048 for a 8kiB BLCKSZ. As we support BLCKSZ up to 32kiB,\nMaxOffsetNumber can be as large as 8196.\n\nOther than that, I believe you've also missed the special offset numbers\nspecified in itemptr.h (SpecTokenOffsetNumber and MovedPartitionsOffsetNumber).\nI am not well enough aware of the usage of these OffsetNumber values, but\nthose might also be limiting the values any tableAM can use for their TIDs.\n\nIt just seems like it's used when scanning heap or index pages for\n> stack-allocated arrays. For a table AM it appears to waste 5 bits.\n>\n\nMaxOffsetNumber is used for postgres' Page layout, of which the\nMaxOffsetNumber is defined as how many item pointers could exist on a page,\nand AFAIK should be used for postgres' Page layout only. No thing can or\nshould change that. 
If any code asserts limitations to the ip_posid of\ntable tuples that could also not be tableam tuples, then I believe that is\nprobably a mistake in postgres, and that should be fixed.\n\n * ginpostinglist.c:itemptr_to_uint64() seems to think 2047 is the max\n> offset number. Is this a bug?\n\n\nNo. The documentation for that function explicitly mentions that these item\npointers are optimized for storage when using the heap tableam, and that\nthat code will be changed once there exist tableAMs with different TID /\nip_posid constraints (see the comment on lines 32 and 33 of that file).\n\nNote that the limiting number that itemptr_to_uint64 should mention for bit\ncalculations is actually MaxHeaptuplesPerPage, which is about one seventh\nof MaxOffsetNumber. The resulting number of bits reserved is not a\nmiscalculation though, because MaxHeaptuplesPerPage (for 32kiB BLCKSZ)\nrequires the mentioned 11 bits, and adapting bit swizzling for multiple\npage sizes was apparently not considered worth the effort.\n\nAs a table AM author, I'd like to know what the real limits are so that\n> I can use whatever bits are available to map from TID to row number and\n> back, without worrying that something will break in the future. A few\n> possibilities:\n>\n> 1. Keep MaxOffsetNumber as 2048 and fix itemptr_to_uint64().\n>\n\nI believe that this is the right way when there exist tableAMs that use\nthose upper 5 bits.\n\n\n> 2. Change MaxOffsetNumber to 2047. This seems likely to break\n> extensions that rely on it.\n>\n\nIf you're going to change MaxOffsetNumber, I believe that it's better to\nchange it to ((BLCKSZ - sizeof(PageHeaderData)) / sizeof(ItemIdData)), as\nthat is the maximum amount of ItemIds you could put on a Page that has no\npage opaque.\n\n 3. Define MaxOffsetNumber as 65536 and introduce a new\n> MaxItemsPerPage as 2048 for the stack-allocated arrays. 
We'd still need\n> to fix itemptr_to_uint64().\n\n\nI believe that that breaks more things than otherwise required. ip_posid is\nalready limited to uint16, so I see no reason to add a constant that would\nassert that the value of any uint16 is less than its max value plus one.\n\n\nWith regards,\n\nMatthias van de Meent\n\n", "msg_date": "Fri, 30 Apr 2021 12:04:44 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> The notion of TID is based on pages and line pointers, which makes\n> sense for heapam, but that's not likely to make sense for a custom\n> table AM.\n> The obvious answer is to make a simple mapping between a TID and\n> whatever makes sense to the AM (for the sake of discussion, let's say a\n> plain row number).\n\nI'm inclined to think that when we get around to doing something\nabout this, we need to make a much bigger change than just poking\nat the margins of type tid.\n\nMy thought at the moment is that all APIs above the AM level ought\nto be redefined to use uint64 for tuple identifiers. heapam and\nrelated index AMs could map block + offset into that in some\nconvenient way, and other AMs could do what they like.\n\nAndres seems to feel that we should try to allow variable-width\ntupleids, but I'm afraid that the cost/benefit ratio for that\nwould be pretty poor.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:06:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 8:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My thought at the moment is that all APIs above the AM level ought\n> to be redefined to use uint64 for tuple identifiers. 
heapam and\n> related index AMs could map block + offset into that in some\n> convenient way, and other AMs could do what they like.\n>\n> Andres seems to feel that we should try to allow variable-width\n> tupleids, but I'm afraid that the cost/benefit ratio for that\n> would be pretty poor.\n\nI agree. It'll be easier for a new table AM to be developed with that\nconstraint than it will be to retrofit it to every index AM. It\nprobably wouldn't be that hard to make nbtree deduplication and GIN\nposting list compression work with uint64 TIDs. But variable-width\nTIDs are a very different matter.\n\nCompatibility with index AMs is more than a matter of switching out\nthe tuple identifier -- if you invent something that has totally\ndifferent performance characteristics for index AMs, then it's likely\nto break tacit assumptions about the cost of certain things. For\nexample, index tuple deletion probably relies on the fact that there\njust isn't that many table blocks to visit (to get an XID for recovery\nconflict purposes) in practice due to various locality-related\neffects. Plus deduplication ameliorates problems with version churn in\nindexes -- presumably the same problems will exist when any new table\nAM is used, and so it'll be useful to address the same problems in the\nsame way.\n\nI agree that it might well be useful to make TIDs fully logical (not\n\"physiological\" -- physical across blocks, logical within blocks) for\nsome new table AM. Even then, it would still definitely be a good idea\nto make these logical TID values correlate with the physical table\nstructure in some way. Plus TIDs should still be fixed size. 
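The quoted suggestion of mapping block + offset into a uint64 "in some convenient way" could be as simple as the following sketch; the function names are hypothetical, and nothing here is an existing PostgreSQL API:

```c
#include <assert.h>
#include <stdint.h>

/* One reversible 48-bit scheme: the 32-bit block number occupies the
 * high bits, the 16-bit offset number the low 16 bits. */
uint64_t
tid_to_uint64(uint32_t block, uint16_t offset)
{
    return ((uint64_t) block << 16) | (uint64_t) offset;
}

uint32_t
uint64_to_block(uint64_t tid)
{
    return (uint32_t) (tid >> 16);
}

uint16_t
uint64_to_offset(uint64_t tid)
{
    return (uint16_t) (tid & 0xFFFF);
}
```

Any other reversible scheme would do equally well; the point is only that the mapping is cheap in both directions.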
If a new\ntable AM can't do it that way then that certainly needs to be\njustified -- it's unreasonable to imagine that it simply isn't the\ntable AM's problem to solve.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 08:36:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 11:06 -0400, Tom Lane wrote:\n> My thought at the moment is that all APIs above the AM level ought\n> to be redefined to use uint64 for tuple identifiers.\n\nOne challenge might be reliance on InvalidOffsetNumber as a special\nvalue in a number of places (e.g. bitmap index scans). That doesn't\nseem like a major problem though.\n\n> heapam and\n> related index AMs could map block + offset into that in some\n> convenient way, and other AMs could do what they like.\n\nDo you mean that indexes would be expected to hold a uint64, a 48-bit\nint (that directly maps to a uint64), or still hold an ItemPointerData?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 09:28:08 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Fri, 2021-04-30 at 11:06 -0400, Tom Lane wrote:\n>> My thought at the moment is that all APIs above the AM level ought\n>> to be redefined to use uint64 for tuple identifiers.\n\n> Do you mean that indexes would be expected to hold a uint64, a 48-bit\n> int (that directly maps to a uint64), or still hold an ItemPointerData?\n\nISTM that would be up to the index AM. 
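In the meantime, a table AM that wants a plain row number can shoehorn one into the existing 48-bit block/offset pair. A hedged sketch (hypothetical names), which keeps offsets in 1..0xFFFC so as to steer clear of InvalidOffsetNumber (0) and the special values near the top of the range in itemptr.h:

```c
#include <assert.h>
#include <stdint.h>

#define ROWS_PER_BLOCK 65532ULL   /* offsets 1..0xFFFC remain usable */

/* Split a row number into a (block, offset) pair; offset is never 0. */
void
rownum_to_tid(uint64_t rownum, uint32_t *block, uint16_t *offset)
{
    *block = (uint32_t) (rownum / ROWS_PER_BLOCK);
    *offset = (uint16_t) ((rownum % ROWS_PER_BLOCK) + 1);
}

/* Inverse mapping: recover the row number from the pair. */
uint64_t
tid_to_rownum(uint32_t block, uint16_t offset)
{
    return (uint64_t) block * ROWS_PER_BLOCK + (offset - 1);
}
```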
We'd need some interlocks on\nwhich index AMs could be used with which table AMs in any case, I think.\n\nIt'd likely not be hard for existing index AMs to be repurposed to store\n\"any 48-bit TID\", but extending them to full 64-bit TIDs may be\nimpractical.\n\nI think the hard part may really be in places like tidbitmap.c, which\none would wish to be AM-independent, but right now it's quite specific\nto heap-style TIDs. Maybe we can think of a way to parameterize it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Apr 2021 12:35:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 12:04 +0200, Matthias van de Meent wrote:\n> Other than that, I believe you've also missed the special offset\n> numbers specified in itemptr.h (SpecTokenOffsetNumber and\n> MovedPartitionsOffsetNumber). I am not well enough aware of the usage\n> of these OffsetNumber values, but those might also be limiting the\n> values any tableAM can use for their TIDs.\n\nYes, thank you. If it is treated specially in a heap tuple, it can't be\na regular TID.\n\n> > It just seems like it's used when scanning heap or index pages for\n> > stack-allocated arrays. For a table AM it appears to waste 5 bits.\n> \n> MaxOffsetNumber is used for postgres' Page layout, of which the\n> MaxOffsetNumber is defined as how many item pointers could exist on a\n> page, and AFAIK should be used for postgres' Page layout only. No\n> thing can or should change that. If any code asserts limitations to\n> the ip_posid of table tuples that could also not be tableam tuples,\n> then I believe that is probably a mistake in postgres, and that\n> should be fixed.\n\nA name that would better fit your definition would be something like\n\"MaxItemsPerPage\".\n\nThe name \"MaxOffsetNumber\" implies that any number past that must be\neither invalid or special. 
But it seems like you are saying that if I\nuse an offset number of 5000 in my table AM, then that's fine and\nshould be treated like a normal TID.\n\n> No. The documentation for that function explicitly mentions that\n> these item pointers are optimized for storage when using the heap\n> tableam, and that that code will be changed once there exist tableAMs\n> with different TID / ip_posid constraints (see the comment on lines\n> 32 and 33 of that file). \n\nThank you.\n\nI'm a table AM author, and I'd like to use whatever the real range of\nTIDs is. Does that mean it's time to change that code in\nginpostinglist.c now?\n\n> > 1. Keep MaxOffsetNumber as 2048 and fix itemptr_to_uint64().\n> \n> I believe that this is the right way when there exist tableAMs that\n> use those upper 5 bits.\n\nDoes that mean we should declare the valid range of offsets to be\nbetween 1 and 0xfffc (inclusive)?\n\nI'm trying to use some mapping now that's somewhat stable so that I\ndon't have to worry that something will break later, and then require\nreindexing all tables with my table AM.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 09:50:25 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 11:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My thought at the moment is that all APIs above the AM level ought\n> to be redefined to use uint64 for tuple identifiers. heapam and\n> related index AMs could map block + offset into that in some\n> convenient way, and other AMs could do what they like.\n>\n> Andres seems to feel that we should try to allow variable-width\n> tupleids, but I'm afraid that the cost/benefit ratio for that\n> would be pretty poor.\n\nThere are two major reasons why I want variable-width tuple IDs. 
One\nis global indexes, where you need as many bits as the AMs implementing\nthe partitions need, plus some extra bits to identify which partition\nis relevant for a particular tuple. No fixed number of bits that you\nmake available can ever be sufficient here, because a global index\nalways needs to have extra bits compared to a partition-local index;\nif you let the partition-local index use more bits, the global index\nnow needs even more space. The other is indirect indexes, work Álvaro\nproposed a few years ago, where the index entry points to the primary\nkey value rather than a TID. The space needs for that are based on the\ntype of the primary key column. This proposal solves neither of those\nproblems.\n\nAnother problem in this general area is that there is quite a bit of\ncode that thinks a TID is specifically a block number and an offset,\nlike the Bitmap Index/Heap Scan code, for example. But making tuple\nIDs wider doesn't help with that problem either.\n\nWhat problem do you think this proposal does solve?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 12:51:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 08:36 -0700, Peter Geoghegan wrote:\n> Compatibility with index AMs is more than a matter of switching out\n> the tuple identifier -- if you invent something that has totally\n> different performance characteristics for index AMs, then it's likely\n> to break tacit assumptions about the cost of certain things.\n\nI think it would be reasonable to document and expect that table AMs\noffer some locality of access for tuples with similar IDs. 
Do you think\nwe need something stronger than that?\n\n> Plus deduplication ameliorates problems with version churn\n> in\n> indexes -- presumably the same problems will exist when any new table\n> AM is used, and so it'll be useful to address the same problems in\n> the\n> same way.\n\nI got lost after \"presumably the same problems\", can you explain?\n\n> I agree that it might well be useful to make TIDs fully logical (not\n> \"physiological\" -- physical across blocks, logical within blocks) for\n> some new table AM. Even then, it would still definitely be a good\n> idea\n> to make these logical TID values correlate with the physical table\n> structure in some way.\n\nAgreed.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:03:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 30, 2021 at 11:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Andres seems to feel that we should try to allow variable-width\n>> tupleids, but I'm afraid that the cost/benefit ratio for that\n>> would be pretty poor.\n\n> There are two major reasons why I want variable-width tuple IDs. One\n> is global indexes, where you need as many bits as the AMs implementing\n> the partitions need, plus some extra bits to identify which partition\n> is relevant for a particular tuple. No fixed number of bits that you\n> make available can ever be sufficient here,\n\nI agree that global indexes need more bits, but it doesn't necessarily\nfollow that we must have variable-width TIDs. We could for example\nsay that \"real\" TIDs are only 48 bits and index AMs that want to be\nusable as global indexes must be capable of handling 64-bit TIDs,\nleaving 16 bits for partition ID. A more forward-looking definition\nwould require global index AMs to store 96 bits (partition OID plus\n64-bit TID). 
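Both fixed-width encodings described here are easy to write down as a sketch (hypothetical names; the 16+48 variant fits in a single uint64, the forward-looking variant carries the full 96 bits):

```c
#include <assert.h>
#include <stdint.h>

/* (a) 64 bits total: a 16-bit partition number over a 48-bit "real" TID. */
uint64_t
global_tid_pack(uint16_t partition, uint64_t tid48)
{
    return ((uint64_t) partition << 48) | (tid48 & 0xFFFFFFFFFFFFULL);
}

uint16_t
global_tid_partition(uint64_t gtid)
{
    return (uint16_t) (gtid >> 48);
}

/* (b) 96 bits total: a partition OID alongside a full 64-bit TID. */
typedef struct
{
    uint32_t partition_oid;
    uint64_t tid;
} GlobalTid96;
```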
Either way would be far simpler for every moving part\ninvolved than going over to full varlena TIDs.\n\n> Another problem in this general area is that there is quite a bit of\n> code that thinks a TID is specifically a block number and an offset,\n> like the Bitmap Index/Heap Scan code, for example. But making tuple\n> IDs wider doesn't help with that problem either.\n\nAgreed, that's an area that will need a lot of thought for anything that\nwe do here. But varlena TIDs surely do not make that easier to fix.\n\n> What problem do you think this proposal does solve?\n\nAccommodating table AMs that want more than 48 bits for a TID.\nWe're already starting to run up against the fact that that's not\nenough bits for plausible use-cases. 64 bits may someday in the far\nfuture not be enough either, but I think that's a very long way off.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Apr 2021 13:10:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > There are two major reasons why I want variable-width tuple IDs. One\n> > is global indexes, where you need as many bits as the AMs implementing\n> > the partitions need, plus some extra bits to identify which partition\n> > is relevant for a particular tuple. No fixed number of bits that you\n> > make available can ever be sufficient here,\n>\n> I agree that global indexes need more bits, but it doesn't necessarily\n> follow that we must have variable-width TIDs. We could for example\n> say that \"real\" TIDs are only 48 bits and index AMs that want to be\n> usable as global indexes must be capable of handling 64-bit TIDs,\n> leaving 16 bits for partition ID. A more forward-looking definition\n> would require global index AMs to store 96 bits (partition OID plus\n> 64-bit TID). 
Either way would be far simpler for every moving part\n> involved than going over to full varlena TIDs.\n\nThe question of how the on-disk format on indexes needs to be changed\nto accomodate global indexes seems like an entirely separate question\nto how we go about expanding or redefining TIDs.\n\nGlobal indexes should work by adding an extra column that is somewhat\nlike a TID, that may even have its own pg_attribute entry. It's much\nmore natural to make the partition number a separate column IMV --\nnbtree suffix truncation and deduplication can work in about the same\nway as before. Plus you'll need to do predicate pushdown using the\npartition identifier in some scenarios anyway. You can make the\npartition identifier variable-width without imposing the cost and\ncomplexity of variable-width TIDs on index AMs.\n\nI believe that the main reason why there have been so few problems\nwith any of the nbtree work in the past few releases is that it\navoided certain kinds of special cases. Any special cases in the\non-disk format and in the space accounting used when choosing a split\npoint ought to be avoided at all costs. We can probably afford to add\na lot of complexity to make global indexes work, but it ought to be\ncontained to cases that actually use global indexes in an obvious way.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:28:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree that global indexes need more bits, but it doesn't necessarily\n> follow that we must have variable-width TIDs. We could for example\n> say that \"real\" TIDs are only 48 bits and index AMs that want to be\n> usable as global indexes must be capable of handling 64-bit TIDs,\n> leaving 16 bits for partition ID. 
A more forward-looking definition\n> would require global index AMs to store 96 bits (partition OID plus\n> 64-bit TID). Either way would be far simpler for every moving part\n> involved than going over to full varlena TIDs.\n\n16 bits is not much for a partition identifier. We've already had\ncomplaints about INNER_VAR being too small, so apparently there are\npeople who want to use really large numbers of partitions. But even if\nwe imagine a hypothetical world where nobody uses more than a couple\nthousand partitions at once, it's very reasonable to want to avoid\nrecycling partition identifiers so that detaching a partition can be\nO(1), and there's no way that's going to be viable if the whole\naddress space is only 16 bits, because with time series data people\nare going to be continually creating new partitions and dropping old\nones. I would guess that it probably is viable with 32 bits, but we'd\nhave to have a mapping layer rather than using the OID directly to\navoid wraparound collisions.\n\nNow this problem can be avoided by just requiring the AM to store more\nbits, exactly as you say. I suspect 96 bits is large enough for all of\nthe practical use cases people have, or at least within spitting\ndistance. But it strikes me as relatively inefficient to say that\nwe're always going to store 96 bits for every TID. I certainly don't\nthink we want to break on-disk compatibility and widen every existing\nbtree index by changing all the 6-byte TIDs they're storing now to\nstore 12 bytes TIDs that are at least half zero bytes, so I think\nwe're bound to end up with at least two options: 6 and 12. But\nvariable-width would be a lot nicer. 
You could store small TIDs and\nsmall partition identifiers very compactly, and only use the full\nnumber of bytes when the situation demands it.\n\n> > What problem do you think this proposal does solve?\n>\n> Accommodating table AMs that want more than 48 bits for a TID.\n> We're already starting to run up against the fact that that's not\n> enough bits for plausible use-cases. 64 bits may someday in the far\n> future not be enough either, but I think that's a very long way off.\n\nDo people actually want to store more than 2^48 rows in a table, or is\nthis more about the division of a TID into a block number and an item\nnumber?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 13:35:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 12:51 -0400, Robert Haas wrote:\n> There are two major reasons why I want variable-width tuple IDs. One\n> is global indexes, where you need as many bits as the AMs\n> implementing\n> the partitions need, plus some extra bits to identify which partition\n> is relevant for a particular tuple. No fixed number of bits that you\n> make available can ever be sufficient here, because a global index\n> always needs to have extra bits compared to a partition-local index;\n> if you let the partition-local index use more bits, the global index\n> now needs even more space. The other is indirect indexes, work Álvaro\n> proposed a few years ago, where the index entry points to the primary\n> key value rather than a TID. The space needs for that are based on\n> the\n> type of the primary key column. This proposal solves neither of those\n> problems.\n\nThe particular problem I have now is that Table AMs seem to support\nindexes just fine, but TIDs are under-specified so I don't know what I\nreally have to work with. 
BlockNumber seems well-specified as\n0..0XFFFFFFFE (inclusive), but I don't know what the valid range of\nOffsetNumber is for the purposes of a TableAM.\n\nPart of changing to uint64 would be specifying the TIDs in a way that I\ncould rely on in the future.\n\nThe problems you mention above are above the table AM layer, so they\nseem orthogonal. There would still need to be an ID that table AMs can\nuse to fetch a tuple from a particular physical table.\n\nIn the future we may support primary unique indexes at the table AM\nlayer, which would get more interesting. I can see an argument for a\nTID being an arbitrary datum in that case, but I haven't really\nconsidered the design implications. Is this what you are suggesting?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:37:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 1:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Global indexes should work by adding an extra column that is somewhat\n> like a TID, that may even have its own pg_attribute entry. It's much\n> more natural to make the partition number a separate column IMV --\n> nbtree suffix truncation and deduplication can work in about the same\n> way as before. Plus you'll need to do predicate pushdown using the\n> partition identifier in some scenarios anyway. You can make the\n> partition identifier variable-width without imposing the cost and\n> complexity of variable-width TIDs on index AMs.\n\nI agree up to a point but ... are you imagining that the TID continues\nto have its own special place in the page, while the partition\nidentifier is stored more like a regular tuple column? Because it\nseems to me that it would be better to try to eliminate the\nspecial-storage case, just like we did for OIDs. If you want a 6-byte\nTID, put a 6-byte column in the tuple for it. 
If you also want a\npartition identifier, put an extra column in the tuple for that. If\nyou want a wider TID or a varlena TID, well, put columns for that into\nthe tuple instead of the 6-byte column you'd normally put. This seems\nextremely flexible and a lot more aesthetically appealing than what we\nhave today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 13:39:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:04 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Fri, 2021-04-30 at 08:36 -0700, Peter Geoghegan wrote:\n> > Compatibility with index AMs is more than a matter of switching out\n> > the tuple identifier -- if you invent something that has totally\n> > different performance characteristics for index AMs, then it's likely\n> > to break tacit assumptions about the cost of certain things.\n>\n> I think it would be reasonable to document and expect that table AMs\n> offer some locality of access for tuples with similar IDs. Do you think\n> we need something stronger than that?\n\nI don't know. This conversation is still too abstract for me to be\nable to take a firm position. ISTM that we tend to talk about the\ntable AM in excessively abstract terms. It would be a lot easier if we\nhad clear fixed goals for a small number of additional table AMs.\n\n> > Plus deduplication ameliorates problems with version churn\n> > in\n> > indexes -- presumably the same problems will exist when any new table\n> > AM is used, and so it'll be useful to address the same problems in\n> > the\n> > same way.\n>\n> I got lost after \"presumably the same problems\", can you explain?\n\nWell, there are now two things in nbtree that specifically help with\nversion churn caused by non-HOT updates: deduplication, and bottom-up\nindex deletion (especially the latter). 
Presumably any new table AM\nwill have something like non-HOT updates. Though they may rarely be a\nproblem (say because the new table AM isn't really for OLTP), whenever\nthey are a problem they'll be a very big problem. It seems like a good\nidea to have the same index AM level protections against accumulating\nversion-churn index tuples in an unbounded way.\n\nMore generally, it seems like a good idea to try to make new table AMs\nreasonably close to heapam insofar as possible. The reality is that\neverything evolved around heapam, and that that's likely to matter in\nall kinds of ways that nobody fully understands just yet. We have a\nsomewhat abstract table AM interface, which is good, but that doesn't\nmean that table AMs can be designed as if it was a green field\nsituation.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:50:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 12:35 -0400, Tom Lane wrote:\n> ISTM that would be up to the index AM. We'd need some interlocks on\n> which index AMs could be used with which table AMs in any case, I\n> think.\n\nI'm not sure why? It seems like we should be able to come up with\nsomething that's generic enough.\n\n> I think the hard part may really be in places like tidbitmap.c, which\n> one would wish to be AM-independent, but right now it's quite\n> specific\n> to heap-style TIDs. 
Maybe we can think of a way to parameterize it.\n\nFor my particular AM, being able to have a parameterized granularity\nmight be nice, but not required.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:55:34 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 1:37 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The particular problem I have now is that Table AMs seem to support\n> indexes just fine, but TIDs are under-specified so I don't know what I\n> really have to work with. BlockNumber seems well-specified as\n> 0..0XFFFFFFFE (inclusive), but I don't know what the valid range of\n> OffsetNumber is for the purposes of a TableAM.\n\nI agree that this is a problem.\n\n> Part of changing to uint64 would be specifying the TIDs in a way that I\n> could rely on in the future.\n\nI mean, from my perspective, the problem here is that the abstraction\nlayer is leaky and things outside of the table AM layer know what heap\nis doing under the hood, and rely on it. If we could refactor the\nabstraction to be less leaky, it would be clearer what assumptions\ntable AM authors can make. If we can't, any specification doesn't seem\nworth much.\n\n> In the future we may support primary unique indexes at the table AM\n> layer, which would get more interesting. I can see an argument for a\n> TID being an arbitrary datum in that case, but I haven't really\n> considered the design implications. Is this what you are suggesting?\n\nI think that would be the best long-term plan. I guess I have two\ndistinguishable concerns. One is that I want to be able to have\nindexes with a payload that's not just a 6-byte TID; e.g. 
adding a\npartition identifier to support global indexes, or replacing the\n6-byte TID with a primary key reference to support indirect indexes.\nThe other concern is to be able to have table AMs that use arbitrary\nmethods to identify a tuple. For example, if somebody implemented an\nindex-organized table, the \"TID\" would really be the primary key.\n\nEven though these are distinguishable concerns, they basically point\nin the same direction as far as index layout is concerned. The\nimplications for the table AM layer are somewhat different in the two\ncases, but both argue that some places that are now talking about TIDs\nshould be changed to talk about Datums or something of that sort.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 13:56:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I agree up to a point but ... are you imagining that the TID continues\n> to have its own special place in the page, while the partition\n> identifier is stored more like a regular tuple column? Because it\n> seems to me that it would be better to try to eliminate the\n> special-storage case, just like we did for OIDs.\n\nI agree in principle, but making that work well is very hard in\npractice because of the format of IndexTuple -- which bleeds into\neverything. That TID is special is probably a natural consequence of\nthe fact that we don't have an offset-based format of the kind you see\nin other DB systems -- systems that don't emphasize extensibility. We\ncannot jump to a hypothetical TID attribute inexpensively inside code\nlike _bt_compare() because we don't have a cheap way to jump straight\nto the datum for any attribute. So we just store TID in IndexTuple\ndirectly instead. 
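The fixed, front-of-tuple position of the TID that makes this cheap is visible in the index tuple header; the sketch below abbreviates PostgreSQL's access/itup.h, flattening BlockIdData and omitting the null/varwidth bitmap details:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The TID sits at offset 0 of every index tuple, so code like
 * _bt_compare() and VACUUM can read it without deforming attributes. */
typedef struct
{
    uint16_t bi_hi;     /* high half of the block number */
    uint16_t bi_lo;     /* low half of the block number */
    uint16_t ip_posid;  /* offset number */
} ItemPointerData;

typedef struct
{
    ItemPointerData t_tid;  /* fixed-width tuple identifier */
    uint16_t t_info;        /* size and flag bits; key attributes follow */
} IndexTupleData;
```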
Imagine how much more expensive VACUUM would be if\nit had to grovel through the IndexTuple format.\n\nI wonder how the same useful performance characteristics can be\nmaintained with a variable-width TID design. If you solve the problem\nby changing IndexTuple, then you are kind of obligated to not use\nvarlena headers to keep the on-disk size manageable. Who knows where\nit all ends?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:04:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that would be the best long-term plan. I guess I have two\n> distinguishable concerns. One is that I want to be able to have\n> indexes with a payload that's not just a 6-byte TID; e.g. adding a\n> partition identifier to support global indexes, or replacing the\n> 6-byte TID with a primary key reference to support indirect indexes.\n> The other concern is to be able to have table AMs that use arbitrary\n> methods to identify a tuple. For example, if somebody implemented an\n> index-organized table, the \"TID\" would really be the primary key.\n>\n> Even though these are distinguishable concerns, they basically point\n> in the same direction as far as index layout is concerned. The\n> implications for the table AM layer are somewhat different in the two\n> cases, but both argue that some places that are now talking about TIDs\n> should be changed to talk about Datums or something of that sort.\n\nI don't know how it's possible to do any of this without first\naddressing what the table AM does in cases where heapam currently does\na non-HOT update. You obviously cannot have the equivalent of\nduplicate TIDs when your new table AM runs into these scenarios. So\nwhat do you do instead? How do you make your clustered index/IoT style\nidentifiers (i.e. 
your strictly logical TID-like identifiers) deal\nwith that case?\n\nISTM that this is by far the biggest issue with generalizing the table\nAM for use by a tableam (that isn't already very much like heapam). I\nam always surprised to be the one that has to point it out during\nthese discussions. It's a huge issue.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:22:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 2:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I agree in principle, but making that work well is very hard in\n> practice because of the format of IndexTuple -- which bleeds into\n> everything. That TID is special is probably a natural consequence of\n> the fact that we don't have an offset-based format of the kind you see\n> in other DB systems -- systems that don't emphasize extensibility. We\n> cannot jump to a hypothetical TID attribute inexpensively inside code\n> like _bt_compare() because we don't have a cheap way to jump straight\n> to the datum for any attribute. So we just store TID in IndexTuple\n> directly instead. Imagine how much more expensive VACUUM would be if\n> it had to grovel through the IndexTuple format.\n\nI can't imagine that, so maybe you want to enlighten me? I see that\nthere's a potential problem there, and I'm glad you pointed it out\nbecause I hadn't thought about it previously ... but if you always put\nthe column or columns that VACUUM would need first, it's not obvious\nto me that it would be all that expensive. Deforming the tuple to a\nsufficient degree to extract the first column, which would even be\nfixed-width, shouldn't take much work.\n\n> I wonder how the same useful performance characteristics can be\n> maintained with a variable-width TID design. 
If you solve the problem\n> by changing IndexTuple, then you are kind of obligated to not use\n> varlena headers to keep the on-disk size manageable. Who knows where\n> it all ends?\n\nWhat's wrong with varlena headers? It would end up being a 1-byte\nheader in practically every case, and no variable-width representation\ncan do without a length word of some sort. I'm not saying varlena is\nas efficient as some new design could hypothetically be, but it\ndoesn't seem like it'd be a big enough problem to stress about. If you\nused a variable-width representation for integers, you might actually\nsave bytes in a lot of cases. An awful lot of the TIDs people store in\npractice probably contain several zero bytes, and if we make them\nwider, that's going to be even more true.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 14:23:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 10:50 -0700, Peter Geoghegan wrote:\n> I don't know. This conversation is still too abstract for me to be\n> able to take a firm position. ISTM that we tend to talk about the\n> table AM in excessively abstract terms. It would be a lot easier if\n> we\n> had clear fixed goals for a small number of additional table AMs.\n\nhttps://github.com/citusdata/citus/tree/master/src/backend/columnar\n\nMy colleagues and I have been working on a \"columnar\" table AM. 
It\ndoesn't currently support indexes, but it would be useful to support\nthem.\n\nThe basic idea is we have \"stripes\" of ~150000 tuples that are\nrearranged and compressed, and stored in an smgr-controlled file that\ngoes through the buffer cache and uses generic WAL.\n\nTo support indexes, we could do our own lookups from a \"row number\" to\na particular offset where we can find and decompress the stripe that\nholds that row number, and then scan forward in the stripe to find the\nparticular row. This will be terrible for random access, but [*waves\nhands*] we will keep state and use a few optimizations so that this is\nnot terribly slow for clustered access.\n\nGranted, TID lookup on columnar will be much slower than for a heap\n(and we can use a CustomScan so that the costs reflect that). But it\nwill satisfy important use cases:\n\n 1. Indexes defined on partition parent tables. Even if the index is\nnever used for queries, we don't want to throw an error when defining\nthe partitioned parent index.\n 2. Unique indexes and exclusion constraints.\n 3. Clustered index scans can still be reasonably fast.\n 4. Could be used for UPDATE/DELETE as well.\n\n> More generally, it seems like a good idea to try to make new table\n> AMs\n> reasonably close to heapam insofar as possible. The reality is that\n> everything evolved around heapam, and that that's likely to matter in\n> all kinds of ways that nobody fully understands just yet.\n\nAgreed. 
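(The row-number-to-stripe lookup described above might look roughly like the following. This is only a sketch with made-up names, not the actual citus columnar code.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One compressed stripe: it covers a contiguous range of row numbers. */
typedef struct DemoStripe
{
    uint64_t first_row;  /* first row number stored in this stripe */
    uint64_t row_count;  /* number of rows in the stripe */
} DemoStripe;

/* A toy stripe directory: three stripes of ~150000 rows each. */
static const DemoStripe demo_stripes[] = {
    {0, 150000}, {150000, 150000}, {300000, 150000}
};

/*
 * Binary-search the stripe whose row-number range contains 'row'.
 * Returns the stripe index, or -1 if the row number is out of range.
 */
static int
demo_find_stripe(const DemoStripe *stripes, size_t nstripes, uint64_t row)
{
    size_t lo = 0;
    size_t hi = nstripes;

    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;

        if (row < stripes[mid].first_row)
            hi = mid;
        else if (row >= stripes[mid].first_row + stripes[mid].row_count)
            lo = mid + 1;
        else
            return (int) mid;
    }
    return -1;
}
```

Once the stripe is found, the reader decompresses it and scans forward (row - first_row) tuples, which is why clustered access stays reasonably cheap while random access does not.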
I think of this as an evolving situation where we take steps\ntoward a better abstraction.\n\nOne (hopefully reasonable) step I'd like to take is a well-specified\nTID.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:28:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 13:56 -0400, Robert Haas wrote:\n> I think that would be the best long-term plan.\n\nWe should still have *some* answer in the short term for table AM\nauthors like me. If I use offset numbers as high as MaxOffsetNumber,\nthen itemptr_to_uint64 will fail. If I base my calculations for the TID\nto row number mapping on MaxOffsetNumber at all, then it will break if\nwe change MaxOffsetNumber (as was suggested[1]).\n\nMy takeaway so far is that the only safe thing to do is hard code it to\n2000. I suppose I can do that until we settle on something better (at\nwhich point I can force a reindex, I suppose).\n\n[1] \nhttps://postgr.es/m/CAEze2Wit1EtHHBHJ+CYvBPthrWUzu2Vqc-BmzU3ApK3iotHriw@mail.gmail.com\n\n> Even though these are distinguishable concerns, they basically point\n> in the same direction as far as index layout is concerned. The\n> implications for the table AM layer are somewhat different in the two\n> cases, but both argue that some places that are now talking about\n> TIDs\n> should be changed to talk about Datums or something of that sort.\n\nLogically, that makes a lot of sense to me. 
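(The hard-coded mapping being described could be as simple as the following sketch; the names and the helper are invented for illustration, not taken from any real AM. The cap of 2000 is the conservative bound discussed above -- heapam's actual tuples-per-page limit is a few hundred at the default 8kB block size.)

```c
#include <assert.h>
#include <stdint.h>

/* Hard-coded offset cap, per the discussion above. */
#define DEMO_MAX_OFFSET 2000

/* Map (block, 1-based offset) to a dense 64-bit row number. */
static uint64_t
demo_rownum_from_tid(uint32_t block, uint16_t offset)
{
    assert(offset >= 1 && offset <= DEMO_MAX_OFFSET);
    return (uint64_t) block * DEMO_MAX_OFFSET + (offset - 1);
}

/* Inverse mapping: recover (block, offset) from a row number. */
static void
demo_tid_from_rownum(uint64_t rownum, uint32_t *block, uint16_t *offset)
{
    *block = (uint32_t) (rownum / DEMO_MAX_OFFSET);
    *offset = (uint16_t) (rownum % DEMO_MAX_OFFSET) + 1;
}

/* Round-trip check helper. */
static int
demo_roundtrip_ok(uint32_t block, uint16_t offset)
{
    uint32_t b;
    uint16_t o;

    demo_tid_from_rownum(demo_rownum_from_tid(block, offset), &b, &o);
    return b == block && o == offset;
}
```

If the system ever hands out an offset above the cap, the encode step fails outright -- which is exactly the breakage being worried about if MaxOffsetNumber-based assumptions change underneath a table AM.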
Peter seems to have quite a\nfew practical implementation concerns though, so it could be a long\nroad.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:50:36 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 11:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 30, 2021 at 2:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I agree in principle, but making that work well is very hard in\n> > practice because of the format of IndexTuple -- which bleeds into\n> > everything. That TID is special is probably a natural consequence of\n> > the fact that we don't have an offset-based format of the kind you see\n> > in other DB systems -- systems that don't emphasize extensibility. We\n> > cannot jump to a hypothetical TID attribute inexpensively inside code\n> > like _bt_compare() because we don't have a cheap way to jump straight\n> > to the datum for any attribute. So we just store TID in IndexTuple\n> > directly instead. Imagine how much more expensive VACUUM would be if\n> > it had to grovel through the IndexTuple format.\n>\n> I can't imagine that, so maybe you want to enlighten me? I see that\n> there's a potential problem there, and I'm glad you pointed it out\n> because I hadn't thought about it previously ... but if you always put\n> the column or columns that VACUUM would need first, it's not obvious\n> to me that it would be all that expensive.\n\nMaybe. The point is that it is a problem that needs to be solved.\n\n> Deforming the tuple to a\n> sufficient degree to extract the first column, which would even be\n> fixed-width, shouldn't take much work.\n\nI think that it's reasonable to impose some cost on index AMs here,\nbut that needs to be bounded sensibly and unambiguously. For example,\nit would probably be okay if you had either 6 byte or 8 byte TIDs, but\nno other variations. 
You could require index AMs (the subset of index\nAMs that are ever able to store 8 byte TIDs) to directly encode which\nwidth they're dealing with at the level of each IndexTuple. That would\ncreate some problems for nbtree deduplication, especially in boundary\ncases, but ISTM that you can manage the complexity by sensibly\nrestricting how the TIDs work across the board. For example, the TIDs\nshould always work like unsigned integers -- the table AM must be\nwilling to work with that restriction.\n\nYou'd then have posting list tuples in nbtree whose TIDs were all\neither 6 bytes or 8 bytes wide, with a mix of each possible (though\nnot particularly likely) on the same leaf page. Say when you have a\ntable that exceeds the current MaxBlockNumber restrictions. It would\nbe relatively straightforward for nbtree deduplication to simply\nrefuse to mix 6 byte and 8 byte datums together to avoid complexity in\nboundary cases. The deduplication pass logic has the flexibility that\nthis requires already.\n\n> > I wonder how the same useful performance characteristics can be\n> > maintained with a variable-width TID design. If you solve the problem\n> > by changing IndexTuple, then you are kind of obligated to not use\n> > varlena headers to keep the on-disk size manageable. Who knows where\n> > it all ends?\n>\n> What's wrong with varlena headers? It would end up being a 1-byte\n> header in practically every case, and no variable-width representation\n> can do without a length word of some sort. I'm not saying varlena is\n> as efficient as some new design could hypothetically be, but it\n> doesn't seem like it'd be a big enough problem to stress about. If you\n> used a variable-width representation for integers, you might actually\n> save bytes in a lot of cases. 
An awful lot of the TIDs people store in\n> practice probably contain several zero bytes, and if we make them\n> wider, that's going to be even more true.\n\nMaybe all of this is true, and maybe it works out to be the best path\nforward in the long term, all things considered. But whether or not\nthat's true is crucially dependent on what real practical table AMs\n(of which there will only ever be a tiny number) actually need to do.\nWhy should we assume that the table AM cannot accept some\nrestrictions? What good does it do to legalistically define the\nproblem as a problem for index AMs to solve?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:51:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 2:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't know how it's possible to do any of this without first\n> addressing what the table AM does in cases where heapam currently does\n> a non-HOT update.\n\nWhy can't it do what it does already? It's not broken for heap, so why\nshould it be broken for anything else? And why are non-HOT updates\nspecifically a problem?\n\n> You obviously cannot have the equivalent of\n> duplicate TIDs when your new table AM runs into these scenarios. So\n> what do you do instead? How do you make your clustered index/IoT style\n> identifiers (i.e. your strictly logical TID-like identifiers) deal\n> with that case?\n\nIs the problem you're worried about here that, with something like an\nindex-organized table, you can have multiple row versions that have\nthe same logical tuple ID, i.e. primary key value? And that the\ninterfaces aren't well-suited to that? Because that's a problem I have\nthought about and can comment on, even though I think the question of\nhaving multiple versions with the same TID is distinguishable from the\nquestion of how *wide* TIDs should be. 
But maybe that's not what you\nare talking about here, in which case I guess I need a clearer\nexplanation of the concern.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 15:19:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 12:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Why can't it do what it does already? It's not broken for heap, so why\n> should it be broken for anything else? And why are non-HOT updates\n> specifically a problem?\n\nNo reason.\n\n> > You obviously cannot have the equivalent of\n> > duplicate TIDs when your new table AM runs into these scenarios. So\n> > what do you do instead? How do you make your clustered index/IoT style\n> > identifiers (i.e. your strictly logical TID-like identifiers) deal\n> > with that case?\n>\n> Is the problem you're worried about here that, with something like an\n> index-organized table, you can have multiple row versions that have\n> the same logical tuple ID, i.e. primary key value? And that the\n> interfaces aren't well-suited to that? Because that's a problem I have\n> thought about and can comment on, even though I think the question of\n> having multiple versions with the same TID is distinguishable from the\n> question of how *wide* TIDs should be. But maybe that's not what you\n> are talking about here, in which case I guess I need a clearer\n> explanation of the concern.\n\nThat's what I'm talking about. I'd like to hear what you think about it.\n\nIt's not exactly a narrow concern. 
For one thing, it is enough to\ntotally validate my suggestion about how we might widen TIDs and still\nhave nbtree deduplication.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 12:29:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 3:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Is the problem you're worried about here that, with something like an\n> > index-organized table, you can have multiple row versions that have\n> > the same logical tuple ID, i.e. primary key value? And that the\n> > interfaces aren't well-suited to that? Because that's a problem I have\n> > thought about and can comment on, even though I think the question of\n> > having multiple versions with the same TID is distinguishable from the\n> > question of how *wide* TIDs should be. But maybe that's not what you\n> > are talking about here, in which case I guess I need a clearer\n> > explanation of the concern.\n>\n> That's what I'm talking about. I'd like to hear what you think about it.\n\nOK. I thought about this in regards to zheap, which has this exact\nproblem, because it wants to do so-called \"in place\" updates where the\nnew version of the row goes right on top of the old one in the table\npage, and the old version of the row gets written into the undo log.\nJust to keep things simple, we said that initially we'd only use this\nin-place update strategy when no indexed columns were changed, so that\nthere's only ever one set of index entries for a given TID. In that\nmodel, the index AMs don't really need to care that there are actually\nmultiple tuples for the same TID, because those tuples differ only in\ncolumns that the index doesn't care about anyway. An index scan has to\nbe careful to fetch the correct version of the tuple, but it has a\nSnapshot available, so it can do that. 
However, there's no easy and\nefficient way to handle updates and deletes. Suppose for example that\na tuple has been updated 5 times, creating versions t1..t5. t5 is now\nin the zheap page, and the other versions are in the undo. t5 points\nto t4 which points to t3 and so forth. Now an updater comes along and\nlet's say that the updater's snapshot sees t2. It may be that t3..t5\nare *uncommitted* updates in which case the attempt to update t2 may\nsucceed if the transaction that performed them aborts, or it may be\nthat the updating transactions have committed, in which case we're\ngoing to have to fail. 
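(A purely schematic model of that decision may help make the t1..t5 scenario concrete. The types here are invented and correspond to no real zheap code; they only illustrate why the outcome of updating an older version depends on the fate of the newer ones.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One version in the chain, newest-first. */
typedef struct DemoVersion
{
    uint32_t xmin;   /* transaction that created this version */
    bool committed;  /* did that transaction commit? */
} DemoVersion;

typedef enum DemoUpdateOutcome
{
    DEMO_CAN_UPDATE,        /* our snapshot's version is still the newest */
    DEMO_MUST_WAIT,         /* newer uncommitted versions: wait for abort */
    DEMO_CONCURRENT_UPDATE  /* a newer committed version: update must fail */
} DemoUpdateOutcome;

/* Two toy chains, ordered newest-first (t5..t3). */
static const DemoVersion demo_all_committed[] = {
    {105, true}, {104, true}, {103, true}
};
static const DemoVersion demo_uncommitted[] = {
    {105, false}, {104, false}, {103, true}
};

/*
 * 'seen' is the index of the version our snapshot saw.  Any committed
 * version newer than it dooms the update; uncommitted newer versions
 * mean the outcome depends on whether their transactions abort.
 */
static DemoUpdateOutcome
demo_try_update(const DemoVersion *chain, int nversions, int seen)
{
    (void) nversions;
    for (int i = 0; i < seen; i++)
    {
        if (chain[i].committed)
            return DEMO_CONCURRENT_UPDATE;
    }
    return seen == 0 ? DEMO_CAN_UPDATE : DEMO_MUST_WAIT;
}
```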
There might be some efficiency\nproblems with this idea I hadn't considered, based on your remarks\ntoday. But regardless of the wisdom or folly of this approach, the\nbroader point is that we can't assume that all heap types are going to\nhave the same maintenance requirements. I think most of them are going\nto have some kind of maintenance operation that need to or at least\ncan optionally be performed from time to time, but it might be\ntriggered by completely different criteria than vacuum. New table AMs\nmight well choose to use 64-bit XIDs, avoiding the need for wraparound\nprocessing altogether. Maybe they have such good opportunistic cleanup\nmechanisms that periodic vacuum for bloat isn't even really needed.\nMaybe they bloat when updates and deletes commit but not when inserts\nand updates abort, because those cases are handled via some other\nmechanism. Who knows, really? It's hard to predict what\nnot-yet-written AMs might care about, and even if we knew, it seems\ncrazy to try to rewrite the way vacuum works to cater to those needs\nbefore we actually have some working AMs to use as a testbed.\n\nIt strikes me that certain interesting cases might not really need\nanything very in-depth here. For example, consider indirect indexes,\nwhere the index references the primary key value rather than the TID.\nWell, the indirect index should probably be vacuumed periodically to\nprevent bloat, but it doesn't need to be vacuumed to recycle TIDs\nbecause it doesn't contain TIDs. BRIN indexes, BTW, also don't contain\nTIDs. Either could, therefore, be optionally vacuumed after vacuum has\ndone absolutely everything else, even truncate the table, or they\ncould be vacuumed on a completely separate schedule that doesn't have\nanything to do with table vacuuming. 
I suppose we'd have to come up\nwith some solution, but I don't think it would need to be fully\ngeneral; it could just be good enough for that particular feature,\nsince fully general seems rather impossible anyway. So I feel like\nit's pretty fair to just defer this question. Without some solution\nyou can't entirely finish a project like indirect indexes, but without\nvariable-width index payloads you can't even start it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 17:07:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 12:29 -0700, Peter Geoghegan wrote:\n> > Is the problem you're worried about here that, with something like\n> > an\n> > index-organized table, you can have multiple row versions that have\n> > the same logical tuple ID, i.e. primary key value? \n> \n> That's what I'm talking about. I'd like to hear what you think about\n> it.\n\nFWIW, this is not a problem in my table AM. I am fine having different\nTIDs for each version, just like heapam.\n\nFor index-organized tables it does seem like an interesting problem.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 14:13:03 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 2:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> FWIW, this is not a problem in my table AM. I am fine having different\n> TIDs for each version, just like heapam.\n\nThis means that we are largely in agreement about the general nature\nof the problem. 
That seems like a good basis to redefine TID-like\nidentifiers so that they can accommodate what you want to do.\n\n> For index-organized tables it does seem like an interesting problem.\n\nI strongly suspect that index-organized tables (or indirect indexes,\nor anything else that assumes that TID-like identifiers map directly\nto logical rows as opposed to physical versions) are going to break\ntoo many assumptions to ever be tractable. Assuming I have that right,\nit would advance the discussion if we could all agree on that being a\nnon-goal for the tableam interface in general. This would allow us to\nclearly discuss how to solve the remaining problem of accommodating\ncolumn stores and suchlike. That seems hard, but much more tractable.\n\nThe fact that the tableam has almost no non-goals has always bothered\nme a bit. Especially on this particular point about purely logical\nTID-like identifiers.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 14:22:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 2:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> OK. I thought about this in regards to zheap, which has this exact\n> problem, because it wants to do so-called \"in place\" updates where the\n> new version of the row goes right on top of the old one in the table\n> page, and the old version of the row gets written into the undo log.\n> Just to keep things simple, we said that initially we'd only use this\n> in-place update strategy when no indexed columns were changed, so that\n> there's only ever one set of index entries for a given TID. In that\n> model, the index AMs don't really need to care that there are actually\n> multiple tuples for the same TID, because those tuples differ only in\n> columns that the index doesn't care about anyway. 
An index scan has to\n> be careful to fetch the correct version of the tuple, but it has a\n> Snapshot available, so it can do that.\n\nRight. So zheap (in the current prototype implementation) is like\nheapam with its HOT optimization, except that it isn't subject to the\nsame limitations with regard to fitting heap tuples on the same heap\npage to keep the same HOT chain going over time. You kind of have the\nmoral equivalent of a HOT chain that can largely live in UNDO. That\nseems like a very useful thing on its own. A lot of the problems with\nHOT are in this area -- maybe the vast majority, even.\n\nA remaining problem is that we must generate a new round of index\ntuples for each and every index when only one indexed column is\nlogically modified by an UPDATE statement. I think that this is much\nless of a problem now due to bottom-up index deletion. Sure, it sucks\nthat we still have to dirty the page at all. But it's nevertheless\ntrue that it all but eliminates version-driven page splits, which are\nwhere almost all of the remaining downside is. It's very reasonable to\nnow wonder if this particular all-indexes problem is worth solving at\nall in light of that. (Modern hardware characteristics also make a\ncomprehensive fix less valuable in practice.)\n\n> However, there's no easy and\n> efficient way to handle updates and deletes. Suppose for example that\n> a tuple has been updated 5 times, creating versions t1..t5. t5 is now\n> in the zheap page, and the other versions are in the undo. t5 points\n> to t4 which points to t3 and so forth. Now an updater comes along and\n> let's say that the updater's snapshot sees t2. It may be that t3..t5\n> are *uncommitted* updates in which case the attempt to update t2 may\n> succeed if the transaction that performed then aborts, or it may be\n> that the updating transactions have committed, in which case we're\n> going to have to fail. 
But that decision isn't made by the scan that\n> sees t3; it happens when the TID reaches the ModifyTable node. So what\n> zheap ends up doing is finding the right tuple version during the\n> scan, by making use of the snapshot, and then having to go repeat that\n> work when it's time to try to perform the update. It would be nice to\n> avoid this.\n\nI believe that this is another consequence of the fact that Postgres\nversions tuples, not pages. This is not a minor theoretical point.\nIt's very different to what Oracle does. It's almost a necessary\nconsequence of our basic approach to extensibility, because you can\nhave things like index tuples whose values are equal but visibly\ndistinct (e.g., the numeric value '5.0' is equal to but distinct from\n'5'). It also has a lot to do with how crash recovery works.\n\n> If we could feed system columns from the scan through to\n> the update, we could pass along an undo pointer and avoid the extra\n> overhead. So it seems to me, so far anyway, that there's no very\n> fundamental problem here, but there is an efficiency issue which we\n> could address if we had a bit more planner and executor infrastructure\n> to help us out.\n\nFWIW you don't necessarily have to do the EPQ stuff. You could in\ntheory do a statement-level rollback, and repeat. The EPQ mechanism is\nunique to Postgres. Maybe it doesn't matter, but I don't think that\nit's essential to follow this in other table AMs.\n\n> Now in the long run the vision for zheap was that we'd eventually want\n> to do in-place updates even when indexed columns have been modified,\n> and this gets a whole lot trickier, because now there can be multiple\n> sets of index entries pointing at the same TID which don't agree on\n> the values of the indexed columns.\n\nIt's much easier when you have a very simple type system that doesn't\nallow differences like my \"numeric '5.0' vs '5'\" example -- a system\nthat is built for this from the ground up. 
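(The '5.0' vs '5' point can be made concrete with a toy comparator. This is nothing like the real numeric_cmp implementation -- it just illustrates opclass equality diverging from bitwise, or "image", equality.)

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy "numerics" are decimal strings compared by value: an
 * opclass-style comparator can report two values equal even though
 * their stored bytes differ.
 */
static int
demo_numeric_cmp(const char *a, const char *b)
{
    double da = strtod(a, NULL);
    double db = strtod(b, NULL);

    if (da < db)
        return -1;
    if (da > db)
        return 1;
    return 0;
}

/* Bitwise ("image") equality: what a naive re-find would check. */
static int
demo_image_eq(const char *a, const char *b)
{
    return strcmp(a, b) == 0;
}
```

Any scheme that tries to re-locate an index tuple by value has to cope with the fact that the comparator says "equal" while the stored bytes differ.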
If there are meaningful\nsemantic differences among opclass-equal index tuples, then we can\nnever assume that index tuples will always be locatable after an\nupdate affecting indexed columns (if only because we need to preserve\nthe '5.0' and '5' variants in an index on a numeric column).\n\nIf we could at least be sure that two index tuples that point to the\nsame stable/logical zheap TID (in a world where TIDs were stable\nidentifiers of logical rows) were nevertheless unique, then we'd be\nable to uniquely identify each index tuple during retail index tuple\ndeletion -- they'd still have distinct key values in the index tuple\noverall. That assumption isn't workable in Postgres, though.\n\nI'm pretty sure that there will be a bunch of tacit assumptions like\nthis that would shake out all over the place. You'd have to actually\npursue this design to figure out what they were, but I'm pretty sure\nmany more exist. In any case this one example seems sufficient to make\nme doubt the whole enterprise.\n\n> As old row versions die off, some\n> of those pointers need to be cleaned out, and others do not. I thought\n> we might solve this problem by something akin to retail index\n> deletion: have an update or delete on a zheap tuple go re-find the\n> associated index entries and mark them for possible cleanup, and then\n> vacuum can ignore all unmarked tuples. There might be some efficiency\n> problems with this idea I hadn't considered, based on your remarks\n> today. But regardless of the wisdom or folly of this approach, the\n> broader point is that we can't assume that all heap types are going to\n> have the same maintenance requirements.\n\nNo, we can't. 
But we had better have a generalized definition that\naccounts for what variation is acceptable, and (most importantly) what\nvariation *isn't* acceptable.\n\n> I think most of them are going\n> to have some kind of maintenance operation that need to or at least\n> can optionally be performed from time to time, but it might be\n> triggered by completely different criteria than vacuum. New table AMs\n> might well choose to use 64-bit XIDs, avoiding the need for wraparound\n> processing altogether. Maybe they have such good opportunistic cleanup\n> mechanisms that periodic vacuum for bloat isn't even really needed.\n> Maybe they bloat when updates and deletes commit but not when inserts\n> and updates abort, because those cases are handled via some other\n> mechanism. Who knows, really? It's hard to predict what\n> not-yet-written AMs might care about, and even if we knew, it seems\n> crazy to try to rewrite the way vacuum works to cater to those needs\n> before we actually have some working AMs to use as a testbed.\n\nNothing is certain, but frankly I just don't believe that anybody is\never going to solve this problem in Postgres. The fundamental\nassumption that TIDs are not stable identifiers of logical rows (they\npoint to versions) is just too baked into everything. And the\ndownsides of that design can be fixed in a localized way.\n\nOn the other hand, Jeff and I agree about the parameters of the\ndiscussion here. I can see myself doing work inside nbtree to\nfacilitate his work. But that is made a lot less likely by the general\nlack of agreement about what ought to ultimately be possible. There is\na very real cost to indefinitely deferring making a hard choice about\nwhat we can rule out for table AMs. It's *not* free.\n\nHow long should that situation be allowed to continue for? This is not\na rhetorical question -- maybe it should be timeboxed in some way.\nRight now the tableam can in theory do anything. 
Nobody knows how, but anything\nis possible!\n\n> It strikes me that certain interesting cases might not really need\n> anything very in-depth here. For example, consider indirect indexes,\n> where the index references the primary key value rather than the TID.\n> Well, the indirect index should probably be vacuumed periodically to\n> prevent bloat, but it doesn't need to be vacuumed to recycle TIDs\n> because it doesn't contain TIDs.\n\nWhy is that okay, though? How can you get away with not having\nversion-based TID-like identifiers here? I would be willing to accept\nan answer like \"it's unclear, but it must be possible\" if there was no\ndownside. But as I said, there is a downside.\n\n> BRIN indexes, BTW, also don't contain\n> TIDs.\n\nBRIN indexes aren't suitable as indirect indexes, though.\n\n> Either could, therefore, be optionally vacuumed after vacuum has\n> done absolutely everything else, even truncate the table, or they\n> could be vacuumed on a completely separate schedule that doesn't have\n> anything to do with table vacuuming.\n\nI think that the question of how TID-like identifiers work across\ntable AMs is fundamentally distinct from how VACUUM works. I think\nthat we'll probably always have something like VACUUM. 
That doesn't\nmean that we cannot ultimately teach VACUUM to run far less often.\nThere is a problem with things being overly coupled inside VACUUM, so\nour practical experience with VACUUM isn't necessarily a reliable\nindicator of how much of a problem VACUUM is long term.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 15:19:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 5:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I strongly suspect that index-organized tables (or indirect indexes,\n> or anything else that assumes that TID-like identifiers map directly\n> to logical rows as opposed to physical versions) are going to break\n> too many assumptions to ever be tractable. Assuming I have that right,\n> it would advance the discussion if we could all agree on that being a\n> non-goal for the tableam interface in general.\n\nI *emphatically* disagree with the idea of ruling such things out\ncategorically. This is just as naive as the TODO's statement that we\ndo not want \"All backends running as threads in a single process\".\nDoes anyone really believe that we don't want that any more? I\nbelieved it 10 years ago, but not any more. It's costing us very\nsubstantially not only in that in makes parallel query more\ncomplicated and fragile, but more importantly in that we can't scale\nup to connection counts that other databases can handle because we use\nup too many operating system resources. Support threading in\nPostgreSQL isn't a project that someone will pull off over a long\nweekend and it's not something that has to be done tomorrow, but it's\npretty clearly the future.\n\nSo here. The complexity of getting a table AM that does anything\nnon-trivial working is formidable, and I don't expect it to happen\nright away. 
Picking one that is essentially block-based and can use\n48-bit TIDs is very likely the right initial target because that's the\nclosest we have now, and there's no sense attacking the hardest\nvariant of the problem first. However, as with the\nthreads-vs-processes example, I strongly suspect that having only one\ntable AM is leaving vast amounts of performance on the table. To say\nthat we're never going to pursue the parts of that space that require\na different kind of tuple identifier is to permanently write off tons\nof ideas that have produced promising results in other systems. Let's\nnot do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 10:41:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, Apr 30, 2021 at 6:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> A remaining problem is that we must generate a new round of index\n> tuples for each and every index when only one indexed column is\n> logically modified by an UPDATE statement. I think that this is much\n> less of a problem now due to bottom-up index deletion. Sure, it sucks\n> that we still have to dirty the page at all. But it's nevertheless\n> true that it all but eliminates version-driven page splits, which are\n> where almost all of the remaining downside is. It's very reasonable to\n> now wonder if this particular all-indexes problem is worth solving at\n> all in light of that. (Modern hardware characteristics also make a\n> comprehensive fix less valuable in practice.)\n\nIt's reasonable to wonder. I think it depends on whether the problem\nis bloat or just general slowness. To the extent that the problem is\nbloat, bottom-index deletion will help a lot, but it's not going to\nhelp with slowness because, as you say, we still have to dirty the\npages. 
And I am pretty confident that slowness is a very significant\npart of the problem here. It's pretty common for people migrating from\nanother database system to have, for example, a table with 10 indexes\nand then repeatedly update a column that is covered by only one of\nthose indexes. Now, with bottom-up index deletion, this should cause a\nlot less bloat, and that's good. But you still have to update all 10\nindexes in the foreground, and that's bad, because the alternative is\nto find just the one affected index and update it twice -- once to\ninsert the new tuple, and a second time to delete-mark the old tuple.\n10 is a lot more than 2, and that's even ignoring the cost of deferred\ncleanup on the other 9 indexes. So I don't really expect this to get\nus out of the woods. Somebody whose workload runs five times slower on\na pristine data load is quite likely to give up on using PostgreSQL\nbefore bloat even enters the picture.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 11:03:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 7:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> So here. The complexity of getting a table AM that does anything\n> non-trivial working is formidable, and I don't expect it to happen\n> right away. Picking one that is essentially block-based and can use\n> 48-bit TIDs is very likely the right initial target because that's the\n> closest we have now, and there's no sense attacking the hardest\n> variant of the problem first.\n\nIt doesn't have to be block-based -- that's not what Jeff is\nproposing. It just has to be able to accept the restriction that\nindexes must have a unique TID-like identifier for each version (not\nquite a version actually -- whatever the equivalent of a HOT chain\nis). 
This is a restriction that Jeff had pretty much planned on\nworking within before starting this thread (I know this because we\nspoke about it privately).\n\nIt's quite possible to rule out an index-organized table design\nwithout ruling out a column store with logical TID-like identifiers,\nthat aren't block-based. It's fair to wonder if not tightening up the\nrules for TID-like identifiers is actually helping table AM authors in\npractice. I think it's actually making things harder.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 08:26:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 8:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It's reasonable to wonder. I think it depends on whether the problem\n> is bloat or just general slowness. To the extent that the problem is\n> bloat, bottom-index deletion will help a lot, but it's not going to\n> help with slowness because, as you say, we still have to dirty the\n> pages. And I am pretty confident that slowness is a very significant\n> part of the problem here.\n\nIt's all going to depend on workload of course -- we'll need to wait\nand see what users still complain about with Postgres 14 to really\nhave some idea. You only freshly dirty those leaf pages that weren't\nalready dirty, and the costs will be much more linear, so it's a\ncomplicated question.\n\nHere is a more modest statement that might be more convincing: The\n*relative* importance of making something like HOT more robust to\nthings like long-running xacts has increased now that we have\nbottom-up index deletion. We could improve things here by adding\nsomething like zheap, which allows a HOT chain to mostly live in UNDO,\nand therefore pretty much become arbitrarily long. 
This seems\nplausible because users will accept that UPDATEs that modify one or\nmore indexed columns kinda suck, as long as there is never any truly\npathological performance. Whereas users will not easily accept that\nHOT (or something like it) doesn't quite work well enough to make\nrelation sizes stable when they avoid updating indexed columns.\n\nI don't think that even the absence of UPDATEs that logically modify\nindexes and the absence of long running transactions (that hold up\ncleanup) is sufficient to make HOT work well enough to keep table\nsizes stable over time. Minor inefficiencies (e.g. LP_DEAD line\npointer bloat) will tend to aggregate over time, leading to heap\nfragmentation.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 08:48:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 11:26 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> It just has to be able to accept the restriction that\n> indexes must have a unique TID-like identifier for each version (not\n> quite a version actually -- whatever the equivalent of a HOT chain\n> is). 
This is a restriction that Jeff had pretty much planned on\n> working within before starting this thread (I know this because we\n> spoke about it privately).\n\nWell, I think what I'm saying is that I'm not on board with such a restriction.\n\nIf you're just saying that it has to be possible to identify rows\nsomehow, I am in full agreement, and I think the universe is on board\nas well.\n\nBut if you're saying those identifiers have to be fixed-width and 48\n(or even 64) bits, I disagree that we wish to have such a requirement\nin perpetuity.\n\nThat'd be like going around to automobile manufacturers in 1925 and\nasking them to agree that all future cars ever manufactured must have\na clutch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 12:45:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 9:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But if you're saying those identifiers have to be fixed-width and 48\n> (or even 64) bits, I disagree that we wish to have such a requirement\n> in perpetuity.\n\nOnce you require that TID-like identifiers must point to particular\nversions (as opposed to particular logical rows), you also virtually\nrequire that the identifiers must always be integer-like (though not\nnecessarily block-based and not necessarily 6 bytes). You've\npractically ensured that clustered index tables (and indirect indexes)\nwill never be possible by accepting this. Those designs are the only\nreal reason to have truly variable-length TID-like identifiers IMV (as\nopposed to 2 or perhaps even 3 standard TID widths).\n\nYou don't accept any of that, though. Fair enough. 
I predict that\navoiding making a hard choice will make Jeff's work here a lot harder,\nthough.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 09:59:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 1:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> You don't accept any of that, though. Fair enough. I predict that\n> avoiding making a hard choice will make Jeff's work here a lot harder,\n> though.\n\nI don't really think so, or at least I don't see a reason why it\nshould. As things stand today, I don't think it's possible for a table\nAM author to make any other choice than to assume that their TIDs have\nto look and work like heap TIDs; that is, there had better be a block\nnumber portion and an item number portion, and the item number had\nbetter be smaller than MaxOffsetNumber, and if you want bitmap scans\nto run reasonably quickly, the block number had also better correspond\nto physical locality to some degree. It's not clear to me how exactly\nsomeone would go about fixing all of that, but I think it would be\ngreat if they did. Even if that person wanted to assume for purposes\nof their own patch that fixed-width, integer-like TIDs are the only\nthing we care about, that would be fine with me. Getting to a point\nwhere the available 48 bits can be used in whatever way the table AM\nauthor wants is clearly better than what we have now.\n\nNow I'm personally of the opinion that we shouldn't be content to stop\nthere, but so what? I'm not insisting that Jeff or anyone else has to\nwork on this problem, or that they have to fix more of it rather than\nless. I hope that nobody's going to try to back us into a corner by\nmaking design decisions that deliberately complicate the possibility\nof future improvements in that area, and that's about it. I don't\nreally understand why you think that's unreasonable, or even\nproblematic. 
I can't see that any way in which the assumption that we\nwill NEVER want to further generalize the TID concept simplifies\nanything anyone wants to do today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:22:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 3 May 2021 at 19:00, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, May 3, 2021 at 9:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > But if you're saying those identifiers have to be fixed-width and 48\n> > (or even 64) bits, I disagree that we wish to have such a requirement\n> > in perpetuity.\n>\n> Once you require that TID-like identifiers must point to particular\n> versions (as opposed to particular logical rows), you also virtually\n> require that the identifiers must always be integer-like (though not\n> necessarily block-based and not necessarily 6 bytes). You've\n> practically ensured that clustered index tables (and indirect indexes)\n> will never be possible by accepting this.\n\nFor IoT, as far as I know, one of the constraints is that there exists\nsome unique constraint on the table, which also defines the ordering.\nAssuming that that is the case, we can use <unique key> + <inserting\ntransaction id> to identify tuple versions.\n\nWith regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Mon, 3 May 2021 19:22:39 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 10:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't really think so, or at least I don't see a reason why it\n> should. 
As things stand today, I don't think it's possible for a table\n> AM author to make any other choice than to assume that their TIDs have\n> to look and work like heap TIDs; that is, there had better be a block\n> number portion and an item number portion, and the item number had\n> better be smaller than MaxOffsetNumber, and if you want bitmap scans\n> to run reasonably quickly, the block number had also better correspond\n> to physical locality to some degree. It's not clear to me how exactly\n> someone would go about fixing all of that, but I think it would be\n> great if they did. Even if that person wanted to assume for purposes\n> of their own patch that fixed-width, integer-like TIDs are the only\n> thing we care about, that would be fine with me. Getting to a point\n> where the available 48 bits can be used in whatever way the table AM\n> author wants is clearly better than what we have now.\n\nI don't think it's much good to just do that. You probably need a full\n64-bits for something like a column store. But that's all you need.\n\n> Now I'm personally of the opinion that we shouldn't be content to stop\n> there, but so what? I'm not insisting that Jeff or anyone else has to\n> work on this problem, or that they have to fix more of it rather than\n> less. I hope that nobody's going to try to back us into a corner by\n> making design decisions that deliberately complicate the possibility\n> of future improvements in that area, and that's about it. I don't\n> really understand why you think that's unreasonable, or even\n> problematic. I can't see that any way in which the assumption that we\n> will NEVER want to further generalize the TID concept simplifies\n> anything anyone wants to do today.\n\nIt creates ambiguity of the kind that deters related improvements. 
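To make the shape concrete: the block-number-plus-item-number TID quoted above, read as a single 48-bit integer, can be sketched like this (illustrative Python only -- the real ItemPointerData is a C struct, and none of this is actual PostgreSQL code):

```python
BLOCK_BITS = 32   # BlockNumber is a uint32
OFFSET_BITS = 16  # OffsetNumber is a uint16; 32 + 16 = 48 bits total

def tid_to_uint48(block: int, offset: int) -> int:
    # A heap TID reinterpreted as one 48-bit unsigned integer.
    assert 0 <= block < (1 << BLOCK_BITS) and 0 < offset < (1 << OFFSET_BITS)
    return (block << OFFSET_BITS) | offset

def uint48_to_tid(val: int) -> tuple[int, int]:
    # The inverse mapping: split the integer back into (block, offset).
    return val >> OFFSET_BITS, val & ((1 << OFFSET_BITS) - 1)

assert uint48_to_tid(tid_to_uint48(100, 3)) == (100, 3)
```

A table AM that wants to treat all 48 bits as an opaque row number only works if nothing upstack insists that the offset part stay below a heap-derived ceiling -- which is exactly the kind of assumption being debated here.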
I\nfor one am not comfortable with (say) working on generalizing TID to\nthe extent required to facilitate Jeff's work if that obligates me to\nmake some legalistic and wholly insincere statement about future\nimprovements to the definition of TID still being quite possible (to\nfacilitate indirect indexes, or whatever). The truth is that I cannot\npossibly know if facilitating Jeff's work in the short term blocks off\nother things in the long term -- because I don't actually have a clue\nhow these other things could ever really be implemented sensible in\nany case.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 10:38:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 2021-05-03 at 09:59 -0700, Peter Geoghegan wrote:\n> You don't accept any of that, though. Fair enough. I predict that\n> avoiding making a hard choice will make Jeff's work here a lot\n> harder,\n> though.\n\nFor the purposes of this discussion, what's making my life difficult is\nthat we don't have a good definition for TID, leaving me with two\noptions:\n\n 1. be overly conservative, accept MaxOffsetNumber=2048, wasting a\nbunch of address space; or\n 2. risk the mapping between TID and row number could break at any\ntime\n\nAnd compounding that, it seems that there's a bug in GIN that doesn't\nhonor MaxOffsetNumber, so actually neither of the rules above work\neither. Instead, I need to use 2047 as the max offset number, which has\nno real basis in the postgres design, but I'd be stuck with it for a\nlong time.\n\nWhat I'm looking for is:\n * A declaration of what the actual maximum valid offset number is,\nand that it will be stable enough to use for table AMs for now. 
(This\nmaximum valid offset number may or may not be called MaxOffsetNumber,\nand may or may not be tied to the maximum number of items that fit on a\npage.)\n * A confirmation that this GIN behavior is a bug that should be\nfixed, now that there are table AMs in existence that need it fixed.\n\nEven if we fix this in v15, we still need some guidance for what table\nAMs should do in earlier versions.\n\nIf we change the way tuple IDs are represented or the table AM in v15\nor beyond, that may require a REINDEX for indexes on some table AMs. As\nlong as we have some robust way to check that a REINDEX is necessary,\nthat's fine with me.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 03 May 2021 10:57:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 10:22 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> For IoT, as far as I know, one of the constraints is that there exists\n> some unique constraint on the table, which also defines the ordering.\n> Assuming that that is the case, we can use <unique key> + <inserting\n> transaction id> to identify tuple versions.\n\nPerhaps that's true in theory, but the resulting design seems likely\nto be useless in the end. In any case I believe that systems that\ngenerally use a heap but give you the choice of using an IoT (I'm\nreally thinking of Oracle) tend to not have many users that actually\navail of IoTs. On modern flash storage the trade-off made by an IoT or\nclustered index design seems like the wrong one on average. 
You're\nsaving about 1 I/O on average with a PK lookup, which just isn't that\nmuch of an upside compared to the many downsides.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 11:05:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 2021-05-03 at 13:22 -0400, Robert Haas wrote:\n> to look and work like heap TIDs; that is, there had better be a block\n> number portion and an item number portion,\n\nRight (at least for now).\n\n> and the item number had\n> better be smaller than MaxOffsetNumber,\n\nThat's not clear to me at all, and is the whole reason I began this\nthread.\n\n a. You say \"smaller than MaxOffsetNumber\", but that's a little weird.\nIf an offset can't be MaxOffsetNumber, it's not really the maximum, is\nit?\n b. If you actually meant \"less than or equal to MaxOffsetNumber\",\nthat will fail with the GIN posting list issue raised in my first\nemail. Do you agree that's a bug?\n c. Why can't we go all the way up to MovedPartitionsOffsetNumber - 1?\nRight now, MaxOffsetNumber is poorly named, because it actually\nrepresents the a number slightly higher than the maximum number of\nitems that can fit on a page. That essentially wastes 5 bits of address\nspace for no obvious reason.\n\n> and if you want bitmap scans\n> to run reasonably quickly, the block number had also better\n> correspond\n> to physical locality to some degree.\n\nRight (at least for now).\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Mon, 03 May 2021 11:13:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 2021-05-03 at 10:38 -0700, Peter Geoghegan wrote:\n> I don't think it's much good to just do that. You probably need a\n> full\n> 64-bits for something like a column store. 
But that's all you need.\n\nI would definitely like that for citus columnar, and it would\ndefinitely make it easier to manage the address space, but I won't\ndemand it today. 48 bits is a workable tuple address space for many\npurposes, especially when you factor in logical partitioning.\n\nI will be dealing with gaps though, so wasting 5 bits of address space\n(2^16 / MaxOffsetNumber = 32) to bring it down to 43 bits is not great.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 03 May 2021 11:36:59 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 10:57 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> For the purposes of this discussion, what's making my life difficult is\n> that we don't have a good definition for TID, leaving me with two\n> options:\n>\n> 1. be overly conservative, accept MaxOffsetNumber=2048, wasting a\n> bunch of address space; or\n\ntidbitmap.c uses MaxHeapTuplesPerPage as its MAX_TUPLES_PER_PAGE,\nwhich is much lower than MaxOffsetNumber (it's almost 10x lower). I\nwonder what that means for your design.\n\n> 2. 
risk the mapping between TID and row number could break at any\n> time\n\nThough this clearly is the immediate problem for you, I think that the\nreal problem is that the table AM kind of tacitly assumes that there\nis a universality to item pointer TIDs -- which is obviously not true.\nIt might be useful for you to know what assumptions index AMs can make\nabout how TIDs work in general, but I think that you really need an\nindex-AM level infrastructure that advertises the capabilities of each\nindex AM with respect to handling each possible variation (I suppose\nyou have heapam, 6 byte uint, and maybe 8 byte uint).\n\nThe easiest reasonable short term design for you is probably to find a\nway to make 6 byte TIDs into 48-bit unsigned integers (perhaps only\nconceptually), at least in contexts where the columnar table AM is\nused. You'll still need the index AM for that. This at least makes\n64-bit TID-like identifiers a relatively simple conceptual shift.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 11:43:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 3 May 2021 at 20:43, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, May 3, 2021 at 10:57 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > For the purposes of this discussion, what's making my life difficult is\n> > that we don't have a good definition for TID, leaving me with two\n> > options:\n> >\n> > 1. be overly conservative, accept MaxOffsetNumber=2048, wasting a\n> > bunch of address space; or\n>\n> tidbitmap.c uses MaxHeapTuplesPerPage as its MAX_TUPLES_PER_PAGE,\n> which is much lower than MaxOffsetNumber (it's almost 10x lower). I\n> wonder what that means for your design.\n\nOne could relatively easily disable bitmap scans on the table AM by\nnot installing the relevant bitmap support functions on the registered\nTableAM structure, and thus not touch that problem. 
Some indexes will\nthen never be accessed due to the bitmap scan requirement of their\nIndexAM (gin, brin, bloom, to name a few), and as such won't make\nsense to create on that table, but that's about it I think. We might\nwant to add some safeguards that bitmapscan-only indexams aren't used\non tableams that don't support it, but I believe that's a nice-to-have\nand not critical, on a similar level to the deduplication of constraint\nindexes.\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 3 May 2021 21:06:19 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 12:06 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> One could relatively easily disable bitmap scans on the table AM by\n> not installing the relevant bitmap support functions on the registered\n> TableAM structure, and thus not touch that problem.\n\nI have no idea how much it'll hurt things if the column store table AM\nsupports no analogue of bitmap scans.\n\n> Some indexes will\n> then never be accessed due to the bitmap scan requirement of their\n> IndexAM (gin, brin, bloom, to name a few), and as such won't make\n> sense to create on that table, but that's about it I think.\n\nRight. More formally: if this restriction is accepted by a table AM\n(say the column store table AM), then any index AM with amgettuple set\nto NULL cannot ever be used (it should probably be treated as an error\ncondition at CREATE INDEX time).\n\nIf this really is the best path forward (again, no idea if that's\ntrue) then that would conveniently make it pretty easy to solve the\nGIN posting list issue raised by Jeff. 
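For reference, that issue stems from how ginpostinglist.c packs an ItemPointer into a single integer before delta-compressing it: the heap block number gets the high bits and the offset gets a fixed 11-bit field. A rough Python model of that packing (the names mirror the C code, but this is only a sketch of the logic, not the actual implementation):

```python
MAX_HEAP_TUPLES_PER_PAGE_BITS = 11  # ginpostinglist.c reserves 11 bits for the offset

def itemptr_to_uint64(block: int, offset: int) -> int:
    # The C code asserts the offset fits in the 11-bit field, which is
    # why 2047 -- not MaxOffsetNumber itself -- is the effective ceiling.
    if not 0 < offset < (1 << MAX_HEAP_TUPLES_PER_PAGE_BITS):
        raise ValueError("offset does not fit in the posting-list encoding")
    return (block << MAX_HEAP_TUPLES_PER_PAGE_BITS) | offset

def uint64_to_itemptr(val: int) -> tuple[int, int]:
    # Inverse mapping: recover (block, offset) from the packed integer.
    return (val >> MAX_HEAP_TUPLES_PER_PAGE_BITS,
            val & ((1 << MAX_HEAP_TUPLES_PER_PAGE_BITS) - 1))

# Offsets up to 2047 round-trip; a 32-bit block plus the 11-bit offset
# field gives the 43 usable TID bits mentioned upthread.
assert uint64_to_itemptr(itemptr_to_uint64(7, 2047)) == (7, 2047)
```

An offset of 2048 (MaxOffsetNumber) is rejected by the guard above, which is the off-by-one Jeff describes.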
It just wouldn't matter -- GIN\nindexes cannot be used with the column store anyway.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 12:45:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Fri, 2021-04-30 at 10:55 -0700, Jeff Davis wrote:\n> On Fri, 2021-04-30 at 12:35 -0400, Tom Lane wrote:\n> > ISTM that would be up to the index AM. We'd need some interlocks\n> > on\n> > which index AMs could be used with which table AMs in any case, I\n> > think.\n> \n> I'm not sure why? It seems like we should be able to come up with\n> something that's generic enough.\n\nAnother point: the idea of supporting only some kinds of indexes\ndoesn't mix well with partitioning. If you declare an index on the\nparent, we should do something reasonable if one partition's table AM\ndoesn't support that index AM.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 03 May 2021 14:03:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 2:03 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Another point: the idea of supporting only some kinds of indexes\n> doesn't mix well with partitioning. If you declare an index on the\n> parent, we should do something reasonable if one partition's table AM\n> doesn't support that index AM.\n\nSure, but it either makes sense for the columnar table AM to support\nbitmap scans (or some analogous type of scan that works only slightly\ndifferently) or it doesn't. It's not at all clear which it is right now.\n\nIf it makes sense then it will of course be necessary to describe what\n\"bitmap scan\" actually means with the columnar storage table AM (plus\nyou'll still need to make some in-core changes to places like\ntidbitmap.c). 
OTOH if it doesn't make sense then that's that -- it's\ngoing to be a bit annoying in the partitioning scenario you describe,\nbut some things are bound to be *inherently* impossible, so it can't be\nhelped.\n\nIt seems senseless to *require* table AMs to support something like a\nbitmap scan. I don't think it's a coincidence that GIN is the index AM\nthat looks like it presents at least 2 problems for the columnar table\nAM. To me this suggests that this will need a much higher level\ndiscussion.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 15:07:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 2021-05-03 at 15:07 -0700, Peter Geoghegan wrote:\n> Sure, but it either makes sense for the columnar table AM to support\n> bitmap scans (or some analogous type of scan that works only slightly\n> differently) or it doesn't. It's not at all clear which it is right\n> now.\n\nIt makes sense for my columnar table AM -- there's TID locality.\n\n> If it makes sense then it will of course be necessary to describe\n> what\n> \"bitmap scan\" actually means with the columnar storage table AM (plus\n> you'll still need to make some in-core changes to places like\n> tidbitmap.c). OTOH if it doesn't make sense then that's that -- it's\n> going to be a bit annoying in the partitioning scenario you describe,\n> but some things are bound to be *inherently* impossible, so it can't\n> be\n> helped.\n\nI don't see why in-core changes are a strict requirement. It doesn't\nmake too much difference if a lossy TID doesn't correspond exactly to\nthe columnar layout -- it should be fine as long as there's locality,\nright?\n\n> It seems senseless to *require* table AMs to support something like a\n> bitmap scan. 
\n\nI am not yet convinced that it's \"senseless\", but it is optional and\nthere's probably a reason that it's not required.\n\nWe still need to address the fact that two features have had a minor\ncollision: indexes on a partitioned table and table AMs that don't\nnecessarily support all index types. It's not good to just throw an\nerror, because we could be forcing the user to manually manage the\nindexes on hundreds of partitions just because some tables have a\ndifferent AM and it doesn't support the index type.\n\nWe probably want to do something about that, but as far as I can tell,\nit's not a problem for columnar right now.\n\n> I don't think it's a coincidence that GIN is the index AM\n> that looks like it presents at least 2 problems for the columnar\n> table\n> AM. To me this suggests that this will need a much higher level\n> discussion.\n\nOne problem is that ginpostinglist.c restricts the use of offset\nnumbers higher than MaxOffsetNumber - 1. At best, that's a confusing\nand unnecessary off-by-one error that we happen to be stuck with\nbecause it affects the on-disk format. Now that I'm past that\nparticular confusion, I can live with a workaround until we do\nsomething better.\n\nWhat is the other problem with GIN?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 03 May 2021 17:15:04 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 5:15 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I don't see why in-core changes are a strict requirement. It doesn't\n> make too much difference if a lossy TID doesn't correspond exactly to\n> the columnar layout -- it should be fine as long as there's locality,\n> right?\n\nBut look at the details: tidbitmap.c uses MaxHeapTuplesPerPage as its\nMAX_TUPLES_PER_PAGE, which seems like a problem -- that's 291 with\ndefault BLCKSZ. 
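For reference, both of those ceilings fall straight out of the header formulas; the following just redoes that arithmetic (an illustrative Python translation of the macros in itemptr.h and htup_details.h, assuming default BLCKSZ = 8192, a 24-byte page header, a 23-byte heap tuple header, 4-byte line pointers, and 8-byte MAXALIGN -- none of this is the C code itself):

```python
BLCKSZ = 8192                 # default page size
ITEM_ID_DATA_SIZE = 4         # sizeof(ItemIdData), one line pointer
PAGE_HEADER_SIZE = 24         # SizeOfPageHeaderData
HEAP_TUPLE_HEADER_SIZE = 23   # SizeofHeapTupleHeader

def maxalign(n: int, alignof: int = 8) -> int:
    # MAXALIGN: round n up to the platform's maximum alignment boundary.
    return (n + alignof - 1) & ~(alignof - 1)

# itemptr.h: the most line pointers a page could ever hold.
max_offset_number = BLCKSZ // ITEM_ID_DATA_SIZE

# htup_details.h: the most heap tuples that physically fit on one page.
max_heap_tuples_per_page = (BLCKSZ - PAGE_HEADER_SIZE) // (
    maxalign(HEAP_TUPLE_HEADER_SIZE) + ITEM_ID_DATA_SIZE)

assert max_offset_number == 2048
assert max_heap_tuples_per_page == 291   # the tidbitmap.c ceiling
# Even the larger limit uses only 2048 of the 2**16 offsets a 48-bit
# TID could address: a factor of 32, i.e. the 5 wasted bits Jeff
# mentions upthread.
assert (1 << 16) // max_offset_number == 32
```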
I doubt that that restriction is something that you\ncan afford to live with, even just for the time being.\n\n> > It seems senseless to *require* table AMs to support something like a\n> > bitmap scan.\n>\n> I am not yet convinced that it's \"senseless\", but it is optional and\n> there's probably a reason that it's not required.\n\nI mean it's senseless to require it in the general case.\n\n> We still need to address the fact that two features have had a minor\n> collision: indexes on a partitioned table and table AMs that don't\n> necessarily support all index types. It's not good to just throw an\n> error, because we could be forcing the user to manually manage the\n> indexes on hundreds of partitions just because some tables have a\n> different AM and it doesn't support the index type.\n\nI don't see why that's necessarily a problem. Why, in general, should\nevery table AM be able to support every index AM?\n\nI find it puzzling that nobody can find one single thing that the\ntable AM interface *can't* do. What are the chances that the\nabstraction really is perfect?\n\n> > I don't think it's a coincidence that GIN is the index AM\n> > that looks like it presents at least 2 problems for the columnar\n> > table\n> > AM. To me this suggests that this will need a much higher level\n> > discussion.\n>\n> One problem is that ginpostinglist.c restricts the use of offset\n> numbers higher than MaxOffsetNumber - 1. At best, that's a confusing\n> and unnecessary off-by-one error that we happen to be stuck with\n> because it affects the on-disk format. Now that I'm past that\n> particular confusion, I can live with a workaround until we do\n> something better.\n>\n> What is the other problem with GIN?\n\nI just meant the tidbitmap.c stuff, and so on. There is really one big\nproblem: GIN leverages the fact that bitmap scans are all that it\nsupports in many different ways. The reality is that it was designed\nto work with heapam -- that's how it evolved. 
It seems rather unlikely\nthat problems are confined to this ginpostinglist.c representational\nissue -- which is very surface-level. The only way to figure it out is\nto try to make it work and see what happens, though, so perhaps it\nisn't worth discussing any further until that happens.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 May 2021 18:12:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 2021-05-03 at 18:12 -0700, Peter Geoghegan wrote:\n> But look at the details: tidbitmap.c uses MaxHeapTuplesPerPage as its\n> MAX_TUPLES_PER_PAGE, which seems like a problem -- that's 291 with\n> default BLCKSZ. I doubt that that restriction is something that you\n> can afford to live with, even just for the time being.\n\nOh, you're right. I missed that MaxHeapTuplesPerPage was an order of\nmagnitude smaller.\n\n> I don't see why that's necessarily a problem. Why, in general, should\n> every table AM be able to support every index AM?\n\nI didn't propose that every table AM needs to support every index type,\njust that we should do something or at least document something. It's\npretty frustrating to have to fall back to manually managing the\nindexes for dozens or hundreds of partitions when you make use of\nmultiple table AMs.\n\nWe might be conflating support for index AMs with support for features\nlike bitmap scans. If a certain kind of index fails at CREATE INDEX\ntime, that's painful for the partitioning case. But here it's more like\nthe CREATE INDEX would succeed but it would just never be used, which\nis a different kind of frustrating.\n\nWhatever we do or don't do, we should try to avoid surprises. 
I expect\ntable AMs to be used heavily with partitioning.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 03 May 2021 20:01:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Hi,\n\nOn 2021-04-30 11:51:07 -0700, Peter Geoghegan wrote:\n> I think that it's reasonable to impose some cost on index AMs here,\n> but that needs to be bounded sensibly and unambiguously. For example,\n> it would probably be okay if you had either 6 byte or 8 byte TIDs, but\n> no other variations. You could require index AMs (the subset of index\n> AMs that are ever able to store 8 byte TIDs) to directly encode which\n> width they're dealing with at the level of each IndexTuple. That would\n> create some problems for nbtree deduplication, especially in boundary\n> cases, but ISTM that you can manage the complexity by sensibly\n> restricting how the TIDs work across the board.\n\n> For example, the TIDs should always work like unsigned integers -- the\n> table AM must be willing to work with that restriction.\n\nIsn't that more a question of the encoding than the concrete representation?\n\n\n> You'd then have posting lists tuples in nbtree whose TIDs were all\n> either 6 bytes or 8 bytes wide, with a mix of each possible (though\n> not particularly likely) on the same leaf page. Say when you have a\n> table that exceeds the current MaxBlockNumber restrictions. It would\n> be relatively straightforward for nbtree deduplication to simply\n> refuse to mix 6 byte and 8 byte datums together to avoid complexity in\n> boundary cases. The deduplication pass logic has the flexibility that\n> this requires already.\n\nWhich nbtree cases do you think would have an easier time supporting\nswitching between 6 or 8 byte tids than supporting fully variable width\ntids? 
Given that IndexTupleData already is variable-width, it's not\nclear to me why supporting two distinct sizes would be harder than a\nfully variable size? I assume it's things like BTDedupState->htids?\n\n\n\n> > What's wrong with varlena headers? It would end up being a 1-byte\n> > header in practically every case, and no variable-width representation\n> > can do without a length word of some sort. I'm not saying varlena is\n> > as efficient as some new design could hypothetically be, but it\n> > doesn't seem like it'd be a big enough problem to stress about. If you\n> > used a variable-width representation for integers, you might actually\n> > save bytes in a lot of cases. An awful lot of the TIDs people store in\n> > practice probably contain several zero bytes, and if we make them\n> > wider, that's going to be even more true.\n> \n> Maybe all of this is true, and maybe it works out to be the best path\n> forward in the long term, all things considered. But whether or not\n> that's true is crucially dependent on what real practical table AMs\n> (of which there will only ever be a tiny number) actually need to do.\n> Why should we assume that the table AM cannot accept some\n> restrictions? What good does it do to legalistically define the\n> problem as a problem for index AMs to solve?\n\nI don't think anybody is arguing that AMs cannot accept any restrictions? 
I do\nthink it's pretty clear that it's not entirely obvious what the concrete set\nof proper restrictions would be, or where we won't end up needing to\nre-evaluate the limits in a few years.\n\nIf you add to that the fact that variable-width tids will often end up\nconsiderably smaller than our current tids, it's not obvious why we should use\nbitspace somewhere to indicate an 8 byte tid instead of a variable-width\ntid?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 22:01:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 2:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> That's not clear to me at all, and is the whole reason I began this\n> thread.\n>\n> a. You say \"smaller than MaxOffsetNumber\", but that's a little weird.\n> If an offset can't be MaxOffsetNumber, it's not really the maximum, is\n> it?\n\nI wasn't trying to be that precise. I see that OffsetNumberIsValid()\nreturns true if the offset is <= MaxOffsetNumber, so therefore I agree\nthat using exactly MaxOffsetNumber ought to work.\n\n> b. If you actually meant \"less than or equal to MaxOffsetNumber\",\n> that will fail with the GIN posting list issue raised in my first\n> email. Do you agree that's a bug?\n\nGiven the above, yes.\n\n> c. Why can't we go all the way up to MovedPartitionsOffsetNumber - 1?\n> Right now, MaxOffsetNumber is poorly named, because it actually\n> represents a number slightly higher than the maximum number of\n> items that can fit on a page. 
That essentially wastes 5 bits of address\n> space for no obvious reason.\n\nBecause of stuff like this:\n\n[rhaas EDBAS]$ git grep -F '[MaxOffsetNumber'\nsrc/backend/access/gist/gistvacuum.c: OffsetNumber\ntodelete[MaxOffsetNumber];\nsrc/backend/access/gist/gistvacuum.c: OffsetNumber\ntodelete[MaxOffsetNumber];\nsrc/backend/access/gist/gistvacuum.c: BlockNumber\nleafs_to_delete[MaxOffsetNumber];\nsrc/backend/access/hash/hash.c: OffsetNumber deletable[MaxOffsetNumber];\nsrc/backend/access/hash/hashinsert.c: OffsetNumber\ndeletable[MaxOffsetNumber];\nsrc/backend/access/hash/hashovfl.c: OffsetNumber\ndeletable[MaxOffsetNumber];\n\nMaybe changing those places to use dynamic allocation wouldn't hurt\nanything in terms of performance, but I'm not sure. Making them 32\ntimes larger categorically does not seem like a good idea.\n\nThere might be other dependencies on this value in other parts of the\ncode; I'm not sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 12:56:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, 2021-05-03 at 15:07 -0700, Peter Geoghegan wrote:\n> It seems senseless to *require* table AMs to support something like a\n> bitmap scan.\n\nI thought about this some more, and this framing is backwards.\nItemPointers are fundamental to the table AM API: they are passed in to\nrequired methods, and expected to be returned[1].\n\nBitmap scans are optional, but that should be determined by whether the\nauthor wants to implement the bitmap scan methods of their table AM.\nThe fine details of ItemPointer representation should not be making the\ndecision for them.\n\nWe still need to answer the core question that started this thread:\nwhat the heck is an ItemPointer, anyway?\n\nAfter looking at itemptr.h, off.h, ginpostinglist.c and tidbitmap.c, it\nseems that an ItemPointer is a block number from [0, 
0xFFFFFFFe]; and\nan offset number from [1, MaxHeapTuplesPerPage] which is by default [1,\n291].\n\nAttached is a patch that clarifies what I've found so far and gives\nclear guidance to table AM authors. Before I commit this I'll make sure\nthat following the guidance actually works for the columnar AM.\n\nRegards,\n\tJeff Davis\n\n[1] Even for the current version of columnar, which doesn't support\nindexes or updates, we implemented a hack to provide dummy TIDs because\nsome places expect them (see analyze.c:compare_rows()).", "msg_date": "Tue, 04 May 2021 11:52:30 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Tue, 2021-05-04 at 12:56 -0400, Robert Haas wrote:\n> b. If you actually meant \"less than or equal to MaxOffsetNumber\",\n> > that will fail with the GIN posting list issue raised in my first\n> > email. Do you agree that's a bug?\n> \n> Given the above, yes.\n\nIf we just subtracted one, it would fit in 11 bits, and that would be\nfine because zero is invalid anyway. 
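Concretely, the ceiling being described looks roughly like the following sketch, modeled on ginpostinglist.c's conversion of an ItemPointer into a 43-bit integer before delta/varbyte encoding (names and bounds are paraphrased from memory, not copied from the source):

```python
# Sketch of GIN's 43-bit item-pointer packing: a 32-bit block number in
# the high bits, an 11-bit offset in the low bits. The 11-bit field is
# why offsets must stay strictly below 2048, i.e. at most
# MaxOffsetNumber - 1 at default BLCKSZ.
OFFSET_BITS = 11  # room for offsets 1..2047


def itemptr_to_uint64(block, offset):
    assert 0 <= block <= 0xFFFFFFFE          # InvalidBlockNumber excluded
    assert 1 <= offset < (1 << OFFSET_BITS)  # the restriction at issue
    return (block << OFFSET_BITS) | offset


# Every encodable pointer fits in 43 bits:
print(itemptr_to_uint64(0xFFFFFFFE, 2047) < (1 << 43))  # True
```

(MaxOffsetNumber is BLCKSZ / sizeof(ItemIdData) = 8192 / 4 = 2048 by default, which is what makes the off-by-one visible here.)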
Unfortunately, it's on disk, so I\nthink we are stuck with it.\n\nRegardless, the other limitation in tidbitmap.c is more strict anyway\n(MaxHeapTuplesPerPage=291).\n\n> > Because of stuff like this:\n> \n> [rhaas EDBAS]$ git grep -F '[MaxOffsetNumber'\n> src/backend/access/gist/gistvacuum.c: OffsetNumber\n> todelete[MaxOffsetNumber];\n> src/backend/access/gist/gistvacuum.c: OffsetNumber\n> todelete[MaxOffsetNumber];\n> src/backend/access/gist/gistvacuum.c: BlockNumber\n> leafs_to_delete[MaxOffsetNumber];\n> src/backend/access/hash/hash.c: OffsetNumber\n> deletable[MaxOffsetNumber];\n> src/backend/access/hash/hashinsert.c: OffsetNumber\n> deletable[MaxOffsetNumber];\n> src/backend/access/hash/hashovfl.c: OffsetNumber\n> deletable[MaxOffsetNumber];\n\nI don't think those are problems because they represent items on an\n*index* page, not ItemPointers coming from a table.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 04 May 2021 12:04:12 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Tue, May 4, 2021 at 11:52 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Mon, 2021-05-03 at 15:07 -0700, Peter Geoghegan wrote:\n> > It seems senseless to *require* table AMs to support something like a\n> > bitmap scan.\n>\n> I thought about this some more, and this framing is backwards.\n> ItemPointers are fundamental to the table AM API: they are passed in to\n> required methods, and expected to be returned[1].\n\nI prefer my framing, but okay, let's go with yours. What difference\ndoes it make?\n\nThe fact that we're starting with the table AM API doesn't change the\nfundamental fact that quite a few implementation details that are\nlocal to code like the GIN AM and tidbitmap.c were (rightly or\nwrongly) simply built with heapam in mind. The fact that that's true\nis hardly surprising, and hardly argues against the idea of having a\ntable AM to begin with. 
There is no getting around the need to talk\nabout the first principles here, and to talk about the specific\nimplications for your particular table AM (perhaps others too).\n\nAbstractions are only useful when they serve concrete implementations.\nOf course they should be as general and abstract as possible -- but no\nmore.\n\n> Bitmap scans are optional, but that should be determined by whether the\n> author wants to implement the bitmap scan methods of their table AM.\n> The fine details of ItemPointer representation should not be making the\n> decision for them.\n\nA distinction without a difference. If bitmap scans are optional and\nsome index AMs are 100% built from the ground up to work only with\nbitmap scans, then those index AMs are effectively optional (or\noptional to the extent that bitmap scans themselves are optional). I\nhave absolutely no idea how it would be possible to make GIN work\nwithout having index scans. It would be so different that it wouldn't\nbe GIN anymore.\n\nI think maybe it is possible for GIN to work with your column store\ntable AM in particular. Why aren't we talking about that concrete\nissue, or something like that? We're talking about this abstraction as\nif it must already be perfect, and therefore the standard by which\nevery other thing needs to be measured. But why?\n\n> We still need to answer the core question that started this thread:\n> what the heck is an ItemPointer, anyway?\n>\n> After looking at itemptr.h, off.h, ginpostinglist.c and tidbitmap.c, it\n> seems that an ItemPointer is a block number from [0, 0xFFFFFFFe]; and\n> an offset number from [1, MaxHeapTuplesPerPage] which is by default [1,\n> 291].\n>\n> Attached is a patch that clarifies what I've found so far and gives\n> clear guidance to table AM authors. Before I commit this I'll make sure\n> that following the guidance actually works for the columnar AM.\n\nI don't get what the point of this patch is. 
Obviously all of the\nparticulars here are just accidents of history that we ought to change\nsooner or later anyway. I don't have any objection to writing them all\ndown someplace official. But what difference does it make if there is\nno underlying *general* set of principles behind any of it? This\ndefinition of a TID can break at any time because it just isn't useful\nor general. This is self-evident -- your definition includes\nMaxHeapTuplesPerPage! How could that possibly be anything other than\nan accident whose details are completely arbitrary and therefore\nsubject to change at any time?\n\nThis is not necessarily a big deal! We can fix it by reconciling\nthings in a pragmatic, bottom-up way. That's what I expected would\nhappen all along. The table AM is not the Ark of the Covenant (just\nlike tidbitmap.c, or anything else).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 May 2021 13:51:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Mon, May 3, 2021 at 10:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > For example, the TIDs should always work like unsigned integers -- the\n> > table AM must be willing to work with that restriction.\n>\n> Isn't that more a question of the encoding than the concrete representation?\n\nI don't think so, no. How does B-Tree deduplication work without\nsomething like that? The fact of the matter is that things are very\ntightly coupled in all kinds of ways. I'm all for decoupling them to\nthe extent required to facilitate a new and useful table AM. But I am\nunlikely to commit to months of work based on abstract arguments and\nfuture work. I think that you'll find that I'm not the only one that\nsees it that way.\n\n> > You'd then have posting lists tuples in nbtree whose TIDs were all\n> > either 6 bytes or 8 bytes wide, with a mix of each possible (though\n> > not particularly likely) on the same leaf page. 
Say when you have a\n> > table that exceeds the current MaxBlockNumber restrictions. It would\n> > be relatively straightforward for nbtree deduplication to simply\n> > refuse to mix 6 byte and 8 byte datums together to avoid complexity in\n> > boundary cases. The deduplication pass logic has the flexibility that\n> > this requires already.\n>\n> Which nbtree cases do you think would have an easier time supporting\n> switching between 6 or 8 byte tids than supporting fully variable width\n> tids? Given that IndexTupleData already is variable-width, it's not\n> clear to me why supporting two distinct sizes would be harder than a\n> fully variable size? I assume it's things like BTDedupState->htids?\n\nStuff like that, yeah. The space utilization stuff inside\nnbtsplitloc.c and nbtdedup.c pretty much rests on the assumption that\nTIDs are fixed width. Obviously there are some ways in which that\ncould be revised if there was a really good reason to do so -- like an\nactual concrete reason with some clear basis in reality. You have no\nobligation to make me happy, but FYI I find arguments like \"but why\nwouldn't you just allow arbitrary-width TIDs?\" to be deeply\nunconvincing. Do you really expect me to do a huge amount of work and\nrisk a lot of new bugs, just to facilitate something that may or may\nnot ever happen? Would you do that if you were in my position?\n\n> I don't think anybody is arguing that AMs cannot accept any restrictions? I do\n> think it's pretty clear that it's not entirely obvious what the concrete set\n> of proper restrictions would be, where we won't end up needing to re-evaluate\n> limits in a few years are.\n\nI'm absolutely fine with the fact that the table AM has these issues\n-- I would expect it. I would like to help! I just find these wildly\nabstract discussions to be close to a total waste of time. The idea\nthat we should let a thousand table AM flowers bloom and then review\nwhat to do seems divorced from reality. 
Even if the table AM becomes\nwildly successful there will still only have been maybe 2 - 4 table\nAMs that ever really had a chance. Supposing that we have no idea what\nthey could possibly look like just yet is just navel gazing.\n\n> If you add to that the fact that variable-width tids will often end up\n> considerably smaller than our current tids, it's not obvious why we should use\n> bitspace somewhere to indicate an 8 byte tid instead of a a variable-width\n> tid?\n\nIt's not really the space overhead. It's the considerable complexity\nthat it would add.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 May 2021 14:13:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Hi,\n\nOn 2021-05-04 14:13:36 -0700, Peter Geoghegan wrote:\n> On Mon, May 3, 2021 at 10:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > > For example, the TIDs should always work like unsigned integers -- the\n> > > table AM must be willing to work with that restriction.\n> >\n> > Isn't that more a question of the encoding than the concrete representation?\n> \n> I don't think so, no. How does B-Tree deduplication work without\n> something like that? The fact of the matter is that things are very\n> tightly coupled in all kinds of ways.\n\nWhat does the deduplication actually require from tids? Isn't it just\nthat you need to be able to compare tids?\n\n\n\n> > > You'd then have posting lists tuples in nbtree whose TIDs were all\n> > > either 6 bytes or 8 bytes wide, with a mix of each possible (though\n> > > not particularly likely) on the same leaf page. Say when you have a\n> > > table that exceeds the current MaxBlockNumber restrictions. It would\n> > > be relatively straightforward for nbtree deduplication to simply\n> > > refuse to mix 6 byte and 8 byte datums together to avoid complexity in\n> > > boundary cases. 
The deduplication pass logic has the flexibility that\n> > > this requires already.\n> >\n> > Which nbtree cases do you think would have an easier time supporting\n> > switching between 6 or 8 byte tids than supporting fully variable width\n> > tids? Given that IndexTupleData already is variable-width, it's not\n> > clear to me why supporting two distinct sizes would be harder than a\n> > fully variable size? I assume it's things like BTDedupState->htids?\n> \n> Stuff like that, yeah. The space utilization stuff inside\n> nbtsplitloc.c and nbtdedup.c pretty much rests on the assumption that\n> TIDs are fixed width.\n\nHm. It doesn't seem like that'd be all that hard to adjust / that\nit'd be meaningfully easier to support only one other type of tid width.\n\n\n> Obviously there are some ways in which that could be revised if there\n> was a really good reason to do so -- like an actual concrete reason\n> with some clear basis in reality.\n\nThe example of indirect indexes has been brought up repeatedly - you\njust didn't respond to it?\n\n\n> You have no obligation to make me happy, but FYI I find arguments like\n> \"but why wouldn't you just allow arbitrary-width TIDs?\" to be deeply\n> unconvincing. Do you really expect me to do a huge amount of work and\n> risk a lot of new bugs, just to facilitate something that may or may\n> not ever happen? Would you do that if you were in my position?\n\nSo far nobody has expressed any expectation of you doing specific work\nin this thread as far as I can see? I certainly didn't intend to. 
I\nthink it's perfectly normal to discuss tradeoffs and disagree about\nthem?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 May 2021 17:40:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Tue, 2021-05-04 at 13:51 -0700, Peter Geoghegan wrote:\n> I think maybe it is possible for GIN to work with your column store\n> table AM in particular. Why aren't we talking about that concrete\n> issue, or something like that?\n\nHappy to.\n\nAt this point I'd rather obey the constraint that the offset number\nfalls in the range [1, MaxHeapTuplesPerPage] so that columnar will have\nbitmap index scans and GIN. If you see a way to work around this\nlimitation and still have GIN and bitmap index support, so much the\nbetter.\n\nThe cost of obeying this limitation is that, in a workload involving\nlots of small transactions, columnar might run out of TID space and\nforce a VACUUM FULL. In that case, VACUUM FULL is probably a good idea\nanyway (to coalesce the tuples for better compression), but forcing it\nis obviously not ideal. The reason columnar will run out of TID space\nmore quickly for small operations is because small inserts might\nreserve more TIDs then they actually use, leaving gaps; and small\nupdates/deletes will fragment the TID space.\n\nThe benefit of obeying this limitation is that I expect that bitmap\nindex scans will work well with columnar because they avoid random\naccess. And it seems like a nice benefit if we can support the full\nrange of index AMs for columnar.\n\n> I don't get what the point of this patch is. Obviously all of the\n> particulars here are just accidents of history that we ought to\n> change\n> sooner or later anyway.\n\nThe point is if \"sooner\" turns into \"later\" then we at least have some\nguidance for table AM authors in the interim. 
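As a sketch of that interim guidance: the limits quoted so far in this thread boil down to a check like the one below. It is descriptive of today's code at default BLCKSZ, not a stable contract:

```python
# Interim "safe TID" bounds for a table AM that wants bitmap scans and
# GIN to keep working unchanged, per the limits discussed in this thread.
MAX_BLOCK = 0xFFFFFFFE   # InvalidBlockNumber (0xFFFFFFFF) is reserved
MAX_SAFE_OFFSET = 291    # MaxHeapTuplesPerPage at default BLCKSZ


def tid_is_bitmap_safe(block, offset):
    """True if (block, offset) survives tidbitmap.c and GIN as-is."""
    return 0 <= block <= MAX_BLOCK and 1 <= offset <= MAX_SAFE_OFFSET


print(tid_is_bitmap_safe(0, 291), tid_is_bitmap_safe(0, 292))  # True False
```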
But if nobody else thinks\nthat's useful, then so be it.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 04 May 2021 18:18:45 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Tue, May 4, 2021 at 5:40 PM Andres Freund <andres@anarazel.de> wrote:\n> What does the deduplication actually require from tids? Isn't it just\n> that you need to be able to compare tids?\n\nIt's hard to know for sure what is essential to the design, and what\ncan be discarded. Though I can say for sure that it depends on TIDs\nbeing cheap to compare.\n\n> Hm. It doesn't seem like that'd be all that hard to adjust / that\n> it'd be meaningfully easier to support only one other type of tid width.\n\nIt depends on equi-sized TIDs within a posting list, too -- see\n\"Posting list splits\" from the nbtree README for some idea of what I\nmean. The reason that nbtree deduplication could be enabled by default\n(the reason why it has very little downside) is because there are\nvirtually no special cases, and because the WAL overhead for posting\nlist splits is so low (it's only a tiny bit higher than a simple\nbtinsert()).\n\nAnyway, deduplication could probably still work in about the same way\nif there were some sensible limits on how generalized TIDs could work.\n\n> > Obviously there are some ways in which that could be revised if there\n> > was a really good reason to do so -- like an actual concrete reason\n> > with some clear basis in reality.\n>\n> The example of indirect indexes has been brought up repeatedly - you\n> just didn't respond to it?\n\nI did respond to it -- in detail. I don't see how it will ever be\npossible to make indirect indexes work -- at least within anything\nlike the current framework. 
And even if I was wrong there, it still\nwouldn't mean that the project was particularly worth pursuing.\n\nWhether or not you find my arguments about indirect indexes convincing\nis ultimately beside the point. The fact is that I just don't believe\nthat that's ever going to happen (it is a fact that that's what I\nbelieve). This will discourage me from becoming involved in anything\nthat touches on how the table AM thinks about TIDs, particularly in\nindex AMs. As far as I'm concerned this TID stuff is up in the air\n(except maybe for something like zheap which is sufficiently close to\nheapam that it doesn't matter).\n\n> So far nobody has expressed any expectation of you doing specific work\n> in this thread as far as I can see? I certainly didn't intend to. I\n> think it's perfectly normal to discuss tradeoffs and disagree about\n> them?\n\nThat's why I was careful to say \"You have no obligation to make me\nhappy\". Of course it's true that I could easily just not get involved\n-- that was never my concern.\n\nHere is my concern: I have an obligation to make it clear that I think\nthat you really ought to straighten out this business with\ngeneralizing TIDs before too long. Not because I say so, but because\nit's holding up progress in general. If you aren't getting cooperation\nfrom people who work on indexing (could be somebody else), then\nconsider the possibility that this business with TIDs and bitmap scans\nhas a lot to do with it. Most people are not as outspoken as I am.\n\nI'm not at all surprised that this happened, simply because of the\nhistory -- it makes sense. I'm glad that the table AM interface\nexists, and it was always going to be something that required\nrefinement over time. I want the table AM to be successful. 
If I\ndidn't care then I would say nothing at all.\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 May 2021 18:24:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Tue, May 4, 2021 at 9:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Here is my concern: I have an obligation to make it clear that I think\n> that you really ought to straighten out this business with\n> generalizing TIDs before too long. Not because I say so, but because\n> it's holding up progress in general. If you aren't getting cooperation\n> from people who work on indexing (could be somebody else), then\n> consider the possibility that this business with TIDs and bitmap scans\n> has a lot to do with it. Most people are not as outspoken as I am.\n\nIt seems to me that we're doing a lot of disagreeing given that, as I\nsee it, there are only relatively minor differences between the\npositions of the various people here. Andres and I are, I think,\nrelatively convinced that variable-width TIDs would let us do things\nthat aren't otherwise possible, and that those things are potentially\nuseful and I would even venture to say cool. I don't believe you\ndisagree with that, but you think it's going to be too much work to\nimplement. Fair enough; anyone can try it who is interested and see\nhow far they get. Anyone who thinks it's going to be impossibly hard\nprobably will prefer not to try, and that's OK too.\n\nBut if we take that off the table, what about a less-ambitious\ngeneralization of the TID mechanism? I can't really see anyone putting\nup a serious argument against allowing all 48 bits of space available\nin the existing TID format to be used which, as Jeff points out, is\nnot currently the case. So anyone who wants to try to write that patch\nis free to do so. 
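Rough arithmetic on what "all 48 bits" would buy, assuming block numbers 0..0xFFFFFFFE stay valid and the full 16-bit offset range 1..65535 is opened up (today's effective per-page cap of 291 is shown for contrast):

```python
# Addressable tuples with the existing 6-byte TID, depending on how much
# of the offset range a table AM is actually allowed to use.
blocks = 0xFFFFFFFE + 1   # usable block numbers 0..0xFFFFFFFE
full_offsets = 0xFFFF     # offsets 1..65535 (0 is invalid)
heap_offsets = 291        # MaxHeapTuplesPerPage at default BLCKSZ

print(blocks * full_offsets)  # 281470681677825 (~2.8e14)
print(blocks * heap_offsets)  # 1249835482845   (~1.2e12)
```

That is roughly a 225x increase in addressable tuples without changing the on-disk TID width, which is the appeal of the less-ambitious generalization described above.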
I don't have a clear idea how to make that work, to\nbe honest, but my limited supply of ideas need not prevent anyone else\nfrom trying theirs.\n\nThere might be some slight disagreement about whether it's useful to\ngeneralize TIDs from a 48-bit address space to a 64-bit address space\nwithout making it fully general. Like Andres, I am unconvinced that's\nmeaningfully easier, and I am convinced that it's meaningfully less\ngood, but other people can disagree and that's fine. I'm perfectly\nwilling to change my opinion if somebody shows up with a patch that\ndemonstrates the value of this approach.\n\nThe main point here is one that I think you made a few emails back:\nthe limitations of the current system are defined by what will\nactually work with the code as it exists today, not some mailing list\ndiscussion. It's too early for the project to commit to stability in\nthis area; we have not managed to get a single AM apart from heapam\ninto core, and that situation doesn't appear likely to change in the\nnear future. If and when we have say 5 of those we can probably\narticulate some intelligent ideas about what we think the patterns\nthat need to hold for future AMs are, but it's reckless to extrapolate\nfrom 1 working example, and right now that's all we have.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 10:27:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 7:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It seems to me that we're doing a lot of disagreeing given that, as I\n> see it, there are only relatively minor differences between the\n> positions of the various people here.\n\nI'm being very vocal here because I'm concerned that we're going about\ngeneralizing TIDs in the wrong way. To me it feels like there is a\nloss of perspective about what really matters. 
There just isn't that\nmany table AM TID designs that could ever work, and even among those\nschemes that could ever work there is a pretty clear hierarchy. This\nblue sky thinking about generalizing TIDs 2 years in seems *weird* to\nme.\n\nNobody is obligated to reassure me. I felt that this had to be said.\n\n> Andres and I are, I think,\n> relatively convinced that variable-width TIDs would let us do things\n> that aren't otherwise possible, and that those things are potentially\n> useful and I would even venture to say cool. I don't believe you\n> disagree with that, but you think it's going to be too much work to\n> implement. Fair enough; anyone can try it who is interested and see\n> how far they get. Anyone who thinks it's going to be impossibly hard\n> probably will prefer not to try, and that's OK too.\n\nI think that's accurate. But it's easy to not disagree with the idea\nthat variable-width TIDs might lead to something interesting. Talk is\ncheap.\n\nNo other database system has something like indirect indexes. They\nhave clustered indexes, but that's rather different. I think that\nindirect indexes were a design that was concerned about the issue of\nwrite amplification from non-HOT updates. But do I even remember the\ndetails correctly? We're talking about indirect indexes as if that was\nan idea whose high level user-visible goals were clear, but I don't\neven think that that much is true. This kind of thing concerns me. It\nvery much feels like failing to see the forest for the trees.\n\n> But if we take that off the table, what about a less-ambitious\n> generalization of the TID mechanism? I can't really see anyone putting\n> up a serious argument against allowing all 48 bits of space available\n> in the existing TID format to be used which, as Jeff points out, is\n> not currently the case. So anyone who wants to try to write that patch\n> is free to do so.
> I don't have a clear idea how to make that work, to\n> be honest, but my limited supply of ideas need not prevent anyone else\n> from trying theirs.\n\nI agree that we should focus on what we can agree on. It seems as if\nwe all more or less agree on this much.\n\n> There might be some slight disagreement about whether it's useful to\n> generalize TIDs from a 48-bit address space to a 64-bit address space\n> without making it fully general. Like Andres, I am unconvinced that's\n> meaningfully easier, and I am convinced that it's meaningfully less\n> good, but other people can disagree and that's fine. I'm perfectly\n> willing to change my opinion if somebody shows up with a patch that\n> demonstrates the value of this approach.\n\nIt's going to be hard if not impossible to provide empirical evidence\nfor the proposition that 64-bit wide TIDs (alongside 48-bit TIDs) are\nthe way to go. Same with any other scheme. We're talking way too much\nabout TIDs themselves and way too little about table AM use cases, the\nway the data structures might work in new table AMs, and so on.\n\n> The main point here is one that I think you made a few emails back:\n> the limitations of the current system are defined by what will\n> actually work with the code as it exists today, not some mailing list\n> discussion.\n\nRight.\n\n> It's too early for the project to commit to stability in\n> this area; we have not managed to get a single AM apart from heapam\n> into core, and that situation doesn't appear likely to change in the\n> near future.\n\nI would be happy if we could commit to committing to stability. I\nreally don't think that it's *that* hard to move significantly closer\nto a design that describes just how close to heapam a table AM should\nbe.
It doesn't commit the table AM to all that many details.\n\n> If and when we have say 5 of those we can probably\n> articulate some intelligent ideas about what we think the patterns\n> that need to hold for future AMs are, but it's reckless to extrapolate\n> from 1 working example, and right now that's all we have.\n\nI don't think that there will be 5 table AMs that are credible to\nusers at any point in the future. In any case there only needs to be 1\nor 2 good ones for the table AM to have been a resounding success.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 08:50:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm being very vocal here because I'm concerned that we're going about\n> generalizing TIDs in the wrong way. To me it feels like there is a\n> loss of perspective about what really matters.\n\nWell, which things matter is a question of opinion, not fact.\n\n> No other database system has something like indirect indexes. They\n> have clustered indexes, but that's rather different.\n\nI don't think this is true at all. If you have a clustered index -\ni.e. the table is physically arranged according to the index ordering\n- then your secondary indexes all pretty much have to be what we're\ncalling indirect indexes. They can hardly point to a physical\nidentifier if rows are being moved around. I believe InnoDB works this\nway, and I think Oracle's index-organized tables do too. I suspect\nthere are other examples.\n\n> > There might be some slight disagreement about whether it's useful to\n> > generalize TIDs from a 48-bit address space to a 64-bit address space\n> > without making it fully general. Like Andres, I am unconvinced that's\n> > meaningfully easier, and I am convinced that it's meaningfully less\n> > good, but other people can disagree and that's fine.
> > I'm perfectly\n> > willing to change my opinion if somebody shows up with a patch that\n> > demonstrates the value of this approach.\n>\n> It's going to be hard if not impossible to provide empirical evidence\n> for the proposition that 64-bit wide TIDs (alongside 48-bit TIDs) are\n> the way to go. Same with any other scheme. We're talking way too much\n> about TIDs themselves and way too little about table AM use cases, the\n> way the data structures might work in new table AMs, and so on.\n\nI didn't mean that it has to be a test result showing that 64-bit TIDs\noutperform 56-bit TIDs or something. I just meant there has to be a\nreason to believe it's good, which could be based on a discussion of\nuse cases or whatever. If we *don't* have a reason to believe it's\ngood, we shouldn't do it.\n\nMy point is that so far I am not seeing a whole lot of value of this\nproposed approach. For a 64-bit TID to be valuable to you, one of two\nthings has to be true: you either don't care about having indexes that\nstore TIDs on your new table type, or the index types you want to use\ncan store those 64-bit TIDs. Now, I have not yet heard of anyone\nworking on a table AM who does not want to be able to support adding\nbtree indexes. There may be someone that I don't know about, and if\nso, fine. But otherwise, we need a way to store them. And that\nrequires changing the page format for btree indexes. But surely we do\nnot want to make all TIDs everywhere wider in future btree versions,\nso at least two TID widths - 6 bytes and 8 bytes - would have to be\nsupported. And if we're at all going to do that, I think it's\ncertainly worth asking whether supporting varlena TIDs would really be\nall that much harder.
You seem to think it is, and you might be right,\nbut I'm not ready to give up, because I do not see how we are ever\ngoing to get global indexes or indirect indexes without doing it, and\nthose would be good features to have.\n\nIf we can't ever get them, so be it, but you seem to kind of be saying\nthat things like global indexes and indirect indexes are hard, and\ntherefore they don't count as reasons why we might want variable-width\nTIDs. But one very large reason why those things are hard is that they\nrequire variable-width TIDs, so AFAICS this boils down to saying that\nwe don't want the feature because it's hard to implement. But we\nshould not conflate feasibility with desirability. I am quite sure\nthat lots of people want global indexes. The number of people who want\nindirect indexes is in my estimation much smaller, but it's probably\nnot zero, or else Alvaro wouldn't have tried his hand at writing a\npatch. Whether we can *get* those things is in doubt; whether it will\nhappen in the near future is very much in doubt. But I at least am not\nin doubt about whether people want it, because I hear complaints about\nthe lack of global indexes on an almost-daily basis. If those\ncomplaints are all from people hoping to fake me out into spending\ntime on something that is worthless to them, my colleagues are very\ngood actors.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 12:42:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, 2021-05-05 at 10:27 -0400, Robert Haas wrote:\n> It's too early for the project to commit to stability in\n> this area; we have not managed to get a single AM apart from heapam\n> into core\n\n\"In core\" shouldn't matter.
In fact, if it's in core, stability of the\nAPIs is much less important.\n\n> If and when we have say 5 of those\n\nThat seems like a standard that we won't reach in any reasonable amount\nof time.\n\n> we can probably\n> articulate some intelligent ideas about what we think the patterns\n> that need to hold for future AMs are, but it's reckless to\n> extrapolate\n> from 1 working example, and right now that's all we have.\n\nWe should count columnar as a second example. While it doesn't support\neverything that heap does, we are actively working on it and it's\ngaining features quickly. It's also showing some impressive real-world\nresults.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 May 2021 10:12:59 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 9:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, May 5, 2021 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I'm being very vocal here because I'm concerned that we're going about\n> > generalizing TIDs in the wrong way. To me it feels like there is a\n> > loss of perspective about what really matters.\n>\n> Well, which things matter is a question of opinion, not fact.\n\nI'm not trying to win an argument here. I am giving an opinion in the\nhopes that it leads to some kind of useful synthesis, based on all of\nour opinions.\n\n> > No other database system has something like indirect indexes. They\n> > have clustered indexes, but that's rather different.\n>\n> I don't think this is true at all. If you have a clustered index -\n> i.e. the table is physically arranged according to the index ordering\n> - then your secondary indexes all pretty much have to be what we're\n> calling indirect indexes. They can hardly point to a physical\n> identifier if rows are being moved around. I believe InnoDB works this\n> way, and I think Oracle's index-organized tables do too.
> I suspect\n> there are other examples.\n\nBut these systems don't have indirect indexes *on a heap table*! Why\nwould they ever do it that way? They already have rowid/TID as a\nstable identifier of logical rows, so having indirect indexes that\npoint to a heap table's rows would be strictly worse than the generic\napproach for indexes on a heap table.\n\nWhat we call indirect indexes seem to me to be a failed attempt to\nsolve the \"TID is not a stable identifier of logical row\" issue that\nis baked-in to Postgres. If I thought it was worth solving that\nproblem then I suppose I'd solve it directly. The \"indirection\" of\nindirect indexes actually buys you nothing! It just moves some of the\nproblem somewhere else, at the cost of even more complexity. Indirect\nindexes (without a clustered index) are a muddled idea.\n\nOf course I accept that clustered indexes make sense in general\n(though less and less these days). But the fact that these systems\n\"use indirect indexes\" for secondary indexes is precisely why\nclustered indexes don't seem like a great design with modern hardware!\nShould we invest a huge amount of work in order to have all of the\ndisadvantages, and none of the advantages?\n\n> My point is that so far I am not seeing a whole lot of value of this\n> proposed approach. For a 64-bit TID to be valuable to you, one of two\n> things has to be true: you either don't care about having indexes that\n> store TIDs on your new table type, or the index types you want to use\n> can store those 64-bit TIDs. Now, I have not yet heard of anyone\n> working on a table AM who does not want to be able to support adding\n> btree indexes. There may be someone that I don't know about, and if\n> so, fine. But otherwise, we need a way to store them. And that\n> requires changing the page format for btree indexes.
> But surely we do\n> not want to make all TIDs everywhere wider in future btree versions,\n> so at least two TID widths - 6 bytes and 8 bytes - would have to be\n> supported.\n\nI agree that we don't want a performance/space overhead for simple\ncases that are quite happy with the current format.\n\n> And if we're at all going to do that, I think it's\n> certainly worth asking whether supporting varlena TIDs would really be\n> all that much harder. You seem to think it is, and you might be right,\n> but I'm not ready to give up, because I do not see how we are ever\n> going to get global indexes or indirect indexes without doing it, and\n> those would be good features to have.\n\nI think that global indexes are well worth having, and should be\nsolved some completely different way. The partition key can be an\nadditive thing. I strongly suspect that indirect indexes (without a\nclustered index) are 100% useless in both theory and practice, so\nnaturally I have little to no interest.\n\nThe difficulty of supporting (say) 6 byte and 8 byte TIDs together is\nvastly lower than variable-width TIDs, for all kinds of reasons. See\nmy remarks to Andres upthread about deduplication.\n\n> If we can't ever get them, so be it, but you seem to kind of be saying\n> that things like global indexes and indirect indexes are hard, and\n> therefore they don't count as reasons why we might want variable-width\n> TIDs. But one very large reason why those things are hard is that they\n> require variable-width TIDs, so AFAICS this boils down to saying that\n> we don't want the feature because it's hard to implement.\n\nMore like very hard to implement for a very low benefit.\n\n> But we\n> should not conflate feasibility with desirability.
> I am quite sure\n> that lots of people want global indexes.\n\nI do too!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 10:15:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, 2021-05-05 at 08:50 -0700, Peter Geoghegan wrote:\n> There just isn't that\n> many table AM TID designs that could ever work, and even among those\n> schemes that could ever work there is a pretty clear hierarchy. This\n> blue sky thinking about generalizing TIDs 2 years in seems *weird* to\n> me.\n\nI am happy to keep table AM discussions concrete, as I have plenty of\nconcrete problems which I would like to turn into proposals.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 May 2021 10:21:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 1:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> \"In core\" shouldn't matter. In fact, if it's in core, stability of the\n> APIs is much less important.\n\nI don't know what to say here.
I think it's unrealistic to believe\nthat a very new API that has only 1 in-core user is going to be fully\nstable, or that we can know how it might evolve. I can understand why\nyou and probably other people want that, but if somebody figures out a\nway to make some part of core significantly better and it requires\nchanging that API, they're going to change the API, not give up on the\nidea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 13:32:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 10:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't know what to say here. I think it's unrealistic to believe\n> that a very new API that has only 1 in-core user is going to be fully\n> stable, or that we can know how it might evolve. I can understand why\n> you and probably other people want that, but if somebody figures out a\n> way to make some part of core significantly better and it requires\n> changing that API, they're going to change the API, not give up on the\n> idea.\n\nI strongly agree. More generally, we need to decide what downsides\nwe're willing to live with.\n\nWhat we have right now has little chance of failing. It also has\nlittle chance of succeeding (except for something like zheap, which\ncan presumably get by with the heapam's idea of TID).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 10:48:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 1:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I don't think this is true at all. If you have a clustered index -\n> > i.e. the table is physically arranged according to the index ordering\n> > - then your secondary indexes all pretty much have to be what we're\n> > calling indirect indexes. They can hardly point to a physical\n> > identifier if rows are being moved around. I believe InnoDB works this\n> > way, and I think Oracle's index-organized tables do too. I suspect\n> > there are other examples.\n>\n> But these systems don't have indirect indexes *on a heap table*! Why\n> would they ever do it that way? They already have rowid/TID as a\n> stable identifier of logical rows, so having indirect indexes that\n> point to a heap table's rows would be strictly worse than the generic\n> approach for indexes on a heap table.\n\nOne advantage of indirect indexes is that you can potentially avoid a\nlot of writes to the index. If a non-HOT update is performed, but the\nprimary key is not updated, the index does not need to be touched.
I\nthink that's a potentially significant savings, even if bottom-up\nindex deletion would have prevented the page splits. Similarly, you\ncan mark a dead line pointer unused without having to scan the\nindirect index, because the index isn't pointing to that dead line\npointer anyway.\n\nHmm, but I guess you have another cleanup problem. What prevents\nsomeone from inserting a new row with the same primary key as a\npreviously-deleted row but different values in some indirectly-indexed\ncolumn? Then the old index entries, if still present, could mistakenly\nrefer to the new row. I don't know whether Alvaro thought of that\nproblem when he was working on this previously, or whether he solved\nit somehow. Possibly that's a big enough problem that the whole idea\nis dead in the water, but it's not obvious to me that this is so.\n\nAnd, anyway, this whole argument is predicated on the fact that the\nonly table AM we have right now is heapam. If we had a table AM that\norganized the data by primary key value, we'd still want to be able to\nhave secondary indexes, and they'd have to use the primary key value\nas the TID.\n\n> I think that global indexes are well worth having, and should be\n> solved some completely different way. The partition key can be an\n> additive thing.\n\nI agree that the partition identifier should be an additive thing, but\nwhere would we add it? It seems to me that the obvious answer is to\nmake it a column of the index tuple. And if we can do that, why can't\nwe put whatever kind of TID-like stuff people want in the index tuple,\ntoo?
Maybe part of the problem here is that I don't actually\nunderstand how posting lists are represented...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 13:56:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, 2021-05-05 at 10:48 -0700, Peter Geoghegan wrote:\n> What we have right now has little chance of failing. It also has\n> little chance of succeeding (except for something like zheap, which\n> can presumably get by with the heapam's idea of TID).\n\nWhat has little chance of succeeding? Table AMs?\n\nAnd why isn't columnar an example of something that can \"get by with\nheapam's idea of TID\"? I mean, it's not a perfect fit, but my primary\ncomplaint this whole thread is that it's undefined, not that it's\ncompletely unworkable.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 May 2021 10:56:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 10:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> One advantage of indirect indexes is that you can potentially avoid a\n> lot of writes to the index. If a non-HOT update is performed, but the\n> primary key is not updated, the index does not need to be touched. I\n> think that's a potentially significant savings, even if bottom-up\n> index deletion would have prevented the page splits. Similarly, you\n> can mark a dead line pointer unused without having to scan the\n> indirect index, because the index isn't pointing to that dead line\n> pointer anyway.\n\nAs I said, this is equivalent to solving the \"TID is a stable\nidentifier of logical row\" issue (an exceptionally hard problem that I\ndon't think is worth solving), except that you make the secondary\nindexes have potentially larger keys for no benefit.
Sure, you can\nconsistently refer to a logical row using its PK value (assuming you\nhave this whole two-phase locking infrastructure), but why wouldn't\nyou \"just\" solve the problem with TID directly instead? What does\ninvolving PK values actually buy you? I am pretty sure that the answer\nis \"less than nothing\".\n\nIt is still true that I'm arguing against ever having a clustered\nindex table AM, which would be somewhat useful to users (that much\nI'll own). The main reason for that is because we'd still be required\nto solve the \"TID is a stable identifier of logical row\" issue, except\nit's not a physiological TID/rowid (it's a fully logical row\nidentifier). So everything seems to lead back to that hard problem\nseeming insoluble.\n\n> Hmm, but I guess you have another cleanup problem. What prevents\n> someone from inserting a new row with the same primary key as a\n> previously-deleted row but different values in some indirectly-indexed\n> column?\n\nTwo-phase locking in indexes stops it. Note that this is pretty much\nwhat happens in Oracle -- it's not just SQL Server.\n\nThis is why we have rich extensibility indexing -- indexes are\nstrictly physical data structures in Postgres.\n\n> And, anyway, this whole argument is predicated on the fact that the\n> only table AM we have right now is heapam. If we had a table AM that\n> organized the data by primary key value, we'd still want to be able to\n> have secondary indexes, and they'd have to use the primary key value\n> as the TID.\n\nBut Jeff has a design for the columnstore table AM where TIDs are\nessentially logical (not physiological like those of heapam), that\nnevertheless will work with the design around TIDs that I have in\nmind. \"Logical identifiers\" versus \"Logical identifiers that stably\nidentify logical rows\" seems like a subtle but important distinction\nhere.\n\nOf course I cannot yet rule out the possibility that this approach to\nTIDs will always be good enough.
But it sure seems like it might be,\nand starting with the assumption that it is and working backwards\nseems like a good way to attack the problem as a practical matter.\n\n> > I think that global indexes are well worth having, and should be\n> > solved some completely different way. The partition key can be an\n> > additive thing.\n>\n> I agree that the partition identifier should be an additive thing, but\n> where would we add it? It seems to me that the obvious answer is to\n> make it a column of the index tuple.\n\nRight.\n\n> And if we can do that, why can't\n> we put whatever kind of TID-like stuff people want in the index tuple,\n> too? Maybe part of the problem here is that I don't actually\n> understand how posting lists are represented...\n\nYou want to use the partition identifier for predicate push-down and\nstuff anyway, so making it part of the TID doesn't seem particularly\nnatural to me.\n\n\"Posting list splits\" from the nbtree README will give you some idea\nof why I care about making TIDs integer-like and equi-sized within any\ngiven index tuple. There will be similar considerations for GIN.\nThough I think that nbtree deduplication is important enough on its\nown to try to preserve across table AMs.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 11:21:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 13:32:57 -0400, Robert Haas wrote:\n> I don't know what to say here. I think it's unrealistic to believe\n> that a very new API that has only 1 in-core user is going to be fully\n> stable, or that we can know how it might evolve. I can understand why\n> you and probably other people want that, but if somebody figures out a\n> way to make some part of core significantly better and it requires\n> changing that API, they're going to change the API, not give up on the\n> idea.\n\nYea.
I think it would be actively *bad* if tableam were too\nstable. tableam is at best an 80% solution to the abstraction needs\n(those 80% were pretty painful to achieve already, I don't think we\ncould have gotten much more initially). If we get cornered into not\nevolving the API because of 2-3 external users, we're a) going to live\nwith a leaky abstraction for much longer b) getting more hesitant to\nwork incrementally. Both would be bad.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 11:22:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 10:56:56 -0700, Jeff Davis wrote:\n> On Wed, 2021-05-05 at 10:48 -0700, Peter Geoghegan wrote:\n> > What we have right now has little chance of failing. It also has\n> > little chance of succeeding (except for something like zheap, which\n> > can presumably get by with the heapam's idea of TID).\n> \n> What has little chance of succeeding? Table AMs?\n> \n> And why isn't columnar an example of something that can \"get by with\n> heapam's idea of TID\"? I mean, it's not a perfect fit, but my primary\n> complaint this whole thread is that it's undefined, not that it's\n> completely unworkable.\n\nAgreed. And we can increase the fit a good bit without needing invasive\nall-over changes. With those changes often even helping heap.\n\nE.g. tidbitmap.c's hardcoded use of MaxHeapTuplesPerPage is a problem\neven for heap - we waste a lot of space that's not commonly used.
A\nbetter datastructure (radix tree like I'd say, but several tree shaped\napproaches seem possible).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 11:25:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 10:56 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> What has little chance of succeeding? Table AMs?\n>\n> And why isn't columnar an example of something that can \"get by with\n> heapam's idea of TID\"? I mean, it's not a perfect fit, but my primary\n> complaint this whole thread is that it's undefined, not that it's\n> completely unworkable.\n\nI think that it could be fairly workable with moderate effort (maybe\neven without that effort, but that doesn't seem so appealing). To do\nit well we have to actually generalize TIDs sensibly. And I think that\nthat means admitting that we'll never solve the \"TID is a stable\nidentifier of a logical row, not a physical version\" problem. ISTM\nthat that's the problem that is at the root of everything here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 11:35:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> Agreed. And we can increase the fit a good bit without needing invasive\n> all-over changes. With those changes often even helping heap.\n>\n> E.g. tidbitmap.c's hardcoded use of MaxHeapTuplesPerPage is a problem\n> even for heap - we waste a lot of space that's not commonly used.
> A\n> better datastructure (radix tree like I'd say, but several tree shaped\n> approaches seem possible).\n\nAgreed -- even if we only cared about heapam we still ought to do\nsomething about tidbitmap.c's use of MaxHeapTuplesPerPage.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 11:40:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, 2021-05-05 at 11:22 -0700, Andres Freund wrote:\n> Yea. I think it would be actively *bad* if tableam were too\n> stable. tableam is at best an 80% solution to the abstraction needs\n> (those 80% were pretty painful to achieve already, I don't think we\n> could have gotten much more initially). If we get cornered into not\n> evolving the API because of 2-3 external users, we're a) going to\n> live\n> with a leaky abstraction for much longer b) getting more hesitant to\n> work incrementally. Both would be bad.\n\nLike anything, we make the decision at the time we have a reason to\nbreak something. But why are extensions disfavored in this\ncalculation vs. in-core? Isn't it a lot easier to update in-core code\nto new APIs?\n\nEvolving the API is one thing, but we should be more careful about\nthings that could affect on-disk state like ItemPointer\nrepresentations. By \"more careful\", I don't mean that we reject all\nproposals; I mean that we don't casually impose new limits in other\nparts of the system that happen to work for heapam but will cause table\nAM extensions to break.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 May 2021 12:09:17 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 12:09 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Like anything, we make the decision at the time we have a reason to\n> break something.
> But why are extensions disfavored in this\n> calculation vs. in-core? Isn't it a lot easier to update in-core code\n> to new APIs?\n\nWe don't really have an API for how TIDs behave (unless you happen to\nwant to emulate heapam, which is reasonable and was expected). It's\nunspecified because nobody knows what it is (or what it should be)\njust yet. AFAICT there is no TID API to break.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 12:26:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, 5 May 2021 at 19:15, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, May 5, 2021 at 9:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, May 5, 2021 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > I'm being very vocal here because I'm concerned that we're going about\n> > > generalizing TIDs in the wrong way. To me it feels like there is a\n> > > loss of perspective about what really matters.\n> >\n> > Well, which things matter is a question of opinion, not fact.\n>\n> I'm not trying to win an argument here. I am giving an opinion in the\n> hopes that it leads to some kind of useful synthesis, based on all of\n> our opinions.\n>\n> > > No other database system has something like indirect indexes. They\n> > > have clustered indexes, but that's rather different.\n> >\n> > I don't think this is true at all. If you have a clustered index -\n> > i.e. the table is physically arranged according to the index ordering\n> > - then your secondary indexes all pretty much have to be what we're\n> > calling indirect indexes. They can hardly point to a physical\n> > identifier if rows are being moved around. I believe InnoDB works this\n> > way, and I think Oracle's index-organized tables do too. I suspect\n> > there are other examples.\n>\n> But these systems don't have indirect indexes *on a heap table*! Why\n> would they ever do it that way?
They already have rowid/TID as a\n> stable identifier of logical rows, so having indirect indexes that\n> point to a heap table's rows would be strictly worse than the generic\n> approach for indexes on a heap table.\n>\n> What we call indirect indexes seem to me to be a failed attempt to\n> solve the \"TID is not a stable identifier of logical row\" issue that\n> is baked-in to Postgres. If I thought it was worth solving that\n> problem then I suppose I'd solve it directly. The \"indirection\" of\n> indirect indexes actually buys you nothing! It just moves some of the\n> problem somewhere else, at the cost of even more complexity. Indirect\n> indexes (without a clustered index) are a muddled idea.\n>\n> Of course I accept that clustered indexes make sense in general\n> (though less and less these days). But the fact that these systems\n> \"use indirect indexes\" for secondary indexes is precisely why\n> clustered indexes don't seem like a great design with modern hardware!\n> Should we invest a huge amount of work in order to have all of the\n> disadvantages, and none of the advantages?\n>\n> > My point is that so far I am not seeing a whole lot of value of this\n> > proposed approach. For a 64-bit TID to be valuable to you, one of two\n> > things has to be true: you either don't care about having indexes that\n> > store TIDs on your new table type, or the index types you want to use\n> > can store those 64-bit TIDs. Now, I have not yet heard of anyone\n> > working on a table AM who does not want to be able to support adding\n> > btree indexes. There may be someone that I don't know about, and if\n> > so, fine. But otherwise, we need a way to store them. And that\n> > requires changing the page format for btree indexes. 
But surely we do\n> > not want to make all TIDs everywhere wider in future btree versions,\n> > so at least two TID widths - 6 bytes and 8 bytes - would have to be\n> > supported.\n>\n> I agree that we don't want a performance/space overhead for simple\n> cases that are quite happy with the current format.\n>\n> > And if we're at all going to do that, I think it's\n> > certainly worth asking whether supporting varlena TIDs would really be\n> > all that much harder. You seem to think it is, and you might be right,\n> > but I'm not ready to give up, because I do not see how we are ever\n> > going to get global indexes or indirect indexes without doing it, and\n> > those would be good features to have.\n>\n> I think that global indexes are well worth having, and should be\n> solved some completely different way. The partition key can be an\n> additive thing.\n\nI believe that it cannot be \"just\" an additive thing, at least not\nthrough a normal INCLUDEd column, as you'd get duplicate TIDs in the\nindex, with its related problems. You also cannot add it as a key\ncolumn, as this would disable UNIQUE indexes; one of the largest use\ncases of global indexes. 
So, you must create specialized\ninfrastructure for this identifier.\n\nAnd when we're already adding specialized infrastructure, then this\nshould probably be part of a new TID infrastructure.\n\nAnd if we're going to change TID infrastructure to allow for more\nsizes (as we'd need normal TableAM TIDs, and global index\npartition-identifying TIDs), I'd argue that it should not be too much\nmore difficult to create an infrastructure for 'new TID' in which the\ntable AM supplies type, size and strict ordering information for these\n'new TID's.\n\nAnd if this 'new TID' size is not going to be defined by the index AM\nbut by the indexed object (be it a table or a 'global' or whatever\nwe'll build indexes on), I see no reason why this 'new TID'\ninfrastructure couldn't eventually support variable length TIDs; or\nconstant sized usertype TIDs (e.g. the 3 int columns of the primary\nkey of a clustered table).\n\nThe only requirements that I believe to be fundamental for any kind of TID are\n\n1.) Uniqueness during the lifecycle of the tuple, from creation to\nlife to dead to fully dereferenced from all indexes;\n2.) There exists a strict ordering of all TIDs of that type;\n\nAnd maybe to supply some form of efficiency to the underlying tableAM:\n\n3.) There should be an equivalent of bitmap for that TID type.\n\nFor the nbtree deduplication subsystem, and for gin posting lists to\nbe able to work efficiently, the following must also hold:\n\n4.) The TID type has a fixed size, preferably efficiently packable.\n\nOnly the last requirement cannot be met with varlena TID types. 
But,\nas I also believe that not all indexes can be expected to work (well)\nfor all kinds of TableAM, I don't see how this would be a blocking\nissue.\n\n> I strongly suspect that indirect indexes (without a\n> clustered index) are 100% useless in both theory and practice, so\n> naturally I have little to no interest.\n>\n> The difficulty of supporting (say) 6 byte and 8 byte TIDs together is\n> vastly lower than variable-width TIDs, for all kinds of reasons. See\n> my remarks to Andres upthread about deduplication.\n\nI believe that deduplication is amazing when it works, but it should\nnot be a blocker for new TID infrastructure (assuming it still works\nby default for nbtree+heap). As an example: numeric columns cannot be\ndeduplicated, and that wasn't considered a blocker for deduplication\nto be merged.\n\nThe only reasons I can think of why varlena TIDs cannot be efficiently\ndeduplicated is the storage and search efficiency: I cannot binary\nsearch in a packed array of varlena attributes, but I can binary\nsearch through packed fixed-size attributes. 
Any fixed-size TID can\nrealistically be deduplicated, assuming enough effort is put into the\npatch implementing the new TID infrastructure.\n\n> > If we can't ever get them, so be it, but you seem to kind of be saying\n> > that things like global indexes and indirect indexes are hard, and\n> > therefore they don't count as reasons why we might want variable-width\n> > TIDs. But one very large reason why those things are hard is that they\n> > require variable-width TIDs, so AFAICS this boils down to saying that\n> > we don't want the feature because it's hard to implement.\n>\n> More like very hard to implement for a very low benefit.\n\nComplicated optimizations have been merged which had only a small gain.\n\nStorage gains for index-oriented tables can become as large as the\nsize of the primary key by not having to store all primary key values\nin both the index and the table; which can thus be around 100% of a\ntable in the least efficient cases of having a PK over all columns.\n\nYes, this might be indeed only a 'small gain' for access latency, but\nnot needing to store another copy of your data (and keeping it in\ncache, etc.) 
is a significant win in my book.\n\n\nRegarding losing deduplication in btrees when we have varlena TIDs:\nThis loss in [storage] efficiency can be partially mitigated by\nimplementing prefix truncation/prefix deduplication, and as such that\nloss would not necessarily be too problematic when PT/PD is\nimplemented.\n\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 5 May 2021 21:43:01 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 12:43 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I believe that it cannot be \"just\" an additive thing, at least not\n> through a normal INCLUDEd column, as you'd get duplicate TIDs in the\n> index, with its related problems. You also cannot add it as a key\n> column, as this would disable UNIQUE indexes; one of the largest use\n> cases of global indexes. So, you must create specialized\n> infrastructure for this identifier.\n\nYou're just quibbling about the precise words that I used. Of course\nit is true that there must be some sense in which a global index\npartition key attribute will need to be special to the implementation\n-- how else could a global index enforce uniqueness? 
That was clearly\nimplied.\n\n> And when we're already adding specialized infrastructure, then this\n> should probably be part of a new TID infrastructure.\n\nThis is a non-sequitur.\n\n> And if we're going to change TID infrastructure to allow for more\n> sizes (as we'd need normal TableAM TIDs, and global index\n> partition-identifying TIDs), I'd argue that it should not be too much\n> more difficult to create an infrastructure for 'new TID' in which the\n> table AM supplies type, size and strict ordering information for these\n> 'new TID's.\n>\n> And if this 'new TID' size is not going to be defined by the index AM\n> but by the indexed object (be it a table or a 'global' or whatever\n> we'll build indexes on), I see no reason why this 'new TID'\n> infrastructure couldn't eventually support variable length TIDs; or\n> constant sized usertype TIDs (e.g. the 3 int columns of the primary\n> key of a clustered table).\n\nYou're not considering the big picture. It's not self-evident that\nanybody will ever have much use for a variable-width TID in their\ntable AM, at least beyond some fairly simple scheme -- because of the\nfundamental issue of TID not working as a stable identifier of logical\nrows in Postgres. If it was very clear that there would be *some*\nsignificant benefit then the costs might start to look reasonable. But\nthere isn't. \"Build it and they will come\" is not at all convincing to\nme.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 13:08:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, 5 May 2021 at 22:09, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, May 5, 2021 at 12:43 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I believe that it cannot be \"just\" an additive thing, at least not\n> > through a normal INCLUDEd column, as you'd get duplicate TIDs in the\n> > index, with its related problems. 
You also cannot add it as a key\n> > column, as this would disable UNIQUE indexes; one of the largest use\n> > cases of global indexes. So, you must create specialized\n> > infrastructure for this identifier.\n>\n> You're just quibbling about the precise words that I used. Of course\n> it is true that there must be some sense in which a global index\n> partition key attribute will need to be special to the implementation\n> -- how else could a global index enforce uniqueness? That was clearly\n> implied.\n\nThis implication was not 100% clear to me, and the last thread on\nglobal indexes that implemented it through INCLUDEd columns didn't\nmention this. As such, I wanted to explicitly mention that this\npartition/table identifier would need to be part of the keyspace.\n\n> > And when we're already adding specialized infrastructure, then this\n> > should probably be part of a new TID infrastructure.\n>\n> This is a non-sequitur.\n\nI may have skipped some reasoning:\n\nI believe that the TID is the unique identifier of that tuple, within context.\n\nFor normal indexes, the TID as supplied directly by the TableAM is\nsufficient, as the context is that table.\nFor global indexes, this TID must include enough information to relate\nit to the table the tuple originated from.\nIn the whole database, that would be the OID of the table + the TID as\nsupplied by the table.\n\nAs such, the identifier of the logical row (which can be called the\nTID), as stored in index tuples in global indexes, would need to\nconsist of the TableAM supplied TID + the (local) id of the table\ncontaining the tuple. 
Assuming we're in agreement on that part, I\nwould think it would be consistent to put this in TID infrastructure,\nsuch that all indexes that use such new TID infrastructure can be\ndefined to be global with only minimal effort.\n\n> > And if we're going to change TID infrastructure to allow for more\n> > sizes (as we'd need normal TableAM TIDs, and global index\n> > partition-identifying TIDs), I'd argue that it should not be too much\n> > more difficult to create an infrastructure for 'new TID' in which the\n> > table AM supplies type, size and strict ordering information for these\n> > 'new TID's.\n> >\n> > And if this 'new TID' size is not going to be defined by the index AM\n> > but by the indexed object (be it a table or a 'global' or whatever\n> > we'll build indexes on), I see no reason why this 'new TID'\n> > infrastructure couldn't eventually support variable length TIDs; or\n> > constant sized usertype TIDs (e.g. the 3 int columns of the primary\n> > key of a clustered table).\n>\n> You're not considering the big picture. It's not self-evident that\n> anybody will ever have much use for a variable-width TID in their\n> table AM, at least beyond some fairly simple scheme -- because of the\n> fundamental issue of TID not working as a stable identifier of logical\n> rows in Postgres.\n\nZHeap states that it can implement stable TIDs within limits, as IIRC\nit requires retail index deletion support for all indexes on the\nupdated columns of that table. I fail to see why this same\ninfrastructure could not be used for supporting clustered tables,\nwhile enforcing these limits only soft enforced in ZHeap (that is,\nonly allowing index AMs that support retail index tuple deletion).\n\n> If it was very clear that there would be *some*\n> significant benefit then the costs might start to look reasonable. But\n> there isn't. 
\"Build it and they will come\" is not at all convincing to\n> me.\n\nClustered tables / Index-oriented Tables are very useful for tables of\nwhich most columns are contained in the PK, or otherwise are often\nordered by their PK. I don't know of any way that would allow us to\nbuild a clustered table _without_ including the primary key in some\nform into the TID, or otherwise introducing a layer of indirection\nthat would undo the clustered access implicated by the clustered\ntable.\n\nAdditionally, compacting/re-clustering a table would be _much_ cheaper\nfor clustered tables, as the indexes attached to that table would not\nneed rebuilding: all TIDs will stay valid across the clustering\noperation.\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 6 May 2021 00:18:17 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 3:18 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I believe that the TID is the unique identifier of that tuple, within context.\n>\n> For normal indexes, the TID as supplied directly by the TableAM is\n> sufficient, as the context is that table.\n> For global indexes, this TID must include enough information to relate\n> it to the table the tuple originated from.\n\nClearly something like a partition identifier column is sometimes just\nlike a regular user-visible column, though occasionally not like one\n-- whichever is useful to the implementation in each context. For\nexample, we probably want to do predicate pushdown, maybe with real\ncataloged operators that access the column like any other user-created\ncolumn (the optimizer knows about the column, which even has a\npg_attribute entry). 
Note that we only ever access the TID column\nusing an insertion scankey today -- so there are several ways in which\nthe partition identifier really would be much more like a user column\nthan tid/scantid ever was.\n\nThe TID is a key column for most purposes as of Postgres 12 (at least\ninternally). That didn't break all unique indexes due to the existence\nof non-unique TIDs across duplicates! Insertions that must call\n_bt_check_unique() can deal with the issue directly, by temporarily\nunsetting scantid.\n\nWe can easily do roughly the same thing here: be slightly creative\nabout how we interpret whether or not the partition identifier is\n\"just another key column\" across each context. This is also similar to\nthe way the implementation is slightly creative about NULL values,\nwhich are not equal to any other value to the user, but are\nnevertheless just another value from the domain of indexed values to\nthe nbtree implementation. Cleverly defining the semantics of keys to\nget better performance and to avoid the need for special case code is\nmore or less a standard technique.\n\n> In the whole database, that would be the OID of the table + the TID as\n> supplied by the table.\n>\n> As such, the identifier of the logical row (which can be called the\n> TID), as stored in index tuples in global indexes, would need to\n> consist of the TableAM supplied TID + the (local) id of the table\n> containing the tuple.\n\n2 points:\n\n1. Clearly you need to use the partition identifier with the TID in\norder to look up the version in the table -- you need to use both\ntogether in global indexes. But it can still work in much the same way\nas it would in a standard index -- it's just that you handle that\nextra detail as well. That's what I meant by additive.\n\n2. If a TID points to a version of a row (or whatever you want to call\nthe generalized version of a HOT chain -- almost the same thing), then\nof course you can always map it back to the logical row. 
That must\nalways be true. It is equally true within a global index.\n\nPoints 1 and 2 above seem obvious to me...so I think we agree on that\nmuch. I just don't know how you go from here to \"we need\nvariable-width TIDs\". In all sincerity, I am confused because to me it\njust seems as if you're asserting that it must be necessary to have\nvariable width TIDs again and again, without ever getting around to\njustifying it. Or even trying to.\n\n> Assuming we're in agreement on that part, I\n> would think it would be consistent to put this in TID infrastructure,\n> such that all indexes that use such new TID infrastructure can be\n> defined to be global with only minimal effort.\n\n Abstract definitions can be very useful, but ultimately they're just\ntools. They're seldom useful as a starting point in my experience. I\ntry to start with the reality on the ground, and perhaps arrive at\nsome kind of abstract model or idea much later.\n\n> ZHeap states that it can implement stable TIDs within limits, as IIRC\n> it requires retail index deletion support for all indexes on the\n> updated columns of that table.\n\nWhether or not that's true is not at all clear. What is true is that\nthe prototype version of zheap that we have as of today is notable in\nthat it more or less allows the moral equivalent of a HOT chain to be\narbitrarily long (or much longer, at least). To the best of my\nknowledge there is nothing about retail index tuple deletion in the\ndesign, except perhaps something vague and aspirational.\n\n> I fail to see why this same\n> infrastructure could not be used for supporting clustered tables,\n> while enforcing these limits only soft enforced in ZHeap (that is,\n> only allowing index AMs that support retail index tuple deletion).\n\nYou're ignoring an ocean of complexity here. 
Principally the need to\nimplement something like two-phase locking (key value locking) in\nindexes to make this work, but also the need to account for how\nfundamentally redefining TID breaks things. To say nothing of how this\nmight affect crash recovery.\n\n> > If it was very clear that there would be *some*\n> > significant benefit then the costs might start to look reasonable. But\n> > there isn't. \"Build it and they will come\" is not at all convincing to\n> > me.\n>\n> Clustered tables / Index-oriented Tables are very useful for tables of\n> which most columns are contained in the PK, or otherwise are often\n> ordered by their PK.\n\nI'm well aware of the fact that clustered index based tables are\nsometimes more useful than heap-based tables.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 May 2021 16:22:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 3:43 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I believe that it cannot be \"just\" an additive thing, at least not\n> through a normal INCLUDEd column, as you'd get duplicate TIDs in the\n> index, with its related problems. You also cannot add it as a key\n> column, as this would disable UNIQUE indexes; one of the largest use\n> cases of global indexes. 
So, you must create specialized\n> infrastructure for this identifier.\n>\n> And when we're already adding specialized infrastructure, then this\n> should probably be part of a new TID infrastructure.\n>\n> And if we're going to change TID infrastructure to allow for more\n> sizes (as we'd need normal TableAM TIDs, and global index\n> partition-identifying TIDs), I'd argue that it should not be too much\n> more difficult to create an infrastructure for 'new TID' in which the\n> table AM supplies type, size and strict ordering information for these\n> 'new TID's.\n>\n> And if this 'new TID' size is not going to be defined by the index AM\n> but by the indexed object (be it a table or a 'global' or whatever\n> we'll build indexes on), I see no reason why this 'new TID'\n> infrastructure couldn't eventually support variable length TIDs; or\n> constant sized usertype TIDs (e.g. the 3 int columns of the primary\n> key of a clustered table).\n>\n> The only requirements that I believe to be fundamental for any kind of TID are\n>\n> 1.) Uniqueness during the lifecycle of the tuple, from creation to\n> life to dead to fully dereferenced from all indexes;\n> 2.) There exists a strict ordering of all TIDs of that type;\n>\n> And maybe to supply some form of efficiency to the underlying tableAM:\n>\n> 3.) There should be an equivalent of bitmap for that TID type.\n>\n> For the nbtree deduplication subsystem, and for gin posting lists to\n> be able to work efficiently, the following must also hold:\n>\n> 4.) The TID type has a fixed size, preferably efficiently packable.\n>\n> Only the last requirement cannot be met with varlena TID types. 
But,\n> as I also believe that not all indexes can be expected to work (well)\n> for all kinds of TableAM, I don't see how this would be a blocking\n> issue.\n\n+1 to all of that.\n\n> Storage gains for index-oriented tables can become as large as the\n> size of the primary key by not having to store all primary key values\n> in both the index and the table; which can thus be around 100% of a\n> table in the least efficient cases of having a PK over all columns.\n>\n> Yes, this might be indeed only a 'small gain' for access latency, but\n> not needing to store another copy of your data (and keeping it in\n> cache, etc.) is a significant win in my book.\n\nThis is a really good point. Also, if the table is ordered by a\nsynthetic logical TID, range scans on the primary key will be less\nefficient than if the primary key is itself the TID. We have the\nability to CLUSTER on an index for good reasons, and \"Automatically\nmaintain clustering on a table\" has been on the todo list forever.\nIt's hard to imagine this will ever be achieved with the current heap,\nthough: the way to get there is to have a table AM for which this is\nan explicit goal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 21:07:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "How hard would it be to declare TID as current ItemPointerData with\nsome values prohibited (NULL, SpecTokenOffsetNumber = 0xfffe,\nMovedPartitionsOffsetNumber = 0xfffd, presumably also 0xffff ?).\n\nAnd then commit to fixing usage outside access/heap/ which depend on\nsmall value for MaxHeapTuplesPerPage, currently only nbodes/tidscan.c\n, access/gin/ginpostinglist.c and access/brin/brin_*.c\n\nthere is also MaxHeapTuplesPerPage usage in\n./backend/storage/page/bufpage.c but it seems to be all in\nheapam-dependent functions (PageRepairFragmentation(),\nPageGetHeapFreeSpace() 
and a few others) which most likely should be\nmoved to access/heap/\n\nDoing it this way would leave us with some manageable complexity in\nmapping from TID to 48-bit integer and/or 3 wanted positions in 2^32\n\n------------\nHannu Krosing\n\n\nOn Wed, May 5, 2021 at 8:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, May 5, 2021 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > Agreed. And we can increase the fit a good bit without needing invasive\n> > all-over changes. With those changes often even helping heap.\n> >\n> > E.g. tidbitmap.c's hardcoded use of MaxHeapTuplesPerPage is a problem\n> > even for heap - we waste a lot of space that's not commonly used. A\n> > better datastructure (radix tree like I'd say, but several tree shaped\n> > approaches seem possible).\n>\n> Agreed -- even if we only cared about heapam we still ought to do\n> something about tidbitmap.c's use of MaxHeapTuplesPerPage.\n>\n> --\n> Peter Geoghegan\n>\n>\n\n\n", "msg_date": "Thu, 6 May 2021 03:26:22 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Thu, May 6, 2021 at 3:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, May 5, 2021 at 3:43 PM Matthias van de Meent\n> > Storage gains for index-oriented tables can become as large as the\n> > size of the primary key by not having to store all primary key values\n> > in both the index and the table; which can thus be around 100% of a\n> > table in the least efficient cases of having a PK over all columns.\n> >\n> > Yes, this might be indeed only a 'small gain' for access latency, but\n> > not needing to store another copy of your data (and keeping it in\n> > cache, etc.) is a significant win in my book.\n>\n> This is a really good point. Also, if the table is ordered by a\n> synthetic logical TID, range scans on the primary key will be less\n> efficient than if the primary key is itself the TID. 
We have the\n> ability to CLUSTER on an index for good reasons, and \"Automatically\n> maintain clustering on a table\" has been on the todo list forever.\n> It's hard to imagine this will ever be achieved with the current heap,\n> though: the way to get there is to have a table AM for which this is\n> an explicit goal.\n\nBut would this not have the downside that all the secondary indexes\nwill blow up as they now need to have the full table row as the TID ?\n\n-----\nHannu Krosing\n\n\n", "msg_date": "Thu, 6 May 2021 03:32:28 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Thu, 2021-05-06 at 03:26 +0200, Hannu Krosing wrote:\n> How hard would it be to declare TID as current ItemPointerData with\n> some values prohibited (NULL, SpecTokenOffsetNumber = 0xfffe,\n> MovedPartitionsOffsetNumber = 0xfffd, presumably also 0xffff ?).\n\nI don't think there's consensus in this thread that we want to do that,\nbut I'd be fine with it.\n\nIt's possible but not trivial. tidbitmap.c would be the biggest\nchallenge, I think.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Wed, 05 May 2021 19:53:21 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "-----\nHannu Krosing\n\nOn Thu, May 6, 2021 at 4:53 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2021-05-06 at 03:26 +0200, Hannu Krosing wrote:\n> > How hard would it be to declare TID as current ItemPointerData with\n> > some values prohibited (NULL, SpecTokenOffsetNumber = 0xfffe,\n> > MovedPartitionsOffsetNumber = 0xfffd, presumably also 0xffff ?).\n>\n> I don't think there's consensus in this thread that we want to do that,\n> but I'd be fine with it.\n\nSure. 
I just proposed this as a Minimal Viable Change.\n\nJust hoping that we can agree on an interim solution which addresses\nthe immediate problem and then continue arguing about the ideal way\nfor the rest of the v15 cycle (and the v16 and v17 ;) )\n\n>\n> It's possible but not trivial. tidbitmap.c would be the biggest\n> challenge, I think.\n\nI think all these places (tidbitmap, gin, brin) rely on \"relatively\nsmall\" MaxHeapTuplesPerPage as an upper bound for some allocations and\nthen still allocate a lot more than needed.\n\nOne can get to 291 tids / page only when you create a table with no\ncolumns, or less than 8 columns which are all set to NULL. A table\nwith a single non-null boolean column already can fit only 226 tuples\nper page.\n\nIt is definitely a non-trivial amount of work to rewrite these three\nbut going to (almost) full 48 bits from current\ntheoretically-a-little-over-40-effective-bits would expand the number\nof addressable tuples 225 times.\n\nOf course it would be extra nice to also somehow encode the 3 special\nItemPointerData values (NULL, 0xfffe, 0xfffd) \"somewhere else\" and get\nan option of uninterrupted 48-bit address space for non-heap table\nAMs, but this would likely be much more disruptive, if possible at\nall.\nWe could still check if they are used outside of heapam and maybe\njust fix these places.\n\n>\n> Regards,\n> Jeff Davis\n>\n>\n>\n\n\n", "msg_date": "Thu, 6 May 2021 05:28:36 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Wed, May 5, 2021 at 10:53 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2021-05-06 at 03:26 +0200, Hannu Krosing wrote:\n> > How hard would it be to declare TID as current ItemPointerData with\n> > some values prohibited (NULL, SpecTokenOffsetNumber = 0xfffe,\n> > MovedPartitionsOffsetNumber = 0xfffd, presumably also 0xffff ?).\n>\n> I don't think there's consensus in this thread that we want 
to do that,\n> but I'd be fine with it.\n>\n> It's possible but not trivial. tidbitmap.c would be the biggest\n> challenge, I think.\n\nI think that would be fine, too. I don't think it's the ideal\nsituation, but it seems like a clear improvement over what we have\nnow. We might want to reserve a few values for future projects that\nmight need distinguished values like SpecTokenOffsetNumber or\nMovedPartitionsOffsetNumber, though, so we don't completely box\nourselves into a corner.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 23:39:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Thu, 6 May 2021 at 01:22, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, May 5, 2021 at 3:18 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I believe that the TID is the unique identifier of that tuple, within context.\n> >\n> > For normal indexes, the TID as supplied directly by the TableAM is\n> > sufficient, as the context is that table.\n> > For global indexes, this TID must include enough information to relate\n> > it to the table the tuple originated from.\n>\n> Clearly something like a partition identifier column is sometimes just\n> like a regular user-visible column, though occasionally not like one\n> -- whichever is useful to the implementation in each context. For\n> example, we probably want to do predicate pushdown, maybe with real\n> cataloged operators that access the column like any other user-created\n> column (the optimizer knows about the column, which even has a\n> pg_attribute entry). 
Note that we only ever access the TID column\n> using an insertion scankey today -- so there are several ways in which\n> the partition identifier really would be much more like a user column\n> than tid/scantid ever was.\n>\n> The TID is a key column for most purposes as of Postgres 12 (at least\n> internally). That didn't break all unique indexes due to the existence\n> of non-unique TIDs across duplicates! Insertions that must call\n> _bt_check_unique() can deal with the issue directly, by temporarily\n> unsetting scantid.\n>\n> We can easily do roughly the same thing here: be slightly creative\n> about how we interpret whether or not the partition identifier is\n> \"just another key column\" across each context. This is also similar to\n> the way the implementation is slightly creative about NULL values,\n> which are not equal to any other value to the user, but are\n> nevertheless just another value from the domain of indexed values to\n> the nbtree implementation. Cleverly defining the semantics of keys to\n> get better performance and to avoid the need for special case code is\n> more or less a standard technique.\n>\n> > In the whole database, that would be the OID of the table + the TID as\n> > supplied by the table.\n> >\n> > As such, the identifier of the logical row (which can be called the\n> > TID), as stored in index tuples in global indexes, would need to\n> > consist of the TableAM supplied TID + the (local) id of the table\n> > containing the tuple.\n>\n> 2 points:\n>\n> 1. Clearly you need to use the partition identifier with the TID in\n> order to look up the version in the table -- you need to use both\n> together in global indexes. But it can still work in much the same way\n> as it would in a standard index -- it's just that you handle that\n> extra detail as well. That's what I meant by additive.\n>\n> 2. 
If a TID points to a version of a row (or whatever you want to call\n> the generalized version of a HOT chain -- almost the same thing), then\n> of course you can always map it back to the logical row. That must\n> always be true. It is equally true within a global index.\n>\n> Points 1 and 2 above seem obvious to me...so I think we agree on that\n> much.\n\nYes.\n\n> I just don't know how you go from here to \"we need\n> variable-width TIDs\". In all sincerity, I am confused because to me it\n> just seems as if you're asserting that it must be necessary to have\n> variable width TIDs again and again, without ever getting around to\n> justifying it. Or even trying to.\n\nSee below. I'm not saying we need it _right now_, but at the very\nleast I'd like to argue that we should not close the door on varlena\nTIDs, because there _are_ reasons for those TID types. See also below.\n\n> > Assuming we're in agreement on that part, I\n> > would think it would be consistent to put this in TID infrastructure,\n> > such that all indexes that use such new TID infrastructure can be\n> > defined to be global with only minimal effort.\n>\n> Abstract definitions can be very useful, but ultimately they're just\n> tools. They're seldom useful as a starting point in my experience. I\n> try to start with the reality on the ground, and perhaps arrive at\n> some kind of abstract model or idea much later.\n\nAlthough I agree that abstract definitions are tools, I disagree that\nthey're seldom useful as a starting point. Many have implemented b\n(plus) trees based on the original paper exactly due to the guarantees\nthat are provided by the abstract definition as defined in the paper.\nWhen trying to create completely new things I would agree that\nstarting with the abstract is probably not the right idea, but we're\nnot in a green field. 
There are many examples for storage engines /\ntable AMs outside PostgreSQL, and I believe that we can take learnings\nfrom those for the potential extendability of PostgreSQL.\n\n> > ZHeap states that it can implement stable TIDs within limits, as IIRC\n> > it requires retail index deletion support for all indexes on the\n> > updated columns of that table.\n>\n> Whether or not that's true is not at all clear. What is true is that\n> the prototype version of zheap that we have as of today is notable in\n> that it more or less allows the moral equivalent of a HOT chain to be\n> arbitrarily long (or much longer, at least). To the best of my\n> knowledge there is nothing about retail index tuple deletion in the\n> design, except perhaps something vague and aspirational.\n\nI'm fairly certain that ZHeap's todo list item no. 1 details retail\nindex tuple deletion as I understand it:\n\n\"Delete marking in indexes: This will allow inplace updates even when\nindex columns are updated and additionally with this we can avoid the\nneed for a dedicated vacuum process to perform retail deletes.\"\n\nIf I read this correctly, this means asking the index AM to delete the\nspecific index tuple corresponding with the deleted item AKA retail\nindex tuple deletion.\n\n> > I fail to see why this same\n> > infrastructure could not be used for supporting clustered tables,\n> > while enforcing these limits only soft enforced in ZHeap (that is,\n> > only allowing index AMs that support retail index tuple deletion).\n>\n> You're ignoring an ocean of complexity here. Principally the need to\n> implement something like two-phase locking (key value locking) in\n> indexes to make this work, but also the need to account for how\n> fundamentally redefining TID breaks things. 
To say nothing of how this\n> might affect crash recovery.\n\nAn ocean of complexity that is (to the best of my knowledge) to be\nmostly delegated to the TableAM that implements these varlena TIDs.\nI'm not telling you that varlena TIDs are a one-size-fits-all, but I\nwant to make clear that varlena TIDs are required for certain table\nAMs that would otherwise be impossible.\n\n> > > If it was very clear that there would be *some*\n> > > significant benefit then the costs might start to look reasonable. But\n> > > there isn't. \"Build it and they will come\" is not at all convincing to\n> > > me.\n> >\n> > Clustered tables / Index-oriented Tables are very useful for tables of\n> > which most columns are contained in the PK, or otherwise are often\n> > ordered by their PK.\n>\n> I'm well aware of the fact that clustered index based tables are\n> sometimes more useful than heap-based tables.\n\nThen would you also agree that for persistently clustered tables to\nwork in any form of efficiency, they would need to support variable\nlength identifiers? Let me explain my reasoning:\n\nIf the TableAM TID contains information about it's physical location,\nwe cannot implement persistent clustered tables due to instability of\nthis physical location of the base tuple (see also how we cannot\nguarantee the physical location of an index tuple in the index, only\nthe logical location).\nThat being the case, the TID must be fully logical. I hope you agree\nat least up to here.\n\nThe TID must order in the same manner as the table's clustering, as\nyou would want to be able to find the tuple from the table, and the\nonly guarantee you have is that the table has an ordering. 
If the TID\ndoes not have the same ordering, you cannot find the correct tuple (or\nyou'd need a TID -> key mapping, which would mean you'd lose a lot of\nthe benefits you got from the clustering).\n\nIf the TID must share an ordering with the clustering of the table,\nthen it should also have a similar key space as the clustering: I\ncould cluster a table on a text column, but any fixed size TID cannot\never dream to be used to correctly identify the position of a certain\ntext string within an arbitrary sorted list of text strings. You'll\nhave to use that text string to find it's location in the list. As\nsuch, I believe that the tuple identifier (or also: the identifier\nusing which we can locate the tuple data corresponding to that\nidentifier) for clustered tables / index oriented tables is at the\nvery least the set of columns that that table is clustered on.\n\nDo note that this is all about the TID supplied by a TableAM that\nwould implement persistent clustering, which is how MySQL has\nimplemented tables. I am unaware of a scheme that would maintain\npersistent clustering/ordering across the table on key columns without\nalso leaking these key columns into the identifier that is used to\nfind the tuple in the table.\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 6 May 2021 13:10:30 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" }, { "msg_contents": "On Thu, May 6, 2021 at 4:10 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> See below. I'm not saying we need it _right now_, but at the very\n> least I'd like to argue that we should not close the door on varlena\n> TIDs, because there _are_ reasons for those TID types. See also below.\n\nPerhaps I was a bit too strident. 
It's a question of trade-offs.\n\n> Although I agree that abstract definitions are tools, I disagree that\n> they're seldom useful as a starting point. Many have implemented b\n> (plus) trees based on the original paper exactly due to the guarantees\n> that are provided by the abstract definition as defined in the paper.\n\nI was unclear. I agree with this much. I don't think it applies in\nthis situation, though.\n\nThere are many competing considerations besides that. What I'm\nconcerned about is avoiding a strategically ruinous direction that\nworks against our strengths -- we should play to our strengths. For\nexample, it's no coincidence that Postgres is the only system that has\nB-Tree deduplication and enables it by default, with next to no\ndownside. This is more or less a consequence of the fact that indexes\nin Postgres don't need their own locks, unlike any relation DB system\nthat more or less uses a design like ARIES -- indexes are just\nphysical data structures that back the logical database. They are not\npart of the logical database per se.\n\nConsider a 2015 paper from Goetz Graefe about index locking, entitled\n\"Orthogonal key-value locking\":\n\nhttps://subs.emis.de/LNI/Proceedings/Proceedings241/237.pdf\n\nSome kind of locking in indexes (something like ARIES/IM or ARIES/KVL)\nis needed to make TIDs stable identifiers of logical rows -- even in a\nsystem like Oracle (but most obviously in a system like InnoDB or SQL\nServer). According to Graefe, \"further improvements [to index\nconcurrency control] are possible despite multiple available\ntechniques and decades with little progress\". This is why terms like\n\"ruinously expensive\" don't feel like hyperbole when talking about\npursuing TID stability/clustered indexes/whatever -- it is a radically\ndifferent design. 
And maybe even a basically inferior design these\ndays, all things considered.\n\nI think that the simple approach to storage taken by Postgres has aged\nincredibly well -- there are many ways in which it suits modern\nhardware more than traditional designs. Modern hardware is generally\nmuch more sensitive to the cost of any kind of synchronization [1],\nbut is otherwise very fast in the ways -- rather a reversal from the\nenvironment that ARIES was created in. While there are certainly\ndownsides to the Postgres approach to storage, concurrency control and\nrecovery, these downsides can be fixed directly. Including in the ways\nwe've been discussing on other threads these past few months. I think\nthat we can push the idea of teaching heapam (and maybe nbtree) just\nenough about the logical database to not make really dumb decisions in\nsome cases. Why not see how far that can go first?\n\nGraefe's perspective in the key-value locking paper is roughly the\nMicrosoft SQL Server perspective. It's not hard for me to imagine why\nhe undertook to research index locking used for concurrency control\n(not to be confused with what they call latches and what we call\nLWLocks) while still working for Microsoft. The TPC-E benchmark is\nsomething that Microsoft alone targets with SQL Server (no other DB\nsystem has an official entry on the TPC website). Pity TPC-E isn't\nmore successful, since it seems to have a lot more real world\nrelevance than TPC-C. I bet we could learn a lot from it because it's\nactually realistic (TPC-C isn't that realistic, and TPC-B/pgbench is\nthe furthest possible thing from reality).\n\nThe best analysis I've been able to find about TPC-E is probably \"From\nA to E: Analyzing TPC’s OLTP Benchmarks\" [2]. It concludes that the\nbig bottleneck for TPC-E is \"logical lock contention\", which is the\noverwhelming bottleneck at high client counts. 
Though I haven't run\nTPC-E properly myself (because of issues with bitrot), I'd speculate\nthat this gives Postgres an edge in TPC-E. I've heard quite a few\nreports of Postgres having much faster writes compared to any of the\nbig proprietary systems. Some of these reports are many years old. And\nyet we routinely seem to talk about our approach as if it's obviously\ninferior. The main weakness that Postgres storage has seems to me to\nbe largely in the area of stability, perhaps only in theoretically\nrare or extreme cases that nevertheless cause real problems in the\nreal world.\n\nOf course I cannot prove that adopting stable TIDs necessarily means\naccepting a similar burden from lock contention in an affected table\nAM, or in general. I cannot prove much of anything when we're\nconsidering such an abstract question. But perhaps this gives you a\nbetter idea about where my skepticism comes from.\n\n> When trying to create completely new things I would agree that\n> starting with the abstract is probably not the right idea, but we're\n> not in a green field. There are many examples for storage engines /\n> table AMs outside PostgreSQL, and I believe that we can take learnings\n> from those for the potential extendability of PostgreSQL.\n\nI agree, up to a point. Perhaps the reason I came up with bottom-up\nindex deletion was that my reading of the DB research literature led\nme to consider the logical database in the context of index AMs. Turns\nout we can use a dirt cheap technique to give B-Tree indexes some\nbasic ideas about the logical database, without having to go to the\nexpense of making nbtree truly have authoritative information about\nit. A little appreciation of other designs can go a long way. We can\nafford to have weaknesses, because we clearly also have strengths. 
We\nonly need to not have an Achilles' heel.\n\n> \"Delete marking in indexes: This will allow inplace updates even when\n> index columns are updated and additionally with this we can avoid the\n> need for a dedicated vacuum process to perform retail deletes.\"\n>\n> If I read this correctly, this means asking the index AM to delete the\n> specific index tuple corresponding with the deleted item AKA retail\n> index tuple deletion.\n\nDealing with this at the level of nbtree/the physical data structure\nis very easy. The problem is with concurrency control, especially\nwithin index AMs. This is about the logical contents of the database.\nBeing able to think of indexes as basically just data structures is a\nhuge luxury for us.\n\n> Then would you also agree that for persistently clustered tables to\n> work in any form of efficiency, they would need to support variable\n> length identifiers? Let me explain my reasoning:\n\nYes, I definitely accept that. It must be true that clustered indexes\n(not to be confused with indirect indexes + a heap) have *some* merit\n-- that much has to be true. I just doubt that it's a promising area\nfor us. The cost/risks are very high, and the benefits are probably\nquite low. We don't have to tackle the fundamental problem in order to\nadd a column store table AM with logical-ish TID-like identifiers --\nat least not according to Jeff. Why not consider a low cost solution\nlike that for each table AM?\n\n> If the TableAM TID contains information about it's physical location,\n> we cannot implement persistent clustered tables due to instability of\n> this physical location of the base tuple (see also how we cannot\n> guarantee the physical location of an index tuple in the index, only\n> the logical location).\n> That being the case, the TID must be fully logical. I hope you agree\n> at least up to here.\n\nAbsolutely. 
Agreed.\n\n> If the TID must share an ordering with the clustering of the table,\n> then it should also have a similar key space as the clustering: I\n> could cluster a table on a text column, but any fixed size TID cannot\n> ever dream to be used to correctly identify the position of a certain\n> text string within an arbitrary sorted list of text strings. You'll\n> have to use that text string to find it's location in the list. As\n> such, I believe that the tuple identifier (or also: the identifier\n> using which we can locate the tuple data corresponding to that\n> identifier) for clustered tables / index oriented tables is at the\n> very least the set of columns that that table is clustered on.\n\nI agree. I think that looking at MySQL/InnoDB here might be very useful.\n\nInnoDB secondary indexes use stable identifiers of the logical row in\nthe table -- which is just the primary key value for the relevant row\nin all cases. Systems like Oracle and SQL Server can use TIDs/RIDs\nwith a heap based table, which is stable in the same way (sometimes at\ngreat cost elsewhere). All three systems are more or less the same as\neach other as far as the fundamental nature of TID-like identifiers is\nconcerned (and rather unlike Postgres) -- TIDs/RIDs/pointers in\nsecondary indexes are all stable identifiers of logical rows. Andy\nPavlo has said that InnoDB/Oracle TIDs are logical identifiers, while\nPostgres TIDs are physical identifiers. (I don't find that particular\nterminology useful because it confuses issues elsewhere, but I know\nwhat he meant.)\n\nNo less an authority than Mark Callaghan (well known in the MySQL\ncommunity for his work at Google and later Facebook) did a comparative\nbenchmark between Postgres and MySQL (InnoDB) back in January. I'm a\nbig fan of his analysis of LSM/storage economics stuff, so naturally I\nread his report on this with great interest. 
The high level outcome\nwas that Postgres 13 was mostly faster than MySQL 8, and Postgres 11\nwas mostly slower than MySQL 5.6:\n\nhttps://smalldatum.blogspot.com/2021/01/sysbench-postgres-vs-mysql-and-impact.html\n\nBut the details are what really interests me. One very notable\nperformance gap was between MySQL 8 + InnoDB and Postgres 13 with\nupdates that affect an indexed column. Apparently one system could be\nas much as 4x slower with sysbench index updates, which is notably the\nstraggler among the tests performed, for one of the systems.\nReportedly the dreaded write amplification from updates is to blame.\nJust like that tawdry Uber blog post from 2016 warned of!\n\nBut guess what? *Postgres* was the system that was faster and had\n*significantly less* write amplification from updates to secondary\nindexes -- to the extent that it really stood out as an *advantage*\n(for this particular set of tests). The pertinent information about\nwrite amplification with index updates is under the \"Write-heavy\"\nsection: \"The largest difference is for the 2nd test (update-index)\nthat requires index maintenance.\" ... \"Write-amp (wKB/o) is worse for\nMySQL especially for the update-index test\". So...what do you think of\nthat?\n\nThis analysis was based on Postgres 13, not Postgres 14, so presumably\nwe'd do better now that we have bottom-up index deletion.\n\nI don't mean to suggest that clearly we're living in opposite land,\nand Postgres was actually the one that did better with index updates\nall along -- as Callaghan says, it's one benchmark that shouldn't be\noverinterpreted and is mostly useful to get the right high level\nintuitions about how each system performs. I believe that sysbench has\n2 or 3 indexes here, including the PK, and I wouldn't be surprised if\nthe final result flipped in favor of InnoDB with more indexes -- even\nwith bottom-up index deletion. 
All I'm saying is that it seems\nextremely premature to assume that there are very significant\nadvantages to generalizing TID-like identifiers to be stable, be it in\na clustered index table AM design like InnoDB or some other kind of\ntable AM. Maybe there are huge advantages, *and* maybe it's even worth\nthe enormous trouble and risk (to say nothing of the opportunity cost\nof not improving what already seems to work quite well most of the\ntime). But I tend to doubt it, and I'd prefer to work on simple\nincremental approaches that already seem to be working.\n\n[1] https://pathelland.substack.com/p/i-am-so-glad-im-uncoordinated\n[2] https://www.researchgate.net/publication/262275971_From_A_to_E_Analyzing_TPC's_OLTP_Benchmarks_--_The_obsolete_the_ubiquitous_the_unexplored\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 May 2021 18:59:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: MaxOffsetNumber for Table AMs" } ]
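The "additive" handling of a partition identifier discussed in the thread above — treat it as one more key column in a global index, stored alongside an ordinary block-number/offset-number TID — can be sketched as follows. This is an illustration only, not PostgreSQL code: the function names are invented, and the bit widths simply mirror PostgreSQL's 32-bit BlockNumber and 16-bit OffsetNumber.

```python
# Illustrative sketch (not PostgreSQL source): packing a partition id
# together with a conventional (block, offset) TID into one sortable key,
# so a global index orders first by partition, then by TID -- i.e. the
# partition identifier behaves as "just another key column".

BLOCK_BITS = 32   # BlockNumber is 32 bits in PostgreSQL
OFFSET_BITS = 16  # OffsetNumber is 16 bits

def encode_global_tid(partition_id: int, block: int, offset: int) -> int:
    """Pack (partition_id, block, offset) into a single integer key."""
    assert 0 <= block < (1 << BLOCK_BITS)
    assert 0 <= offset < (1 << OFFSET_BITS)
    return ((partition_id << (BLOCK_BITS + OFFSET_BITS))
            | (block << OFFSET_BITS)
            | offset)

def decode_global_tid(key: int):
    """Recover (partition_id, block, offset); partition id locates the
    table, and (block, offset) locates the version within it."""
    offset = key & ((1 << OFFSET_BITS) - 1)
    block = (key >> OFFSET_BITS) & ((1 << BLOCK_BITS) - 1)
    partition_id = key >> (BLOCK_BITS + OFFSET_BITS)
    return partition_id, block, offset
```

Because the partition id occupies the high-order bits, integer comparison of keys gives exactly the (partition, TID) ordering the discussion assumes, while lookups still decompose into a table reference plus a normal fixed-width TID.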
[ { "msg_contents": "Hi,\n\n\nour setup:\n Postgres server is running on CentOS release 6.10 (Final) instance.\n Server Version is PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled by\ngcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit\n\nWith the following parameters set:\n\nwal_level = 'logical' # replica < logical\nmax_replication_slots = 10\nmax_wal_senders = 10\ntrack_commit_timestamp = on\n\n\n\nWe are using decoder_raw module for retrieving/read the WAL data through\nthe logical decoding mechanism.\n\n\nApplication setup:\n\nThe actual TCapture engine is a Java application which runs as a separate\nprogram outside Postgres, and which must be started explicitly.\nWhen TCapture is running, it will scan the transaction log (with TCapt\nmodule) of all primary databases and pick up transactions which must be\nreplicated.\nTransactions which have been picked up are stored in the “Replication\nDatabase”, a PG user database exclusively used by TCapture.\nIn the Replication Database, transaction is ‘copied’ to all replicate\ndatabases which have a subscription for this transaction.\n Transaction is then applied to the replicate tables by inserting it into\nby the dedicated Java application module\n\n\n We runs TCapt module in the loop for reading a primary database which is a\ndocumental database (with binary columns) .\n\n\nBehavior reported (Bug)\n We have TCapture Replication Server running for successfully for weeks\nbut recently we encountered following error:\n\ncat log/TCapture_enodp_2021-04-12-11\\:30\\:16_err.log\n org.postgresql.util.PSQLException: ERROR: got sequence entry 258 for\ntoast chunk 538757697 instead of seq 0\n at\norg.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)\n at\norg.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1212)\n at\norg.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1112)\n 
at\norg.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:44)\n at\norg.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:160)\n at\norg.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:125)\n at\norg.postgresql.core.v3.replication.V3PGReplicationStream.readPending(V3PGReplicationStream.java:82)\n at\ncom.edslab.TCapt.receiveChangesOccursBeforTCapt(TCapt.java:421)\n at com.edslab.TCapt.run(TCapt.java:182)\n at java.lang.Thread.run(Thread.java:745)\n\n\n After restarting our TCapt module (see https://www.tcapture.net/ for\nbetter understand the project TCapture), the error went away. But this\ncauses the producer module (Tapt) to shut down.\n\nPlease note that we run TCapture with other Postgres versions (9.6, 10,\n11,ec..) without problems !!\n\nIs there any resolution for this issue or is it resolved in the higher\nversion of postgres?\n\n\nRegards,\nSilvio\n\nHi, our setup:  Postgres server is running on CentOS release 6.10 (Final)  instance.   
Server Version is PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bitWith the following parameters set:wal_level = 'logical' # replica < logicalmax_replication_slots = 10max_wal_senders = 10track_commit_timestamp = onWe are using decoder_raw  module for retrieving/read the WAL data through the logical decoding mechanism.Application setup: The actual TCapture engine is a Java application which runs as a separate program outside Postgres, and which must be started explicitly.When TCapture is running, it will scan the transaction log  (with TCapt module) of all primary databases and pick up transactions which must be replicated.Transactions which have been picked up are stored in the “Replication Database”, a PG user database exclusively used by TCapture.In the Replication Database, transaction is ‘copied’ to all replicate databases which have a subscription for this transaction. Transaction is then applied to the replicate tables by inserting it into by the dedicated Java application module  We runs TCapt module in the loop for reading a primary database which is a documental database (with binary columns) . 
Behavior reported (Bug)  We have TCapture Replication Server  running for successfully for weeks but recently we encountered following error: cat log/TCapture_enodp_2021-04-12-11\\:30\\:16_err.log    org.postgresql.util.PSQLException: ERROR: got sequence entry 258 for toast chunk 538757697 instead of seq 0            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)            at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1212)            at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1112)            at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:44)            at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:160)            at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:125)            at org.postgresql.core.v3.replication.V3PGReplicationStream.readPending(V3PGReplicationStream.java:82)            at com.edslab.TCapt.receiveChangesOccursBeforTCapt(TCapt.java:421)            at com.edslab.TCapt.run(TCapt.java:182)            at java.lang.Thread.run(Thread.java:745) After restarting our TCapt module (see https://www.tcapture.net/ for better understand the project TCapture), the error went away. But this causes the producer module (Tapt) to shut down. Please note that we run TCapture with other Postgres versions (9.6, 10, 11,ec..) without problems !!Is there any  resolution for this issue or is it resolved in the higher version of postgres? Regards,Silvio", "msg_date": "Fri, 30 Apr 2021 10:53:07 +0200", "msg_from": "silvio brandani <sbrandans@gmail.com>", "msg_from_op": true, "msg_subject": "Replication slot used in logical decoding of documental database give\n error: got sequence entry 258 for toast chunk 538757697 instead of seq 0" } ]
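The error in the thread above ("got sequence entry 258 for toast chunk 538757697 instead of seq 0") comes from reassembling TOASTed values during logical decoding: a large value is stored as numbered chunks, and the decoder expects the chunk sequence numbers to start at 0 and increase without gaps. Seeing sequence 258 first means the leading chunks were never observed. The sketch below is a simplified illustration of that invariant, not PostgreSQL's actual reassembly code.

```python
# Illustrative sketch (not PostgreSQL source): a TOAST value is a series of
# (chunk_id, chunk_seq, data) rows, chunk_seq counting up from 0.  A
# reassembler that encounters a chunk out of sequence cannot rebuild the
# value, which is what the "instead of seq 0" error reports.

def reassemble_toast(chunks):
    """chunks: iterable of (chunk_id, chunk_seq, data) for one value."""
    expected_seq = 0
    parts = []
    for cid, seq, data in chunks:
        if seq != expected_seq:
            raise ValueError(
                f"got sequence entry {seq} for toast chunk {cid} "
                f"instead of seq {expected_seq}")
        parts.append(data)
        expected_seq += 1
    return b"".join(parts)
```

In the reported case the first chunk seen already carried sequence 258, i.e. the decoder's state was missing the beginning of the value — consistent with the problem disappearing once the consumer restarted from the slot's confirmed position.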
[ { "msg_contents": "Hi,\n\nI need to reproduce the CVE-2020-25695 on PostgreSQL 9.2.24. I know this is\nnot a supported version, however, it is important for us to have a\nreproducer for this version as well.\n\nThe reproducer for supported versions[1] is based on REFRESH MATERIALIZED\nVIEW which is not implemented until version 9.3.\n\nI was trying to reproduce this using ANALYZE as you can see in this poc.sql\nfile[2]. However, it doesn't reproduce the issue.\n\nIt would be really appreciated if someone could take a look at it and help.\n\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/privileges.sql;h=013bc95c74bd20e5ab7f1826ea7e676da2a0e85b;hb=HEAD#l896\n[2] https://pastebin.com/6hgziYRD\n\n\nRegards,\n\n-- \nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n", "msg_date": "Fri, 30 Apr 2021 15:13:43 +0200", "msg_from": "Patrik Novotny <panovotn@redhat.com>", "msg_from_op": true, "msg_subject": "Help needed with a reproducer for CVE-2020-25695 not based on REFRESH\n MATERIALIZED VIEW" }, { "msg_contents": "We've figured it out. 
Please ignore.\n\n\nRegards.\n\n\nOn Fri, Apr 30, 2021 at 3:13 PM Patrik Novotny <panovotn@redhat.com> wrote:\n\n> Hi,\n>\n> I need to reproduce the CVE-2020-25695 on PostgreSQL 9.2.24. I know this\n> is not a supported version, however, it is important for us to have a\n> reproducer for this version as well.\n>\n> The reproducer for supported versions[1] is based on REFRESH MATERIALIZED\n> VIEW which is not implemented until version 9.3.\n>\n> I was trying to reproduce this using ANALYZE as you can see in this\n> poc.sql file[2]. However, it doesn't reproduce the issue.\n>\n> It would be really appreciated if someone could take a look at it and help.\n>\n>\n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/privileges.sql;h=013bc95c74bd20e5ab7f1826ea7e676da2a0e85b;hb=HEAD#l896\n> [2] https://pastebin.com/6hgziYRD\n>\n>\n> Regards,\n>\n> --\n> Patrik Novotný\n> Associate Software Engineer\n> Red Hat\n> panovotn@redhat.com\n>\n\n\n-- \nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n", "msg_date": "Fri, 30 Apr 2021 17:24:03 +0200", "msg_from": "Patrik Novotny <panovotn@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Help needed with a reproducer for CVE-2020-25695 not based on\n REFRESH MATERIALIZED VIEW" } ]
[ { "msg_contents": "Hackers,\n\nPostgreSQL defines a number of GUCs that can only be set by superusers. I would like to support granting privileges on subsets of these to non-superuser roles, inspired by Stephen Frost's recent work on pg_read_all_data and pg_write_all_data roles.\n\nThe specific use case motivating this work is that of a PostgreSQL service provider. The provider guarantees certain aspects of the service, such as periodic backups, replication, uptime, availability, etc., while making no guarantees of other aspects, such as performance associated with the design of the schema or the queries executed. The provider should be able to grant to the tenant privileges to set any GUC which cannot be used to \"escape the sandbox\" and interfere with the handful of metrics being guaranteed. Given that the guarantees made by one provider may differ from those made by another, the exact set of GUCs which the provider allows the tenant to control may differ.\n\nBy my count, there are currently 50 such GUCs, already broken down into 15 config groups. Creating a single new role pg_set_all_gucs seems much too coarse a control, but creating 50 new groups may be excessive. We could certainly debate which GUCs could be used to escape the sandbox vs. which ones could not, but I would prefer a design that allows the provider to make that determination. The patch I would like to submit would only give the provider the mechanism for controlling these things, but would not make the security choices for them.\n\nDo folks think it would make sense to create a role per config group? Adding an extra 15 default roles seems high to me, but organizing the feature this way would make the roles easier to document, because there would be a one-to-one correlation between the roles and the config groups.\n\nI have a WIP patch that I'm not attaching, but if I get any feedback, I might be able to adjust the patch before the first version posted. 
The basic idea is that it allows things like:\n\n GRANT pg_set_stats_monitoring TO tenant_role;\n\nAnd then tenant_role could, for example\n\n SET log_parser_stats TO off;\n\nThanks\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 16:19:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Greetings,\n\nOn Fri, Apr 30, 2021 at 19:19 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> PostgreSQL defines a number of GUCs that can only be set by superusers. I\n> would like to support granting privileges on subsets of these to\n> non-superuser roles, inspired by Stephen Frost's recent work on\n> pg_read_all_data and pg_write_all_data roles.\n\n\nThere’s been some effort started in this direction which I was working on\n(see the patches about an “admin” role and set of GUCs). I have been\nmeaning to get back to that but the specific concern I had was about coming\nup with how to define the proper set of GUCs.\n\nThe specific use case motivating this work is that of a PostgreSQL service\n> provider. The provider guarantees certain aspects of the service, such as\n> periodic backups, replication, uptime, availability, etc., while making no\n> guarantees of other aspects, such as performance associated with the design\n> of the schema or the queries executed. The provider should be able to\n> grant to the tenant privileges to set any GUC which cannot be used to\n> \"escape the sandbox\" and interfere with the handful of metrics being\n> guaranteed. Given that the guarantees made by one provider may differ from\n> those made by another, the exact set of GUCs which the provider allows the\n> tenant to control may differ.\n>\n> By my count, there are currently 50 such GUCs, already broken down into 15\n> config groups. 
Creating a single new role pg_set_all_gucs seems much too\n> coarse a control, but creating 50 new groups may be excessive. We could\n> certainly debate which GUCs could be used to escape the sandbox vs. which\n> ones could not, but I would prefer a design that allows the provider to\n> make that determination. The patch I would like to submit would only give\n> the provider the mechanism for controlling these things, but would not make\n> the security choices for them.\n>\n> Do folks think it would make sense to create a role per config group?\n> Adding an extra 15 default roles seems high to me, but organizing the\n> feature this way would make the roles easier to document, because there\n> would be a one-to-one correlation between the roles and the config groups.\n\n\nNew predefined roles are relatively inexpensive. That said, whatever sets\nwe define need to have some meaning to them- one which is reasonably
future-proofed so that we have some idea what category a new GUC would fit\ninto.\n\n“Can’t be used to gain superuser” may be a sufficiently clear grouping, as\nwas more or less contemplated by the “admin” approach. If that doesn’t\nwork though then we need an understanding of what the limits on these\ngroups are, so we can competently fit new GUCs into these groups (or invent\nnew ones if a new GUC truly falls outside all existing but I would expect\nthat to be a rather rare case..). \n\nWe may also wish to keep some GUCs superuser only when they really only\nmake sense to be used in a developer context...\n\nThanks,\n\nStephen\n\n", "msg_date": "Fri, 30 Apr 2021 19:28:05 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On 04/30/21 19:19, Mark Dilger wrote:\n\n> We could certainly debate which GUCs could be used to escape the sandbox\n> vs. which ones could not, but I would prefer a design that allows the\n> provider to make that determination.\n\nI find myself wondering how many GUCs flagged SUSET are not flagged that way
because of a determination already made that they could be used to escape.\n(Maybe some of the logging ones, only usable to conceal your escape.)\n\nBut there might be ways for a provider, scrutinizing each of those\nindividually, to conclude \"this will not allow escape from the sandbox\n/I/ have set up, provided the value being set satisfies constraints\nx and y\" ... 
a generalization of the LOAD from $libdir/plugins idea.\n\nSo that suggests to me some mechanism where a provider could grant\nsetting foo to role bar using validator baz().\n\nCan SUSET GUCs be set from SECURITY DEFINER functions? Maybe there are\nalready the pieces to do that, minus some syntax sugar.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 30 Apr 2021 20:02:09 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> On Fri, Apr 30, 2021 at 19:19 Mark Dilger <mark.dilger@enterprisedb.com>\n> wrote:\n>> PostgreSQL defines a number of GUCs that can only be set by superusers. I\n>> would like to support granting privileges on subsets of these to\n>> non-superuser roles, inspired by Stephen Frost's recent work on\n>> pg_read_all_data and pg_write_all_data roles.\n\n> New predefined roles are relatively inexpensive. That said, whatever sets\n> we define need to have some meaning to them- one which is reasonably\n> future-proofed so that we have some idea what category a new GUC would fit\n> into.\n> “Can’t be used to gain superuser” may be a sufficiently clear grouping, as\n> was more or less contemplated by the “admin” approach. If that doesn’t\n> work though then we need an understanding of what the limits on these\n> groups are, so we can competently fit new GUCs into these groups (or invent\n> new ones if a new GUC truly falls outside all existing but I would expect\n> that to be a rather rare case..).\n> We may also wish to keep some GUCs superuser only when they really only\n> make sense to be used in a developer context...\n\nHmm, is there really any point in that? 
We already have roles\nlike \"pg_write_server_files\" and \"pg_execute_server_program\",\nwhich allow trivial escalation to superuser if one wishes,\nbut are still useful as being roles you're a bit less likely\nto break your database with accidentally than running as full\nsuperuser.\n\nSo ISTM that \"pg_set_superuser_parameters\" could be treated as\nbeing one of those same sorts of roles that you don't give out\nto untrusted people, and then we don't have to worry about\nexactly which GUCs might be exactly how dangerous to whom.\n\nIf we try to define it as being some lesser level of\nprivilege than that, I'm afraid we will spend lots of\nnot-very-productive time trying to classify the security\nthreats from different GUCs ... and they all have *some*\nsecurity issue involved, or they wouldn't be restricted in\nthe first place. Plus, I'm not looking forward to having\nto issue CVEs when we realize we misclassified something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Apr 2021 20:13:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "\n\n> On Apr 30, 2021, at 4:28 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> “Can’t be used to gain superuser” may be a sufficiently clear grouping, as was more or less contemplated by the “admin” approach. If that doesn’t work though then we need an understanding of what the limits on these groups are, so we can competently fit new GUCs into these groups (or invent new ones if a new GUC truly falls outside all existing but I would expect that to be a rather rare case..). \n\nWhen I first heard that providers want to build sandboxes around PostgreSQL, I thought the idea was a little silly, because providers can just spin up a virtual machine per tenant and give each tenant superuser privileges on their respective VM. 
Who cares if they mess it up after that?\n\nThe problem with that idea turns out to be that the providers want to take responsibility for some of the database maintenance, possibly including backups, replication, etc. I think the set of controls the provider hands over to the tenant will depend very much on the division of responsibility. If the provider is managing replication, then control over session_replication_role and wal_compression is unlikely to be handed to the tenant, but if the tenant is responsible for their own replication scheme, it might be.\n\nViewing all of this in terms of which controls allow the tenant to escape a hypothetical sandbox seems like the wrong approach. Shouldn't we let service providers decide which controls would allow the tenant to escape the specific sandbox the provider has designed?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 30 Apr 2021 19:00:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Fri, 30 Apr 2021 at 22:00, Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n\n> Viewing all of this in terms of which controls allow the tenant to escape\n> a hypothetical sandbox seems like the wrong approach. Shouldn't we let\n> service providers decide which controls would allow the tenant to escape\n> the specific sandbox the provider has designed?\n>\n\nI’m not even sure I should be mentioning this possibility, but what if we\nmade each GUC parameter a grantable privilege? I’m honestly not sure if\nthis is insane or not. 
I mean numerically it’s a lot of privileges, but\nconceptually it’s relatively simple.\n\nWhat I like the least about it is actually the idea of giving up entirely\non the notion of grouping privileges into reasonable packages: some of\nthese privileges would be quite safe to grant in many or even most\ncircumstances, while others would usually not be reasonable to grant.\n\n", "msg_date": "Fri, 30 Apr 2021 23:27:55 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On 04/30/21 22:00, Mark Dilger wrote:\n> Viewing all of this in terms of which controls allow the tenant to escape\n> a hypothetical sandbox seems like the wrong approach.  Shouldn't we let\n> service providers decide which controls would allow the tenant to escape\n> the specific sandbox the provider has designed?\n\nI agree that sounds more like the right approach.  
It seems to me that\nin the general case, a provider might conclude that setting foo is\nsafe in the provider-designed sandbox /if the value being assigned\nto it satisfies some provider-determined conditions/.\n\nOn 04/30/21 20:02, Chapman Flack wrote:\n> So that suggests to me some mechanism where a provider could grant\n> setting foo to role bar using validator baz().\n>\n> Can SUSET GUCs be set from SECURITY DEFINER functions? Maybe there are\n> already the pieces to do that, minus some syntax sugar.\n\nThe answer seems to be yes: I just created a SECURITY DEFINER function\nand used it to change a SUSET-only GUC setting.\n\nSo it seems the machinery is already in place with which a provider\ncould allow a chosen set of SUSET-only GUCs to be set, to values that\nsatisfy provider-determined conditions, by users in a provider-chosen\nrole.\n\nSome pretty syntax like GRANT SETTING foo TO ROLE bar WHERE cond;\nwould simply be sugar on top.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 1 May 2021 10:07:04 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "\n\n> On May 1, 2021, at 7:07 AM, Chapman Flack <chap@anastigmatix.net> wrote:\n> \n> On 04/30/21 22:00, Mark Dilger wrote:\n>> Viewing all of this in terms of which controls allow the tenant to escape\n>> a hypothetical sandbox seems like the wrong approach. Shouldn't we let\n>> service providers decide which controls would allow the tenant to escape\n>> the specific sandbox the provider has designed?\n> \n> I agree that sounds more like the right approach. 
It seems to me that\n> in the general case, a provider might conclude that setting foo is\n> safe in the provider-designed sandbox /if the value being assigned\n> to it satisfies some provider-determined conditions/.\n\n> So it seems the machinery is already in place with which a provider\n> could allow a chosen set of SUSET-only GUCs to be set, to values that\n> satisfy provider-determined conditions, by users in a provider-chosen\n> role.\n\n> Some pretty syntax like GRANT SETTING foo TO ROLE bar WHERE cond;\n> would simply be sugar on top.\n\nI agree with everything you say here. I have some thoughts about usability....\n\nI'd like the experience for the tenant to be as similar as possible to having superuser privileges on their own cluster. The tenant may be migrating an application from a database that they currently manage themselves, and any need to use different syntax from what they have been using is an extra hurdle that could derail the migration.\n\nExtra syntax for use by the service provider seems much easier to justify.\n\nIf the service provider can install extra role-aware check_hooks for gucs, and if the include directive for postgresql.conf can specify a role under which a postgresql.conf.tenant file is processed, then the tenant can port their application and their config file and the only things that should break are those things the provider has intentionally prohibited.\n\nDoes this sound like a reasonable approach?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 1 May 2021 09:13:49 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On 05/01/21 12:13, Mark Dilger wrote:\n> Extra syntax for use by the service provider seems much easier to justify.\n> \n> If the service provider can install extra role-aware check_hooks for gucs,\n> and if 
the include directive for postgresql.conf can specify a role under\n> which a postgresql.conf.tenant file is processed, then the tenant can port\n> their application and their config file and the only things that should break\n> are those things the provider has intentionally prohibited.\n\nMaybe version 0 is where the provider just builds a shared object\nto go in shared_preload_libraries. The provider has probably already\ndone a bunch of other stuff more challenging than that.\n\nThe GUC system would have to expose a way for the shared object to\nchain extra_check_hooks off existing GUCs. An extra_check_hook can check\nboth the value and the role of the caller.\n\nThe configfile syntax for include-with-a-role would be the only other\nmissing piece.\n\nVersion 0.5 is maybe where someone contributes code for such a shared\nobject that is somewhat general and configured by a yaml file, or\nsomething. (That would probably be easier if an extra_check_hook accepts\nthe usual void *extra context argument that existing GUC hooks don't.)\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 1 May 2021 12:37:16 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 30, 2021 at 04:19:22PM -0700, Mark Dilger wrote:\n> PostgreSQL defines a number of GUCs that can only be set by\n> superusers. I would like to support granting privileges on subsets of\n> these to non-superuser roles, inspired by Stephen Frost's recent work\n> on pg_read_all_data and pg_write_all_data roles.\n> \n> The specific use case motivating this work is that of a PostgreSQL\n> service provider. The provider guarantees certain aspects of the\n> service, such as periodic backups, replication, uptime, availability,\n> etc., while making no guarantees of other aspects, such as performance\n> associated with the design of the schema or the queries executed. 
The\n> provider should be able to grant to the tenant privileges to set any\n> GUC which cannot be used to \"escape the sandbox\" and interfere with\n> the handful of metrics being guaranteed. Given that the guarantees\n> made by one provider may differ from those made by another, the exact\n> set of GUCs which the provider allows the tenant to control may\n> differ.\n> \n> By my count, there are currently 50 such GUCs, already broken down\n> into 15 config groups. Creating a single new role pg_set_all_gucs\n> seems much too coarse a control, but creating 50 new groups may be\n> excessive. We could certainly debate which GUCs could be used to\n> escape the sandbox vs. which ones could not, but I would prefer a\n> design that allows the provider to make that determination. The patch\n> I would like to submit would only give the provider the mechanism for\n> controlling these things, but would not make the security choices for\n> them.\n> \n> Do folks think it would make sense to create a role per config group?\n> Adding an extra 15 default roles seems high to me, but organizing the\n> feature this way would make the roles easier to document, because\n> there would be a one-to-one correlation between the roles and the\n> config groups.\n> \n> I have a WIP patch that I'm not attaching, but if I get any feedback,\n> I might be able to adjust the patch before the first version posted.\n> The basic idea is that it allows things like:\n> \n> GRANT pg_set_stats_monitoring TO tenant_role;\n> \n> And then tenant_role could, for example\n> \n> SET log_parser_stats TO off;\n\nJust saying, I've proposed something very similar, albeit for a narrower\nscope (mostly the Reporting and Logging category) here:\nhttps://www.postgresql.org/message-id/flat/c2ee39152957af339ae6f3e851aef09930dd2faf.camel@credativ.de\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB 
Mönchengladbach 12080\nVAT ID no.: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nManaging directors: Dr. Michael Meskes, Sascha Heuer\n\nOur handling of personal data is subject to the\nfollowing provisions: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Sat, 1 May 2021 23:31:29 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Sat, May 1, 2021 at 12:37 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> Maybe version 0 is where the provider just builds a shared object\n> to go in shared_preload_libraries. The provider has probably already\n> done a bunch of other stuff more challenging than that.\n>\n> The GUC system would have to expose a way for the shared object to\n> chain extra_check_hooks off existing GUCs. An extra_check_hook can check\n> both the value and the role of the caller.\n\nI think there are two parts to this problem. First, the SP needs to be\nable to delegate to some users but not others the ability to set
superuser GUCs. Second, the SP needs to be able to control which GUCs\nthe privileged users get to set and perhaps to what values. A hook of\nthe type you propose here seems like it might work reasonably well for\nthat second part, but it's not totally obvious to me how it helps with\nthe first part.\n\nInstead of going to the extreme of one predefined role per GUC, maybe\nwe could see if the PGC_SUSET GUCs could be divided into buckets based\non the reason they are so marked? For example, log_parser_stats,\nlog_planner_stats, log_executor_stats, log_statement_stats,\nlog_btree_build_stats, trace_locks, trace_userlocks, trace_lwlocks,\nlog_min_duration_statement, and a bunch of others are probably all\nSUSET just on the theory that only the superuser should have the right\nto control what ends up in the log. 
But we could make a predefined\nrole that represents the right to control what ends up in the log, and\nthen all of those GUCs could be tied to that role. Is that too\ncoarse-grained? It might be.\n\nOne problem with having a separate predefined role for every PGC_SUSET\nGUC is that it's no help for extensions. Both auto_explain and\npg_stat_statements have such GUCs, and there may be out-of-core\nextensions that do as well. We should try to come up with a system\nthat doesn't leave them out in the cold.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 11:22:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On 05/03/21 11:22, Robert Haas wrote:\n>> The GUC system would have to expose a way for the shared object to\n>> chain extra_check_hooks off existing GUCs. An extra_check_hook can check\n>> both the value and the role of the caller.\n> \n> I think there are two parts to this problem. First, the SP needs to be\n> able to delegate to some users but not others the ability to set\n> superuser GUCs. Second, the SP needs to be able to control which GUCs\n> the privileged users get to set and perhaps to what values. 
A hook of\n> the type you propose here seems like it might work reasonably well for\n> that second part, but it's not totally obvious to me how it helps with\n> the first part.\n\nI guess I was thinking, but forgot to convey to the keyboard, that the\nexistence of a non-empty extra_check_hooks chain on a SUSET GUC (which\ncould only have been attached from a shared preload library) would\nimplicitly change SUSET to mean settable whenever accepted by the hook(s).\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 3 May 2021 11:44:57 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "\n\n> On May 3, 2021, at 8:22 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> One problem with having a separate predefined role for every PGC_SUSET\n> GUC is that it's no help for extensions. Both auto_explain and\n> pg_stat_statements have such GUCs, and there may be out-of-core\n> extensions that do as well. We should try to come up with a system\n> that doesn't leave them out in the cold.\n\nAs things stand, all custom variables defined via the DefineCustom{Bool,Int,Real,String,Enum}Variable are placed in the CUSTOM_OPTIONS config_group. We could add a role for controlling any SUSET CUSTOM_OPTIONS GUCs, or we could extend those functions to take a config_group option, or perhaps some of both. I haven't thought too much yet about whether allowing extensions to place a custom GUC into one of the predefined groups would be problematic. 
Any thoughts on that?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 3 May 2021 09:25:04 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Mon, May 3, 2021 at 11:45 AM Chapman Flack <chap@anastigmatix.net> wrote:\n> I guess I was thinking, but forgot to convey to the keyboard, that the\n> existence of a non-empty extra_check_hooks chain on a SUSET GUC (which\n> could only have been attached from a shared preload library) would\n> implicitly change SUSET to mean settable whenever accepted by the hook(s).\n\nSure, but the hook still needs a way to know which users are entitled\nto set the GUC.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:23:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Mon, May 3, 2021 at 12:25 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> As things stand, all custom variables defined via the DefineCustom{Bool,Int,Real,String,Enum}Variable are placed in the CUSTOM_OPTIONS config_group. We could add a role for controlling any SUSET CUSTOM_OPTIONS GUCs, or we could extend those functions to take a config_group option, or perhaps some of both. I haven't thought too much yet about whether allowing extensions to place a custom GUC into one of the predefined groups would be problematic. Any thoughts on that?\n\nWell...\n\nOne idea would be to get rid of PGC_SUSET altogether and instead have\na set of flags associated with each GUC, like PGF_SERVER_LOG,\nPGF_CORRUPT_DATA, PGF_CRASH_SERVER. 
Then you could associate those\nflags with particular predefined roles and grant them out to whoever\nyou want.\n\nSo if a GUC is flagged PGF_SERVER_LOG|PGF_CRASH_SERVER, then the\nassumption is that it's security-sensitive because it both lets you\nalter the contents of the server log and also lets you crash the\nserver. If you are granted both pg_server_log and pg_crash_server, you\ncan set it, otherwise not.\n\nThis is just wild brainstorming, but my point is that I don't think\ndoing it by options groups is particularly good, because it doesn't\nreally have any relationship to why those things are marked SUSET in\nthe first place. To take an example involving functions rather than\nGUCs, the pageinspect functions are super-user only because you can\ncrash the server by inspecting malformed data that you supply as an\narbitrary literal, but AFAIK the functions in pgstattuple have no\nsimilar hazard, and are just super-only because we don't really know\nwho the superuser wants to authorize, and maybe it's not everybody. So\nthose cases are really different, even though both are extensions. 
I\nthink the same likely holds true for GUCs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:33:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On 05/03/21 13:23, Robert Haas wrote:\n> On Mon, May 3, 2021 at 11:45 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>> I guess I was thinking, but forgot to convey to the keyboard, that the\n>> existence of a non-empty extra_check_hooks chain on a SUSET GUC (which\n>> could only have been attached from a shared preload library) would\n>> implicitly change SUSET to mean settable whenever accepted by the hook(s).\n> \n> Sure, but the hook still needs a way to know which users are entitled\n> to set the GUC.\n\nI was contemplating a version 0 with only that minimal support in core\nfor allowing a shared preload library to set such hooks (and allowing\ninclude-with-a-role in config files), assuming service providers already\ndo sophisticated building of stuff to construct the environments they\nprovide, and a C shared object with hooks that enforce their designed\nconstraints wouldn't be an onerous burden on top of that.\n\nSuch providers could then be the laboratories of democracy building\nvarious forms of such things, and if one of those ends up having a\nreasonably general configuration mechanism and gets offered as a\ncontrib module later or for inclusion in core, well, that's version 0.5.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 3 May 2021 13:43:32 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, May 3, 2021 at 12:25 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > As things stand, all custom variables defined via the 
DefineCustom{Bool,Int,Real,String,Enum}Variable are placed in the CUSTOM_OPTIONS config_group. We could add a role for controlling any SUSET CUSTOM_OPTIONS GUCs, or we could extend those functions to take a config_group option, or perhaps some of both. I haven't thought too much yet about whether allowing extensions to place a custom GUC into one of the predefined groups would be problematic. Any thoughts on that?\n> \n> Well...\n> \n> One idea would be to get rid of PGC_SUSET altogether and instead have\n> a set of flags associated with each GUC, like PGF_SERVER_LOG,\n> PGF_CORRUPT_DATA, PGF_CRASH_SERVER. Then you could associate those\n> flags with particular predefined roles and grant them out to whoever\n> you want.\n> \n> So if a GUC is flagged PGF_SERVER_LOG|PGF_CRASH_SERVER, then the\n> assumption is that it's security-sensitive because it both lets you\n> alter the contents of the server log and also lets you crash the\n> server. If you are granted both pg_server_log and pg_crash_server, you\n> can set it, otherwise not.\n> \n> This is just wild brainstorming, but my point is that I don't think\n> doing it by options groups is particularly good, because it doesn't\n> really have any relationship to why those things are marked SUSET in\n> the first place. To take an example involving functions rather than\n> GUCs, the pageinspect functions are super-user only because you can\n> crash the server by inspecting malformed data that you supply as an\n> arbitrarily literal, but AFAIK the functions in pgstattuple have no\n> similar hazard, and are just super-only because we don't really know\n> who the superuser wants to authorize, and maybe it's not everybody. So\n> those cases are really different, even though both are extensions. 
I\n> think the same likely holds true for GUCs.\n\nIn general, I agree that we should be looking at predefined roles as\nbeing similar to the Linux capabilities system- defining certain kinds\nof operations which the user who has that role is allowed to do, and\nthen both in-core and extensions can make decisions based on what\ncapabilities the user has been GRANT'd.\n\nHopefully that would limit the amount of cases where a given capability\nends up being overly broad while at the same time allowing extensions to\nsensibly be able to use the defined capabilities for their own needs.\n\nAs we do in other places, we should make it clear when a certain\ncapability allows a user with that capability to gain superuser access\nas that may not always be clear to a user.\n\nOne thing that seems missing from this discussion and is part of what\npaused my effort on the 'admin' role proposed towards the end of the\nlast cycle is that we really need to consider how this all plays with\nALTER SYSTEM and not just SUSET GUCs but also other (eg: POSTMASTER,\nSIGHUP) GUCs. That is- imv we should have a sensible solution for\nmore-or-less all GUCs and which would allow a non-superuser to be able\nto set POSTMASTER and SIGHUP GUCs (and perhaps others..) through\nALTER SYSTEM. \n\nThanks,\n\nStephen", "msg_date": "Mon, 3 May 2021 14:41:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> One thing that seems missing from this discussion and is part of what\n> paused my effort on the 'admin' role proposed towards the end of the\n> last cycle is that we really need to consider how this all plays with\n> ALTER SYSTEM and not just SUSET GUCs but also other (eg: POSTMASTER,\n> SIGHUP) GUCs.\n\nYeah, I'd meant to bring that up too. 
The ability to use ALTER\nSYSTEM freely is probably a much bigger use-case than messing with\nSUSET variables within one's own session.\n\nI'm still of the opinion that slicing and dicing this at the per-GUC\nlevel is a huge waste of effort. Just invent one role that lets\ngrantees set any GUC, document it as being superuser-equivalent,\nand be done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 14:48:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Mon, May 3, 2021 at 2:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> In general, I agree that we should be looking at predefined roles as\n> being similar to the Linux capabilities system- defining certain kinds\n> of operations which the user who has that role is allowed to do, and\n> then both in-core and extensions can make decisions based on what\n> capabilities the user has been GRANT'd.\n\nCool.\n\n> Hopefully that would limit the amount of cases where a given capability\n> ends up being overly broad while at the same time allowing extensions to\n> sensibly be able to use the defined capabilities for their own needs.\n\nYeah. I think it will be a little tricky to get right, as some of the\ncases are a bit subjective, I think.\n\n> As we do in other places, we should make it clear when a certain\n> capability allows a user with that capability to gain superuser access\n> as that may not always be clear to a user.\n\n+1.\n\n> One thing that seems missing from this discussion and is part of what\n> paused my effort on the 'admin' role proposed towards the end of the\n> last cycle is that we really need to consider how this all plays with\n> ALTER SYSTEM and not just SUSET GUCs but also other (eg: POSTMASTER,\n> SIGHUP) GUCs.
That is- imv we should have a sensible solution for\n> more-or-less all GUCs and which would allow a non-superuser to be able\n> to set POSTMASTER and SIGHUP GUCs (and perhaps others..) through\n> ALTER SYSTEM.\n\nI missed the earlier discussion on this topic, but I agree that this\nis very important. I think that the discussion of capabilities might\nhelp us get there. For instance, if I'm a service provider, and I give\nuser \"bob\" the pg_put_whatever_you_want_in_the_server_log role, and\nGUCs are tagged so we know what GUCs that affects, then it seems\nnatural to me to allow Bob to set those GUCs via ALTER SYSTEM as well\nas via ALTER USER or ALTER DATABASE. However, if I don't give him the\npg_frob_shell_commands role, he can't set archive_command.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 14:50:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Mon, May 3, 2021 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm still of the opinion that slicing and dicing this at the per-GUC\n> level is a huge waste of effort. Just invent one role that lets\n> grantees set any GUC, document it as being superuser-equivalent,\n> and be done.\n\nIf you want to grant someone full superuser rights, you can do that\nalready. The trick is what to do when you want someone to be able to\nadminister the cluster in a meaningful way without giving them full\nsuperuser rights.\n\nI agree that in some cases it's fine to have predefined roles that are\nknown to permit easy escalation to superuser privileges, like\npg_execute_server_program. It doesn't provide any real security, but\nlike you said, it can help prevent mistakes.
However, there is a real\nuse case for a privileged user who cannot be permitted to escalate to\nsuperuser or to the OS account, but still needs to be able to do some\nadministration of the cluster. The scenario Mark laid out in his\noriginal post is very common. In fact, it may already be the dominant\nmodel for PostgreSQL deployment, and if it isn't now, it will be in 5\nyears. Letting each individual company that's providing a hosted\nPostgreSQL solution hack up its own solution to that problem, all of\nwhich are subtly incompatible with each other and with upstream, is\nnot good for users or the project.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 15:01:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, May 3, 2021 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm still of the opinion that slicing and dicing this at the per-GUC\n> > level is a huge waste of effort. Just invent one role that lets\n> > grantees set any GUC, document it as being superuser-equivalent,\n> > and be done.\n> \n> If you want to grant someone full superuser rights, you can do that\n> already. The trick is what to do when you want someone to be able to\n> administer the cluster in a meaningful way without giving them full\n> superuser rights.\n\nI would suggest that both are useful, but the one-big-hammer does\nnothing to answer the use-case which was brought up on this particular\nthread (which is also certainly not the first time this has been\ndesired).
Instead, I would imagine that there would be a set of\npredefined roles for the capabilities and then we might have another\nrole which is akin to 'pg_monitor' but is 'pg_admin' which is GRANT'd a\nbunch of those capabilities and explicitly documented to be able to\nbecome a superuser if they wished to.\n\nPerhaps we would also have a \"pg_notsuperuser_admin\" which would be\nGRANT'd all the capabilities, excluding the ones that could be used to\ngain superuser access.\n\nAs has also been discussed recently, one of the big missing capabilities\nfor a \"pg_notsuperuser_admin\" is a 'create role' capability. I realize\nthat's not exactly the same as GUCs but it's a big part of what's\nmissing to make all of this \"run a service where the 'DBA' can do\neverything except get out to the OS\" stuff work out of the box.\n\nThanks,\n\nStephen", "msg_date": "Mon, 3 May 2021 15:12:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "> On Apr 30, 2021, at 4:19 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Hackers,\n> \n> PostgreSQL defines a number of GUCs that can only be set by superusers. I would like to support granting privileges on subsets of these to non-superuser roles, inspired by Stephen Frost's recent work on pg_read_all_data and pg_write_all_data roles.\n> \n> The specific use case motivating this work is that of a PostgreSQL service provider. The provider guarantees certain aspects of the service, such as periodic backups, replication, uptime, availability, etc., while making no guarantees of other aspects, such as performance associated with the design of the schema or the queries executed. The provider should be able to grant to the tenant privileges to set any GUC which cannot be used to \"escape the sandbox\" and interfere with the handful of metrics being guaranteed.
Given that the guarantees made by one provider may differ from those made by another, the exact set of GUCs which the provider allows the tenant to control may differ.\n> \n> By my count, there are currently 50 such GUCs, already broken down into 15 config groups. Creating a single new role pg_set_all_gucs seems much too coarse a control, but creating 50 new groups may be excessive. We could certainly debate which GUCs could be used to escape the sandbox vs. which ones could not, but I would prefer a design that allows the provider to make that determination. The patch I would like to submit would only give the provider the mechanism for controlling these things, but would not make the security choices for them.\n> \n> Do folks think it would make sense to create a role per config group? Adding an extra 15 default roles seems high to me, but organizing the feature this way would make the roles easier to document, because there would be a one-to-one correlation between the roles and the config groups.\n> \n> I have a WIP patch that I'm not attaching, but if I get any feedback, I might be able to adjust the patch before the first version posted. The basic idea is that it allows things like:\n> \n> GRANT pg_set_stats_monitoring TO tenant_role;\n> \n> And then tenant_role could, for example\n> \n> SET log_parser_stats TO off;\n\nOk, here is the first version of the patch for the list (though it is the second version I developed.)
The patch is quite long, but most of it is mechanical.\n\nOverview:\n\n- guc.h defines a new set of privilege masks\n- pg_authid.dat defines a new set of roles, with a one-to-one correlation to the privilege masks\n- guc_tables.h extends struct config_generic to include a privilege mask field\n- guc.c extends the structs for all variables to include a mask of privileges required to set the variable, and checks the privileges against the current user's role membership when trying to SET or ALTER SYSTEM SET\n- DefineCustom*Variable functions are extended to take a privileges mask, and all calls to these functions are extended to include privileges for the custom variable being defined\n- new regression tests guc_priv_admin and guc_priv_tenant are defined. The first creates a role \"admin\" and assigns it membership to all the new roles added in pg_authid.dat. The second creates a role \"tenant\" and assigns it to just the few new roles that appear reasonable for a tenant. Both tests then go on to SET SESSION AUTHORIZATION to the new role and then attempt to SET, RESET, ALTER SYSTEM SET, and ALTER SYSTEM RESET most of the variables defined in guc.c. These tests might be too verbose to be worth committing, but I thought they made an easy reference for those reviewing the patch who just want to quickly see the behavior.\n\nOne of the consequences of the design is that if a user belongs to a role with permission to SET a variable, they can also ALTER SYSTEM SET that variable, at least to the extent that ALTER SYSTEM SET would allow the superuser to do so. Not all variables can be changed via ALTER SYSTEM SET, though. This means that some variables, \"data_directory\" for example, cannot be changed by any of the new roles. The first version of the patch, never posted, allowed 'include' directives in postgresql.conf to be annotated with roles, such that the included file would be processed with privileges restricted to just the listed roles.
This patch doesn't bother, since everything we are likely to care about can be performed using ALTER SYSTEM SET, but I can resurrect the 'include' directive logic if there is general demand for that.\n\nAny user can still SET a PGC_USERSET variable, just as before this patch, but the default permission to do so does not translate into permission to ALTER SYSTEM SET that same variable. For that, the user needs to belong to a role with permission to set the variable, which in general for PGC_USERSET variables is the \"pg_internal_settings\" role. I'm not sure this is the right role for all of these, for example \"password_encryption\" seems like a better fit for role \"pg_interface_settings\", but for the first patch posted to the list I didn't fuss too much about roles assigned to PGC_USERSET variables. \n\nI didn't bother updating the docs yet, as I doubt the set of privileges/roles in this patch will survive contact with this list. They are:\n\npg_internal_settings:\n - changes to purely internal behavior\npg_stats_settings:\n - changes to stats collection\npg_maintenance_settings\n - changes to autovacuum behavior\npg_storage_settings\n - changes to dealing with storage errors, such as fsync or checksum failure\npg_wal_settings\n - changes to wal, recovery, and replication settings\npg_logging_settings\n - changes to what gets logged\npg_interface_settings\n - changes to the external interface, such as port, authentication, etc.\npg_resource_usage\n - changes to memory, cpu, and disk usage\npg_filesystem_security\n - changes to where files and directories are located, permissions bits on\n files and directories, etc.\npg_exec_command\n - changes to external commands that get executed\npg_server_configuration\n - changes to the configuration of the server vis-a-vis the operating system\n facilities, such as shared memory model used\npg_security_settings\n - changes that relax security, such as turning off privilege checking,\n changing security critical logging
settings, adjusting developer options\n which have security implications, or changing settings which could be\n used to create a denial of service attack\n\nNote that some GUC variables have more than one privilege bit set, meaning a user must belong to all corresponding roles before they can change the setting. For example, \"log_file_mode\" requires both pg_filesystem_security and pg_logging_settings.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 12 May 2021 08:59:18 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Wed, May 12, 2021 at 11:59 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I didn't bother updating the docs yet, as I doubt the set of privileges/roles in this patch will survive contact with this list. They are:\n>\n> [ various things ]\n\nInteresting classification. I think the trick here is going to be to\nfigure out how we should divide things up in a way that makes sense.\nThere are a couple of possible approaches that suggest themselves:\n\n- One role for all settings, as suggested by Tom. Seems too\ncoarse-grained to be of any use.\n\n- A separate grantable privilege for each setting. Very flexible, but unwieldy.\n\n- Group things by which section of postgresql.conf they're in, and\nthen further restrict some of them as security-sensitive. This is\nreasonably close to what you've got, but not exactly what you've got.\nOne issue is that it risks separating things that are in practice not\nuseful to separate, creating more predefined roles to manage than we\nreally need. With your division, what are the chances that someone\nwants to grant pg_stats_settings but not pg_maintenance_settings or\nthe other way around?\n\n- Group things by the security exposure that they present, along the\nlines of what I proposed previously.
This could be possibly combined\nwith some other categorization, e.g. section of postgresql.conf. But\nif you don't do that, an idea like this in its pure form would say,\nok, well we have a role like pg_administrator which is entitled to\nchange all GUCs that we think aren't security-sensitive, and then\nroles like pg_server_file_access, pg_execute_server_programs, etc.\nthat further restrict some GUCs. The risk here is that mashing too\nmany things together reduces the chances that somebody's going to be\nable to get exactly what they want out of the system.\n\n- Something else.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 15:58:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "\n\n> On May 12, 2021, at 12:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> - Group things by which section of postgresql.conf they're in, and\n> then further restrict some of them as security-sensitive. This is\n> reasonably close to what you've got, but not exactly what you've got.\n> One issue is that it risks separating things that are in practice not\n> useful to separate, creating more predefined roles to manage than we\n> really need. With your division, what are the chances that someone\n> wants to grant pg_stats_settings but not pg_maintenance_settings or\n> the other way around?\n\nI think our conversation off-list was worth enough to reiterate here....\n\nWhen classifying GUC variables, the philosophy of classification needs to be consistent and easily understandable so that, among other considerations, all future GUC variables have a reasonable chance of being classified correctly by their patch authors and committers. The patch I posted falls short in this regard.
You and I discussed two organizational options:\n\nTheme+Security:\n - security is considered as falling into three groupings: (a) host security, which includes files and permissions, running external commands, etc., (b) network security, which includes all connection options and authentications, and (c) schema security, which includes database internal object security like rls, object ownership, etc.\n - theme is based on the GUC config_group, either having one theme per config_group, or basing the theme on the prefix of the config_group such that, for example, QUERY_TUNING_METHOD, QUERY_TUNING_COST, QUERY_TUNING_GEQO, and QUERY_TUNING_OTHER could all be in one theme named \"pg_query_tuning\".\n\nAdmin+Security\n - security works the same as Theme+Security\n - a pg_admin role is required to set all non PGC_USERSET gucs, but some of those gucs *also* require one or more of the security roles\n\nThe Theme+Security approach might be insufficient for extensibility, given that 3rd-party custom GUCs might not have a corresponding theme. The Admin+Security approach appears better in this regard.\n\nAdmin+Security seems sufficient, in conjunction with Chapman's idea of extensible check_hooks.\n\nThoughts?\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 13 May 2021 09:05:57 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On May 12, 2021, at 12:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > - Group things by which section of postgresql.conf they're in, and\n> > then further restrict some of them as security-sensitive.
This is\n> > reasonably close to what you've got, but not exactly what you've got.\n> > One issue is that it risks separating things that are in practice not\n> > useful to separate, creating more predefined roles to manage than we\n> > really need. With your division, what are the chances that someone\n> > wants to grant pg_stats_settings but not pg_maintenance_settings or\n> > the other way around?\n> \n> I think our conversation off-list was worth enough to reiterate here....\n> \n> When classifying GUC variables, the philosophy of classification needs to be consistent and easily understandable so that, among other considerations, all future GUC variables have a reasonable chance of being classified correctly by their patch authors and committers. The patch I posted falls short in this regard. You and I discussed two organizational options:\n> \n> Theme+Security:\n> - security is considered as falling into three groupings: (a) host security, which includes files and permissions, running external commands, etc., (b) network security, which includes all connection options and authentications, and (c) schema security, which includes database internal object security like rls, object ownership, etc.\n> - theme is based on the GUC config_group, either having one theme per config_group, or basing the theme on the prefix of the config_group such that, for example, QUERY_TUNING_METHOD, QUERY_TUNING_COST, QUERY_TUNING_GEQO, and QUERY_TUNING_OTHER could all be in one theme named \"pg_query_tuning\".\n> \n> Admin+Security\n> - security works the same as Theme+Security\n> - a pg_admin role is required to set all non PGC_USERSET gucs, but some of those gucs *also* require one or more of the security roles\n> \n> The Theme+Security approach might be insufficient for extensibility, given that 3rd-party custom GUCs might not have a corresponding theme.
The Admin+Security approach appears better in this regard.\n> \n> Admin+Security seems sufficient, in conjunction with Chapman's idea of extensible check_hooks.\n\nI'm not entirely following what the difference here is that's being\nsuggested. At a high level, I like the idea of defining capabilities\nalong the lines of \"host security\", \"network security\", \"schema\nsecurity\". I do think we should consider maybe breaking those down a\nbit more but I don't know that we'd really need to have much more.\n\nIn general, I'm not really keen on such a generic role as 'pg_admin'. I\nwould have thought we'd have a matrix where we have categories for GUCs\nand roles which are allowed to modify those categories, with the\nadditional requirement of having host/network/schema capability for\nthose GUCs which imply that level of access. Having the low-level\ncapabilities plus the GUC groups would seem likely to cover most cases\nthat 3rd party extensions might wish for, in a pretty granular way,\nthough we could always consider adding more in the future.\n\nThanks,\n\nStephen", "msg_date": "Thu, 13 May 2021 13:41:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "\n\n> On May 13, 2021, at 10:41 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n>>> On May 12, 2021, at 12:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> - Group things by which section of postgresql.conf they're in, and\n>>> then further restrict some of them as security-sensitive. This is\n>>> reasonably close to what you've got, but not exactly what you've got.\n>>> One issue is that it risks separating things that are in practice not\n>>> useful to separate, creating more predefined roles to manage than we\n>>> really need.
With your division, what are the chances that someone\n>>> wants to grant pg_stats_settings but not pg_maintenance_settings or\n>>> the other way around?\n>> \n>> I think our conversation off-list was worth enough to reiterate here....\n>> \n>> When classifying GUC variables, the philosophy of classification needs to be consistent and easily understandable so that, among other considerations, all future GUC variables have a reasonable chance of being classified correctly by their patch authors and committers. The patch I posted falls short in this regard. You and I discussed two organizational options:\n>> \n>> Theme+Security:\n>> - security is considered as falling into three groupings: (a) host security, which includes files and permissions, running external commands, etc., (b) network security, which includes all connection options and authentications, and (c) schema security, which includes database internal object security like rls, object ownership, etc.\n>> - theme is based on the GUC config_group, either having one theme per config_group, or basing the theme on the prefix of the config_group such that, for example, QUERY_TUNING_METHOD, QUERY_TUNING_COST, QUERY_TUNING_GEQO, and QUERY_TUNING_OTHER could all be in one theme named \"pg_query_tuning\".\n>> \n>> Admin+Security\n>> - security works the same as Theme+Security\n>> - a pg_admin role is required to set all non PGC_USERSET gucs, but some of those gucs *also* require one or more of the security roles\n>> \n>> The Theme+Security approach might be insufficient for extensibility, given that 3rd-party custom GUCs might not have a corresponding theme.
At a high level, I like the idea of defining capabilities\n> along the lines of \"host security\", \"network security\", \"schema\n> security\". I do think we should consider maybe breaking those down a\n> bit more but I don't know that we'd really need to have much more.\n\nThe distinction that Theme+Security would make is that capabilities can be categorized by the area of the system:\n -- planner\n -- replication\n -- logging\n ...\nbut also by the security implications of what is being done:\n -- host\n -- schema\n -- network\n\nSo if a GUC variable is related to replication, but also impacts the security of libpq connections to the server, then you'd need to be a member of both pg_replication_role and pg_network_admin. If another GUC variable is related to logging, but also impacts the file permissions or ownership of the log file, you'd need to be a member of both pg_logging_role and pg_host_admin.\n\n\nThe Admin+Security idea would instead say that to SET any GUC variable other than PGC_USERSET gucs, or to ALTER SYSTEM SET on any GUC variable, you'd need to be a member of pg_admin_role. If the GUC variable also impacts host security (file permissions, etc.) you'd have to also be a member of pg_host_admin, or if it impacts security of connections to the server, you'd have to also be a member of pg_network_admin.\n\nI'm just making up names like \"pg_replication_role\" and such for illustration.\n\n> In general, I'm not really keen on such a generic role as 'pg_admin'. I\n> would have thought we'd have a matrix where we have categories for GUCs\n> and roles which are allowed to modify those categories, with the\n> additional requirement of having host/network/schema capability for\n> those GUCs which imply that level of access. 
\n\nYeah, that's the Theme+Security idea, or at least it seems so to me.\n\n> Having the low-level\n> capabilities plus the GUC groups would seem likely to cover most cases\n> that 3rd party extensions might wish for, in a pretty granular way,\n> though we could always consider adding more in the future.\n\nI'm imagining the security concerns splitting three ways, and the themes splitting on the order of ten different ways. We can quibble over how fine grained the themes should be. There is a simplicity argument to having them be one-to-one with the config_group.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 13 May 2021 11:42:27 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "On Thu, 2021-05-13 at 11:42 -0700, Mark Dilger wrote:\r\n> The distinction that Theme+Security would make is that capabilities\r\n> can be categorized by the area of the system:\r\n> -- planner\r\n> -- replication\r\n> -- logging\r\n> ...\r\n> but also by the security implications of what is being done:\r\n> -- host\r\n> -- schema\r\n> -- network\r\nSince the \"security\" buckets are being used for both proposals -- how\r\nyou would deal with overlap between them? 
When a GUC gives you enough\r\nhost access to bleed into the schema and network domains, does it get\r\nall three attributes assigned to it, and thus require membership in all\r\nthree roles?\r\n\r\n(Thanks, by the way, for this thread -- I think a \"capability system\"\r\nfor superuser access is a great idea.)\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 13 May 2021 19:18:32 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Thu, 2021-05-13 at 11:42 -0700, Mark Dilger wrote:\n> > The distinction that Theme+Security would make is that capabilities\n> > can be categorized by the area of the system:\n> > -- planner\n> > -- replication\n> > -- logging\n> > ...\n> > but also by the security implications of what is being done:\n> > -- host\n> > -- schema\n> > -- network\n> Since the \"security\" buckets are being used for both proposals -- how\n> you would deal with overlap between them?
When a GUC gives you enough\n> host access to bleed into the schema and network domains, does it get\n> all three attributes assigned to it, and thus require membership in all\n> three roles?\n\nThe question is about exactly what the operation is, not about what that\noperation might allow someone to be able to do by using that access.\n\n'network' might, in theory, allow someone to connect out on a port that\nhappens to have a bash shell that's running as root on the local box too\nwhich means that it \"could\" be used to gain 'host' access but that's not\nreally our concern.\n\nTo that point, if it's allowing access to run programs on the host then\n'host' is required, but I don't think we should also require 'network'\nfor 'run programs on the host' because someone might run 'curl' with\nthat access- that's an issue for the admin and the curl utility to\nfigure out.\n\n> (Thanks, by the way, for this thread -- I think a \"capability system\"\n> for superuser access is a great idea.)\n\nWe've been working in that direction for a long time. :)\n\nThanks,\n\nStephen", "msg_date": "Thu, 13 May 2021 15:27:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "\n\n> On May 13, 2021, at 12:18 PM, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Thu, 2021-05-13 at 11:42 -0700, Mark Dilger wrote:\n>> The distinction that Theme+Security would make is that capabilities\n>> can be categorized by the area of the system:\n>> -- planner\n>> -- replication\n>> -- logging\n>> ...\n>> but also by the security implications of what is being done:\n>> -- host\n>> -- schema\n>> -- network\n> Since the \"security\" buckets are being used for both proposals -- how\n> you would deal with overlap between them?
When a GUC gives you enough\n> host access to bleed into the schema and network domains, does it get\n> all three attributes assigned to it, and thus require membership in all\n> three roles?\n\nYeah, from a security standpoint, pg_host_admin basically gives everything away. I doubt service providers would give the \"host\" or \"network\" security to their tenants, but they would probably consider giving \"schema\" security to the tenants.\n\n> (Thanks, by the way, for this thread -- I think a \"capability system\"\n> for superuser access is a great idea.)\n\nI am happy to work on this, and appreciate feedback....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 13 May 2021 12:30:43 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Granting control of SUSET gucs to non-superusers" }, { "msg_contents": "> On May 13, 2021, at 12:30 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On May 13, 2021, at 12:18 PM, Jacob Champion <pchampion@vmware.com> wrote:\n>> \n>> On Thu, 2021-05-13 at 11:42 -0700, Mark Dilger wrote:\n>>> The distinction that Theme+Security would make is that capabilities\n>>> can be categorized by the area of the system:\n>>> -- planner\n>>> -- replication\n>>> -- logging\n>>> ...\n>>> but also by the security implications of what is being done:\n>>> -- host\n>>> -- schema\n>>> -- network\n>> Since the \"security\" buckets are being used for both proposals -- how\n>> you would deal with overlap between them? When a GUC gives you enough\n>> host access to bleed into the schema and network domains, does it get\n>> all three attributes assigned to it, and thus require membership in all\n>> three roles?\n> \n> Yeah, from a security standpoint, pg_host_admin basically gives everything away.
I doubt service providers would give the \"host\" or \"network\" security to their tenants, but they would probably consider giving \"schema\" security to the tenants.\n> \n>> (Thanks, by the way, for this thread -- I think a \"capability system\"\n>> for superuser access is a great idea.)\n> \n> I am happy to work on this, and appreciate feedback....\n\nPlease find attached five new patches each intended to reduce the number of administrative tasks that require superuser privileges.\n\nv3-0001 adds a new pg_logical_replication role with permission to manage publications and subscriptions.\n\nv3-0002 adds a new pg_host_security role with permission to manage extensions, event triggers and tablespaces.\n\nv3-0003 adds a new pg_network_security role with pemission to manage foreign servers and data wrappers.\n\nv3-0004 adds a new pg_database_security role with permission to perform many actions that would otherwise require superuser, so long as those actions do not compromise the security of the host or network. This role, along with pg_logical_replication, is intended to be safe to delegate to the tenant of a database provided as a service.\n\nv3-0005 associates all GUC variables with security roles and allows both SET and ALTER SYSTEM SET on those variables by users belonging to the necessary security role(s). 
This patch extends the significance of the pg_host_security, pg_network_security, and pg_database_security roles added in the previous patches, as those roles are associated with GUC variables that implicate the same security concerns.\n\nThese patches likely still need some adjustment, as there are a large number of security relevant permission decisions in here which some hackers may debate, but I think these are mature enough to solicit feedback.\n\nI admit right upfront that the regression tests guc_priv_admin and guc_priv_tenant in v3-0005 could be made to cover a subset of GUC variables rather than the full set of them, but I'm delaying pruning them down until I know if the rest of the patches are basically acceptable.\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 25 May 2021 13:33:54 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Tue, May 25, 2021 at 01:33:54PM -0700, Mark Dilger wrote:\n> v3-0001 adds a new pg_logical_replication role with permission to manage publications and subscriptions.\n\n> v3-0004 adds a new pg_database_security role with permission to perform many\n> actions that would otherwise require superuser, so long as those actions do\n> not compromise the security of the host or network. 
This role, along with\n> pg_logical_replication, is intended to be safe to delegate to the tenant of\n> a database provided as a service.\n\npg_logical_replication would not be safe to delegate that way:\nhttps://postgr.es/m/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 27 May 2021 23:06:18 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On May 27, 2021, at 11:06 PM, Noah Misch <noah@leadboat.com> wrote:\n> \n> On Tue, May 25, 2021 at 01:33:54PM -0700, Mark Dilger wrote:\n>> v3-0001 adds a new pg_logical_replication role with permission to manage publications and subscriptions.\n> \n>> v3-0004 adds a new pg_database_security role with permission to perform many\n>> actions that would otherwise require superuser, so long as those actions do\n>> not compromise the security of the host or network. This role, along with\n>> pg_logical_replication, is intended to be safe to delegate to the tenant of\n>> a database provided as a service.\n> \n> pg_logical_replication would not be safe to delegate that way:\n> https://postgr.es/m/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n\nOh, I agree that this patch set does not go the extra step to make it safe. You are quite right to push back, as my email was poorly worded. I should have said \"intended to be eventually made safe to delegate\". The idea is that the patch set addresses most places in the sources where we test for superuser and tests instead for (superuser || <SOME_ROLE>), and then uses that same set of roles to control who has sufficient privileges to set GUCs. The pg_host_security and pg_network_security roles are not intended to eventually be safe to delegate. Or at least, I can't see any clear path to getting there. 
The pg_database_security and pg_logical_replication roles should be ones we can make safe. If we can agree as a community which set of roles are appropriate, then we can have separate patches as needed for tightening the security around them.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 28 May 2021 10:42:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On 2021-05-26 05:33, Mark Dilger wrote:\n>> On May 13, 2021, at 12:30 PM, Mark Dilger \n>> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> \n>> \n>>> On May 13, 2021, at 12:18 PM, Jacob Champion <pchampion@vmware.com> \n>>> wrote:\n>>> \n>>> On Thu, 2021-05-13 at 11:42 -0700, Mark Dilger wrote:\n>>>> The distinction that Theme+Security would make is that capabilities\n>>>> can be categorized by the area of the system:\n>>>> -- planner\n>>>> -- replication\n>>>> -- logging\n>>>> ...\n>>>> but also by the security implications of what is being done:\n>>>> -- host\n>>>> -- schema\n>>>> -- network\n>>> Since the \"security\" buckets are being used for both proposals -- how\n>>> you would deal with overlap between them? When a GUC gives you enough\n>>> host access to bleed into the schema and network domains, does it get\n>>> all three attributes assigned to it, and thus require membership in \n>>> all\n>>> three roles?\n>> \n>> Yeah, from a security standpoint, pg_host_admin basically gives \n>> everything away. 
I doubt service providers would give the \"host\" or \n>> \"network\" security to their tenants, but they would probably consider \n>> giving \"schema\" security to the tenants.\n>> \n>>> (Thanks, by the way, for this thread -- I think a \"capability system\"\n>>> for superuser access is a great idea.)\n>> \n>> I am happy to work on this, and appreciate feedback....\n> \n> Please find attached five new patches each intended to reduce the\n> number of administrative tasks that require superuser privileges.\n> \n> v3-0001 adds a new pg_logical_replication role with permission to\n> manage publications and subscriptions.\n> \n> v3-0002 adds a new pg_host_security role with permission to manage\n> extensions, event triggers and tablespaces.\n> \n> v3-0003 adds a new pg_network_security role with pemission to manage\n> foreign servers and data wrappers.\n> \n> v3-0004 adds a new pg_database_security role with permission to\n> perform many actions that would otherwise require superuser, so long\n> as those actions do not compromise the security of the host or\n> network. This role, along with pg_logical_replication, is intended to\n> be safe to delegate to the tenant of a database provided as a service.\n> \n> v3-0005 associates all GUC variables with security roles and allows\n> both SET and ALTER SYSTEM SET on those variables by users belonging to\n> the necessary security role(s). 
This patch extends the significance\n> of the pg_host_security, pg_network_security, and pg_database_security\n> roles added in the previous patches, as those roles are associated\n> with GUC variables that implicate the same security concerns.\n> \n> These patches likely still need some adjustment, as there are a large\n> number of security relevant permission decisions in here which some\n> hackers may debate, but I think these are mature enough to solicit\n> feedback.\n> \n> I admit right upfront that the regression tests guc_priv_admin and\n> guc_priv_tenant in v3-0005 could be made to cover a subset of GUC\n> variables rather than the full set of them, but I'm delaying pruning\n> them down until I know if the rest of the patches are basically\n> acceptable.\n\nThanks for working on this topic, I appreciate it!\n\nBTW, do these patches enable non-superusers to create user with\nbypassrls?\nSince I failed to apply the patches and didn't test them,\nI may have overlooked something but I didn't find the\ncorresponding codes.\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 14 Jun 2021 21:51:39 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles" }, { "msg_contents": "\n\n> On Jun 14, 2021, at 5:51 AM, torikoshia <torikoshia@oss.nttdata.com> wrote:\n> \n> Thanks for working on this topic, I appreciate it!\n\nThank you for taking a look!\n\n> BTW, do these patches enable non-superusers to create user with\n> bypassrls?\n\nNo, I did not break out the ability to create such users.\n\n> Since I failed to apply the patches and didn't test them,\n> I may have overlooked something but I didn't find the\n> corresponding codes.\n\nDo you believe that functionality should be added? 
I have not thought much about that issue.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 07:53:01 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles" }, { "msg_contents": "On 2021-06-14 23:53, Mark Dilger wrote:\n>> On Jun 14, 2021, at 5:51 AM, torikoshia <torikoshia@oss.nttdata.com> \n>> wrote:\n>> \n>> Thanks for working on this topic, I appreciate it!\n> \n> Thank you for taking a look!\n> \n>> BTW, do these patches enable non-superusers to create user with\n>> bypassrls?\n> \n> No, I did not break out the ability to create such users.\n> \n>> Since I failed to apply the patches and didn't test them,\n>> I may have overlooked something but I didn't find the\n>> corresponding codes.\n> \n> Do you believe that functionality should be added? I have not thought\n> much about that issue.\n\nI just noticed that because I was looking into operations that can only \nbe done by superusers.\n\nIt might be somewhat inconvenient in PostgreSQL service providers that \ndon't give users superuser privileges, but at least I don't have a \nspecific demand for it.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:40:27 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles" }, { "msg_contents": "Greetings,\n\n* torikoshia (torikoshia@oss.nttdata.com) wrote:\n> On 2021-06-14 23:53, Mark Dilger wrote:\n> >>On Jun 14, 2021, at 5:51 AM, torikoshia <torikoshia@oss.nttdata.com>\n> >>wrote:\n> >>BTW, do these patches enable non-superusers to create user with\n> >>bypassrls?\n[...]\n> >Do you believe that functionality should be added? 
I have not thought\n> >much about that issue.\n> \n> I just noticed that because I was looking into operations that can only be\n> done by superusers.\n\nIn general, I agree with the sentiment that we should be providing a way\nto have non-superusers able to do things that only a superuser can do\ntoday. I'd love to get rid of all of the explicit superuser checks in\nthe backend except the one that makes a superuser a member of all roles.\n\nThanks,\n\nStephen", "msg_date": "Tue, 15 Jun 2021 11:39:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles" }, { "msg_contents": "Please find attached a new set of patches.\n\n> On May 27, 2021, at 11:06 PM, Noah Misch <noah@leadboat.com> wrote:\n> \n> pg_logical_replication would not be safe to delegate that way:\n> https://postgr.es/m/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n\nv3-0001 creates a pg_logical_replication role and respects privileges on tables in the table sync and apply workers. 
With this change, by creating a user in role pg_logical_replication, only giving that user INSERT, UPDATE, DELETE, or TRUNCATE privileges as appropriate on the intended tables, and having that user rather than a superuser create a subscription, one may prevent the replication of unwanted DML on these tables as well as the replication of any DML to any other tables.\n\n> On Jun 14, 2021, at 5:51 AM, torikoshia <torikoshia@oss.nttdata.com> wrote:\n> \n> BTW, do these patches enable non-superusers to create user with\n> bypassrls?\n\nv3-0004 creates a pg_database_security role and allows users in this role to create roles with BYPASSRLS.\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 29 Jun 2021 18:25:31 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "> On Jun 29, 2021, at 6:25 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Please find attached a new set of patches.\n\nAnd again, this time attaching a fifth patch which includes the work to allow users who belong to the right security role to SET and ALTER SYSTEM SET variables without being a superuser.\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 1 Jul 2021 08:59:02 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 1, 2021, at 20:59, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Jun 29, 2021, at 6:25 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> \n>> Please find attached a new set of patches.\n> \n> And again, this 
time attaching a fifth patch which includes the work to allow users who belong to the right security role to SET and ALTER SYSTEM SET variables without being a superuser.\n\nI'm not sure, but maybe we should allow replication role to change session_replication_role?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 5 Jul 2021 13:50:19 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 5, 2021, at 1:50 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I'm not sure, but maybe we should allow replication role to change session_replication_role?\n\nThanks, Andrey, for taking a look.\n\nYes, there is certainly some logic to that suggestion. The patch v4-0005 only delegates authority to perform ALTER SYSTEM SET to three roles: pg_database_security, pg_network_security, and pg_host_security. I don't mind expanding this list to include the replication attribute, but I am curious about opinions on the general design. There may be an advantage in keeping the list short. In particular, as the list gets longer, will it get harder to decide which role to associate with each new GUC that gets added? For third-party extensions, will it be harder for them to decide in any principled way which role to assign to each GUC that they add? There are multiple ways to cut up the set of all GUCs. database/host/network is not an entirely clean distinction, and perhaps database/host/network/replication is better, but I'm uncertain. 
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 6 Jul 2021 09:38:01 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, Jul 1, 2021 at 9:42 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jun 29, 2021, at 6:25 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > Please find attached a new set of patches.\n>\n> And again, this time attaching a fifth patch which includes the work to allow users who belong to the right security role to SET and ALTER SYSTEM SET variables without being a superuser.\n\nOne of the patches\nv4-0004-Add-default-role-for-database-operations.patch does not apply\non head, please post an updated patch:\npatching file src/backend/commands/dbcommands.c\nHunk #1 succeeded at 748 (offset -15 lines).\nHunk #2 FAILED at 780.\nHunk #3 succeeded at 1883 (offset -42 lines).\n1 out of 3 hunks FAILED -- saving rejects to file\nsrc/backend/commands/dbcommands.c.rej\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 22 Jul 2021 11:32:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Fri, May 28, 2021 at 1:42 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > pg_logical_replication would not be safe to delegate that way:\n> > https://postgr.es/m/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n>\n> Oh, I agree that this patch set does not go the extra step to make it safe. You are quite right to push back, as my email was poorly worded. I should have said \"intended to be eventually made safe to delegate\". 
The idea is that the patch set addresses most places in the sources where we test for superuser and tests instead for (superuser || <SOME_ROLE>), and then uses that same set of roles to control who has sufficient privileges to set GUCs. The pg_host_security and pg_network_security roles are not intended to eventually be safe to delegate. Or at least, I can't see any clear path to getting there. The pg_database_security and pg_logical_replication roles should be ones we can make safe. If we can agree as a community which set of roles are appropriate, then we can have separate patches as needed for tightening the security around them.\n\nI don't think that we want to commit a patch to add a\npg_logical_replication role that can \"eventually\" be made safe to\ndelegate to non-superusers. Whatever issues need to be fixed should be\nfixed first, and then this change can be considered afterwards. It\nseems like you try to fix at least some of the issues in the patch,\nbecause I see permission checks being added in\nsrc/backend/replication/logical/worker.c, and I don't think that\nshould happen in the same patch that adds the new predefined role. I\nalso think it should be accompanied not only by new test cases (which\nyou seem to have added, though I have not reviewed them in detail) but\nalso documentation changes (which seem to be missing, since the doc\nchanges are all about the new predefined role). 
This is a really\nsignificant behavior change to logical replication IMV and shouldn't\njust be slipped into some other patch.\n\nIt also seems based on Noah's comments and your response that there\nmight be some other issue here, and I haven't understood what that is,\nbut I think that should also be fixed separately, and first.\nConsidering all this, I would suggest not having this be patch #1 in\nyour series; make something come first that doesn't have\nprerequisites.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Jul 2021 11:29:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 22, 2021, at 8:29 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I don't think that we want to commit a patch to add a\n> pg_logical_replication role that can \"eventually\" be made safe to\n> delegate to non-superusers.\n\nCertainly not. What I meant on May 28 by \"eventually\" was that the patch set (posted May 25 and named \"v3\") had not yet implemented such security, as I was fishing for comments from the community about whether the basic division of superuser into these new roles was the right division. Having gotten little feedback on that, on June 29 I posted another patch set (confusingly also named \"v3\", my apologies) in which patch 0001 had expanded to include new security restrictions.\n\n> Whatever issues need to be fixed should be\n> fixed first, and then this change can be considered afterwards. 
It\n> seems like you try to fix at least some of the issues in the patch,\n> because I see permission checks being added in\n> src/backend/replication/logical/worker.c, and I don't think that\n> should happen in the same patch that adds the new predefined role.\n\nPrior to this patch, the logical replication workers run under the userid of the owner of the subscription. This is unchanged after the patch. The real difference is that prior to the patch, only superusers can own subscriptions, so checking permissions on tables during replication would be silly (though not harmful). The worker is assured of passing all such permission checks by virtue of being a superuser. After the patch, since subscription owners need not be superusers, the permission checks are no longer silly. There is no assurance that they have permission to apply changes to a table, so naturally that has to be checked, and it is. \n\nI don't really see this as two separate patches, since the addition of the permissions checks without the addition of non-superusers as logical replication workers is silly. But I don't mind that much, either. I'll break them in two for the next patch set.\n\n> I\n> also think it should be accompanied not only by new test cases (which\n> you seem to have added, though I have not reviewed them in detail) but\n> also documentation changes (which seem to be missing, since the doc\n> changes are all about the new predefined role). This is a really\n> significant behavior change to logical replication IMV and shouldn't\n> just be slipped into some other patch.\n\nI'm not sure what is meant by \"slipped into some other patch\", but I *think* you mean that the documentation changes should not be in a separate patch from the behavioral changes. I agree with that. 
I'll add documentation of the changes to logical replication in the same patch as the changes themselves.\n\n> It also seems based on Noah's comments and your response that there\n> might be some other issue here, and I haven't understood what that is,\n> but I think that should also be fixed separately, and first.\n> Considering all this, I would suggest not having this be patch #1 in\n> your series; make something come first that doesn't have\n> prerequisites.\n\nThe issue that gets thrown around in the email archive is that \"arbitrary code\" can be made to run on the subscriber side. As I understand the problem, this is because trigger functions can be created on tables with arbitrary code in them, and that code will be executed under the userid of the user who causes the trigger to fire during an insert/update/delete rather than as the user who created the trigger. This of course is not peculiar to logical replication; it is how triggers work generally. What is peculiar is that a non-superuser who can create tables, triggers, publications and subscriptions can get the logical replication worker to perform inserts/updates/deletes on those tables, thereby firing those triggers, and executing the trigger code as superuser. That is ordinarily not something that a user can do simply by creating a table with a trigger, since there would be no mechanism to force the superuser to perform operations on the table.\n\nAfter patch 0001 (which will be split in the next patch set, but hasn't been split yet) the user who creates the subscription is also the user whose permissions are checked when operating on the table and executing the trigger. This closes the security hole, so far as I am aware. I would very much like more eyeballs on this patch, and if anybody sees why this is an insufficient solution, please speak up. But it's not as if I punted the security issue down the road to some ill-defined future patch. 
On the contrary, this patch both creates the ability to delegate subscription creation authority to a non-superuser and closes the security hole which that would otherwise entail, or at least, that is the intent.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Jul 2021 10:29:13 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, Jul 1, 2021 at 11:59 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote\n> > On Jun 29, 2021, at 6:25 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > Please find attached a new set of patches.\n>\n> And again, this time attaching a fifth patch which includes the work to allow users who belong to the right security role to SET and ALTER SYSTEM SET variables without being a superuser.\n\nIn general, I don't like this kind of coding:\n\n- /* Superusers bypass all permission checking. */\n- if (superuser_arg(roleid))\n+ /*\n+ * Superusers and members of the pg_host_security role bypass all\n+ * permission checking.\n+ */\n+ if (superuser_arg(roleid) ||\n+ has_privs_of_role(roleid, ROLE_PG_HOST_SECURITY))\n return true;\n\n From a theoretical point of view, the superuser has the privileges of\nevery role, so this is redundant. From a coding point of view,\nhas_privs_of_role() does a superuser_arg() test already, and it seems\nlikely to be a loser to do it twice. Arguably, we should take the same\napproach in code comments and documentation, rather than saying \"You\nmust be superuser or a member of role XYZ\" everywhere, but there seems\nto be some existing precedent for mentioning superuser explicitly in\nthose cases, so maybe it's fine. 
I think it's kind of weird though,\nbecause in other places we don't do it, e.g.:\n\n unique or primary key constraint in the referenced table. The user\n must have <literal>REFERENCES</literal> permission on the referenced table\n (either the whole table, or the specific referenced columns). The\n\nWe could have said \"or be superuser\" there, but we didn't. It doesn't\nseem efficient to say \"or be superuser\" every time we mention a\nrequired permission, rather than just taking it as a given that the\nsuperuser has all permissions. Yet, again, there's some precedent for\nyour approach:\n\n To create a database, you must be a superuser or have the special\n <literal>CREATEDB</literal> privilege.\n See <xref linkend=\"sql-createrole\"/>.\n\nSo I don't know. At the very least I think we should not do it as an\n\"or\" in the code; what we want to do in comments and documentation I'm\nless sure.\n\nI think 0002 needs more explanation of the theory behind the specific\npermissions granted. It enumerates what they are in both the commit\nmessage and the documentation, but no theory is offered explaining why\nthese permissions are included and not others. I think my idea was\nthat \"host\" security would encompass everything that touches the\nfilesystem on the server where the database is running. I agree that\nthis naturally includes the ability to create a tablespace and\nprobably, at least for symmetry, the ability to drop it. But, you\ncan't ALTER a tablespace's location, so I see no reason why that\nshould be tied to this permission. I think it's arguable whether it\nincludes creating and dropping extensions, but I would argue that it\nshouldn't. True, the extensions include SQL files installed in the\nfilesystem, and shared libraries also installed on the filesystem, but\nultimately everything you ever do involves files in some way, so I\ndon't see that as a very compelling argument. 
These operations on\nextensions don't let you modify the filesystem in any way, and they\nonly let you read from carefully sandboxed things that are designed\nfor precisely that purpose, so the system administrator really already\nhas good control. The sorts of things I'd include in this category are\nthings like server-side COPY FROM or COPY TO. When we come to the\nthird thing the patch includes in this category, creating and dropping\nevent triggers, I *really* don't understand why that one is considered\nhost security. That one isn't touching the filesystem even to the\nextent that the extension stuff is; it seems to me to be purely\ninternal to the database. Yeah, OK, that could involve writing files\nbecause we make catalog entries, but so could any DDL. Now, maybe\nthere's a theory of operation that you have in mind that makes this\nall make more sense the way you have it, but if so, it seems not to be\nspelled out anywhere in the patch itself or the commit message you\nwrote for it, so I'm in the dark.\n\nI also tend to think that functions like pg_read_file() ought to come\nwith execute permission pre-granted, with grant option, to\npg_host_security, and perhaps similarly for adminpack.\n\n From the department of nitpicking, all four of your commit messages\nbegin with the word \"Reducing\" which I think should be just \"Reduce\".\nOtherwise, at least to me, it doesn't look like a proper sentence.\n\n0004 has this kind of thing all over the place:\n\n- /* Superusers bypass all permission checking. */\n- if (superuser_arg(roleid))\n+ /*\n+ * Superusers and members of the pg_database_security role bypass all\n+ * permission checking.\n+ */\n\nIf that were true, the pg_database_security role would be equivalent\nto superuser, which isn't the intent, so I think the comment needs\nmore thought. 
Also, I think that somewhere in the patch, either in\ncode comments or at the very least in the commit message, there needs\nto be some justification of why the approach taken here is correct.\nLike, the idea here is that if you have pg_database_security, you can\ndo whatever you want to objects within the database as long as you\ndon't try to touch the network or the host filesystem. So that would\nimply that you can do anything you like to databases. So it sorta\nmakes sense to me that the patch goes about that by changing\npg_database_aclmask(). But I would feel better if there were some\nexplanation somewhere of why such a change is expecting to allow\nprecisely DDL-related database commands and nothing else. I think\nthat's true if pg_database_aclmask() is used for that purpose and not\nfor any other purpose, which may well be true, but I think it would be\nbest to be more explicit about the assumptions. I'm sure we don't want\na lengthy comment about this in every pg_*_aclmask() function, but I\nthink we should have a general explanation of it somewhere.\n\nI also think that in this case, as in 0002 and 0003, we really need\nsome documentation of what this new role is all about. The\ndocumentation changes in 0004 are really extremely minimal. Users need\nto understand what they can expect to happen if they grant this new\nrole to someone, and hackers need to understand how to update the code\nthe next time they're patching something that interacts with this, and\nI do not think that what you've got here now is going to be sufficient\nto meet the needs of either group. (I realize that you may have been\nplanning to wait until there was more consensus to flesh this out, but\nbecause the definitional issues here are so tricky, I don't think it\ncan wait in this case.)\n\nIn 0005, I do not think the function name role_has_privileges() is\nsufficiently specific. Maybe role_can_change_guc()? Also, I think here\nagain you should draft some documentation changes. 
We're going to need\nto indicate a category for every GUC somehow, and I'm not quite sure\nhow we're going to do that. If you want to just do a few examples for\nnow and also provide some general documentation on how the system is\nintended to work, we can wait to do every GUC until we settle on how\nto categorize everything, but I think we need to see what the general\nplan looks like there. Consider the way that we currently indicate (a)\nthe GUC's data type and (b) when the GUC can be changed. The former is\nshown in fixed point type in parentheses after the GUC name. The\nlatter is indicated by adding a sentence such as \"This parameter can\nonly be set at server start.\" to everything that is PGC_POSTMASTER (I\nthink). What are we going to do with this new categorization?\n\nI see that you've categorized things like restart_after_crash and\nzero_damaged_pages as GUC_HOST_SECURITY. I think I like that, but it\nagain begs the definitional question. If host security basically means\ntouching the server filesystem, well, restart_after_crash doesn't. It\ncan be justified from the perspective that restart_after_crash is a\nproperty of the host system, not something strictly internal to the\ndatabase. So when you go to write definitions of what these categories\nare actually supposed to mean, they've got to be written in such a way\nthat these categorizations end up looking correct. Or else these have\ngot to be recategorized somehow. Anyway the point is that it \"feels\ngood\" but as you have it without the definitions it's hard to really\nknow.\n\nThe categorization of the logging GUCs looks haphazard to me. Why is\nlog_duration GUC_HOST_SECURITY but debug_print_parse is\nGUC_DATABASE_SECURITY, for example? Again, we need clear definitions,\nbut I'm inclined to think this doesn't look great. I even less\nunderstand why autovacuum is classified as GUC_HOST_SECURITY. 
That\nseems like it's probably database security, while db_user_namespace\nfeels to me like network security. Another oddity is the replication\nsettings, which seem to be mostly classified as GUC_HOST_SECURITY. I\ncan see why you don't want to make them GUC_DATABASE_SECURITY, but eh,\nwhat do they have to do with host security? It's similarly odd to me\nthat hash_mem_multiplier is GUC_DATABASE_SECURITY while work_mem, for\nwhich it is a multiplier, is GUC_HOST_SECURITY.\n\nPerhaps we need to break this up into a few more buckets to make sense\nof it; I'm not really sure. For example, we could add buckets for\ncontrolling what goes to the server log, resource utilization, system\nintegrity, and split inbound and outbound network security. Well, now\nI just turned your three predefined roles into seven, which maybe is a\nbad idea, but perhaps it's worth it if it gets us to a place where we\ncan clearly categorize everything. On the other hand, maybe if we did\nthat there'd just be a new set of things that look a little ambiguous.\nI don't know. I guess trying to write a good set of definitions might\nbe job one.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Jul 2021 14:21:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, Jul 22, 2021 at 1:29 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Certainly not. What I meant on May 28 by \"eventually\" was that the patch set (posted May 25 and named \"v3\") had not yet implemented such security, as I was fishing for comments from the community about whether the basic division of superuser into these new roles was the right division. 
Having gotten little feedback on that, on June 29 I posted another patch set (confusingly also named \"v3\", my apologies) in which patch 0001 had expanded to include new security restrictions.\n\nOh.\n\n> Prior to this patch, the logical replication workers run under the userid of the owner of the subscription. This is unchanged after the patch. The real difference is that prior to the patch, only superusers can own subscriptions, so checking permissions on tables during replication would be silly (though not harmful). The worker is assured of passing all such permission checks by virtue of being a superuser. After the patch, since subscription owners need not be superusers, the permission checks are no longer silly. There is no assurance that they have permission to apply changes to a table, so naturally that has to be checked, and it is.\n\nAren't you supposing that the set of superusers never changes? Unless\nwe have some code for this that we don't have elsewhere, a superuser\ncould create a subscription and then be de-superuser'd, or the\nsubscription's owner could be altered.\n\n> > I\n> > also think it should be accompanied not only by new test cases (which\n> > you seem to have added, though I have not reviewed them in detail) but\n> > also documentation changes (which seem to be missing, since the doc\n> > changes are all about the new predefined role). This is a really\n> > significant behavior change to logical replication IMV and shouldn't\n> > just be slipped into some other patch.\n>\n> I'm not sure what is meant by \"slipped into some other patch\", but I *think* you mean that the documentation changes should not be in a separate patch from the behavioral changes. I agree with that. 
I'll add documentation of the changes to logical replication in the same patch as the changes themselves.\n\nI just meant that I think the behavioral change to logical replication\nis significant in its own right and should be a separate patch.\nPerhaps it's not as significant as I thought, but I still think it\nshould be made separately and likely documented as an incompatibility\nwith previous releases, unless I'm still confused.\n\n> > It also seems based on Noah's comments and your response that there\n> > might be some other issue here, and I haven't understood what that is,\n> > but I think that should also be fixed separately, and first.\n> > Considering all this, I would suggest not having this be patch #1 in\n> > your series; make something come first that doesn't have\n> > prerequisites.\n>\n> The issue that gets thrown around in the email archive is that \"arbitrary code\" can be made to run on the subscriber side. As I understand the problem, this is because trigger functions can be created on tables with arbitrary code in them, and that code will be executed under the userid of the user who causes the trigger to fire during an insert/update/delete rather than as the user who created the trigger. This of course is not peculiar to logical replication; it is how triggers work generally. What is peculiar is that a non-superuser who can create tables, triggers, publications and subscriptions can get the logical replication worker to perform inserts/updates/deletes on those tables, thereby firing those triggers, and executing the trigger code as superuser. 
That is ordinarily not something that a user can do simply by creating a table with a trigger, since there would be no mechanism to force the superuser to perform operations on the table.\n>\n> After patch 0001 (which will be split in the next patch set, but hasn't been split yet) the user who creates the subscription is also the user whose permissions are checked when operating on the table and executing the trigger. This closes the security hole, so far as I am aware. I would very much like more eyeballs on this patch, and if anybody sees why this is an insufficient solution, please speak up. But it's not as if I punted the security issue down the road to some ill-defined future patch. On the contrary, this patch both creates the ability to delegate subscription creation authority to a non-superuser and closes the security hole which that would otherwise entail, or at least, that is the intent.\n\nOK. I thought Noah must be talking about some other problem, because\non May 28th you wrote \"Oh, I agree that this patch set does not go the\nextra step to make it safe\" and I failed to understand that you\nthought you'd addressed this in v4.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Jul 2021 14:35:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Jul 1, 2021 at 11:59 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote\n> > > On Jun 29, 2021, at 6:25 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > > Please find attached a new set of patches.\n> >\n> > And again, this time attaching a fifth patch which includes the work to allow users who belong to the right security role to SET and ALTER SYSTEM SET variables without being a superuser.\n> \n> In general, I don't like 
this kind of coding:\n> \n> - /* Superusers bypass all permission checking. */\n> - if (superuser_arg(roleid))\n> + /*\n> + * Superusers and members of the pg_host_security role bypass all\n> + * permission checking.\n> + */\n> + if (superuser_arg(roleid) ||\n> + has_privs_of_role(roleid, ROLE_PG_HOST_SECURITY))\n> return true;\n> \n> >From a theoretical point of view, the superuser has the privileges of\n> every role, so this is redundant. From a coding point of view,\n> has_privs_of_role() does a superuser_arg() test already, and it seems\n> likely to be a loser to do it twice. Arguably, we should take the same\n> approach in code comments and documentation, rather than saying \"You\n> must be superuser or a member of role XYZ\" everywhere, but there seems\n> to be some existing precedent for mentioning superuser explicitly in\n> those cases, so maybe it's fine. I think it's kind of weird though,\n> because in other places we don't do it, e.g.:\n> \n> unique or primary key constraint in the referenced table. The user\n> must have <literal>REFERENCES</literal> permission on the referenced table\n> (either the whole table, or the specific referenced columns). The\n\nI tend to agree that it'd be better to clean this up and just use\nhas_privs_of_role() and not include explicit superuser checks. I don't\nthink we need to constantly re-remind ourselves in the code that\nsuperusers are members of all roles.\n\n> We could have said \"or be superuser\" there, but we didn't. It doesn't\n> seem efficient to say \"or be superuser\" every time we mention a\n> required permission, rather than just taking it as a given that the\n> superuser has all permissions. Yet, again, there's some precedent for\n> your approach:\n> \n> To create a database, you must be a superuser or have the special\n> <literal>CREATEDB</literal> privilege.\n> See <xref linkend=\"sql-createrole\"/>.\n\nI'm a bit on the fence about the documentation side... 
I could be\nconvinced either way, really, but I generally agree that it'd be good to\npick one and be consistent. I don't think the places where we do/don't\nmention it were done for any particular reason.\n\n> So I don't know. At the very least I think we should not do it as an\n> \"or\" in the code; what we want to do in comments and documentation I'm\n> less sure.\n\nAgreed.\n\n> I think 0002 needs more explanation of the theory behind the specific\n> permissions granted. It enumerates what they are in both the commit\n> message and the documentation, but no theory is offered explaining why\n> these permissions are included and not others. I think my idea was\n> that \"host\" security would encompass everything that touches the\n> filesystem on the server where the database is running. I agree that\n> this naturally includes the ability to create a tablespace and\n> probably, at least for symmetry, the ability to drop it. But, you\n> can't ALTER a tablespace's location, so I see no reason why that\n> should be tied to this permission. I think it's arguable whether it\n> includes creating and dropping extensions, but I would argue that it\n> shouldn't. True, the extensions include SQL files installed in the\n> filesystem, and shared libraries also installed on the filesystem, but\n> ultimately everything you ever do involves files in some way, so I\n> don't see that as a very compelling argument. These operations on\n> extensions don't let you modify the filesystem in any way, and they\n> only let you read from carefully sandboxed things that are designed\n> for precisely that purpose, so the system administrator really already\n> has good control. The sorts of things I'd include in this category are\n> things like server-side COPY FROM or COPY TO. When we come to the\n> third thing the patch includes in this category, creating and dropping\n> event triggers, I *really* don't understand why that one is considered\n> host security. 
That one isn't touching the filesystem even to the\n> extent that the extension stuff is; it seems to me to be purely\n> internal to the database. Yeah, OK, that could involve writing files\n> because we make catalog entries, but so could any DDL. Now, maybe\n> there's a theory of operation that you have in mind that makes this\n> all make more sense the way you have it, but if so, it seems not to be\n> spelled out anywhere in the patch itself or the commit message you\n> wrote for it, so I'm in the dark.\n\nI agree that installing extensions and event triggers are different\nthings, and if what we're left with is \"create tablespaces\" then maybe\nwe should have that as an explicit permission on its own.\n\n> I also tend to think that functions like pg_read_file() ought to come\n> with execute permission pre-granted, with grant option, to\n> pg_host_security, and perhaps similarly for adminpack.\n\nWhen it comes to these- we already have pg_read_server_files and\npg_write_server_files, so I'm not sure I see why it'd make sense to have\nanother thing that grants filesystem access like this..?\n\n> 0004 has this kind of thing all over the place:\n> \n> - /* Superusers bypass all permission checking. */\n> - if (superuser_arg(roleid))\n> + /*\n> + * Superusers and members of the pg_database_security role bypass all\n> + * permission checking.\n> + */\n> \n> If that were true, the pg_database_security role would be equivalent\n> to superuser, which isn't the intent, so I think the comment needs\n> more thought. Also, I think that somewhere in the patch, either in\n> code comments or at the very least in the commit message, there needs\n> to be some justification of why the approach taken here is correct.\n> Like, the idea here is that if you have pg_database_security, you can\n> do whatever you want to objects within the database as long as you\n> don't try to touch the network or the host filesystem. 
So that would\n> imply that you can do anything you like to databases. So it sorta\n> makes sense to me that the patch goes about that by changing\n> pg_database_aclmask(). But I would feel better if there were some\n> explanation somewhere of why such a change is expecting to allow\n> precisely DDL-related database commands and nothing else. I think\n> that's true if pg_database_aclmask() is used for that purpose and not\n> for any other purpose, which may well be true, but I think it would be\n> best to be more explicit about the assumptions. I'm sure we don't want\n> a lengthy comment about this in every pg_*_aclmask() function, but I\n> think we should have a general explanation of it somewhere.\n\nConsidering it's a role, and roles aren't specific to databases, I don't\nthink \"pg_database_security\" is a good name. To me, at least, it\nimplies a way to allow a given user the ability to do most everything in\na specific database, but that's not the case at all, and further saying\n'security' comes across as suggesting that this would be a role granted\nto someone setting up permissions or otherwise modifying the database's\nsecurity aspects, not someone who is being given full access to every\nobject in the system.\n\nI'm also left wondering if this doesn't end up introducing opportunities\nfor someone with this role to become superuser pretty easily. Maybe it\ndoes and maybe we're ok with that, but I would think that it'd be really\nuseful to have a role that can't become superuser easily which can\naccess/modify most objects in the system.\n\n> I also think that in this case, as in 0002 and 0003, we really need\n> some documentation of what this new role is all about. The\n> documentation changes in 0004 are really extremely minimal. 
Users need\n> to understand what they can expect to happen if they grant this new\n> role to someone, and hackers need to understand how to update the code\n> the next time they're patching something that interacts with this, and\n> I do not think that what you've got here now is going to be sufficient\n> to meet the needs of either group. (I realize that you may have been\n> planning to wait until there was more consensus to flesh this out, but\n> because the definitional issues here are so tricky, I don't think it\n> can wait in this case.)\n\nAgreed.\n\n> In 0005, I do not think the function name role_has_privileges() is\n> sufficiently specific. Maybe role_can_change_guc()? Also, I think here\n> again you should draft some documentation changes. We're going to need\n> to indicate a category for every GUC somehow, and I'm not quite sure\n> how we're going to do that. If you want to just do a few examples for\n> now and also provide some general documentation on how the system is\n> intended to work, we can wait to do every GUC until we settle on how\n> to categorize everything, but I think we need to see what the general\n> plan looks like there. Consider the way that we currently indicate (a)\n> the GUC's data type and (b) when the GUC can be changed. The former is\n> shown in fixed point type in parentheses after the GUC name. The\n> latter is indicated by adding a sentence such as \"This parameter can\n> only be set at server start.\" to everything that is PGC_POSTMASTER (I\n> think). What are we going to do with this new categorization?\n\nAgreed.\n\n> I see that you've categorized things like restart_after_crash and\n> zero_damaged_pages as GUC_HOST_SECURITY. I think I like that, but it\n> again begs the definitional question. If host security basically means\n> touching the server filesystem, well, restart_after_crash doesn't. 
It\n> can be justified from the perspective that restart_after_crash is a\n> property of the host system, not something strictly internal to the\n> database. So when you go to write definitions of what these categories\n> are actually supposed to mean, they've got to be written in such a way\n> that these categorizations end up looking correct. Or else these have\n> got to be recategorized somehow. Anyway the point is that it \"feels\n> good\" but as you have it without the definitions it's hard to really\n> know.\n\nI don't really see either of those as being filesystem changing things.\n\n> The categorization of the logging GUCs looks haphazard to me. Why is\n> log_duration GUC_HOST_SECURITY but debug_print_parse is\n> GUC_DATABASE_SECURITY, for example? Again, we need clear definitions,\n> but I'm inclined to think this doesn't look great. I even less\n> understand why autovacuum is classified as GUC_HOST_SECURITY. That\n> seems like it's probably database security, while db_user_namespace\n> feels to me like network security. Another oddity is the replication\n> settings, which seem to be mostly classified as GUC_HOST_SECURITY. I\n> can see why you don't want to make them GUC_DATABASE_SECURITY, but eh,\n> what do they have to do with host security? It's similarly odd to me\n> that hash_mem_multiplier is GUC_DATABASE_SECURITY while work_mem, for\n> which it is a multiplier, is GUC_HOST_SECURITY.\n\n> Perhaps we need to break this up into a few more buckets to make sense\n> of it; I'm not really sure. For example, we could add buckets for\n> controlling what goes to the server log, resource utilization, system\n> integrity, and split inbound and outbound network security. Well, now\n> I just turned your three predefined roles into seven, which maybe is a\n> bad idea, but perhaps it's worth it if it gets us to a place where we\n> can clearly categorize everything. 
On the other hand, maybe if we did\n> that there'd just be a new set of things that look a little ambiguous.\n> I don't know. I guess trying to write a good set of definitions might\n> be job one.\n\nIt's often the case that logging/auditing are handled by a different\ngroup than those who might be creating/modifying objects. Yet another\ngroup is often the group that actually handles granting access. Broad\nclasses being:\n\n- Users\n- Auditors (controls what's logged, what is audited, etc)\n- Security (controls who has access to what)\n\nNote that 'security' and 'auditors' shouldn't have access to the actual\ndata either, or have the ability to do things like modify data. Not\nsure all of this quite fits what we're going for here but figured it\nmight help with sorting out what other buckets we need.\n\nThanks,\n\nStephen", "msg_date": "Thu, 22 Jul 2021 15:00:44 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, Jul 22, 2021 at 3:00 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I'm a bit on the fence about the documentation side... I could be\n> convinced either way, really, but I generally agree that it'd be good to\n> pick one and be consistent. I don't think the places where we do/don't\n> mention it were done for any particular reason.\n>\n> > So I don't know. 
At the very least I think we should not do it as an\n> > \"or\" in the code; what we want to do in comments and documentation I'm\n> > less sure.\n>\n> Agreed.\n\nThanks for weighing in.\n\n> > I also tend to think that functions like pg_read_file() ought to come\n> > with execute permission pre-granted, with grant option, to\n> > pg_host_security, and perhaps similarly for adminpack.\n>\n> When it comes to these- we already have pg_read_server_files and\n> pg_write_server_files, so I'm not sure I see why it'd make sense to have\n> another thing that grants filesystem access like this..?\n\nIt's awkward. I think that we can't afford to create a separate\npredefined role for every single thing that someone could\ntheoretically want to sever, because then we'll have a zillion of them\nand it will be unmaintainable. So the idea was to try to break up\neverything someone might want to do either via DDL or by setting GUCs\ninto one of three categories: internal to the database\n(pg_database_security), facing outward toward the network\n(pg_network_security), and facing inward toward the host\n(pg_host_security). If we didn't have any predefined security-related\nroles already, this would still have complications, but as things\nstand it has more, because as you point out, pg_read_server_files\noverlaps with pg_host_security. But what do we do about that? Neither\npg_read_server_files nor pg_write_server_files covers the ability to\ncreate tablespaces or set log_directory, but I think it's more\ndesirable to lump those things together in one bucket than to have a\nton of separate buckets for each individual thing. I guess one option\nwould be to grant the existing roles pg_read_server_files and\npg_write_server_files to the new pg_host_security role, or whatever we\ndecide to call it (pg_access_server_filesystem?\npg_administer_server_files? pg_hack_postgres_account?). But I'm open\nto suggestions. 
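The "grant the existing roles to the new role" option works because role membership is transitive. A small model of the recursive walk that has_privs_of_role() performs (illustrative Python, hypothetical user name, not the server's C code):

```python
# Toy model of transitive role membership: granting the existing
# pg_read_server_files / pg_write_server_files roles to an umbrella
# pg_host_security role lets members of the umbrella role inherit
# them. The user name is hypothetical; the walk only mimics what
# has_privs_of_role() does, it is not the server's implementation.

grants = {}  # role -> set of roles granted to it

def grant(role, to):
    grants.setdefault(to, set()).add(role)

def has_privs_of_role(role, target):
    if role == target:
        return True
    return any(has_privs_of_role(g, target) for g in grants.get(role, ()))

grant("pg_read_server_files", to="pg_host_security")
grant("pg_write_server_files", to="pg_host_security")
grant("pg_host_security", to="filesystem_admin")  # hypothetical user
```

With that wiring, one GRANT to the umbrella role gives its members both existing file-access roles, without adding another parallel role for the same capability.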
See also below here on the overall intent.\n\n> I'm also left wondering if this doesn't end up introducing opportunities\n> for someone with this role to become superuser pretty easily. Maybe it\n> does and maybe we're ok with that, but I would think that it'd be really\n> useful to have a role that can't become superuser easily which can\n> access/modify most objects in the system.\n\nCreating something like that is precisely the intention here because,\nlike you, I think that would be extremely handy. If it's possible for\nthat role to become superuser, we've lost the plot.\n\n> I don't really see either of those as being filesystem changing things.\n\nI think the thought process here was that if you are a managed\nservices provider you would not want the user to change\nzero_damaged_pages or wal_sync_method or things like that because that\nstuff is the provider's responsibility; similar for the recovery\nsettings. But yes ... we need something better here, I think.\n\n> It's often the case that logging/auditing are handled by a different\n> group than those who might be creating/modifying objects. Yet another\n> group is often the group that actually handles granting access. Broad\n> classes being:\n>\n> - Users\n> - Auditors (controls what's logged, what is audited, etc)\n> - Security (controls who has access to what)\n>\n> Note that 'security' and 'auditors' shouldn't have access to the actual\n> data either, or have the ability to do things like modify data. Not\n> sure all of this quite fits what we're going for here but figured it\n> might help with sorting out what other buckets we need.\n\nHmm, interesting point. 
The division between the \"security\" group, who\nI suppose would be the DBA, and the \"auditors\" group is one I had\nthought about only slightly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Jul 2021 16:01:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 22, 2021, at 11:21 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> When we come to the\n> third thing the patch includes in this category, creating and dropping\n> event triggers, I *really* don't understand why that one is considered\n> host security. That one isn't touching the filesystem even to the\n> extent that the extension stuff is; it seems to me to be purely\n> internal to the database. Yeah, OK, that could involve writing files\n> because we make catalog entries, but so could any DDL. Now, maybe\n> there's a theory of operation that you have in mind that makes this\n> all make more sense the way you have it, but if so, it seems not to be\n> spelled out anywhere in the patch itself or the commit message you\n> wrote for it, so I'm in the dark.\n\nI agree with the need to document the theory behind all this. 
Event triggers are dangerous because they can trap a superuser into executing code they do not intend:\n\n create table super_special (big_darn_secret integer);\n revoke all privileges on super_special from public;\n insert into super_special values (42);\n -- imagine that \"untrustworthy_bob\" is a member of as-yet unimplemented role\n -- pg_database_security, and has the ability to create event triggers; to simulate\n -- that, we'll put bob into superuser temporarily while the event trigger is\n -- created, then remove superuser.\n create role untrustworthy_bob superuser;\n set session authorization untrustworthy_bob;\n create function update_super_special_big_darn_secret() returns event_trigger as $$\n begin\n -- note that non-superusers should draw an error if they try this\n update super_special set big_darn_secret = big_darn_secret + 1;\n\n -- note that non-pg_host_security roles should draw an error if they try this\n perform pg_rotate_logfile();\n end;\n $$ language plpgsql;\n create event trigger secret_sauce on sql_drop\n execute procedure update_super_special_big_darn_secret();\n reset session authorization;\n alter role untrustworthy_bob nosuperuser;\n set session authorization untrustworthy_bob;\n update super_special set big_darn_secret = big_darn_secret + 1;\n ERROR: permission denied for table super_special\n select pg_rotate_logfile();\n ERROR: permission denied for function pg_rotate_logfile\n reset session authorization;\n select * from super_special;\n big_darn_secret\n -----------------\n 42\n (1 row)\n\n create table foo_tmp (t integer);\n drop table foo_tmp;\n WARNING: rotation not possible because log collection not active\n select * from super_special;\n big_darn_secret\n -----------------\n 43\n (1 row)\n\nWhen the superuser dropped table foo_tmp, pg_rotate_logfile() got called, as did an update of table super_special. Any other function could have been called from there instead. That's a big deal. 
If creating event triggers is delegated to nonsuperuser members of pg_database_security, I think it means that pg_database_security has a privilege escalation path to become superuser. Since pg_host_security already has such an escalation path, it makes more sense to me to use this role for event trigger creation. The argument for dropping event triggers is less clear, but it seems that event triggers may be used to implement an auditing system, and we wouldn't want pg_database_security to be enough privilege to circumvent the auditing system.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Jul 2021 14:35:22 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, Jul 22, 2021 at 5:35 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I agree with the need to document the theory behind all this. Event triggers are dangerous because they can trap a superuser into executing code they do not intend:\n\nThat's true. Regular triggers and views can also do that, and so can\noperators and functions that capture queries intended to reference\nsystem-defined objects. It's already the case that a superuser who\nexecutes any optimizable query potentially compromises the system\nunless (a) they first sanitize their search_path and (b) the query is\na SELECT that involves no views or an INSERT, UPDATE, or DELETE on a\ntable that has no triggers. However, event triggers owned by\nnon-superusers would extend the hazard to nearly all DDL commands.\n\nClassifying event triggers as \"host\" security doesn't seem like the\nright solution, though. 
I think the only way that fits into the \"host\"\nsecurity category is if you use some kind of definition that works by\nexclusion: things that are \"database\" security shouldn't have\nconsequence X, therefore anything that does must go in some other\ncategory. I think that kind of definition is very hard for people to\nunderstand, though. It's much nicer to have definitions that tell you\nwhat does go into a category than what doesn't.\n\nI suppose one alternative we could consider is just leaving some\nthings uncategorized. I had hoped we could put everything in a bucket\nso that the superuser role was really divided into pieces and not just\nhaving bits chipped off of it, but maybe that's too ambitious. The\nmore we drift into saying that some things like \"well this has to be\nhost security because database security breaks the model\" the more\nuseless host security is likely to be as a concept. It's not a related\ncollection of things any more; it's just whatever didn't fit in the\nother bucket. And nobody wants to GRANT a_bunch_of_other_stuff.\n\nHowever, I also wonder whether we should think about engineering a\nsolution to this problem. For example, we have a row_security GUC. If\nyou set it to off, no RLS policies will be applied; if applying one\nwould have been mandatory, you will get an error instead. I don't\nthink that exact design will work here because there's no such thing\nas permission to bypass event triggers, but we could have a\nfire_event_triggers GUC (or whatever we call it) which is normally\n\"on\" but privileged users can turn it off. Now if you're worried about\nthis problem, you have an easy way around it.\n\nAnd I think that's a good illustration of why it's a bad idea to\ncategorize things according to whether or not they have a certain\nconsequence. Suppose we said, ah well, let's make event triggers\n\"host\" security because it's too dangerous to make them \"database\"\nsecurity. 
Well then say someone comes along and implements the feature\nI just described, reducing the danger. Do we then reclassify that\nfeature as \"database\" security? The original rationale for making it\nsomething else is no longer valid, but on the other hand, what about\nbackward compatibility? Classifying things based on what they do,\nrather than on the ultimate consequences that they may have, avoids\nthis kind of conundrum.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Jul 2021 09:22:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 23, 2021, at 6:22 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> And I think that's a good illustration of why it's a bad idea to\n> categorize things according to whether or not they have a certain\n> consequence.\n\nWell, one very big reason for wanting to break superuser into separate roles is to make postgres-as-a-service providers comfortable granting more privileges to customers. If we design any privilege escalations into one of those roles, then no sensible service provider is ever going to grant it to anybody, which fairly much defeats the purpose of this work. The privilege escalations we need to prevent are not just escalations to superuser, but also escalations to other privileged roles. Contrary to this design goal, the \"pg_host_security\" role is a bit of a synonym for \"superuser\", since being able to write files or execute shell commands is a path to superuser, and we can't do too much about that. \"pg_database_security\", \"pg_network_security\", and \"pg_logical_replication\" are not synonyms for \"superuser\".\n\nI like your idea of designing some extra security around event triggers to resolve their privilege escalation problems. 
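To make the escalation hazard concrete, the sort of trap we need to rule out looks roughly like this (role and function names are hypothetical, and of course today the CREATE EVENT TRIGGER itself would fail for anyone but a superuser — delegating that is exactly what's under discussion):

```sql
-- As a hypothetical non-superuser member of pg_database_security:
CREATE FUNCTION evil_trap() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
BEGIN
    -- The body runs with the privileges of whoever issued the DDL
    -- that fired the trigger, so lie in wait for a superuser:
    IF current_setting('is_superuser') = 'on' THEN
        ALTER ROLE trapper SUPERUSER;
    END IF;
END $$;

CREATE EVENT TRIGGER trap_all_ddl ON ddl_command_start
    EXECUTE FUNCTION evil_trap();

-- Later, a superuser innocently runs, say, CREATE TABLE, the trigger
-- fires with superuser privileges, and "trapper" gets promoted.
```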
A GUC seems the wrong approach to me.\n\nI think a superuser-only GUC to suppress firing event triggers won't quite cut it, because the other privileged roles would still be in danger of being trapped by a clever pg_database_security event trigger author; but extending permissions on the GUC to include the other roles would mean that they, and not just superuser, could evade event trigger based auditing solutions. That is odd, because you wouldn't expect granting pg_network_security or pg_logical_replication to have anything to do with granting privilege to defeat audit logging.\n\nA superuser-only GUC for this is also a bit too heavy handed. The superuser may not want to circumvent all event triggers, just those put in place by the pg_database_security role. If that sounds arbitrary, just consider the postgres-as-a-service case. The superuser wants to be able to grant pg_database_security to the customer, but doesn't want the customer to be able to use that to trap the service provider.\n\nInstead of a GUC, how about checking permissions inside event triggers for both the user firing the trigger *and* the trigger owner. That's a backward compatibility break, but maybe not a bad one, since until now only superusers have been allowed to create event triggers. Systems which create an event trigger using a role that later has superuser revoked, or which change ownership to a non-superuser, will see a behavior change. I'm not super happy with that, but I think it is better than the GUC based solution. Event triggers owned by a superuser continue to work as they do now. 
Event triggers owned by a non-superuser cannot be used to force a privileged user to run a command that the event trigger owner could not have run for themself.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 09:11:14 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jul 23, 2021, at 6:22 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > And I think that's a good illustration of why it's a bad idea to\n> > categorize things according to whether or not they have a certain\n> > consequence.\n> \n> Well, one very big reason for wanting to break superuser into separate roles is to make postgres-as-a-service providers comfortable granting more privileges to customers. If we design any privilege escalations into one of those roles, then no sensible service provider is ever going to grant it to anybody, which fairly much defeats the purpose of this work. The privilege escalations we need to prevent are not just escalations to superuser, but also escalations to other privileged roles. Contrary to this design goal, the \"pg_host_security\" role is a bit of a synonym for \"superuser\", since being able to write files or execute shell commands is a path to superuser, and we can't do too much about that. \"pg_database_security\", \"pg_network_security\", and \"pg_logical_replication\" are not synonyms for \"superuser\".\n> \n> I like your idea of designing some extra security around event triggers to resolve their privilege escalation problems. 
A GUC seems the wrong approach to me.\n> \n> I think a superuser-only GUC to suppress firing event triggers won't quite cut it, because the other privileged roles would still be in danger of being trapped by a clever pg_database_security event trigger author; but extending permissions on the GUC to include the other roles would mean that they, and not just superuser, could evade event trigger based auditing solutions. That is odd, because you wouldn't expect granting pg_network_security or pg_logical_replication to have anything to do with granting privilege to defeat audit logging.\n\nThese considerations were addressed with row_security by allowing the\nGUC to be set by anyone, but throwing an ERROR if RLS would have been\nrequired by the query instead of just allowing it. I don't see any\nobvious reason why that couldn't be the case for event triggers..?\n\n> A superuser-only GUC for this is also a bit too heavy handed. The superuser may not want to circumvent all event triggers, just those put in place by the pg_database_security role. If that sounds arbitrary, just consider the postgres-as-a-service case. The superuser wants to be able to grant pg_database_security to the customer, but doesn't want the customer to be able to use that to trap the service provider.\n\nHaving a trust system for triggers, functions, etc, where you can say\nwhose triggers you're ok running might be interesting but it also seems\nlike an awful lot of work and I'm not sure that it's actually really\nthat much better than a GUC similar to row_security.\n\n> Instead of a GUC, how about checking permissions inside event triggers for both the user firing the trigger *and* the trigger owner. That's a backward compatibility break, but maybe not a bad one, since until now only superusers have been allowed to create event triggers. Systems which create an event trigger using a role that later has superuser revoked, or which change ownership to a non-superuser, will see a behavior change. 
I'm not super happy with that, but I think it is better than the GUC based solution. Event triggers owned by a superuser continue to work as they do now. Event triggers owned by a non-superuser cannot be used to force a privileged user to run a command that the event trigger owner could not have run for themself.\n\nI'm not following what this suggestion is, exactly. What permissions\nare being checked inside the event trigger being run, exactly..? Who\nwould get to set those permissions? Any object owner, today, can GRANT\naccess to any other user in the system, we don't prevent that.\n\nThanks,\n\nStephen", "msg_date": "Fri, 23 Jul 2021 12:20:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 23, 2021, at 9:20 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> These considerations were addressed with row_security by allowing the\n> GUC to be set by anyone, but throwing an ERROR if RLS would have been\n> required by the query instead of just allowing it. I don't see any\n> obvious reason why that couldn't be the case for event triggers..?\n\nBecause a postgres-as-a-service provider may want to install their own event triggers as well as allowing the customer to do so, and it seems too coarse grained to either skip all of them or none of them. It's perfectly reasonable to want to skip your customer's event triggers while not skipping your own.\n\n>> A superuser-only GUC for this is also a bit too heavy handed. The superuser may not want to circumvent all event triggers, just those put in place by the pg_database_security role. If that sounds arbitrary, just consider the postgres-as-a-service case. 
The superuser wants to be able to grant pg_database_security to the customer, but doesn't want the customer to be able to use that to trap the service provider.\n> \n> Having a trust system for triggers, functions, etc, where you can say\n> whose triggers you're ok running might be interesting but it also seems\n> like an awful lot of work and I'm not sure that it's actually really\n> that much better than a GUC similar to row_security.\n\nMy first impression was that it is too much work, which is why I put event trigger creation into the pg_host_security bucket. It might be more sane to just leave it as superuser-only. But if we're going to fix this and make it a pg_database_security usable feature, then I think we need to solve the problems a naive approach would create for service providers.\n\n>> Instead of a GUC, how about checking permissions inside event triggers for both the user firing the trigger *and* the trigger owner. That's a backward compatibility break, but maybe not a bad one, since until now only superusers have been allowed to create event triggers. Systems which create an event trigger using a role that later has superuser revoked, or which change ownership to a non-superuser, will see a behavior change. I'm not super happy with that, but I think it is better than the GUC based solution. Event triggers owned by a superuser continue to work as they do now. Event triggers owned by a non-superuser cannot be used to force a privileged user to run a command that the event trigger owner could not have run for themself.\n> \n> I'm not following what this suggestion is, exactly. What permissions\n> are being checked inside the event trigger being run, exactly..? Who\n> would get to set those permissions? 
Any object owner, today, can GRANT\n> access to any other user in the system, we don't prevent that.\n\nI don't think GRANT is really relevant here, as what I'm trying to avoid is a less privileged user trapping a more privileged user into running a function that the less privileged user can't directly run. Certainly such a user cannot GRANT privilege on a function that they cannot even run, else your system has a privilege escalation hazard already. \n\nIt's a substantial change to the security model, but the idea is that inside an event trigger, we'd SetUserIdAndSecContext to a new type of context similar to SECURITY_LOCAL_USERID_CHANGE but where instead of simply changing to the owner of the event trigger, we'd be changing to a virtual user who is defined to only have the privileges of the intersection of the current user and the event trigger owner. That entails at least two problems, though I don't see that they are insoluble. First, all places in the code that check permissions need to check in a way that works in this mode. We might not be able to call GetUserId() as part of aclchecks any longer, and instead have to call some new function GetVirtualUserId() as part of aclchecks, reserving GetUserId() just for cases where you're not trying to perform a permissions check. Second, since event triggers can cause other event triggers to fire, we'd need these virtual users to be able to nest, so we'd have to push a stack of users and check all of them in each case, and we'd have to think carefully about how to handle errors, since GetUserIdAndSecContext and SetUserIdAndSecContext are called inside transaction start and end, where errors must not be raised. 
But since transaction start and end would never want to set or reset the state to a virtual user, those should never throw, and the calls elsewhere are at liberty to throw if they like, so we'd just have to be careful to use the right version of these operations in the right places.\n\nThis all sounds a bit much, but it has knock-on benefits. We'd be able to preserve the historical behavior of table triggers while having a mode that behaves in this new way, which means that privileged users could be a bit more cavalier than they can now about DML against user defined tables. This mode might become the standard operating mode for service provider scripts, whether running as superuser, as pg_network_security, or whatever. This might also be used to secure operations on indexes over user defined functions. If the index operation is run in this mode, the index operation could throw an error rather than performing a function maliciously embedded inside the index function call.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 10:43:00 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Fri, Jul 23, 2021 at 12:11 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> A superuser-only GUC for this is also a bit too heavy handed.\n\nYeah, but you're inventing a system for allowing the restriction on a\nGUC to be something other than is-superuser in the very patch we're\ntalking about. 
So it could be something like is-database-security.\nTherefore I don't grok the objection.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Jul 2021 16:54:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 23, 2021, at 1:54 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Yeah, but you're inventing a system for allowing the restriction on a\n> GUC to be something other than is-superuser in the very patch we're\n> talking about. So it could be something like is-database-security.\n> Therefore I don't grok the objection.\n\nI'm not objecting to how hard it would be to implement. I'm objecting to the semantics. If the only non-superuser who can set the GUC is pg_database_security, then it is absolutely worthless in preventing pg_database_security from trapping actions performed by pg_network_security members. On the other hand, if pg_network_security can also set the GUC, then pg_network_security can circumvent audit logging that pg_database_security put in place. What's the point in having these as separate roles if they can circumvent each other's authority?\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 13:57:37 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 23, 2021, at 1:57 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> What's the point in having these as separate roles if they can circumvent each other's authority?\n\nThat was probably too brief a reply, so let me try again. 
If the GUC circumvents the event trigger, then my answer above stands. If the GUC merely converts the event trigger into an error, then you have the problem that the customer can create event triggers which the service provider will need to disable (because they cause the service provider's legitimate actions to error rather than succeed). Presumably the service provider can disable them while logged in as superuser. But that means the service customer has their event trigger turned off, at least for some length of time, which is not good if the event trigger is performing audit logging for compliance purposes, etc. Also, we can't say whether pg_network_security role has been given to the customer, or if that is being kept for the provider's use only, so we're not really sure whether pg_network_security should be able to do these sorts of things, but in the case that the service provider is keeping pg_network_security for themself, it seems they wouldn't want the customer to cause pg_network_security operations to fail. 
We can't make too many assumptions about the exact relationship between those two roles.\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 14:04:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 23, 2021, at 2:04 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> If the GUC merely converts the event trigger into an error, then you have the problem that the customer can create event triggers which the service provider will need to disable (because they cause the service providers legitimate actions to error rather than succeed).\n\nI'd like to expound on this a little more.\n\nImagine the service provider has scripts that perform actions within the database, such as physical replication, or the creation and removal of database users in response to actions taken at the service portal web interface, and they don't want the actions performed by those scripts to be leveraged by the customer to break out of the jail.\n\nThe customer has event triggers which perform no illicit activities. They don't try to break out of the jail. But for compliance with HIPAA regulations (or whatever), they need to audit log everything, and they can't just have the service provider's actions unlogged.\n\nWhat to do? If the service provider disables the event triggers, then the customer will fail their regulation audit. If the service provider allows the event triggers to fire, the customer might create a new event trigger embedding illicit actions. 
The service provider is totally stuck.\n\nOTOH, if there were a mechanism by which an event trigger could run with only the intersection of the privileges enjoyed by the service provider's scripts and the customer's event trigger owner, then the service provider can allow their own actions to be logged, without fear that any hijacking of their privilege will occur.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 14:24:04 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Fri, Jul 23, 2021 at 4:57 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jul 23, 2021, at 1:54 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > Yeah, but you're inventing a system for allowing the restriction on a\n> > GUC to be something other than is-superuser in the very patch we're\n> > talking about. So it could be something like is-database-security.\n> > Therefore I don't grok the objection.\n>\n> I'm not objecting to how hard it would be to implement. I'm objecting to the semantics. If the only non-superuser who can set the GUC is pg_database_security, then it is absolutely worthless in preventing pg_database_security from trapping actions performed by pg_network_security members. On the other hand, if pg_network_security can also set the GUC, then pg_network_security can circumvent audit logging that pg_database_security put in place. What's the point in having these as separate roles if they can circumvent each other's authority?\n\nRight, that would be bad. I had forgotten how this worked, but it\nseems that event triggers are called with the privileges of the user\nwhose action caused the event trigger to be fired, not the privileges\nof the user who owns the trigger. 
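In other words, with a setup like this (names hypothetical), the function body sees the invoking role, not the owner:

```sql
-- Created and owned by alice:
CREATE FUNCTION report_invoker() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
BEGIN
    -- current_user here is whoever ran the DDL command
    -- (e.g. bob), not the trigger's owner (alice):
    RAISE NOTICE 'DDL issued by %', current_user;
END $$;

CREATE EVENT TRIGGER show_invoker ON ddl_command_end
    EXECUTE FUNCTION report_invoker();
```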
So as you say, if you can get\nsomebody to do something that causes an event trigger to be fired, you\ncan do anything they can do. As far as I can see, the only reasonable\nconclusion is that, unless we change the security model, doing\nanything with event triggers will have to remain superuser-only. In\nother words I don't think we can give it to any of\npg_database_security or pg_host_security or pg_network_security, or\nany similar role. We could have a pg_event_triggers role that is\ndocumented as able to usurp superuser, but I don't see the point.\n\nNow, the other alternative is changing the security model for event\ntriggers, but I am not sure that really fixes anything. You proposed\nhaving a new mode where someone could only do things that could be\ndone by either user, but that troubles me for a number of reasons. One\nis that it often makes a difference who actually did a particular\noperation. For example it might be that alice and bob both have the\nability to give charlie permission on some table, but the ACL for that\ntable will record who actually issued the grant. It might be that both\nalice and bob have the ability to create a table, but the table will\nbe owned by whoever actually does. Suppose bob is about to be\nterminated but can arrange for alice (who is a star performer) to\ngrant permissions to his accomplice charlie, thus arranging for those\npermissions to survive his impending termination. That's bad.\n\nAlso, what about just throwing an ERROR? Anybody's allowed to do that,\nbut that doesn't mean that it's OK for one user to block everything\nsome other user wants to do. If seward and bates respectively have\npg_database_security and pg_network_security, it's OK for seward to\ninterfere with attempts by bates to access database objects, but it's\nnot OK for seward to prevent bates from reconfiguring network access\nto PostgreSQL. 
Because event triggers don't fire for ALTER SYSTEM or\nDDL commands on global objects, we might almost be OK here, but I'm\nnot sure if it's completely OK.\n\nI'm pretty sure that the reason we set this up the way we did was\nbecause we assumed that the person creating the event trigger would\nalways have maximum privileges i.e. superuser. Therefore, it seemed\n\"safer\" to run the code under the less-privileged account. If we'd\nthought about this from the perspective of having non-superuser-owned\nevent triggers, I think we would have made the opposite decision,\nsince running code as yourself in somebody else's session is less\ndangerous than running code as somebody else straight up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 15:09:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jul 23, 2021 at 4:57 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > > On Jul 23, 2021, at 1:54 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Yeah, but you're inventing a system for allowing the restriction on a\n> > > GUC to be something other than is-superuser in the very patch we're\n> > > talking about. So it could be something like is-database-security.\n> > > Therefore I don't grok the objection.\n> >\n> > I'm not objecting to how hard it would be to implement. I'm objecting to the semantics. If the only non-superuser who can set the GUC is pg_database_security, then it is absolutely worthless in preventing pg_database_security from trapping actions performed by pg_network_security members. On the other hand, if pg_network_security can also set the GUC, then pg_network_security can circumvent audit logging that pg_database_security put in place. 
What's the point in having these as separate roles if they can circumvent each other's authority?\n> \n> Right, that would be bad. I had forgotten how this worked, but it\n> seems that event triggers are called with the privileges of the user\n> whose action caused the event trigger to be fired, not the privileges\n> of the user who owns the trigger. So as you say, if you can get\n> somebody to do something that causes an event trigger to be fired, you\n> can do anything they can do. As far as I can see, the only reasonable\n> conclusion is that, unless we change the security model, doing\n> anything with event triggers will have to remain superuser-only. In\n> other words I don't think we can give it to any of\n> pg_database_security or pg_host_security or pg_network_security, or\n> any similar role. We could have a pg_event_triggers role that is\n> documented as able to usurp superuser, but I don't see the point.\n\nRight- event triggers work just the same as how regular triggers on\ntables do and how RLS works. All of these also have the possibility of\nleveraging security definer functions, of course, but that doesn't\naddress the issue of the trigger author attempting to attack the\nindividual running the trigger.\n\nI do think it'd be useful to have a pg_event_triggers or such role, so\nthat someone could create them without being a superuser. A bit more\ndiscussion about that below though..\n\n> Now, the other alternative is changing the security model for event\n> triggers, but I am not sure that really fixes anything. You proposed\n> having a new mode where someone could only do things that could be\n> done by either user, but that troubles me for a number of reasons. One\n> is that it often makes a difference who actually did a particular\n> operation. For example it might be that alice and bob both have the\n> ability to give charlie permission on some table, but the ACL for that\n> table will record who actually issued the grant. 
It might be that both\n> alice and bob have the ability to create a table, but the table will\n> be owned by whoever actually does. Suppose bob is about to be\n> terminated but can arrange for alice (who is a star performer) to\n> grant permissions to his accomplice charlie, thus arranging for those\n> permissions to survive his impending termination. That's bad.\n\nAs I understood Mark's suggestion, the trigger would run but would have\nthe privileges of the intersection of both user's permissions, which is\nan interesting idea but not one we've got any way to really do today as\neach privilege check would now need to check two different roles for\nprivilege- and if one of the privilege checks fails, then what..?\nPresumably there would be an ERROR returned, meaning that the operation\nwould be able to be prevented from happening by the trigger author,\nwhich was objected to as not being acceptable either, per below.\n\n> Also, what about just throwing an ERROR? Anybody's allowed to do that,\n> but that doesn't mean that it's OK for one user to block everything\n> some other user wants to do. If seward and bates respectively have\n> pg_database_security and pg_network_security, it's OK for seward to\n> interfere with attempts by bates to access database objects, but it's\n> not OK for seward to prevent bates from reconfiguring network access\n> to PostgreSQL. Because event triggers don't fire for ALTER SYSTEM or\n> DDL commands on global objects, we might almost be OK here, but I'm\n> not sure if it's completely OK.\n\nRegular table triggers can be used to block someone from messing with\nthat table, so this isn't entirely unheard of. 
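For instance, nothing stops a table owner from doing something like this today (hypothetical names):

```sql
CREATE FUNCTION refuse_changes() RETURNS trigger
    LANGUAGE plpgsql AS $$
BEGIN
    -- Anyone writing to the table, however privileged, hits this:
    RAISE EXCEPTION 'changes to % are blocked by its owner', TG_TABLE_NAME;
END $$;

CREATE TRIGGER block_writes
    BEFORE INSERT OR UPDATE OR DELETE ON important_table
    FOR EACH ROW EXECUTE FUNCTION refuse_changes();
```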
Deciding that someone\nwith event trigger access is allowed to prevent certain things from\nhappening in a database that they're allowed to connect and create event\ntriggers in may not be completely unreasonable.\n\n> I'm pretty sure that the reason we set this up the way we did was\n> because we assumed that the person creating the event trigger would\n> always have maximum privileges i.e. superuser. Therefore, it seemed\n> \"safer\" to run the code under the less-privileged account. If we'd\n> thought about this from the perspective of having non-superuser-owned\n> event triggers, I think we would have made the opposite decision,\n> since running code as yourself in somebody else's session is less\n> dangerous than running code as somebody else straight up.\n\nNot sure that this is really the case- as noted above, it's the same for\ntable-level triggers and RLS.\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jul 23, 2021, at 2:04 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > \n> > If the GUC merely converts the event trigger into an error, then you have the problem that the customer can create event triggers which the service provider will need to disable (because they cause the service providers legitimate actions to error rather than succeed).\n> \n> I'd like to expound on this a little more.\n> \n> Imagine the service provider has scripts that perform actions within the database, such as physical replication, or the creation and removal of database users in response to actions taken at the service portal web interface, and they don't want the actions performed by those scripts to be leveraged by the customer to break out of the jail.\n\nThese specific use-cases are interesting because they seem to all be\nabout global-level objects, right? Surely the service provider would be\nbetter off having a separate database that they connect to, which no one\nelse has access to. 
Not to mention that event triggers can't be created\non global objects anyway, but even so.\n\nOn the flip side- if we create a way for roles to be created by a\nnon-superuser in a way that doesn't end up giving away the farm (unlike\nCREATEROLE privs today), then suddenly the service provider doesn't have\nany need to use a superuser role or otherwise privileged role to perform\nthat action and they could either punt that entirely to the client to\ndeal with, or have a way to log in *as* the client to perform the\naction while not risking the client ending up getting the service\nprovider's system to run code that the client wrote.\n\nAnd this last point is the most relevant to all of this, in my view.\nEverything that service providers provide web forms and such for to\nallow the client to perform things that they can't just GRANT directly\nto the client's account is the problem- they don't write those web pages\nand run things as superuser because they want to, they do it because the\ndatabase system doesn't allow them any way to do it that doesn't give up\nthe farm (or, perhaps in some cases, they have to do *other* things too,\nthat clearly they can't just give the client's account access to do,\nlike creating tablespaces where the volume has to also be created and\nattached to the instance, but those are cases where we probably don't\nneed to come up with a better solution..? though I'm not against a role\nto allow creating tablespaces that isn't a superuser, since the service\nprovider would probably be happier running that action with that role as\na good way to reduce risk further).\n\n> The customer has event triggers which perform no illicit activities. They don't try to break out of the jail. But for compliance with HIPAA regulations (or whatever), they need to audit log everything, and they can't just have the service provider's actions unlogged.\n> \n> What to do? 
If the service provider disables the event triggers, then the customer will fail their regulation audit. If the service provider allows the event triggers to fire, the customer might create a new event trigger embedding illicit actions. The service provider is totally stuck.\n> \n> OTOH, if there were a mechanism by which an event trigger could run with only the intersection of the privileges enjoyed by the service provider's scripts and the customer's event trigger owner, then the service provider can allow their own actions to be logged, without fear that any hijacking of their privilege will occur.\n\nIf all such actions could be performed by the client role, then the\nservice provider suddenly doesn't have such a concern or issue- they can\ntell the client to do whatever it is and then it gets logged properly.\nIn my view, that's really the end goal here, we just need to build these\nthings in a way that granting such privileges to the client doesn't end\nup giving them a way to get superuser privileges.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Jul 2021 16:05:42 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Jul 26, 2021 at 4:05 PM Stephen Frost <sfrost@snowman.net> wrote:\n> As I understood Mark's suggestion, the trigger would run but would have\n> the privileges of the intersection of both user's permissions, which is\n> an interesting idea but not one we've got any way to really do today as\n> each privilege check would now need to check two different roles for\n> privilege- and if one of the privilege checks fails, then what..?\n> Presumably there would be an ERROR returned, meaning that the operation\n> would be able to be prevented from happening by the trigger author,\n> which was objected to as not being acceptable either, per below.\n\nI think I may not have 
expressed myself clearly enough here. What I'm\nconcerned about is: Alice should not be permitted to preventing Bob\nfrom doing something which Bob is allowed to do and Alice is not\nallowed to do. If Alice is the administrator of PostgreSQL's XYZ\nsubsystem, she can permit Bob from using it if she wishes. But if Bob\nis an administrator of XYZ and Alice is not, there shouldn't be a way\nfor Alice to obstruct Bob's access to that system.\n\nDo you agree?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 16:12:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Jul 26, 2021 at 4:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think I may not have expressed myself clearly enough here. What I'm\n> concerned about is: Alice should not be permitted to preventing Bob\n> from doing something which Bob is allowed to do and Alice is not\n> allowed to do. If Alice is the administrator of PostgreSQL's XYZ\n> subsystem, she can permit Bob from using it if she wishes. But if Bob\n\nargh, typo. 
I meant prevent, not permit.\n\n> is an administrator of XYZ and Alice is not, there shouldn't be a way\n> for Alice to obstruct Bob's access to that system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 16:13:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> As I understood Mark's suggestion, the trigger would run but would have\n> the privileges of the intersection of both user's permissions, which is\n> an interesting idea but not one we've got any way to really do today as\n> each privilege check would now need to check two different roles for\n> privilege- and if one of the privilege checks fails, then what..?\n> Presumably there would be an ERROR returned, meaning that the operation\n> would be able to be prevented from happening by the trigger author,\n> which was objected to as not being acceptable either, per below.\n\nI've not been paying close attention, so maybe this was already\nconsidered, but ...\n\nWhat if we allow event triggers owned by non-superusers, but only fire\nthem on commands performed by the trigger's owner? This sidesteps all\nthe issues of who has which privileges and whether Alice is malicious\ntowards Bob or vice versa, because there is no change of privilege\ndomain. Admittedly, it fails to cover some use-cases, but I think it\nwould still handle a lot of interesting cases. 
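A minimal sketch of that idea, assuming the proposed restriction were implemented (the role, function, and trigger names here are invented for illustration; today CREATE EVENT TRIGGER is superuser-only):

```sql
-- Sketch only: assumes the proposed rule that a non-superuser may
-- create an event trigger, and that it then fires only for commands
-- run by the trigger's owner.
CREATE ROLE alice LOGIN;
SET ROLE alice;

CREATE FUNCTION alice_log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'alice ran: %', TG_TAG;
END;
$$;

-- Today this statement requires superuser; under the proposal it
-- would not, because the trigger only observes alice's own commands.
CREATE EVENT TRIGGER alice_ddl_watch
    ON ddl_command_end
    EXECUTE FUNCTION alice_log_ddl();

CREATE TABLE alice_t (id int);  -- fires: alice owns the trigger
-- A DDL command run by some other role would not fire alice_ddl_watch.
```

The point of the sketch is only the scoping rule: the trigger runs as, and only for, its owner, so no privilege crosses between roles.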
The impression I have\nis that a lot of applications do everything under just one or a few\nroles.\n\nPossibly this could be generalized to \"fire on commands performed by\nany role the trigger owner is a member of\", but then I'm a bit less\nsure that it's safe from both roles' perspectives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 16:24:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Jul 26, 2021, at 1:12 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Alice should not be permitted to preventing Bob\n> from doing something which Bob is allowed to do and Alice is not\n> allowed to do.\n\nThat sounds intuitively reasonable, though it depends on what \"which Bob is allowed to do\" means. For instance, if Alice is only allowed to enable or disable connections to the database, and she disables them, then she has prevented Bob from, for example, creating tables, something which Bob is otherwise allowed to do, because without the ability to connect, he cannot create tables.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:25:29 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Jul 26, 2021 at 4:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I think I may not have expressed myself clearly enough here. What I'm\n> > concerned about is: Alice should not be permitted to preventing Bob\n> > from doing something which Bob is allowed to do and Alice is not\n> > allowed to do. 
If Alice is the administrator of PostgreSQL's XYZ\n> > subsystem, she can permit Bob from using it if she wishes. But if Bob\n> \n> argh, typo. I meant prevent, not permit.\n> \n> > is an administrator of XYZ and Alice is not, there shouldn't be a way\n> > for Alice to obstruct Bob's access to that system.\n\n> Do you agree?\n\nso ... yes and no. There's an awful lot being ascribed to\n'administrator' without any definition of it being actually given. We\nare working in this thread to explicitly split up superuser privileges\nto allow them to be granted to non-superusers and talking about cases\nwhere those privileges end up interacting with each other. Is Alice, as\nthe 'network' manager considered an 'administrator' of XYZ? Is Bob, as\nthe 'database' manager considered an 'administrator'? Perhaps both are,\nperhaps neither are. It doesn't seem helpful to be vague.\n\nIf Alice is given the right to create event triggers in a given\ndatabase, then that's explicitly giving Alice the right to block anyone\nfrom dropping tables in that database because that's an inherent part of\nthe event trigger system. Should superusers be able to bypass that?\nYes, they probably should be able to and, ideally, they'd be able to do\nthat just in a particular session. Should a user who has been allowed\nto modify certain GUCs that perhaps Alice hasn't been allowed to modify\nbe able to be prevented from modifying those GUCs by Alice, when neither\nis a superuser? 
That's definitely a trickier question and I don't know\nthat I've got an answer offhand.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Jul 2021 16:28:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On 2021-Jul-26, Tom Lane wrote:\n\n> What if we allow event triggers owned by non-superusers, but only fire\n> them on commands performed by the trigger's owner? This sidesteps all\n> the issues of who has which privileges and whether Alice is malicious\n> towards Bob or vice versa, because there is no change of privilege\n> domain. Admittedly, it fails to cover some use-cases, but I think it\n> would still handle a lot of interesting cases. The impression I have\n> is that a lot of applications do everything under just one or a few\n> roles.\n\nThis is similar but not quite an idea I had: have event triggers owned\nby non-superusers run for all non-superusers, but not for superusers.\nIt is still the case that all non-superusers have to trust everyone with\nthe event-trigger-create permission, but that's probably the database\nowner so most of the time you have to trust them already.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. 
Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n", "msg_date": "Mon, 26 Jul 2021 16:46:13 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2021-Jul-26, Tom Lane wrote:\n> \n> > What if we allow event triggers owned by non-superusers, but only fire\n> > them on commands performed by the trigger's owner? This sidesteps all\n> > the issues of who has which privileges and whether Alice is malicious\n> > towards Bob or vice versa, because there is no change of privilege\n> > domain. Admittedly, it fails to cover some use-cases, but I think it\n> > would still handle a lot of interesting cases. The impression I have\n> > is that a lot of applications do everything under just one or a few\n> > roles.\n> \n> This is similar but not quite an idea I had: have event triggers owned\n> by non-superusers run for all non-superusers, but not for superusers.\n> It is still the case that all non-superusers have to trust everyone with\n> the event-trigger-create permission, but that's probably the database\n> owner so most of the time you have to trust them already.\n\nThis sort of logic is what has caused issues with CREATEROLE, imv. It's\nsimply not so simple as \"don't run this when the superuser flag is set\"\nbecause non-superuser roles can become superusers. We need something\nbetter to have something like this actually be safe. Tom's suggestion\nwould work, of course, but it would mean having to create event triggers\nfor all the roles in the system, and would those roles who own those\nevent triggers be able to disable them..? 
If so, it would almost\ncertainly be against the point of an auditing event trigger..\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Jul 2021 16:54:33 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "I wrote:\n> Possibly this could be generalized to \"fire on commands performed by\n> any role the trigger owner is a member of\", but then I'm a bit less\n> sure that it's safe from both roles' perspectives.\n\nAfter further thought, I can't poke a hole in that concept.\nWe'd keep the rule that the trigger executes as the calling user.\nTherefore, the trigger cannot perform any action that the calling\nuser couldn't do if she chose. Conversely, since the trigger\nowner could become a member of that role and then do whatever the\ntrigger intends to do, this scheme does not give the trigger owner\nany new abilities either. All we've done is provide what some\nprogramming languages call an observer or annotation.\n\nI also like the fact that with this rule, superusers' ability to\ncreate event triggers that fire for everything is not a special case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 16:57:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> ... Tom's suggestion\n> would work, of course, but it would mean having to create event triggers\n> for all the roles in the system, and would those roles who own those\n> event triggers be able to disable them..?\n\nUh, why not? 
If you own the trigger, you can drop it, so why shouldn't\nyou be able to temporarily disable it?\n\n> If so, it would almost\n> certainly be against the point of an auditing event trigger..\n\nIf you want auditing capability, you make an auditor role that is\na member of every other role, and then it owns the trigger. (If\nyou need to audit superuser actions too, then the auditor has to\nbe a superuser itself, but that's no worse than before; and I'd\nargue that non-superusers shouldn't be able to audit superusers\nanyway.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 17:01:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On 2021-Jul-26, Tom Lane wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > ... Tom's suggestion\n> > would work, of course, but it would mean having to create event triggers\n> > for all the roles in the system, and would those roles who own those\n> > event triggers be able to disable them..?\n> \n> Uh, why not? If you own the trigger, you can drop it, so why shouldn't\n> you be able to temporarily disable it?\n\nI think an auditing system that can be turned off by the audited user is\npretty much useless. Or did I misunderstood what you are suggesting?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)\n\n\n", "msg_date": "Mon, 26 Jul 2021 17:16:25 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jul-26, Tom Lane wrote:\n>> Uh, why not? 
If you own the trigger, you can drop it, so why shouldn't\n>> you be able to temporarily disable it?\n\n> I think an auditing system that can be turned off by the audited user is\n> pretty much useless. Or did I misunderstood what you are suggesting?\n\nFor auditing purposes, you make a trusted role that owns the trigger,\nand is a member of the roles whose actions are to be audited (but NOT\nvice versa). I think that any idea that the auditing role doesn't\nneed to be trusted that much is foolhardy. What we can buy here is\nnot requiring the auditing role to be full superuser ... assuming that\nyou don't need auditing of superusers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 17:30:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Jul 26, 2021 at 4:28 PM Stephen Frost <sfrost@snowman.net> wrote:\n> so ... yes and no. There's an awful lot being ascribed to\n> 'administrator' without any definition of it being actually given. We\n> are working in this thread to explicitly split up superuser privileges\n> to allow them to be granted to non-superusers and talking about cases\n> where those privileges end up interacting with each other. Is Alice, as\n> the 'network' manager considered an 'administrator' of XYZ? Is Bob, as\n> the 'database' manager considered an 'administrator'? Perhaps both are,\n> perhaps neither are. It doesn't seem helpful to be vague.\n\nXYZ was intended to stand in for something like 'network' or\n'database' or whatever other particular part of PostgreSQL Alice might\nbe charged with administering.\n\n> If Alice is given the right to create event triggers in a given\n> database, then that's explicitly giving Alice the right to block anyone\n> from dropping tables in that database because that's an inherent part of\n> the event trigger system. 
Should superusers be able to bypass that?\n> Yes, they probably should be able to and, ideally, they'd be able to do\n> that just in a particular session.\n\nI agree.\n\n> Should a user who has been allowed\n> to modify certain GUCs that perhaps Alice hasn't been allowed to modify\n> be able to be prevented from modifying those GUCs by Alice, when neither\n> is a superuser? That's definitely a trickier question and I don't know\n> that I've got an answer offhand.\n\nMy answer would be \"no\".\n\nI concede Mark's point in another email that if Alice can entirely\nprevent Bob from connecting to the database then by inference she can\nalso prevent him from exercising any other privileges he may have. I'm\nprepared to say that's OK; if Alice is administering network\nconnections to the database and cuts everyone else off, then I guess\nthat's just how it is. But if Bob does somehow succeed in getting a\nconnection to the database, then he should be able to exercise his\nright to change those GUCs which he has permission to change. Alice\nshouldn't be able to thwart that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 17:33:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Jul 26, 2021 at 4:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After further thought, I can't poke a hole in that concept.\n> We'd keep the rule that the trigger executes as the calling user.\n> Therefore, the trigger cannot perform any action that the calling\n> user couldn't do if she chose. Conversely, since the trigger\n> owner could become a member of that role and then do whatever the\n> trigger intends to do, this scheme does not give the trigger owner\n> any new abilities either. 
All we've done is provide what some\n> programming languages call an observer or annotation.\n>\n> I also like the fact that with this rule, superusers' ability to\n> create event triggers that fire for everything is not a special case.\n\nI think this has potential. In a managed services environment, you can\nimagine the provider as the super-duper user, having the ability to do\nanything - because they control the box, so there's really no stopping\nit - but presumably very little interest in what happens within the\ndatabase. Then you have the tenant, who is a semi-super-user,\nauthorized by the provider to do anything internal to the database\nthat the provider doesn't think will cause them problems. With the\nsetup you're proposing here, I suppose what the provider needs to do\nis have a role like 'tenant' and make all the other tenant role\nmembers of that master role. Then the tenant can log in as 'tenant' as\nset up event triggers that will apply to all of those users, but\nthere's no security compromise for the provider because the role (or\nroles) that they use to log in are not members of 'tenant'.\n\nI thought for a while there might be a problem with tenant users\ncreating event triggers and then altering the owner to 'tenant' but I\nthink now that was backwards thinking. 
'tenant' is a member of all of\nthe tenant users but not the other way around, so they can't give\ntheir event triggers away to 'tenant'.\n\nDo I have that right?\n\nI agree with you that it's really nice that this eliminates the\nspecial case for superusers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Jul 2021 10:19:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Tue, 27 Jul 2021 at 10:19, Robert Haas <robertmhaas@gmail.com> wrote:\n\n\n> I think this has potential. In a managed services environment, you can\n> imagine the provider as the super-duper user, having the ability to do\n> anything - because they control the box, so there's really no stopping\n> it - but presumably very little interest in what happens within the\n> database. Then you have the tenant, who is a semi-super-user,\n> authorized by the provider to do anything internal to the database\n> that the provider doesn't think will cause them problems. With the\n> setup you're proposing here, I suppose what the provider needs to do\n> is have a role like 'tenant' and make all the other tenant role\n> members of that master role. Then the tenant can log in as 'tenant' as\n> set up event triggers that will apply to all of those users, but\n> there's no security compromise for the provider because the role (or\n> roles) that they use to log in are not members of 'tenant'.\n>\n\nIsn’t this backwards? If all those roles are members of \"tenant\" then they\ncan do anything \"tenant\" can do. 
The reverse might work - make \"tenant\" a\nmember of all the related roles - although I haven’t thought through in\ndetail.\n\nThe comparison is to making all roles members of \"postgres\" (disaster) vs.\nmaking \"postgres\" a member of all roles (redundant, because of how\npermissions work for superuser, but harmless).", "msg_date": "Tue, 27 Jul 2021 10:23:54 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, 
{ "msg_contents": "On Tue, Jul 27, 2021 at 10:24 AM Isaac Morland <isaac.morland@gmail.com> wrote:\n> Isn’t this backwards? If all those roles are members of \"tenant\" then they can do anything \"tenant\" can do. The reverse might work - make \"tenant\" a member of all the related roles - although I haven’t thought through in detail.\n\nDang it, yes. The tenant needs to be members of all the other users,\nnot the other way around. I spent a long time trying to not get that\nbackwards and still did.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Jul 2021 11:27:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, 
{ "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Dang it, yes. The tenant needs to be members of all the other users,\n> not the other way around. I spent a long time trying to not get that\n> backwards and still did.\n\nThe \"membership\" terminology is inherently confusing I fear.\nMaybe better to say that all the roles-to-be-audited must\nbe GRANTed to the \"tenant\" role?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Jul 2021 11:36:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, 
{ "msg_contents": "> On Jul 22, 2021, at 1:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> It's awkward.
I think that we can't afford to create a separate\n> predefined role for every single thing that someone could\n> theoretically want to sever, because then we'll have a zillion of them\n> and it will be unmaintainable. So the idea was to try to break up\n> everything someone might want to do either via DDL or by setting GUCs\n> into one of three categories: internal to the database\n> (pg_database_security), facing outward toward the network\n> (pg_network_security), and facing inward toward the host\n> (pg_host_security). If we didn't have any predefined security-related\n> roles already, this would still have complications, but as things\n> stand it has more, because as you point out, pg_read_server_files\n> overlaps with pg_host_security. But what do we do about that?\n\nI gave up on the idea of splitting all superuser functions into three roles.\n\nPatch v5-0001 refactors the guc code to allow non-superuser roles to be associated with guc variables. Any such role can then set the variable, including via \"alter system set\". The patch stops short of creating any new roles or assigning any roles to any guc variable.\n\nPatches v5-0002 through v5-0005 create four new roles for managing host resource settings, vacuum settings, autovacuum settings, and logging settings. That last one excludes \"where to log\" settings, because we don't want the role to be able to write to arbitrary locations on the server. Remaining guc variables not in these four categories continue to belong to the superuser.\n\nPatches v5-0006 and v5-0007 allow non-superusers to own event triggers, and limit the event triggers to only running for events triggered by roles that the event trigger owner belongs to. 
This is backward compatible, because event triggers have historically belonged only to superusers, and superusers have implicit membership in all groups.\n\nPatches v5-0008 through v5-0010 allow non-superusers to own subscriptions while restricting the tablesync and apply workers to only work on tables that the subscription owner has permissions on. This is almost backward compatible, because subscriptions have historically belonged only to superusers, as above, except for unlikely scenarios where superusers have given ownership to non-superusers. In those cases, the new code will refuse to apply in situations where the old code would blindly apply changes. Does anybody see a problem with this?\n\nPatch v5-0011 is a bug fix posted elsewhere that hasn't been committed yet but which must be committed in preparation for v5-0012.\n\nPatch v5-0012 creates a new role, pg_manage_database_objects, which can do anything with an object that the owner could do with it, as long as the owner is not a superuser. This role is intended as a \"tenant\" role, and is in some sense a less powerful replacement for the pg_database_security role previously proposed.\n\nI doubt that I will create any replacement for the pg_host_security role previously proposed, as I think that role is just synonymous with \"superuser\", so it serves no purpose.\n\nI am uncertain about creating a role similar to the pg_network_security role previously proposed, as the changes to how publications and subscriptions work in patches v5-0008 through v5-0010 may be enough. 
In any event, I'd like feedback on those patches before designing one or more additional roles for this.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 23 Aug 2021 10:36:27 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Jul 22, 2021, at 1:01 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > It's awkward. I think that we can't afford to create a separate\n> > predefined role for every single thing that someone could\n> > theoretically want to sever, because then we'll have a zillion of them\n> > and it will be unmaintainable. So the idea was to try to break up\n> > everything someone might want to do either via DDL or by setting GUCs\n> > into one of three categories: internal to the database\n> > (pg_database_security), facing outward toward the network\n> > (pg_network_security), and facing inward toward the host\n> > (pg_host_security). If we didn't have any predefined security-related\n> > roles already, this would still have complications, but as things\n> > stand it has more, because as you point out, pg_read_server_files\n> > overlaps with pg_host_security. But what do we do about that?\n> \n> I gave up on the idea of splitting all superuser functions into three roles.\n\nI can't say that I blame you for that. :) For my 2c, at least, the ones\nproposed never really felt like they were very directly tied to specific\ncapabilities, which I think was one of the issues with that approach.\n\n> Patch v5-0001 refactors the guc code to allow non-superuser roles to be associated with guc variables. Any such role can then set the variable, including via \"alter system set\". 
The patch stops short of creating any new roles or assigning any roles to any guc variable.\n\nHaven't looked at the patch yet but this does generally seem like an\ninteresting approach.\n\n> Patches v5-0002 through v5-0005 create four new roles for managing host resource settings, vacuum settings, autovacuum settings, and logging settings. That last one excludes \"where to log\" settings, because we don't want the role to be able to write to arbitrary locations on the server. Remaining guc variables not in these four categories continue to belong to the superuser.\n\nWe do have a role today who is allowed to write to arbitrary locations\non the server, so I wonder if for those log settings we'd include a\nrequirement for the user to have both of those roles instead..?\n\n> Patches v5-0006 and v5-0007 allow non-superusers to own event triggers, and limit the event triggers to only running for events triggered by roles that the event trigger owner belongs to. This is backward compatible, because event triggers have historically belonged only to superusers, and superusers have implicit membership in all groups.\n\nWhile I generally agree that this doesn't end up opening up security\nissues, it's going to certainly be a change in how event triggers work\nas they'll no longer *always* fire, and that seems quite at odds with\nhow triggers are generally thought of. So much so that I worry about\nmis-use due to this. Then again, if we're going to go down this route\nat all, I can't think of any particular way to avoid the security issues\nof running the trigger for everyone when it's owned by a non-superuser.\n\n> Patches v5-0008 through v5-0010 allow non-superusers to own subscriptions while restricting the tablesync and apply workers to only work on tables that the subscription owner has permissions on. 
This is almost backward compatible, because subscriptions have historically belonged only to superusers, as above, except for unlikely scenarios where superusers have given ownership to non-superusers. In those cases, the new code will refuse to apply in situations where the old code would blindly apply changes. Does anybody see a problem with this?\n\nThis doesn't particularly bother me, at least.\n\n> Patch v5-0011 is a bug fix posted elsewhere that hasn't been committed yet but which must be committed in preparation for v5-0012.\n\nNo idea what it is as I hadn't looked yet, but if it's a bug fix then\nshouldn't it be separated and back-patched..?\n\n> Patch v5-0012 creates a new role, pg_manage_database_objects, which can do anything with an object that the owner could do with it, as long as the owner is not a superuser. This role is intended as a \"tenant\" role, and is in some sense a less powerful replacement for the pg_database_security role previously proposed.\n\nThis I have to object to pretty strongly- we have got to get away from\nthe idea that just because X isn't a superuser or isn't owned by a\nsuperuser that it's fine to allow some non-superuser to mess with it.\nIn particular, just because a role isn't explicitly marked as a superuser\ndoesn't mean that the role can't *become* a superuser, or that it hasn't\ngot privileged access to the system in other ways, such as by being a\nmember of other predefined roles that perhaps the role who is a member\nof pg_manage_database_objects doesn't have. 
Such a check against\nmodifying of \"superuser owned\" objects implies that it's providing some\nkind of protection against the role being able to become a superuser\nwhen it doesn't actually provide that protection in any kind of reliable\nfashion and instead ends up fooling the user.\n\nThis is the issue with CREATEROLE and we definitely shouldn't be\ndoubling-down on that mistake, and also brings up the point that I, at\nleast, had certainly hoped that part of this effort would include a way\nfor roles to be created by a user with an appropriate predefined role,\nand w/o CREATEROLE (which would then be deprecated or, ideally, just\noutright removed). I get that this doesn't have to be in the first\npatch or even patch set going down this road but the lack of discussion\nor of any coordination between this effort and the other one that is\ntrying to address the CREATEROLE issue seems likely to land us in a bad\nplace with two distinct approaches being used.\n\n> I doubt that I will create any replacement for the pg_host_security role previously proposed, as I think that role is just synonymous with \"superuser\", so it serves no purpose.\n> \n> I am uncertain about creating a role similar to the pg_network_security role previously proposed, as the changes to how publications and subscriptions work in patches v5-0008 through v5-0010 may be enough. 
In any event, I'd like feedback on those patches before designing one or more additional roles for this.\n\n\"Able to create network connections\" sure seems like a useful\ncapability to be able to delegate and which would cover postgres_fdw\nand dblink use-cases also.\n\nThanks,\n\nStephen", "msg_date": "Mon, 23 Aug 2021 14:13:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Aug 23, 2021 at 2:13 PM Stephen Frost <sfrost@snowman.net>\nwrote:> This I have to object to pretty strongly- we have got to get\naway from\n> the idea that just because X isn't a superuser or isn't owned by a\n> superuser that it's fine to allow some non-superuser to mess with it.\n> In particular, just because a role isn't explicitly marked as a superuser\n> doesn't mean that the role can't *become* a superuser, or that it hasn't\n> got privileged access to the system in other ways, such as by being a\n> member of other predefined roles that perhaps the role who is a member\n> of pg_manage_database_objects doesn't have. Such a check against\n> modifying of \"superuser owned\" objects implies that it's providing some\n> kind of protection against the role being able to become a superuser\n> when it doesn't actually provide that protection in any kind of reliable\n> fashion and instead ends up fooling the user.\n\nI think you make a good point here, but it seems to me that we need\n*something*. We need a way to create a \"lead tenant\" role that can\ncreate other tenant roles and then, err, boss them around. Not only\ndrop the roles again, but also drop or alter or change the owner of\ntheir objects, or really bypass any security those roles would like to\nassert as against the lead tenant. 
If we can't see a way to create\nsome sort of role of that sort, then I don't think we can really say\nwe've solved anything much.\n\n> This is the issue with CREATEROLE and we definitely shouldn't be\n> doubling-down on that mistake, and also brings up the point that I, at\n> least, had certainly hoped that part of this effort would include a way\n> for roles to be created by a user with an appropriate predefined role,\n> and w/o CREATEROLE (which would then be deprecated or, ideally, just\n> outright removed). I get that this doesn't have to be in the first\n> patch or even patch set going down this road but the lack of discussion\n> or of any coordination between this effort and the other one that is\n> trying to address the CREATEROLE issue seems likely to land us in a bad\n> place with two distinct approaches being used.\n\nIs there an active effort to do something about CREATEROLE? Do you\nhave a link to the thread? I feel like this is one of those things\nthat has occasioned discussion over the years but I am not aware of an\nactive project or a specific proposal to do something about this.\n\nMaybe this can be solved from the other end? Like, as opposed to\nworking by exception and saying, \"well, everything but superusers,\"\nmaybe we need to explicitly declare who is included. 
Like, perhaps we\ncould somehow represent the fact that role A has super-powers with\nrespect to roles B, C, D, and any future roles that A may create, but\nnot other roles that exist on the system, or something of that sort?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 14:42:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Aug 23, 2021 at 2:13 PM Stephen Frost <sfrost@snowman.net>\n> wrote:> This I have to object to pretty strongly- we have got to get\n> away from\n> > the idea that just because X isn't a superuser or isn't owned by a\n> > superuser that it's fine to allow some non-superuser to mess with it.\n> > In particlar, just because a role isn't explicitly marked as a superuser\n> > doesn't mean that the role can't *become* a superuser, or that it hasn't\n> > got privileged access to the system in other ways, such as by being a\n> > member of other predefined roles that perhaps the role who is a member\n> > of pg_manage_database_objects doesn't have. Such a check against\n> > modifying of \"superuser owned\" objects implies that it's providing some\n> > kind of protection against the role being able to become a superuser\n> > when it doesn't actually provide that protection in any kind of reliable\n> > fashion and instead ends up fooling the user.\n> \n> I think you make a good point here, but it seems to me that we need\n> *something*. We need a way to create a \"lead tenant\" role that can\n> create other tenant roles and then, err, boss them around. Not only\n> drop the roles again, but also drop or alter or change the owner of\n> their objects, or really bypass any security those roles would like to\n> assert as against the lead tenant. 
If we can't see a way to create\n> some sort of role of that sort, then I don't think we can really say\n> we've solved anything much.\n\nSure, but we can't just use the \"superuser\" flag for that, we need\nsomething better. The \"better\" in my mind here would be akin to what\nwe're thinking about doing for event triggers, but for roles which\nactually already have a distinction between becoming a role vs. being\nable to GRANT that role to another role, and that's the 'admin' option.\n\nIn other words, the user we imagine being GRANT'd this hypothetical\npg_manage_database_objects role wouldn't actually need that role to\nexplicitly give them access to be able to modify the objects of other\nroles- it would be able to do that by virtue of just being a member of\nthose roles. The roles who are allowed to modify existing role\nmembership should have the 'admin' right on those roles, and what we\njust need is a new predefined role that's basically \"allow roles to be\ncreated or dropped\" but where the only roles which can be GRANT'd by a\nuser with that ability are the ones that they have admin rights on, and\nthe only roles that they're allowed to drop they also have to have admin\nrights on.\n\n> > This is the issue with CREATEROLE and we definitely shouldn't be\n> > doubling-down on that mistake, and also brings up the point that I, at\n> > least, had certainly hoped that part of this effort would include a way\n> > for roles to be created by a user with an appropriate predefined role,\n> > and w/o CREATEROLE (which would then be deprecated or, ideally, just\n> > outright removed). 
I get that this doesn't have to be in the first\n> > patch or even patch set going down this road but the lack of discussion\n> > or of any coordination between this effort and the other one that is\n> > trying to address the CREATEROLE issue seems likely to land us in a bad\n> > place with two distinct approaches being used.\n> \n> Is there an active effort to do something about CREATEROLE? Do you\n> have a link to the thread? I feel like this is one of those things\n> that has occasioned discussion over the years but I am not aware of an\n> active project or a specific proposal to do something about this.\n\nHrmpf, I had been thinking of this:\n\nhttps://www.postgresql.org/message-id/flat/c2ee39152957af339ae6f3e851aef09930dd2faf.camel@credativ.de\n\nregistered in the CF here: https://commitfest.postgresql.org/34/2918/\n\nthough I see now that it isn't trying to explicitly deal with the\nCREATEROLE bit (which I had understood from some other discussion was a\ntopic of interest to the author), but is definitely caught up in the\ndiscussion about who is allowed to set what GUCs, and therefore still\nseems rather related to me.\n\n> Maybe this can be solved from the other end? Like, as opposed to\n> working by exception and saying, \"well, everything but superusers,\"\n> maybe we need to explicitly declare who is included. Like, perhaps we\n> could somehow represent the fact that role A has super-powers with\n> respect to roles B, C, D, and any future roles that A may create, but\n> not other roles that exist on the system, or something of that sort?\n\nIsn't this exactly what having the 'admin' option on a role is? You're\nGRANT'd that role and further are allowed to then GRANT that role to\nother roles. 
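For concreteness, here is how today's ADMIN option already separates membership from the right to grant membership onward (all role names here are hypothetical):

```sql
-- Run as a superuser:
CREATE ROLE tenant_a;
CREATE ROLE tenant_b;
CREATE ROLE lead LOGIN;
GRANT tenant_a TO lead WITH ADMIN OPTION;  -- lead can administer tenant_a

SET ROLE lead;
GRANT tenant_a TO tenant_b;  -- allowed: lead holds ADMIN on tenant_a
GRANT tenant_b TO lead;      -- fails: lead holds no ADMIN on tenant_b
```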
Being a member of that role means you're considered to\nhave 'ownership' level rights for all the objects that that role owns\ntoo.\n\nMaybe also need to have some condition around \"you can only set\nattributes on roles which you already have\", or maybe we need to invent\n'admin' options for each of the role attributes if we think it needs to\nbe more granular. The only other thing I can think of is that we should\nalso contemplate what to do with things like CONNECTION LIMIT, VALID\nUNTIL, perhaps others. Those aren't currently allowed to be modified by\na role who has 'admin' rights on another role and so maybe we make those\nrequire the new 'pg_manage_roles' predefined role and the 'admin' option\non a given role to be set.\n\nI'm not against the idea of inventing something new here too... but the\n'admin' option sure looks like what we're talking about here.\n\nThanks,\n\nStephen", "msg_date": "Mon, 23 Aug 2021 15:09:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Aug 23, 2021, at 11:13 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> This I have to object to pretty strongly- we have got to get away from\n> the idea that just because X isn't a superuser or isn't owned by a\n> superuser that it's fine to allow some non-superuser to mess with it.\n\nI thought we were trying to create a set of roles which could cumulatively do everything *inside the sandbox* that superuser can do, but which cannot escape the sandbox. 
I would think this pg_manage_database_objects role would be reasonable in the context of that effort.\n\n> In particlar, just because a role isn't explicitly marked as a superuser\n> doesn't mean that the role can't *become* a superuser, or that it hasn't\n> got privileged access to the system in other ways, such as by being a\n> member of other predefined roles that perhaps the role who is a member\n> of pg_manage_database_objects doesn't have.\n\nThe implementation does not allow pg_manage_database_objects to mess with objects that are owned by a role which satisfies superuser_arg(). If you are renting out a database to a tenant and change the ownership of stuff to a non-superuser, then you get what you get. But why would you do that?\n\n> Such a check against\n> modifying of \"superuser owned\" objects implies that it's providing some\n> kind of protection against the role being able to become a superuser\n> when it doesn't actually provide that protection in any kind of reliable\n> fashion and instead ends up fooling the user.\n\nPlease provide steps to reproduce this issue. Assume that a database is initialized and that everything is owned by the system. A \"tenant\" role is created and granted pg_manage_database_objects, and other non-superuser roles are created. Now, what exactly can \"tenant\" do that you find objectionable?\n\n> This is the issue with CREATEROLE and we definitely shouldn't be\n> doubling-down on that mistake, and also brings up the point that I, at\n> least, had certainly hoped that part of this effort would include a way\n> for roles to be created by a user with an appropriate predefined role,\n> and w/o CREATEROLE (which would then be deprecated or, ideally, just\n> outright removed).\n\nWell, pg_manage_database_objects has no special ability to create or drop roles. I thought separating those powers made more sense than grouping them together. We can have a new role for doing what you say, but that seems redundant with CREATEROLE. 
I didn't want this patch set to be bogged down waiting for a consensus on how to change the CREATEROLE privilege.\n\n> I get that this doesn't have to be in the first\n> patch or even patch set going down this road but the lack of discussion\n> or of any coordination between this effort and the other one that is\n> trying to address the CREATEROLE issue seems likely to land us in a bad\n> place with two distinct approaches being used.\n\nI'm confused. This patch set doesn't come within a country mile of CREATEROLE. Why should this patch set have to coordinate with that one? I'm not arguing with you -- merely asking what I'm misunderstanding?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 23 Aug 2021 12:29:01 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Aug 23, 2021, at 11:13 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > This I have to object to pretty strongly- we have got to get away from\n> > the idea that just because X isn't a superuser or isn't owned by a\n> > superuser that it's fine to allow some non-superuser to mess with it.\n> \n> I thought we were trying to create a set of roles which could cumulatively do everything *inside the sandbox* that superuser can do, but which cannot escape the sandbox. I would think this pg_manage_database_objects role would be reasonable in the context of that effort.\n\nI wasn't objecting to the general concept of trying to have a role that\ncan do lots of things inside the sandbox but aren't allowed to escape\nit. 
I was specifically pointing out that just checking whether an\nobject is directly owned by a superuser is not sufficient to prevent a\nrole from being able to escape the sandbox.\n\n> > In particular, just because a role isn't explicitly marked as a superuser\n> > doesn't mean that the role can't *become* a superuser, or that it hasn't\n> > got privileged access to the system in other ways, such as by being a\n> > member of other predefined roles that perhaps the role who is a member\n> > of pg_manage_database_objects doesn't have.\n> \n> The implementation does not allow pg_manage_database_objects to mess with objects that are owned by a role which satisfies superuser_arg(). If you are renting out a database to a tenant and change the ownership of stuff to a non-superuser, then you get what you get. But why would you do that?\n\nThat simply using superuser_arg() isn't sufficient is exactly the point that\nI'm trying to make. As a 'landlord', I might very well want to have\nsome kind of 'landlord' role that isn't directly a superuser but which\ncould *become* a superuser by having been GRANT'd a superuser role- but\nI certainly don't want that role's objects to be able to be messed with\nby the tenant.\n\n> > Such a check against\n> > modifying of \"superuser owned\" objects implies that it's providing some\n> > kind of protection against the role being able to become a superuser\n> > when it doesn't actually provide that protection in any kind of reliable\n> > fashion and instead ends up fooling the user.\n> \n> Please provide steps to reproduce this issue. Assume that a database is initialized and that everything is owned by the system. A \"tenant\" role is created and granted pg_manage_database_objects, and other non-superuser roles are created. 
Now, what exactly can \"tenant\" do that you find objectionable?\n\nIf one of those other non-superuser roles is, itself, a role that can\nbecome a superuser and it decides to create some functions for its own\npurposes, then the tenant role would be able to modify those functions,\nallowing the tenant to gain access to the non-superuser role, and from\nthere being able to gain access to superuser.\n\nSomething along these lines, basically:\n\nCREATE USER tenant;\nGRANT pg_manage_database_objects TO tenant;\nCREATE USER landlord;\nGRANT postgres TO landlord;\nSET ROLE landlord;\nCREATE FUNCTION do_stuff();\nput call to do_stuff() into a cronjob\nSET ROLE tenant;\nCREATE OR REPLACE do_stuff(); -- with code to take over landlord\n\npoof- tenant has ability to be landlord and then further to become\npostgres.\n\nAll of the above applies beyond just superuser too- consider a\nnon-superuser role which has been grant'd pg_execute_server_program.\nThat won't trip up superuser_arg() but it sure would allow a role to\nbreak out of the sandbox.\n\n> > This is the issue with CREATEROLE and we definitely shouldn't be\n> > doubling-down on that mistake, and also brings up the point that I, at\n> > least, had certainly hoped that part of this effort would include a way\n> > for roles to be created by a user with an appropriate predefined role,\n> > and w/o CREATEROLE (which would then be deprecated or, ideally, just\n> > outright removed).\n> \n> Well, pg_manage_database_objects has no special ability to create or drop roles. I thought separating those powers made more sense than grouping them together. We can have a new role for doing what you say, but that seems redundant with CREATEROLE. I didn't want this patch set to be bogged down waiting for a consensus on how to change the CREATEROLE privilege.\n\nCREATEROLE doesn't work to give to folks generally because of the issues\nabove- its check is, similarly, too simple and always has been. 
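Spelled out a little further, the escalation sketched above might look like this. The function body is purely illustrative, and tenant's ability to replace landlord's function is the hypothetical behavior granted by the proposed pg_manage_database_objects role, not stock PostgreSQL:

```sql
SET ROLE landlord;  -- landlord was earlier GRANTed membership in postgres
CREATE FUNCTION do_stuff() RETURNS void LANGUAGE plpgsql
AS $$ BEGIN NULL; END; $$;  -- some routine task, called from cron as landlord

SET ROLE tenant;  -- tenant holds only pg_manage_database_objects
CREATE OR REPLACE FUNCTION do_stuff() RETURNS void LANGUAGE plpgsql
AS $$ BEGIN SET ROLE postgres; GRANT postgres TO tenant; END; $$;
-- The next cron invocation runs this body as landlord, which can SET ROLE
-- to the superuser role it is a member of; tenant ends up GRANTed postgres.
```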
This\nisn't news either, it's been discussed in various places from time to\ntime and is part of why people who run cloud providers end up either not\ngiving out that role attribute and providing another way, or they hack\nup the PG core code to handle the way that attribute works differently.\n\n> > I get that this doesn't have to be in the first\n> > patch or even patch set going down this road but the lack of discussion\n> > or of any coordination between this effort and the other one that is\n> > trying to address the CREATEROLE issue seems likely to land us in a bad\n> > place with two distinct approaches being used.\n> \n> I'm confused. This patch set doesn't come within a country mile of CREATEROLE. Why should this patch set have to coordinate with that one? I'm not arguing with you -- merely asking what I'm misunderstanding?\n\nPerhaps it's just because I'm looking at the exact same issues cropping\nup here that do with the CREATEROLE situation and I'd really like to\nfind a solution that gets us away from putting out a half-solution that\nwon't actually be directly usable by the folks who care about making\nsure people don't break out of the sandbox because of the issues\noutlined above.\n\nThanks,\n\nStephen", "msg_date": "Mon, 23 Aug 2021 15:51:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Aug 23, 2021, at 12:51 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Simply using superuser_arg() isn't sufficient is exactly the point that\n> I'm trying to make. 
As a 'landlord', I might very well want to have\n> some kind of 'landlord' role that isn't directly a superuser but which\n> could *become* a superuser by having been GRANT'd a superuser role- but\n> I certainly don't want that role's objects to be able to be messed with\n> by the tenant.\n\n> If one of those other non-superuser roles is, itself, a role that can\n> become a superuser \n\nIf you have a sandbox-superuser who can do anything within the sandbox but nothing outside the sandbox, then you need a pretty good wall at the periphery of the sandbox. Breaking sandbox-superuser-ishness into multiple distinct privileges rather than one monolithic privilege doesn't change the need for a good wall at the periphery. The pg_manage_database_objects role doesn't encompass all sandbox-superuser privileges, but it is on that side of the wall.\n\nWe could agree to move the wall a little, and say that non-superuser roles who have the ability to become superusers are on the other side of the wall. That's fine. I'd have to rework the patch a bit, but conceptually that seems doable. We could also say that non-superusers who are members of privileged roles (pg_execute_server_programs, pg_signal_backend, etc) are likewise on the other side of that wall.\n\nDoes that work?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 23 Aug 2021 13:40:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Aug 23, 2021, at 12:51 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > Simply using superuser_arg() isn't sufficient is exactly the point that\n> > I'm trying to make. 
As a 'landlord', I might very well want to have\n> > some kind of 'landlord' role that isn't directly a superuser but which\n> > could *become* a superuser by having been GRANT'd a superuser role- but\n> > I certainly don't want that role's objects to be able to be messed with\n> > by the tenant.\n> \n> > If one of those other non-superuser roles is, itself, a role that can\n> > become a superuser \n> \n> If you have a sandbox-superuser who can do anything within the sandbox but nothing outside the sandbox, then you need a pretty good wall at the periphery of the sandbox. Breaking sandbox-superuser-ishness into multiple distinct privileges rather than one monolithic privilege doesn't change the need for a good wall at the periphery. The pg_manage_database_objects role doesn't encompass all sandbox-superuser privileges, but it is on that side of the wall.\n> \n> We could agree to move the wall a little, and say that non-superuser roles who have the ability to become superusers are on the other side of the wall. That's fine. I'd have to rework the patch a bit, but conceptually that seems doable. We could also say that non-superusers who are members of privileged roles (pg_execute_server_programs, pg_signal_backend, etc) are likewise on the other side of that wall.\n> \n> Does that work?\n\nI'd much rather we go down the path that Robert had suggested where we\nfind a way to make a connection between the tenant role and everything\nthat they create, and leave everything that is outside of that box on\nthe other side of the 'wall'. There's also the risk that the landlord\ncreates a role one day but then GRANT's superuser rights to that role on\nanother day, that happened to be after the tenant managed to gain\ncontrol of that role. 
That kind of thing is something we should work\nhard to make difficult to happen- the landlord should have to explicitly\ngive the tenant control over something that the landlord creates, it\nshouldn't happen automagically.\n\nHaving hard-coded lists of which predefined roles are 'ok' and which\naren't sounds generally bad and I don't think we'd actually want to\ninclude all predefined roles in that list either (even if it'd be fine\ntoday, which I don't think it is given things like pg_monitor and\npg_signal_backend, though perhaps there could be some debate over\nthose...).\n\nThanks,\n\nStephen", "msg_date": "Mon, 23 Aug 2021 16:46:08 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Aug 23, 2021, at 11:13 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Patch v5-0011 is a bug fix posted elsewhere that hasn't been committed yet but which must be committed in preparation for v5-0012.\n> \n> No idea what it is as I hadn't looked yet, but if it's a bug fix then\n> shouldn't it be separated and back-patched..?\n\nIt is already a patch waiting for commit.\n\nDiscussion: https://www.postgresql.org/message-id/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 24 Aug 2021 07:42:49 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Aug 23, 2021, at 1:46 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I'd much rather we go down the path that Robert had suggested where we\n> find a way to make a connection between the tenant role and everything\n> that 
they create, and leave everything that is outside of that box on\n> the other side of the 'wall'. \n\nI am coming around to this way of thinking. The main difficulty here stems (as you know) from how CREATEROLE was implemented. You and Tom had conversations about that back in 2005 [1], and Tom even suggested perhaps roles have owners:\n\n> * Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:\n\n> Possibly for 8.2 we could invent a notion of roles having owners.\n> Offhand I don't see any harm in letting non-CREATEROLE users create\n> non-login roles, and manipulate the membership of roles they have\n> created (or that have been assigned to them by a superuser). On the\n> other hand, it could be that the WITH ADMIN OPTION feature is already\n> sufficient for this. This really needs some thought ...\n\n\nMaking roles owners of roles they create, and giving them the power to manipulate objects which belong to roles they own (recursively), seems to solve most of our problems we have been discussing. The remaining problem is that roles without createrole or superuser cannot create other roles. We don't want tenants to need either of those things, at least not as they are currently defined. We could either modify the createrole privilege to be far less powerful, or create a new privilege.\n\nIf role owners can alter and drop roles they own (and ones those roles own, etc.) then we could redefine CREATEROLE to really just mean the ability to create new roles. The ability to alter or drop roles would not stem from having CREATEROLE, but rather from owning the role. For setups where one admin role has CREATEROLE and creates all other roles (except the superuser which created the admin) nothing changes. In setups with multiple admins, where none own the others, each admin would have its own fiefdom, managing everything downstream from itself, but having no special privilege over the other fiefdoms. 
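Under that model, the fiefdoms might behave like this (hypothetical semantics sketched from the proposal, not current PostgreSQL behavior; role names are illustrative):

```sql
-- admin_a and admin_b are peer admins; neither owns the other.
SET ROLE admin_a;                  -- admin_a has the (weakened) CREATEROLE
CREATE ROLE t1 LOGIN;              -- admin_a becomes t1's owner
ALTER ROLE t1 CONNECTION LIMIT 5;  -- allowed: flows from ownership, not CREATEROLE
SET ROLE admin_b;
DROP ROLE t1;                      -- refused under the proposal: admin_b does not own t1
```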
I think that setup wasn't implemented for 8.1 more for lack of time than because it was unwanted.\n\nAlternately, we could just create a new privilege parallel to CREATEROLE, but that seems confusing more than helpful.\n\nThoughts?\n\n\n[1] https://www.postgresql.org/message-id/17554.1120258001%40sss.pgh.pa.us\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 27 Aug 2021 15:47:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "> On Aug 27, 2021, at 3:47 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Making roles owners of roles they create, and giving them the power to manipulate objects which belong to roles they own (recursively), seems to solve most of our problems we have been discussing. \n\nNot hearing any responses, this part is implemented in v6-0018 and v6-0019.\n\n> The remaining problem is that roles without createrole or superuser cannot create other roles. We don't want tenants to need either of those things, at least not as they are currently defined. We could either modify the createrole privilege to be far less powerful, or create a new privilege.\n> \n> If role owners can alter and drop roles they own (and ones those roles own, etc.) then we could redefine CREATEROLE to really just mean the ability to create new roles. The ability to alter or drop roles would not stem from having CREATEROLE, but rather from owning the role. For setups where one admin role has CREATEROLE and creates all other roles (except the superuser which created the admin) nothing changes. In setups with multiple admins, where none own the others, each admin would have its own fiefdom, managing everything downstream from itself, but having no special privilege over the other fiefdoms. 
I think that setup wasn't implemented for 8.1 more for lack of time than because it was unwanted.\n\nThis really deserves more thought from the list. CREATEROLE's behavior is unchanged in this patch set.\n\n> On Aug 23, 2021, at 11:13 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Patches v5-0002 through v5-0005 create four new roles for managing host resource settings, vacuum settings, autovacuum settings, and logging settings. That last one excludes \"where to log\" settings, because we don't want the role to be able to write to arbitrary locations on the server. Remaining guc variables not in these four categories continue to belong to the superuser.\n> \n> We do have a role today who is allowed to write to arbitrary locations\n> on the server, so I wonder if for those log settings we'd include a\n> requirement for the user to have both of those roles instead..?\n\nFollowing your advice, pg_manage_logging_settings + pg_write_server_files is made sufficient to set \"where to log\" settings in v6-0005.\n\nPatches v6-0002 through v6-0005 add roles intended to allow tenants to set values for a limited number of appropriate guc variables. That seems fine for the purpose of facilitating postgres-as-a-service.\n\nThere is another reason to have roles with the power to SET or ALTER SYSTEM SET guc variables, though. For large deployments of postgres databases in the cloud, being able to log in as a non-superuser role in order to configure the database means having one fewer reasons to need to allow superuser connections to the server. That's valuable for its own sake.\n\nPatches v6-0006 through v6-0012 add yet more roles with authority to set additional guc variables. 
They don't go quite so far as including all gucs, but the majority of gucs are covered, and we can add additional groupings if anybody has suggestions.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 31 Aug 2021 18:41:51 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "> On Aug 31, 2021, at 6:41 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> <v6-0019-Giving-role-owners-control-over-owned-roles.patch>\n\nSynopsis:\n\nThe major change in version 7 is a reworking of role ownership and the CREATEROLE attribute to make it no longer a near-superuser privilege. This new functionality is in v7-0021.\n\nDetails:\n\nThe changes in version 7 of the patchset are:\n\nv7-0001 is a new patch that introduces a single new regression test covering various aspects of the permissions system surrounding creating, altering, dropping and granting membership in roles. The functional changes in v7-0021 do not cause pre-existing regression test failures, not even when running check-world, despite fundamentally changing how much of this works. This new test adds coverage for create role, and as each patch in the series introduces changes, is modified to reflect them.\n\nv6-0001 through v6-0019 correspond to v7-0002 through v7-0020 and are mostly unchanged, but are updated to apply cleanly to the current git master, to fix a bug that was present in the v6 patch set, to update the regression tests for security labels where CREATEROLE is used, and to update the create_role regression test from v7-0001 as needed per patch.\n\nv7-0021 redesigns the CREATEROLE attribute to no longer bestow nearly so much power. 
The ability to alter or drop a role no longer flows from having the CREATEROLE attribute, but rather from being the role's owner. The ADMIN option works as before, but role owners implicitly have ADMIN on roles which they own.\n\nRoles with the CREATEROLE attribute may create new roles, but those new roles may not be created with privileges which the creating role lacks. Specifically, SUPERUSER, REPLICATION, BYPASSRLS, CREATEDB, CREATEROLE and LOGIN privilege may not be granted to the new role unless the creating role has them. (This rule is adhered to but trivial in the case of the CREATEROLE privilege, since the creator must necessarily have that one.) When creating a new role using the IN ROLE, ROLE, or ADMIN clauses, the creating role must have sufficient privileges on the roles named by these clauses to perform the GRANTs these roles entail. Merely having the CREATEROLE attribute is insufficient to perform arbitrary grants of role memberships.\n\nThe INHERIT, VALID UNTIL, and CONNECTION LIMIT attributes are not thought about as privileges in the patch; perhaps they should be? It would be quite reasonable to say that a role with a finite connection limit should have that limit thought about as a \"pool\" and should have to assign connection rights from that pool to other roles it creates. Likewise, a role with a VALID UNTIL limit could be constrained to only create roles with VALID UNTIL less than or equal to its own limit. Perhaps a NOINHERIT role should only be able to create NOINHERIT roles? 
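A hypothetical session, with invented role names, may help illustrate the privilege rules just described; this is a sketch of the v7-0021 proposal, not of any released PostgreSQL version:

```sql
-- As superuser: make an administrator who can create roles but is
-- otherwise unprivileged.
CREATE ROLE admin LOGIN CREATEROLE;     -- no SUPERUSER, no CREATEDB

-- As admin:
CREATE ROLE tenant LOGIN;               -- ok; admin becomes tenant's owner
CREATE ROLE builder LOGIN CREATEDB;     -- would fail under the proposal:
                                        -- admin itself lacks CREATEDB
ALTER ROLE tenant CONNECTION LIMIT 10;  -- ok: admin owns tenant
DROP ROLE tenant;                       -- ok: admin owns tenant
```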
The patch does none of these things, but feedback is much appreciated.\n\nThe docs are adjusted, but drop_role.sgml may need to be further adjusted:\n\n <para>\n The SQL standard defines <command>DROP ROLE</command>, but it allows\n only one role to be dropped at a time, and it specifies different\n privilege requirements than <productname>PostgreSQL</productname> uses.\n </para>\n\nI lack a copy of the SQL standard, so I'm uncertain if this patch has, by chance, changed the privilege requirements to match that of the spec?\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 15 Sep 2021 07:38:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\nOn 9/15/21 10:38 AM, Mark Dilger wrote:\n>> On Aug 31, 2021, at 6:41 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>\n>> <v6-0019-Giving-role-owners-control-over-owned-roles.patch>\n> Synopsis:\n>\n> The major change in version 7 is a reworking of role ownership and the CREATEROLE attribute to make it no longer a near-superuser privilege. This new functionality is in v7-0021.\n>\n> Details:\n>\n> The changes in version 7 of the patchset are:\n>\n> v7-0001 is a new patch that introduces a single new regression test covering various aspects of the permissions system surrounding creating, altering, dropping and granting membership in roles. The functional changes in v7-0021 do not cause pre-existing regression test failures, not even when running check-world, despite fundamentally changing how much of this works. 
This new test adds coverage for create role, and as each patch in the series introduces changes, is modified to reflect them.\n>\n> v6-0001 through v6-0019 correspond to v7-0002 through v7-0020 and are mostly unchanged, but are updated to apply cleanly to the current git master, to fix a bug that was present in the v6 patch set, to update the regression tests for security labels where CREATEROLE is used, and to update the create_role regression test from v7-0001 as needed per patch.\n>\n> v7-0021 redesigns the CREATEROLE attribute to no longer bestow nearly so much power. The ability to alter or drop a role no longer flows from having the CREATEROLE attribute, but rather from being the role's owner. The ADMIN option works as before, but role owners implicitly have ADMIN on roles which they own.\n>\n> Roles with the CREATEROLE attribute may create new roles, but those new roles may not be created with privileges which the creating role lacks. Specifically, SUPERUSER, REPLICATION, BYPASSRLS, CREATEDB, CREATEROLE and LOGIN privilege may not be granted to the new role unless the creating role has them. (This rule is adhered to but trivial in the case of the CREATEROLE privilege, since the creator must necessarily have that one.) When creating a new role using the IN ROLE, ROLE, or ADMIN clauses, the creating role must have sufficient privileges on the roles named by these clauses to perform the GRANTs these roles entail. Merely having the CREATEROLE attribute is insufficient to perform arbitrary grants of role memberships.\n>\n> The INHERIT, VALID UNTIL, and CONNECTION LIMIT attributes are not thought about as privileges in the patch; perhaps they should be? It would be quite reasonable to say that a role with a finite connection limit should have that limit thought about as a \"pool\" and should have to assign connection rights from that pool to other roles it creates. 
Likewise, a role with a VALID UNTIL limit could be constrained to only create roles with VALID UNTIL less than or equal to its own limit. Perhaps a NOINHERIT role should only be able to create NOINHERIT roles? The patch does none of these things, but feedback is much appreciated.\n>\n> The docs are adjusted, but drop_role.sgml may need to be further adjusted:\n>\n> <para>\n> The SQL standard defines <command>DROP ROLE</command>, but it allows\n> only one role to be dropped at a time, and it specifies different\n> privilege requirements than <productname>PostgreSQL</productname> uses.\n> </para>\n>\n> I lack a copy of the SQL standard, so I'm uncertain if this patch has, by chance, changed the privilege requirements to match that of the spec?\n>\n>\n\nThis patch set is failing to apply for me - it fails on patch 2.\n\n\nI haven't dug terribly deeply into it yet, but I notice that there is a\nvery large increase in test volume, which appears to account for much of\nthe 44635 lines of the patch set. I think we're probably going to want\nto reduce that. We've had complaints in the past from prominent hackers\nabout adding too much volume to the regression tests.\n\n\nI do like the basic thrust of reducing the power of CREATEROLE. There's\nan old legal maxim I learned in my distant youth that says \"nemo dat\nquod non habet\" - Nobody can give something they don't own. This seems\nto be in that spirit, and I approve :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 21 Sep 2021 15:58:15 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "> On Sep 21, 2021, at 12:58 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> This patch set is failing to apply for me - it fails on patch 2.\n\nThanks for looking! 
I have pulled together a new patch set which applies cleanly against current master.\n\n> I haven't dug terribly deeply into it yet, but I notice that there is a\n> very large increase in test volume, which appears to account for much of\n> the 44635 lines of the patch set. I think we're probably going to want\n> to reduce that. We've had complaints in the past from prominent hackers\n> about adding too much volume to the regression tests.\n\nThe v8 patch set is much smaller, with the reduction being in the size of regression tests covering which roles can perform SET, RESET, ALTER SYSTEM SET, and ALTER SYSTEM RESET and on which GUCs. The v7 patch set did exhaustive testing on this, which is why it was so big. The v8 set does just a sampling of GUCs per role. The total number of lines for the patch set drops from 44635 to 13026, with only 1960 lines total between the .sql and .out tests of GUC privileges.\n\n> I do like the basic thrust of reducing the power of CREATEROLE. There's\n> an old legal maxim I learned in my distant youth that says \"nemo dat\n> quod non habet\" - Nobody can give something they don't own. This seems\n> to be in that spirit, and I approve :-)\n\nGreat! I'm glad to hear the approach has some support.\n\n\nOther changes in v8:\n\nAdd a new test for subscriptions owned by non-superusers to verify that the tablesync and apply workers replicating their subscription won't replicate into schemas and tables that the subscription owner lacks privilege to touch. The logic to prevent that existed in the v7 patch, but I overlooked adding tests for it. Fixed.\n\nAllow non-superusers to create event triggers. The logic already existed and is unchanged to handle event triggers owned by non-superusers and conditioning those triggers firing on the (trigger-owner, role-performing-event) pair. 
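The event-trigger behavior just described might look like this in practice. This is a sketch with invented names that assumes the v8 patches are applied; in released PostgreSQL versions only superusers may create event triggers:

```sql
-- As an unprivileged role, under the v8 proposal:
CREATE FUNCTION notice_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- TG_TAG is the standard event-trigger variable holding the command tag
    RAISE NOTICE 'command % run by %', TG_TAG, current_user;
END;
$$;

CREATE EVENT TRIGGER my_ddl_notice
    ON ddl_command_end
    EXECUTE FUNCTION notice_ddl();
```

Under the proposal, such a trigger fires only for commands run by roles its owner has privilege over, so it cannot interfere with roles outside the owner's sphere.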
The v7 patch set didn't go quite so far as allowing non-superusers to create event triggers, but that undercuts much of the benefit of the changes for no obvious purpose.\n\n\nNot changed in v8, but worth discussing:\n\nNon-superusers are still prohibited from creating subscriptions, despite improvements to the security around that circumstance. Improvements to the security model around event triggers does not have to also include permission for non-superuser to create event triggers, but v8 does both. These could be viewed as inconsistent choices, but I struck the balance this way because roles creating event triggers does not entail them doing anything that they can't already do, whereas allowing arbitrary users to create subscriptions would entail an ordinary user causing external network connections being initiated. We likely need to create another privileged role and require a non-superuser to be part of that role before they can create subscriptions. That seems, however, like something to do as a follow-on patch, since tightening up the security on subscriptions as done in this patch doesn't depend on that.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 27 Sep 2021 11:15:05 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On 9/27/21, 11:16 AM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n> On Sep 21, 2021, at 12:58 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\r\n>> I do like the basic thrust of reducing the power of CREATEROLE. There's\r\n>> an old legal maxim I learned in my distant youth that says \"nemo dat\r\n>> quod non habet\" - Nobody can give something they don't own. This seems\r\n>> to be in that spirit, and I approve :-)\r\n>\r\n> Great! 
I'm glad to hear the approach has some support.\r\n\r\nI'd also like to voice my support for this effort. I haven't been\r\nfollowing this thread too closely, but I did take a gander at the\r\nlatest patch set. There is a lot to unpack here. I think this could\r\neasily be split into 3 or 4 threads.\r\n\r\nThe changes for adding GUC management roles seem pretty\r\nstraightforward and would likely be helpful for service providers.\r\nHowever, I was kind of surprised that membership to such roles also\r\nprovided access to ALTER SYSTEM SET. IMO there's quite a big\r\ndifference between allowing a user to set a GUC per-session versus\r\ncluster-wide. With these patches, if I just want to allow a user to\r\nset a GUC like temp_file_limit or log_statement, I also have to give\r\nthem the ability to change it (and several other GUCs) for all roles\r\non the system.\r\n\r\nI haven't spent too much time looking at the event trigger and logical\r\nreplication changes yet.\r\n\r\nFor the CREATEROLE changes, the main thing on my mind is how this\r\nmight impact upgrades. IIUC roles with CREATEROLE will lose many\r\nprivileges after pg_upgrade. I think one way to deal with this would\r\nbe to have such upgrades grant all the privileges they are losing, but\r\nmost CREATEROLE roles likely aren't using the full extent of their\r\npowers, so that approach may be a little extreme. Perhaps it is okay\r\nto just add a blurb in the release notes about this backwards-\r\nincompatible change.\r\n\r\nAnother interesting thing I found is that if a role has ownership of\r\na role that later obtains SUPERUSER, the owning role basically loses\r\nall control of the role. It makes sense to avoid letting non-\r\nsuperusers mess with superusers, but this led me to wonder if there\r\nshould be a mechanism for transferring role ownership (e.g., ALTER\r\nROLE or REASSIGN OWNED BY). Presently, REASSIGN OWNED BY fails\r\nwith an \"unexpected classid\" ERROR. 
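The command under discussion here is existing syntax, spelled REASSIGN OWNED; a sketch with invented role names, where the report above is that the role-ownership case fails with the "unexpected classid" error when the patch is applied:

```sql
-- Existing syntax: transfer ownership of everything old_admin owns.
REASSIGN OWNED BY old_admin TO new_admin;
-- Typical follow-up before removing the role entirely:
DROP OWNED BY old_admin;
DROP ROLE old_admin;
```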
Such functionality might also\r\ncome in handy for the pg_dump changes for maintaining role ownership.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 5 Oct 2021 00:22:04 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control\n of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 9/27/21, 11:16 AM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\n> > On Sep 21, 2021, at 12:58 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> >> I do like the basic thrust of reducing the power of CREATEROLE. There's\n> >> an old legal maxim I learned in my distant youth that says \"nemo dat\n> >> quod non habet\" - Nobody can give something they don't own. This seems\n> >> to be in that spirit, and I approve :-)\n> >\n> > Great! I'm glad to hear the approach has some support.\n> \n> I'd also like to voice my support for this effort. I haven't been\n> following this thread too closely, but I did take a gander at the\n> latest patch set. There is a lot to unpack here. I think this could\n> easily be split into 3 or 4 threads.\n\nI tend to agree. I'm also generally supportive but following everything\nthat's going on in this particular patch set isn't easy.\n\n> For the CREATEROLE changes, the main thing on my mind is how this\n> might impact upgrades. IIUC roles with CREATEROLE will lose many\n> privileges after pg_upgrade. I think one way to deal with this would\n> be to have such upgrades grant all the privileges they are losing, but\n> most CREATEROLE roles likely aren't using the full extent of their\n> powers, so that approach may be a little extreme. Perhaps it is okay\n> to just add a blurb in the release notes about this backwards-\n> incompatible change.\n\nThis is definitely a pretty big change. 
There needs to be a bigger and\nindependent discussion about the general concept of role 'self\nadministration' as we talk about it in the comments of the role system\nand which this doesn't really address either. I've been planning for a\nwhile to start a specific thread about that and I'll try to do that so\nthat we can discuss that specifically, as it's quite relevant to all of\nthis, in my view.\n\n> Another interesting thing I found is that if a role has ownership of\n> a role that later obtains SUPERUSER, the owning role basically loses\n> all control of the role. It makes sense to avoid letting non-\n> superusers mess with superusers, but this led me to wonder if there\n> should be a mechanism for transferring role ownership (e.g., ALTER\n> ROLE or REASSIGN OWNED BY). Presently, REASSIGN OWNED BY fails\n> with an \"unexpected classid\" ERROR. Such functionality might also\n> come in handy for the pg_dump changes for maintaining role ownership.\n\nI really think we need to stop addressing roles explicitly as\n'superuser' vs. 'non-superuser', because a non-superuser role can be\nGRANT'd a superuser role, which makes that distinction really not\nsensible. This has continued to be a problem and we need to cleanly\naddress it. Not sure exactly how to do that today but it's certainly an\nissue.\n\nThanks,\n\nStephen", "msg_date": "Mon, 4 Oct 2021 22:07:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On 10/4/21, 7:08 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> I really think we need to stop addressing roles explicitly as\r\n> 'superuser' vs. 'non-superuser', because a non-superuser role can be\r\n> GRANT'd a superuser role, which makes that distinction really not\r\n> sensible. This has continued to be a problem and we need to cleanly\r\n> address it. 
Not sure exactly how to do that today but it's certainly an\r\n> issue.\r\n\r\nAgreed. Maybe one option is to convert most of the role attributes to\r\nbe predefined roles. Then we could just check for membership in\r\npg_superuser instead of trying to deal with membership in roles that\r\nhave the SUPERUSER attribute.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 5 Oct 2021 03:26:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control\n of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Sep 27, 2021 at 11:45 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> Thanks for looking! I have pulled together a new patch set which applies cleanly against current master.\n\nHi Mark, thanks for this work. I'm late to be here in this thread,\nplease note that I didn't go through the entire thread as it is quite\nlong for me to read.\n\nI have a question: it looks like the view pg_backend_memory_contexts\nand the function pg_log_backend_memory_contexts are superuser only.\nIsn't it a good idea to allow users with a pg_monitor or some other\nsimilar role to use these as well? This question may be unrelated here\nbut I'm curious to know whether your patch set has a solution.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 7 Oct 2021 11:18:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Oct 4, 2021 at 8:22 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> The changes for adding GUC management roles seem pretty\n> straightforward and would likely be helpful for service providers.\n> However, I was kind of surprised that membership to such roles also\n> provided access to ALTER SYSTEM SET. 
IMO there's quite a big\n> difference between allowing a user to set a GUC per-session versus\n> cluster-wide. With these patches, if I just want to allow a user to\n> set a GUC like temp_file_limit or log_statement, I also have to give\n> them the ability to change it (and several other GUCs) for all roles\n> on the system.\n\nThere's also ALTER ROLE and ALTER DATABASE, which provide more ways to\nset GUCs. I agree that we could choose to make distinctions here, but I\nthink if we make too many distinctions, it will become complicated to\nadminister. And if we don't have *any* way to delegate ALTER SYSTEM,\nthen I think we have missed the mark.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Oct 2021 12:47:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 6, 2021, at 10:48 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> Hi Mark, thanks for this work. I'm late to be here in this thread,\n> please note that I didn't go through the entire thread as it is quite\n> long for me to read.\n\nThanks for joining.\n\n> I have a question: it looks like the view pg_backend_memory_contexts\n> and the function pg_log_backend_memory_contexts are superuser only.\n> Isn't it a good idea to allow users with a pg_monitor or some other\n> similar role to use these as well? This question may be unrelated here\n> but I'm curious to know whether your patch set has a solution.\n\nPossibly, but I have stopped adding new topics to this particular patch set, as I'm already getting requests to break it into separate sets and email threads. 
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 7 Oct 2021 09:58:57 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, Oct 7, 2021 at 10:29 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > I have a question: it looks like the view pg_backend_memory_contexts\n> > and the function pg_log_backend_memory_contexts are superuser only.\n> > Isn't it a good idea to allow users with a pg_monitor or some other\n> > similar role to use these as well? This question may be unrelated here\n> > but I'm curious to know whether your patch set has a solution.\n>\n> Possibly, but I have stopped adding new topics to this particular patch set, as I'm already getting requests to break it into separate sets and email threads.\n\nThanks Mark. I will discuss it in a separate thread.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 7 Oct 2021 22:44:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Monday 27 September 2021 at 20:15:05 CEST, Mark Dilger wrote:\n> > On Sep 21, 2021, at 12:58 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> > This patch set is failing to apply for me - it fails on patch 2.\n> \n> Thanks for looking! 
We've had complaints in the past from prominent hackers\n> > about adding too much volume to the regression tests.\n> \n> The v8 patch set is much smaller, with the reduction being in the size of\n> regression tests covering which roles can perform SET, RESET, ALTER SYSTEM\n> SET, and ALTER SYSTEM RESET and on which GUCs. The v7 patch set did\n> exhaustive testing on this, which is why it was so big. The v8 set does\n> just a sampling of GUCs per role. The total number of lines for the patch\n> set drops from 44635 to 13026, with only 1960 lines total between the .sql\n> and .out tests of GUC privileges.\n> > I do like the basic thrust of reducing the power of CREATEROLE. There's\n> > an old legal maxim I learned in my distant youth that says \"nemo dat\n> > quod non habet\" - Nobody can give something they don't own. This seems\n> > to be in that spirit, and I approve :-)\n> \n> Great! I'm glad to hear the approach has some support.\n> \n> \n> Other changes in v8:\n> \n> Add a new test for subscriptions owned by non-superusers to verify that the\n> tablesync and apply workers replicating their subscription won't replicate\n> into schemas and tables that the subscription owner lacks privilege to\n> touch. The logic to prevent that existed in the v7 patch, but I overlooked\n> adding tests for it. Fixed.\n> \n> Allow non-superusers to create event triggers. The logic already existed\n> and is unchanged to handle event triggers owned by non-superusers and\n> conditioning those triggers firing on the (trigger-owner,\n> role-performing-event) pair. The v7 patch set didn't go quite so far as\n> allowing non-superusers to create event triggers, but that undercuts much\n> of the benefit of the changes for no obvious purpose.\n> \n> \n> Not changed in v8, but worth discussing:\n> \n> Non-superusers are still prohibited from creating subscriptions, despite\n> improvements to the security around that circumstance. 
Improvements to the\n> security model around event triggers does not have to also include\n> permission for non-superuser to create event triggers, but v8 does both. \n> These could be viewed as inconsistent choices, but I struck the balance\n> this way because roles creating event triggers does not entail them doing\n> anything that they can't already do, whereas allowing arbitrary users to\n> create subscriptions would entail an ordinary user causing external network\n> connections being initiated. We likely need to create another privileged\n> role and require a non-superuser to be part of that role before they can\n> create subscriptions. That seems, however, like something to do as a\n> follow-on patch, since tightening up the security on subscriptions as done\n> in this patch doesn't depend on that.\n\nThe changes proposed around subscription management make a lot of sense, \nespecially considering that now that we no longer allow ALTER SUBSCRIPTION \nREFRESH to be run in a function, there is no way to delegate this to a \nnon-superuser (using a security definer function). Since it doesn't involve the \nrest of the patchset, patches 16, 17 and 18 could be separated in another \nthread / patchset. \n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 11:51:40 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, 2021-09-27 at 11:15 -0700, Mark Dilger wrote:\n> Allow non-superusers to create event triggers. The logic already\n> existed and is unchanged to handle event triggers owned by non-\n> superusers and conditioning those triggers firing on the (trigger-\n> owner, role-performing-event) pair. 
The v7 patch set didn't go quite\n> so far as allowing non-superusers to create event triggers, but that\n> undercuts much of the benefit of the changes for no obvious purpose.\n\nThe thread on role self-administration seems like a dependency here.\nAnd it doesn't look like there's consensus that we should be\nconditioning event trigger firing on role membership:\n\nhttps://postgr.es/m/20211005043438.GB314685@rfd.leadboat.com\n\nInstead, how about:\n\n* make a predefined role pg_event_trigger that allows creating event\ntriggers\n* make it an error for a superuser to fire an event trigger created by\na non-superuser\n\nIt doesn't solve the problem hierarchically, but we don't solve other\npredefined role privileges hierarchically, either (and for many of them\nit makes no sense).\n\nA downside is that the privileged event trigger creator could\naccidentally make life annoying for a superuser that's trying to issue\nDDL: the superuser would need to disable the event trigger, perform the\naction, then re-enable it. But that shouldn't be a practical problem in\nsane setups -- superusers shouldn't be performing a lot of DDL, and if\nthey are, it's good to be explicit that they are bypassing something\nconfigured by their pseudo-admin.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 12:28:20 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 19, 2021, at 12:28 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Mon, 2021-09-27 at 11:15 -0700, Mark Dilger wrote:\n>> Allow non-superusers to create event triggers. The logic already\n>> existed and is unchanged to handle event triggers owned by non-\n>> superusers and conditioning those triggers firing on the (trigger-\n>> owner, role-performing-event) pair. 
The v7 patch set didn't go quite\n>> so far as allowing non-superusers to create event triggers, but that\n>> undercuts much of the benefit of the changes for no obvious purpose.\n> \n> The thread on role self-administration seems like a dependency here.\n> And it doesn't look like there's consensus that we should be\n> conditioning event trigger firing on role membership:\n> \n> https://postgr.es/m/20211005043438.GB314685@rfd.leadboat.com\n\nI have noticed the lack of consensus. The resistance to having roles own other roles should get more attention, I think.\n\nStephen and I went into the weeds on what DROP ROLE rolename CASCADE should mean, but I don't think that should hold up the idea of role ownership. He wanted a different command to do the work rather than this command, but I don't see anything in what he wrote to suggest that the idea is unacceptable, only a different preference on how that functionality gets spelled.\n\nThere was also some difference in interpretation on what exact differences there are between \"ownership\" and \"dependency\". To me, \"ownership\" is a subtype of dependency, just as \"is indexed by\" and \"is contained in\" are subtypes of dependency. Indexes are dependent on the tables they index, tables are dependent on schemas that contain them, objects are dependent on roles that own them, and so forth. Stephen seemed to have a different view. I'm not at all clear on whether his different view is a showstopper.\n\nBefore redesigning the way we fix up event triggers for v15, I'd like to have a sense of how contentious all this is. 
If it's just a matter of definitions and command spellings, we can work around it.\n\nThanks for participating in this thread, BTW.\n\n> Instead, how about:\n> \n> * make a predefined role pg_event_trigger that allows creating event\n> triggers\n> * make it an error for a superuser to fire an event trigger created by\n> a non-superuser\n\nI think blocking superuser actions is a non-starter, but you address that below....\n\n> It doesn't solve the problem hierarchically, but we don't solve other\n> predefined role privileges hierarchically, either (and for many of them\n> it makes no sense).\n> \n> A downside is that the privileged event trigger creator could\n> accidentally make life annoying for a superuser that's trying to issue\n> DDL: the superuser would need to disable the event trigger, perform the\n> action, then re-enable it. But that shouldn't be a practical problem in\n> sane setups -- superusers shouldn't be performing a lot of DDL, and if\n> they are, it's good to be explicit that they are bypassing something\n> configured by their pseudo-admin.\n\nI'd prefer not to assume much about the sanity of the setup, and I agree the superuser should be able to unconditionally disable the offending event trigger, but I think it is a pretty poor solution that a superuser would need to disable and then re-enable a trigger. Other commands in other sessions would be able to sneak through during the window of time when the trigger is disabled. 
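The disable-and-re-enable dance under discussion uses existing syntax, and the race it opens is visible in the sketch (the trigger name is invented):

```sql
-- As superuser, the only way today to step around a trigger owned by
-- another role:
ALTER EVENT TRIGGER tenant_ddl_guard DISABLE;
-- ... the superuser performs its DDL here; meanwhile, DDL issued from
-- other sessions also sneaks past the disabled trigger ...
ALTER EVENT TRIGGER tenant_ddl_guard ENABLE;
```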
Wouldn't it be much cleaner to have superuser bypass the trigger?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 13:17:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Tue, 2021-10-19 at 13:17 -0700, Mark Dilger wrote:\n> Wouldn't it be much cleaner to have superuser bypass the trigger?\n\nMaybe it could be a user property like \"BYPASS_EVENT_TRIGGERS\", and\nonly superusers could adjust it (like the SUPERUSER and REPLICATION\nproperties).\n\nI suppose it would default to BYPASS_EVENT_TRIGGERS for superusers and\nnot for non-superusers. A little awkward to have different defaults,\nbut it seems sensible in this case.\n\nWould this bypass all event triggers, or only the event triggers of\nanother user?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:18:55 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 19, 2021, at 3:18 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Tue, 2021-10-19 at 13:17 -0700, Mark Dilger wrote:\n>> Wouldn't it be much cleaner to have superuser bypass the trigger?\n> \n> Maybe it could be a user property like \"BYPASS_EVENT_TRIGGERS\", and\n> only superusers could adjust it (like the SUPERUSER and REPLICATION\n> properties).\n> \n> I suppose it would default to BYPASS_EVENT_TRIGGERS for superusers and\n> not for non-superusers. 
A little awkward to have different defaults,\n> but it seems sensible in this case.\n> \n> Would this bypass all event triggers, or only the event triggers of\n> another user?\n\nThe difficulty is that non-superuser owned event triggers could be something of a minefield for scripts run as superuser. The cleanest way around that would be to have them never fire in response to superuser actions. Installations could still have event triggers that cover all users, including superusers, as long as they have those triggers owned by superuser.\n\nThe implementation in the patch set does this, but with finer grained precision, because the universe of roles is divided into more than just superuser vs. non-superuser.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:26:20 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\nOn Tue, Oct 19, 2021 at 18:26 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Oct 19, 2021, at 3:18 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Tue, 2021-10-19 at 13:17 -0700, Mark Dilger wrote:\n> >> Wouldn't it be much cleaner to have superuser bypass the trigger?\n> >\n> > Maybe it could be a user property like \"BYPASS_EVENT_TRIGGERS\", and\n> > only superusers could adjust it (like the SUPERUSER and REPLICATION\n> > properties).\n> >\n> > I suppose it would default to BYPASS_EVENT_TRIGGERS for superusers and\n> > not for non-superusers. 
A little awkward to have different defaults,\n> > but it seems sensible in this case.\n> >\n> > Would this bypass all event triggers, or only the event triggers of\n> > another user?\n>\n> The difficulty is that non-superuser owned event triggers could be\n> something of a minefield for scripts run as superuser. The cleanest way\n> around that would be to have them never fire in response to superuser\n> actions. Installations could still have event triggers that cover all\n> users, including superusers, as long as they have those triggers owned by\n> superuser.\n>\n> The implementation in the patch set does this, but with finer grained\n> precision, because the universe of roles is divided into more than just\n> superuser vs. non-superuser.\n\n\nThis last point is particularly important. Non-super users may be able to\nbecome superuser and those roles which are able to need to also be\nprotected. Only protecting superuser roles themselves is *not* enough.\n\nThanks,\n\nStephen
", "msg_date": "Tue, 19 Oct 2021 18:52:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Tue, 2021-10-19 at 18:52 -0400, Stephen Frost wrote:\n> > The implementation in the patch set does this, but with finer\n> > grained precision, because the universe of roles is divided into\n> > more than just superuser vs. non-superuser.\n> \n> This last point is particularly important. Non-super users may be\n> able to become superuser and those roles which are able to need to\n> also be protected. Only protecting superuser roles themselves is\n> *not* enough. \n\nI'm trying to suggest an approach that is flexible enough (not limited\nto superusers), but also addresses Noah's complaint that ordinary role\nmembership should not implicitly control event trigger firing.\n\nThe most direct way to address Noah's complaint is to make a new\nexplicit user option BYPASS_EVENT_TRIGGERS, which controls whether\nevent triggers fire or not.\n\nTo create an event trigger, a user would still need to be a member of\npredefined role pg_create_event_trigger, which would still be a highly\nprivileged user that can essentially take over any user without\nBYPASS_EVENT_TRIGGER. 
In practice, the members of\npg_create_event_trigger would be pseudo-superusers, or highly-\nprivileged roles that come along with a C extension that needs event\ntriggers.\n\nDetails:\n * Event triggers created by a superuser would never be bypassed.\n * Superusers would always bypass event triggers unless the event\ntrigger is owned by another superuser.\n * If a role is highly privileged and/or can become superuser, it\nshould have BYPASS_EVENT_TRIGGERS specified so that members of\npg_create_event_trigger can't escalate to that role.\n * Normal users should not have BYPASS_EVENT_TRIGGERS.\n\nAnother benefit of this is that it makes this event trigger change\nindependent of the Role Self-Administration discussion, so it could\npotentially be committed sooner.\n\nA downside is that with my suggestion, event triggers would still be\nfor the highly-privileged roles only. Allowing unprivileged users to\ncreate event triggers that have limited scope might allow some really\ninteresting use cases. There might be some options here, like allowing\nany user to create an event trigger that only affects that user.\n\nRegards,\n\tJeff Davis\n\n(Aside: I'm not really sure where the line is between adding a\npredefined role and adding a user option. 
Are user options just the old\nway, and predefined roles the new way?)\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 10:20:28 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 20, 2021, at 10:20 AM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> I'm trying to suggest an approach that is flexible enough (not limited\n> to superusers), but also addresses Noah's complaint that ordinary role\n> membership should not implicitly control event trigger firing.\n> \n> The most direct way to address Noah's complaint is to make a new\n> explicit user option BYPASS_EVENT_TRIGGERS, which controls whether\n> event triggers fire or not.\n\nI'd like to have a much clearer understanding of Noah's complaint first. There are multiple things to consider: (1) the role which owns the trigger, (2) the role which is performing an action which would cause the trigger to fire, (3) the set of roles role #1 belongs to, (4) the set of roles role #1 has ADMIN privilege on, (5) the set of roles that role #2 belongs to, and (6) the set of roles that role #2 has ADMIN privilege on. Maybe more?\n\nAnd that's before we even get into having roles own other roles, which the event trigger patches *do not depend on*. In the patch set associated with this thread, the event trigger stuff is in patches 0014 and 0015. The changes to CREATEROLE and role ownership are not until patches 0019, 0020, and 0021. (I'm presently writing another set of emails to split this all into four threads/patch sets.) \n\nI'd like to know precisely which combinations of these six things are objectionable, and why. 
There may be a way around the objections without needing to create new user options or new privileged roles.\n\n> Another benefit of this is that it makes this event trigger change\n> independent of the Role Self-Administration discussion, so it could\n> potentially be committed sooner.\n\nI don't think the two are related, though I can understand the confusion. That is, in fact, a small part of why I'm splitting the patches into different email threads.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 10:32:51 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Wed, 2021-10-20 at 10:32 -0700, Mark Dilger wrote:\n> I'd like to have a much clearer understanding of Noah's complaint\n> first. There are multiple things to consider: (1) the role which\n> owns the trigger, (2) the role which is performing an action which\n> would cause the trigger to fire, (3) the set of roles role #1 belongs\n> to, (4) the set of roles role #1 has ADMIN privilege on, (5) the set\n> of roles that role #2 belongs to, and (6) the set of roles that role\n> #2 has ADMIN privilege on. Maybe more?\n> \n> And that's before we even get into having roles own other roles,\n> which the event trigger patches *do not depend on*. In the patch set\n> associated with this thread, the event trigger stuff is in patches\n> 0014 and 0015. The changes to CREATEROLE and role ownership are not\n> until patches 0019, 0020, and 0021. (I'm presently writing another\n> set of emails to split this all into four threads/patch sets.) \n> \n> I'd like to know precisely which combinations of these six things are\n> objectionable, and why. 
There may be a way around the objections\n> without needing to create new user options or new privileged roles.\n\nI can't speak for Noah, but my interpretation is that it would be\nsurprising if GRANT/REVOKE or membership in an ordinary role had\neffects other than \"permission denied\" errors. It might make sense for\nevent trigger firing in all the cases we can think of, but it would\ncertainly be strange if we started accumulating a collection of\nbehaviors that implicitly change when you move in or out of a role.\n\nThat's pretty general, so to answer your question: it seems like a\nproblem to use #3-6 in the calculation about whether to fire an event\ntrigger.\n\nHowever, if we have a concept of role *ownership*, that's something\nnew. It may be less surprising to use that to determine additional\nbehaviors, like whether event triggers fire.\n\nWe can also consider adding some additional language to the CREATE\nEVENT TRIGGER syntax to make it more explicit what the scope is. For\ninstance:\n\n CREATE EVENT TRIGGER name\n ON event\n [ FOR {ALL|OWNED} ROLES ]\n [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ]\n EXECUTE { FUNCTION | PROCEDURE } function_name()\n\nFor a superuser ALL and OWNED would be the same, but regular users\nwould need to specify \"FOR OWNED ROLES\" or they'd get an error.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 11:27:11 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "I have now received multiple requests to split this patchset into multiple parts, including some off-list. 
I'll be posting four new patch sets on four new topics:\n\n\nNew privileged roles which can SET and ALTER SYSTEM SET (v8-0002 through v8-0013)\n\nNon-superuser event trigger owners (v8-0014 and v8-0015)\n\nNon-superuser subscription owners (v8-0016 through v8-0018)\n\nCREATEROLE and role ownership hierarchies (v8-0001 and v8-0019 through v8-0021)\n\n\nI cannot presume everyone will be interested, so I have only cc'd Andrew on the new threads, given his status as a reviewer. I don't mean by this to cut anyone out of the discussion.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 11:39:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, 2021-05-27 at 23:06 -0700, Noah Misch wrote:\n> pg_logical_replication would not be safe to delegate that way:\n> \nhttps://postgr.es/m/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n\nWhat do you mean \"that way\"? 
Do you mean it's not safe to delegate\nsubscription creation to non-superusers at all?\n\n From the thread above, I don't see anything so dangerous that it can't\nbe delegated:\n\n * persistent background workers on subscriber\n - still seems reasonable to delegate to a privileged user\n * arbitrary code execution by the apply worker on subscriber\n - apply worker runs as subscription owner, so doesn't seem\n like a problem?\n * connection info may be visible to non-superusers\n - seems either solvable or not necessarily a problem\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 12:09:08 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Wed, Oct 20, 2021 at 1:20 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> A downside is that with my suggestion, event triggers would still be\n> for the highly-privileged roles only. Allowing unprivileged users to\n> create event triggers that have limited scope might allow some really\n> interesting use cases. There might be some options here, like allowing\n> any user to create an event trigger that only affects that user.\n\nI think that's basically giving up the important part of this idea,\nwhich is to allow meaningful administration without superuser\nprivileges. \"highly-privileged roles only\" sounds like in practice it\nwould amount to the superuser or someone who can become the superuser\n-- and thus probably wouldn't include the \"master tenant\" role in a\nservice provider environment.\n\nI don't really see what the problem is with Tom's proposal[1,2], or\nwhy the role self-administration thread is necessarily a blocker. It's\ntrue that if X creates an event trigger and it fires for Y because X\ncan become Y, then Y might be able to revoke membership in Y from X\nand thus circumvent the event trigger firing. 
But that is a severable\nproblem. We can fail to solve that problem and still be better off\nthan today, because at least with the proposed change a cooperating\ngroup of users (or one whose ability to execute GRANT and REVOKE is\nrestricted by some other means) can benefit from event triggers\nwithout any of them being superuser. If we make this change *and also*\nresolve the role self-administration problem, then it can also work in\ncases where a more privileged user needs to enforce event trigger\nfiring against a less-privileged user.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] http://postgr.es/m/214052.1627331086@sss.pgh.pa.us\n[2] http://postgr.es/m/216038.1627333077@sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 20 Oct 2021 16:23:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\nOn Wed, Oct 20, 2021 at 16:23 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Oct 20, 2021 at 1:20 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > A downside is that with my suggestion, event triggers would still be\n> > for the highly-privileged roles only. Allowing unprivileged users to\n> > create event triggers that have limited scope might allow some really\n> > interesting use cases. There might be some options here, like allowing\n> > any user to create an event trigger that only affects that user.\n>\n> I think that's basically giving up the important part of this idea,\n> which is to allow meaningful administration without superuser\n> privileges. 
\"highly-privileged roles only\" sounds like in practice it\n> would amount to the superuser or someone who can become the superuser\n> -- and thus probably wouldn't include the \"master tenant\" role in a\n> service provider environment.\n\n\nI’m in agreement with Robert on this point.\n\nI don't really see what the problem is with Tom's proposal[1,2], or\n> why the role self-administration thread is necessarily a blocker. It's\n> true that if X creates an event trigger and it fires for Y because X\n> can become Y, then Y might be able to revoke membership in Y from X\n> and thus circumvent the event trigger firing. But that is a severable\n> problem. We can fail to solve that problem and still be better off\n> than today, because at least with the proposed change a cooperating\n> group of users (or one whose ability to execute GRANT and REVOKE is\n> restricted by some other means) can benefit from event triggers\n> without any of them being superuser. If we make this change *and also*\n> resolve the role self-administration problem, then it can also work in\n> cases where a more privileged user needs to enforce event trigger\n> firing against a less-privileged user.\n\n\nI’m not thrilled with the idea of putting this out there without addressing\nthe risk that a role could circumvent it.\n\nI did want to say that the idea put forward by Jeff of being able to\nspecify a set of users which an event trigger should fire for is an\ninteresting and potentially quite useful capability. Perhaps I don’t want a\ngiven event trigger to fire for ALL of the roles which I have admin rights\n(or ownership or whatever) over but rather some subset. Now, perhaps I\ncould create a role for that purpose, but also, maybe I haven’t been given\nthat right for $reasons. Being able to specify the roles for which an\nevent trigger fires would be useful in such cases- and now we are down to\njust working out who is allowed to create event triggers to operate upon\nwhom. 
Admin rights on the role could certainly be one way of addressing\nthat, or we could have that be “ownership”, or it could potentially be an\nexplicitly GRANT’able ability on a role, similar to what “admin” is today.\n\nI have to say that the last of those options strikes me as particularly\nappealing and potentially something which could be extended (what about\nsetting GUCs for a particular role? Or maybe some subset of GUCs? Or\nprivilege to change/reset a password, or other things?).\n\nI don’t think this information would be stored in the same manner as the\nper-object GRANT system that we have today and by avoiding that perhaps we\ncan a more extensible method that hasn’t got us worrying about having\nenough bits too.\n\nThanks,\n\nStephen
", "msg_date": "Wed, 20 Oct 2021 16:36:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Wed, Oct 20, 2021 at 11:27:11AM -0700, Jeff Davis wrote:\n> On Wed, 2021-10-20 at 10:32 -0700, Mark Dilger wrote:\n> > I'd like to have a much clearer understanding of Noah's complaint\n> > first. 
There are multiple things to consider: (1) the role which\n> > owns the trigger, (2) the role which is performing an action which\n> > would cause the trigger to fire, (3) the set of roles role #1 belongs\n> > to, (4) the set of roles role #1 has ADMIN privilege on, (5) the set\n> > of roles that role #2 belongs to, and (6) the set of roles that role\n> > #2 has ADMIN privilege on. Maybe more?\n\n> > I'd like to know precisely which combinations of these six things are\n> > objectionable, and why. There may be a way around the objections\n> > without needing to create new user options or new privileged roles.\n> \n> I can't speak for Noah, but my interpretation is that it would be\n> surprising if GRANT/REVOKE or membership in an ordinary role had\n> effects other than \"permission denied\" errors. It might make sense for\n> event trigger firing in all the cases we can think of, but it would\n> certainly be strange if we started accumulating a collection of\n> behaviors that implicitly change when you move in or out of a role.\n> \n> That's pretty general, so to answer your question: it seems like a\n> problem to use #3-6 in the calculation about whether to fire an event\n> trigger.\n\nExactly. That's the main point. Also, it's currently a best practice for\nonly non-LOGIN roles to have members. The proposed approach invites folks to\nabandon that best practice. I take the two smells as a sign that we'll regret\nadopting the proposal, despite not knowing how it will go seriously wrong.\n\nOn Wed, Oct 20, 2021 at 12:09:08PM -0700, Jeff Davis wrote:\n> On Thu, 2021-05-27 at 23:06 -0700, Noah Misch wrote:\n> > pg_logical_replication would not be safe to delegate that way:\n> > \n> https://postgr.es/m/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n> \n> What do you mean \"that way\"? 
Do you mean it's not safe to delegate\n> subscription creation to non-superusers at all?\n\nI meant \"pg_logical_replication would not be safe to delegate to the tenant of\na database provided as a service.\" It's not safe today, but it can be made\nsafe:\n\n> From the thread above, I don't see anything so dangerous that it can't\n> be delegated:\n> \n> * persistent background workers on subscriber\n> - still seems reasonable to delegate to a privileged user\n\nAgreed, I don't have a problem with pg_logical_replication implying that\nability. If you can create this worker, you can bypass ADMIN OPTION by\nrunning the GRANT/REVOKE inside a subscription. That's probably fine if\ndocumented, or else is_admin_of_role() could prevent it.\n\n\n> * arbitrary code execution by the apply worker on subscriber\n> - apply worker runs as subscription owner, so doesn't seem\n> like a problem?\n\nSounds right. I think Mark Dilger drafted a patch to add ACL checks and a TAP\ntest confirming that the worker does get permission denied. That change has\nno disadvantage, so this problem is on the way to getting solved.\n\n> * connection info may be visible to non-superusers\n> - seems either solvable or not necessarily a problem\n\nYes.\n\nThe other matter from the thread you linked is \"the connection to the\npublisher must enforce the equivalent of dblink_security_check()\". I think\nMark Dilger drafted a patch for that, too.\n\n\n", "msg_date": "Sat, 23 Oct 2021 01:42:35 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Noah Misch (noah@leadboat.com) wrote:\n> On Wed, Oct 20, 2021 at 11:27:11AM -0700, Jeff Davis wrote:\n> > On Wed, 2021-10-20 at 10:32 -0700, Mark Dilger wrote:\n> > > I'd like to have a much clearer understanding of Noah's complaint\n> > > first. 
There are multiple things to consider: (1) the role which\n> > > owns the trigger, (2) the role which is performing an action which\n> > > would cause the trigger to fire, (3) the set of roles role #1 belongs\n> > > to, (4) the set of roles role #1 has ADMIN privilege on, (5) the set\n> > > of roles that role #2 belongs to, and (6) the set of roles that role\n> > > #2 has ADMIN privilege on. Maybe more?\n> \n> > > I'd like to know precisely which combinations of these six things are\n> > > objectionable, and why. There may be a way around the objections\n> > > without needing to create new user options or new privileged roles.\n> > \n> > I can't speak for Noah, but my interpretation is that it would be\n> > surprising if GRANT/REVOKE or membership in an ordinary role had\n> > effects other than \"permission denied\" errors. It might make sense for\n> > event trigger firing in all the cases we can think of, but it would\n> > certainly be strange if we started accumulating a collection of\n> > behaviors that implicitly change when you move in or out of a role.\n> > \n> > That's pretty general, so to answer your question: it seems like a\n> > problem to use #3-6 in the calculation about whether to fire an event\n> > trigger.\n> \n> Exactly. That's the main point. Also, it's currently a best practice for\n> only non-LOGIN roles to have members. The proposed approach invites folks to\n> abandon that best practice. 
I take the two smells as a sign that we'll regret\n> adopting the proposal, despite not knowing how it will go seriously wrong.\n\nThis seems like a pretty good point, which leads me to again think that\nwe should explicitly add a way for an individual who can create event\ntriggers to be able to specify for whom the event trigger should fire,\nand only allow them to specify roles other than their own provided they\nhave been given that authority (either explicitly somehow or implicitly\nbased on some defined access that they have to that other role).\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 12:20:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Oct 25, 2021 at 12:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Exactly. That's the main point. Also, it's currently a best practice for\n> > only non-LOGIN roles to have members. The proposed approach invites folks to\n> > abandon that best practice. I take the two smells as a sign that we'll regret\n> > adopting the proposal, despite not knowing how it will go seriously wrong.\n>\n> This seems like a pretty good point, which leads me to again think that\n> we should explicitly add a way for an individual who can create event\n> triggers to be able to specify for whom the event trigger should fire,\n> and only allow them to specify roles other than their own provided they\n> have been given that authority (either explicitly somehow or implicitly\n> based on some defined access that they have to that other role).\n\nI agree that Noah has a reasonably good point here. I don't think it's\na total slam-dunk but it it's certainly not a stupid argument.\nConceding that point for the purposes of discussion, I don't\nunderstand how this kind of proposal gets us out from under the\nproblem. 
Surely, it can't be the case that user X can cause event\ntrigger E to run as user Y unless X can become Y, because to do so\nwould allow X to usurp Y's privileges, except in the corner case where\nY never does anything that can trigger an event trigger. But if X has\nto be able to become Y in order to force E to be run by Y, then I\nthink we've made no progress in addressing Noah's complaint.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:24:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Oct 25, 2021 at 12:20 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Exactly. That's the main point. Also, it's currently a best practice for\n> > > only non-LOGIN roles to have members. The proposed approach invites folks to\n> > > abandon that best practice. I take the two smells as a sign that we'll regret\n> > > adopting the proposal, despite not knowing how it will go seriously wrong.\n> >\n> > This seems like a pretty good point, which leads me to again think that\n> > we should explicitly add a way for an individual who can create event\n> > triggers to be able to specify for whom the event trigger should fire,\n> > and only allow them to specify roles other than their own provided they\n> > have been given that authority (either explicitly somehow or implicitly\n> > based on some defined access that they have to that other role).\n> \n> I agree that Noah has a reasonably good point here. I don't think it's\n> a total slam-dunk but it it's certainly not a stupid argument.\n\nOk.\n\n> Conceding that point for the purposes of discussion, I don't\n> understand how this kind of proposal gets us out from under the\n> problem. 
Surely, it can't be the case that user X can cause event\n> trigger E to run as user Y unless X can become Y, because to do so\n> would allow X to usurp Y's privileges, except in the corner case where\n> Y never does anything that can trigger an event trigger. But if X has\n> to be able to become Y in order to force E to be run by Y, then I\n> think we've made no progress in addressing Noah's complaint.\n\nX having rights over Y is what would allow X to create an event trigger\nwhich fires when Y does $something, but the act of GRANT'ing Y to X\nwouldn't make it automatically start happening. The latter is what I\nbelieved Noah's concern was around.\n\nThe downside there though is that GRANT'ing of roles to other roles is\nhow we build up sets of roles and you'd certainly wish to be able to\nleverage that when deciding which roles a given event trigger should\nfire for. If we made that work for event triggers then you'd still have\nthe case that *some* GRANT A to B would cause event triggers to suddenly\nstart happening for B without other actions being taken. Still, in that\ncase you could create specific such roles to manage that independently\nof which roles happened to have admin rights over which other roles.\n\nExamples might help here.\n\nCREATE ROLE X;\nCREATE ROLE Y;\nCREATE ROLE Z;\n\nGRANT Y to X;\nGRANT Z to X;\n\nSET ROLE X;\nCREATE EVENT TRIGGER do_stuff();\n\nUnder one approach, that event trigger then fires for X, Y and Z. What\nif you don't actually want it to though? What if some role Q is later\ncreated and GRANT'd to X? 
Then the event trigger would also fire for\nthem.\n\nConsider instead:\n\nCREATE ROLE X;\nCREATE ROLE Y;\nCREATE ROLE Z;\n\nGRANT Y to X;\nGRANT Z to X;\n\nSET ROLE X;\nCREATE EVENT TRIGGER FOR Y do_stuff();\n\nNow, X has explicitly said that they want the event trigger to fire for\nrole Y and if the event trigger fires or not is no longer based on\nmembership in the role creating the trigger but instead depends on being\nthe role which the event trigger was explicitly defined to fire on.\n\nDoes membership in role Y cause the event trigger to fire for that role?\nI'd argue that the answer is probably 'yes', but at least it's no longer\ntied back to membership in X (the owner of the trigger). That Noah\nexplicitly mentioned 'login' roles vs. 'non-login' roles makes me think\nthis is more in line with what the argument was about- the owner of the\ntrigger would almost certainly be a 'login' role. All that said, this\nis definitely a complex area and there's certainly a lot of different\nways we could go.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 14:30:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Oct 25, 2021 at 2:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Does membership in role Y cause the event trigger to fire for that role?\n> I'd argue that the answer is probably 'yes', but at least it's no longer\n> tied back to membership in X (the owner of the trigger). That Noah\n> explicitly mentioned 'login' roles vs. 'non-login' roles makes me think\n> this is more in line with what the argument was about- the owner of the\n> trigger would almost certainly be a 'login' role. All that said, this\n> is definitely a complex area and there's certainly a lot of different\n> ways we could go.\n\nI mean I get all this. 
I am not convinced that it's a big problem,\nbecause it seems a bit hypothetical, but if it is a problem, then\nintroducing some explicit mechanism to control which triggers fire for\nwhich users is a solution. I'm a bit concerned that it's just going to\nmake it complicated to configure your event triggers to no real\nbenefit. Suppose that, as a master tenant, have 10 event triggers and\n100 users and all the users are supposed to run all the event\ntriggers. When I add user #101, if I have to say, yes, I want that\nuser to fire the same 10 event triggers, running a separate SQL\ncommand for each of one, that's kind of annoying. If I can just create\nthe new user and I automatically gain membership in that user and it\ntherefore fires all my event triggers, I get the behavior I wanted\nanyway without having to do any special steps.\n\nBut also, Noah writes: \"Also, it's currently a best practice for only\nnon-LOGIN roles to have members. The proposed approach invites folks\nto abandon that best practice.\"\n\nThe kind of mechanism you're proposing here doesn't seem to make that\nany less likely.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 15:25:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 25, 2021, at 11:30 AM, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Consider instead:\n> \n> CREATE ROLE X;\n> CREATE ROLE Y;\n> CREATE ROLE Z;\n> \n> GRANT Y to X;\n> GRANT Z to X;\n> \n> SET ROLE X;\n> CREATE EVENT TRIGGER FOR Y do_stuff();\n> \n> Now, X has explicitly said that they want the event trigger to fire for\n> role Y and if the event trigger fires or not is no longer based on\n> membership in the role creating the trigger but instead depends on being\n> the role which the event trigger was explicitly defined to fire 
on.\n\nI don't think your proposal quite works, because the set of users you'd like to audit with an event trigger based auditing system may be both large and dynamic:\n\nCREATE ROLE batman;\nCREATE ROLE robin;\n\nSET ROLE batman;\nCREATE ROLE gotham_residents NOLOGIN;\nCREATE ROLE riddler IN ROLE gotham_residents LOGIN;\n-- create millions of other Gotham residents....\nCREATE EVENT TRIGGER FOR gotham_residents audit_criminal_activity();\n\nBatman is not superuser, but he's pretty powerful, and he wants to audit all the criminal activity in Gotham. How should he expect this example to work?\n\nHaving the \"FOR gotham_residents\" clause mean anybody with membership in role gotham_residents is problematic, because it means that being granted into a role both increases and decreases your freedoms, rather than merely giving you more freedoms. If Batman covets privileges that Robin has, but wants not to be subjected to any event triggers that fire for Robin, he both wants into and out of role Robin.\n\nHaving \"FOR gotham_residents\" mean that only actions performed by role \"gotham_residents\" should fire the trigger is useless, since Gotham residents don't log in as that, but as themselves. Batman won't catch anybody this way.\n\nHaving to list each new resident to the trigger is tedious and error-prone. Batman may not be able to pass a compliance audit.\n\nHaving Batman *own* all residents in Gotham city would work, if we can agree on a role ownership system. It has the downside that only a role's (direct or indirect) owner can do the auditing, though. That's more flexible than what we have today, where only superuser can do it, but maybe some people would want to argue for a different solution with even more flexibility? A grantable privilege perhaps? 
But whatever it is, the reasoning about who gets audited and who does not must be clear enough that Batman can pass a compliance audit.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 25 Oct 2021 12:30:31 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Mon, Oct 25, 2021 at 3:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> But also, Noah writes: \"Also, it's currently a best practice for only\n> non-LOGIN roles to have members. The proposed approach invites folks\n> to abandon that best practice.\"\n>\n> The kind of mechanism you're proposing here doesn't seem to make that\n> any less likely.\n\nOK, I have a new idea. Any system for multi-tenancy necessarily\ninvolves keeping track of the association between users and tenants -\ni.e. for any given user, to which tenant do they belong? In Mark's\nsystem, you do that by creating a role, and you make that role the\nowner of all of the other roles that are part of that tenant (so that\nit can drop them) and you also make it a member of all of those other\nroles (so that it can use their privileges).\n\nBut the thing that defines a tenant need not be a role. It can be some\nother kind of object. Suppose we invent a CREATE TENANT command. Every\nuser is associated with exactly one tenant, and can optionally be an\nadministrator for that tenant. If you are the administrator for a\ntenant, you can create event triggers that affect every user\nassociated with that tenant. You also have the privileges of every\nuser associated with that tenant. 
If you have the CREATEROLE\nprivilege, you can create new users who will be associated with the\nsame tenant as you are, and you can drop existing users that are\nassociated with that tenant (but not roles associated with other\ntenants).\n\nThis is effectively the same thing as Mark's proposal, but just using\na new kind of object (TENANT) where Mark used an existing kind of\nobject (ROLE). And it addresses Noah's concern, perhaps, because with\nthe approach the tenant administrator isn't a member of every role,\nbut rather just gets the privileges of all the roles as if they were.\nYou might argue that's a distinction without a difference, but I don't\nthink so: the superuser is in effect a member of every role as things\nstand, and the whole idea of this project is to allow for\nquasi-superusers who can administer a subset of the users in the\nsystem, so something of this kind seems like it has to exist for the\nproposal to achieve its object. But it need not be role membership per\nse, and maybe shouldn't be.\n\nI don't know if this is a better idea than what Mark did, but I think\nit has some appealing properties. One is that you don't need to (and\nindeed can't) have people log in as the lead tenant role directly -\nbecause that concept does not exist. If Google is your customer, then\nin Mark's proposal, you have a high-privilege 'google' account and you\nlog into that when you want to do high-privilege things. But what if\nyou have multiple people who are entitled to administer the Google\ntenant? Either they share access to that single account, or, well, I\ndon't know. A second user could have membership in every login and\nnon-login role for the tenant, but only one user can be the owner of\nall of those roles, so maybe there's no other way to do it. If the\ntenant is a separate concept that sits to one side of the role system,\nyou can just make multiple users administrators of the google tenant,\nand that should be fine.
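\n\nTo make that concrete, here is a sketch with entirely invented syntax\n(none of this exists today; the command names and role options are\nhypothetical):\n\nCREATE TENANT google;\nCREATE ROLE alice LOGIN TENANT google ADMIN; -- tenant administrator\nCREATE ROLE bob LOGIN TENANT google;         -- ordinary tenant user\n\nHere alice could create and drop roles within tenant google, exercise\ntheir privileges, and have her event triggers fire for every google\nuser, but she could not touch roles belonging to other tenants.\n\n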
And that way, each one has their own password\nand can be separately disabled if that person leaves the company or\ndemoted to a non-administrator if they get moved to a different job.\n\nThere might well be problems with this idea, either on a grand scale\nthat make the whole thing a non-starter, or on a smaller scale that\nmean that the definition of what it means to be a tenant administrator\nneeds fine-tuning ... but I don't know what they are, so I thought I'd\nthrow this out there and see what people think.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Oct 2021 16:10:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 27, 2021, at 1:10 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> This is effectively the same thing as Mark's proposal, but just using\n> a new kind of object (TENANT) where Mark used an existing kind of\n> object (ROLE). And it addresses Noah's concern, perhaps, because with\n> the approach the tenant administrator isn't a member of every role,\n> but rather just gets the privileges of all the roles as if they were.\n> You might argue that's a distinction without a difference, but I don't\n> think so: the superuser is in effect a member of every role as things\n> stand, and the whole idea of this project is to allow for\n> quasi-superusers who can administer a subset of the users in the\n> system, so something of this kind seems like it has to exist for the\n> proposal to achieve its object.
But it need not be role membership per\n> se, and maybe shouldn't be.\n\nIt feels to me that the traditional concept of users and groups could map, one-to-one, onto users and roles, but we've mapped both users and groups, many-to-one, onto roles, leaving no distinct concept of groups, and now we're proposing adding a concept called \"tenant\" that means something like \"group\". I find that simultaneously helpful and pretty confusing.\n\nCompare that to the help and confusion created by my proposal. The idea that roles can own roles, just as roles can own tables, indexes, etc., doesn't seem confusing to me, but perhaps it does to others. If you accept that roles can own roles, then the idea that roles can drop roles that they own, or change characteristics of roles that they own, is entirely analogous to roles being able to drop or alter any other sort of object that they own. To me, that is perfectly consistent and unsurprising, but again, perhaps not to others.\n\nNoah's concern, as I understood it, was not about roles owning roles, but about role membership being what controls if an event trigger fires. If anything, that concern stems from the lack of role ownership, not the existence of it, because I wrote the event trigger patch set to not depend on the role ownership patch set. Once you have a concept of role ownership, it is perfectly natural that the trigger could fire based on whether the trigger owner is the owner of (or the same as) the role performing the action. 
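\n\nFor example, under the ownership proposal it might look like this (a\nhypothetical sketch of the semantics, not anything committed; the\nWHEN/FOR details are deliberately omitted):\n\nSET ROLE batman;           -- not a superuser, but may create roles\nCREATE ROLE riddler LOGIN; -- riddler is now owned by batman\nCREATE EVENT TRIGGER audit_ddl ON ddl_command_start\n    EXECUTE FUNCTION audit_criminal_activity();\n\nThe trigger fires when riddler performs DDL because batman owns\nriddler, without batman ever being a member of riddler.\n\n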
That completely sidesteps the concern about the event trigger role needing to be a member of any log-in role, because you no longer need the event trigger owner to be a member of the log-in role.\n\nThere are semantic details to be worked out with role ownership, such as whether a role owner automatically has the privileges of roles it owns, whether such privilege, if any, should behave à la INHERIT or NOINHERIT, whether superusers should own roles they create or whether there should be a special rule that superuser created roles should belong to the bootstrap superuser, etc. The patch set has taken a position on each of these, because it cannot be implemented without some choice being made, but many of these decisions could be changed if they are the source of confusion. If, on the other hand, having parallel concepts \"role A owns role B\" and \"role C is a member of role D\" is too confusing for people to ever keep straight, then perhaps we need something like \"tenant\" to help lessen the confusion.\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 Oct 2021 11:02:14 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Wed, 2021-10-27 at 16:10 -0400, Robert Haas wrote:\n> But the thing that defines a tenant need not be a role. It can be\n> some\n> other kind of object. Suppose we invent a CREATE TENANT command. \n\nWould it be a recursive concept? 
Could a tenant create a sub-tenant?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 29 Oct 2021 15:56:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Thu, 2021-10-28 at 11:02 -0700, Mark Dilger wrote:\n> It feels to me that the traditional concept of users and groups could\n> map, one-to-one, onto users and roles, but we've mapped both users\n> and groups, many-to-one, onto roles, leaving no distinct concept of\n> groups, and now we're proposing adding a concept called \"tenant\" that\n> means something like \"group\". I find that simultaneously helpful and\n> pretty confusing.\n\nThat's a good point. There are a lot of concepts involved; adding one\nmore could certainly cause confusion.\n\nBut I don't think the concept of role ownership has zero potential\nconfusion, either. For instance, I could certainly imagine a user A\ncreating a role B (and therefore owning it), and then doing \"GRANT A TO\nB\". Is there a reason to do that, or is the user confused about what\nmembership versus ownership mean?\n\n> Noah's concern, as I understood it, was not about roles owning roles,\n> but about role membership being what controls if an event trigger\n> fires. If anything, that concern stems from the lack of role\n> ownership, not the existence of it, because I wrote the event trigger\n> patch set to not depend on the role ownership patch set.\n\nYour patch[0] causes role membership to control whether an event\ntrigger fires.
If it was solely based on role *ownership* and had\nnothing to do with role *membership*, that does seem better to me.\n\n[0] \nhttps://postgr.es/m/914FF898-5AC4-4E02-8A05-3876087007FB@enterprisedb.com\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 29 Oct 2021 16:46:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "On Fri, Oct 29, 2021 at 6:56 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Wed, 2021-10-27 at 16:10 -0400, Robert Haas wrote:\n> > But the thing that defines a tenant need not be a role. It can be\n> > some\n> > other kind of object. Suppose we invent a CREATE TENANT command.\n>\n> Would it be a recursive concept? Could a tenant create a sub-tenant?\n\nI could imagine a system like this with or without that concept.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Oct 2021 20:34:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Oct 29, 2021, at 4:46 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> But I don't think the concept of role ownership has zero potential\n> confusion, either. For instance, I could certainly imagine a user A\n> creating a role B (and therefore owning it), and then doing \"GRANT A TO\n> B\". Is there a reason to do that, or is the user confused about what\n> membership versus ownership mean?\n\nIn general, I think that would be the result of the user being confused. 
But it is hard to say that definitively, because perhaps users A and C want to create a single user B with the union of both their roles, and have agreed to perform:\n\nuser_a% CREATE ROLE B;\nuser_a% GRANT A TO B;\nuser_c% GRANT C TO B;\n\nThe easiest way of thinking about role ownership is that a role's owner is superuser in so far as that role is concerned. It can drop them, modify them, take their objects away from them, assign other objects to them, etc. Anything a superuser could do to impoverish them, their owner can do to impoverish them. The difference is that an actual superuser can enrich them with anything the superuser likes, whereas their owner can only enrich them with objects and privileges that the owner itself has rights to assign.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 31 Oct 2021 17:24:05 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 29, 2021, at 4:46 PM, Jeff Davis <pgsql@j-davis.com> wrote:\n> > But I don't think the concept of role ownership has zero potential\n> > confusion, either. For instance, I could certainly imagine a user A\n> > creating a role B (and therefore owning it), and then doing \"GRANT A TO\n> > B\". Is there a reason to do that, or is the user confused about what\n> > membership versus ownership mean?\n> \n> In general, I think that would be the result of the user being confused. 
But it is hard to say that definitively, because perhaps users A and C want to create a single user B with the union of both their roles, and have agreed to perform:\n> \n> user_a% CREATE ROLE B;\n> user_a% GRANT A TO B;\n> user_c% GRANT C TO B;\n> \n> The easiest way of thinking about role ownership is that a role's owner is superuser in so far as that role is concerned. It can drop them, modify them, take their objects away from them, assign other objects to them, etc. Anything a superuser could do to impoverish them, their owner can do to impoverish them. The difference is that an actual superuser can enrich them with anything the superuser likes, whereas their owner can only enrich them with objects and privileges that the owner itself has rights to assign.\n\nI can generally get behind the idea that a user who has been allowed to\ncreate other roles should be able to do various other things with that\nrole, but should also be limited by what rights they themselves have\n(unlike how CREATEROLE works today).\n\nThat said, I have a hard time seeing why we're drawing this distinction\nof 'ownership' as being ultimately different from simple 'admin' rights\non a role. In other words, beyond the ability to actually create/drop\nroles, having 'admin' rights on a role already conveys just about\neverything 'ownership' would. The things that are getting in the way\nthere are:\n\n - Ability to actually create/alter/drop roles, this needs to be\n addressed somehow but doesn't necessarily imply a need for\n 'ownership' as a concept.\n\n - Restriction of a role from being able to implicitly have 'admin'\n rights on itself, as I started a discussion about elsewhere.\n\n - Some system for deciding who event triggers should fire for. 
I don't\n think this should really be tied into the question of who has admin\n rights on whom except to the extent that maybe you can only cause\n event triggers to fire for roles you've got admin rights on (or maybe\n membership in).\n\nOne thing that comes to mind is that event triggers aren't the only\nthing out there and I have to wonder if we should be thinking about\nother things. As a thought exercise- how is an event trigger really\ndifferent from a table-level trigger? Anyone who has the ability to\ncreate objects is able to create tables, create functions, create\noperators, and a user logging in and running SQL can certainly end up\nrunning those things with their privileges. We've generally recognized\nthat that's not great and there's been work to get it so that the\n'public' schema that everyone has in their search_path by default won't\nbe world-writable but that isn't exactly a cure-all for the general\nproblem.\n\nOne of the interesting bits is that there's two sides to this. On the\none hand, as a user, maybe I don't want to run functions of people who I\ndon't trust. As an admin/superuser/landlord, maybe I want to require\neveryone who I have authority over to run these functions/event\ntriggers. I'm not sure that we can find a solution to everything with\nthis but figure I'd share these thoughts.\n\nLast thought I'll share is that I do believe we're going to want to\nprovide flexibility when it comes to defining who event triggers run\nfor, as a given admin may wish for that set to be different from the set\nof roles that they ultimately have control over. 
I dislike tying these\ntwo things together at such a core level and therefore continue to feel\nthat CREATE EVENT TRIGGER should be extended in some fashion to allow\nindividuals who can create them to specify who they are to run for.\nOpen to different ideas as to how a user could express that, but it\nfeels to me like that should be a core part of the definition of a\nuser-defined event trigger (ie: could be \"FOR ALL ROLES I OWN\" or\nwhatever, and maybe that's the default, but having that be the only\noption isn't appealing).\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 15:44:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 25, 2021, at 11:30 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> > Consider instead:\n> > \n> > CREATE ROLE X;\n> > CREATE ROLE Y;\n> > CREATE ROLE Z;\n> > \n> > GRANT Y to X;\n> > GRANT Z to X;\n> > \n> > SET ROLE X;\n> > CREATE EVENT TRIGGER FOR Y do_stuff();\n> > \n> > Now, X has explicitly said that they want the event trigger to fire for\n> > role Y and if the event trigger fires or not is no longer based on\n> > membership in the role creating the trigger but instead depends on being\n> > the role which the event trigger was explicitly defined to fire on.\n> \n> I don't think your proposal quite works, because the set of users you'd like to audit with an event trigger based auditing system may be both large and dynamic:\n> \n> CREATE ROLE batman;\n> CREATE ROLE robin;\n> \n> SET ROLE batman;\n> CREATE ROLE gotham_residents NOLOGIN;\n> CREATE ROLE riddler IN ROLE gotham_residents LOGIN;\n> -- create millions of other Gotham residents....\n> CREATE EVENT TRIGGER FOR gotham_residents audit_criminal_activity();\n> \n> Batman is not superuser, but he's pretty powerful, 
and he wants to audit all the criminal activity in Gotham. How should he expect this example to work?\n> \n> Having the \"FOR gotham_residents\" clause mean anybody with membership in role gotham_residents is problematic, because it means that being granted into a role both increases and decreases your freedoms, rather than merely giving you more freedoms. If Batman covets privileges that Robin has, but wants not to be subjected to any event triggers that fire for Robin, he both wants into and out of role Robin.\n\nThe privileges afforded to 'robin' could be GRANT'd to another role\ncreated for that purpose which is then GRANT'd to 'batman' though.\nIndeed, that role could be used as the role which GRANT's 'robin' those\nrights in the first place too. This kind of permission management is\nlargely the point of the role-based system we have.\n\n> Having \"FOR gotham_residents\" mean that only actions performed by role \"gotham_residents\" should fire the trigger is useless, since Gotham residents don't log in as that, but as themselves. Batman won't catch anybody this way.\n\nNaturally. That doesn't mean that there couldn't be some other role\nwhich all of those roles are made a member of though. Either way,\nthere's a big list of \"roles this event trigger runs for\" and that has\nto be managed. That it happens to be \"roles owned by batman\", if we\nwent with your suggested approach, instead of other role membership\ndoesn't really 'fix' that because there'll be other roles in the system\nwhich 'batman' doesn't own. One nice thing of using roles for this is\nthat you end up being able to use the same role multiple ways- consider\nthis: I want to audit all roles who login to database mydb. 
Ah-hah, now\nI can say:\n\nCREATE DATABASE mydb;\nCREATE EVENT TRIGGER FOR gotham_residents audit_stuff();\nREVOKE CONNECT ON DATABASE mydb FROM PUBLIC;\nGRANT CONNECT ON DATABASE mydb TO gotham_residents;\n\nNow the two are connected- if you can connect to that database, then\nyou're going to get audited, and if you manage access to the 'mydb'\ndatabase using membership in that role then there's no way for a role to\nbe able to connect to that database without being audited (except for a\ntrue superuser, but that's always going to be an exception).\n\n> Having to list each new resident to the trigger is tedious and error-prone. Batman may not be able to pass a compliance audit.\n\nAgreed. Also wouldn't be great since eventually the role list might\nhave to get TOAST'd and then you're doing an extra lookup to pull back\nthe list, yuck.\n\n> Having Batman *own* all residents in Gotham city would work, if we can agree on a role ownership system. It has the downside that only a role's (direct or indirect) owner can do the auditing, though. That's more flexible than what we have today, where only superuser can do it, but maybe some people would want to argue for a different solution with even more flexibility? A grantable privilege perhaps? 
But whatever it is, the reasoning about who gets audited and who does not must be clear enough that Batman can pass a compliance audit.\n\nWhat about roles which Batman owns but which he *doesn't* want the event\ntrigger to fire for?\n\nNote that event triggers are not strictly limited to the auditing case.\nViewing them through that lens masks other quite common use-cases which\nare also important to consider (like preventing many users, but not all,\nfrom being able to DROP objects as a clear example).\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 16:13:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Nov 1, 2021, at 12:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I can generally get behind the idea that a user who has been allowed to\n> create other roles should be able to do various other things with that\n> role, but should also be limited by what rights they themselves have\n> (unlike how CREATEROLE works today).\n\nI intend to rearrange the role ownership patch set to have the 0004-Restrict-power-granted-via-CREATEROLE.patch come before, and be independent of, the patches that introduce role ownership. That would put the less controversial patch first, and might get committed, since it's what we all agree on.\n\n> That said, I have a hard time seeing why we're drawing this distinction\n> of 'ownership' as being ultimately different from simple 'admin' rights\n> on a role. In other words, beyond the ability to actually create/drop\n> roles, having 'admin' rights on a role already conveys just about\n> everything 'ownership' would.
The things that are getting in the way\n> there are:\n> \n> - Ability to actually create/alter/drop roles, this needs to be\n> addressed somehow but doesn't necessarily imply a need for\n> 'ownership' as a concept.\n> \n> - Restriction of a role from being able to implicitly have 'admin'\n> rights on itself, as I started a discussion about elsewhere.\n> \n> - Some system for deciding who event triggers should fire for. I don't\n> think this should really be tied into the question of who has admin\n> rights on whom except to the extent that maybe you can only cause\n> event triggers to fire for roles you've got admin rights on (or maybe\n> membership in).\n\nYou and I are not that far apart on this issue. The reason I wanted to use \"ownership\" rather than ADMIN is that the spec has a concept of ADMIN and I don't know that we can fix everything we want to fix and still be within compliance with the spec.\n\n> One thing that comes to mind is that event triggers aren't the only\n> thing out there and I have to wonder if we should be thinking about\n> other things. As a thought exercise- how is an event trigger really\n> different from a table-level trigger? Anyone who has the ability to\n> create objects is able to create tables, create functions, create\n> operators, and a user logging in and running SQL can certainly end up\n> running those things with their privileges.\n\nThe difference in my mind is that table triggers owned by non-superusers have been around for a long time and are in heavy use, so changing how that behaves is a huge backwards compatibility break. Event triggers owned by non-superusers are only a fluke, not an intentional feature, and only occur when a superuser creates an event trigger and later has superuser privileges revoked. 
We can expect that far fewer users are really depending on that to work compared with table triggers.\n\nIn a green field, I would not create table triggers to work as they do.\n\n> We've generally recognized\n> that that's not great and there's been work to get it so that the\n> 'public' schema that everyone has in their search_path by default won't\n> be world-writable but that isn't exactly a cure-all for the general\n> problem.\n\nI agree.\n\n> One of the interesting bits is that there's two sides to this. On the\n> one hand, as a user, maybe I don't want to run functions of people who I\n> don't trust. As an admin/superuser/landlord, maybe I want to require\n> everyone who I have authority over to run these functions/event\n> triggers. I'm not sure that we can find a solution to everything with\n> this but figure I'd share these thoughts.\n\nIf roles were not cluster-wide, I might not have such a problem with leaving things mostly as they are. But there is something really objectionable to having two separate databases in a cluster intended for two separate purposes and with two separate sets of roles, and the set of roles in one database can mess with the roles intended for the other database. I think some kind of partitioning is needed, and I saw role ownership as the cleanest solution to it. I share your intuitions that perhaps the WITH ADMIN OPTION stuff could be used instead, but I don't see quite how.\n\n> Last thought I'll share is that I do believe we're going to want to\n> provide flexibility when it comes to defining who event triggers run\n> for, as a given admin may wish for that set to be different from the set\n> of roles that they ultimately have control over. I dislike tying these\n> two things together at such a core level and therefore continue to feel\n> that CREATE EVENT TRIGGER should be extended in some fashion to allow\n> individuals who can create them to specify who they are to run for.\n\nWithin reason, sure. 
It is fine by me if we support CREATE EVENT TRIGGER...AUTHORIZATION... in order to accomplish that. But the role running that command still needs to be limited to just a (proper or otherwise) subset of their own privileges.\n\nI thought about this some when originally writing the event trigger patch. The author of the event trigger is free to add a preamble to the trigger that exits early if the user, time of day, phase of the moon, etc., are inappropriate per the reasoning of the trigger author. We only need the system to prevent the event trigger from casting too wide a net. The event trigger author can limit the scope of the net further if desired.\n\n> Open to different ideas as to how a user could express that, but it\n> feels to me like that should be a core part of the definition of a\n> user-defined event trigger (ie: could be \"FOR ALL ROLES I OWN\" or\n> whatever, and maybe that's the default, but having that be the only\n> option isn't appealing).\n\nI am not strongly against adding syntactic support for FOR ALL ROLES vs. FOR role, role, ..., so long as that syntax cannot expand the net. It does seem a bit arbitrary to me, though, since you could also say FOR HOURS OF DAY 11PM through 3AM, and once you open the door to supporting all that in the syntax, and tracking it in the catalogs, you've opened a can of worms.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 13:14:16 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Nov 1, 2021, at 1:13 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Having Batman *own* all residents in Gotham city would work, if we can agree on a role ownership system. 
It has the downside that only a role's (direct or indirect) owner can do the auditing, though. That's more flexible than what we have today, where only superuser can do it, but maybe some people would want to argue for a different solution with even more flexibility? A grantable privilege perhaps? But whatever it is, the reasoning about who gets audited and who does not must be clear enough that Batman can pass a compliance audit.\n> \n> What about roles which Batman owns but which he *doesn't* want the event\n> trigger to fire for?\n\nI think Batman just has the event trigger exit early for that. There is nothing we can hardcode for filtering users into and out of the trigger that will be as flexible as the logic that Batman can implement in the trigger itself. We only need to worry about Batman overstepping his authority. It's not our job to filter further than that.\n\n> Note that event triggers are not strictly limited to the auditing case.\n> Viewing them through that lens masks other quite common use-cases which\n> are also important to consider (like preventing many users, but not all,\n> from being able to DROP objects as a clear example).\n\nNothing in my proposal limits what superusers can do with event triggers they create. The issue under discussion is entirely to do with what non-superusers are allowed to do with event triggers. I see no reason why some ordinary role \"joe\" should be allowed to thwart DROP commands issued on a table that \"joe\" doesn't own by roles that \"joe\" doesn't own. 
Maybe \"own\" here should be \"have ADMIN on\", but it has to be something.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 13:23:23 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Nov 1, 2021, at 12:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > I can generally get behind the idea that a user who has been allowed to\n> > create other roles should be able to do various other things with that\n> > role, but should also be limited by what rights they themselves have\n> > (unlike how CREATEROLE works today).\n> \n> I intend to rearrange the role ownership patch set to have the 0004-Restrict-power-granted-via-CREATEROLE.patch come before, and be independent of, the patches that introduce role ownership. That would put the less controversial patch first, and might get the parts we can all agree on committed.\n\nI've not directly looked at that patch, but I like it based on the name,\nif we think we can actually get folks to agree to it, as it's quite a\nchange from the current situation. Previously I've felt that we\nwouldn't have support for breaking backwards compatibility in such a\nmanner but perhaps I'm wrong on that. There's also something to be\nsaid, in my view anyway, of having a predefined role be used for what we\nwant CREATEROLE to be rather than changing the existing CREATEROLE role\nattribute. Reason being that such a predefined role could then be\nGRANT'd to some other role along with whatever other generally-relevant\nprivileges there are and then that GRANT'd to whomever should have those\nrights. 
That's not really possible with the current CREATEROLE role\nattribute.\n\n> > That said, I have a hard time seeing why we're drawing this distinction\n> > of 'ownership' as being ultimately different from simple 'admin' rights\n> > on a role. In other words, beyond the ability to actually create/drop\n> > roles, having 'admin' rights on a role already conveys just about\n> > everything 'ownership' would. The things that are getting in the way\n> > there are:\n> > \n> > - Ability to actually create/alter/drop roles, this needs to be\n> > addressed somehow but doesn't necessarily imply a need for\n> > 'ownership' as a concept.\n> > \n> > - Restriction of a role from being able to implicitly have 'admin'\n> > rights on itself, as I started a discussion about elsewhere.\n> > \n> > - Some system for deciding who event triggers should fire for. I don't\n> > think this should really be tied into the question of who has admin\n> > rights on whom except to the extent that maybe you can only cause\n> > event triggers to fire for roles you've got admin rights on (or maybe\n> > membership in).\n> \n> You and I are not that far apart on this issue. The reason I wanted to use \"ownership\" rather than ADMIN is that the spec has a concept of ADMIN and I don't know that we can fix everything we want to fix and still be within compliance with the spec.\n\nThere's no concept in the spec of event triggers, I don't believe\nanyway, so I'm not really buying this particular argument. Seems like\nwe'd be more likely to run afoul of some future spec by creating a\nconcept of role ownership and creating a definition around what that\nmeans than using something existing in the spec as controlling what some\nother not-in-spec thing does.\n\n> > One thing that comes to mind is that event triggers aren't the only\n> > thing out there and I have to wonder if we should be thinking about\n> > other things. 
As a thought exercise- how is an event trigger really\n> > different from a table-level trigger? Anyone who has the ability to\n> > create objects is able to create tables, create functions, create\n> > operators, and a user logging in and running SQL can certainly end up\n> > running those things with their privileges.\n> \n> The difference in my mind is that table triggers owned by non-superusers have been around for a long time and are in heavy use, so changing how that behaves is a huge backwards compatibility break. Event triggers owned by non-superusers are only a fluke, not an intentional feature, and only occur when a superuser creates an event trigger and later has superuser privileges revoked. We can expect that far fewer users are really depending on that to work compared with table triggers.\n> \n> In a green field, I would not create table triggers to work as they do.\n\nI don't think we're entirely beholden to having table-level triggers\nwork the way they do today. I agree that we can't simply stop having\nthem fire for some users while letting things continue to happen on the\ntable but throwing an error and rolling back a transaction with an error\nsaying \"you were about to run this trigger which runs this function with\nyour privileges and you don't trust the person who wrote it\" seems\nentirely within reason, were we to have such a concept.\n\n> > We've generally recognized\n> > that that's not great and there's been work to get it so that the\n> > 'public' schema that everyone has in their search_path by default won't\n> > be world-writable but that isn't exactly a cure-all for the general\n> > problem.\n> \n> I agree.\n> \n> > One of the interesting bits is that there's two sides to this. On the\n> > one hand, as a user, maybe I don't want to run functions of people who I\n> > don't trust. As an admin/superuser/landlord, maybe I want to require\n> > everyone who I have authority over to run these functions/event\n> > triggers. 
I'm not sure that we can find a solution to everything with\n> > this but figure I'd share these thoughts.\n> \n> If roles were not cluster-wide, I might not have such a problem with leaving things mostly as they are. But there is something really objectionable to having two separate databases in a cluster intended for two separate purposes and with two separate sets of roles, and the set of roles in one database can mess with the roles intended for the other database. I think some kind of partitioning is needed, and I saw role ownership as the cleanest solution to it. I share your intuitions that perhaps the WITH ADMIN OPTION stuff could be used instead, but I don't see quite how.\n\nI agree that roles existing cluster-level is an issue in some instances\nthough I'm not quite sure what the concern here is (how could a role in\ndatabase A mess with roles in database B unless the first role had some\nkind of access on those roles, in which case, what's the issue..?).\n\nAnother thread/patch under discussion is around role membership being\nmade to be able to be per-database, which could be pretty interesting,\nthough I don't think it directly helps with what you're suggesting\nabove, unfortunately.\n\n> > Last thought I'll share is that I do believe we're going to want to\n> > provide flexibility when it comes to defining who event triggers run\n> > for, as a given admin may wish for that set to be different from the set\n> > of roles that they ultimately have control over. I dislike tying these\n> > two things together at such a core level and therefore continue to feel\n> > that CREATE EVENT TRIGGER should be extended in some fashion to allow\n> > individuals who can create them to specify who they are to run for.\n> \n> Within reason, sure. It is fine by me if we support CREATE EVENT TRIGGER...AUTHORIZATION... in order to accomplish that. 
But the role running that command still needs to be limited to just a (proper or otherwise) subset of their own privileges.\n> \n> I thought about this some when originally writing the event trigger patch. The author of the event trigger is free to add a preamble to the trigger that exits early if the user, time of day, phase of the moon, etc., are inappropriate per the reasoning of the trigger author. We only need the system to prevent the event trigger from casting too wide a net. The event trigger author can limit the scope of the net further if desired.\n\nI don't know that all such event triggers will necessarily be able to be\nmodified by everyone who will want to use them in the way you're\nsuggesting. Consider that there's things which require the event\ntrigger to be a C function as a simple example.\n\n> > Open to different ideas as to how a user could express that, but it\n> > feels to me like that should be a core part of the definition of a\n> > user-defined event trigger (ie: could be \"FOR ALL ROLES I OWN\" or\n> > whatever, and maybe that's the default, but having that be the only\n> > option isn't appealing).\n> \n> I am not strongly against adding syntactic support for FOR ALL ROLES vs. FOR role, role, ..., so long as that syntax cannot expand the net. It does seem a bit arbitrary to me, though, since you could also say FOR HOURS OF DAY 11PM through 3AM, and once you open the door to supporting all that in the syntax, and tracking it in the catalogs, you've opened a can of worms.\n\nI disagree that it's a \"can of worms\" that one would be opening. Sure,\nfolks can ask for all kinds of things and that's true today, but\nultimately we're the arbitrators of what is a sensible and common enough\nuse-case and what's not. 
We seem to be in pretty clear agreement that\nit's a sensible and reasonably common use-case for an event trigger\ndefiner to wish for it to only be run for some subset of individuals and\nthat subset might not always be the exact subset of individuals that a\ngiven role has 'ownership' or 'admin' rights over. Your approach puts\nthe onus of limiting that on the trigger author, who might not even be\ninvolved if it's some function that's provided from an extension and\nwritten in C and distributed in a packaged form from PGDG. There's also\nno way to tie together privileges between who is allowed to do some\naction and the individuals who the event trigger fires for, which seems\nunfortuante to me.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 17:00:31 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Nov 1, 2021, at 1:13 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> >> Having Batman *own* all residents in Gotham city would work, if we can agree on a role ownership system. It has the downside that only a role's (direct or indirect) owner can do the auditing, though. That's more flexible than what we have today, where only superuser can do it, but maybe some people would want to argue for a different solution with even more flexibility? A grantable privilege perhaps? But whatever it is, the reasoning about who gets audited and who does not must be clear enough that Batman can pass a compliance audit.\n> > \n> > What about roles which Batman owns but which he *doesn't* want the event\n> > trigger to fire for?\n> \n> I think Batman just has the event trigger exit early for that. 
There is nothing we can hardcode for filtering users into and out of the trigger that will be as flexible as the logic that Batman can implement in the trigger itself. We only need to worry about Batman overstepping his authority. It's not our job to filter further than that.\n\nAs noted in my other email you're likely currently reading, this\npresumes that Batman is the author of the trigger and is able to make\nsuch changes. I'm also not thrilled with the presumption that, even if\nBatman is the author and maintainer, Batman would then have to\nwrite in such exclusions for what strikes me as a pretty commonly wished\nfor use-case.\n\n> > Note that event triggers are not strictly limited to the auditing case.\n> > Viewing them through that lens masks other quite common use-cases which\n> > are also important to consider (like preventing many users, but not all,\n> > from being able to DROP objects as a clear example).\n> \n> Nothing in my proposal limits what superusers can do with event triggers they create. The issue under discussion is entirely to do with what non-superusers are allowed to do with event triggers. I see no reason why some ordinary role \"joe\" should be allowed to thwart DROP commands issued on a table that \"joe\" doesn't own by roles that \"joe\" doesn't own. Maybe \"own\" here should be \"have ADMIN on\", but it has to be something.\n\nI agree that we're talking about non-superuser event triggers. I wasn't\nsuggesting that a non-superuser role 'joe' be able to create event\ntriggers that impact roles that 'joe' doesn't have rights of some kind\nover. 
I'm not quite following how your response here is addressing the\npoint that I brought up in what was quoted above it.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 17:05:22 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Nov 1, 2021, at 2:00 PM, Stephen Frost <sfrost@snowman.net> wrote:\n\n> I've not directly looked at that patch, but I like it based on the name,\n> if we think we can actually get folks to agree to is as it's quite a\n> change from the current situation. Previously I've felt that we\n> wouldn't have support for breaking backwards compatibility in such a\n> manner but perhaps I'm wrong on that. \n\nI am neutral on this. I prefer not to break backward compatibility, but I also prefer to fix broken features rather than leave them as traps for the unwary. The CREATEROLE attribute exists and is defined in a way that is broadly viewed as a misfeature. Fixing it has long term benefits, but short term compatibility concerns.\n\n> There's also something to be\n> said, in my view anyway, of having a predefined role be used for what we\n> want CREATEROLE to be rather than changing the existing CREATEROLE role\n> attribute.\n\nI don't see an additional benefit beyond preserving compatibility with how CREATEROLE has historically worked.\n\n> Reason being that such a predefined role could then be\n> GRANT'd to some other role along with whatever other generally-relevant\n> privileges there are and then that GRANT'd to whomever should have those\n> rights. That's not really possible with the current CREATEROLE role\n> attribute.\n\nI'm not seeing that. If you create a role \"role_admin\" and give it CREATEROLE and other stuff, maybe CREATEDB, pg_read_all_data, and so forth, then you grant \"stephen\" membership in \"role_admin\", doesn't that work? 
Why would \"role_admin\" being a member of some new role, say \"pg_can_create_roles\", work differently than \"role_admin\" having the CREATEROLE attribute?\n\n>>> That said, I have a hard time seeing why we're drawing this distinction\n>>> of 'ownership' as being ultimately different from simple 'admin' rights\n>>> on a role. In other words, beyond the ability to actually create/drop\n>>> roles, having 'admin' rights on a role already conveys just about\n>>> everything 'ownership' would. The things that are getting in the way\n>>> there are:\n>>> \n>>> - Ability to actually create/alter/drop roles, this needs to be\n>>> addressed somehow but doesn't necessarily imply a need for\n>>> 'ownership' as a concept.\n>>> \n>>> - Restriction of a role from being able to implicitly have 'admin'\n>>> rights on itself, as I started a discussion about elsewhere.\n>>> \n>>> - Some system for deciding who event triggers should fire for. I don't\n>>> think this should really be tied into the question of who has admin\n>>> rights on whom except to the extent that maybe you can only cause\n>>> event triggers to fire for roles you've got admin rights on (or maybe\n>>> membership in).\n>> \n>> You and I are not that far apart on this issue. The reason I wanted to use \"ownership\" rather than ADMIN is that the spec has a concept of ADMIN and I don't know that we can fix everything we want to fix and still be within compliance with the spec.\n> \n> There's no concept in the spec of event triggers, I don't believe\n> anyway, so I'm not really buying this particular argument. Seems like\n> we'd be more likely to run afoul of some future spec by creating a\n> concept of role ownership and creating a definition around what that\n> means than using something existing in the spec as controlling what some\n> other not-in-spec thing does.\n\nThe WITH ADMIN OPTION feature has a really well defined meaning. If you have ADMIN on a role, you can grant and revoke that role to/from other roles. 
That's it. If we start tying a bunch of other stuff to that, we're breaking reasonable expectations about how WITH ADMIN OPTION works, and since the spec defines how that works, we're then in violation of the spec.\n\nCREATEROLE, on the other hand, has no defined meaning in the spec. It's a postgres invention. So if we change what it means, we're not breaking compatibility with the spec, only backward compatibility with older versions of postgres vis-a-vis the CREATEROLE misfeature that most people presumably don't use. I find that far preferable to breaking spec compliance. It is strange to me that you view changing how WITH ADMIN OPTION functions as being motivated by spec compliance, since I see it as going in the opposite direction.\n\nAs you say above, we'd have to tie the ability to create, alter, and drop roles to the ADMIN option. That already sounds like a non-starter to me. We'd further want to tie other stuff, like event triggers, to the ADMIN option. I don't see how this furthers spec compliance.\n\nTying this stuff to CREATEROLE seems perfectly fair. Nobody coming from another database vendor to postgres should have any spec-compatibility-based expectations about how CREATEROLE works.\n\n>>> One thing that comes to mind is that event triggers aren't the only\n>>> thing out there and I have to wonder if we should be thinking about\n>>> other things. As a thought exercise- how is an event trigger really\n>>> different from a table-level trigger? Anyone who has the ability to\n>>> create objects is able to create tables, create functions, create\n>>> operators, and a user logging in and running SQL can certainly end up\n>>> running those things with their privileges.\n>> \n>> The difference in my mind is that table triggers owned by non-superusers have been around for a long time and are in heavy use, so changing how that behaves is a huge backwards compatibility break. 
Event triggers owned by non-superusers are only a fluke, not an intentional feature, and only occur when a superuser creates an event trigger and later has superuser privileges revoked. We can expect that far fewer users are really depending on that to work compared with table triggers.\n>> \n>> In a green field, I would not create table triggers to work as they do.\n> \n> I don't think we're entirely beholden to having table-level triggers\n> work the way they do today. I agree that we can't simply stop having\n> them fire for some users while letting things continue to happen on the\n> table but throwing an error and rolling back a transaction with an error\n> saying \"you were about to run this trigger which runs this function with\n> your privileges and you don't trust the person who wrote it\" seems\n> entirely within reason, were we to have such a concept.\n\nYou're pushing at an open door. If the community doesn't object to fixing the security problems with table triggers, I'm not going to object either.\n\n>> If roles were not cluster-wide, I might not have such a problem with leaving things mostly as they are. But there is something really objectionable to having two separate databases in a cluster intended for two separate purposes and with two separate sets of roles, and the set of roles in one database can mess with the roles intended for the other database. I think some kind of partitioning is needed, and I saw role ownership as the cleanest solution to it. 
I share your intuitions that perhaps the WITH ADMIN OPTION stuff could be used instead, but I don't see quite how.\n> \n> I agree that roles existing cluster-level is an issue in some instances\n> though I'm not quite sure what the concern here is (how could a role in\n> database A mess with roles in database B unless the first role had some\n> kind of access on those roles, in which case, what's the issue..?).\n\nThe problem is that superusers can act in any database, so role administration in database A must be done by a non-superuser if you want that administrator to be unable to mess with the roles used in database B. But what mechanism do we have for that? WITH ADMIN OPTION is too narrow to do it, and I've already argued why I don't want to expand that power, and CREATEROLE as currently implemented is too broad.\n\n> Another thread/patch under discussion is around role membership being\n> made to be able to be per-database, which could be pretty interesting,\n> though I don't think it directly helps with what you're suggesting\n> above, unfortunately.\n\nYes, I took some interest in that conversation. Like you, I'm not sure I see how it fixes the problems under discussion here.\n\n>>> Last thought I'll share is that I do believe we're going to want to\n>>> provide flexibility when it comes to defining who event triggers run\n>>> for, as a given admin may wish for that set to be different from the set\n>>> of roles that they ultimately have control over. I dislike tying these\n>>> two things together at such a core level and therefore continue to feel\n>>> that CREATE EVENT TRIGGER should be extended in some fashion to allow\n>>> individuals who can create them to specify who they are to run for.\n>> \n>> Within reason, sure. It is fine by me if we support CREATE EVENT TRIGGER...AUTHORIZATION... in order to accomplish that. 
But the role running that command still needs to be limited to just a (proper or otherwise) subset of their own privileges.\n>> \n>> I thought about this some when originally writing the event trigger patch. The author of the event trigger is free to add a preamble to the trigger that exits early if the user, time of day, phase of the moon, etc., are inappropriate per the reasoning of the trigger author. We only need the system to prevent the event trigger from casting too wide a net. The event trigger author can limit the scope of the net further if desired.\n> \n> I don't know that all such event triggers will necessarily be able to be\n> modified by everyone who will want to use them in the way you're\n> suggesting. Consider that there's things which require the event\n> trigger to be a C function as a simple example.\n\nI don't care much about this. You can implement that if you want, and I'm not going to have a reason to complain, unless it somehow allows people to cast too wide a net. Narrowing the net is, to my mind, entirely orthogonal to this discussion.\n\n>>> Open to different ideas as to how a user could express that, but it\n>>> feels to me like that should be a core part of the definition of a\n>>> user-defined event trigger (ie: could be \"FOR ALL ROLES I OWN\" or\n>>> whatever, and maybe that's the default, but having that be the only\n>>> option isn't appealing).\n>> \n>> I am not strongly against adding syntactic support for FOR ALL ROLES vs. FOR role, role, ..., so long as that syntax cannot expand the net. It does seem a bit arbitrary to me, though, since you could also say FOR HOURS OF DAY 11PM through 3AM, and once you open the door to supporting all that in the syntax, and tracking it in the catalogs, you've opened a can of worms.\n> \n> I disagree that it's a \"can of worms\" that one would be opening. 
Sure,\n> folks can ask for all kinds of things and that's true today, but\n> ultimately we're the arbitrators of what is a sensible and common enough\n> use-case and what's not. We seem to be in pretty clear agreement that\n> it's a sensible and reasonably common use-case for an event trigger\n> definer to wish for it to only be run for some subset of individuals and\n> that subset might not always be the exact subset of individuals that a\n> given role has 'ownership' or 'admin' rights over. Your approach puts\n> the onus of limiting that on the trigger author, who might not even be\n> involved if it's some function that's provided from an extension and\n> written in C and distributed in a packaged form from PGDG. There's also\n> no way to tie together privileges between who is allowed to do some\n> action and the individuals who the event trigger fires for, which seems\n> unfortuante to me.\n\nI understand why you want this, but I don't understand why you think it is tied to a security patch. I'm not being facetious when I mention having syntax to support event triggers to fire only at certain times of day. Plenty of deployments I have encountered have exactly this type of restriction, limiting the time of day when certain sorts of actions can be performed. Similarly, I have seen deployments which have their heaviest activity around the Christmas shopping season. They might want event triggers that fire between Black Friday and Boxing Day that prevent schema changes during such heavy load, but not the rest of the year. And they might want them to fire for some roles and not others.\n\nThe idea in the event trigger patch is to make it reasonable, from a security standpoint, to allow non-superusers to create event triggers. The only thing that makes it *unreasonable* for them to do so is that an event trigger could block or alter actions performed by roles which the non-superuser trigger owner should not have the privilege to block or alter. 
So restrictions on when the event trigger fires to get around that problem seem on topic. Other filters, no matter how good the idea, are simply off topic. Go and implement them with my blessing. But I don't see why I should have to implement them as part of this patch set.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 14:42:42 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" }, { "msg_contents": "\n\n> On Nov 1, 2021, at 2:05 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I'm not quite following how your response here is addressing the\n> point that I brought up in what was quoted above it.\n\nCould you clarify which question I didn't answer? I fear I may have left something unanswered, but I don't know to what you are referring.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 15:42:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Delegating superuser tasks to new security roles (Was: Granting\n control of SUSET gucs to non-superusers)" } ]
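Mark's point above — that a trigger author is free to add a preamble that exits early when the invoking role or the time of day is out of scope — can be sketched as a plain predicate. This is an illustrative, self-contained sketch only: `trigger_should_fire` and all of its parameters are hypothetical names invented here, not PostgreSQL's event-trigger API.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical "preamble" gate in the shape Mark describes: the trigger
 * body itself checks the invoking role and the hour of day, and returns
 * early when either is out of scope.  None of these names exist in
 * PostgreSQL; this only illustrates narrowing the net from inside the
 * trigger, e.g. "fire only between 11PM and 3AM, only for listed roles".
 */
static bool
trigger_should_fire(const char *role, int hour,
                    const char *const *allowed_roles, int nroles,
                    int start_hour, int end_hour)
{
    bool    hour_ok;

    if (start_hour <= end_hour)
        hour_ok = (hour >= start_hour && hour < end_hour);
    else
        hour_ok = (hour >= start_hour || hour < end_hour);  /* window wraps past midnight */

    if (!hour_ok)
        return false;           /* outside the allowed time window */

    for (int i = 0; i < nroles; i++)
        if (strcmp(role, allowed_roles[i]) == 0)
            return true;        /* fire only for the listed roles */

    return false;
}
```

A real event trigger would do the equivalent in PL/pgSQL or C and simply return before taking any action; the security question in the thread is only about the cases the system must forbid, not the narrower cases a trigger author opts into.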
[ { "msg_contents": "Hi hackers,\n\nI was wondering if we can improve the error messages for acl permission\nfailures.\nCurrent implementation to report errors is in \"backend/catalog/aclchk.c\"\n void aclcheck_error(AclResult aclerr, ObjectType objtype, const char\n*objectname);\n\nbased on the AclResult type, it print log messages like\n \"permission denied for schema %s\"\nwhich tells the admins what could be the domain of the permission-deny,\nlike table name or schema name.\n\nHowever, I find that the log messages *lack* more details, i.e., the\n*exact permission* that causes the permission-deny. For the novice users,\nthey may end up over-granting the permission to fix the issues\nand cause security vulnerability in the database.\n\nI think the log messages can be better if we add some diagnostic\ninformation like which *role* is denied and what *permission* it lacks.\nThis way the users know which permission to grant exactly\nwithout the trial-and-errors.\n\nIt is not hard to improve the log messages after looking into the code.\nMost places use the function aclcheck_error() exactly after the permission\ncheck, e.g., pg_type_aclcheck(), pg_tablespace_aclcheck().\nFor example, in backend/commands/dbcommands.c, it checks whether\nthe user has CREATE permission.\n\naclresult = pg_tablespace_aclcheck(dst_deftablespace, GetUserId(),\nACL_CREATE);\nif (aclresult != ACLCHECK_OK)\n aclcheck_error(aclresult, OBJECT_TABLESPACE, tablespacename);\n\nWe can simply change the aclcheck_error() function parameter a bit,\nthen we can pass the exact permission to the function, and tell the users\nexactly why the permission is denied. Something would be like\n\nvoid aclcheck_error(AclResult aclerr, ObjectType objtype,\n const char *objectname,\n const char *privilegename)\n\nAny thoughts would be appreciated. 
Thanks!\n\nBest regards,\nBingyu\n", "msg_date": "Fri, 30 Apr 2021 16:55:18 -0700", "msg_from": "Bingyu Shen <ahshenbingyu@gmail.com>", "msg_from_op": true, "msg_subject": "Log enhancement for aclcheck permissions failures" }, { "msg_contents": "On Sat, May 1, 2021 at 5:26 AM Bingyu Shen <ahshenbingyu@gmail.com> wrote:\n> Hi hackers,\n>\n> I was wondering if we can improve the error messages for acl permission failures.\n> Current implementation to report errors is in \"backend/catalog/aclchk.c\"\n> void aclcheck_error(AclResult aclerr, ObjectType objtype, const char *objectname);\n>\n> based on the AclResult type, it print log messages like\n> \"permission denied for schema %s\"\n> which tells the admins what could be the domain of the permission-deny,\n> like table name or schema name.\n>\n> However, I find that the log messages *lack* more details, i.e., the\n> *exact permission* that causes the permission-deny. For the novice users,\n> they may end up over-granting the permission to fix the issues\n> and cause security vulnerability in the database.\n>\n> I think the log messages can be better if we add some diagnostic\n> information like which *role* is denied and what *permission* it lacks.\n> This way the users know which permission to grant exactly\n> without the trial-and-errors.\n\nI think it's easy for users (even if they are novice) to know exactly\nwhat permission they are lacking by just looking at the query. See,\nthe permissions we have in parsenodes.h with ACL_XXXX, they are quite\nclear and can be understood by the type of query.
So, I don't think\nprinting that obvious information in the log message is something we\nwould want to improve.\n\nTo know the current role with which the query is run, users can use\nCURRENT_ROLE or pg_roles.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 10:42:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Log enhancement for aclcheck permissions failures" } ]
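The richer message Bingyu asks for — naming the role and the missing privilege alongside the object — can be illustrated with a tiny formatting sketch. `format_acl_error` and its signature are invented here for illustration only; the server's actual aclcheck_error() works on AclResult/ObjectType and would need the extra privilege parameter the thread proposes.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical formatter showing what the proposed
 * aclcheck_error(aclerr, objtype, objectname, privilegename) could emit,
 * compared with today's bare "permission denied for <objtype> <name>".
 * Not PostgreSQL code -- just the shape of the extended message.
 */
static void
format_acl_error(char *buf, size_t buflen,
                 const char *objtype, const char *objname,
                 const char *role, const char *privilege)
{
    snprintf(buf, buflen,
             "permission denied for %s %s: role \"%s\" lacks %s privilege",
             objtype, objname, role, privilege);
}
```

With this shape, the pg_tablespace_aclcheck() example from the first message would report the tablespace, the role, and the missing CREATE privilege in a single line, instead of leaving the user to infer the privilege from the failed statement.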
[ { "msg_contents": "Hi,\n\nTRAP: FailedAssertion(\"!all_visible_according_to_vm ||\nprunestate.all_visible\", File:\n\"/home/pgsql/build-farm/buildroot/HEAD/pgsql.build/../pgsql/src/backend/access/heap/vacuumlazy.c\",\nLine: 1347, PID: 16926)\n\n2021-04-30 04:15:50.317 PDT [10349:18] DETAIL: Failed process was\nrunning: autovacuum: VACUUM ANALYZE pg_catalog.pg_attribute\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2021-04-30%2006:43:27\n\n\n", "msg_date": "Sat, 1 May 2021 16:43:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "New assertion failed in lazy_scan_heap() on mereswine" }, { "msg_contents": "On Sat, May 1, 2021 at 10:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi,\n>\n> TRAP: FailedAssertion(\"!all_visible_according_to_vm ||\n> prunestate.all_visible\", File:\n> \"/home/pgsql/build-farm/buildroot/HEAD/pgsql.build/../pgsql/src/backend/access/heap/vacuumlazy.c\",\n> Line: 1347, PID: 16926)\n>\n> 2021-04-30 04:15:50.317 PDT [10349:18] DETAIL: Failed process was\n> running: autovacuum: VACUUM ANALYZE pg_catalog.pg_attribute\n>\n\nThere is another thread[1] that has reported the same issue and also\nprovided a script to reproduce the issue.\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9@OS0PR01MB6113.jpnprd01.prod.outlook.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 10:41:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New assertion failed in lazy_scan_heap() on mereswine" }, { "msg_contents": "On Sat, May 01, 2021 at 04:43:21PM +1200, Thomas Munro wrote:\n> TRAP: FailedAssertion(\"!all_visible_according_to_vm || prunestate.all_visible\", File: \"/home/pgsql/build-farm/buildroot/HEAD/pgsql.build/../pgsql/src/backend/access/heap/vacuumlazy.c\", Line: 1347, PID: 16926)\n> 2021-04-30 04:15:50.317 PDT [10349:18] 
DETAIL: Failed process was\n> running: autovacuum: VACUUM ANALYZE pg_catalog.pg_attribute\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2021-04-30%2006:43:27\n\nThe same assertion was reported here\nhttps://www.postgresql.org/message-id/flat/OS0PR01MB611340CBD300A7C4FD6B6101FB5F9%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n|TRAP: FailedAssertion(\"!all_visible_according_to_vm || prunestate.all_visible\", File: \"/home/pgsql/build-farm/buildroot/HEAD/pgsql.build/../pgsql/src/backend/access/heap/vacuumlazy.c\", Line: 1347, PID: 16926)\n|2021-04-30 04:15:50.317 PDT [10349:18] DETAIL: Failed process was running: autovacuum: VACUUM ANALYZE pg_catalog.pg_attribute\n\nAnd Michael added\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&type=revision&diff=35954&oldid=35952\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 May 2021 00:11:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: New assertion failed in lazy_scan_heap() on mereswine" } ]
[ { "msg_contents": "Hi,\n\nBeing able to extend core parser has been requested multiple times, and AFAICT\nall previous attempts were rejected not because this isn't wanted but because\nthe proposed implementations required plugins to reimplement all of the core\ngrammar with their own changes, as bison generated parsers aren't extensible.\n\nI'd like to propose an alternative approach, which is to allow multiple parsers\nto coexist, and let third-party parsers optionally fallback on the core\nparsers. I'm sending this now as a follow-up of [1] and to avoid duplicated\nefforts, as multiple people are interested in that topic.\n\nObviously, since this is only about parsing, all modules can only implement\nsome kind of syntactic sugar, as they have to produce valid parsetrees, but\nthis could be a first step to later allow custom nodes and let plugins\nimplement e.g. new UTILITY commands.\n\nSo, this approach should allow different custom parser implementations:\n\n1 implement only a few new commands on top of core grammar. For instance, an\n extension could add support for CREATE [PHYSICAL | LOGICAL] REPLICATION SLOT\n and rewrite that to a SelectStmt on top of the existing function, or add a\n CREATE HYPOTHETICAL INDEX, which would internally add a new option in\n IndexStmt->options, to be intercepted in processUtility and bypass its\n execution with the extension approach instead.\n\n2 implement a totally different grammar for a different language. In case of\n error, just silently fallback to core parser (or another hook) so both\n parsers can still be used. Any language could be parsed as long as you can\n produce a valid postgres parsetree.\n\n3 implement a superset of core grammar and replace core parser entirely.
This\n could arguably be done like the 1st case, but the idea is to avoid to\n possibly parse the same input string twice, or to forbid the core parser if\n that's somehow wanted.\n\n\nI'm attaching some POC patches that implement this approach to start a\ndiscussion. I split the infrastructure part in 2 patches to make it easier to\nreview, and I'm also adding 2 other patches with a small parser implementation\nto be able to test the infrastructure. Here are some more details on the\npatches and implementation details:\n\n0001 simply adds a parser hook, which is called instead of raw_parser. This is\nenough to make multiple parser coexist with one exception: multi-statement\nquery string. If multiple statements are provided, then all of them will be\nparsed using the same grammar, which obviously won't work if they are written\nfor different grammars.\n\n0002 implements a lame \"sqlol\" parser, based on LOLCODE syntax, with only the\nability to produce \"select [col, ] col FROM table\" parsetree, for testing\npurpose. I chose it to ensure that everything works properly even with a\ntotally different grammar that has different keywords, which doesn't even ends\nstatements with a semicolon but a plain keyword.\n\n0003 is where the real modifications are done to allow multi-statement string\nto be parsed using different grammar. It implements a new MODE_SINGLE_QUERY\nmode, which is used when a parser_hook is present. In that case,\npg_parse_query() will only parse part of the query string and loop until\neverything is parsed (or some error happens).\n\npg_parse_query() will instruct plugins to parse a query at a time. They're\nfree to ignore that mode if they want to implement the 3rd mode. If so, they\nshould either return multiple RawStmt, a single RawStmt with a 0 or\nstrlen(query_string) stmt_len, or error out. 
Otherwise, they will implement\neither mode 1 or 2, and they should always return a List containing a single\nRawStmt with properly set stmt_len, even if the underlying statement is NULL.\nThis is required to properly skip valid strings that don't contain a\nstatements, and pg_parse_query() will skip RawStmt that don't contain an\nunderlying statement.\n\nIt also teaches the core parser to do the same, by optionally start parsing\nsomewhere in the input string and stop parsing once a valid statement is found.\n\nNote that the whole input string is provided to the parsers in order to report\ncorrect cursor position, so all token can get a correct location. This means\nthat raw_parser() signature needs an additional offset to know where the\nparsing should start.\n\nFinally, 0004 modifies the sqlol parser to implement the MODE_SINGLE_QUERY\nmode, adds grammar for creating views and adds some regression test to validate\nproper parsing and error location reporting with multi-statements input string.\n\nAs far as I can tell it's all working as expected but I may have missed some\nusecases. The regression tests still work with the additional parser\nconfigured. The only difference is for pg_stat_statements, as in\nMODE_SINGLE_QUERY the trailing semicolon has to be included in the statement,\nsince other grammars may understand semicolons differently.\n\nThe obvious drawback is that it can cause overhead as the same input can be\nparsed multiple time. This could be avoided with plugins implementing a GUC to\nenable/disable their parser, so it's only active by default for some\nusers/database, or requires to be enabled interactively by the client app.\n\nAlso, the error messages can also be unhelpful for cases 1 and 2. If the\ncustom parser doesn't error out, it means that the syntax errors will be raised\nby the core parser based on the core grammar, which will likely point out an\nunrelated problem. 
Some of that can be avoided by letting the custom parsers\nraise errors when they know for sure it's parsing what it's supposed to parse\n(there's an example of that in the sqlol parser for qualified_name parsing, as\nit can only happen once some specific keywords already matched). For the rest\nof the errors, the only option I can think of is another GUC to let custom\nparsers always raise an error (or raise a warning) to help people debug their\nqueries.\n\nI'll park this patch in the next commitfest so it can be discussed when pg15\ndevelopment starts.\n\n[1]: https://www.postgresql.org/message-id/20210315164336.ak32whndsxna5mjf@nol", "msg_date": "Sat, 1 May 2021 15:24:58 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Hook for extensible parsing." }, { "msg_contents": "On Sat, May 01, 2021 at 03:24:58PM +0800, Julien Rouhaud wrote:\n> \n> I'm attaching some POC patches that implement this approach to start a\n> discussion.\n\nI just noticed that the cfbot fails with the v1 patch. Attached v2 that should\nfix that.", "msg_date": "Sun, 6 Jun 2021 14:50:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Sun, Jun 06, 2021 at 02:50:19PM +0800, Julien Rouhaud wrote:\n> On Sat, May 01, 2021 at 03:24:58PM +0800, Julien Rouhaud wrote:\n> > \n> > I'm attaching some POC patches that implement this approach to start a\n> > discussion.\n> \n> I just noticed that the cfbot fails with the v1 patch. 
Attached v2 that should\n> fix that.\n\nThe cfbot then revealed a missing dependency in the makefile to generate the\ncontrib parser, which triggers in make check-world without a previous\nmake -C contrib.\n\nThanks a lot to Thomas Munro for getting me the logfile from the failed cfbot\nrun and the fix!", "msg_date": "Tue, 8 Jun 2021 12:16:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Tue, Jun 08, 2021 at 12:16:48PM +0800, Julien Rouhaud wrote:\n> On Sun, Jun 06, 2021 at 02:50:19PM +0800, Julien Rouhaud wrote:\n> > On Sat, May 01, 2021 at 03:24:58PM +0800, Julien Rouhaud wrote:\n> > > \n> > > I'm attaching some POC patches that implement this approach to start a\n> > > discussion.\n\nThe regression tests weren't stable, v4 fixes that.", "msg_date": "Sat, 12 Jun 2021 16:29:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Sat, Jun 12, 2021 at 4:29 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> I'd like to propose an alternative approach, which is to allow multiple parsers\n> to coexist, and let third-party parsers optionally fallback on the core\n> parsers. I'm sending this now as a follow-up of [1] and to avoid duplicated\n> efforts, as multiple people are interested in that topic.\n\nThe patches all build properly and pass all regressions tests.\n\n> pg_parse_query() will instruct plugins to parse a query at a time. They're\n> free to ignore that mode if they want to implement the 3rd mode. If so, they\n> should either return multiple RawStmt, a single RawStmt with a 0 or\n> strlen(query_string) stmt_len, or error out. 
Otherwise, they will implement\n> either mode 1 or 2, and they should always return a List containing a single\n> RawStmt with properly set stmt_len, even if the underlying statement is NULL.\n> This is required to properly skip valid strings that don't contain a\n> statements, and pg_parse_query() will skip RawStmt that don't contain an\n> underlying statement.\n\nWouldn't we want to only loop through the individual statements if parser_hook\nexists? The current patch seems to go through the new code path regardless\nof the hook being grabbed.\n\n\n", "msg_date": "Tue, 6 Jul 2021 15:26:16 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Thanks for the review Jim!\n\nOn Wed, Jul 7, 2021 at 3:26 AM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n>\n> On Sat, Jun 12, 2021 at 4:29 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> The patches all build properly and pass all regressions tests.\n\nNote that the cfbot reports a compilation error on windows. That's on\nthe grammar extension part, so I'm really really interested in trying\nto fix that for now, as it's mostly a quick POC to demonstrate how one\ncould implement a different grammar and validate that everything works\nas expected.\n\nAlso, if this patch is eventually committed and having some code to\nexperience the hook is wanted it would probably be better to have a\nvery naive parser (based on a few strcmp() calls or something like\nthat) to validate the behavior rather than having a real parser.\n\n> > pg_parse_query() will instruct plugins to parse a query at a time. They're\n> > free to ignore that mode if they want to implement the 3rd mode. If so, they\n> > should either return multiple RawStmt, a single RawStmt with a 0 or\n> > strlen(query_string) stmt_len, or error out. 
Otherwise, they will implement\n> > either mode 1 or 2, and they should always return a List containing a single\n> > RawStmt with properly set stmt_len, even if the underlying statement is NULL.\n> > This is required to properly skip valid strings that don't contain a\n> > statements, and pg_parse_query() will skip RawStmt that don't contain an\n> > underlying statement.\n>\n> Wouldn't we want to only loop through the individual statements if parser_hook\n> exists? The current patch seems to go through the new code path regardless\n> of the hook being grabbed.\n\nI did think about it, but I eventually chose to write it this way.\nHaving a different code path for the no-hook situation won't make the\nwith-hook code any easier (it should only remove some check for the\nhook in some places that have 2 or 3 other checks already). On the\nother hand, having a single code path avoid some (minimal) code\nduplication, and also ensure that the main loop is actively tested\neven without the hook being set. That's not 100% coverage, but it's\nbetter than nothing. Performance wise, it shouldn't make any\nnoticeable difference for the no-hook case.\n\n\n", "msg_date": "Wed, 7 Jul 2021 17:25:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Wed, Jul 7, 2021 at 5:26 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Also, if this patch is eventually committed and having some code to\n> experience the hook is wanted it would probably be better to have a\n> very naive parser (based on a few strcmp() calls or something like\n> that) to validate the behavior rather than having a real parser.\n>\n\nThe test module is very useful to show how to use the hook but it isn't\nvery useful to the general user like most other things in contrib. 
It probably\nfits better in src/test/modules\n\n\n", "msg_date": "Wed, 7 Jul 2021 08:44:56 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Wed, Jul 7, 2021 at 8:45 PM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n>\n> The test module is very useful to show how to use the hook but it isn't\n> very useful to the general user like most other things in contrib. It probably\n> fits better in src/test/modules\n\nI agree that it's not useful at all to eventually have it as a\ncontrib, but it's somewhat convenient at this stage to be able to\neasily test the hook, possibly with different behavior.\n\nBut as I said, if there's an agreement on the approach and the\nimplementation, I don't think that it would make sense to keep it even\nin the src/test/modules. A full bison parser, even with a limited\ngrammar, will have about 99% of noise when it comes to demonstrate how\nthe hook is supposed to work, which basically is having a \"single\nquery\" parser or a \"full input string\" parser. I'm not even convinced\nthat flex/bison will be the preferred choice for someone who wants to\nimplement a custom parser.\n\nI tried to add really thorough comments in the various parts of the\npatch to make it clear how to do that and how the system will react\ndepending on what a hook does. I also added some protection to catch\ninconsistent hook implementation. I think that's the best way to help\nexternal parser authors to implement what they want, and I'll be happy\nto improve the comments if necessary. But if eventually people would\nlike to have a real parser in the tree, for testing or guidance, I\nwill of course take care of doing the required changes and moving the\ndemo parser in src/test/modules.\n\n\n", "msg_date": "Wed, 7 Jul 2021 22:19:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." 
}, { "msg_contents": "On Sat, Jun 12, 2021 at 1:59 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Jun 08, 2021 at 12:16:48PM +0800, Julien Rouhaud wrote:\n> > On Sun, Jun 06, 2021 at 02:50:19PM +0800, Julien Rouhaud wrote:\n> > > On Sat, May 01, 2021 at 03:24:58PM +0800, Julien Rouhaud wrote:\n> > > >\n> > > > I'm attaching some POC patches that implement this approach to start a\n> > > > discussion.\n>\n> The regression tests weren't stable, v4 fixes that.\n\n1) CFBOT showed the following compilation errors in windows:\n\"C:\\projects\\postgresql\\pgsql.sln\" (default target) (1) ->\n\"C:\\projects\\postgresql\\sqlol.vcxproj\" (default target) (69) ->\n(ClCompile target) ->\n c1 : fatal error C1083: Cannot open source file:\n'contrib/sqlol/sqlol_gram.c': No such file or directory\n[C:\\projects\\postgresql\\sqlol.vcxproj]\n c:\\projects\\postgresql\\contrib\\sqlol\\sqlol_gramparse.h(25): fatal\nerror C1083: Cannot open include file: 'sqlol_gram.h': No such file or\ndirectory (contrib/sqlol/sqlol.c)\n[C:\\projects\\postgresql\\sqlol.vcxproj]\n c:\\projects\\postgresql\\contrib\\sqlol\\sqlol_gramparse.h(25): fatal\nerror C1083: Cannot open include file: 'sqlol_gram.h': No such file or\ndirectory (contrib/sqlol/sqlol_keywords.c)\n[C:\\projects\\postgresql\\sqlol.vcxproj]\nc1 : fatal error C1083: Cannot open source file:\n'contrib/sqlol/sqlol_scan.c': No such file or directory\n[C:\\projects\\postgresql\\sqlol.vcxproj]\n0 Warning(s)\n4 Error(s)\n6123\n6124Time Elapsed 00:05:40.23\n6125\n\n2) There was one small whitespace error with the patch:\ngit am v4-0002-Add-a-sqlol-parser.patch\nApplying: Add a sqlol parser.\n.git/rebase-apply/patch:818: new blank line at EOF.\n+\nwarning: 1 line adds whitespace errors.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 22 Jul 2021 12:01:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." 
}, { "msg_contents": "On Thu, Jul 22, 2021 at 12:01:34PM +0530, vignesh C wrote:\n> \n> 1) CFBOT showed the following compilation errors in windows:\n\nThanks for looking at it. I'm aware of this issue on windows, but as mentioned\nin the thread the new contrib is there to demonstrate how the new\ninfrastructure works. If there were some interest in pushing the patch, I\ndon't think that we would add a full bison parser, whether it's in contrib or\ntest modules.\n\nSo unless there's a clear indication from a committer that we would want to\nintegrate such a parser, or if someone is interested in reviewing the patch and\nonly has a windows machine, I don't plan to spend time trying to fix a windows\nonly problem for something that will disappear anyway.\n\n> 2) There was one small whitespace error with the patch:\n> git am v4-0002-Add-a-sqlol-parser.patch\n> Applying: Add a sqlol parser.\n> .git/rebase-apply/patch:818: new blank line at EOF.\n> +\n> warning: 1 line adds whitespace errors.\n\nIndeed, there's a trailing empty line in contrib/sqlol/sqlol_keywords.c. I\nfixed it locally, but as I said this module is most certainly going to\ndisappear so I'm not sending an updating patch right now.\n\n\n", "msg_date": "Thu, 22 Jul 2021 15:04:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Thu, Jul 22, 2021 at 03:04:19PM +0800, Julien Rouhaud wrote:\n> On Thu, Jul 22, 2021 at 12:01:34PM +0530, vignesh C wrote:\n> > \n> > 1) CFBOT showed the following compilation errors in windows:\n> \n> Thanks for looking at it. I'm aware of this issue on windows, but as mentioned\n> in the thread the new contrib is there to demonstrate how the new\n> infrastructure works. 
If there were some interest in pushing the patch, I\n> don't think that we would add a full bison parser, whether it's in contrib or\n> test modules.\n> \n> So unless there's a clear indication from a committer that we would want to\n> integrate such a parser, or if someone is interested in reviewing the patch and\n> only has a windows machine, I don't plan to spend time trying to fix a windows\n> only problem for something that will disappear anyway.\n\nI'm not sure what changed in the Windows part of the cfbot, but somehow it's\nnot hitting any compilation error anymore and all the tests are now green.\n\n\n", "msg_date": "Sun, 22 Aug 2021 16:16:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "v5 attached, fixing conflict with 639a86e36a (Remove Value node struct)", "msg_date": "Wed, 15 Sep 2021 16:50:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Sat, 1 May 2021 at 08:24, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Being able to extend core parser has been requested multiple times, and AFAICT\n> all previous attempts were rejected not because this isn't wanted but because\n> the proposed implementations required plugins to reimplement all of the core\n> grammar with their own changes, as bison generated parsers aren't extensible.\n>\n> I'd like to propose an alternative approach, which is to allow multiple parsers\n> to coexist, and let third-party parsers optionally fallback on the core\n> parsers.\n\nYes, that approach has been discussed by many people, most recently\naround the idea to create a fast-path grammar to make the most\nfrequently used SQL parse faster.\n\n> 0002 implements a lame \"sqlol\" parser, based on LOLCODE syntax, with only the\n> ability to produce \"select [col, ] col FROM table\" parsetree, for testing\n> purpose. 
I chose it to ensure that everything works properly even with a\n> totally different grammar that has different keywords, which doesn't even ends\n> statements with a semicolon but a plain keyword.\n\nThe general rule has always been that we don't just put hooks in, we\nalways require an in-core use for those hooks. I was reminded of that\nmyself recently.\n\nWhat we need is something in core that actually makes use of this. The\nreason for that is not politics, but a simple test of whether the\nfeature makes sense AND includes all required bells and whistles to be\nuseful in the real world.\n\nCore doesn't need a LOL parser and I don't think we should commit that.\n\nIf we do this, I think it should have CREATE LANGUAGE support, so that\neach plugin can be seen as an in-core object and allow security around\nwhich users can execute which language types, allow us to switch\nbetween languages and have default languages for specific users or\ndatabases.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 15 Sep 2021 14:25:17 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Wed, Sep 15, 2021 at 9:25 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> The general rule has always been that we don't just put hooks in, we\n> always require an in-core use for those hooks. I was reminded of that\n> myself recently.\n>\nThat's not historically what has happened. There are several hooks with\nno in core use such as emit_log_hook and ExplainOneQuery_hook. The recent\nopenssl_tls_init_hook only has a usage in src/test/modules\n\n> What we need is something in core that actually makes use of this. The\n> reason for that is not politics, but a simple test of whether the\n> feature makes sense AND includes all required bells and whistles to be\n> useful in the real world.\n>\nAgreed. 
There should be something in src/test/modules to exercise this\nbut probably more to flush things out. Maybe extending adminpack to use\nthis so if enabled, it can use syntax like:\nFILE READ 'foo.txt'\n\n> Core doesn't need a LOL parser and I don't think we should commit that.\n>\n+1\n\n> If we do this, I think it should have CREATE LANGUAGE support, so that\n> each plugin can be seen as an in-core object and allow security around\n> which users can execute which language types, allow us to switch\n> between languages and have default languages for specific users or\n> databases.\n>\nThis hook allows extension developers to supplement syntax in addition\nto adding a whole new language allowing the extension to appear more\nnative to the end user. For example, pglogical could use this to add\nsyntax to do a CREATE NODE instead of calling the function create_node.\nAdding CREATE LANGUAGE support around this would just be for a narrow\nset of use cases where a new language is added.\n\n\n", "msg_date": "Wed, 15 Sep 2021 10:14:38 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Wed, Sep 15, 2021 at 10:14 PM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n>\n> On Wed, Sep 15, 2021 at 9:25 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > The general rule has always been that we don't just put hooks in, we\n> > always require an in-core use for those hooks. I was reminded of that\n> > myself recently.\n> >\n> That's not historically what has happened. There are several hooks with\n> no in core use such as emit_log_hook and ExplainOneQuery_hook. The recent\n> openssl_tls_init_hook only has a usage in src/test/modules\n\nYes, I also think that it's not a strict requirement that all hooks\nhave a caller in the core, even if it's obviously better if that's the\ncase.\n\n> > What we need is something in core that actually makes use of this. 
The\n> > reason for that is not politics, but a simple test of whether the\n> > feature makes sense AND includes all required bells and whistles to be\n> > useful in the real world.\n> >\n> Agreed. There should be something in src/test/modules to exercise this\n> but probably more to flush things out. Maybe extending adminpack to use\n> this so if enabled, it can use syntax like:\n> FILE READ 'foo.txt'\n\nFor this hook, maintaining a real alternative parser seems like way\ntoo much trouble to justify an in-core user. The fact that many\npeople have asked for such a feature over the year should be enough to\njustify the use case. We could try to invent some artificial need\nlike the one you suggest for adminpack, but it also feels like a waste\nof resources.\n\nAs far as I'm concerned a naive strcmp-based parser in\nsrc/test/modules should be enough to validate that the hook is\nworking, there's no need for more. In any case if the only\nrequirement for it to be committed is to write a real parser, whether\nin contrib or src/test/modules, I'll be happy to do it.\n\n> > Core doesn't need a LOL parser and I don't think we should commit that.\n> >\n> +1\n\nI entirely agree, and I repeatedly mentioned in that thread that I did\n*not* want to add this parser in core. The only purpose of patches\n0002 and 0004 is to make the third-party bison based parser\nrequirements less abstract, and demonstrate that this approach can\nsuccessfully make two *radically different* parsers cohabit.\n\n> > If we do this, I think it should have CREATE LANGUAGE support, so that\n> > each plugin can be seen as an in-core object and allow security around\n> > which users can execute which language types, allow us to switch\n> > between languages and have default languages for specific users or\n> > databases.\n> >\n> This hook allows extension developers to supplement syntax in addition\n> to adding a whole new language allowing the extension to appear more\n> native to the end user. 
For example, pglogical could use this to add\n> syntax to do a CREATE NODE instead of calling the function create_node.\n> Adding CREATE LANGUAGE support around this would just be for a narrow\n> set of use cases where a new language is added.\n\nYes, this hook can be used to implement multiple things as I mentioned in\nmy initial email. Additionally, if this is eventually committed I'd\nlike to add support for CREATE HYPOTHETICAL INDEX grammar in hypopg.\nSuch a parser would only support one command (that extends an existing\none), so it can't really be called a language. Of course it would be\nbetter to have the core parser accept a CREATE [ HYPOTHETICAL ] INDEX\nand set up a flag so that third-party modules can intercept this\nutility command, but until that happens I could provide that syntactic\nsugar for my users as long as I'm motivated enough to write this\nparser.\n\nAlso, a hook based approach is still compatible with per database /\nrole configuration. It can be done either via specific\nsession_preload_libraries, or via a custom GUC if for some reason the\nmodule requires to be in shared_preload_libraries.\n\n\n", "msg_date": "Wed, 15 Sep 2021 22:55:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing."
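[Editorial note: the fallback behaviour discussed in this thread — a third-party parser handling the statements it recognizes and deferring everything else to the core grammar — can be modelled as a self-contained toy in plain C. The hook name, the signature, and the strncmp()-style dispatch below are illustrative assumptions only, not the API from the actual patch.]

```c
#include <assert.h>
#include <string.h>

/*
 * Toy model of a parser hook with optional fallback.  The real patch
 * works on PostgreSQL parse trees; here each "parser" just reports
 * which grammar claimed the query string.
 */
typedef const char *(*parser_hook_type)(const char *query);

static const char *
standard_parser(const char *query)
{
    (void) query;                       /* the core grammar handles anything */
    return "parsed by core grammar";
}

/* A "sqlol"-style parser: claims LOLCODE-ish input, falls back otherwise. */
static const char *
sqlol_parser(const char *query)
{
    if (strncmp(query, "HAI", 3) == 0)
        return "parsed by sqlol grammar";
    return standard_parser(query);      /* optional fallback to core parser */
}

/* Loading the module would swap this pointer, like installing a hook. */
static parser_hook_type parser_hook = sqlol_parser;

const char *
parse_query(const char *query)
{
    return parser_hook(query);
}
```

Usage-wise, this mirrors the naive strcmp-based test parser suggested above: statements the module does not recognize reach the core grammar unchanged.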
}, { "msg_contents": "On Wed, Sep 15, 2021 at 02:25:17PM +0100, Simon Riggs wrote:\n> On Sat, 1 May 2021 at 08:24, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > Being able to extend core parser has been requested multiple times, and AFAICT\n> > all previous attempts were rejected not because this isn't wanted but because\n> > the proposed implementations required plugins to reimplement all of the core\n> > grammar with their own changes, as bison generated parsers aren't extensible.\n> >\n> > I'd like to propose an alternative approach, which is to allow multiple parsers\n> > to coexist, and let third-party parsers optionally fallback on the core\n> > parsers.\n> \n> Yes, that approach has been discussed by many people, most recently\n> around the idea to create a fast-path grammar to make the most\n> frequently used SQL parse faster.\n> \n> > 0002 implements a lame \"sqlol\" parser, based on LOLCODE syntax, with only the\n> > ability to produce \"select [col, ] col FROM table\" parsetree, for testing\n> > purpose. I chose it to ensure that everything works properly even with a\n> > totally different grammar that has different keywords, which doesn't even ends\n> > statements with a semicolon but a plain keyword.\n> \n> The general rule has always been that we don't just put hooks in, we\n> always require an in-core use for those hooks. I was reminded of that\n> myself recently.\n> \n> What we need is something in core that actually makes use of this. 
The\n> reason for that is not politics, but a simple test of whether the\n> feature makes sense AND includes all required bells and whistles to be\n> useful in the real world.\n> \n> Core doesn't need a LOL parser and I don't think we should commit that.\n\nIt doesn't, but it very likely needs something people can use when\nthey create a new table AM, and that we should use the hook in core to\nimplement the heap* table AM to make sure the thing is working at DDL\ntime.\n\n> If we do this, I think it should have CREATE LANGUAGE support, so\n> that each plugin can be seen as an in-core object and allow security\n> around which users can execute which language types, allow us to\n> switch between languages and have default languages for specific\n> users or databases.\n\nThat's a great idea, but I must be missing something important as it\nrelates to parser hooks. Could you connect those a little more\nexplicitly?\n\nBest,\nDavid.\n\n* It's not actually a heap in the sense that the term is normally used\nin computing. I'd love to find out how it got to have this name and\ndocument same so others aren't also left wondering.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 15 Sep 2021 14:58:23 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "st 15. 9. 2021 v 16:55 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Wed, Sep 15, 2021 at 10:14 PM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n> >\n> > On Wed, Sep 15, 2021 at 9:25 AM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > The general rule has always been that we don't just put hooks in, we\n> > > always require an in-core use for those hooks. 
I was reminded of that\n> > > myself recently.\n> > >\n> > That's not historically what has happened. There are several hooks with\n> > no in core use such as emit_log_hook and ExplainOneQuery_hook. The recent\n> > openssl_tls_init_hook only has a usage in src/test/modules\n>\n> Yes, I also think that it's not a strict requirement that all hooks\n> have a caller in the core, even if it's obviously better if that's the\n> case.\n>\n> > > What we need is something in core that actually makes use of this. The\n> > > reason for that is not politics, but a simple test of whether the\n> > > feature makes sense AND includes all required bells and whistles to be\n> > > useful in the real world.\n> > >\n> > Agreed. There should be something in src/test/modules to exercise this\n> > but probably more to flush things out. Maybe extending adminpack to use\n> > this so if enabled, it can use syntax like:\n> > FILE READ 'foo.txt'\n>\n> For this hook, maintaining a real alternative parser seems like way\n> too much trouble to justify an in-core user. The fact that many\n> people have asked for such a feature over the year should be enough to\n> justify the use case. We could try to invent some artificial need\n> like the one you suggest for adminpack, but it also feels like a waste\n> of resources.\n>\n> As far as I'm concerned a naive strcmp-based parser in\n> src/test/modules should be enough to validate that the hook is\n> working, there's no need for more. In any case if the only\n> requirement for it to be committed is to write a real parser, whether\n> in contrib or src/test/modules, I'll be happy to do it.\n>\n> > > Core doesn't need a LOL parser and I don't think we should commit that.\n> > >\n> > +1\n>\n> I entirely agree, and I repeatedly mentioned in that thread that I did\n> *not* want to add this parser in core. 
The only purpose of patches\n> 0002 and 0004 is to make the third-party bison based parser\n> requirements less abstract, and demonstrate that this approach can\n> successfully make two *radically different* parsers cohabit.\n>\n> > > If we do this, I think it should have CREATE LANGUAGE support, so that\n> > > each plugin can be seen as an in-core object and allow security around\n> > > which users can execute which language types, allow us to switch\n> > > between languages and have default languages for specific users or\n> > > databases.\n> > >\n> > This hook allows extension developers to supplement syntax in addition\n> > to adding a whole new language allowing the extension to appear more\n> > native to the end user. For example, pglogical could use this to add\n> > syntax to do a CREATE NODE instead of calling the function create_node.\n> > Adding CREATE LANGUAGE support around this would just be for a narrow\n> > set of use cases where a new language is added.\n>\n> Yes, this hook can be used to implement multiple things as I mentioned\n> my initial email. Additionally, if this is eventually committed I'd\n> like to add support for CREATE HYPOTHETICAL INDEX grammar in hypopg.\n> Such a parser would only support one command (that extends an existing\n> one), so it can't really be called a language. Of course if would be\n> better to have the core parser accept a CREATE [ HYPOTHETICAL ] INDEX\n> and setup a flag so that third-parrty module can intercept this\n> utility command, but until that happens I could provide that syntactic\n> sugar for my users as long as I'm motivated enough to write this\n> parser.\n>\n>\nThere were nice stream databases, but that ended because maintaining a fork\nis too expensive. And without direct SQL (without possibility of parser\nenhancing), the commands based on function call API were not readable and\nworkable flexible like SQL. 
Sometimes we really don't want to replace\nPostgreSQL, but just enhance the main interface for extensions.\n\n\nAlso, a hook based approach is still compatible with per database /\n> role configuration. It can be done either via specific\n> session_preload_libraries, or via a custom GUC if for some reason the\n> module requires to be in shared_preload_libraries.\n>\n>\n>", "msg_date": "Wed, 15 Sep 2021 17:08:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Jim Mlodgenski <jimmy76@gmail.com> writes:\n> On Wed, Sep 15, 2021 at 9:25 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n>> The general rule has always been that we don't just put hooks in, we\n>> always require an in-core use for those hooks. I was reminded of that\n>> myself recently.\n\n> That's not historically what has happened. There are several hooks with\n> no in core use such as emit_log_hook and ExplainOneQuery_hook.\n\nYeah. I think the proper expectation is that there be a sufficiently\nworked-out example to convince us that the proposed hooks have real-world\nusefulness, and are not missing any basic requirements to make them do\nsomething useful. 
Whether the example ends up in our tree is a\ncase-by-case decision.\n\nIn the case at hand, what's troubling me is that I don't see any\nparticular use in merely substituting a new bison grammar, if it\nstill has to produce parse trees that the rest of the system will\nunderstand. Yeah, you could make some very simple surface-syntax\nchanges that way, but it doesn't seem like you could do anything\ninteresting (like, say, support Oracle-style outer join syntax).\nAFAICS, to get to a useful feature, you'd then need to invent an\nextensible Node system (which'd be hugely invasive if it's feasible\nat all), and then probably more things on top of that. So I'm not\nconvinced that you've demonstrated any real-world usefulness.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Sep 2021 11:26:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Wed, Sep 15, 2021 at 11:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> In the case at hand, what's troubling me is that I don't see any\n> particular use in merely substituting a new bison grammar, if it\n> still has to produce parse trees that the rest of the system will\n> understand. Yeah, you could make some very simple surface-syntax\n> changes that way, but it doesn't seem like you could do anything\n> interesting (like, say, support Oracle-style outer join syntax).\n> AFAICS, to get to a useful feature, you'd then need to invent an\n> extensible Node system (which'd be hugely invasive if it's feasible\n> at all), and then probably more things on top of that. So I'm not\n> convinced that you've demonstrated any real-world usefulness.\n\nI agree that this patchset can only implement syntactic sugars,\nnothing more (although for utility command you can do a bit more than\nthat). 
But that's already something people can use, mostly for\nmigration to postgres use cases probably.\n\nI'm not sure why you couldn't implement an Oracle-style outer join\nwith such a hook? The requirement is that the parser can't leak any\nnode that the rest of the system doesn't know about, but you can do\nwhat you want inside the parser. And as far as I can see we already\nhave an extensible node since bcac23de73b, so it seems to me that\nthere's enough infrastructure to handle this kind of use case.\n\nThe main downside is that you'll have to make a first pass to\ntransform your \"custom raw statement\" into a valid RawStmt in your\nparser, and the system will do another one to transform it in a Query.\nBut apart from that it should work. Am I missing something?\n\n\n", "msg_date": "Thu, 16 Sep 2021 00:13:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> I'm not sure why you couldn't implement an Oracle-style outer join\n> with such a hook?\n\nTry it.\n\n> The requirement is that the parser can't leak any\n> node that the rest of the system doesn't know about, but you can do\n> what you want inside the parser.\n\nThat's not what the patch actually does, though. It only replaces\nthe grammar, not semantic analysis. So you couldn't associate the\n(+)-decorated WHERE clause with the appropriate join. (And no,\nI will not accept that it's okay to perform catalog lookups in\nthe grammar to get around that. See comment at the head of gram.y.)\n\nIn general, I'm having a hard time believing that anything very\ninteresting can be done at only the grammar level without changing\nthe parse analysis phase. That's not unrelated to the restriction\nthat the grammar can't do catalog accesses. Maybe with some fundamental\nrestructuring, we could get around that issue ... 
but this patch isn't\ndoing any fundamental restructuring, it's just putting a hook where it's\neasy to do so. We've often found that such hooks aren't as useful as\nthey initially seem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Sep 2021 12:57:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Thu, Sep 16, 2021 at 12:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > The requirement is that the parser can't leak any\n> > node that the rest of the system doesn't know about, but you can do\n> > what you want inside the parser.\n>\n> That's not what the patch actually does, though. It only replaces\n> the grammar, not semantic analysis. So you couldn't associate the\n> (+)-decorated WHERE clause with the appropriate join. (And no,\n> I will not accept that it's okay to perform catalog lookups in\n> the grammar to get around that. See comment at the head of gram.y.)\n\nI never said that one should do catalog lookup for that? What I said\nis that you can do a specific semantic analysis pass in the hook if\nyou know that you can have extensible nodes in your parsetree, and you\ncan do that with that hook unless I'm missing something?\n\nYes that's not ideal, but I don't see how it can be worse than writing\nsome middleware that parses the query, rewrite it to postgres style\nsql on the fly so that postgres can parse it again. I'm also not sure\nhow the semantic analysis could be made generally extensible, if\npossible at all, so that's the best I can propose.\n\nIf that approach is a deal breaker then fine I can accept it.\n\n\n", "msg_date": "Thu, 16 Sep 2021 01:23:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." 
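[Editorial note: the "first pass to transform your custom raw statement into a valid RawStmt" approach from the preceding messages can be sketched at the string level: the custom parser strips the syntax the core grammar does not know and records it in a flag, so the rest of the system only ever sees standard SQL. This is a self-contained toy, not hypopg or PostgreSQL code; the CREATE HYPOTHETICAL INDEX rewrite is an assumption built on the example mentioned earlier in the thread.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Assumed custom syntax that the core grammar would reject. */
#define CUSTOM_PREFIX "CREATE HYPOTHETICAL INDEX"

/*
 * First pass: rewrite the custom statement into standard SQL and
 * remember what was stripped in a flag.  Returns true if a rewrite
 * happened; otherwise the query is passed through unchanged, i.e.
 * the core parser would see exactly what the user typed.
 */
static bool
rewrite_hypothetical(const char *query, char *out, size_t outlen,
                     bool *is_hypothetical)
{
    if (strncmp(query, CUSTOM_PREFIX, strlen(CUSTOM_PREFIX)) == 0)
    {
        *is_hypothetical = true;
        snprintf(out, outlen, "CREATE INDEX%s",
                 query + strlen(CUSTOM_PREFIX));
        return true;
    }
    *is_hypothetical = false;
    snprintf(out, outlen, "%s", query);
    return false;
}
```

The cost noted above still applies: the statement is walked once by this rewrite and a second time by the regular parse analysis.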
}, { "msg_contents": "On Thu, Sep 16, 2021 at 1:23 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Sep 16, 2021 at 12:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > The requirement is that the parser can't leak any\n> > > node that the rest of the system doesn't know about, but you can do\n> > > what you want inside the parser.\n> >\n> > That's not what the patch actually does, though. It only replaces\n> > the grammar, not semantic analysis. So you couldn't associate the\n> > (+)-decorated WHERE clause with the appropriate join. (And no,\n> > I will not accept that it's okay to perform catalog lookups in\n> > the grammar to get around that. See comment at the head of gram.y.)\n>\n> I never said that one should do catalog lookup for that? What I said\n> is that you can do a specific semantic analysis pass in the hook if\n> you know that you can have extensible nodes in your parsetree, and you\n> can do that with that hook unless I'm missing something?\n\nAh, now that I think more about it I think that you're talking about\nunqualified fields? I was naively assuming that those wouldn't be\nallowed by Oracle, but I guess that's wishful thinking.\n\n\n", "msg_date": "Thu, 16 Sep 2021 01:44:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Hi,\n\nOn 2021-09-15 12:57:00 -0400, Tom Lane wrote:\n> That's not what the patch actually does, though. It only replaces\n> the grammar, not semantic analysis. So you couldn't associate the\n> (+)-decorated WHERE clause with the appropriate join. (And no,\n> I will not accept that it's okay to perform catalog lookups in\n> the grammar to get around that. See comment at the head of gram.y.)\n\n> In general, I'm having a hard time believing that anything very\n> interesting can be done at only the grammar level without changing\n> the parse analysis phase. 
That's not unrelated to the restriction\n> that the grammar can't do catalog accesses. Maybe with some fundamental\n> restructuring, we could get around that issue ... but this patch isn't\n> doing any fundamental restructuring, it's just putting a hook where it's\n> easy to do so. We've often found that such hooks aren't as useful as\n> they initially seem.\n\nAgreed - it doesn't make sense to me to have a hook that only replaces raw\nparsing, without also hooking into parse-analysis. ISTM that the least a\npatchset going for a parser hook would have to do is to do sufficient\nrestructuring so that one could hook together into both raw parsing and\nanalysis. It could still be two callbacks, but perhaps we'd ensure that\nthey're both set.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Sep 2021 12:55:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Wed, Sep 15, 2021 at 3:55 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2021-09-15 12:57:00 -0400, Tom Lane wrote:\n> > That's not what the patch actually does, though. It only replaces\n> > the grammar, not semantic analysis. So you couldn't associate the\n> > (+)-decorated WHERE clause with the appropriate join. (And no,\n> > I will not accept that it's okay to perform catalog lookups in\n> > the grammar to get around that. See comment at the head of gram.y.)\n>\n> > In general, I'm having a hard time believing that anything very\n> > interesting can be done at only the grammar level without changing\n> > the parse analysis phase. That's not unrelated to the restriction\n> > that the grammar can't do catalog accesses. Maybe with some fundamental\n> > restructuring, we could get around that issue ... but this patch isn't\n> > doing any fundamental restructuring, it's just putting a hook where it's\n> > easy to do so. 
We've often found that such hooks aren't as useful as\n> > they initially seem.\n>\n> Agreed - it doesn't make sense to me to have a hook that only replaces raw\n> parsing, without also hooking into parse-analysis. ISTM that the least a\n> patchset going for a parser hook would have to do is to do sufficient\n> restructuring so that one could hook together into both raw parsing and\n> analysis. It could still be two callbacks, but perhaps we'd ensure that\n> they're both set.\n>\n\nThis is a bad example as it doesn't require semantic analysis from\nPostgres. While most of the tools out there tend to do simple replacement,\nthis can be done within a custom parser by simply walking its own AST,\nevaluating join conditions against the expression, and rewriting the join\naccordingly. Or, do you have an example that couldn't be done this way\nwithin a custom parser?\n\n-- \nJonah H. Harris", "msg_date": "Wed, 15 Sep 2021 16:35:53 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Hi,\n\nOn 2021-09-15 16:35:53 -0400, Jonah H. Harris wrote:\n> On Wed, Sep 15, 2021 at 3:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-09-15 12:57:00 -0400, Tom Lane wrote:\n> > Agreed - it doesn't make sense to me to have a hook that only replaces raw\n> > parsing, without also hooking into parse-analysis. ISTM that the least a\n> > patchset going for a parser hook would have to do is to do sufficient\n> > restructuring so that one could hook together into both raw parsing and\n> > analysis. It could still be two callbacks, but perhaps we'd ensure that\n> > they're both set.\n> >\n> \n> This is a bad example as it doesn't require semantic analysis from\n> Postgres.\n\n\"it\"? I assume you mean a different type of join? 
If so, I'm highly doubtful -\nwithout semantic analysis you can't really handle column references.\n\n\n> While most of the tools out there tend to do simple replacement,\n> this can be done within a custom parser by simply walking its own AST,\n> evaluating join conditions against the expression, and rewriting the join\n> accordingly. Or, do you have an example that couldn't be done this way\n> within a custom parser?\n\nYou cannot just \"evaluate conditions\" in a raw parse tree... You don't even\nknow what things are functions, columns etc, nor to what relation a column\nbelongs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Sep 2021 13:41:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Agreed - it doesn't make sense to me to have a hook that only replaces raw\n> parsing, without also hooking into parse-analysis. ISTM that the least a\n> patchset going for a parser hook would have to do is to do sufficient\n> restructuring so that one could hook together into both raw parsing and\n> analysis. It could still be two callbacks, but perhaps we'd ensure that\n> they're both set.\n\nThe other problem here is that a simple call-this-instead-of-that\ntop-level hook doesn't seem all that useful anyway, because it leaves\nyou with the task of duplicating a huge amount of functionality that\nyou're then going to make some tweaks within. That's already an issue\nwhen you're just thinking about the grammar, and if you have to buy\ninto it for parse analysis too, I doubt that it's going to be very\npractical. 
If, say, you'd like to support some weird function that\nrequires special parsing and analysis rules, I don't see how you get\nthat out of this without first duplicating a very large fraction of\nsrc/backend/parser/.\n\n(As a comparison point, we do have a top-level hook for replacing\nthe planner; but I have never heard of anyone actually doing so.\nThere are people using that hook to *wrap* the planner with some\nbefore-and-after processing, which is quite a different thing.)\n\nI don't have any better ideas to offer :-( ... but I very much fear\nthat the approach proposed here is a dead end.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Sep 2021 16:51:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Hi,\n\nOn 2021-09-15 16:51:37 -0400, Tom Lane wrote:\n> The other problem here is that a simple call-this-instead-of-that\n> top-level hook doesn't seem all that useful anyway, because it leaves\n> you with the task of duplicating a huge amount of functionality that\n> you're then going to make some tweaks within. That's already an issue\n> when you're just thinking about the grammar, and if you have to buy\n> into it for parse analysis too, I doubt that it's going to be very\n> practical. If, say, you'd like to support some weird function that\n> requires special parsing and analysis rules, I don't see how you get\n> that out of this without first duplicating a very large fraction of\n> src/backend/parser/.\n\nWe do have a small amount of infrastructure around this - the hackery that\nplpgsql uses. That's not going to help you with everything, but I think it\nshould be be enough to recognize e.g. additional top-level\nstatements. Obviously not enough to intercept parsing deeper into a statement,\nbut at least something.\n\nAnd parse-analysis for some types of things will be doable with the current\ninfrastructure, by e.g. 
handling the new top-level statement in the hook, and\nthen passing the buck to the normal parse analysis for e.g. expressions in\nthat.\n\nObviously not going to get you that far...\n\n\n> (As a comparison point, we do have a top-level hook for replacing\n> the planner; but I have never heard of anyone actually doing so.\n> There are people using that hook to *wrap* the planner with some\n> before-and-after processing, which is quite a different thing.)\n\nCitus IIRC has some paths that do not end up calling into the standard\nplanner, but only for a few simplistic cases.\n\n\n> I don't have any better ideas to offer :-( ... but I very much fear\n> that the approach proposed here is a dead end.\n\nI unfortunately don't see a good way forward without changing the way we do\nparsing on a more fundamental level :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Sep 2021 14:40:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Thu, Sep 16, 2021 at 5:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I don't have any better ideas to offer :-( ... but I very much fear\n> > that the approach proposed here is a dead end.\n>\n> I unfortunately don't see a good way forward without changing the way we do\n> parsing on a more fundamental level :(.\n\nUsing the ExtensibleNode infrastructure, I can see two ways to try to\nleverage that.\n\nFirst one would be to require modules to wrap their RawStmt->stmt in\nan ExtensibleNode if they want to do anything that requires semantic\nanalysis, and handle such node in transformStmt() with another hook.\nI think it would allow modules to do pretty much anything, at the cost\nof walking the stmt twice and duplicating possibly huge amount of\nanalyze.c and friends.\n\nThe other one would be to allow the parser to leak ExtensibleNode in\nthe middle of the RawStmt and catch them in the transform* functions,\nwith e.g. 
some generic transformExtensibleNode(pstate, node,\nsome_identifier...) (the identifier giving both the general transform\naction and some secondary info, like ParseExprKind for expressions).\nThis would avoid the downsides of the first approach, but would\nrequire to call this new hook in a bunch of places.\n\nOr we could combine both approaches so that the most common use cases,\nlike transformExprRecurse(), would be easily handled while more exotic\ncases will have to go the hard way. Parser authors could still ask\nfor adding a new call to this new hook to ease their work in the next\nmajor version.\n\nWould any of that be a reasonable approach?\n\n\n", "msg_date": "Thu, 16 Sep 2021 12:33:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Thu, 16 Sept 2021 at 05:33, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Would any of that be a reasonable approach?\n\nThe way I summarize all of the above is that\n1) nobody is fundamentally opposed to the idea\n2) we just need to find real-world example(s) and show that any\nassociated in-core patch provides all that is needed in a clean way,\nsince that point is currently in-doubt by senior committers.\n\nSo what is needed is some actual prototypes that explore this. 
I guess\nthat means they have to be open source, but those examples could be\nunder a different licence, as long as the in-core patch is clearly a\nproject submission to PostgreSQL.\n\nI presume a few real-world examples could be:\n* Grammar extensions to support additional syntax for Greenplum, Citus, XL\n* A grammar that adds commands for an extension, such as pglogical\n(Jim's example)\n* A strict SQL Standard grammar/parser\n* GQL implementation\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 23 Sep 2021 07:37:27 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Thu, Sep 23, 2021 at 07:37:27AM +0100, Simon Riggs wrote:\n> On Thu, 16 Sept 2021 at 05:33, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > Would any of that be a reasonable approach?\n> \n> The way I summarize all of the above is that\n> 1) nobody is fundamentally opposed to the idea\n> 2) we just need to find real-world example(s) and show that any\n> associated in-core patch provides all that is needed in a clean way,\n> since that point is currently in-doubt by senior committers.\n> \n> So what is needed is some actual prototypes that explore this. I guess\n> that means they have to be open source, but those examples could be\n> under a different licence, as long as the in-core patch is clearly a\n> project submission to PostgreSQL.\n> \n> I presume a few real-world examples could be:\n> * Grammar extensions to support additional syntax for Greenplum, Citus, XL\n> * A grammar that adds commands for an extension, such as pglogical\n> (Jim's example)\n> * A strict SQL Standard grammar/parser\n> * GQL implementation\n\nAs I mentioned, there's at least one use case that would work with that\napproach that I will be happy to code in hypopg, which is an open source\nproject. 
As a quick prototype, here's a basic overview of how I can use this\nhook to implement a CREATE HYPOTHETICAL INDEX command:\n\nrjuju=# LOAD 'hypopg';\nLOAD\nrjuju=# create hypothetical index meh on t1(id);\nCREATE INDEX\nrjuju=# explain select * from t1 where id = 1;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Index Scan using \"<13543>btree_t1_id\" on t1 (cost=0.04..8.05 rows=1 width=13)\n Index Cond: (id = 1)\n(2 rows)\n\nrjuju=# \\d t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n id | integer | | |\n val | text | | |\n\n\nMy POC's grammar is only like:\n\nCREATE HYPOTHETICAL INDEX opt_index_name ON relation_expr '(' index_params ')'\n\t{\n\t\tIndexStmt *n = makeNode(IndexStmt);\n\t\tn->idxname = $4;\n\t\tn->relation = $6;\n\t\tn->accessMethod = DEFAULT_INDEX_TYPE;\n\t\tn->indexParams = $8;\n\t\tn->options = list_make1(makeDefElem(\"hypothetical\", NULL, -1));\n\t\t$$ = (Node *) n;\n\t}\n\nas I'm not willing to import the whole CREATE INDEX grammar for now for a patch\nthat may be rejected. I can however publish this POC if that helps. Note\nthat once my parser returns this parse tree, all I need to do is intercept\nIndexStmt containing this option in a utility_hook and run my code rather than\na plain DefineIndex(), which works as intended as I showed above.\n\nOne could easily imagine similar usage to extend existing commands, like\nimplementing a new syntax on top of CREATE TABLE to implement an automatic\npartition creation grammar (which would return multiple CreateStmt),\nor even a partition manager.\n\nI'm not an expert in other RDBMS syntax, but maybe you could use such a\nhook to implement SQL Server or mysql syntax, which use at least different\nquoting rules. 
Maybe Amazon people could confirm that as it looks like they\nimplemented an SQL Server parser using a similar hook?\n\nSo yes you can't create new commands or implement grammars that require\nadditional semantic analysis with this hook, but I think that there are still\nreal use cases that can be implemented using only a different parser.\n\n\n", "msg_date": "Thu, 23 Sep 2021 21:31:58 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> My POC's grammar is only like:\n\n> CREATE HYPOTHETICAL INDEX opt_index_name ON relation_expr '(' index_params ')'\n> \t{\n> \t\tIndexStmt *n = makeNode(IndexStmt);\n> \t\tn->idxname = $4;\n> \t\tn->relation = $6;\n> \t\tn->accessMethod = DEFAULT_INDEX_TYPE;\n> \t\tn->indexParams = $8;\n> \t\tn->options = list_make1(makeDefElem(\"hypothetical\", NULL, -1));\n> \t\t$$ = (Node *) n;\n> \t}\n\nI'm not too impressed by this example, because there seems little\nreason why you couldn't just define \"hypothetical\" as an index_param\noption, and not need to touch the grammar at all.\n\n> as I'm not willing to import the whole CREATE INDEX grammar for now for a patch\n> that may be rejected.\n\nThe fact that that's so daunting seems to me to be a perfect illustration\nof the problems with this concept. Doing anything interesting with a\nhook like this will create a maintenance nightmare, because you'll have\nto duplicate (and track every change in) large swaths of gram.y. 
To the\nextent that you fail to, say, match every detail of the core's expression\ngrammar, you'll be creating a crappy user experience.\n\n> that once my parser returns this parse tree, all I need to do is intercept\n> IndexStmt containing this option in a utility_hook and run my code rather than\n> a plain DefineIndex(), which works as intended as I showed above.\n\nAnd I'm even less impressed by the idea of half a dozen extensions\neach adding its own overhead to the parser and also to ProcessUtility\nso that they can process statements in this klugy, highly-restricted\nway.\n\nI do have sympathy for the idea that extensions would like to define\ntheir own statement types. I just don't see a practical way to do it\nin our existing parser infrastructure. This patch certainly doesn't\noffer that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Sep 2021 10:21:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "On Thu, Sep 23, 2021 at 10:21:20AM -0400, Tom Lane wrote:\n> \n> I do have sympathy for the idea that extensions would like to define\n> their own statement types. I just don't see a practical way to do it\n> in our existing parser infrastructure. This patch certainly doesn't\n> offer that.\n\nAllowing extensions to define their own (utility) statement type is just a\nmatter of allowing ExtensibleNode as top level statement. As far as I can\nsee the only change required for that is to give those a specific command tag\nin CreateCommandTag(), since transformStmt() default to emitting a utility\ncommand. 
You can then easily intercept such statement in the utility hook and\nfetch your custom struct.\n\nI could do that but I'm assuming that you still wouldn't be satisfied as\ncustom parser would still be needed, whihc may or may not require to\ncopy/paste chunks of the core grammar?\n\nIf so, do you have any suggestion for an approach you would accept?\n\n\n", "msg_date": "Fri, 24 Sep 2021 14:33:59 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." }, { "msg_contents": "Hi,\n\nOn Fri, Sep 24, 2021 at 02:33:59PM +0800, Julien Rouhaud wrote:\n> On Thu, Sep 23, 2021 at 10:21:20AM -0400, Tom Lane wrote:\n> > \n> > I do have sympathy for the idea that extensions would like to define\n> > their own statement types. I just don't see a practical way to do it\n> > in our existing parser infrastructure. This patch certainly doesn't\n> > offer that.\n> \n> Allowing extensions to define their own (utility) statement type is just a\n> matter of allowing ExtensibleNode as top level statement. As far as I can\n> see the only change required for that is to give those a specific command tag\n> in CreateCommandTag(), since transformStmt() default to emitting a utility\n> command. You can then easily intercept such statement in the utility hook and\n> fetch your custom struct.\n> \n> I could do that but I'm assuming that you still wouldn't be satisfied as\n> custom parser would still be needed, whihc may or may not require to\n> copy/paste chunks of the core grammar?\n> \n> If so, do you have any suggestion for an approach you would accept?\n\nGiven the total lack of answer on the various improvements I suggested, I'm\nassuming that no one is interested in that feature, so I'm marking it as\nRejected.\n\n\n", "msg_date": "Mon, 17 Jan 2022 08:33:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hook for extensible parsing." } ]
[ { "msg_contents": "I found a nasty performance problem in commit 824bf7190: given the\nright sort of regex, checkmatchall() takes an unreasonable amount\nof time. For example,\n\nregression=# SELECT regexp_matches('', '(.|){20}','');\n regexp_matches \n----------------\n {\"\"}\n(1 row)\n\nTime: 129.213 ms\nregression=# SELECT regexp_matches('', '(.|){25}','');\n regexp_matches \n----------------\n {\"\"}\n(1 row)\n\nTime: 4101.416 ms (00:04.101)\nregression=# SELECT regexp_matches('', '(.|){30}','');\n regexp_matches \n----------------\n {\"\"}\n(1 row)\n\nTime: 130803.927 ms (02:10.804)\n\nThat's quite awful compared to v13, where these cases take\nonly a couple ms.\n\nWorse still, you can't get out of it with control-C, because\ncheckmatchall_recurse lacks any check-for-interrupt.\n\nThe problem here is basically that we're willing to recursively\nexamine all paths out of the same NFA state over and over, once for\neach possible way of arriving at that state. That's dumb and we can\ndo better, though it takes a little more code and some more memory.\nThe attached patch applies a few different methods to make this\nbetter:\n\n* Before starting the recursive search, do a linear-time pass\nthrough all the states to check whether there are any non-RAINBOW\narcs. This allows failing fast for most non-matchall NFAs.\n\n* Memo-ize the results of the recursive search, by storing an\narray of possible path lengths for each state after we've examined\nit once.\n\n* Rewrite the checks for pseudocolor arcs to make them linear\ntime rather than O(N^2) --- I decided I'd better do that too,\nafter noting that the problem cases had fairly large numbers\nof pre-state outarcs. This makes them cheap enough to do\nbefore the recursive search not after.\n\n* Put a heuristic upper bound on the number of NFA states for\nwhich we'll attempt this optimization at all. The main reason\nfor this is to bound the amount of memory we can expend on\nmemoization results. 
I think that it will result in little\nif any degradation in our ability to recognize matchall NFAs,\nbecause of the existing restriction that we can't represent\ncases involving path lengths that are finite but more than DUPINF.\nIf there are a lot more than DUPINF states then (I think) it becomes\npretty likely that we'd fail due to that restriction anyhow.\nHowever, I've not made much attempt to quantify that argument;\nI just chose DUPINF * 4 out of the air.\n\n* Just in case that's not enough to fix things, add a cancel check\nwithin checkmatchall_recurse.\n\nThe main thing I find a bit ugly about this solution is that\nI'm abusing the state->tmp fields (which are declared struct state *)\nto hold the memoization results (which are \"bool *\"). It'd be\npossible to avoid that by allocating an additional \"bool **\" array\nindexed by state number, but whether it's worth that depends on how\nallergic you are to weird pointer casts.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 01 May 2021 15:46:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Regex performance regression induced by match-all code" }, { "msg_contents": "On Sat, May 1, 2021, at 21:46, Tom Lane wrote:\n> I found a nasty performance problem in commit 824bf7190: given the\n> right sort of regex, checkmatchall() takes an unreasonable amount\n> of time.\n\nNice catch.\n\n> fix-exponential-cost-of-checkmatchall-1.patch\n\nI've successfully tested the patch on the regex corpus:\n\nSELECT\n is_match <> (subject ~ pattern),\n captured IS DISTINCT FROM regexp_match(subject, pattern, flags),\n COUNT(*)\nFROM performance_test\nGROUP BY 1,2\nORDER BY 1,2;\n?column? | ?column? 
| count\n----------+----------+---------\nf | f | 3253889\n(1 row)\n\nHEAD (651d005e76bc0b9542615f609b4d0d946035dc58)\nTime: 94096.020 ms (01:34.096)\nTime: 93102.287 ms (01:33.102)\nTime: 93333.746 ms (01:33.334)\n\nfix-exponential-cost-of-checkmatchall-1.patch\nTime: 95247.529 ms (01:35.248)\nTime: 92617.502 ms (01:32.618)\nTime: 93259.700 ms (01:33.260)\n\nI've also tested the problematic type of regexes:\n\nHEAD (651d005e76bc0b9542615f609b4d0d946035dc58)\nSELECT regexp_matches('', '(.|){20}','');\nTime: 61.613 ms\nSELECT regexp_matches('', '(.|){25}','');\nTime: 1928.674 ms (00:01.929)\nSELECT regexp_matches('', '(.|){27}','');\nTime: 7789.601 ms (00:07.790)\n\nfix-exponential-cost-of-checkmatchall-1.patch\nSELECT regexp_matches('', '(.|){20}','');\nTime: 0.965 ms\nSELECT regexp_matches('', '(.|){25}','');\nTime: 0.586 ms\nSELECT regexp_matches('', '(.|){27}','');\nTime: 0.788 ms\n\nNice improvement, thanks.\n\n/Joel\n", "msg_date": "Sun, 02 May 2021 07:46:39 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Regex performance regression induced by match-all code" },
{ "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Sat, May 1, 2021, at 21:46, Tom Lane wrote:\n>> I found a nasty performance problem in commit 824bf7190: given the\n>> right sort of regex, checkmatchall() takes an unreasonable amount\n>> of time.\n\n> Nice catch.\n> I've successfully tested the patch on the regex corpus:\n\nThanks for testing!\n\n>> The main thing I find a bit ugly about this solution is that\n>> I'm abusing the state->tmp fields (which are declared struct state *)\n>> to hold the memoization results (which are \"bool *\"). It'd be\n>> possible to avoid that by allocating an additional \"bool **\" array\n>> indexed by state number, but whether it's worth that depends on how\n>> allergic you are to weird pointer casts.\n\nI tried rewriting it like that, and I have to say I do like it better\nthat way.
I think it's clearer, which seems well worth one extra\nmalloc.\n\nAlso, I poked a little more at the question of the heuristic limit\non number of states, by checking the actual numbers of states in\nvarious ways of writing matchall regexes. It looks to me like\nwe can cut the limit to DUPINF*2 and still have lots of daylight,\nbecause reasonable (and even not so reasonable) ways to write a\npattern that can match up to K characters seem to come out with\nnot much more than K states.\n\nHence, PFA v2. I also added a couple of test cases based on\nlooking at code coverage in this area, as well as a case that\ntakes an unreasonably long time without this fix.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 02 May 2021 12:53:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Regex performance regression induced by match-all code" }, { "msg_contents": "On Sun, May 2, 2021, at 18:53, Tom Lane wrote:\n> fix-exponential-cost-of-checkmatchall-2.patch\n\nSuccessfully tested.\n\nSELECT\n is_match <> (subject ~ pattern),\n captured IS DISTINCT FROM regexp_match(subject, pattern, flags),\n COUNT(*)\nFROM performance_test\nGROUP BY 1,2\nORDER BY 1,2;\n?column? | ?column? 
| count\n----------+----------+---------\nf | f | 3253889\n(1 row)\n\nTime: 94149.542 ms (01:34.150)\nTime: 91565.305 ms (01:31.565)\nTime: 91565.305 ms (01:31.565)\n\nSELECT regexp_matches('', '(.|){20}','');\nregexp_matches\n----------------\n{\"\"}\n(1 row)\n\nTime: 0.541 ms\n\nSELECT regexp_matches('', '(.|){25}','');\nregexp_matches\n----------------\n{\"\"}\n(1 row)\n\nTime: 0.724 ms\n\nSELECT regexp_matches('', '(.|){27}','');\nregexp_matches\n----------------\n{\"\"}\n(1 row)\n\nTime: 0.782 ms\n\n/Joel\n", "msg_date": "Mon, 03 May 2021 21:31:27 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Regex performance regression induced by match-all code" },
{ "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Sun, May 2, 2021, at 18:53, Tom Lane wrote:\n>> fix-exponential-cost-of-checkmatchall-2.patch\n\n> Successfully tested.\n\nAgain, thanks for checking!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 15:38:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Regex performance regression induced by match-all code" },
{ "msg_contents": "On Mon, May 3, 2021, at 21:38, Tom Lane wrote:\n> 
\"Joel Jacobson\" <joel@compiler.org <mailto:joel%40compiler.org>> writes:\n> > On Sun, May 2, 2021, at 18:53, Tom Lane wrote:\n> >> fix-exponential-cost-of-checkmatchall-2.patch\n> \n> > Successfully tested.\n> \n> Again, thanks for checking!\n\nYou're welcome, thanks for coding!\n\n/Joel\n", "msg_date": "Mon, 03 May 2021 21:43:13 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Regex performance regression induced by match-all code" } ]
[ { "msg_contents": "On Sat, May 1, 2021 at 12:55 AM Yu Zhao <yuzhao@google.com> wrote:\n>\n> Greetings,\n>\n> We are proposing a patchset to the Linux kernel, which brings very\n> promising performance improvements under memory pressure. I'm\n> wondering if anybody would be interested in benchmarking it on top of\n> the latest 5.12 kernel.\n> https://lore.kernel.org/linux-mm/20210413065633.2782273-1-yuzhao@google.com/\n>\n> My knowledge in PostgreSQL is minimal. But I'm happy to provide any\n> assistance you might need from the kernel side.\n>\n> Thanks in advance for any expertise you are able to provide. I look\n> forward to hearing from you.\n\nSorry about the cross-posting. I didn't know it's prohibited until my\nprevious message got rejected.\n\nI have removed pgsql-performance@, because pgsql-hacker@ seems more\nrelevant to me. Please let me know if I got it wrong again.\n\nThanks!\n\n\n", "msg_date": "Sat, 1 May 2021 14:27:29 -0600", "msg_from": "Yu Zhao <yuzhao@google.com>", "msg_from_op": true, "msg_subject": "Re: performance benchmark" } ]
[ { "msg_contents": "Hi hackers,\n\nI recently did a pg_upgrade to 13 at $work, and noticed it did not\npreserve the comments I had added locally on the pg_* predefined roles.\n\nWe have a bgworker that runs periodically and makes a report of existing\nroles, memberships, and grants, showing the comments on the roles, so\nI had added comments on the predefined ones so they would not look naked\nand unexplained in the report.\n\nAll I had to do was go back to a pre-upgrade version of the report\nand re-add the comments.\n\nIs there an inherent technical or policy reason for pg_upgrade not to\npreserve comments on predefined roles (or on predefined objects generally)?\n\nFor that matter, would it be objectionable for the predefined roles to\ncome with comments right out of the box? I guess one possible objection\ncould be \"what next? comments on everything in pg_catalog?\", but perhaps\nthere is a way to distinguish the case of predefined roles: they are a\nrelatively recent, um, encroachment into a namespace traditionally\nmanaged by the admin, so maybe there's that much extra reason for them\nto come with explanations attached.\n\nAnother objection might be that they'd presumably be subject to translation,\nand would need some way for initdb to install the proper localized versions.\n\nSo maybe it is simpler to leave them uncommented by default, but perhaps\ndesirable for pg_upgrade to preserve comments locally added.\n\nI've appended the comments we use for them at $work, anyway.\n\nRegards,\n-Chap\n\n\n\n\n\nCOMMENT ON ROLE pg_execute_server_program IS 'Allow executing programs on\nthe database server as the user the database runs as with COPY and other\nfunctions which allow executing a server-side program. Since PG 11.';\n\nCOMMENT ON ROLE pg_monitor IS 'Read/execute various monitoring views and\nfunctions. This role is a member of pg_read_all_settings, pg_read_all_stats\nand pg_stat_scan_tables. 
Since PG 10.';\n\nCOMMENT ON ROLE pg_read_all_settings IS 'Read all configuration variables,\neven those normally visible only to superusers. Since PG 10.';\n\nCOMMENT ON ROLE pg_read_all_stats IS 'Read all pg_stat_* views and use\nvarious statistics related extensions, even those normally visible only to\nsuperusers. Since PG 10.';\n\nCOMMENT ON ROLE pg_read_server_files IS 'Allow reading files from any\nlocation the database user can access on the server with COPY and other\nfile-access functions. Since PG 11.';\n\nCOMMENT ON ROLE pg_signal_backend IS 'Send signals to other backends (eg:\ncancel query, terminate). Since PG 9.6.';\n\nCOMMENT ON ROLE pg_stat_scan_tables IS 'Execute monitoring functions that\nmay take ACCESS SHARE locks on tables, potentially for a long time. Since PG\n10.';\n\nCOMMENT ON ROLE pg_write_server_files IS 'Allow writing to files in any\nlocation the database user can access on the server with COPY and other\nfile-access functions. Since PG 11.';\n\n\n", "msg_date": "Sun, 2 May 2021 15:52:19 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "pg_upgrade not preserving comments on predefined roles" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Is there an inherent technical or policy reason for pg_upgrade not to\n> preserve comments on predefined roles (or on predefined objects generally)?\n\nI think this is absolutely out of scope for pg_dump. We generally expect\nthat system objects' properties are not dumped, because they might be\ndifferent in a newer version, and overwriting the system definition with\na possibly-obsolete version would be a bad thing.\n\nYou could quibble about comments being a different matter, but I don't\nbuy it.\n\nAlso, our one venture into this space (allowing custom modifications of\nsystem-object privileges to be propagated by pg_dump) has IMV been an\nunmitigated disaster. 
Years later, it *still* has unresolved bugs and\ndefinitional issues. So I'm going to run away screaming from any proposal\nto do likewise for other object properties.\n\n> For that matter, would it be objectionable for the predefined roles to\n> come with comments right out of the box?\n\nThat, however, seems reasonable enough. We deliver built-in functions and\noperators with comments, so why not roles?\n\n> Another objection might be that they'd presumably be subject to translation,\n> and would need some way for initdb to install the proper localized versions.\n\nWe've not worried about that for functions/operators.\n\n> I've appended the comments we use for them at $work, anyway.\n\nIMO these would have to be shortened quite a bit to be friendly for\n\"\\du+\" displays. I'm not against the concept though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 May 2021 19:25:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade not preserving comments on predefined roles" }, { "msg_contents": "On 05/02/21 19:25, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> I've appended the comments we use for them at $work, anyway.\n> \n> IMO these would have to be shortened quite a bit to be friendly for\n> \"\\du+\" displays. I'm not against the concept though.\n\nI'm certainly not wedded to the exact wording we used here.\n\nThat said, our particular reason for having them (the bgworker-produced\nroles/memberships/permissions graphviz thing) is not meant just for the\neyes of technical staff, but also to be presentable to, say, compliance\nfolks coming around asking about security provisions. So making the\ncomments so terse they'd require expansion for such an audience\nmight kind of defeat that purpose.\n\nIn pg_proc there seem to be a handful of things with 80 to 90 character\ncomments and one (a statistics function) at 111. 
Could that be taken\nfor precedent as to reasonable length?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 6 May 2021 20:12:54 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade not preserving comments on predefined roles" } ]
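[Editorial aside on the thread above: a sketch of what the discussed built-in role comments could look like. The wordings below are purely illustrative — the thread did not settle on any, and initdb installed no role comments at the time — but shobj_description() is the standard way to read back a comment on a shared object such as a role.]

```sql
-- Illustrative only: hypothetical shortened comments in the spirit of the
-- thread; not wording that initdb ships or that the thread agreed on.
COMMENT ON ROLE pg_signal_backend IS 'Signal other backends (cancel query, terminate)';
COMMENT ON ROLE pg_read_all_stats IS 'Read all pg_stat_* views and statistics extensions';

-- psql's \du+ shows these in its Description column; the same text is
-- available in SQL via the shared-object comment lookup function:
SELECT rolname, shobj_description(oid, 'pg_authid') AS description
FROM pg_roles
WHERE rolname LIKE 'pg\_%';
```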
[ { "msg_contents": "Hi,\n\nIn apply_handle_truncate, the following comment before ExecuteTruncateGuts\nsays that it defaults to RESTRICT even if the CASCADE option has been\nspecified in publisher's TRUNCATE command.\n    /*\n     * Even if we used CASCADE on the upstream primary we explicitly default\n     * to replaying changes without further cascading. This might be later\n     * changeable with a user specified option.\n     */\nI tried the following use case to see if that's actually true:\n1) Created two tables tbl_pk (primary key), tbl_fk(references tbl_pk\nprimary key via foreign key) on both publisher and subscriber.\n2) In general, TRUNCATE tbl_pk; or TRUNCATE tbl_pk RESTRICT; would fail\nbecause tbl_fk is dependent on tbl_pk.\n3) TRUNCATE tbl_pk, tbl_fk; would work because the dependent table is\nspecified in the command.\n4) TRUNCATE tbl_pk CASCADE; would work because of the CASCADE option and\nboth tbl_pk and tbl_fk are truncated. When this command is run on the\npublisher, the CASCADE option is sent to the subscriber, see\nDecodeTruncate. But the apply worker ignores it and passes DROP_RESTRICT to\nExecuteTruncateGuts. Therefore, the expectation(per the comment) is that on\nthe subscriber, the behavior should be equivalent to TRUNCATE tbl_pk;, so\nan error is expected. But we are also receiving the tbl_fk in the remote\nrels along with tbl_pk, so the behavior is equivalent to (3) and both\ntbl_pk and tbl_fk are truncated.\n\nDoes the comment still hold true? Does ignoring the CASCADE option make\nsense in apply_handle_truncate, as we are receiving all the dependent\nrelations in the remote rels from the publisher?
Am I missing something?\n\nThe commit id of the feature \"Logical replication support for TRUNCATE\" is\n039eb6e92f, and adding relevant people in cc.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 3 May 2021 10:42:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 3, 2021 at 10:42 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> In apply_handle_truncate, the following comment before ExecuteTruncateGuts says that it defaults to RESTRICT even if the CASCADE option has been specified in publisher's TRUNCATE command.\n> /*\n> * Even if we used CASCADE on the upstream primary we explicitly default\n> * to replaying changes without further cascading. This might be later\n> * changeable with a user specified option.\n> */\n> I tried the following use case to see if that's actually true:\n> 1) Created two tables tbl_pk (primary key), tbl_fk(references tbl_pk primary key via foreign key) on both publisher and subscriber.\n> 2) In general, TRUNCATE tbl_pk; or TRUNCATE tbl_pk RESTRICT; would fail because tbl_fk is dependent on tbl_pk.\n> 3) TRUNCATE tbl_pk, tbl_fk; would work because the dependent table is specified in the command.\n> 4) TRUNCATE tbl_pk CASCADE; would work because of the CASCADE option and both tbl_pk and tbl_fk are truncated. When this command is run on the publisher, the CASCADE option is sent to the subscriber, see DecodeTruncate. But the apply worker ignores it and passes DROP_RESTRICT to ExecuteTruncateGuts. Therefore, the expectation(per the comment) is that on the subscriber, the behavior should be equivalent to TRUNCATE tbl_pk;, so\nan error is expected.
But we are also receiving the tbl_fk in the remote rels along with tbl_pk, so the behavior is equivalent to (3) and both tbl_pk and tbl_fk are truncated.\n>\n> Does the comment still hold true? Does ignoring the CASCADE option make sense in apply_handle_truncate, as we are receiving all the dependent relations in the remote rels from the publisher? Am I missing something?\n>\n> The commit id of the feature \"Logical replication support for TRUNCATE\" is 039eb6e92f, and adding relevant people in cc.\n\nAssume this case\npublisher: tbl_pk -> tbl_fk_pub\nsubscriber: tbl_pk-> tbl_fk_sub\n\nNow, in this case, this comment is true right because we are not\nsupposed to truncate tbl_fk_sub on the subscriber side and this should\nerror out.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 11:59:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 3, 2021 at 11:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 10:42 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In apply_handle_truncate, the following comment before ExecuteTruncateGuts says that it defaults to RESTRICT even if the CASCADE option has been specified in publisher's TRUNCATE command.\n> > /*\n> > * Even if we used CASCADE on the upstream primary we explicitly default\n> > * to replaying changes without further cascading. 
This might be later\n> > * changeable with a user specified option.\n> > */\n> > I tried the following use case to see if that's actually true:\n> > 1) Created two tables tbl_pk (primary key), tbl_fk(references tbl_pk primary key via foreign key) on both publisher and subscriber.\n> > 2) In general, TRUNCATE tbl_pk; or TRUNCATE tbl_pk RESTRICT; would fail because tbl_fk is dependent on tbl_pk.\n> > 3) TRUNCATE tbl_pk, tbl_fk; would work because the dependent table is specified in the command.\n> > 4) TRUNCATE tbl_pk CASCADE; would work because of the CASCADE option and both tbl_pk and tbl_fk are truncated. When this command is run on the publisher, the CASCADE option is sent to the subscriber, see DecodeTruncate. But the apply worker ignores it and passes DROP_RESTRICT to ExecuteTruncateGuts. Therefore, the expectation(per the comment) is that on the subscriber, the behavior should be equivalent to TRUNCATE tbl_pk;, so an error is expected. But we are also receiving the tbl_fk in the remote rels along with tbl_pk, so the behavior is equivalent to (3) and both tbl_pk and tbl_fk are truncated.\n> >\n> > Does the comment still hold true? Does ignoring the CASCADE option make sense in apply_handle_truncate, as we are receiving all the dependent relations in the remote rels from the publisher? 
Am I missing something?\n> >\n> > The commit id of the feature \"Logical replication support for TRUNCATE\" is 039eb6e92f, and adding relevant people in cc.\n>\n> Assume this case\n> publisher: tbl_pk -> tbl_fk_pub\n> subscriber: tbl_pk-> tbl_fk_sub\n>\n> Now, in this case, this comment is true right because we are not\n> supposed to truncate tbl_fk_sub on the subscriber side and this should\n> error out.\n\nHere's what I tried, let me know if I'm wrong:\n\nOn publisher:\nCREATE TABLE tbl_pk(id int primary key);\nCREATE TABLE tbl_fk(fkey int references tbl_pk(id));\nINSERT INTO tbl_pk (SELECT x FROM generate_series(1,10) x);\nINSERT INTO tbl_fk (SELECT x % 10 + 1 FROM generate_series(5,25) x);\nDROP PUBLICATION testpub;\nCREATE PUBLICATION testpub FOR TABLE tbl_pk, tbl_fk;\n\nOn subscriber:\nCREATE TABLE tbl_pk(id int primary key);\nCREATE TABLE tbl_fk(fkey int references tbl_pk(id));\nDROP SUBSCRIPTION testsub;\nCREATE SUBSCRIPTION testsub CONNECTION 'host=localhost dbname=postgres\nuser=bharath port=5432' PUBLICATION testpub;\n\nOn both publisher and subscriber to ensure that the initial rows were\nreplicated:\nSELECT count(id) FROM tbl_pk; -- non zero\nSELECT count(fkey) FROM tbl_fk; -- non zero\n\nOn publisher:\nTRUNCATE tbl_pk CASCADE;\nSELECT count(id) FROM tbl_pk; -- 0\nSELECT count(fkey) FROM tbl_fk; -- 0\n\nOn subscriber also we get to see 0 rows:\nSELECT count(id) FROM tbl_pk; -- 0\nSELECT count(fkey) FROM tbl_fk; -- 0\n\nBut the comment says that tbl_fk shouldn't be truncated as it doesn't\npass the cascade option to ExecuteTruncateGuts even though it was\nreceived from the publisher. 
This behaviour is not in accordance with\nthe comment, right?\nIf we see why this is so: the publisher sends both tbl_pk and tbl_fk\nrels to the subscriber and the TRUNCATE tbl_pk, tbl_fk; is allowed\n(see the code in heap_truncate_check_FKs) even if RESTRICT option is\nspecified.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 12:37:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 3, 2021 at 12:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 11:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 3, 2021 at 10:42 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > In apply_handle_truncate, the following comment before ExecuteTruncateGuts says that it defaults to RESTRICT even if the CASCADE option has been specified in publisher's TRUNCATE command.\n> > > /*\n> > > * Even if we used CASCADE on the upstream primary we explicitly default\n> > > * to replaying changes without further cascading. This might be later\n> > > * changeable with a user specified option.\n> > > */\n> > > I tried the following use case to see if that's actually true:\n> > > 1) Created two tables tbl_pk (primary key), tbl_fk(references tbl_pk primary key via foreign key) on both publisher and subscriber.\n> > > 2) In general, TRUNCATE tbl_pk; or TRUNCATE tbl_pk RESTRICT; would fail because tbl_fk is dependent on tbl_pk.\n> > > 3) TRUNCATE tbl_pk, tbl_fk; would work because the dependent table is specified in the command.\n> > > 4) TRUNCATE tbl_pk CASCADE; would work because of the CASCADE option and both tbl_pk and tbl_fk are truncated.
When this command is run on the publisher, the CASCADE option is sent to the subscriber, see DecodeTruncate. But the apply worker ignores it and passes DROP_RESTRICT to ExecuteTruncateGuts. Therefore, the expectation(per the comment) is that on the subscriber, the behavior should be equivalent to TRUNCATE tbl_pk;, so an error is expected. But we are also receiving the tbl_fk in the remote rels along with tbl_pk, so the behavior is equivalent to (3) and both tbl_pk and tbl_fk are truncated.\n> > >\n> > > Does the comment still hold true? Does ignoring the CASCADE option make sense in apply_handle_truncate, as we are receiving all the dependent relations in the remote rels from the publisher? Am I missing something?\n> > >\n> > > The commit id of the feature \"Logical replication support for TRUNCATE\" is 039eb6e92f, and adding relevant people in cc.\n> >\n> > Assume this case\n> > publisher: tbl_pk -> tbl_fk_pub\n> > subscriber: tbl_pk-> tbl_fk_sub\n> >\n> > Now, in this case, this comment is true right because we are not\n> > supposed to truncate tbl_fk_sub on the subscriber side and this should\n> > error out.\n>\n> Here's what I tried, let me know if I'm wrong:\n>\n> On publisher:\n> CREATE TABLE tbl_pk(id int primary key);\n> CREATE TABLE tbl_fk(fkey int references tbl_pk(id));\n> INSERT INTO tbl_pk (SELECT x FROM generate_series(1,10) x);\n> INSERT INTO tbl_fk (SELECT x % 10 + 1 FROM generate_series(5,25) x);\n> DROP PUBLICATION testpub;\n> CREATE PUBLICATION testpub FOR TABLE tbl_pk, tbl_fk;\n>\n> On subscriber:\n> CREATE TABLE tbl_pk(id int primary key);\n> CREATE TABLE tbl_fk(fkey int references tbl_pk(id));\n> DROP SUBSCRIPTION testsub;\n> CREATE SUBSCRIPTION testsub CONNECTION 'host=localhost dbname=postgres\n> user=bharath port=5432' PUBLICATION testpub;\n>\n> On both publisher and subscriber to ensure that the initial rows were\n> replicated:\n> SELECT count(id) FROM tbl_pk; -- non zero\n> SELECT count(fkey) FROM tbl_fk; -- non zero\n>\n> On 
publisher:\n> TRUNCATE tbl_pk CASCADE;\n> SELECT count(id) FROM tbl_pk; -- 0\n> SELECT count(fkey) FROM tbl_fk; -- 0\n>\n> On subscriber also we get to see 0 rows:\n> SELECT count(id) FROM tbl_pk; -- 0\n> SELECT count(fkey) FROM tbl_fk; -- 0\n>\n> But the comment says that tbl_fk shouldn't be truncated as it doesn't\n> pass the cascade option to ExecuteTruncateGuts even though it was\n> received from the publisher. This behaviour is not in accordance with\n> the comment, right?\n\nI think you are comparing the user-exposed behavior with the internal\ncode comments. The meaning of the comments is that it should not\ntruncate any table on subscriber using cascade, because there might be\nsome subscriber-specific relations that depend upon the primary table\nand those should not get truncated as a side-effect of the cascade.\n\nFor example, you can slightly change your example as below\n> On subscriber:\n> CREATE TABLE tbl_pk(id int primary key);\n> CREATE TABLE tbl_fk_sub(fkey int references tbl_pk(id)); -> this table doesn't refer to tbl_pk on the publisher\n\nSo now as part of the truncate tbl_pk the tbl_fk_subould not get\ntruncated and that is what the comment is trying to say.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:01:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 3, 2021 at 1:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I think you are comparing the user-exposed behavior with the internal\n> code comments. 
The meaning of the comments is that it should not\ntruncate any table on subscriber using cascade, because there might be\nsome subscriber-specific relations that depend upon the primary table\nand those should not get truncated as a side-effect of the cascade.\n\nFor example, you can slightly change your example as below\n> On subscriber:\n> CREATE TABLE tbl_pk(id int primary key);\n> CREATE TABLE tbl_fk_sub(fkey int references tbl_pk(id)); -> this table doesn't refer to tbl_pk on the publisher\n\nSo now as part of the truncate tbl_pk the tbl_fk_sub would not get\ntruncated and that is what the comment is trying to say.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:01:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 3, 2021 at 1:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I think you are comparing the user-exposed behavior with the internal\n> code comments.
This might be later\n * changeable with a user specified option.\n */\n\nThoughts?\n\n[1]\nOn publisher:\nDROP TABLE tbl_pk CASCADE;\nCREATE TABLE tbl_pk(id int primary key);\nINSERT INTO tbl_pk (SELECT x FROM generate_series(1,10) x);\nDROP PUBLICATION testpub;\nCREATE PUBLICATION testpub FOR TABLE tbl_pk;\n\nOn subscriber:\nDROP TABLE tbl_pk CASCADE;\nCREATE TABLE tbl_pk(id int primary key);\nDROP TABLE tbl_fk;\nCREATE TABLE tbl_fk(id1 int references tbl_pk(id));\nDROP SUBSCRIPTION testsub;\nCREATE SUBSCRIPTION testsub CONNECTION 'host=localhost dbname=postgres\nuser=bharath port=5432' PUBLICATION testpub;\nINSERT INTO tbl_fk (SELECT x FROM generate_series(1,10) x);\n\nOn publisher:\nTRUNCATE tbl_pk CASCADE;\nSELECT count(id) FROM tbl_pk; -- 0\n\nOn subscriber we get error, because the RESTRICT option is passed to\nExecuteTruncateGuts in logical apply worker and the table tbl_pk is\nreferenced by tbl_fk, so tbl_pk is not truncated.\nSELECT count(id) FROM tbl_pk; -- non-zero\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 18:08:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 3, 2021 at 6:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Having said that, isn't it good if we can provide a subscription\n> (CREATE/ALTER) level option say \"cascade\"(similar to other options\n> such as binary, synchronous_commit, stream) default being false, when\n> set to true, we send upstream CASCADE option to ExecuteTruncateGuts in\n> apply_handle_truncate? It will be useful to truncate all the dependent\n> tables in the subscriber. Users will have to use it with caution\n> though.\n\nI think this could be a useful feature in some cases. 
Suppose\nsubscriber has some table that is dependent on the subscribed table,\nin such case if the main table gets truncated it will always error out\nin subscriber, which is fine. But if user doesn’t want error and he\nis fine even if the dependent table gets truncated so I feel there\nshould be some option to set that. I think the documentation should\nclearly say the impact of setting this to true.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 May 2021 18:05:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Fri, May 7, 2021 at 6:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 6:08 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Having said that, isn't it good if we can provide a subscription\n> > (CREATE/ALTER) level option say \"cascade\"(similar to other options\n> > such as binary, synchronous_commit, stream) default being false, when\n> > set to true, we send upstream CASCADE option to ExecuteTruncateGuts in\n> > apply_handle_truncate? It will be useful to truncate all the dependent\n> > tables in the subscriber. Users will have to use it with caution\n> > though.\n>\n> I think this could be a useful feature in some cases. Suppose\n> subscriber has some table that is dependent on the subscribed table,\n> in such case if the main table gets truncated it will always error out\n> in subscriber, which is fine. But if user doesn’t want error and he\n> is fine even if the dependent table gets truncated so I feel there\n> should be some option to set that.\n>\n\nSuch a case is possible in theory but why would the user need it? We\ngenerally recommend having the same schema for relations between\npublishers and subscribers, so won't that mean that there is less\nchance of such cases? 
And after we have DDL replication, won't\ndefining a different schema for replicated objects be difficult to\nmaintain.\n\nHaving said that, I see a different use case of such an option which\nis related to the proposal [1] where the patch provides a truncate\noption to truncate tables before initial sync. The cascade option\ncould be useful in that feature to resolve some of the PK-FK issues\nraised in that thread.\n\n[1] - https://www.postgresql.org/message-id/CF3B6672-2A43-4204-A60A-68F359218A9B%40endpoint.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 May 2021 17:02:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Thu, May 20, 2021 at 5:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 6:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 3, 2021 at 6:08 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Having said that, isn't it good if we can provide a subscription\n> > > (CREATE/ALTER) level option say \"cascade\"(similar to other options\n> > > such as binary, synchronous_commit, stream) default being false, when\n> > > set to true, we send upstream CASCADE option to ExecuteTruncateGuts in\n> > > apply_handle_truncate? It will be useful to truncate all the dependent\n> > > tables in the subscriber. Users will have to use it with caution\n> > > though.\n> >\n> > I think this could be a useful feature in some cases. Suppose\n> > subscriber has some table that is dependent on the subscribed table,\n> > in such case if the main table gets truncated it will always error out\n> > in subscriber, which is fine. 
But if user doesn’t want error and he\n> > is fine even if the dependent table gets truncated so I feel there\n> > should be some option to set that.\n> >\n>\n> Such a case is possible in theory but why would the user need it? We\n> generally recommend having the same schema for relations between\n> publishers and subscribers, so won't that mean that there is less\n> chance of such cases? And after we have DDL replication, won't\n> defining a different schema for replicated objects be difficult to\n> maintain.\n\nThis proposal can be useful even after the DDL replication feature\ngets in. I think having a use case like the following is quite\npossible(in theory) without any differences in the DDLs in both the\npublisher and subscriber tables: a subscriber table which is\nsubscribed to a publisher, can act as primary key table for other\nsubscriber tables called foreign key tables, refer to the use case I\nspecified at [1]. In that case, TRUNCATE ... CASCADE on the publisher\ntable, will not truncate the subscriber foreign key tables. The\nproposal here is to pass on the CASCADE option to the subscriber so\nthat the subscriber foreign key tables too get truncated. And I agree\nthat this is a very narrow problem to solve. I came across the comment\nbefore ExecuteTruncateGuts in logical/worker.c, so I thought it's\nworth doing it.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWDFU%3DqawNQkapKeWzVnN6x5zAUzH6hq_3Yybsn6Q89DA%40mail.gmail.com\n\n> Having said that, I see a different use case of such an option which\n> is related to the proposal [1] where the patch provides a truncate\n> option to truncate tables before initial sync. The cascade option\n> could be useful in that feature to resolve some of the PK-FK issues\n> raised in that thread.\n\nThanks. 
I will respond in that thread.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 15:39:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Fri, May 21, 2021 at 3:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Having said that, I see a different use case of such an option which\n> > is related to the proposal [1] where the patch provides a truncate\n> > option to truncate tables before initial sync. The cascade option\n> > could be useful in that feature to resolve some of the PK-FK issues\n> > raised in that thread.\n>\n> Thanks. I will respond in that thread.\n\nJust for the records: I responded with my thoughts at [1].\n\n[1] - https://www.postgresql.org/message-id/CALj2ACUOkYk9WBHLmqEp9JDM333jZXCvqNQnS%3D0ukRdnTOcftg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 10:01:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Thu, May 20, 2021 at 5:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 6:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 3, 2021 at 6:08 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Having said that, isn't it good if we can provide a subscription\n> > > (CREATE/ALTER) level option say \"cascade\"(similar to other options\n> > > such as binary, synchronous_commit, stream) default being false, when\n> > > set to true, we send upstream CASCADE option to ExecuteTruncateGuts in\n> > > apply_handle_truncate? 
It will be useful to truncate all the dependent\n> > > tables in the subscriber. Users will have to use it with caution\n> > > though.\n> >\n> > I think this could be a useful feature in some cases. Suppose\n> > subscriber has some table that is dependent on the subscribed table,\n> > in such case if the main table gets truncated it will always error out\n> > in subscriber, which is fine. But if user doesn’t want error and he\n> > is fine even if the dependent table gets truncated so I feel there\n> > should be some option to set that.\n> >\n>\n> Such a case is possible in theory but why would the user need it? We\n> generally recommend having the same schema for relations between\n> publishers and subscribers, so won't that mean that there is less\n> chance of such cases? And after we have DDL replication, won't\n> defining a different schema for replicated objects be difficult to\n> maintain.\n\nI agree we suggest having the same schema but we still allow something\nextra on the subscriber side, e.g if the publisher table has T(a,b)\nthen we allow the subscriber to have T(a,b,c) right? Since this is\nlogical replication is always good that we don't enforce on what\nschema/dependent table subscriber can have unless it is of utmost\nnecessity. That's the reason we enforce that at least the table name\nof the publisher should match on subscriber and at least the column\nname of the publisher should be there on the subscriber, but there is\nno restriction on the subscriber to have a few extra columns. I think\nif we allow subscribers to have some extra FK table on the subscribed\ntable then that will only improve the flexibility. 
In the current\ncase since we don't want to truncate the subscriber's local table, we\nare restricting the cascade but then we are restricting the subscriber\nto have any FK table on the subscribed table which I think is a\nrestriction and it will be good to allow this based on some parameter.\n\nSad that, if we assume that the subscriber schema/dependent table\nshould always be the same as primary then shouldn't we just do the\nCASCADE truncate if the publisher is doing that. We are not doing\nthat to protect the truncation of some of the local tables on\nsubscribers.\n\nI think DDL replication should not change this. DDL replication will\nreplicate all object from publisher to subscriber but that doesn't\nmean that subscriber can not create some extra tables which are\ndependent on the replicated table. I think such an extension is good\nfor logical replication.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 10:32:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Sat, May 22, 2021 at 10:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 20, 2021 at 5:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, May 7, 2021 at 6:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, May 3, 2021 at 6:08 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > Having said that, isn't it good if we can provide a subscription\n> > > > (CREATE/ALTER) level option say \"cascade\"(similar to other options\n> > > > such as binary, synchronous_commit, stream) default being false, when\n> > > > set to true, we send upstream CASCADE option to ExecuteTruncateGuts in\n> > > > apply_handle_truncate? It will be useful to truncate all the dependent\n> > > > tables in the subscriber. 
Users will have to use it with caution\n> > > > though.\n> > >\n> > > I think this could be a useful feature in some cases. Suppose\n> > > subscriber has some table that is dependent on the subscribed table,\n> > > in such case if the main table gets truncated it will always error out\n> > > in subscriber, which is fine. But if user doesn’t want error and he\n> > > is fine even if the dependent table gets truncated so I feel there\n> > > should be some option to set that.\n> > >\n> >\n> > Such a case is possible in theory but why would the user need it? We\n> > generally recommend having the same schema for relations between\n> > publishers and subscribers, so won't that mean that there is less\n> > chance of such cases? And after we have DDL replication, won't\n> > defining a different schema for replicated objects be difficult to\n> > maintain.\n>\n> I agree we suggest having the same schema but we still allow something\n> extra on the subscriber side, e.g if the publisher table has T(a,b)\n> then we allow the subscriber to have T(a,b,c) right? Since this is\n> logical replication is always good that we don't enforce on what\n> schema/dependent table subscriber can have unless it is of utmost\n> necessity. That's the reason we enforce that at least the table name\n> of the publisher should match on subscriber and at least the column\n> name of the publisher should be there on the subscriber, but there is\n> no restriction on the subscriber to have a few extra columns. I think\n> if we allow subscribers to have some extra FK table on the subscribed\n> table then that will only improve the flexibility. 
In the current\n> case since we don't want to truncate the subscriber's local table, we\n> are restricting the cascade but then we are restricting the subscriber\n> to have any FK table on the subscribed table which I think is a\n> restriction and it will be good to allow this based on some parameter.\n>\n\nI don't deny that this can allow some additional cases than we allow\ntoday but was just not sure whether users really need it. If we want\nto go with such an option then as mentioned earlier, we should\nconsider another proposal for subscriber-side truncate [1] because we\nmight need a cascade operation there as well but for a slightly\ndifferent purpose.\n\n> Sad that, if we assume that the subscriber schema/dependent table\n> should always be the same as primary then shouldn't we just do the\n> CASCADE truncate if the publisher is doing that. We are not doing\n> that to protect the truncation of some of the local tables on\n> subscribers.\n>\n> I think DDL replication should not change this. DDL replication will\n> replicate all object from publisher to subscriber but that doesn't\n> mean that subscriber can not create some extra tables which are\n> dependent on the replicated table. I think such an extension is good\n> for logical replication.\n>\n\nFair enough. I mentioned DDL replication because once we have that\nthen probably lesser users want to manually perform schema alterations\nfor the replicated objects.\n\n[1] - https://www.postgresql.org/message-id/CF3B6672-2A43-4204-A60A-68F359218A9B%40endpoint.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 May 2021 11:21:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... 
CASCADE" }, { "msg_contents": "On Mon, May 24, 2021 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I don't deny that this can allow some additional cases than we allow\n> today but was just not sure whether users really need it. If we want\n> to go with such an option then as mentioned earlier, we should\n> consider another proposal for subscriber-side truncate [1] because we\n> might need a cascade operation there as well but for a slightly\n> different purpose.\n\nI'm thinking how we can utilize the truncate option proposed at [1]\nfor the idea here. Because, currently the truncate option(proposed at\n[1]) is boolean, (of course we can change this to take \"cascade\",\n\"restrict\" options). But how can we differentiate the usage of the\ntruncate option at [1] for two purposes 1) for before copy\ndata/initial table sync operation and 2) for the replication of\nTRUNCATE command as proposed here in this thread. Any thoughts?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVWNBttS-YcMYuUPK25tsuqtbpuNjT-CBRQLUHXezWERw%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 May 2021 14:18:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" }, { "msg_contents": "On Mon, May 24, 2021 at 2:18 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't deny that this can allow some additional cases than we allow\n> > today but was just not sure whether users really need it. 
If we want\n> > to go with such an option then as mentioned earlier, we should\n> > consider another proposal for subscriber-side truncate [1] because we\n> > might need a cascade operation there as well but for a slightly\n> > different purpose.\n>\n> I'm thinking how we can utilize the truncate option proposed at [1]\n> for the idea here. Because, currently the truncate option(proposed at\n> [1]) is boolean, (of course we can change this to take \"cascade\",\n> \"restrict\" options). But how can we differentiate the usage of the\n> truncate option at [1] for two purposes 1) for before copy\n> data/initial table sync operation and 2) for the replication of\n> TRUNCATE command as proposed here in this thread. Any thoughts?\n>\n\nI think we can do this as a separate patch. Let's not try to combine\nboth patches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 May 2021 15:17:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication - behavior of TRUNCATE ... CASCADE" } ]
[ { "msg_contents": "Hi\n\nI am testing a new subscripting interface for jsonb, and I found one issue.\n\nDO $$\nDECLARE v jsonb;\nBEGIN\n v['a'] = '\"Ahoj\"';\n RAISE NOTICE '%', v['a'];\nEND;\n$$;\nNOTICE: \"Ahoj\"\nDO\n\nWhen I use this interface for reading, the jsonb type is returned. What is\nthe correct way for casting from jsonb text to text value? I would not\ndouble quotes inside the result. Cast to text doesn't help. For operator\nAPI we can use \"->>\" symbol. But we have nothing similar for subscript API.\n\nRegards\n\nPavel", "msg_date": "Mon, 3 May 2021 11:15:08 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "how to correctly cast json value to text?" }, { "msg_contents": "Hi\n\npo 3. 5. 2021 v 11:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I am testing a new subscripting interface for jsonb, and I found one issue.\n>\n> DO $$\n> DECLARE v jsonb;\n> BEGIN\n> v['a'] = '\"Ahoj\"';\n> RAISE NOTICE '%', v['a'];\n> END;\n> $$;\n> NOTICE: \"Ahoj\"\n> DO\n>\n> When I use this interface for reading, the jsonb type is returned. What is\n> the correct way for casting from jsonb text to text value? I would not\n> double quotes inside the result. Cast to text doesn't help. For operator\n> API we can use \"->>\" symbol. 
But we have nothing similar for subscript API.\n>\n\nnow I need function like\n\nCREATE OR REPLACE FUNCTION public.value_to_text(jsonb)\n RETURNS text\n LANGUAGE plpgsql\n IMMUTABLE\nAS $function$\nDECLARE x jsonb;\nBEGIN\n x['x'] = $1;\n RETURN x->>'x';\nEND;\n$function$\n\nDO $$\nDECLARE v jsonb;\nBEGIN\n -- hodnota musi byt validni json\n v['a'] = '\"Ahoj\"';\n RAISE NOTICE '%', value_to_text(v['a']);\nEND;\n$$;\nNOTICE: Ahoj\nDO\n\nIs it possible to do this with built functionality?\n\nI miss the cast function for json scalar string value to string.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>", "msg_date": "Mon, 3 May 2021 11:23:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: how to correctly cast json value to text?" 
}, { "msg_contents": "On Mon, May 3, 2021 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Is it possible to do this with built functionality?\n>\n> I miss the cast function for json scalar string value to string.\n>\n\n#>>'{}'\n\n\n.m", "msg_date": "Mon, 3 May 2021 12:25:59 +0300", "msg_from": "Marko Tiikkaja <marko@joh.to>", "msg_from_op": false, "msg_subject": "Re: how to correctly cast json value to text?" }, { "msg_contents": "po 3. 5. 2021 v 11:26 odesílatel Marko Tiikkaja <marko@joh.to> napsal:\n\n> On Mon, May 3, 2021 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Is it possible to do this with built functionality?\n>>\n>> I miss the cast function for json scalar string value to string.\n>>\n>\n> #>>'{}'\n>\n\nIt is working. Thank you. But this syntax is a little bit scary. Maybe we\ncan introduce some functions for this case. Until to pg 14 this\nfunctionality was not necessary, but now it can be nice to have it.\n\nDO $$\nDECLARE v jsonb;\nBEGIN\n -- hodnota musi byt validni json\n v['a'] = '\"Ahoj\"';\n RAISE NOTICE '%', v['a'] #>> '{}';\nEND;\n$$;\nNOTICE: Ahoj\nDO\n\nSome ideas about the name of this function?\n\nCREATE OR REPLACE FUNCTION jsonscalar_to_text(jsonb)\nRETURNS text AS $$\nSELECT $1 #>> '{}'\n$$ LANGUAGE sql;\n\n\n\n>\n> .m\n>", "msg_date": "Mon, 3 May 2021 11:35:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: how to correctly cast json value to text?" } ]
[ { "msg_contents": "We have already pushed the configurable lz4 toast compression code[1].\nIn the custom compression thread, we were already having the patch to\nsupport the compression method options[2]. But the design for the\nbase patches was heavily modified before commit but I never rebased\nthis patch based on the new design. Now, I have rebased this patch so\nthat we don't lose track and we can work on this for v15. This is\nstill a WIP patch.\n\nFor v15 I will work on improving the code and I will also work on\nanalyzing the usage of compression method options (compression\nspeed/ratio).\n\n[1] https://www.postgresql.org/message-id/E1lNKw9-0008DT-1L%40gemulon.postgresql.org\n[2] https://www.postgresql.org/message-id/CAFiTN-s7fno8pGwfK7jwSf7uNaVhPZ38C3LAcF%2B%3DWHu7jNvy7g%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 3 May 2021 18:27:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Toast compression method options" }, { "msg_contents": "On Mon, May 3, 2021 at 6:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> We have already pushed the configurable lz4 toast compression code[1].\n> In the custom compression thread, we were already having the patch to\n> support the compression method options[2]. But the design for the\n> base patches was heavily modified before commit but I never rebased\n> this patch based on the new design. Now, I have rebased this patch so\n> that we don't lose track and we can work on this for v15. 
This is\n> still a WIP patch.\n>\n> For v15 I will work on improving the code and I will also work on\n> analyzing the usage of compression method options (compression\n> speed/ratio).\n>\n> [1] https://www.postgresql.org/message-id/E1lNKw9-0008DT-1L%40gemulon.postgresql.org\n> [2] https://www.postgresql.org/message-id/CAFiTN-s7fno8pGwfK7jwSf7uNaVhPZ38C3LAcF%2B%3DWHu7jNvy7g%40mail.gmail.com\n>\n\nI have fixed some comments offlist reported by Justin. Apart from\nthat, I have also improved documentation and test case. Stil it has a\nlot of cleanup to be done but I am not planning to do that\nimmediately.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 May 2021 19:23:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Thu, May 06, 2021 at 07:23:48PM +0530, Dilip Kumar wrote:\n> I have fixed some comments offlist reported by Justin. Apart from\n> that, I have also improved documentation and test case. Stil it has a\n> lot of cleanup to be done but I am not planning to do that\n> immediately.\n\nI was testing the various compression algos we touch in core, and I am\nnot really convinced that we need more code to control that. First,\npglz is living as-is in the backend code for a very long time and no\none has expressed an interest in controlling the compression strategy\nused AFAIK. 
On top of that, LZ4 outclasses it easily, so if there is\na need to worry about performance with the TOAST data, users could\njust move to use LZ4.\n\n+ if (strcmp(def->defname, \"acceleration\") == 0)\n+ {\n+ int32 acceleration =\n+ pg_atoi(defGetString(def), sizeof(acceleration), 0);\n+\n+ if (acceleration < INT32_MIN || acceleration > INT32_MAX)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"unexpected value for lz4 compression acceleration: \\\"%s\\\"\",\n+ defGetString(def)),\n+ errhint(\"expected value between INT32_MIN and\nINT32_MAX\")\n+ ));\n\nThen comes the part with LZ4 and its acceleration. The default\ncompression level used by LZ4 compresses data the most, and it is\nalready pretty cheap in CPU. Do you have cases where this would be\nuseful? Increasing the acceleration reduces the compression to be\nclose to zero, but if one cares about the compression cost, he/she\ncould fall back to the external storage for basically the same\neffect. Is there really a use-case for something in-between?\n--\nMichael", "msg_date": "Fri, 18 Jun 2021 15:43:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Fri, Jun 18, 2021 at 12:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 06, 2021 at 07:23:48PM +0530, Dilip Kumar wrote:\n> > I have fixed some comments offlist reported by Justin. Apart from\n> > that, I have also improved documentation and test case. Stil it has a\n> > lot of cleanup to be done but I am not planning to do that\n> > immediately.\n>\n> I was testing the various compression algos we touch in core, and I am\n> not really convinced that we need more code to control that. First,\n> pglz is living as-is in the backend code for a very long time and no\n> one has expressed an interest in controlling the compression strategy\n> used AFAIK. 
On top of that, LZ4 outclasses it easily, so if there is\n> a need to worry about performance with the TOAST data, users could\n> just move to use LZ4.\n>\n> + if (strcmp(def->defname, \"acceleration\") == 0)\n> + {\n> + int32 acceleration =\n> + pg_atoi(defGetString(def), sizeof(acceleration), 0);\n> +\n> + if (acceleration < INT32_MIN || acceleration > INT32_MAX)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"unexpected value for lz4 compression acceleration: \\\"%s\\\"\",\n> + defGetString(def)),\n> + errhint(\"expected value between INT32_MIN and\n> INT32_MAX\")\n> + ));\n>\n> Then comes the part with LZ4 and its acceleration. The default\n> compression level used by LZ4 compresses data the most, and it is\n> already pretty cheap in CPU. Do you have cases where this would be\n> useful? Increasing the acceleration reduces the compression to be\n> close to zero, but if one cares about the compression cost, he/she\n> could fall back to the external storage for basically the same\n> effect. Is there really a use-case for something in-between?\n\nIMHO there is certainly a use case, basically, if we compress the data\nso that we can avoid storing it externally. Now suppose for some\ndata, with default LZ4 compression, the compression ratio is so high\nthat you are able to compress to the size which is way under the\nlimit. So for such data, the acceleration can be increased such that\ncompression is fast and compression ratio is good enough that it is\nnot going to the external storage. I agree it will be difficult for\nthe user to make such a decision and select the acceleration value but\nbased on the data pattern and their compressed length the admin can\nmake such a decision. 
So in short select the acceleration value such\nthat you can compress it fast and the compression ratio is good enough\nto keep it from storing externally.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:05:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Tue, Jun 22, 2021 at 11:05:22AM +0530, Dilip Kumar wrote:\n> IMHO there is certainly a use case, basically, if we compress the data\n> so that we can avoid storing it externally. Now suppose for some\n> data, with default LZ4 compression, the compression ratio is so high\n> that you are able to compress to the size which is way under the\n> limit. So for such data, the acceleration can be increased such that\n> compression is fast and compression ratio is good enough that it is\n> not going to the external storage. I agree it will be difficult for\n> the user to make such a decision and select the acceleration value but\n> based on the data pattern and their compressed length the admin can\n> make such a decision. So in short select the acceleration value such\n> that you can compress it fast and the compression ratio is good enough\n> to keep it from storing externally.\n\nTheoritically, I agree that there could be a use case, and that was\nthe point I was trying to outline above. My point is more from a\npractical point of view. LZ4 is designed to be fast and cheap in CPU\nwith a rather low compression ratio compared to other modern algos.\n\nCould it be possible to think about some worst cases where one may\nwant to reduce its compression to save some CPU? The point, as you\nsay, to allow a tuning of the acceleration would be that one may want\nto save a bit of CPU and does not care about the extra disk space it\ntakes. 
Still, I am wondering why one would not just store the values\nexternally in such cases and just save as much compression effort as\npossible.\n--\nMichael", "msg_date": "Tue, 22 Jun 2021 17:07:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 6:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > We have already pushed the configurable lz4 toast compression code[1].\n> > In the custom compression thread, we were already having the patch to\n> > support the compression method options[2]. But the design for the\n> > base patches was heavily modified before commit but I never rebased\n> > this patch based on the new design. Now, I have rebased this patch so\n> > that we don't lose track and we can work on this for v15. This is\n> > still a WIP patch.\n> >\n> > For v15 I will work on improving the code and I will also work on\n> > analyzing the usage of compression method options (compression\n> > speed/ratio).\n> >\n> > [1] https://www.postgresql.org/message-id/E1lNKw9-0008DT-1L%40gemulon.postgresql.org\n> > [2] https://www.postgresql.org/message-id/CAFiTN-s7fno8pGwfK7jwSf7uNaVhPZ38C3LAcF%2B%3DWHu7jNvy7g%40mail.gmail.com\n> >\n>\n> I have fixed some comments offlist reported by Justin. Apart from\n> that, I have also improved documentation and test case. Stil it has a\n> lot of cleanup to be done but I am not planning to do that\n> immediately.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. 
I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Jul 2021 17:35:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Tue, Jun 22, 2021 at 1:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jun 22, 2021 at 11:05:22AM +0530, Dilip Kumar wrote:\n> > IMHO there is certainly a use case, basically, if we compress the data\n> > so that we can avoid storing it externally. Now suppose for some\n> > data, with default LZ4 compression, the compression ratio is so high\n> > that you are able to compress to the size which is way under the\n> > limit. So for such data, the acceleration can be increased such that\n> > compression is fast and compression ratio is good enough that it is\n> > not going to the external storage. I agree it will be difficult for\n> > the user to make such a decision and select the acceleration value but\n> > based on the data pattern and their compressed length the admin can\n> > make such a decision. So in short select the acceleration value such\n> > that you can compress it fast and the compression ratio is good enough\n> > to keep it from storing externally.\n>\n> Theoritically, I agree that there could be a use case, and that was\n> the point I was trying to outline above. My point is more from a\n> practical point of view. LZ4 is designed to be fast and cheap in CPU\n> with a rather low compression ratio compared to other modern algos.\n>\n> Could it be possible to think about some worst cases where one may\n> want to reduce its compression to save some CPU? The point, as you\n> say, to allow a tuning of the acceleration would be that one may want\n> to save a bit of CPU and does not care about the extra disk space it\n> takes. 
Still, I am wondering why one would not just store the values\n> externally in such cases and just save as much compression effort as\n> possible.\n\nWell, that actually depends upon the data, basically, LZ4 acceleration\nsearches in wider increments, which may reduce the number of potential\nmatches but increase the speed. So based on the actual data pattern\nit is highly possible that you get the speed benefit without losing\nmuch or nothing on the compression ratio. So IMHO, this is user\nexposed option so based on the user's data pattern why it is not wise\nto provide an option for the user to give the acceration when the user\nis sure that selecting a better speed will not harm anything on\ncompression ratio for their data pattern.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Jul 2021 19:27:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Wed, Jul 14, 2021 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote\n>\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nOkay, I will rebase and send it by next week.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 18 Jul 2021 21:15:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jul 14, 2021 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote\n> >\n> > The patch does not apply on Head anymore, could you rebase and post a\n> > patch. 
I'm changing the status to \"Waiting for Author\".\n>\n> Okay, I will rebase and send it by next week.\n\nI have rebased the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 19 Jul 2021 13:24:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Mon, Jul 19, 2021 at 01:24:03PM +0530, Dilip Kumar wrote:\n> On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jul 14, 2021 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote\n> > >\n> > > The patch does not apply on Head anymore, could you rebase and post a\n> > > patch. I'm changing the status to \"Waiting for Author\".\n> >\n> > Okay, I will rebase and send it by next week.\n> \n> I have rebased the patch.\n> \n\nHi,\n\nThis doesn't apply cleanly nor compile.\nAre you planning to send a rebased version?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 10 Sep 2021 00:10:53 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Fri, 10 Sep 2021 at 10:40 AM, Jaime Casanova <\njcasanov@systemguards.com.ec> wrote:\n\n> On Mon, Jul 19, 2021 at 01:24:03PM +0530, Dilip Kumar wrote:\n> > On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > >\n> > > On Wed, Jul 14, 2021 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote\n> > > >\n> > > > The patch does not apply on Head anymore, could you rebase and post a\n> > > > patch. 
I'm changing the status to \"Waiting for Author\".\n> > >\n> > > Okay, I will rebase and send it by next week.\n> >\n> > I have rebased the patch.\n> >\n>\n> Hi,\n>\n> This doesn't apply cleanly nor compile.\n> Are you planning to send a rebased version?\n\n\nI will do that early next week.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, 10 Sep 2021 at 10:40 AM, Jaime Casanova <jcasanov@systemguards.com.ec> wrote:On Mon, Jul 19, 2021 at 01:24:03PM +0530, Dilip Kumar wrote:\n> On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jul 14, 2021 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com> wrote\n> > >\n> > > The patch does not apply on Head anymore, could you rebase and post a\n> > > patch. I'm changing the status to \"Waiting for Author\".\n> >\n> > Okay, I will rebase and send it by next week.\n> \n> I have rebased the patch.\n> \n\nHi,\n\nThis doesn't apply cleanly nor compile.\nAre you planning to send a rebased version?I will do that early next week.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 10 Sep 2021 10:54:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Fri, Sep 10, 2021 at 10:54:04AM +0530, Dilip Kumar wrote:\n> On Fri, 10 Sep 2021 at 10:40 AM, Jaime Casanova <\n> jcasanov@systemguards.com.ec> wrote:\n> \n> > On Mon, Jul 19, 2021 at 01:24:03PM +0530, Dilip Kumar wrote:\n> > > On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar <dilipbalaut@gmail.com>\n> > wrote:\n> > > >\n> > > > On Wed, Jul 14, 2021 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar <dilipbalaut@gmail.com>\n> > wrote\n> > > > >\n> > > > > The patch does not apply on Head anymore, could you rebase and post a\n> > > > > patch. 
I'm changing the status to \"Waiting for Author\".\n> > > >\n> > > > Okay, I will rebase and send it by next week.\n> > >\n> > > I have rebased the patch.\n> > >\n> >\n> > Hi,\n> >\n> > This doesn't apply cleanly nor compile.\n> > Are you planning to send a rebased version?\n> \n> \n> I will do that early next week.\n> \n\nGreat! I'm marking the patch as \"waiting on author\". \nThanks for keep working on this.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 10 Sep 2021 11:24:01 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Fri, Sep 10, 2021 at 9:54 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n\n> > I will do that early next week.\n> >\n>\n> Great! I'm marking the patch as \"waiting on author\".\n> Thanks for keep working on this.\n>\n\nI haved rebased the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Sep 2021 17:10:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Mon, Sep 13, 2021 at 05:10:22PM +0530, Dilip Kumar wrote:\n> I haved rebased the patch.\n\nPlease note that the patch does not apply. FWIW, I still don't think\nthat this is a good idea to have that. I don't recall seeing much on\nthis list that users would like to have such a level of tuning for\npglz, while lz4 only offers the option to reduce the compression rate\nwhile being already very cheap in CPU so the impact is limited. 
On\ntop of that, this adds a new attribute to pg_attribute with much more\ncomplexity into ALTER TABLE code paths in tablecmds.c..\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 16:12:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Toast compression method options" }, { "msg_contents": "On Fri, Oct 1, 2021 at 12:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 13, 2021 at 05:10:22PM +0530, Dilip Kumar wrote:\n> > I haved rebased the patch.\n>\n> Please note that the patch does not apply. FWIW, I still don't think\n> that this is a good idea to have that. I don't recall seeing much on\n> this list that users would like to have such a level of tuning for\n> pglz, while lz4 only offers the option to reduce the compression rate\n> while being already very cheap in CPU so the impact is limited. On\n> top of that, this adds a new attribute to pg_attribute with much more\n> complexity into ALTER TABLE code paths in tablecmds.c..\n\nThanks for the feedback, seeing no much interest from other hackers\nand also as Michael pointed out that there is no much use case for\nthis, I am withdrawing this patch from the commitfest.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Oct 2021 14:41:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Toast compression method options" } ]
[ { "msg_contents": "Hi,\n\nThere's a well known limitation in the amount of memory that any\nVACUUM process can use, capped at 1GB now. This is not reflected in\nthe documentation\n\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html\n\nWe should add a line that indicates that there is a limitation (that\nshould be IMO, backported to documentation of earlier versions as it\naffects all supported versions), at least until such limitation is\nlifted.\n\nKind regards, Martín\n\n-- \nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see\n\n\n", "msg_date": "Mon, 3 May 2021 13:48:21 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin.marques@gmail.com>", "msg_from_op": true, "msg_subject": "Update maintenance_work_mem/autovacuum_work_mem to reflect the 1GB\n limitation with VACUUM" }, { "msg_contents": "On Mon, 2021-05-03 at 13:48 -0300, Martín Marqués wrote:\n> We should add a line that indicates that there is a limitation (that\n> should be IMO, backported to documentation of earlier versions as it\n> affects all supported versions), at least until such limitation is\n> lifted.\n\nHere is a patch for that,\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 05 May 2021 12:03:08 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Wed, 2021-05-05 at 12:03 +0200, Laurenz Albe wrote:\n> On Mon, 2021-05-03 at 13:48 -0300, Martín Marqués wrote:\n> > We should add a line that indicates that there is a limitation (that\n> > should be IMO, backported to documentation of earlier versions as it\n> > affects all supported versions), at least until such limitation is\n> > lifted.\n> \n> Here is a patch for that\n\nJust sending a reply to -hackers so I can add it to the commitfest.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 20 May 2021 
17:52:38 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "s/Node/Note/\r\n\r\nOther than that, +1 to the patch and +1 to backpatching.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 01 Jun 2021 19:25:18 +0000", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB\n limitation with VACUUM" }, { "msg_contents": "On Wed, May 5, 2021 at 7:03 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Mon, 2021-05-03 at 13:48 -0300, Martín Marqués wrote:\n> > We should add a line that indicates that there is a limitation (that\n> > should be IMO, backported to documentation of earlier versions as it\n> > affects all supported versions), at least until such limitation is\n> > lifted.\n>\n> Here is a patch for that,\n\nThe patch adds the description in the autovacuum_work_mem section.\nIsn't it better to add it in mantenance_work section or VACUUM command\nsection since this limitation is not only for autovacuum?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 2 Jun 2021 18:16:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Wed, 2021-06-02 at 18:16 +0900, Masahiko Sawada wrote:\n> > > We should add a line that indicates that there is a limitation (that\n> > > should be IMO, backported to documentation of earlier versions as it\n> > > affects all supported versions), at least until such limitation is\n> > > lifted.\n> > Here is a patch for that,\n> \n> The patch adds the description in the autovacuum_work_mem section.\n> Isn't it better to 
add it in mantenance_work section or VACUUM command\n> section since this limitation is not only for autovacuum?\n\nYou are right; theoretically, the correct place to document that\nwould be the VACUUM documentation. But I guess that most people who\nare curious about VACUUM's memory usage will read the documentation for\n\"maintenance_work_mem\" or \"autovacuum_work_mem\".\n\nI have fixed a typo and added a similar paragraph to \"maintenance_work_mem\".\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 04 Jun 2021 14:34:42 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "Should we say \"currently has\"?", "msg_date": "Fri, 04 Jun 2021 15:16:36 +0000", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB\n limitation with VACUUM" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nLatest patch looks fine to me, to be clear.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 17 Jun 2021 14:56:52 +0000", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB\n limitation with VACUUM" }, { "msg_contents": "On Fri, 21 May 2021 at 03:52, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> Just sending a reply to -hackers so I can add it to the commitfest.\n\nI had a look at the patch in [1] and I find it a bit weird that we'd\nwrite the following about autovacuum_work_mem in our docs:\n\n+ <para>\n+ Note that <command>VACUUM</command> has a hard-coded limit of 1GB\n+ for the amount of memory 
used, so setting\n+ <varname>autovacuum_work_mem</varname> higher than that has no effect.\n+ </para>\n\nSince that setting is *only* used for auto vacuum, why don't we just\nlimit the range of the GUC to 1GB?\n\nOf course, it wouldn't be wise to backpatch the reduced limit of\nautovacuum_work_mem as it might upset people who have higher values in\ntheir postgresql.conf when their database fails to restart after an\nupgrade. I think what might be best is just to reduce the limit in\nmaster and apply the doc patch for just maintenance_work_mem in all\nsupported versions. We could just ignore doing anything with\nautovacuum_work_mem in the back branches and put it down to a\nhistorical mistake that can't easily be fixed now.\n\nI've attached what I came up with.\n\nWhat do you think?\n\n[1] https://www.postgresql.org/message-id/514fe5ce4714b7b33cb0a611f0c7b72df413bef5.camel%40cybertec.at", "msg_date": "Fri, 2 Jul 2021 23:31:07 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Fri, 2021-07-02 at 23:31 +1200, David Rowley wrote:\n> I had a look at the patch in [1] and I find it a bit weird that we'd\n> write the following about autovacuum_work_mem in our docs:\n> \n> + <para>\n> + Note that <command>VACUUM</command> has a hard-coded limit of 1GB\n> + for the amount of memory used, so setting\n> + <varname>autovacuum_work_mem</varname> higher than that has no effect.\n> + </para>\n> \n> Since that setting is *only* used for auto vacuum, why don't we just\n> limit the range of the GUC to 1GB?\n> \n> Of course, it wouldn't be wise to backpatch the reduced limit of\n> autovacuum_work_mem as it might upset people who have higher values in\n> their postgresql.conf when their database fails to restart after an\n> upgrade. 
I think what might be best is just to reduce the limit in\n> master and apply the doc patch for just maintenance_work_mem in all\n> supported versions. We could just ignore doing anything with\n> autovacuum_work_mem in the back branches and put it down to a\n> historical mistake that can't easily be fixed now.\n> \n> I've attached what I came up with.\n> \n> What do you think?\n> \n> [1] https://www.postgresql.org/message-id/514fe5ce4714b7b33cb0a611f0c7b72df413bef5.camel%40cybertec.at\n\nI think that is much better.\nI am fine with that patch.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 02 Jul 2021 14:40:52 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Sat, 3 Jul 2021 at 00:40, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Fri, 2021-07-02 at 23:31 +1200, David Rowley wrote:\n> > I had a look at the patch in [1] and I find it a bit weird that we'd\n> > write the following about autovacuum_work_mem in our docs:\n> >\n> > + <para>\n> > + Note that <command>VACUUM</command> has a hard-coded limit of 1GB\n> > + for the amount of memory used, so setting\n> > + <varname>autovacuum_work_mem</varname> higher than that has no effect.\n> > + </para>\n> >\n> > Since that setting is *only* used for auto vacuum, why don't we just\n> > limit the range of the GUC to 1GB?\n> >\n> > Of course, it wouldn't be wise to backpatch the reduced limit of\n> > autovacuum_work_mem as it might upset people who have higher values in\n> > their postgresql.conf when their database fails to restart after an\n> > upgrade. I think what might be best is just to reduce the limit in\n> > master and apply the doc patch for just maintenance_work_mem in all\n> > supported versions. 
We could just ignore doing anything with\n> > autovacuum_work_mem in the back branches and put it down to a\n> > historical mistake that can't easily be fixed now.\n> >\n>\n> I think that is much better.\n> I am fine with that patch.\n\nThanks for looking. I've pushed the doc fix patch for\nmaintenance_work_mem and back-patched to 9.6.\n\nI could do with a 2nd opinion about if we should just adjust the\nmaximum value for the autovacuum_work_mem GUC to 1GB in master.\n\nI'm also not sure if since we'd not backpatch the GUC max value\nadjustment if we need to document the upper limit in the manual.\n\nDavid", "msg_date": "Sun, 4 Jul 2021 22:38:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Sun, 4 Jul 2021 at 22:38, David Rowley <dgrowleyml@gmail.com> wrote:\n> I could do with a 2nd opinion about if we should just adjust the\n> maximum value for the autovacuum_work_mem GUC to 1GB in master.\n>\n> I'm also not sure if since we'd not backpatch the GUC max value\n> adjustment if we need to document the upper limit in the manual.\n\nI was just looking at this again and I see that GIN indexes are able\nto use more than 1GB of memory during VACUUM. 
That discovery makes me\nthink having the docs say that vacuum cannot use more than 1GB of\nmemory is at best misleading and more likely just incorrect.\n\nRight now I'm considering if it might just be better to revert\nec34040af and call it quits here.\n\nDavid\n\n\n", "msg_date": "Wed, 7 Jul 2021 23:44:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Wed, Jul 7, 2021 at 8:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 4 Jul 2021 at 22:38, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I could do with a 2nd opinion about if we should just adjust the\n> > maximum value for the autovacuum_work_mem GUC to 1GB in master.\n> >\n> > I'm also not sure if since we'd not backpatch the GUC max value\n> > adjustment if we need to document the upper limit in the manual.\n>\n> I was just looking at this again and I see that GIN indexes are able\n> to use more than 1GB of memory during VACUUM.\n\nI think you meant that autovacuums can use more than 1GB of memory\nduring cleaning up a gin pending list (in ginInsertCleanup()). The\ndescription updated by that commit is not true as of now as you\npointed out but IIUC it uses maintenance_work_mem *in addition to* the\nsame amount memory used by lazy vacuum. This memory usage seems rather\nweird to me. Is it worth considering having gin pending list cleanup\nuse work_mem instead of maintenance_work_mem also in autovacuum cases\nlike btree indexes do? 

If we do that, the description will become\ntrue, although we might need to update work_mem section somewhat.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 19 Jul 2021 17:27:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Wed, 7 Jul 2021 at 23:44, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 4 Jul 2021 at 22:38, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I could do with a 2nd opinion about if we should just adjust the\n> > maximum value for the autovacuum_work_mem GUC to 1GB in master.\n> >\n> > I'm also not sure if since we'd not backpatch the GUC max value\n> > adjustment if we need to document the upper limit in the manual.\n>\n> I was just looking at this again and I see that GIN indexes are able\n> to use more than 1GB of memory during VACUUM. That discovery makes me\n> think having the docs say that vacuum cannot use more than 1GB of\n> memory is at best misleading and more likely just incorrect.\n\nThe attached patch aims to put right where I went wrong with the\ndocumentation about vacuum/autovacuum only using maintainance_work_mem\nmemory for dead tuple collection.\n\nI plan to push this and backpatch to 9.6 shortly unless there are any\nbetter ideas.\n\nWhat's in there right now is wrong and I want that fixed before the\ncut-off for the next set of bug fix releases.\n\nDavid", "msg_date": "Mon, 9 Aug 2021 14:44:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" }, { "msg_contents": "On Mon, 9 Aug 2021 at 14:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> I plan to push this and backpatch to 9.6 shortly unless there are any\n> better ideas.\n\nI pushed this patch. 
I've now marked the entry in the commitfest app\nas committed too.\n\nDavid\n\n\n", "msg_date": "Mon, 9 Aug 2021 16:53:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update maintenance_work_mem/autovacuum_work_mem to reflect the\n 1GB limitation with VACUUM" } ]
[ { "msg_contents": "Hi,\n\nA colleague debugged an issue where their postgres was occasionally\ncrash-restarting under load.\n\nThe cause turned out to be that a relatively complex archive_command was\nused, which could in some rare circumstances have a bash subshell\npipeline not succeed. It wasn't at all obvious why that'd cause a crash\nthough - the archive command handles the error.\n\nThe issue turns out to be that postgres was in a container, with pid\nnamespaces enabled. Because postgres was run directly in the container,\nwithout a parent process inside, it thus becomes pid 1. Which mostly\nworks without a problem. Until, as the case here with the archive\ncommand, a sub-sub process exits while it still has a child. Then that\nchild gets re-parented to postmaster (as init).\n\nSuch a child is likely to have exited not just with 0 or 1, but\nsomething else. As the pid won't match anything in reaper(), we'll go to\nCleanupBackend(). Where any exit status but 0/1 will unconditionally\ntrigger a restart:\n\n\tif (!EXIT_STATUS_0(exitstatus) && !EXIT_STATUS_1(exitstatus))\n\t{\n\t\tHandleChildCrash(pid, exitstatus, _(\"server process\"));\n\t\treturn;\n\t}\n\n\nThis kind of thing is pretty hard to debug, because it's not easy to\neven figure out what the \"crashing\" pid belonged to.\n\nI wonder if we should work a bit harder to try to identify whether an\nexiting process was a \"server process\" before identifying it as such?\n\nAnd perhaps we ought to warn about postgres running as \"init\" unless we\nmake that robust?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 12:07:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "PG in container w/ pid namespace is init, process exits cause restart" }, { "msg_contents": "On 2021-May-03, Andres Freund wrote:\n\n> The issue turns out to be that postgres was in a container, with pid\n> namespaces enabled. 
Because postgres was run directly in the container,\n> without a parent process inside, it thus becomes pid 1. Which mostly\n> works without a problem. Until, as the case here with the archive\n> command, a sub-sub process exits while it still has a child. Then that\n> child gets re-parented to postmaster (as init).\n\nHah .. interesting. I think we should definitely make this work, since\ncontainerized stuff is going to become more and more prevalent.\n\nI also heard a story where things ran into trouble (I didn't get the\nwhole story of *what* was the problem with that) because the datadir is /.\nI know -- nobody in their right mind would put the datadir in / -- but\napparently in the container world that's not something as stupid as it\nsounds. That's of course not related to what you describe here\ncode-wise, but the underlying reason is the same.\n\n> I wonder if we should work a bit harder to try to identify whether an\n> exiting process was a \"server process\" before identifying it as such?\n\nWell, we've never made any effort there because it just wasn't possible.\nNobody ever had postmaster also be init .. until containers. Let's fix\nit.\n\n> And perhaps we ought to warn about postgres running as \"init\" unless we\n> make that robust?\n\nI guess we can do that in older releases, but do we really need it? As\nI understand, the only thing we need to do is verify that the dying PID\nis a backend PID, and not cause a crash cycle if it isn't.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Mon, 3 May 2021 15:25:53 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-03, Andres Freund wrote:\n>> The issue turns out to be that postgres was in a container, with pid\n>> namespaces enabled. 

Because postgres was run directly in the container,\n>> without a parent process inside, it thus becomes pid 1. Which mostly\n>> works without a problem. Until, as the case here with the archive\n>> command, a sub-sub process exits while it still has a child. Then that\n>> child gets re-parented to postmaster (as init).\n\n> Hah .. interesting. I think we should definitely make this work, since\n> containerized stuff is going to become more and more prevalent.\n\nHow would we make it \"work\"? The postmaster can't possibly be expected\nto know the right thing to do with unexpected children.\n\n> I guess we can do that in older releases, but do we really need it? As\n> I understand, the only thing we need to do is verify that the dying PID\n> is a backend PID, and not cause a crash cycle if it isn't.\n\nI think that'd be a net reduction in reliability, not an improvement.\nIn most scenarios it'd do little except mask bugs. And who's to say\nthat ignoring unexpected child deaths is okay, anyway? We could hardly\nbe sure that the dead process hadn't been connected to shared memory.\n\nMaybe we should put in a startup-time check, analogous to the\ncan't-run-as-root test, that the postmaster mustn't be PID 1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 15:37:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I also heard a story where things ran into trouble (I didn't get the\n> whole story of *what* was the problem with that) because the datadir is /.\n\nBTW, as far as that goes, I think the general recommendation is that\nthe datadir shouldn't be a mount point, because bad things happen if\nyou mount or unmount the drive while the postmaster is up. 
I could\nsee enforcing that, if we could find a reasonably platform-independent\nway to do it.\n\n(Of course, / can't be unmounted, so I wonder exactly what bad thing\nhappened in that story.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 15:43:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "On 2021-May-03, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I also heard a story where things ran into trouble (I didn't get the\n> > whole story of *what* was the problem with that) because the datadir is /.\n> \n> BTW, as far as that goes, I think the general recommendation is that\n> the datadir shouldn't be a mount point, because bad things happen if\n> you mount or unmount the drive while the postmaster is up. I could\n> see enforcing that, if we could find a reasonably platform-independent\n> way to do it.\n\n/ is not a mount point; it's just that the container system binds (?)\nsome different directory as / for the process to run into. I suppose it\nmust be similar to chrooting to /, but I'm not sure if it's exactly\nthat.\n\n> (Of course, / can't be unmounted, so I wonder exactly what bad thing\n> happened in that story.)\n\nIt's not related to unmounting. I'll try to get the details.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Mon, 3 May 2021 15:53:07 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Hi,\n\nOn 2021-05-03 15:37:24 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-May-03, Andres Freund wrote:\n> >> The issue turns out to be that postgres was in a container, with pid\n> >> namespaces enabled. 

Because postgres was run directly in the container,\n> >> without a parent process inside, it thus becomes pid 1. Which mostly\n> >> works without a problem. Until, as the case here with the archive\n> >> command, a sub-sub process exits while it still has a child. Then that\n> >> child gets re-parented to postmaster (as init).\n>\n> > Hah .. interesting. I think we should definitely make this work, since\n> > containerized stuff is going to become more and more prevalent.\n>\n> How would we make it \"work\"? The postmaster can't possibly be expected\n> to know the right thing to do with unexpected children.\n\nNot saying that we should, but we could check if we're pid 1 / init, and\njust warn about children we don't know anything about. Which we could\ndetect by iterating over BackendList/BackgroundWorkerList before\ncrash-restarting in CleanupBackend(). Then we'd not lose reliability\nin the \"normal\" case, while not reducing reliability in the container\ncase.\n\nI'm not quite sure I buy the reliability argument, tbh: The additional\nprocess exits we see as pid 1 are after all process exits that we'd not\nsee if we weren't pid 1. And if we're not pid 1 then there really should\nnever be any \"unexpected children\" - we know what processes postmaster\nitself forked after all. So where would unexpected children come from,\nexcept reparenting?\n\n\n> And who's to say that ignoring unexpected child deaths is okay,\n> anyway? We could hardly be sure that the dead process hadn't been\n> connected to shared memory.\n\nI don't think checking the exit status of unexpected children to see\nwhether we should crash-restart out of that concern is meaningful: We\ndon't know that the child didn't do anything bad with shared memory when\nthey exited with exit(1), instead of exit(2).\n\n\nRandom thought: I wonder if we ought to set madvise(MADV_DONTFORK) on\nshared memory in postmaster children, where available. 

Then we could be\nfairly certain that there aren't processes we don't know about that are\nattached to shared memory (unless there's some nasty\nshared_preload_library forking early during backend startup - but that's\nhard to get excited about).\n\n\n> > I guess we can do that in older releases, but do we really need it? As\n> > I understand, the only thing we need to do is verify that the dying PID\n> > is a backend PID, and not cause a crash cycle if it isn't.\n>\n> I think that'd be a net reduction in reliability, not an improvement.\n> In most scenarios it'd do little except mask bugs.\n\nDo you feel the same about having different logging between the \"known\"\nand \"unknown\" child processes?\n\n\nPersonally I don't think it's of utmost importance to support running as\npid 1. But we should at least print useful log messages about what\nprocesses exited...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 13:12:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-03 15:37:24 -0400, Tom Lane wrote:\n>> And who's to say that ignoring unexpected child deaths is okay,\n>> anyway? We could hardly be sure that the dead process hadn't been\n>> connected to shared memory.\n\n> I don't think checking the exit status of unexpected children to see\n> whether we should crash-restart out of that concern is meaningful: We\n> don't know that the child didn't do anything bad with shared memory when\n> they exited with exit(1), instead of exit(2).\n\nHmm, by that argument, any unexpected child PID in reaper() ought to be\ngrounds for a restart, regardless of its exit code. Which'd be fine by\nme. 
I'm on board with being more restrictive about this, not less so.\n\n> Do you feel the same about having different logging between the \"known\"\n> and \"unknown\" child processes?\n\nNo objection to logging such cases more clearly, for sure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 16:20:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "Hi,\n\nOn 2021-05-03 15:25:53 -0400, Alvaro Herrera wrote:\n> I also heard a story where things ran into trouble (I didn't get the\n> whole story of *what* was the problem with that) because the datadir is /.\n> I know -- nobody in their right mind would put the datadir in / -- but\n> apparently in the container world that's not something as stupid as it\n> sounds. That's of course not related to what you describe here\n> code-wise, but the underlying reason is the same.\n\nIt still seems pretty insane in the container world too. Postgres needs\nshared libraries (even if you managed to link postgres itself\nstatically, something we do not support). Postgres needs to write to the\ndata directory. Putting shared libraries inside the data directory seems\nlike a bad idea.\n\nUsing / for a single statically linked binary that e.g. just serves a\nbunch of hardcoded files is one thing. Putting actual data in / for\nsomething like postgres another.\n\n\n> > I wonder if we should work a bit harder to try to identify whether an\n> > exiting process was a \"server process\" before identifying it as such?\n> \n> Well, we've never made any effort there because it just wasn't possible.\n> Nobody ever had postmaster also be init .. until containers. Let's fix\n> it.\n\n> > And perhaps we ought to warn about postgres running as \"init\" unless we\n> > make that robust?\n> \n> I guess we can do that in older releases, but do we really need it? 
As\n> I understand, the only thing we need to do is verify that the dying PID\n> is a backend PID, and not cause a crash cycle if it isn't.\n\nI think there's a few more special cases when running as init, other\nthan reparenting. E.g. I think the default signal handlers are\ndifferent, the kernel kills the process in fewer cases etc. I am not\nopposed to adding support for it, but I think it'd need a bit of care.\n\nGiven that we probably shouldn't just break things in a minor release by\nrefusing to run as 1, a warning seems to be the easiest thing for now?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 13:23:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "On 2021-May-03, Andres Freund wrote:\n\n> Using / for a single statically linked binary that e.g. just serves a\n> bunch of hardcoded files is one thing. Putting actual data in / for\n> something like postgres another.\n\nYeah, I just had a word with them and I had misunderstood what they were\ndoing. They were attempting something completely insane and pointless,\nso I'm going to leave it at that.\n\n> I think there's a few more special cases when running as init, other\n> than reparenting. E.g. I think the default signal handlers are\n> different, the kernel kills the process in fewer cases etc. I am not\n> opposed to adding support for it, but I think it'd need a bit of care.\n\nOk, we can leave that as future development then.\n\n> Given that we probably shouldn't just break things in a minor release by\n> refusing to run as 1, a warning seems to be the easiest thing for now?\n\nWFM.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. 

This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n", "msg_date": "Mon, 3 May 2021 16:28:09 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "\nOn 5/3/21 3:07 PM, Andres Freund wrote:\n> Hi,\n>\n> A colleague debugged an issue where their postgres was occasionally\n> crash-restarting under load.\n>\n> The cause turned out to be that a relatively complex archive_command was\n> used, which could in some rare circumstances have a bash subshell\n> pipeline not succeed. It wasn't at all obvious why that'd cause a crash\n> though - the archive command handles the error.\n>\n> The issue turns out to be that postgres was in a container, with pid\n> namespaces enabled. Because postgres was run directly in the container,\n> without a parent process inside, it thus becomes pid 1. Which mostly\n> works without a problem. Until, as the case here with the archive\n> command, a sub-sub process exits while it still has a child. Then that\n> child gets re-parented to postmaster (as init).\n>\n> Such a child is likely to have exited not just with 0 or 1, but\n> something else. As the pid won't match anything in reaper(), we'll go to\n> CleanupBackend(). 
Where any exit status but 0/1 will unconditionally\n> trigger a restart:\n>\n> \tif (!EXIT_STATUS_0(exitstatus) && !EXIT_STATUS_1(exitstatus))\n> \t{\n> \t\tHandleChildCrash(pid, exitstatus, _(\"server process\"));\n> \t\treturn;\n> \t}\n>\n>\n> This kind of thing is pretty hard to debug, because it's not easy to\n> even figure out what the \"crashing\" pid belonged to.\n>\n> I wonder if we should work a bit harder to try to identify whether an\n> exiting process was a \"server process\" before identifying it as such?\n>\n> And perhaps we ought to warn about postgres running as \"init\" unless we\n> make that robust?\n>\n\nHmm, my initial reaction was if we detect very early on we're PID 1 then\nfork and do all our work in the child, and in the parent just wait until\nthere are no more children. Not sure if that's feasible but I thought\nI'd throw it out there.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 3 May 2021 16:32:38 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Hi,\n\nOn 2021-05-03 16:20:43 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-05-03 15:37:24 -0400, Tom Lane wrote:\n> >> And who's to say that ignoring unexpected child deaths is okay,\n> >> anyway? We could hardly be sure that the dead process hadn't been\n> >> connected to shared memory.\n> \n> > I don't think checking the exit status of unexpected children to see\n> > whether we should crash-restart out of that concern is meaningful: We\n> > don't know that the child didn't do anything bad with shared memory when\n> > they exited with exit(1), instead of exit(2).\n> \n> Hmm, by that argument, any unexpected child PID in reaper() ought to be\n> grounds for a restart, regardless of its exit code. Which'd be fine by\nme. I'm on board with being more restrictive about this, not less so.\n\nAre there any holes / races that could lead to this \"legitimately\"\nhappening? To me the signal blocking looks like it should prevent that?\n\nI'm a bit worried that we'd find some harmless corner cases under adding\na new instability. So personally I'd be inclined to just make it a\nwarning, but ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 13:35:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On 2021-May-03, Andres Freund wrote:\n>>> The issue turns out to be that postgres was in a container, with pid\n>>> namespaces enabled. Because postgres was run directly in the container,\n>>> without a parent process inside, it thus becomes pid 1. Which mostly\n>>> works without a problem. Until, as the case here with the archive\n>>> command, a sub-sub process exits while it still has a child. Then that\n>>> child gets re-parented to postmaster (as init).\n>\n>> Hah .. interesting. I think we should definitely make this work, since\n>> containerized stuff is going to become more and more prevalent.\n>\n> How would we make it \"work\"? The postmaster can't possibly be expected\n> to know the right thing to do with unexpected children.\n>\n>> I guess we can do that in older releases, but do we really need it? As\n>> I understand, the only thing we need to do is verify that the dying PID\n>> is a backend PID, and not cause a crash cycle if it isn't.\n\n> Maybe we should put in a startup-time check, analogous to the\n> can't-run-as-root test, that the postmaster mustn't be PID 1.\n\nGiven that a number of minimal `init`s already exist specifically for\nthe case of running a single application in a container, I don't think\nPostgres should to reinvent that wheel. A quick eyball of the output of\n`apt search container init` on a Debian Bullseyse system reveals at\nleast four:\n\n - https://github.com/Yelp/dumb-init\n - https://github.com/krallin/tini\n - https://github.com/fpco/pid1\n - https://github.com/openSUSE/catatonit\n\nThe first one also explains why there's more to being PID 1 than just\nhandling reparented children.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n", "msg_date": "Mon, 03 May 2021 22:13:29 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-03 16:20:43 -0400, Tom Lane wrote:\n>> Hmm, by that argument, any unexpected child PID in reaper() ought to be\n>> grounds for a restart, regardless of its exit code. Which'd be fine by\n>> me. I'm on board with being more restrictive about this, not less so.\n\n> Are there any holes / races that could lead to this \"legitimately\"\n> happening? To me the signal blocking looks like it should prevent that?\n\nIf it did happen it would imply a bug in the postmaster's child-process\nbookkeeping.\n\n(Or, I guess, some preloaded module deciding that launching its own\nchildren was OK, whether or not it could find out whether they\nsucceeded.)\n\n> I'm a bit worried that we'd find some harmless corner cases under adding\n> a new instability. So personally I'd be inclined to just make it a\n> warning, but ...\n\nWell, I wouldn't recommend adding such a check in released branches,\nbut I'd be in favor of changing it in HEAD (or waiting till v15\nopens).\n\nMeanwhile, it seems like we both thought of complaining if the\npostmaster's PID is 1. I'm not quite sure if there are any\nportability hazards from that, but on the whole it sounds like\na good way to detect badly-configured containers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 17:13:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "On Mon, 3 May 2021 at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I also heard a story where things ran into trouble (I didn't get the\n> > whole story of *what* was the problem with that) because the datadir is /.\n>\n> BTW, as far as that goes, I think the general recommendation is that\n> the datadir shouldn't be a mount point, because bad things happen if\n> you mount or unmount the drive while the postmaster is up. I could\n> see enforcing that, if we could find a reasonably platform-independent\n> way to do it.\n\nI don't think the problem is unmounting -- on BSD you have to try\nreally hard to unmount filesystems that have files open on them and\nafaik you can't do it on Linux at all (which I still claim is the\noriginal sin that led to the fsync issues).\n\nThe problem was mounting filesystems if it happened late -- ie. After\nPostgres had started up. It was exacerbated by some startup scripts\nthat would automatically run initdb if there was nothing present.\n\nOffhand I don't actually see anything special about the Postgres\ndirectory root being the mountpoint though. There's nothing stopping\nsomeone from mounting on top of some parent directory other than it\nbeing slightly harder to imagine someone creating the whole directory\ntree up from the postgres root rather than just running initdb.\n\nFwiw, I have a suspicion that the right check for being init is\nwhether `pid == ppid`.\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 4 May 2021 13:35:50 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> On Mon, 3 May 2021 at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, as far as that goes, I think the general recommendation is that\n>> the datadir shouldn't be a mount point, because bad things happen if\n>> you mount or unmount the drive while the postmaster is up. I could\n>> see enforcing that, if we could find a reasonably platform-independent\n>> way to do it.\n\n> I don't think the problem is unmounting -- on BSD you have to try\n> really hard to unmount filesystems that have files open on them and\n> afaik you can't do it on Linux at all (which I still claim is the\n> original sin that led to the fsync issues).\n> The problem was mounting filesystems if it happened late -- ie. After\n> Postgres had started up. It was exacerbated by some startup scripts\n> that would automatically run initdb if there was nothing present.\n\nYeah, at least that was the case that somebody (Joe Conway if memory\nserves) reported years ago.\n\n> Offhand I don't actually see anything special about the Postgres\n> directory root being the mountpoint though.\n\nI think one good reason not to do it is that a mount point directory\nought to be root-owned. I don't recall the specific reasoning\nbehind that practice, but it seems sound. Also, if the filesystem\nis one that likes having a lost+found directory, you have some\nfinagling to do to keep initdb from complaining about that.\n\n> Fwiw, I have a suspicion that the right check for being init is\n> whether `pid == ppid`.\n\nMakes sense, and seems nicer than hard-coding an assumption that\nPID 1 is special.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 May 2021 13:43:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "On Mon, May 3, 2021 at 3:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I guess we can do that in older releases, but do we really need it? As\n> > I understand, the only thing we need to do is verify that the dying PID\n> > is a backend PID, and not cause a crash cycle if it isn't.\n>\n> I think that'd be a net reduction in reliability, not an improvement.\n> In most scenarios it'd do little except mask bugs. And who's to say\n> that ignoring unexpected child deaths is okay, anyway? We could hardly\n> be sure that the dead process hadn't been connected to shared memory.\n\nThis argument doesn't make any sense to me. In almost all cases,\npostgres is not init, and if a backend forks a child which stomps on\nshared memory and exits, the postmaster will not know that there is a\nproblem and will not restart. In practice this is not a problem,\nbecause the core code is careful not to touch shared memory in\nchildren that it forks, and extensions written by reasonably smart\npeople aren't going to do that either, because it's not very hard to\nfigure out that it can't possibly work. So, in the rare case where\npostgres IS init, and it finds out that a descendent process which is\nnot a direct child has exited, it should do the same thing that we do\nin all the other cases where a descendent process that is not a direct\nchild has exited, viz. nothing. And if that's the wrong idea - I don't\nthink it is - then we should fix it in all cases, not just the one\nwhere postgres is init.\n\nI don't have a view on whether it is reasonable or prudent to teach\npostgres to work as init, because I don't really know what's involved.\nBut I think you're taking a position that is basically blind panic. If\nsomething happens that we normally wouldn't even know about, and\nbecause of an unusual circumstance we do know about it, we should not\nleap to the conclusion that it is something bad. All that does is make\nthe system behavior less consistent, and thus harder for users.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 14:05:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "On 5/4/21 1:43 PM, Tom Lane wrote:\n> Greg Stark <stark@mit.edu> writes:\n>> On Mon, 3 May 2021 at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> BTW, as far as that goes, I think the general recommendation is that\n>>> the datadir shouldn't be a mount point, because bad things happen if\n>>> you mount or unmount the drive while the postmaster is up. I could\n>>> see enforcing that, if we could find a reasonably platform-independent\n>>> way to do it.\n> \n>> I don't think the problem is unmounting -- on BSD you have to try\n>> really hard to unmount filesystems that have files open on them and\n>> afaik you can't do it on Linux at all (which I still claim is the\n>> original sin that led to the fsync issues).\n>> The problem was mounting filesystems if it happened late -- ie. After\n>> Postgres had started up. It was exacerbated by some startup scripts\n>> that would automatically run initdb if there was nothing present.\n> \n> Yeah, at least that was the case that somebody (Joe Conway if memory\n> serves) reported years ago.\n\n\nGuilty as charged ;-)\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Tue, 4 May 2021 14:07:06 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, May 3, 2021 at 3:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think that'd be a net reduction in reliability, not an improvement.\n>> In most scenarios it'd do little except mask bugs. And who's to say\n>> that ignoring unexpected child deaths is okay, anyway? We could hardly\n>> be sure that the dead process hadn't been connected to shared memory.\n\n> This argument doesn't make any sense to me. In almost all cases,\n> postgres is not init, and if a backend forks a child which stomps on\n> shared memory and exits, the postmaster will not know that there is a\n> problem and will not restart. In practice this is not a problem,\n> because the core code is careful not to touch shared memory in\n> children that it forks, and extensions written by reasonably smart\n> people aren't going to do that either, because it's not very hard to\n> figure out that it can't possibly work. So, in the rare case where\n> postgres IS init, and it finds out that a descendent process which is\n> not a direct child has exited, it should do the same thing that we do\n> in all the other cases where a descendent process that is not a direct\n> child has exited, viz. nothing. And if that's the wrong idea - I don't\n> think it is - then we should fix it in all cases, not just the one\n> where postgres is init.\n\nYou are arguing from assumptions not in evidence, specifically that\nif we reap a PID that isn't one we recognize, this must be what\nhappened. I think it's *at least* as likely that the case implies\nsome bug in the postmaster's child-process bookkeeping, in which\ncase doing nothing is not a good answer. (The fact that that's\nwhat we do today doesn't make it right.) I don't wish to\nlobotomize our ability to detect such problems in order to support\nincompetently-configured containers.\n\nIndependently of that, as was pointed out upthread, being init requires\nmore than just ignoring unrecognized results from waitpid. We shouldn't\ntake on that responsibility when there are perfectly good solutions out\nthere already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 May 2021 14:26:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "On Tue, May 4, 2021 at 2:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You are arguing from assumptions not in evidence, specifically that\n> if we reap a PID that isn't one we recognize, this must be what\n> happened. I think it's *at least* as likely that the case implies\n> some bug in the postmaster's child-process bookkeeping, ...\n\nIt's hard to rule that out completely, but it doesn't seem incredibly\nlikely to me. I would think that if we had such bugs they would result\nin system instability that is also not in evidence.\n\n> Independently of that, as was pointed out upthread, being init requires\n> more than just ignoring unrecognized results from waitpid. We shouldn't\n> take on that responsibility when there are perfectly good solutions out\n> there already.\n\nThat's a separate point that should be judged on its own merits. I\ndon't have an educated opinion on how hard it would be, or how\nvaluable it would be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 14:55:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "\nOn 5/3/21 5:13 PM, Dagfinn Ilmari Mannsåker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n>\n>> Maybe we should put in a startup-time check, analogous to the\n>> can't-run-as-root test, that the postmaster mustn't be PID 1.\n> Given that a number of minimal `init`s already exist specifically for\n> the case of running a single application in a container, I don't think\n> Postgres should to reinvent that wheel. A quick eyball of the output of\n> `apt search container init` on a Debian Bullseyse system reveals at\n> least four:\n>\n> - https://github.com/Yelp/dumb-init\n> - https://github.com/krallin/tini\n> - https://github.com/fpco/pid1\n> - https://github.com/openSUSE/catatonit\n>\n> The first one also explains why there's more to being PID 1 than just\n> handling reparented children.\n>\n\n\n\nI looked at the first of these, and it seems perfectly sensible. So I\nagree all we really need to do is refuse to run as PID 1.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 4 May 2021 15:57:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/3/21 5:13 PM, Dagfinn Ilmari Mannsåker wrote:\n>> Given that a number of minimal `init`s already exist specifically for\n>> the case of running a single application in a container, I don't think\n>> Postgres should to reinvent that wheel. A quick eyball of the output of\n>> `apt search container init` on a Debian Bullseyse system reveals at\n>> least four:\n>> \n>> - https://github.com/Yelp/dumb-init\n>> - https://github.com/krallin/tini\n>> - https://github.com/fpco/pid1\n>> - https://github.com/openSUSE/catatonit\n>> \n>> The first one also explains why there's more to being PID 1 than just\n>> handling reparented children.\n\n> I looked at the first of these, and it seems perfectly sensible. So I\n> agree all we really need to do is refuse to run as PID 1.\n\n[ for the archives' sake ] I looked at the documentation for dumb-init,\nand it claims there are basically two things weird about init:\n\n1. The kernel applies different signal handling rules to it.\n\n2. It has to reap children it didn't spawn.\n\nWhether that list is exhaustive, I dunno ... it has an odor of\nLinux-specificity to me. Anyway, #2 is clearly no problem for\nthe postmaster, since it's doing that anyway; quibbles about\nwhether it *should* do that without complaining aside. We could\nimagine trying to handle #1, but that seems like the sort of dank\nsystem-specific corner that we'd regret having got into. If the\nbehavior for init isn't consistent with our needs, or changes\nacross platforms or kernel versions, things could get very messy\nindeed. I'm still thinking that we're best off refusing to do\nthat and making people install one of these shims that's meant\nfor the job.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 May 2021 16:35:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "On Tue, May 04, 2021 at 01:35:50PM -0400, Greg Stark wrote:\n> Fwiw, I have a suspicion that the right check for being init is\n> whether `pid == ppid`.\n\npryzbyj@pryzbyj:~$ ps -wwf 1\nUID PID PPID C STIME TTY STAT TIME CMD\nroot 1 0 0 2020 ? Ss 10:28 /sbin/init\n\nAs I recall, on some OS, pid 0 is the \"swapper\".\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 May 2021 15:53:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init, process exits cause\n restart" }, { "msg_contents": "On Tue, May 4, 2021 at 4:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm still thinking that we're best off refusing to do\n> that and making people install one of these shims that's meant\n> for the job.\n\nI have to admit that I care less about the specific issue here than\nabout the general issue of being open to hearing what the user needs\nactually are. I honestly have no idea whether it's sensible to want to\nrun postgres as init. If people who know about container stuff say\nthat's a dumb idea and you shouldn't do it, then IMHO your conclusion\nthat we should simply disallow it is 100% correct. But if those people\nshow up and say, no, it's actually super-convenient for postgres to\nrun as init and using one of those shim things has significant\ndownsides that are hard to mitigate, and if further we could do what\nthey say they need with just a little bit of extra code, then IMHO\nyour conclusion is 100% wrong. Now so far as I can see right now\nneither conclusion is crystal clear - opinions seem to be a bit mixed.\nSo right now I don't really know what to think. I just don't want to\nfall into the trap of thinking that core developers are somehow in a\nbetter place to know the right answer than users.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 21:16:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have to admit that I care less about the specific issue here than\n> about the general issue of being open to hearing what the user needs\n> actually are. I honestly have no idea whether it's sensible to want to\n> run postgres as init. If people who know about container stuff say\n> that's a dumb idea and you shouldn't do it, then IMHO your conclusion\n> that we should simply disallow it is 100% correct. But if those people\n> show up and say, no, it's actually super-convenient for postgres to\n> run as init and using one of those shim things has significant\n> downsides that are hard to mitigate, and if further we could do what\n> they say they need with just a little bit of extra code, then IMHO\n> your conclusion is 100% wrong. Now so far as I can see right now\n> neither conclusion is crystal clear - opinions seem to be a bit mixed.\n> So right now I don't really know what to think. I just don't want to\n> fall into the trap of thinking that core developers are somehow in a\n> better place to know the right answer than users.\n\nI don't claim to have an opinion about how convenient it would be\nfor users to not need an init shim. I do claim to have a qualified\nopinion about how hard it would be for us to support the case. It'd\nhobble our ability to detect child-process bookkeeping errors, and\nit'd put constraints on how we manage the postmaster's signal handling.\nMaybe those constraints will never matter, but that's a contract I\ndon't really want to buy into for this seemingly-not-large benefit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 21:31:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG in container w/ pid namespace is init,\n process exits cause restart" } ]
[ { "msg_contents": "I and various colleagues of mine have from time to time encountered\nsystems that got a bit behind on WAL archiving, because the\narchive_command started failing and nobody noticed right away.\nIdeally, people should have monitoring for this and put it to rights\nimmediately, but some people don't. If those people happen to have a\nrelatively small pg_wal partition, they will likely become aware of\nthe issue when it fills up and takes down the server, but some users\nprovision disk space pretty generously and therefore nothing compels\nthem to notice the issue until they fill it up. In at least one case,\non a system that was actually generating a reasonable amount of WAL,\nthis took in excess of six months.\n\nAs you might imagine, pg_wal can get fairly large in such scenarios,\nbut the user is generally less concerned with solving that problem\nthan they are with getting the system back up. It is doubtless true\nthat the user would prefer to shrink the disk usage down to something\nmore reasonable over time, but on the facts as presented, it can't\nreally be an urgent issue for them. What they really need is just free\nup a little disk space somehow or other and then get archiving running\nfast enough to keep up with future WAL generation. Regrettably, the\narchiver cannot do this, not even if you set archive_command =\n/bin/true, because the archiver will barely ever actually run the\narchive_command. Instead, it will spend virtually all of its time\ncalling readdir(), because for some reason it feels a need to make a\ncomplete scan of the archive_status directory before archiving a WAL\nfile, and then it has to make another scan before archiving the next\none.\n\nSomeone - and it's probably for the best that the identity of that\nperson remains unknown to me - came up with a clever solution to this\nproblem, which is now used almost as a matter of routine whenever this\ncomes up. 
You just run pg_archivecleanup on your pg_wal directory, and\nthen remove all the corresponding .ready files and call it a day. I\nhaven't scrutinized the code for pg_archivecleanup, but evidently it\navoids needing O(n^2) time for this and therefore can clean up the\nwhole directory in something like the amount of time the archiver\nwould take to deal with a single file. While this seems to be quite an\neffective procedure and I have not yet heard any user complaints, it\nseems disturbingly error-prone, and honestly shouldn't ever be\nnecessary. The issue here is only that pgarch.c acts as though after\narchiving 000000010000000000000001, 000000010000000000000002, and then\n000000010000000000000003, we have no idea what file we might need to\narchive next. Could it, perhaps, be 000000010000000000000004? Only a\nfull directory scan will tell us the answer!\n\nI have two possible ideas for addressing this; perhaps other people\nwill have further suggestions. A relatively non-invasive fix would be\nto teach pgarch.c how to increment a WAL file name. After archiving\nsegment N, check using stat() whether there's an .ready file for\nsegment N+1. If so, do that one next. If not, then fall back to\nperforming a full directory scan. As far as I can see, this is just\ncheap insurance. If archiving is keeping up, the extra stat() won't\nmatter much. If it's not, this will save more system calls than it\ncosts. Since during normal operation it shouldn't really be possible\nfor files to show up in pg_wal out of order, I don't really see a\nscenario where this changes the behavior, either. If there are gaps in\nthe sequence at startup time, this will cope with it exactly the same\nas we do now, except with a better chance of finishing before I\nretire.\n\nHowever, that's still pretty wasteful. 
Every time we have to wait for\nthe next file to be ready for archiving, we'll basically fall back to\nrepeatedly scanning the whole directory, waiting for it to show up.\nAnd I think that we can't get around that by just using stat() to look\nfor the appearance of the file we expect to see, because it's possible\nthat we might be doing all of this on a standby which then gets\npromoted, or some upstream primary gets promoted, and WAL files start\nappearing on a different timeline, making our prediction of what the\nnext filename will be incorrect. But perhaps we could work around this\nby allowing pgarch.c to access shared memory, in which case it could\nexamine the current timeline whenever it wants, and probably also\nwhatever LSNs it needs to know what's safe to archive. If we did that,\ncould we just get rid of the .ready and .done files altogether? Are\nthey just a really expensive IPC mechanism to avoid a shared memory\nconnection, or is there some more fundamental reason why we need them?\nAnd is there any good reason why the archiver shouldn't be connected\nto shared memory? It is certainly nice to avoid having more processes\nconnected to shared memory than necessary, but the current scheme is\nso inefficient that I think we end up worse off.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 16:49:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": ".ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nOn 2021-05-03 16:49:16 -0400, Robert Haas wrote:\n> I have two possible ideas for addressing this; perhaps other people\n> will have further suggestions. A relatively non-invasive fix would be\n> to teach pgarch.c how to increment a WAL file name. After archiving\n> segment N, check using stat() whether there's an .ready file for\n> segment N+1. If so, do that one next. If not, then fall back to\n> performing a full directory scan.\n\nHm. 
I wonder if it'd not be better to determine multiple files to be\narchived in one readdir() pass?\n\n\n> As far as I can see, this is just cheap insurance. If archiving is\n> keeping up, the extra stat() won't matter much. If it's not, this will\n> save more system calls than it costs. Since during normal operation it\n> shouldn't really be possible for files to show up in pg_wal out of\n> order, I don't really see a scenario where this changes the behavior,\n> either. If there are gaps in the sequence at startup time, this will\n> cope with it exactly the same as we do now, except with a better\n> chance of finishing before I retire.\n\nThere's definitely gaps in practice :(. Due to the massive performance\nissues with archiving there are several tools that archive multiple\nfiles as part of one archive command invocation (and mark the additional\narchived files as .done immediately).\n\n\n> However, that's still pretty wasteful. Every time we have to wait for\n> the next file to be ready for archiving, we'll basically fall back to\n> repeatedly scanning the whole directory, waiting for it to show up.\n\nHm. That seems like it's only an issue because .done and .ready are in\nthe same directory? Otherwise the directory would be empty while we're\nwaiting for the next file to be ready to be archived. I hate that that's\na thing but given teh serial nature of archiving, with high per-call\noverhead, I don't think it'd be ok to just break that without a\nreplacement :(.\n\n\n> But perhaps we could work around this by allowing pgarch.c to access\n> shared memory, in which case it could examine the current timeline\n> whenever it wants, and probably also whatever LSNs it needs to know\n> what's safe to archive.\n\nFWIW, the shared memory stats patch implies doing that, since the\narchiver reports stats.\n\n\n> If we did that, could we just get rid of the .ready and .done files\n> altogether? 
Are they just a really expensive IPC mechanism to avoid a\n> shared memory connection, or is there some more fundamental reason why\n> we need them?\n\nWhat kind of shared memory mechanism are you thinking of? Due to\ntimelines and history files I don't think simple position counters would\nbe quite enough.\n\nI think the aforementioned \"batching\" archive commands are part of the\nproblem :(.\n\n\n\n> And is there any good reason why the archiver shouldn't be connected\n> to shared memory? It is certainly nice to avoid having more processes\n> connected to shared memory than necessary, but the current scheme is\n> so inefficient that I think we end up worse off.\n\nI think there is no fundamental for avoiding shared memory in the\narchiver. I guess there's a minor robustness advantage, because the\nforked shell to start the archvive command won't be attached to shared\nmemory. But that's only until the child exec()s to the archive command.\n\nThere is some minor performance advantage as well, not having to process\nthe often large and contended memory mapping for shared_buffers is\nprobably measurable - but swamped by the cost of needing to actually\narchive the segment.\n\n\nMy only \"concern\" with doing anything around this is that I think the\nwhole approach of archive_command is just hopelessly broken, with even\njust halfway busy servers only able to keep up archiving if they muck\naround with postgres internal data during archive command execution. Add\nto that how hard it is to write a robust archive command (e.g. the one\nin our docs still suggests test ! 
-f && cp, which means that copy\nfailing in the middle yields an incomplete archive)...\n\nWhile I don't think it's all that hard to design a replacement, it's\nhowever likely still more work than addressing the O(n^2) issue, so ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 21:27:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "\n\n> 4 мая 2021 г., в 09:27, Andres Freund <andres@anarazel.de> написал(а):\n> \n> Hi,\n> \n> On 2021-05-03 16:49:16 -0400, Robert Haas wrote:\n>> I have two possible ideas for addressing this; perhaps other people\n>> will have further suggestions. A relatively non-invasive fix would be\n>> to teach pgarch.c how to increment a WAL file name. After archiving\n>> segment N, check using stat() whether there's an .ready file for\n>> segment N+1. If so, do that one next. If not, then fall back to\n>> performing a full directory scan.\n> \n> Hm. I wonder if it'd not be better to determine multiple files to be\n> archived in one readdir() pass?\n\nFWIW we use both methods [0]. WAL-G has a pipe with WAL-push candidates.\nWe add there some predictions, and if it does not fill upload concurrency - list archive_status contents (concurrently to background uploads).\n\n> \n> \n>> As far as I can see, this is just cheap insurance. If archiving is\n>> keeping up, the extra stat() won't matter much. If it's not, this will\n>> save more system calls than it costs. Since during normal operation it\n>> shouldn't really be possible for files to show up in pg_wal out of\n>> order, I don't really see a scenario where this changes the behavior,\n>> either. If there are gaps in the sequence at startup time, this will\n>> cope with it exactly the same as we do now, except with a better\n>> chance of finishing before I retire.\n> \n> There's definitely gaps in practice :(. 
Due to the massive performance\n> issues with archiving there are several tools that archive multiple\n> files as part of one archive command invocation (and mark the additional\n> archived files as .done immediately).\nInterestingly, we used to rename .ready->.done some years ago. But pgBackRest developers convinced me that it's not a good idea to mess with data dir [1]. Then pg_probackup developers convinced me that renaming .ready->.done on our own scales better and implemented this functionality for us [2].\n\n>> If we did that, could we just get rid of the .ready and .done files\n>> altogether? Are they just a really expensive IPC mechanism to avoid a\n>> shared memory connection, or is there some more fundamental reason why\n>> we need them?\n> \n> What kind of shared memory mechanism are you thinking of? Due to\n> timelines and history files I don't think simple position counters would\n> be quite enough.\n> \n> I think the aforementioned \"batching\" archive commands are part of the\n> problem :(.
Archiver could listen to some archiver channel while sleeping or something like that.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/x4m/wal-g/blob/c8a785217fe1123197280fd24254e51492bf5a68/internal/bguploader.go#L119-L137\n[1] https://www.postgresql.org/message-id/flat/20180828200754.GI3326%40tamriel.snowman.net#0b07304710b9ce5244438b7199447ee7\n[2] https://github.com/wal-g/wal-g/pull/950\n\n", "msg_date": "Tue, 4 May 2021 13:07:48 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, May 4, 2021 at 12:27 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-05-03 16:49:16 -0400, Robert Haas wrote:\n> > I have two possible ideas for addressing this; perhaps other people\n> > will have further suggestions. A relatively non-invasive fix would be\n> > to teach pgarch.c how to increment a WAL file name. After archiving\n> > segment N, check using stat() whether there's an .ready file for\n> > segment N+1. If so, do that one next. If not, then fall back to\n> > performing a full directory scan.\n>\n> Hm. I wonder if it'd not be better to determine multiple files to be\n> archived in one readdir() pass?\n\nI think both methods have some merit. If we had a way to pass a range\nof files to archive_command instead of just one, then your way is\ndistinctly better, and perhaps we should just go ahead and invent such\na thing. If not, your way doesn't entirely solve the O(n^2) problem,\nsince you have to choose some upper bound on the number of file names\nyou're willing to buffer in memory, but it may lower it enough that it\nmakes no practical difference. 
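As an aside, the "increment a WAL file name" step quoted above is just fixed-width hex arithmetic. A rough sketch in Python, for illustration only: the function name is made up, the real code would be C built on the XLogFileName/XLogFromFileName macros in xlog_internal.h, and this assumes the default 16MB wal_segment_size, i.e. 0x100 segments per xlogid:

```python
def next_wal_segment(fname, segments_per_xlogid=0x100):
    # A WAL segment file name is three 8-hex-digit fields:
    # timeline ID, "xlogid" (high bits), and segment (low bits).
    tli = int(fname[0:8], 16)
    log = int(fname[8:16], 16)
    seg = int(fname[16:24], 16)
    # Convert to a flat segment number, add one, and split it back,
    # so the low field wraps from FF into the next xlogid.
    segno = log * segments_per_xlogid + seg + 1
    return "%08X%08X%08X" % (tli,
                             segno // segments_per_xlogid,
                             segno % segments_per_xlogid)
```

With that, after archiving segment N a single stat() of next_wal_segment(N) + ".ready" decides whether the full directory scan can be skipped.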
I am somewhat inclined to think that it\nwould be good to start with the method I'm proposing, since it is a\nclear-cut improvement over what we have today and can be done with a\nrelatively limited amount of code change and no redesign, and then\nperhaps do something more ambitious afterward.\n\n> There's definitely gaps in practice :(. Due to the massive performance\n> issues with archiving there are several tools that archive multiple\n> files as part of one archive command invocation (and mark the additional\n> archived files as .done immediately).\n\nGood to know.\n\n> > However, that's still pretty wasteful. Every time we have to wait for\n> > the next file to be ready for archiving, we'll basically fall back to\n> > repeatedly scanning the whole directory, waiting for it to show up.\n>\n> Hm. That seems like it's only an issue because .done and .ready are in\n> the same directory? Otherwise the directory would be empty while we're\n> waiting for the next file to be ready to be archived.\n\nI think that's right.\n\n> I hate that that's\n> a thing but given the serial nature of archiving, with high per-call\n> overhead, I don't think it'd be ok to just break that without a\n> replacement :(.\n\nI don't know quite what you mean by this. Moving .done files to a\nseparate directory from .ready files could certainly be done and I\ndon't think it even would be that hard. It does seem like a bit of a\nhalf measure though. If we're going to redesign this I think we ought\nto be more ambitious than that.\n\n> > But perhaps we could work around this by allowing pgarch.c to access\n> > shared memory, in which case it could examine the current timeline\n> > whenever it wants, and probably also whatever LSNs it needs to know\n> > what's safe to archive.\n>\n> FWIW, the shared memory stats patch implies doing that, since the\n> archiver reports stats.\n\nAre you planning to commit that for v15?
If so, will it be early in\nthe cycle, do you think?\n\n> What kind of shared memory mechanism are you thinking of? Due to\n> timelines and history files I don't think simple position counters would\n> be quite enough.\n\nI was thinking of simple position counters, but we could do something\nmore sophisticated. I don't even care if we stick with .ready/.done\nfor low-frequency stuff like timeline and history files. But I think\nwe'd be better off avoiding it for WAL files, because there are just\ntoo many of them, and it's too hard to create a system that actually\nscales. Or else we need a way for a single .ready file to cover many\nWAL files in need of being archived, rather than just one.\n\n> I think there is no fundamental reason for avoiding shared memory in the\n> archiver. I guess there's a minor robustness advantage, because the\n> forked shell to start the archive command won't be attached to shared\n> memory. But that's only until the child exec()s to the archive command.\n\nThat doesn't seem like a real issue because we're not running\nuser-defined code between fork() and exec().\n\n> There is some minor performance advantage as well, not having to process\n> the often large and contended memory mapping for shared_buffers is\n> probably measurable - but swamped by the cost of needing to actually\n> archive the segment.\n\nProcess it how?\n\nAnother option would be to have two processes. You could have one that\nstayed connected to shared memory and another that JUST ran the\narchive_command, and they could talk over a socket or something. But\nthat would add a bunch of extra complexity, so I don't want to do it\nunless we actually need to do it.\n\n> My only \"concern\" with doing anything around this is that I think the\n> whole approach of archive_command is just hopelessly broken, with even\n> just halfway busy servers only able to keep up archiving if they muck\n> around with postgres internal data during archive command execution.
Add\n> to that how hard it is to write a robust archive command (e.g. the one\n> in our docs still suggests test ! -f && cp, which means that copy\n> failing in the middle yields an incomplete archive)...\n>\n> While I don't think it's all that hard to design a replacement, it's\n> however likely still more work than addressing the O(n^2) issue, so ...\n\nI think it is probably a good idea to fix the O(n^2) issue first, and\nthen as a separate step try to redefine things so that a decent\narchive command doesn't have to poke around as much at internal stuff.\nPart of that should probably involve having a way to pass a range of\nfiles to archive_command instead of a single file. I was also\nwondering whether we should go further and allow for the archiving to\nbe performed by C code running inside the backend rather than shelling\nout to an external command.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 10:07:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, May 4, 2021 at 7:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 4, 2021 at 12:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-05-03 16:49:16 -0400, Robert Haas wrote:\n> > > I have two possible ideas for addressing this; perhaps other people\n> > > will have further suggestions. A relatively non-invasive fix would be\n> > > to teach pgarch.c how to increment a WAL file name. After archiving\n> > > segment N, check using stat() whether there's an .ready file for\n> > > segment N+1. If so, do that one next. If not, then fall back to\n> > > performing a full directory scan.\n> >\n> > Hm. I wonder if it'd not be better to determine multiple files to be\n> > archived in one readdir() pass?\n>\n> I think both methods have some merit.
If we had a way to pass a range\n> of files to archive_command instead of just one, then your way is\n> distinctly better, and perhaps we should just go ahead and invent such\n> a thing. If not, your way doesn't entirely solve the O(n^2) problem,\n> since you have to choose some upper bound on the number of file names\n> you're willing to buffer in memory, but it may lower it enough that it\n> makes no practical difference. I am somewhat inclined to think that it\n> would be good to start with the method I'm proposing, since it is a\n> clear-cut improvement over what we have today and can be done with a\n> relatively limited amount of code change and no redesign, and then\n> perhaps do something more ambitious afterward.\n\nI agree that if we continue to archive one file using the archive\ncommand then Robert's solution of checking the existence of the next\nWAL segment (N+1) has an advantage. But, currently, if you notice\npgarch_readyXlog always considers any history file as the oldest file\nbut that will not be true if we try to predict the next WAL segment\nname. For example, if we have archived 000000010000000000000004 then\nnext we will look for 000000010000000000000005 but after generating\nsegment 000000010000000000000005, if there is a timeline switch then\nwe will have the below files in the archive status\n(000000010000000000000005.ready, 00000002.history file). Now, the\nexisting archiver will archive 00000002.history first whereas our code\nwill archive 000000010000000000000005 first. That said, I don't see\nany problem with that because before archiving any segment file from\nTL 2 we will definitely archive the 00000002.history file because we\nwill not find the 000000010000000000000006.ready and we will scan the\nfull directory and now we will find 00000002.history as the oldest file.\n\n>\n> > > However, that's still pretty wasteful.
Every time we have to wait for\n> > > the next file to be ready for archiving, we'll basically fall back to\n> > > repeatedly scanning the whole directory, waiting for it to show up.\n\nIs this true? that only when we have to wait for the next file to be\nready we go for scanning? If I read the code in\n\"pgarch_ArchiverCopyLoop\", for every single file to archive it is\ncalling \"pgarch_readyXlog\", wherein it scans the directory every time.\nSo I did not understand your point that only when it needs to wait for\nthe next .ready file it needs to scan the full directory. It appeared\nit always scans the full directory after archiving each WAL segment.\nWhat am I missing?\n\n> > Hm. That seems like it's only an issue because .done and .ready are in\n> > the same directory? Otherwise the directory would be empty while we're\n> > waiting for the next file to be ready to be archived.\n\nIf we agree with your above point that it only needs to scan the full\ndirectory when it has to wait for the next file to be ready then\nmaking a separate directory for .done file can improve a lot because\nthe directory will be empty so scanning will not be very costly.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 21:24:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, May 4, 2021 at 11:54 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I agree that if we continue to archive one file using the archive\n> command then Robert's solution of checking the existence of the next\n> WAL segment (N+1) has an advantage. But, currently, if you notice\n> pgarch_readyXlog always considers any history file as the oldest file\n> but that will not be true if we try to predict the next WAL segment\n> name.
For example, if we have archived 000000010000000000000004 then\n> next we will look for 000000010000000000000005 but after generating\n> segment 000000010000000000000005, if there is a timeline switch then\n> we will have the below files in the archive status\n> (000000010000000000000005.ready, 00000002.history file). Now, the\n> existing archiver will archive 00000002.history first whereas our code\n> will archive 000000010000000000000005 first. That said, I don't see\n> any problem with that because before archiving any segment file from\n> TL 2 we will definitely archive the 00000002.history file because we\n> will not find the 000000010000000000000006.ready and we will scan the\n> full directory and now we will find 00000002.history as the oldest file.\n\nOK, that makes sense and is good to know.\n\n> > > > However, that's still pretty wasteful. Every time we have to wait for\n> > > > the next file to be ready for archiving, we'll basically fall back to\n> > > > repeatedly scanning the whole directory, waiting for it to show up.\n>\n> Is this true? that only when we have to wait for the next file to be\n> ready we go for scanning? If I read the code in\n> \"pgarch_ArchiverCopyLoop\", for every single file to archive it is\n> calling \"pgarch_readyXlog\", wherein it scans the directory every time.\n> So I did not understand your point that only when it needs to wait for\n> the next .ready file it needs to scan the full directory. It appeared\n> it always scans the full directory after archiving each WAL segment.\n> What am I missing?\n\nIt's not true now, but my proposal would make it true.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 12:42:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, May 4, 2021 at 10:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> > Is this true?
that only when we have to wait for the next file to be\n> > ready we go for scanning? If I read the code in\n> > \"pgarch_ArchiverCopyLoop\", for every single file to archive it is\n> > calling \"pgarch_readyXlog\", wherein it scans the directory every time.\n> > So I did not understand your point that only when it needs to wait for\n> > the next .ready file it needs to scan the full directory. It appeared\n> > it always scans the full directory after archiving each WAL segment.\n> > What am I missing?\n>\n> It's not true now, but my proposal would make it true.\n\nOkay, got it. Thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 11:03:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, May 4, 2021 at 11:54 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I agree that if we continue to archive one file using the archive\n> > command then Robert's solution of checking the existence of the next\n> > WAL segment (N+1) has an advantage. But, currently, if you notice\n> > pgarch_readyXlog always considers any history file as the oldest file\n> > but that will not be true if we try to predict the next WAL segment\n> > name. For example, if we have archived 000000010000000000000004 then\n> > next we will look for 000000010000000000000005 but after generating\n> > segment 000000010000000000000005, if there is a timeline switch then\n> > we will have the below files in the archive status\n> > (000000010000000000000005.ready, 00000002.history file). Now, the\n> > existing archiver will archive 00000002.history first whereas our code\n> > will archive 000000010000000000000005 first.
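To make the selection logic under discussion concrete, here is a rough sketch in Python (illustrative only: the helper name pick_next_ready is made up, and the real logic lives in pgarch_readyXlog in pgarch.c). It tries a cheap existence check for the predicted next segment and falls back to a full scan that prefers timeline history files:

```python
import os

def pick_next_ready(status_dir, predicted=None):
    # Fast path: if the segment we expect to come next already has a
    # .ready file, take it without scanning the directory at all.
    if predicted is not None:
        if os.path.exists(os.path.join(status_dir, predicted + ".ready")):
            return predicted
    # Slow path: full scan, giving timeline history files priority and
    # otherwise taking the lexicographically oldest name, roughly what
    # pgarch_readyXlog does today.
    ready = [name[:-len(".ready")]
             for name in os.listdir(status_dir)
             if name.endswith(".ready")]
    if not ready:
        return None
    history = [f for f in ready if f.endswith(".history")]
    return min(history) if history else min(ready)
```

Note the trade-off raised in the quoted text: the fast path can return a plain segment even while a newer timeline's .history file is pending; only the fallback scan restores the history-first ordering.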
That said, I don't see\n> > any problem with that because before archiving any segment file from\n> > TL 2 we will definitely archive the 00000002.history file because we\n> > will not find the 000000010000000000000006.ready and we will scan the\n> > full directory and now we will find 00000002.history as the oldest file.\n> \n> OK, that makes sense and is good to know.\n\nI expect David will chime in on this thread too, but I did want to point\nout that when it comes to archiving history files you'd *really* like\nthat to be done just about as quickly as absolutely possible, to avoid\nthe case that we saw before that code was added, to wit: two promotions\ndone too quickly that ended up with conflicting history and possibly\nconflicting WAL files trying to be archived, and ensuing madness.\n\nIt's not just about making sure that we archive the history file for a\ntimeline before archiving WAL segments along that timeline but also\nabout making sure we get that history file into the archive as fast as\nwe can, and archiving a 16MB WAL first would certainly delay that.\n\nThanks,\n\nStephen", "msg_date": "Wed, 5 May 2021 13:06:01 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Wed, May 5, 2021 at 1:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n> It's not just about making sure that we archive the history file for a\n> timeline before archiving WAL segments along that timeline but also\n> about making sure we get that history file into the archive as fast as\n> we can, and archiving a 16MB WAL first would certainly delay that.\n\nOoph. That's a rather tough constraint. Could we get around it by\nintroducing some kind of signalling mechanism, perhaps?
Like if\nthere's a new history file, that must mean the server has switched\ntimelines -- I think, anyway -- so if we notified the archiver every\ntime there was a timeline switch it could react accordingly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 16:00:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, May 5, 2021 at 1:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > It's not just about making sure that we archive the history file for a\n> > timeline before archiving WAL segments along that timeline but also\n> > about making sure we get that history file into the archive as fast as\n> > we can, and archiving a 16MB WAL first would certainly delay that.\n> \n> Ooph. That's a rather tough constraint. Could we get around it by\n> introducing some kind of signalling mechanism, perhaps? 
Like if\n> there's a new history file, that must mean the server has switched\n> timelines -- I think, anyway -- so if we notified the archiver every\n> time there was a timeline switch it could react accordingly.\n\nI would think something like that would be alright and not worse than\nwhat we've got now.\n\nThat said, in an ideal world, we'd have a way to get the new timeline to\nswitch to in a way that doesn't leave open race conditions, so as long\nwe're talking about big changes to the way archiving and archive_command\nwork (or about throwing out the horrible idea that is archive_command in\nthe first place and replacing it with appropriate hooks such that\nsomeone could install an extension which would handle archiving...), I\nwould hope we'd have a way of saying \"please, atomically, go get me a new\ntimeline.\"\n\nJust as a reminder for those following along at home, as I'm sure you're\nalready aware, the way we figure out what timeline to switch to when a\nreplica is getting promoted is that we go run the restore command asking\nfor history files until we get back \"nope, there is no file named\n0000123.history\", and then we switch to that timeline and then try to\npush such a history file into the repo and hope that it works.\n\nThanks,\n\nStephen", "msg_date": "Wed, 5 May 2021 16:13:08 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Wed, May 5, 2021 at 4:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I would think something like that would be alright and not worse than\n> what we've got now.\n\nOK.\n\n> That said, in an ideal world, we'd have a way to get the new timeline to\n> switch to in a way that doesn't leave open race conditions, so as long\n> we're talking about big changes to the way archiving and archive_command\n> work (or about throwing out the horrible idea that is archive_command in\n> the first place and replacing 
it with appropriate hooks such that\n> someone could install an extension which would handle archiving...), I\n> would hope we'd have a way of saying \"please, atomically, go get me a new\n> timeline.\"\n>\n> Just as a reminder for those following along at home, as I'm sure you're\n> already aware, the way we figure out what timeline to switch to when a\n> replica is getting promoted is that we go run the restore command asking\n> for history files until we get back \"nope, there is no file named\n> 0000123.history\", and then we switch to that timeline and then try to\n> push such a history file into the repo and hope that it works.\n\nHuh, I had not thought about that problem. So, at the risk of getting\nsidetracked, what exactly are you asking for here? Let the extension\npick the timeline using an algorithm of its own devising, rather than\nhaving core do it? Or what?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 16:22:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 16:13:08 -0400, Stephen Frost wrote:\n> Just as a reminder for those following along at home, as I'm sure you're\n> already aware, the way we figure out what timeline to switch to when a\n> replica is getting promoted is that we go run the restore command asking\n> for history files until we get back \"nope, there is no file named\n> 0000123.history\", and then we switch to that timeline and then try to\n> push such a history file into the repo and hope that it works.\n\nWhich is why the whole concept of timelines as we have them right now is\npretty much useless. It is fundamentally impossible to guarantee unique\ntimeline ids in all cases if they are assigned sequentially at timeline\ncreation - consider needing to promote a node on both ends of a split\nnetwork. 
I'm quite doubtful that pretending to tackle this problem via\narchiving order is a good idea, given the fundamentally racy nature.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 13:27:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 16:22:21 -0400, Robert Haas wrote:\n> Huh, I had not thought about that problem. So, at the risk of getting\n> sidetracked, what exactly are you asking for here? Let the extension\n> pick the timeline using an algorithm of its own devising, rather than\n> having core do it? Or what?\n\nNot Stephen, but to me the most reasonable way to address this is to\nmake timeline identifiers wider and randomly allocated. The sequential\nlooking nature of timelines imo is actively unhelpful.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 13:31:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, May 5, 2021 at 4:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > That said, in an ideal world, we'd have a way to get the new timeline to\n> > switch to in a way that doesn't leave open race conditions, so as long\n> > we're talking about big changes to the way archiving and archive_command\n> > work (or about throwing out the horrible idea that is archive_command in\n> > the first place and replacing it with appropriate hooks such that\n> > someone could install an extension which would handle archiving...), I\n> > would hope we'd have a way of saying \"please, atomically, go get me a new\n> > timeline.\"\n> >\n> > Just as a reminder for those following along at home, as I'm sure you're\n> > already aware, the way we figure out what timeline to switch to when a\n> > replica is getting
promoted is that we go run the restore command asking\n> > for history files until we get back \"nope, there is no file named\n> > 0000123.history\", and then we switch to that timeline and then try to\n> > push such a history file into the repo and hope that it works.\n> \n> Huh, I had not thought about that problem. So, at the risk of getting\n> sidetracked, what exactly are you asking for here? Let the extension\n> pick the timeline using an algorithm of its own devising, rather than\n> having core do it? Or what?\n\nHaving the extension do it somehow is an interesting idea and one which\nmight be kind of cool.\n\nThe first thought I had was to make it archive_command's job to \"pick\"\nthe timeline by just re-trying to push the .history file (the actual\ncontents of it don't change, as the information in the file is about the\ntimeline we are switching *from* and at what LSN). That requires an\narchive command which will fail if that file already exists though and,\nideally, would perform the file archival in an atomic fashion (though\nthis last bit isn't strictly necessary- anything along these lines would\ncertainly be better than the current state).\n\nHaving an entirely independent command/hook that's explicitly for this\ncase would be another approach, of course, either in a manner that\nallows the extension to pick the destination timeline or is defined to\nbe \"return success only if the file is successfully archived, but do\n*not* overwrite any existing file of the same name and return an error\ninstead.\" and then the same approach as outlined above.\n\nThanks,\n\nStephen", "msg_date": "Wed, 5 May 2021 16:32:09 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Wed, May 5, 2021 at 4:31 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-05-05 16:22:21 -0400, Robert Haas wrote:\n> > Huh, I had not thought about that problem.
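The "do *not* overwrite, return an error instead" push described above can be sketched with O_EXCL semantics (illustrative Python; the helper name is made up, and a real archive module would also have to fsync the directory entry and handle remote storage):

```python
import os

def push_history_file(src, archive_dir):
    # O_CREAT|O_EXCL refuses to clobber an existing file, so if two
    # promoted nodes race to claim the same timeline, exactly one push
    # succeeds; the loser gets EEXIST and must pick another timeline.
    dst = os.path.join(archive_dir, os.path.basename(src))
    with open(src, "rb") as f:
        data = f.read()
    fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
```

A promotion that loses the race sees FileExistsError (EEXIST) and can retry with the next candidate timeline ID.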
So, at the risk of getting\n> > sidetracked, what exactly are you asking for here? Let the extension\n> > pick the timeline using an algorithm of its own devising, rather than\n> > having core do it? Or what?\n>\n> Not Stephen, but to me the most reasonable way to address this is to\n> make timeline identifiers wider and randomly allocated. The sequential\n> looking nature of timelines imo is actively unhelpful.\n\nYeah, I always wondered why we didn't assign them randomly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 16:36:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, May 5, 2021 at 4:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-05-05 16:22:21 -0400, Robert Haas wrote:\n> > > Huh, I had not thought about that problem. So, at the risk of getting\n> > > sidetracked, what exactly are you asking for here? Let the extension\n> > > pick the timeline using an algorithm of its own devising, rather than\n> > > having core do it? Or what?\n> >\n> > Not Stephen, but to me the most reasonable way to address this is to\n> > make timeline identifiers wider and randomly allocated. The sequential\n> > looking nature of timelines imo is actively unhelpful.\n> \n> Yeah, I always wondered why we didn't assign them randomly.\n\nBased on what we do today regarding the info we put into .history files,\ntrying to figure out which is the \"latest\" timeline might be a bit\ntricky with randomly selected timelines.
Maybe we could find a way to\nsolve that though.\n\nI do note that this comment in timeline.c is, ahem, perhaps over-stating\nthings a bit:\n\n * Note: while this is somewhat heuristic, it does positively guarantee\n * that (result + 1) is not a known timeline, and therefore it should\n * be safe to assign that ID to a new timeline.\n\nThanks,\n\nStephen", "msg_date": "Wed, 5 May 2021 16:53:01 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Wed, May 5, 2021 at 4:53 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I do note that this comment in timeline.c is, ahem, perhaps over-stating\n> things a bit:\n>\n> * Note: while this is somewhat heuristic, it does positively guarantee\n> * that (result + 1) is not a known timeline, and therefore it should\n> * be safe to assign that ID to a new timeline.\n\nOK, that made me laugh out loud.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 20:28:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Tue, 4 May 2021 10:07:51 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, May 4, 2021 at 12:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-05-03 16:49:16 -0400, Robert Haas wrote:\n> > > But perhaps we could work around this by allowing pgarch.c to access\n> > > shared memory, in which case it could examine the current timeline\n> > > whenever it wants, and probably also whatever LSNs it needs to know\n> > > what's safe to archive.\n> >\n> > FWIW, the shared memory stats patch implies doing that, since the\n> > archiver reports stats.\n> \n> Are you planning to commit that for v15?
If so, will it be early in\nthe cycle, do you think?\n\nFWIW It's already done for v14 individually.\n\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: Mon Mar 15 13:13:14 2021 +0900\n\n Make archiver process an auxiliary process.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 06 May 2021 16:23:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, May 6, 2021 at 3:23 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> FWIW It's already done for v14 individually.\n>\n> Author: Fujii Masao <fujii@postgresql.org>\n> Date: Mon Mar 15 13:13:14 2021 +0900\n>\n> Make archiver process an auxiliary process.\n\nOh, I hadn't noticed. Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 06:24:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "How are you envisioning the shared-memory signaling should work in the\noriginal sample case, where the archiver had been failing for half a\nyear ?\n\nOr should we perhaps have a system table for ready-to-archive WAL\nfiles to get around limitations of the file system to return just the\nneeded files with ORDER BY ... LIMIT as we already know how to make\nlookups in database fast ?\n\nCheers\nHannu\n\n\nOn Thu, May 6, 2021 at 12:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 3:23 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > FWIW It's already done for v14 individually.\n> >\n> > Author: Fujii Masao <fujii@postgresql.org>\n> > Date: Mon Mar 15 13:13:14 2021 +0900\n> >\n> > Make archiver process an auxiliary process.\n>\n> Oh, I hadn't noticed.
Thanks.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n\n\n", "msg_date": "Thu, 6 May 2021 21:23:36 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nOn 2021-05-06 21:23:36 +0200, Hannu Krosing wrote:\n> How are you envisioning the shared-memory signaling should work in the\n> original sample case, where the archiver had been failing for half a\n> year ?\n\nIf we leave history files and gaps in the .ready sequence aside for a\nsecond, we really only need an LSN or segment number describing the\ncurrent \"archive position\". Then we can iterate over the segments\nbetween the \"archive position\" and the flush position (which we already\nknow). Even if we needed to keep statting .ready/.done files (to handle\ngaps due to archive command mucking around with .ready/done), it'd still\nbe a lot cheaper than what we do today. It probably would even still be\ncheaper if we just statted all potentially relevant timeline history\nfiles all the time to send them first.\n\n\n> Or should we perhaps have a system table for ready-to-archive WAL\n> files to get around limitation sof file system to return just the\n> needed files with ORDER BY ... LIMIT as we already know how to make\n> lookups in database fast ?\n\nArchiving needs to work on a standby so that doesn't seem like an\noption.\n\nRegards,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 13:01:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nWe have addressed the O(n^2) problem which involves directory scan for\narchiving individual WAL files by maintaining a WAL counter to identify\nthe next WAL file in a sequence.\n\nWAL archiver scans the status directory to identify the next WAL file\nwhich needs to be archived. 
This directory scan can be minimized by\nmaintaining the log segment number of the current file which is being\narchived\nand incrementing it by '1' to get the next WAL file in a sequence. Archiver\ncan check the availability of the next file in status directory and in case\nif the\nfile is not available then it should fall-back to directory scan to get the\noldest\nWAL file.\n\nPlease find attached patch v1.\n\nThanks,\nDipesh\n\nOn Fri, May 7, 2021 at 1:31 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-05-06 21:23:36 +0200, Hannu Krosing wrote:\n> > How are you envisioning the shared-memory signaling should work in the\n> > original sample case, where the archiver had been failing for half a\n> > year ?\n>\n> If we leave history files and gaps in the .ready sequence aside for a\n> second, we really only need an LSN or segment number describing the\n> current \"archive position\". Then we can iterate over the segments\n> between the \"archive position\" and the flush position (which we already\n> know). Even if we needed to keep statting .ready/.done files (to handle\n> gaps due to archive command mucking around with .ready/done), it'd still\n> be a lot cheaper than what we do today. It probably would even still be\n> cheaper if we just statted all potentially relevant timeline history\n> files all the time to send them first.\n>\n>\n> > Or should we perhaps have a system table for ready-to-archive WAL\n> > files to get around limitation sof file system to return just the\n> > needed files with ORDER BY ... 
LIMIT as we already know how to make\n> > lookups in database fast ?\n>\n> Archiving needs to work on a standby so that doesn't seem like an\n> option.\n>\n> Regards,\n>\n> Andres Freund\n>\n>\n>", "msg_date": "Tue, 6 Jul 2021 11:36:32 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Jul 6, 2021 at 11:36 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n>\n> Hi,\n>\n> We have addressed the O(n^2) problem which involves directory scan for\n> archiving individual WAL files by maintaining a WAL counter to identify\n> the next WAL file in a sequence.\n>\n> WAL archiver scans the status directory to identify the next WAL file\n> which needs to be archived. This directory scan can be minimized by\n> maintaining the log segment number of the current file which is being archived\n> and incrementing it by '1' to get the next WAL file in a sequence. Archiver\n> can check the availability of the next file in status directory and in case if the\n> file is not available then it should fall-back to directory scan to get the oldest\n> WAL file.\n>\n> Please find attached patch v1.\n>\n\nI have a few suggestions on the patch\n1.\n+\n+ /*\n+ * Found the oldest WAL, reset timeline ID and log segment number to generate\n+ * the next WAL file in the sequence.\n+ */\n+ if (found && !historyFound)\n+ {\n+ XLogFromFileName(xlog, &curFileTLI, &nextLogSegNo, wal_segment_size);\n+ ereport(LOG,\n+ (errmsg(\"directory scan to archive write-ahead log file \\\"%s\\\"\",\n+ xlog)));\n+ }\n\nIf a history file is found we are not updating curFileTLI and\nnextLogSegNo, so it will attempt the previously found segment. This\nis fine because it will not find that segment and it will rescan the\ndirectory. 
But I think we can do better, instead of searching the\nsame old segment in the previous timeline we can search that old\nsegment in the new TL so that if the TL switch happened within the\nsegment then we will find the segment and we will avoid the directory\nsearch.\n\n\n /*\n+ * Log segment number and timeline ID to get next WAL file in a sequence.\n+ */\n+static XLogSegNo nextLogSegNo = 0;\n+static TimeLineID curFileTLI = 0;\n+\n\nSo everytime archiver will start with searching segno=0 in timeline=0.\nInstead of doing this can't we first scan the directory and once we\nget the first segment to archive then only we can start predicting the\nnext wal segment? I think there is nothing wrong even if we try to\nlook for seg 0 in timeline 0, everytime we start the archivar but that\nwill be true only once in the history of the cluster so why not skip\nthis until we scan the directory once?\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Jul 2021 13:50:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Greetings,\n\n* Dipesh Pandit (dipesh.pandit@gmail.com) wrote:\n> We have addressed the O(n^2) problem which involves directory scan for\n> archiving individual WAL files by maintaining a WAL counter to identify\n> the next WAL file in a sequence.\n\nThis seems to have missed the concerns raised in\nhttps://postgr.es/m/20210505170601.GF20766@tamriel.snowman.net ..?\n\nAnd also the comments immediately above the ones being added here:\n\n> @@ -596,29 +606,55 @@ pgarch_archiveXlog(char *xlog)\n> * larger ID; the net result being that past timelines are given higher\n> * priority for archiving. This seems okay, or at least not obviously worth\n> * changing.\n> + *\n> + * WAL files are generated in a specific order of log segment number. 
The\n> + * directory scan for each WAL file can be minimized by identifying the next\n> + * WAL file in the sequence. This can be achieved by maintaining log segment\n> + * number and timeline ID corresponding to WAL file currently being archived.\n> + * The log segment number of current WAL file can be incremented by '1' upon\n> + * successful archival to point to the next WAL file.\n\nspecifically about history files being given higher priority for\narchiving. If we go with this change then we'd at least want to rewrite\nor remove those comments, but I don't actually agree that we should\nremove that preference to archive history files ahead of WAL, for the\nreasons brought up previously.\n\nAs was suggested on that subthread, it seems like it should be possible\nto just track the current timeline and adjust what we're doing if the\ntimeline changes, and we should even know what the .history file is at\nthat point and likely don't even need to scan the directory for it, as\nit'll be the old timeline ID.\n\nThanks,\n\nStephen", "msg_date": "Tue, 6 Jul 2021 09:34:58 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "> specifically about history files being given higher priority for\n> archiving. If we go with this change then we'd at least want to rewrite\n> or remove those comments, but I don't actually agree that we should\n> remove that preference to archive history files ahead of WAL, for the\n> reasons brought up previously.\n\n> As was suggested on that subthread, it seems like it should be possible\n> to just track the current timeline and adjust what we're doing if the\n> timeline changes, and we should even know what the .history file is at\n> that point and likely don't even need to scan the directory for it, as\n> it'll be the old timeline ID.\n\nI agree, I missed this part. 
The .history file should be given higher\npreference.\nI will take care of it in the next patch.\n\nThanks,\nDipesh", "msg_date": "Tue, 6 Jul 2021 20:08:42 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "> I have a few suggestions on the patch\n> 1.\n> +\n> + /*\n> + * Found the oldest WAL, reset timeline ID and log segment number to\n> generate\n> + * the next WAL file in the sequence.\n> + */\n> + if (found && !historyFound)\n> + {\n> + XLogFromFileName(xlog, &curFileTLI, &nextLogSegNo, wal_segment_size);\n> + ereport(LOG,\n> + (errmsg(\"directory scan to archive write-ahead log file \\\"%s\\\"\",\n> + xlog)));\n> + }\n>\n> If a history file is found we are not updating curFileTLI and\n> nextLogSegNo, so it will attempt the previously found segment. This\n> is fine because it will not find that segment and it will rescan the\n> directory. 
But I think we can do better, instead of searching the\n> same old segment in the previous timeline we can search that old\n> segment in the new TL so that if the TL switch happened within the\n> segment then we will find the segment and we will avoid the directory\n> search.\n>\n>\n> /*\n> + * Log segment number and timeline ID to get next WAL file in a sequence.\n> + */\n> +static XLogSegNo nextLogSegNo = 0;\n> +static TimeLineID curFileTLI = 0;\n> +\n>\n> So everytime archiver will start with searching segno=0 in timeline=0.\n> Instead of doing this can't we first scan the directory and once we\n> get the first segment to archive then only we can start predicting the\n> next wal segment? I think there is nothing wrong even if we try to\n> look for seg 0 in timeline 0, everytime we start the archivar but that\n> will be true only once in the history of the cluster so why not skip\n> this until we scan the directory once?\n>\n\n+1, I like Dilip's ideas here to optimize further.\n\nAlso, one minor comment:\n\n+ /*\n+ * Log segment number already points to the next file in the sequence\n\n+ * (as part of successful archival of the previous file). Generate the\npath\n+ * for status file.\n\n+ */\n\nThis comment is a bit confusing with the name of the variable nextLogSegNo.\nI think the name of the variable is appropriate here, but maybe we can\nreword\nthe comment something like:\n\n+ /*\n+ * We already have the next anticipated log segment number and the\n+ * timeline, check if this WAL file is ready to be archived. 
If\nyes, skip\n+ * the directory scan.\n+ */\n\nRegards,\nJeevan Ladhe\n\n I have a few suggestions on the patch\n1.\n+\n+ /*\n+ * Found the oldest WAL, reset timeline ID and log segment number to generate\n+ * the next WAL file in the sequence.\n+ */\n+ if (found && !historyFound)\n+ {\n+ XLogFromFileName(xlog, &curFileTLI, &nextLogSegNo, wal_segment_size);\n+ ereport(LOG,\n+ (errmsg(\"directory scan to archive write-ahead log file \\\"%s\\\"\",\n+ xlog)));\n+ }\n\nIf a history file is found we are not updating curFileTLI and\nnextLogSegNo, so it will attempt the previously found segment.  This\nis fine because it will not find that segment and it will rescan the\ndirectory.  But I think we can do better, instead of searching the\nsame old segment in the previous timeline we can search that old\nsegment in the new TL so that if the TL switch happened within the\nsegment then we will find the segment and we will avoid the directory\nsearch.\n\n\n /*\n+ * Log segment number and timeline ID to get next WAL file in a sequence.\n+ */\n+static XLogSegNo nextLogSegNo = 0;\n+static TimeLineID curFileTLI = 0;\n+\n\nSo everytime archiver will start with searching segno=0 in timeline=0.\nInstead of doing this can't we first scan the directory and once we\nget the first segment to archive then only we can start predicting the\nnext wal segment?  I think there is nothing wrong even if we try to\nlook for seg 0 in timeline 0, everytime we start the archivar but that\nwill be true only once in the history of the cluster so why not skip\nthis until we scan the directory once?+1, I like Dilip's ideas here to optimize further.Also, one minor comment:+   /*+    * Log segment number already points to the next file in the sequence            +    * (as part of successful archival of the previous file). Generate the path      +    * for status file.                                                              
+    */This comment is a bit confusing with the name of the variable nextLogSegNo.I think the name of the variable is appropriate here, but maybe we can rewordthe comment something like:+       /*+        * We already have the next anticipated log segment number and the+        * timeline, check if this WAL file is ready to be archived. If yes, skip+        * the directory scan.+        */Regards,Jeevan Ladhe", "msg_date": "Tue, 6 Jul 2021 20:18:47 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\n> I agree, I missed this part. The .history file should be given higher\npreference.\n> I will take care of it in the next patch.\n\nArchiver does not have access to shared memory and the current timeline ID\nis not available at archiver. In order to keep track of timeline switch we\nhave\nto push a notification from backend to archiver. Backend can send a signal\nto notify archiver about the timeline change. Archiver can register this\nnotification and perform a full directory scan to make sure that archiving\nhistory files take precedence over archiving WAL files.\n\n> If a history file is found we are not updating curFileTLI and\n> nextLogSegNo, so it will attempt the previously found segment. This\n> is fine because it will not find that segment and it will rescan the\n> directory. 
But I think we can do better, instead of searching the\n> same old segment in the previous timeline we can search that old\n> segment in the new TL so that if the TL switch happened within the\n> segment then we will find the segment and we will avoid the directory\n> search.\n\nThis could have been done with the approach mentioned in patch v1 but now\nconsidering archiving history file takes precedence over WAL files we cannot\nupdate the \"curFileTLI\" whenever a history file is found.\n\n> So everytime archiver will start with searching segno=0 in timeline=0.\n> Instead of doing this can't we first scan the directory and once we\n> get the first segment to archive then only we can start predicting the\n> next wal segment?\n\nDone.\n\n> This comment is a bit confusing with the name of the variable\nnextLogSegNo.\n> I think the name of the variable is appropriate here, but maybe we can\nreword\n> the comment something like:\n\nDone.\n\nI have incorporated these changes and updated a new patch. PFA, patch v2.\n\nThanks,\nDipesh", "msg_date": "Mon, 19 Jul 2021 17:42:56 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Mon, Jul 19, 2021 at 5:43 PM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n>\n> Hi,\n>\n> > I agree, I missed this part. The .history file should be given higher preference.\n> > I will take care of it in the next patch.\n>\n> Archiver does not have access to shared memory and the current timeline ID\n> is not available at archiver. In order to keep track of timeline switch we have\n> to push a notification from backend to archiver. Backend can send a signal\n> to notify archiver about the timeline change. 
Archiver can register this\n> notification and perform a full directory scan to make sure that archiving\n> history files take precedence over archiving WAL files.\n\nYeah, that makes sense, some comments on v2.\n\n1.\n+pgarch_timeline_switch(SIGNAL_ARGS)\n+{\n+ int save_errno = errno;\n+\n+ /* Set the flag to register a timeline switch */\n+ timeline_switch = true;\n+ SetLatch(MyLatch);\n+\n\nOn the timeline switch, setting a flag should be enough, I don't think\nthat we need to wake up the archiver. Because it will just waste the\nscan cycle. We have set the flag and that should be enough and let\nthe XLogArchiveNotify() wake this up when something is ready to be\narchived and that time we will scan the directory first based on the\nflag.\n\n\n2.\n+ */\n+ if (XLogArchivingActive() && ArchiveRecoveryRequested)\n+ XLogArchiveNotifyTLISwitch();\n+\n+\n.....\n\n /*\n+ * Signal archiver to notify timeline switch\n+ */\n+void\n+XLogArchiveNotifyTLISwitch(void)\n+{\n+ if (IsUnderPostmaster)\n+ PgArchNotifyTLISwitch();\n+}\n\nWhy do we need multi level interfaces? I mean instead of calling first\nXLogArchiveNotifyTLISwitch and then calling PgArchNotifyTLISwitch,\ncan't we directly call PgArchNotifyTLISwitch()?\n\n3.\n+ if (timeline_switch)\n+ {\n+ /* Perform a full directory scan in next cycle */\n+ dirScan = true;\n+ timeline_switch = false;\n+ }\n\nI suggest you can add some comments atop this check.\n\n4.\n+PgArchNotifyTLISwitch(void)\n+{\n+ int arch_pgprocno = PgArch->pgprocno;\n+\n+ if (arch_pgprocno != INVALID_PGPROCNO)\n+ {\n+ int archiver_pid = ProcGlobal->allProcs[arch_pgprocno].pid;\n+\n+ if (kill(archiver_pid, SIGINT) < 0)\n+ elog(ERROR, \"could not notify timeline change to archiver\");\n\n\nI think you should use %m in the error message so that it also prints\nthe OS error code.\n\n5.\n+/* Flag to specify a full directory scan to find next log file */\n+static bool dirScan = true;\n\nWhy is this a global variable? 
I mean whenever you enter the function\npgarch_ArchiverCopyLoop(), this can be set to true and after that you\ncan pass this as inout parameter to pgarch_readyXlog() there in it can\nbe conditionally set to false once we get some segment and whenever\nthe timeline switch we can set it back to the true.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Jul 2021 11:11:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Jul 6, 2021 at 9:34 AM Stephen Frost <sfrost@snowman.net> wrote:\n> As was suggested on that subthread, it seems like it should be possible\n> to just track the current timeline and adjust what we're doing if the\n> timeline changes, and we should even know what the .history file is at\n> that point and likely don't even need to scan the directory for it, as\n> it'll be the old timeline ID.\n\nI'm a little concerned that this might turn out to be more complicated\nthan it's worth. It's not a case that should happen often, and if you\nhandle it then you have to be careful to handle cases like two\ntimeline switches in very rapid succession, which seems like it could\nbe tricky.\n\nMaybe it's fine, though. I'm not really sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Jul 2021 14:32:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\n> some comments on v2.\nThanks for your comments. I have incorporated the changes\nand updated a new patch. Please find the details below.\n\n> On the timeline switch, setting a flag should be enough, I don't think\n> that we need to wake up the archiver. Because it will just waste the\n> scan cycle.\nYes, I modified it.\n\n> Why do we need multi level interfaces? 
I mean instead of calling first\n> XLogArchiveNotifyTLISwitch and then calling PgArchNotifyTLISwitch,\n> can't we directly call PgArchNotifyTLISwitch()?\nYes, multilevel interfaces are not required. Removed extra interface.\n\n> + if (timeline_switch)\n> + {\n> + /* Perform a full directory scan in next cycle */\n> + dirScan = true;\n> + timeline_switch = false;\n> + }\n\n> I suggest you can add some comments atop this check.\nAdded comment to specify the action required in case of a\ntimeline switch.\n\n> I think you should use %m in the error message so that it also prints\n> the OS error code.\nDone.\n\n> Why is this a global variable? I mean whenever you enter the function\n> pgarch_ArchiverCopyLoop(), this can be set to true and after that you\n> can pass this as inout parameter to pgarch_readyXlog() there in it can\n> be conditionally set to false once we get some segment and whenever\n> the timeline switch we can set it back to the true.\nYes, It is not necessary to have global scope for \"dirScan\". Changed\nthe scope to local for \"dirScan\" and \"nextLogSegNo\".\n\nPFA patch v3.\n\nThanks,\nDipesh", "msg_date": "Thu, 22 Jul 2021 12:46:07 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Thanks, Dipesh. 
The patch LGTM.\n\nSome minor suggestions:\n\n+ *\n\n+ * \"nextLogSegNo\" identifies the next log file to be archived in a log\n\n+ * sequence and the flag \"dirScan\" specifies a full directory scan to find\n\n+ * the next log file.\n\n\nIMHO, this comment should go atop of pgarch_readyXlog() as a description\n\nof its parameters, and not in pgarch_ArchiverCopyLoop().\n\n\n /*\n\n+ * Interrupt handler for archiver\n\n+ *\n\n+ * There is a timeline switch and we have been notified by backend.\n\n+ */\n\n\nInstead, I would suggest having something like this:\n\n\n+/*\n\n+ * Interrupt handler for handling the timeline switch.\n\n+ *\n\n+ * A timeline switch has been notified, mark this event so that the next\niteration\n\n+ * of pgarch_ArchiverCopyLoop() archives the history file, and we set the\n\n+ * timeline to the new one for the next anticipated log segment.\n\n+ */\n\n\nRegards,\n\nJeevan Ladhe\n\nOn Thu, Jul 22, 2021 at 12:46 PM Dipesh Pandit <dipesh.pandit@gmail.com>\nwrote:\n\n> Hi,\n>\n> > some comments on v2.\n> Thanks for your comments. I have incorporated the changes\n> and updated a new patch. Please find the details below.\n>\n> > On the timeline switch, setting a flag should be enough, I don't think\n> > that we need to wake up the archiver. Because it will just waste the\n> > scan cycle.\n> Yes, I modified it.\n>\n> > Why do we need multi level interfaces? I mean instead of calling first\n> > XLogArchiveNotifyTLISwitch and then calling PgArchNotifyTLISwitch,\n> > can't we directly call PgArchNotifyTLISwitch()?\n> Yes, multilevel interfaces are not required. 
Removed extra interface.\n>\n> > + if (timeline_switch)\n> > + {\n> > + /* Perform a full directory scan in next cycle */\n> > + dirScan = true;\n> > + timeline_switch = false;\n> > + }\n>\n> > I suggest you can add some comments atop this check.\n> Added comment to specify the action required in case of a\n> timeline switch.\n>\n> > I think you should use %m in the error message so that it also prints\n> > the OS error code.\n> Done.\n>\n> > Why is this a global variable? I mean whenever you enter the function\n> > pgarch_ArchiverCopyLoop(), this can be set to true and after that you\n> > can pass this as inout parameter to pgarch_readyXlog() there in it can\n> > be conditionally set to false once we get some segment and whenever\n> > the timeline switch we can set it back to the true.\n> Yes, It is not necessary to have global scope for \"dirScan\". Changed\n> the scope to local for \"dirScan\" and \"nextLogSegNo\".\n>\n> PFA patch v3.\n>\n> Thanks,\n> Dipesh\n>", "msg_date": "Thu, 22 Jul 2021 13:40:58 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 5/6/21, 1:01 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> If we leave history files and gaps in the .ready sequence aside for a\r\n> second, we really only need an LSN or segment number describing the\r\n> current \"archive position\". 
Then we can iterate over the segments\r\n> between the \"archive position\" and the flush position (which we already\r\n> know). Even if we needed to keep statting .ready/.done files (to handle\r\n> gaps due to archive command mucking around with .ready/done), it'd still\r\n> be a lot cheaper than what we do today. It probably would even still be\r\n> cheaper if we just statted all potentially relevant timeline history\r\n> files all the time to send them first.\r\n\r\nMy apologies for chiming in so late to this thread, but a similar idea\r\ncrossed my mind while working on a bug where .ready files get created\r\ntoo early [0]. Specifically, instead of maintaining a status file per\r\nWAL segment, I was thinking we could narrow it down to a couple of\r\nfiles to keep track of the boundaries we care about:\r\n\r\n 1. earliest_done: the oldest segment that has been archived and\r\n can be recycled/removed\r\n 2. latest_done: the newest segment that has been archived\r\n 3. latest_ready: the newest segment that is ready for archival\r\n\r\nThis might complicate matters for backup utilities that currently\r\nmodify the .ready/.done files, but it would simplify this archive\r\nstatus stuff quite a bit and eliminate the need to worry about the\r\ndirectory scans in the first place.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com\r\n\r\n", "msg_date": "Fri, 23 Jul 2021 21:46:37 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Fri, Jul 23, 2021 at 5:46 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> My apologies for chiming in so late to this thread, but a similar idea\n> crossed my mind while working on a bug where .ready files get created\n> too early [0]. 
Specifically, instead of maintaining a status file per\n> WAL segment, I was thinking we could narrow it down to a couple of\n> files to keep track of the boundaries we care about:\n>\n> 1. earliest_done: the oldest segment that has been archived and\n> can be recycled/removed\n> 2. latest_done: the newest segment that has been archived\n> 3. latest_ready: the newest segment that is ready for archival\n>\n> This might complicate matters for backup utilities that currently\n> modify the .ready/.done files, but it would simplify this archive\n> status stuff quite a bit and eliminate the need to worry about the\n> directory scans in the first place.\n\nIn terms of immediate next steps, I think we should focus on\neliminating the O(n^2) problem and not get sucked into a bigger\nredesign. The patch on the table aims to do just that much and I think\nthat's a good thing.\n\nBut in the longer term I agree that we want to redesign the signalling\nsomehow. I am not convinced that using a file is the right way to go.\nIf we had to rewrite that file for every change, and especially if we\nhad to fsync it, it would be almost as bad as what we're doing right\nnow in terms of the amount of traffic to the filesystem. Atomicity is\na problem too, because if we simply create a file then after a crash\nit will either exist or not, but a file might end up garbled with a\nmix of old and new contents unless we always write a temporary file\nand automatically rename that over the existing one. As I said in my\noriginal post, I'm kind of wondering about keeping the information in\nshared memory instead of using the filesystem. I think we would still\nneed to persist it to disk at least occasionally but perhaps there is\na way to avoid having to do that as frequently as what we do now. 
I\nhaven't thought too deeply about what the requirements are here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 09:31:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 7/26/21, 6:31 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> In terms of immediate next steps, I think we should focus on\r\n> eliminating the O(n^2) problem and not get sucked into a bigger\r\n> redesign. The patch on the table aims to do just that much and I think\r\n> that's a good thing.\r\n\r\nI agree. I'll leave further discussion about a redesign for another\r\nthread.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 26 Jul 2021 16:14:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "> Some minor suggestions:\nThanks for your comments. I have incorporated the changes\nand updated a new patch. Please find the attached patch v4.\n\nThanks,\nDipesh\n\nOn Mon, Jul 26, 2021 at 9:44 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 7/26/21, 6:31 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n> > In terms of immediate next steps, I think we should focus on\n> > eliminating the O(n^2) problem and not get sucked into a bigger\n> > redesign. The patch on the table aims to do just that much and I think\n> > that's a good thing.\n>\n> I agree. I'll leave further discussion about a redesign for another\n> thread.\n>\n> Nathan\n>\n>", "msg_date": "Tue, 27 Jul 2021 13:13:00 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Jul 27, 2021 at 3:43 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> and updated a new patch. 
Please find the attached patch v4.\n\nSome review:\n\n /*\n+ * If archiver is active, send notification that timeline has switched.\n+ */\n+ if (XLogArchivingActive() && ArchiveRecoveryRequested &&\n+ IsUnderPostmaster)\n+ PgArchNotifyTLISwitch();\n\nThere are a few other places in xlog.c that are conditional on\nXLogArchivingActive(), but none of them test ArchiveRecoveryRequested\nor IsUnderPostmaster. It appears to me that PgArchStartupAllowed()\ncontrols whether the archiver runs, and that's not contingent on\nArchiveRecoveryRequested and indeed couldn't be, since it's running in\nthe postmaster where that variable wouldn't be initialized. So why do\nwe care about ArchiveRecoveryRequested here? This is not entirely a\nrhetorical question; maybe there's some reason we should care. If so,\nthe comment ought to mention it. If not, the test should go away.\n\nIsUnderPostmaster does make a difference, but I think that test could\nbe placed inside PgArchNotifyTLISwitch() rather than putting it here\nin StartupXLOG(). In fact, I think the test could be removed entirely,\nsince if PgArchNotifyTLISwitch() is called in single-user mode, it\nwill presumably just discover that arch_pgprocno == INVALID_PGPROCNO,\nso it will simply do nothing even without the special-case code.\n\n+ pqsignal(SIGINT, pgarch_timeline_switch);\n\nI don't think it's great that we're using up SIGINT for this purpose.\nThere aren't that many signals available at the O/S level that we can\nuse for our purposes, and we generally try to multiplex them at the\napplication layer, e.g. by setting a latch or a flag in shared memory,\nrather than using a separate signal. Can we do something of that sort\nhere? Or maybe we don't even need a signal. ThisTimeLineID is already\nvisible in shared memory, so why not just have the archiver just check\nand see whether it's changed, say via a new accessor function\nGetCurrentTimeLineID()? 
I guess there could be a concern about the
expense of that, because we'd probably be taking a spinlock or an
lwlock for every cycle, but I don't think it's probably that bad,
because I doubt we can archive much more than a double-digit number of
files per second even with a very fast archive_command, and contention
on a lock generally requires a five digit number of acquisitions per
second. It would be worth testing to see if we can see a problem here,
but I'm fairly hopeful that it's not an issue. If we do feel that it's
important to avoid repeatedly taking a lock, let's see if we can find
a way to do it without dedicating a signal to this purpose.

+ *
+ * \"nextLogSegNo\" identifies the next log file to be archived in a log
+ * sequence and the flag \"dirScan\" specifies a full directory
scan to find
+ * the next log file.
 */
- while (pgarch_readyXlog(xlog))
+ while (pgarch_readyXlog(xlog, &dirScan, &nextLogSegNo))

I do not like this very much. dirScan and nextLogSegNo aren't clearly
owned either by pgarch_ArchiverCopyLoop() or by pgarch_readyXlog(),
since both functions modify both variables, in each case
conditionally, while also relying on the way that the other function
manipulates them. Essentially these are global variables in disguise.
There's a third, related variable too, which is handled differently:

+ static TimeLineID curFileTLI = 0;

This is really the same kind of thing as the other two, but because
pgarch_readyXlog() happens not to need this one, you just made it
static inside pgarch_readyXlog() instead of passing it back and forth.

The problem with all this is that you can't understand either function
in isolation. Unless you read them both together and look at all of
the ways these three variables are manipulated, you can't really
understand the logic. And there's really no reason why that needs to
be true. 
The job of cleaning timeline_switch and setting dirScan could\nbe done entirely within pgarch_readyXlog(), and so could the job of\nincrementing nextLogSegNo, because we're not going to again call\npgarch_readyXlog() unless archiving succeeded.\n\nAlso note that the TLI which is stored in curFileTLI corresponds to\nthe segment number stored in nextLogSegNo, yet one of them has \"cur\"\nfor \"current\" in the name and the other has \"next\". It would be easier\nto read the code if the names were chosen more consistently.\n\nMy tentative idea as to how to clean this up is: declare a new struct\nwith a name like readyXlogState and members lastTLI and lastSegNo.\nHave pgarch_ArchiverCopyLoop() declare a variable of this type, zero\nit, pass it as a parameter to pgarch_readyXlog(), and otherwise leave\nit alone. Then let pgarch_readyXlog() do all of the manipulation of\nthe values stored therein.\n\n+ /*\n+ * Fall-back to directory scan\n+ *\n+ * open xlog status directory and read through list of xlogs\nthat have the\n+ * .ready suffix, looking for earliest file. It is possible to optimise\n+ * this code, though only a single file is expected on the vast majority\n+ * of calls, so....\n+ */\n\nYou've moved this comment from its original location, but the trouble\nis that the comment is 100% false. In fact, the whole reason why you\nwrote this patch is *because* this comment is 100% false. In fact it\nis not difficult to create cases where each scan finds many files, and\nthe purpose of the patch is precisely to optimize the code that the\nperson who wrote this thought didn't need optimizing. 
Now it may take
some work to figure out what we want to say here exactly, but
preserving the comment as it's written here is certainly misleading.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


", "msg_date": "Tue, 27 Jul 2021 13:48:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,

> I don't think it's great that we're using up SIGINT for this purpose.
> There aren't that many signals available at the O/S level that we can
> use for our purposes, and we generally try to multiplex them at the
> application layer, e.g. by setting a latch or a flag in shared memory,
> rather than using a separate signal. Can we do something of that sort
> here? Or maybe we don't even need a signal. ThisTimeLineID is already
> visible in shared memory, so why not just have the archiver just check
> and see whether it's changed, say via a new accessor function
> GetCurrentTimeLineID()?

As of now shared memory is not attached to the archiver. Archiver cannot
access ThisTimeLineID or a flag available in shared memory.

    if (strcmp(argv[1], \"--forkbackend\") == 0 ||

        strcmp(argv[1], \"--forkavlauncher\") == 0 ||

        strcmp(argv[1], \"--forkavworker\") == 0 ||

        strcmp(argv[1], \"--forkboot\") == 0 ||

        strncmp(argv[1], \"--forkbgworker=\", 15) == 0)

        PGSharedMemoryReAttach();

    else

        PGSharedMemoryNoReAttach();

This is the reason we have thought of sending a notification to the
archiver if
there is a timeline switch. Should we consider attaching shared memory to
archiver process or explore more on notification mechanism to avoid
using SIGINT?

Thanks,
Dipesh

", "msg_date": "Wed, 28 Jul 2021 16:18:26 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Wed, Jul 28, 2021 at 6:48 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:
> As of now shared memory is not attached to the archiver. 
Archiver cannot\n> access ThisTimeLineID or a flag available in shared memory.\n\nIf that is true, why are there functions PgArchShmemSize() and\nPgArchShmemInit(), and how does this statement in PgArchiverMain()\nmanage not to core dump?\n\n /*\n * Advertise our pgprocno so that backends can use our latch to wake us up\n * while we're sleeping.\n */\n PgArch->pgprocno = MyProc->pgprocno;\n\nI think what you are saying is true before v14, but not in v14 and master.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Jul 2021 09:27:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\n> I think what you are saying is true before v14, but not in v14 and master.\nYes, we can use archiver specific shared memory. Thanks.\n\n> I don't think it's great that we're using up SIGINT for this purpose.\n> There aren't that many signals available at the O/S level that we can\n> use for our purposes, and we generally try to multiplex them at the\n> application layer, e.g. by setting a latch or a flag in shared memory,\n> rather than using a separate signal. Can we do something of that sort\n> here? Or maybe we don't even need a signal. ThisTimeLineID is already\n> visible in shared memory, so why not just have the archiver just check\n> and see whether it's changed, say via a new accessor function\n> GetCurrentTimeLineID()? I guess there could be a concern about the\n> expensive of that, because we'd probably be taking a spinlock or an\n> lwlock for every cycle, but I don't think it's probably that bad,\n> because I doubt we can archive much more than a double-digit number of\n> files per second even with a very fast archive_command, and contention\n> on a lock generally requires a five digit number of acquisitions per\n> second. 
It would be worth testing to see if we can see a problem here,\n> but I'm fairly hopeful that it's not an issue. If we do feel that it's\n> important to avoid repeatedly taking a lock, let's see if we can find\n> a way to do it without dedicating a signal to this purpose.\n\nWe can maintain the current timeline ID in archiver specific shared memory.\nIf we switch to a new timeline then the backend process can update the new\ntimeline ID in shared memory. Archiver can keep a track of current timeline\nID\nand if it finds that there is a timeline switch then it can perform a full\ndirectory\nscan to make sure that archiving history files takes precedence over WAL\nfiles.\nAccess to the shared memory area can be protected by adding a\nWALArchiverLock.\nIf we take this approach then it doesn't require to use a dedicated signal\nto notify\na timeline switch.\n\n> The problem with all this is that you can't understand either function\n> in isolation. Unless you read them both together and look at all of\n> the ways these three variables are manipulated, you can't really\n> understand the logic. And there's really no reason why that needs to\n> be true. The job of cleaning timeline_switch and setting dirScan could\n> be done entirely within pgarch_readyXlog(), and so could the job of\n> incrementing nextLogSegNo, because we're not going to again call\n> pgarch_readyXlog() unless archiving succeeded.\n\n> Also note that the TLI which is stored in curFileTLI corresponds to\n> the segment number stored in nextLogSegNo, yet one of them has \"cur\"\n> for \"current\" in the name and the other has \"next\". 
It would be easier\n> to read the code if the names were chosen more consistently.\n\n> My tentative idea as to how to clean this up is: declare a new struct\n> with a name like readyXlogState and members lastTLI and lastSegNo.\n> Have pgarch_ArchiverCopyLoop() declare a variable of this type, zero\n> it, pass it as a parameter to pgarch_readyXlog(), and otherwise leave\n> it alone. Then let pgarch_readyXlog() do all of the manipulation of\n> the values stored therein.\n\nMake sense, we can move the entire logic to a single function\npgarch_readyXlog()\nand declare a new struct readyXLogState.\n\nI think we cannot declare a variable of this type in\npgarch_ArchiverCopyLoop()\ndue to the fact that this function will be called every time the archiver\nwakes up.\nInitializing readyXLogState here will reset the next anticipated log\nsegment number\nwhen the archiver wakes up from a wait state. We can declare and initialize\nit in\npgarch_MainLoop() to avoid resetting the next anticipated log segment\nnumber\nwhen the archiver wakes up.\n\n> You've moved this comment from its original location, but the trouble\n> is that the comment is 100% false. In fact, the whole reason why you\n> wrote this patch is *because* this comment is 100% false. In fact it\n> is not difficult to create cases where each scan finds many files, and\n> the purpose of the patch is precisely to optimize the code that the\n> person who wrote this thought didn't need optimizing. Now it may take\n> some work to figure out what we want to say here exactly, but\n> preserving the comment as it's written here is certainly misleading.\n\nYes, I agree. We can update the comments here to list the scenarios\nwhere we may need to perform a full directory scan.\n\nI have incorporated these changes and updated a new patch. 
Please find\nthe attached patch v5.\n\nThanks,\nDipesh", "msg_date": "Mon, 2 Aug 2021 18:36:00 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Mon, Aug 2, 2021 at 9:06 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> We can maintain the current timeline ID in archiver specific shared memory.\n> If we switch to a new timeline then the backend process can update the new\n> timeline ID in shared memory. Archiver can keep a track of current timeline ID\n> and if it finds that there is a timeline switch then it can perform a full directory\n> scan to make sure that archiving history files takes precedence over WAL files.\n> Access to the shared memory area can be protected by adding a WALArchiverLock.\n> If we take this approach then it doesn't require to use a dedicated signal to notify\n> a timeline switch.\n\nHi,\n\nI don't really understand why you are storing something in shared\nmemory specifically for the archiver. Can't we use XLogCtl's\nThisTimeLineID instead of storing another copy of the information?\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Aug 2021 12:54:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "+\t/*\r\n+\t * Perform a full directory scan to identify the next log segment. There\r\n+\t * may be one of the following scenarios which may require us to perform a\r\n+\t * full directory scan.\r\n+\t *\r\n+\t * 1. This is the first cycle since archiver has started and there is no\r\n+\t * idea about the next anticipated log segment.\r\n+\t *\r\n+\t * 2. There is a timeline switch, i.e. the timeline ID tracked at archiver\r\n+\t * does not match with current timeline ID. Archive history file as part of\r\n+\t * this timeline switch.\r\n+\t *\r\n+\t * 3. 
The next anticipated log segment is not available.\r\n\r\nOne benefit of the current implementation of pgarch_readyXlog() is\r\n
that .ready files created out of order will be prioritized before\r\n
segments with greater LSNs. IIUC, with this patch, as long as there\r\n
is a \"next anticipated\" segment available, the archiver won't go back\r\n
and archive segments it missed. I don't think the archive status\r\n
files are regularly created out of order, but XLogArchiveCheckDone()\r\n
has handling for that case, and the work to avoid creating .ready\r\n
files too early [0] seems to make it more likely. Perhaps we should\r\n
also force a directory scan when we detect that we are creating a\r\n
.ready file for a segment that is older than the \"next anticipated\"\r\n
segment.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/DA71434B-7340-4984-9B91-F085BC47A778%40amazon.com\r\n\r\n", "msg_date": "Tue, 3 Aug 2021 20:46:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,

> I don't really understand why you are storing something in shared
> memory specifically for the archiver. Can't we use XLogCtl's
> ThisTimeLineID instead of storing another copy of the information?

Yes, we can avoid storing another copy of information. We can
use XLogCtl's ThisTimeLineID on Primary. However,
XLogCtl's ThisTimeLineID is not set to the current timeline ID on
Standby server. Its value is set to '0'. Can we use XLogCtl's
replayEndTLI on the Standby server to get the current timeline ID?

Thanks,
Dipesh

", "msg_date": "Thu, 5 Aug 2021 17:09:21 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Aug 5, 2021 at 7:39 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:
> Yes, we can avoid storing another copy of information. We can
> use XLogCtl's ThisTimeLineID on Primary. However,
> XLogCtl's ThisTimeLineID is not set to the current timeline ID on
> Standby server. Its value is set to '0'. Can we use XLogCtl's
> replayEndTLI on the Standby server to get the current timeline ID?

I'm not sure. I think we need the value to be accurate during
recovery, so I'm not sure whether replayEndTLI would get us there.
Another approach might be to set ThisTimeLineID on standbys also.
Actually just taking a fast look at the code I'm not quite sure why
that isn't happening already. 
Do you have any understanding of that?\n\nDuring investigation I found that the current timeline ID (ThisTimeLineID)\ngets updated in XLogCtl’s ThisTimeLineID once it gets finalised as part\nof archive recovery.\n\n /*\n * Write the timeline history file, and have it archived. After this\n * point (or rather, as soon as the file is archived), the timeline\n * will appear as \"taken\" in the WAL archive and to any standby\n * servers. If we crash before actually switching to the new\n * timeline, standby servers will nevertheless think that we\nswitched\n * to the new timeline, and will try to connect to the new timeline.\n * To minimize the window for that, try to do as little as possible\n * between here and writing the end-of-recovery record.\n */\n\nIn case of Standby this happens only when it gets promoted.\n\nIf Standby is in recovery mode then replayEndTLI points to the most\nrecent TLI corresponding to the replayed records. Also, if replying a\nrecord causes timeline switch then replayEndTLI gets updated with\nthe new timeline. As long as it is in recovery mode replayEndTLI should\npoint to the current timeline ID on Standby. Thoughts?\n\nThanks,\nDipesh\n\n> I'm not sure. I think we need the value to be accurate during\n> recovery, so I'm not sure whether replayEndTLI would get us there.> Another approach might be to set ThisTimeLineID on standbys also.\n> Actually just taking a fast look at the code I'm not quite sure why\n> that isn't happening already. Do you have any understanding of that?During investigation I found that the current timeline ID (ThisTimeLineID) gets updated in XLogCtl’s ThisTimeLineID once it gets finalised as part of archive recovery.         /*         * Write the timeline history file, and have it archived. After this         * point (or rather, as soon as the file is archived), the timeline         * will appear as \"taken\" in the WAL archive and to any standby         * servers.  
If we crash before actually switching to the new         * timeline, standby servers will nevertheless think that we switched         * to the new timeline, and will try to connect to the new timeline.         * To minimize the window for that, try to do as little as possible         * between here and writing the end-of-recovery record.         */In case of Standby this happens only when it gets promoted. If Standby is in recovery mode then replayEndTLI points to the mostrecent TLI corresponding to the replayed records. Also, if replying a record causes timeline switch then replayEndTLI gets updated with the new timeline. As long as it is in recovery mode replayEndTLI should point to the current timeline ID on Standby. Thoughts?Thanks,Dipesh", "msg_date": "Thu, 5 Aug 2021 21:53:30 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Tue, 3 Aug 2021 20:46:57 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> +\t/*\n> +\t * Perform a full directory scan to identify the next log segment. There\n> +\t * may be one of the following scenarios which may require us to perform a\n> +\t * full directory scan.\n> +\t *\n> +\t * 1. This is the first cycle since archiver has started and there is no\n> +\t * idea about the next anticipated log segment.\n> +\t *\n> +\t * 2. There is a timeline switch, i.e. the timeline ID tracked at archiver\n> +\t * does not match with current timeline ID. Archive history file as part of\n> +\t * this timeline switch.\n> +\t *\n> +\t * 3. The next anticipated log segment is not available.\n> \n> One benefit of the current implementation of pgarch_readyXlog() is\n> that .ready files created out of order will be prioritized before\n> segments with greater LSNs. IIUC, with this patch, as long as there\n> is a \"next anticipated\" segment available, the archiver won't go back\n> and archive segments it missed. 
I don't think the archive status
> files are regularly created out of order, but XLogArchiveCheckDone()
> has handling for that case, and the work to avoid creating .ready
> files too early [0] seems to make it more likely. Perhaps we should
> also force a directory scan when we detect that we are creating a
> .ready file for a segment that is older than the \"next anticipated\"
> segment.
> 
> Nathan
> 
> [0] https://postgr.es/m/DA71434B-7340-4984-9B91-F085BC47A778%40amazon.com

It works the current way always at the first iteration of
pgarch_ArchiveCopyLoop() because in the last iteration of
pgarch_ArchiveCopyLoop(), pgarch_readyXlog() erases the last
anticipated segment. The shortcut works only when
pgarch_ArchiveCopyLoop archives more than once successive segments at
once. If the anticipated next segment found to be missing a .ready
file while archiving multiple files, pgarch_readyXLog falls back to
the regular way.

So I don't see the danger to happen perhaps you are considering.

In the first place, .ready are added while holding WALWriteLock in
XLogWrite, and while removing old segments after a checkpoint (which
happens while recovery). Assuming that no one manually remove .ready
files on an active server, the former is the sole place doing that. So
I don't see a chance that .ready files are created out-of-order way.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

", "msg_date": "Fri, 06 Aug 2021 10:26:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/5/21, 6:26 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n
> It works the current way always at the first iteration of\r\n
> pgarch_ArchiveCopyLoop() because in the last iteration of\r\n
> pgarch_ArchiveCopyLoop(), pgarch_readyXlog() erases the last\r\n
> anticipated segment. 
The shortcut works only when\r\n> pgarch_ArchiveCopyLoop archives more than once successive segments at\r\n> once. If the anticipated next segment found to be missing a .ready\r\n> file while archiving multiple files, pgarch_readyXLog falls back to\r\n> the regular way.\r\n>\r\n> So I don't see the danger to happen perhaps you are considering.\r\n\r\nI think my concern is that there's no guarantee that we will ever do\r\nanother directory scan. A server that's generating a lot of WAL could\r\ntheoretically keep us in the next-anticipated-log code path\r\nindefinitely.\r\n\r\n> In the first place, .ready are added while holding WALWriteLock in\r\n> XLogWrite, and while removing old segments after a checkpoint (which\r\n> happens while recovery). Assuming that no one manually remove .ready\r\n> files on an active server, the former is the sole place doing that. So\r\n> I don't see a chance that .ready files are created out-of-order way.\r\n\r\nPerhaps a more convincing example is when XLogArchiveNotify() fails.\r\nAFAICT this can fail without ERROR-ing, in which case the server can\r\ncontinue writing WAL and creating .ready files for later segments. At\r\nsome point, the checkpointer process will call RemoveOldXlogFiles()\r\nand try to create the missing .ready file.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 6 Aug 2021 02:34:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Thu, 5 Aug 2021 21:53:30 +0530, Dipesh Pandit <dipesh.pandit@gmail.com> wrote in \r\n> > I'm not sure. I think we need the value to be accurate during\r\n> > recovery, so I'm not sure whether replayEndTLI would get us there.\r\n> > Another approach might be to set ThisTimeLineID on standbys also.\r\n> > Actually just taking a fast look at the code I'm not quite sure why\r\n> > that isn't happening already. 
Do you have any understanding of that?\r\n> \r\n> During investigation I found that the current timeline ID (ThisTimeLineID)\r\n> gets updated in XLogCtl’s ThisTimeLineID once it gets finalised as part\r\n> of archive recovery.\r\n> \r\n> /*\r\n> * Write the timeline history file, and have it archived. After this\r\n> * point (or rather, as soon as the file is archived), the timeline\r\n> * will appear as \"taken\" in the WAL archive and to any standby\r\n> * servers. If we crash before actually switching to the new\r\n> * timeline, standby servers will nevertheless think that we\r\n> switched\r\n> * to the new timeline, and will try to connect to the new timeline.\r\n> * To minimize the window for that, try to do as little as possible\r\n> * between here and writing the end-of-recovery record.\r\n> */\r\n> \r\n> In case of Standby this happens only when it gets promoted.\r\n> \r\n> If Standby is in recovery mode then replayEndTLI points to the most\r\n> recent TLI corresponding to the replayed records. Also, if replying a\r\n> record causes timeline switch then replayEndTLI gets updated with\r\n> the new timeline. As long as it is in recovery mode replayEndTLI should\r\n> point to the current timeline ID on Standby. Thoughts?\r\n\r\nAs I mentioned in another branch of this thread, pgarch_readyXlog()\r\nalways goes into the fall back path at the first iteration of\r\npgarch_ArchiverCopyLoop() and the current (or expected) TLI is\r\ninformed there. So no need of shared timeline ID at that time.\r\n\r\nWhen pgarch_ArchiverCopyLoop meets a timeline switch, the short cut\r\npath fails to find the next anticipated .ready file then goes into the\r\nfallback path, which should find the history file for the next TLI\r\n(unless any timing misalignment I'm not aware of happens).\r\n\r\nSo the shared timeline id works only to let the fast path give way to\r\nthe fall back path to find the just created history file as earlier as\r\npossible. 
Notifying the archiver of the timeline ID that the startup process\nrecognizes makes things more complex than required.\nCurrently the archiver doesn't use SIGINT, so I think we can use SIGINT\nfor the purpose.\n\nFurthermore, it seems to me that we can make the TLI and the next\nanticipated segment number function-local static variables. It would\nbe workable assuming that the only caller pgarch_ArchiverCopyLoop\nobeys the contract that it must call pgarch_readyXlog() until it\nreturns false. However, there seems to be no reason for it not to\nwork even otherwise, unless I'm missing something (that's likely),\nthough.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n", "msg_date": "Fri, 06 Aug 2021 11:53:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Fri, 6 Aug 2021 02:34:24 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/5/21, 6:26 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > It works the current way always at the first iteration of\n> > pgarch_ArchiveCopyLoop() because in the last iteration of\n> > pgarch_ArchiveCopyLoop(), pgarch_readyXlog() erases the last\n> > anticipated segment. The shortcut works only when\n> > pgarch_ArchiveCopyLoop archives more than one successive segment at\n> > once. If the anticipated next segment is found to be missing a .ready\n> > file while archiving multiple files, pgarch_readyXLog falls back to\n> > the regular way.\n> >\n> > So I don't see the danger to happen perhaps you are considering.\n> \n> I think my concern is that there's no guarantee that we will ever do\n> another directory scan. A server that's generating a lot of WAL could\n> theoretically keep us in the next-anticipated-log code path\n> indefinitely.\n\nTheoretically possible. 
Supposing that .ready files may be created\nout of order (for the following reason, as a possibility): once the\nfast path has bailed out and the fallback path finds that the second\noldest file has a .ready file, the succeeding fast path continues\nrunning and leaves the oldest file behind.\n\n> > In the first place, .ready are added while holding WALWriteLock in\n> > XLogWrite, and while removing old segments after a checkpoint (which\n> > happens while recovery). Assuming that no one manually remove .ready\n> > files on an active server, the former is the sole place doing that. So\n> > I don't see a chance that .ready files are created out-of-order way.\n> \n> Perhaps a more convincing example is when XLogArchiveNotify() fails.\n> AFAICT this can fail without ERROR-ing, in which case the server can\n> continue writing WAL and creating .ready files for later segments. At\n> some point, the checkpointer process will call RemoveOldXlogFiles()\n> and try to create the missing .ready file.\n\nMmm. Assuming that could happen, a history file is cursed to lose its\nchance to be archived forever once that disaster falls onto it. Apart\nfrom this patch, maybe we need a way to re-notify history files that\nonce missed their chance.\n\nAssuming that all such forgotten files are eventually re-marked as\n.ready somewhere, they can be re-found by the archiver by explicitly\ntriggering the fallback path. 
Currently the trigger fires implicitly\nby checking shared timeline movement, but if the trigger were instead\ncaused explicitly, for example by a signal as mentioned in a nearby\nmessage, that behavior would be easy to implement.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 06 Aug 2021 13:39:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback.\n\nThe path that the archiver can take in each cycle is either a fast\npath or a fall-back path. The fast path involves checking the\navailability of the next anticipated log segment to decide the next\ntarget for archival; the fall-back path involves a full directory scan\nto get the next log segment.\nWe need a mechanism that enables the archiver to select the desired path\nfor each cycle.\n\nThis can be achieved by maintaining a shared memory flag. If this flag\nis set then the archiver should take the fall-back path, otherwise it\nshould continue with the fast path.\n\nThis flag can be set by a backend when an action such as a timeline\nswitch or a .ready file created out of order requires the archiver to\nperform a full directory scan.\n\nI have incorporated these changes and updated the patch. 
PFA patch v6.\n\nThanks,\nDipesh", "msg_date": "Thu, 12 Aug 2021 17:22:12 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "+\t * This .ready file is created out of order, notify archiver to perform\r\n+\t * a full directory scan to archive corresponding WAL file.\r\n+\t */\r\n+\tStatusFilePath(archiveStatusPath, xlog, \".ready\");\r\n+\tif (stat(archiveStatusPath, &stat_buf) == 0)\r\n+\t\tPgArchEnableDirScan();\r\n\r\nWe may want to call PgArchWakeup() after setting the flag.\r\n\r\n+\t * Perform a full directory scan to identify the next log segment. There\r\n+\t * may be one of the following scenarios which may require us to perform a\r\n+\t * full directory scan.\r\n...\r\n+\t * - The next anticipated log segment is not available.\r\n\r\nI wonder if we really need to perform a directory scan in this case.\r\nUnless there are other cases where the .ready files are created out of\r\norder, I think this just causes an unnecessary directory scan every\r\ntime the archiver catches up.\r\n\r\n+\t * Flag to enable/disable directory scan. If this flag is set then it\r\n+\t * forces archiver to perform a full directory scan to get the next log\r\n+\t * segment.\r\n+\t */\r\n+\tpg_atomic_flag dirScan;\r\n\r\nI personally don't think it's necessary to use an atomic here. A\r\nspinlock or LWLock would probably work just fine, as contention seems\r\nunlikely. If we use a lock, we also don't have to worry about memory\r\nbarriers.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 16 Aug 2021 04:51:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/15/21, 9:52 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> +\t * Perform a full directory scan to identify the next log segment. 
There\r\n> +\t * may be one of the following scenarios which may require us to perform a\r\n> +\t * full directory scan.\r\n> ...\r\n> +\t * - The next anticipated log segment is not available.\r\n>\r\n> I wonder if we really need to perform a directory scan in this case.\r\n> Unless there are other cases where the .ready files are created out of\r\n> order, I think this just causes an unnecessary directory scan every\r\n> time the archiver catches up.\r\n\r\nThinking further, I suppose this is necessary for when lastSegNo gets\r\nreset after processing an out-of-order .ready file.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 16 Aug 2021 05:05:01 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Thanks for the feedback.\n\n> + StatusFilePath(archiveStatusPath, xlog, \".ready\");\n> + if (stat(archiveStatusPath, &stat_buf) == 0)\n> + PgArchEnableDirScan();\n\n> We may want to call PgArchWakeup() after setting the flag.\n\nYes, added a call to wake up archiver.\n\n> > + * - The next anticipated log segment is not available.\n> >\n> > I wonder if we really need to perform a directory scan in this case.\n> > Unless there are other cases where the .ready files are created out of\n> > order, I think this just causes an unnecessary directory scan every\n> > time the archiver catches up.\n\n> Thinking further, I suppose this is necessary for when lastSegNo gets\n> reset after processing an out-of-order .ready file.\n\nAlso, this is necessary when lastTLI gets reset after switching to a new\ntimeline.\n\n> + pg_atomic_flag dirScan;\n\n> I personally don't think it's necessary to use an atomic here. A\n> spinlock or LWLock would probably work just fine, as contention seems\n> unlikely. If we use a lock, we also don't have to worry about memory\n> barriers.\n\nHistory file should be archived as soon as it gets created. 
The atomic flag\nhere will make sure that there is no reordering of read/write instructions\nwhile\naccessing the flag in shared memory. Archiver needs to read this flag at\nthe\nbeginning of each cycle. Write to atomic flag is synchronized and it\nprovides\na lockless read. I think an atomic flag here is an efficient choice unless\nI am\nmissing something.\n\nPlease find the attached patch v7.\n\nThanks,\nDipesh", "msg_date": "Tue, 17 Aug 2021 18:22:42 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/17/21, 5:53 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n>> I personally don't think it's necessary to use an atomic here. A\r\n>> spinlock or LWLock would probably work just fine, as contention seems\r\n>> unlikely. If we use a lock, we also don't have to worry about memory\r\n>> barriers.\r\n>\r\n> History file should be archived as soon as it gets created. The atomic flag\r\n> here will make sure that there is no reordering of read/write instructions while\r\n> accessing the flag in shared memory. Archiver needs to read this flag at the \r\n> beginning of each cycle. Write to atomic flag is synchronized and it provides \r\n> a lockless read. I think an atomic flag here is an efficient choice unless I am \r\n> missing something.\r\n\r\nSorry, I think my note was not very clear. I agree that a flag should\r\nbe used for this purpose, but I think we should just use a regular\r\nbool protected by a spinlock or LWLock instead of an atomic. The file\r\natomics.h has the following note:\r\n\r\n * Use higher level functionality (lwlocks, spinlocks, heavyweight locks)\r\n * whenever possible. Writing correct code using these facilities is hard.\r\n\r\nIOW I don't think the extra complexity is necessary. From a\r\nperformance standpoint, contention seems unlikely. 
We only need to\r\nread the flag roughly once per WAL segment, and we only ever set it in\r\nuncommon scenarios such as a timeline switch or the creation of an\r\nout-of-order .ready file.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 17 Aug 2021 16:33:54 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Aug 17, 2021 at 12:33 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Sorry, I think my note was not very clear. I agree that a flag should\n> be used for this purpose, but I think we should just use a regular\n> bool protected by a spinlock or LWLock instead of an atomic. The file\n> atomics.h has the following note:\n>\n> * Use higher level functionality (lwlocks, spinlocks, heavyweight locks)\n> * whenever possible. Writing correct code using these facilities is hard.\n>\n> IOW I don't think the extra complexity is necessary. From a\n> performance standpoint, contention seems unlikely. We only need to\n> read the flag roughly once per WAL segment, and we only ever set it in\n> uncommon scenarios such as a timeline switch or the creation of an\n> out-of-order .ready file.\n\nIn the interest of full disclosure, I think that I was probably the\none who suggested to Dipesh that he should look into using atomics,\nalthough I can't quite remember right now why I thought we might want\nto do that.\n\nI do not on general principle very much like code that does\nLWLockAcquire(whatever);\nexactly-one-assignment-statement-that-modifies-a-1-2-or-4-byte-quantity;\nLWLockRelease(whatever). If you had two assignments in there, then you\nknow why you have a lock: it's to make those behave as an atomic,\nindivisible unit. But when you only have one, what are you protecting\nagainst? 
You're certainly not making anything atomic that would not\nhave been anyway, so you must be using the LWLock as a memory barrier.\nBut then you really kind of have to think about memory barriers\nanyway: why do you need one at all, and what things need to be\nseparated? It's not clear that spelling pg_memory_barrier() as\nLWLockAcquire() and/or LWLockRelease() is actually saving you anything\nin terms of notional complexity.\n\nIn this patch, it appears to me that the atomic flag is only ever\nbeing read unlocked, so I think that we're actually getting no benefit\nat all from the use of pg_atomic_flag here. We're not making anything\natomic, because there's only one bit of shared state, and we're not\ngetting any memory barrier semantics, because it looks to me like the\nflag is only ever tested using pg_atomic_unlocked_test_flag, which is\ndocumented not to have barrier semantics. So as far as I can see,\nthere's no point in using either an LWLock or atomics here. We could\njust use bool with no lock and the code would do exactly what it does\nnow. So I guess the question is whether that's correct or whether we\nneed some kind of synchronization and, if so, of what sort.\n\nI can't actually see that there's any kind of hard synchronization\nrequirement here at all. What we're trying to do is guarantee that if\nthe timeline changes, we'll pick up the timeline history for the new\ntimeline next, and that if files are archived out of order, we'll\nswitch to archiving the oldest file that is now present rather than\ncontinuing with consecutive files. But suppose we just use an\nunsynchronized bool. The worst case is that we'll archive one extra\nfile proceeding in order before we jump to the file that we were\nsupposed to archive next. It's not evident to me that this is all that\nbad. 
The same thing would have happened if the previous file had been\narchived slightly faster than it actually was, so that we began\narchiving the next file just before, rather than just after, the\nnotification was sent. And if it is bad, wrapping an LWLock around the\naccesses to the flag variable, or using an atomic, does nothing to\nstop it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Aug 2021 14:27:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/17/21, 11:28 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I can't actually see that there's any kind of hard synchronization\r\n> requirement here at all. What we're trying to do is guarantee that if\r\n> the timeline changes, we'll pick up the timeline history for the new\r\n> timeline next, and that if files are archived out of order, we'll\r\n> switch to archiving the oldest file that is now present rather than\r\n> continuing with consecutive files. But suppose we just use an\r\n> unsynchronized bool. The worst case is that we'll archive one extra\r\n> file proceeding in order before we jump to the file that we were\r\n> supposed to archive next. It's not evident to me that this is all that\r\n> bad. The same thing would have happened if the previous file had been\r\n> archived slightly faster than it actually was, so that we began\r\n> archiving the next file just before, rather than just after, the\r\n> notification was sent. And if it is bad, wrapping an LWLock around the\r\n> accesses to the flag variable, or using an atomic, does nothing to\r\n> stop it.\r\n\r\nI am inclined to agree. The archiver only ever reads the flag and\r\nsets it to false (if we are doing a directory scan). Others only ever\r\nset the flag to true. 
The only case I can think of where we might\r\nmiss the timeline switch or out-of-order .ready file is when the\r\narchiver sets the flag to false and then ReadDir() fails. However,\r\nthat seems to cause the archiver process to restart, and we always\r\nstart with a directory scan at first.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 17 Aug 2021 19:09:07 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/17/21, 12:11 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 8/17/21, 11:28 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n>> I can't actually see that there's any kind of hard synchronization\r\n>> requirement here at all. What we're trying to do is guarantee that if\r\n>> the timeline changes, we'll pick up the timeline history for the new\r\n>> timeline next, and that if files are archived out of order, we'll\r\n>> switch to archiving the oldest file that is now present rather than\r\n>> continuing with consecutive files. But suppose we just use an\r\n>> unsynchronized bool. The worst case is that we'll archive one extra\r\n>> file proceeding in order before we jump to the file that we were\r\n>> supposed to archive next. It's not evident to me that this is all that\r\n>> bad. The same thing would have happened if the previous file had been\r\n>> archived slightly faster than it actually was, so that we began\r\n>> archiving the next file just before, rather than just after, the\r\n>> notification was sent. And if it is bad, wrapping an LWLock around the\r\n>> accesses to the flag variable, or using an atomic, does nothing to\r\n>> stop it.\r\n>\r\n> I am inclined to agree. The archiver only ever reads the flag and\r\n> sets it to false (if we are doing a directory scan). Others only ever\r\n> set the flag to true. 
The only case I can think of where we might\r\n> miss the timeline switch or out-of-order .ready file is when the\r\n> archiver sets the flag to false and then ReadDir() fails. However,\r\n> that seems to cause the archiver process to restart, and we always\r\n> start with a directory scan at first.\r\n\r\nThinking further, I think the most important thing to ensure is that\r\nresetting the flag happens before we begin the directory scan.\r\nConsider the following scenario in which a timeline history file would\r\npotentially be lost:\r\n\r\n 1. Archiver completes directory scan.\r\n 2. A timeline history file is created and the flag is set.\r\n 3. Archiver resets the flag.\r\n\r\nI don't think there's any problem with the archiver reading a stale\r\nvalue for the flag. It should eventually be updated and route us to\r\nthe directory scan code path.\r\n\r\nI'd also note that we're depending on the directory scan logic for\r\npicking up all timeline history files and out-of-order .ready files\r\nthat may have been created each time the flag is set. AFAICT that is\r\nsafe since we prioritize timeline history files and reset the archiver\r\nstate anytime we do a directory scan. We'll first discover timeline\r\nhistory files via directory scans, and then we'll move on to .ready\r\nfiles, starting at the one with the lowest segment number. If a new\r\ntimeline history file or out-of-order .ready file is created, the\r\narchiver is notified, and we start over.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 17 Aug 2021 20:19:25 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback. I have incorporated the suggestion\nto use an unsynchronized boolean flag to force directory scan.\nThis flag is being set if there is a timeline switch or .ready file\nis created out of order. 
The archiver resets this flag, if it is set, before it begins a\ndirectory scan.\n\nPFA patch v8.\n\nThanks,\nDipesh", "msg_date": "Wed, 18 Aug 2021 16:35:23 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Aug 17, 2021 at 4:19 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Thinking further, I think the most important thing to ensure is that\n> resetting the flag happens before we begin the directory scan.\n> Consider the following scenario in which a timeline history file would\n> potentially be lost:\n>\n> 1. Archiver completes directory scan.\n> 2. A timeline history file is created and the flag is set.\n> 3. Archiver resets the flag.\n\nDipesh says in his latest email that the archiver resets the flag just\nbefore it begins a directory scan. If that's accurate, then I think\nthis sequence of events can't occur.\n\nIf there is a race condition here with setting the flag, then an\nalternative design would be to use a counter - either a plain old\nuint64 or perhaps pg_atomic_uint64 - and have the startup process\nincrement the counter when it wants to trigger a scan. In this design,\nthe archiver would never modify the counter itself, but just remember\nthe last value that it saw. If it later sees a different value it\nknows that a full scan is required. I think this kind of system is\nextremely robust against the general class of problems that you're\ntalking about here, but I'm not sure whether we need it, because I'm\nnot sure whether there is a race with just the bool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Aug 2021 10:23:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Thanks for the new version of the patch. 
Overall, I think it is on\r\nthe right track.\r\n\r\n+ /*\r\n+ * This .ready file is created out of order, notify archiver to perform\r\n+ * a full directory scan to archive corresponding WAL file.\r\n+ */\r\n+ StatusFilePath(archiveStatusPath, xlog, \".ready\");\r\n+ if (stat(archiveStatusPath, &stat_buf) == 0)\r\n+ {\r\n+ PgArchEnableDirScan();\r\n+ PgArchWakeup();\r\n+ }\r\n\r\nShould we have XLogArchiveNotify(), writeTimeLineHistory(), and\r\nwriteTimeLineHistoryFile() enable the directory scan instead? Else,\r\nwe have to exhaustively cover all such code paths, which may be\r\ndifficult to maintain. Another reason I am bringing this up is that\r\nmy patch for adjusting .ready file creation [0] introduces more\r\nopportunities for .ready files to be created out-of-order.\r\n\r\n+ /*\r\n+ * This is a fall-back path, check if we are here due to the unavailability\r\n+ * of next anticipated log segment or the archiver is being forced to\r\n+ * perform a full directory scan. Reset the flag in shared memory only if\r\n+ * it has been enabled to force a full directory scan and then proceed with\r\n+ * directory scan.\r\n+ */\r\n+ if (PgArch->dirScan)\r\n+ PgArch->dirScan = false;\r\n\r\nWhy do we need to check that the flag is set before we reset it? I\r\nthink we could just always reset it since we are about to do a\r\ndirectory scan anyway.\r\n\r\nOn 8/18/21, 7:25 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Tue, Aug 17, 2021 at 4:19 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Thinking further, I think the most important thing to ensure is that\r\n>> resetting the flag happens before we begin the directory scan.\r\n>> Consider the following scenario in which a timeline history file would\r\n>> potentially be lost:\r\n>>\r\n>> 1. Archiver completes directory scan.\r\n>> 2. A timeline history file is created and the flag is set.\r\n>> 3. 
Archiver resets the flag.\r\n>\r\n> Dipesh says in his latest email that the archiver resets the flag just\r\n> before it begins a directory scan. If that's accurate, then I think\r\n> this sequence of events can't occur.\r\n>\r\n> If there is a race condition here with setting the flag, then an\r\n> alternative design would be to use a counter - either a plain old\r\n> uint64 or perhaps pg_atomic_uint64 - and have the startup process\r\n> increment the counter when it wants to trigger a scan. In this design,\r\n> the archiver would never modify the counter itself, but just remember\r\n> the last value that it saw. If it later sees a different value it\r\n> knows that a full scan is required. I think this kind of system is\r\n> extremely robust against the general class of problems that you're\r\n> talking about here, but I'm not sure whether we need it, because I'm\r\n> not sure whether there is a race with just the bool.\r\n\r\nI'm not sure, either. Perhaps it would at least be worth adding a\r\npg_memory_barrier() after setting dirScan to false to avoid the\r\nscenario I mentioned (which may or may not be possible). IMO this\r\nstuff would be much easier to reason about if we used a lock instead,\r\neven if the synchronization was not strictly necessary. However, I\r\ndon't want to hold this patch up too much on this point.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/05AD5FE2-9A53-4D11-A3F8-3A83EBB0EB93%40amazon.com\r\n\r\n", "msg_date": "Wed, 18 Aug 2021 20:35:12 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback.\n\n> Should we have XLogArchiveNotify(), writeTimeLineHistory(), and\n> writeTimeLineHistoryFile() enable the directory scan instead? Else,\n> we have to exhaustively cover all such code paths, which may be\n> difficult to maintain. 
Another reason I am bringing this up is that\n> my patch for adjusting .ready file creation [0] introduces more\n> opportunities for .ready files to be created out-of-order.\n\nXLogArchiveNotify() notifies Archiver when a log segment is ready for\narchival by creating a .ready file. This function is being called for each\nlog segment and placing a call to enable directory scan here will result\nin directory scan for each log segment.\n\nWe can have writeTimeLineHistory() and writeTimeLineHistoryFile() to\nenable directory scan to handle the scenarios related to timeline switch.\n\nHowever, in other scenarios, I think we have to explicitly call\nPgArchEnableDirScan()\nto enable directory scan. PgArchEnableDirScan() takes care of waking up\narchiver so that the caller of this function need not have to nudge the\narchiver.\n\n> + /*\n> + * This is a fall-back path, check if we are here due to the\nunavailability\n> + * of next anticipated log segment or the archiver is being forced to\n> + * perform a full directory scan. Reset the flag in shared memory\nonly if\n> + * it has been enabled to force a full directory scan and then\nproceed with\n> + * directory scan.\n> + */\n> + if (PgArch->dirScan)\n> + PgArch->dirScan = false;\n\n> Why do we need to check that the flag is set before we reset it? I\n> think we could just always reset it since we are about to do a\n> directory scan anyway\n\nYes, I agree.\n\n> > If there is a race condition here with setting the flag, then an\n> > alternative design would be to use a counter - either a plain old\n> > uint64 or perhaps pg_atomic_uint64 - and have the startup process\n> > increment the counter when it wants to trigger a scan. In this design,\n> > the archiver would never modify the counter itself, but just remember\n> > the last value that it saw. If it later sees a different value it\n> > knows that a full scan is required. 
I think this kind of system is\n> > extremely robust against the general class of problems that you're\n> > talking about here, but I'm not sure whether we need it, because I'm\n> > not sure whether there is a race with just the bool.\n\n> I'm not sure, either. Perhaps it would at least be worth adding a\n> pg_memory_barrier() after setting dirScan to false to avoid the\n> scenario I mentioned (which may or may not be possible). IMO this\n> stuff would be much easier to reason about if we used a lock instead,\n> even if the synchronization was not strictly necessary. However, I\n> don't want to hold this patch up too much on this point.\n\nThere is one possible scenario where it may run into a race condition.\nIf the archiver has just finished archiving all .ready files and the\nnext anticipated log segment is not available, then the archiver takes\nthe fall-back path to scan the directory, and it resets the flag before\nit begins the directory scan. Now, if a directory scan is enabled by a\ntimeline switch or a .ready file created out of order in parallel with\nthe archiver resetting the flag, then this might result in a race\ncondition. But in this case too, the archiver is eventually going to\nperform a directory scan and the desired file will be archived as part\nof that scan. Apart from this I can't think of any other scenario which\nmay result in a race condition, unless I am missing something.\n\nI have incorporated the suggestions and updated the patch. PFA patch v9.\n\nThanks,\nDipesh", "msg_date": "Thu, 19 Aug 2021 18:09:53 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/19/21, 5:42 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n>> Should we have XLogArchiveNotify(), writeTimeLineHistory(), and\r\n>> writeTimeLineHistoryFile() enable the directory scan instead? 
Else,\r\n>> we have to exhaustively cover all such code paths, which may be\r\n>> difficult to maintain. Another reason I am bringing this up is that\r\n>> my patch for adjusting .ready file creation [0] introduces more\r\n>> opportunities for .ready files to be created out-of-order.\r\n>\r\n> XLogArchiveNotify() notifies Archiver when a log segment is ready for\r\n> archival by creating a .ready file. This function is being called for each \r\n> log segment and placing a call to enable directory scan here will result\r\n> in directory scan for each log segment. \r\n\r\nCould we have XLogArchiveNotify() check the archiver state and only\r\ntrigger a directory scan if we detect that we are creating an out-of-\r\norder .ready file?\r\n\r\n> There is one possible scenario where it may run into a race condition. If\r\n> archiver has just finished archiving all .ready files and the next anticipated\r\n> log segment is not available then in this case archiver takes the fall-back \r\n> path to scan directory. It resets the flag before it begins directory scan. \r\n> Now, if a directory scan is enabled by a timeline switch or .ready file created\r\n> out of order in parallel to the event that the archiver resets the flag then this\r\n> might result in a race condition. But in this case also archiver is eventually \r\n> going to perform a directory scan and the desired file will be archived as part\r\n> of directory scan. Apart of this I can't think of any other scenario which may \r\n> result into a race condition unless I am missing something.\r\n\r\nWhat do you think about adding an upper limit to the number of files\r\nwe can archive before doing a directory scan? The more I think about\r\nthe directory scan flag, the more I believe it is a best-effort tool\r\nthat will remain prone to race conditions. 
If we have a guarantee\r\nthat a directory scan will happen within the next N files, there's\r\nprobably less pressure to make sure that it's 100% correct.\r\n\r\nOn an unrelated note, do we need to add some extra handling for backup\r\nhistory files and partial WAL files?\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 19 Aug 2021 21:12:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 5/4/21, 7:07 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Tue, May 4, 2021 at 12:27 AM Andres Freund <andres@anarazel.de> wrote:\r\n>> On 2021-05-03 16:49:16 -0400, Robert Haas wrote:\r\n>> > I have two possible ideas for addressing this; perhaps other people\r\n>> > will have further suggestions. A relatively non-invasive fix would be\r\n>> > to teach pgarch.c how to increment a WAL file name. After archiving\r\n>> > segment N, check using stat() whether there's an .ready file for\r\n>> > segment N+1. If so, do that one next. If not, then fall back to\r\n>> > performing a full directory scan.\r\n>>\r\n>> Hm. I wonder if it'd not be better to determine multiple files to be\r\n>> archived in one readdir() pass?\r\n>\r\n> I think both methods have some merit. If we had a way to pass a range\r\n> of files to archive_command instead of just one, then your way is\r\n> distinctly better, and perhaps we should just go ahead and invent such\r\n> a thing. If not, your way doesn't entirely solve the O(n^2) problem,\r\n> since you have to choose some upper bound on the number of file names\r\n> you're willing to buffer in memory, but it may lower it enough that it\r\n> makes no practical difference. 
I am somewhat inclined to think that it\r\n> would be good to start with the method I'm proposing, since it is a\r\n> clear-cut improvement over what we have today and can be done with a\r\n> relatively limited amount of code change and no redesign, and then\r\n> perhaps do something more ambitious afterward.\r\n\r\nI was curious about this, so I wrote a patch (attached) to store\r\nmultiple files per directory scan and tested it against the latest\r\npatch in this thread (v9) [0]. Specifically, I set archive_command to\r\n'false', created ~20K WAL segments, then restarted the server with\r\narchive_command set to 'true'. Both the v9 patch and the attached\r\npatch completed archiving all segments in just under a minute. (I\r\ntested the attached patch with NUM_FILES_PER_DIRECTORY_SCAN set to 64,\r\n128, and 256 and didn't observe any significant difference.) The\r\nexisting logic took over 4 minutes to complete.\r\n\r\nI'm hoping to do this test again with many more (100K+) status files,\r\nas I believe that the v9 patch will be faster at that scale, but I'm\r\nnot sure how much faster it will be.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/attachment/125543/v9-0001-mitigate-directory-scan-for-WAL-archiver.patch", "msg_date": "Sun, 22 Aug 2021 04:28:51 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/21/21, 9:29 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> I was curious about this, so I wrote a patch (attached) to store\r\n> multiple files per directory scan and tested it against the latest\r\n> patch in this thread (v9) [0]. Specifically, I set archive_command to\r\n> 'false', created ~20K WAL segments, then restarted the server with\r\n> archive_command set to 'true'. Both the v9 patch and the attached\r\n> patch completed archiving all segments in just under a minute. 
(I\r\n> tested the attached patch with NUM_FILES_PER_DIRECTORY_SCAN set to 64,\r\n> 128, and 256 and didn't observe any significant difference.) The\r\n> existing logic took over 4 minutes to complete.\r\n>\r\n> I'm hoping to do this test again with many more (100K+) status files,\r\n> as I believe that the v9 patch will be faster at that scale, but I'm\r\n> not sure how much faster it will be.\r\n\r\nI ran this again on a bigger machine with 200K WAL files pending\r\narchive. The v9 patch took ~5.5 minutes, the patch I sent took ~8\r\nminutes, and the existing logic took just under 3 hours.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 23 Aug 2021 02:31:45 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Sun, Aug 22, 2021 at 10:31 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I ran this again on a bigger machine with 200K WAL files pending\n> archive. The v9 patch took ~5.5 minutes, the patch I sent took ~8\n> minutes, and the existing logic took just under 3 hours.\n\nHmm. On the one hand, 8 minutes > 5.5 minutes, and presumably the gap\nwould only get wider if the number of files were larger or if reading\nthe directory were slower. I am pretty sure that reading the directory\nmust be much slower in some real deployments where this problem has\ncome up. On the other hand, 8.8 minutes << 3 hours, and your patch\nwould win if somehow we had a ton of gaps in the sequence of files.\nI'm not sure how likely that is to be the cause - probably not very\nlikely at all if you aren't using an archive command that cheats, but\nmaybe really common if you are. Hmm, but I think if the\narchive_command cheats by marking a bunch of files done when it is\ntasked with archiving just one, your patch will break, because, unless\nI'm missing something, it doesn't re-evaluate whether things have\nchanged on every pass through the loop as Dipesh's patch does. 
So I\nguess I'm not quite sure I understand why you think this might be the\nway to go?\n\nMaintaining the binary heap in lowest-priority-first order is very\nclever, and the patch does look quite elegant. I'm just not sure I\nunderstand the point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 09:42:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "\r\n\r\nOn 8/23/21, 6:42 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Sun, Aug 22, 2021 at 10:31 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I ran this again on a bigger machine with 200K WAL files pending\r\n>> archive. The v9 patch took ~5.5 minutes, the patch I sent took ~8\r\n>> minutes, and the existing logic took just under 3 hours.\r\n>\r\n> Hmm. On the one hand, 8 minutes > 5.5 minutes, and presumably the gap\r\n> would only get wider if the number of files were larger or if reading\r\n> the directory were slower. I am pretty sure that reading the directory\r\n> must be much slower in some real deployments where this problem has\r\n> come up. On the other hand, 8.8 minutes << 3 hours, and your patch\r\n> would win if somehow we had a ton of gaps in the sequence of files.\r\n> I'm not sure how likely that is to be the cause - probably not very\r\n> likely at all if you aren't using an archive command that cheats, but\r\n> maybe really common if you are. Hmm, but I think if the\r\n> archive_command cheats by marking a bunch of files done when it is\r\n> tasked with archiving just one, your patch will break, because, unless\r\n> I'm missing something, it doesn't re-evaluate whether things have\r\n> changed on every pass through the loop as Dipesh's patch does. 
So I\r\n> guess I'm not quite sure I understand why you think this might be the\r\n> way to go?\r\n\r\nTo handle a \"cheating\" archive command, I'd probably need to add a\r\nstat() for every time pgarch_readyXLog() returned something from\r\narch_files. I suspect something similar might be needed in Dipesh's\r\npatch to handle backup history files and partial WAL files.\r\n\r\nIn any case, I think Dipesh's patch is the way to go. It obviously\r\nwill perform better in the extreme cases discussed in this thread. I\r\nthink it's important to make sure the patch doesn't potentially leave\r\nfiles behind to be picked up by a directory scan that might not\r\nhappen, but there are likely ways to handle that. In the worst case,\r\nperhaps we need to force a directory scan every N files to make sure\r\nnothing gets left behind. But maybe we can do better.\r\n\r\n> Maintaining the binary heap in lowest-priority-first order is very\r\n> clever, and the patch does look quite elegant. I'm just not sure I\r\n> understand the point.\r\n\r\nThis was mostly an exploratory exercise to get some numbers for the\r\ndifferent approaches discussed in this thread.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 23 Aug 2021 15:50:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> To handle a \"cheating\" archive command, I'd probably need to add a\n> stat() for every time pgarch_readyXLog() returned something from\n> arch_files. I suspect something similar might be needed in Dipesh's\n> patch to handle backup history files and partial WAL files.\n\nI think he's effectively got that already, although it's probably\ninside of pgarch_readyXLog(). 
The idea there is that instead of having\na cache of files to be returned (as in your case) he just checks\nwhether the next file in sequence happens to be present and if so\nreturns that file name. To see whether it's present, he uses stat().\n\n> In any case, I think Dipesh's patch is the way to go. It obviously\n> will perform better in the extreme cases discussed in this thread. I\n> think it's important to make sure the patch doesn't potentially leave\n> files behind to be picked up by a directory scan that might not\n> happen, but there are likely ways to handle that. In the worst case,\n> perhaps we need to force a directory scan every N files to make sure\n> nothing gets left behind. But maybe we can do better.\n\nIt seems to me that we can handle that by just having the startup\nprocess notify the archiver every time some file is ready for\narchiving that's not the next one in the sequence. We have to make\nsure we cover all the relevant code paths, but that seems like it\nshould be doable, and we have to decide on the synchronization\ndetails, but that also seems pretty manageable, even if we haven't\ntotally got it sorted yet. The thing is, as soon as you go back to\nforcing a directory scan every N files, you've made it formally O(N^2)\nagain, which might not matter in practice if the constant factor is\nlow enough, but I don't think it will be. Either you force the scans\nevery, say, 1000 files, in which case it's going to make the whole\nmechanism a lot less effective in terms of getting out from under\nproblem cases -- or you force scans every, say, 1000000 files, in\nwhich case it's not really going to cause any missed files to get\narchived soon enough to make anyone happy. I doubt there is really a\nhappy medium in there.\n\nI suppose the two approaches could be combined, too - remember the\nfirst N files you think you'll encounter and then after that try\nsuccessive filenames until one is missing. 
That would be more\nresilient against O(N^2) behavior in the face of frequent gaps. But it\nmight also be more engineering than is strictly required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 13:49:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/23/21, 10:49 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> To handle a \"cheating\" archive command, I'd probably need to add a\r\n>> stat() for every time pgarch_readyXLog() returned something from\r\n>> arch_files. I suspect something similar might be needed in Dipesh's\r\n>> patch to handle backup history files and partial WAL files.\r\n>\r\n> I think he's effectively got that already, although it's probably\r\n> inside of pgarch_readyXLog(). The idea there is that instead of having\r\n> a cache of files to be returned (as in your case) he just checks\r\n> whether the next file in sequence happens to be present and if so\r\n> returns that file name. To see whether it's present, he uses stat().\r\n\r\nIIUC partial WAL files are handled because the next file in the\r\nsequence with the given TimeLineID won't be there, so we will fall\r\nback to a directory scan and pick it up. Timeline history files are\r\nhandled by forcing a directory scan, which should work because they\r\nalways have the highest priority. Backup history files, however, do\r\nnot seem to be handled. 
I think one approach to fixing that is to\r\nalso treat backup history files similarly to timeline history files.\r\nIf one is created, we force a directory scan, and the directory scan\r\nlogic will consider backup history files as higher priority than\r\neverything but timeline history files.\r\n\r\nI've been looking at the v9 patch with fresh eyes, and I still think\r\nwe should be able to force the directory scan as needed in\r\nXLogArchiveNotify(). Unless the file to archive is a regular WAL file\r\nthat is > our stored location in archiver memory, we should force a\r\ndirectory scan. I think it needs to be > instead of >= because we\r\ndon't know if the archiver has just completed a directory scan and\r\nfound a later segment to use to update the archiver state (but hasn't\r\nyet updated the state in shared memory).\r\n\r\nAlso, I think we need to make sure to set PgArch->dirScan back to true\r\nat the end of pgarch_readyXlog() unless we've found a new regular WAL\r\nfile that we can use to reset the archiver's stored location. This\r\nensures that we'll keep doing directory scans as long as there are\r\ntimeline/backup history files to process. \r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 24 Aug 2021 00:03:37 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Tue, 24 Aug 2021 00:03:37 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/23/21, 10:49 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n> > On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> To handle a \"cheating\" archive command, I'd probably need to add a\n> >> stat() for every time pgarch_readyXLog() returned something from\n> >> arch_files. 
I suspect something similar might be needed in Dipesh's\n> >> patch to handle backup history files and partial WAL files.\n> >\n> > I think he's effectively got that already, although it's probably\n> > inside of pgarch_readyXLog(). The idea there is that instead of having\n> > a cache of files to be returned (as in your case) he just checks\n> > whether the next file in sequence happens to be present and if so\n> > returns that file name. To see whether it's present, he uses stat().\n> \n> IIUC partial WAL files are handled because the next file in the\n> sequence with the given TimeLineID won't be there, so we will fall\n> back to a directory scan and pick it up. Timeline history files are\n> handled by forcing a directory scan, which should work because they\n> always have the highest priority. Backup history files, however, do\n> not seem to be handled. I think one approach to fixing that is to\n> also treat backup history files similarly to timeline history files.\n> If one is created, we force a directory scan, and the directory scan\n> logic will consider backup history files as higher priority than\n> everything but timeline history files.\n\nBackup history files are (currently) just informational and they are\nfinally processed at the end of a bulk-archiving performed by the fast\npath. However, I feel that it is cleaner to trigger a directory scan\nevery time we add an other-than-a-regular-WAL-file, as base-backup or\npromotion are not supposed happen so infrequently.\n\n> I've been looking at the v9 patch with fresh eyes, and I still think\n> we should be able to force the directory scan as needed in\n> XLogArchiveNotify(). Unless the file to archive is a regular WAL file\n> that is > our stored location in archiver memory, we should force a\n> directory scan. 
I think it needs to be > instead of >= because we\n> don't know if the archiver has just completed a directory scan and\n> found a later segment to use to update the archiver state (but hasn't\n> yet updated the state in shared memory).\n\nI'm afraid that it can be seen as a violation of modularity. I feel\nthat wal-emitter side should not be aware of that datail of\narchiving. Instead, I would prefer to keep directory scan as far as it\nfound an smaller segment id than the next-expected segment id ever\narchived by the fast-path (if possible). This would be\nless-performant in the case out-of-order segments are frequent but I\nthink the overall objective of the original patch will be kept.\n\n> Also, I think we need to make sure to set PgArch->dirScan back to true\n> at the end of pgarch_readyXlog() unless we've found a new regular WAL\n> file that we can use to reset the archiver's stored location. This\n> ensures that we'll keep doing directory scans as long as there are\n> timeline/backup history files to process. \n\nRight.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 24 Aug 2021 11:35:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "(sigh..)\n\nAt Tue, 24 Aug 2021 11:35:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > IIUC partial WAL files are handled because the next file in the\n> > sequence with the given TimeLineID won't be there, so we will fall\n> > back to a directory scan and pick it up. Timeline history files are\n> > handled by forcing a directory scan, which should work because they\n> > always have the highest priority. Backup history files, however, do\n> > not seem to be handled. 
I think one approach to fixing that is to\n> > also treat backup history files similarly to timeline history files.\n> > If one is created, we force a directory scan, and the directory scan\n> > logic will consider backup history files as higher priority than\n> > everything but timeline history files.\n> \n> Backup history files are (currently) just informational and they are\n> finally processed at the end of a bulk-archiving performed by the fast\n> path. However, I feel that it is cleaner to trigger a directory scan\n> every time we add an other-than-a-regular-WAL-file, as base-backup or\n- promotion are not supposed happen so infrequently.\n+ promotion are not supposed happen so frequently.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 24 Aug 2021 11:36:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Thanks for the feedback.\n\n> > > IIUC partial WAL files are handled because the next file in the\n> > > sequence with the given TimeLineID won't be there, so we will fall\n> > > back to a directory scan and pick it up. Timeline history files are\n> > > handled by forcing a directory scan, which should work because they\n> > > always have the highest priority. Backup history files, however, do\n> > > not seem to be handled. I think one approach to fixing that is to\n> > > also treat backup history files similarly to timeline history files.\n> > > If one is created, we force a directory scan, and the directory scan\n> > > logic will consider backup history files as higher priority than\n> > > everything but timeline history files.\n> >\n> > Backup history files are (currently) just informational and they are\n> > finally processed at the end of a bulk-archiving performed by the fast\n> > path. 
However, I feel that it is cleaner to trigger a directory scan\n> > every time we add an other-than-a-regular-WAL-file, as base-backup or\n> - promotion are not supposed happen so infrequently.\n> + promotion are not supposed happen so frequently.\n\nI have incorporated the changes to trigger a directory scan in case of a\nbackup history file. Also, I updated the archiver to prioritize archiving a backup\nhistory file over regular WAL files during directory scan to make sure that the\nbackup history file gets archived before the directory scan gets disabled\nas part of archiving a regular WAL file.\n\n> > I've been looking at the v9 patch with fresh eyes, and I still think\n> > we should be able to force the directory scan as needed in\n> > XLogArchiveNotify(). Unless the file to archive is a regular WAL file\n> > that is > our stored location in archiver memory, we should force a\n> > directory scan. I think it needs to be > instead of >= because we\n> > don't know if the archiver has just completed a directory scan and\n> > found a later segment to use to update the archiver state (but hasn't\n> > yet updated the state in shared memory).\n>\n> I'm afraid that it can be seen as a violation of modularity. I feel\n> that wal-emitter side should not be aware of that datail of\n> archiving. Instead, I would prefer to keep directory scan as far as it\n> found an smaller segment id than the next-expected segment id ever\n> archived by the fast-path (if possible). This would be\n> less-performant in the case out-of-order segments are frequent but I\n> think the overall objective of the original patch will be kept.\n\nThe archiver selects the file with the lowest segment number as part of directory\nscan and the next segment number gets reset based on this file. It starts\na new sequence from here and checks the availability of the next file. If\nthere are holes then it will continue to fall back to directory scan. This will\ncontinue until it finds the next sequence in order. 
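In code, that sequence-following logic boils down to computing the next expected file name and probing for its .ready file. A rough standalone illustration (the helper mirrors what XLogFileName() produces for the default 16MB wal_segment_size; everything else here is simplified and the names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Segments per "xlog id" for the default 16MB wal_segment_size
 * (0x100000000 / 16MB), matching what XLogFileName() uses.
 */
#define SEGMENTS_PER_XLOGID 256

/* Build the 24-character WAL file name for a timeline and segment number. */
static void
wal_file_name(char *buf, size_t len, uint32_t tli, uint64_t segno)
{
	snprintf(buf, len, "%08X%08X%08X", tli,
			 (uint32_t) (segno / SEGMENTS_PER_XLOGID),
			 (uint32_t) (segno % SEGMENTS_PER_XLOGID));
}

/*
 * Fast-path step: the name the archiver expects to see next.  It would
 * then stat() "archive_status/<name>.ready"; ENOENT means a hole in the
 * sequence, and the archiver falls back to a directory scan, re-seeding
 * the sequence from the oldest .ready file that scan finds.
 */
static void
next_expected_wal_name(char *buf, size_t len, uint32_t tli,
					   uint64_t last_segno)
{
	wal_file_name(buf, len, tli, last_segno + 1);
}
```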
I think this is already\nhandled unless I am missing something.\n\n> Also, I think we need to make sure to set PgArch->dirScan back to true\n> > at the end of pgarch_readyXlog() unless we've found a new regular WAL\n> > file that we can use to reset the archiver's stored location. This\n> > ensures that we'll keep doing directory scans as long as there are\n> > timeline/backup history files to process.\n>\n> Right.\n\nDone.\n\nPlease find the attached patch v10.\n\nThanks,\nDipesh", "msg_date": "Tue, 24 Aug 2021 18:00:56 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/24/21, 5:31 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n>> > I've been looking at the v9 patch with fresh eyes, and I still think\r\n>> > we should be able to force the directory scan as needed in\r\n>> > XLogArchiveNotify(). Unless the file to archive is a regular WAL file\r\n>> > that is > our stored location in archiver memory, we should force a\r\n>> > directory scan. I think it needs to be > instead of >= because we\r\n>> > don't know if the archiver has just completed a directory scan and\r\n>> > found a later segment to use to update the archiver state (but hasn't\r\n>> > yet updated the state in shared memory).\r\n>> \r\n>> I'm afraid that it can be seen as a violation of modularity. I feel\r\n>> that wal-emitter side should not be aware of that datail of\r\n>> archiving. Instead, I would prefer to keep directory scan as far as it\r\n>> found an smaller segment id than the next-expected segment id ever\r\n>> archived by the fast-path (if possible). 
This would be\r\n>> less-performant in the case out-of-order segments are frequent but I\r\n>> think the overall objective of the original patch will be kept.\r\n>\r\n> Archiver selects the file with lowest segment number as part of directory \r\n> scan and the next segment number gets resets based on this file. It starts\r\n> a new sequence from here and check the availability of the next file. If \r\n> there are holes then it will continue to fall back to directory scan. This will \r\n> continue until it finds the next sequence in order. I think this is already \r\n> handled unless I am missing something.\r\n\r\nI'm thinking of the following scenario:\r\n 1. Status file 2.ready is created.\r\n 2. Archiver finds 2.ready and uses it to update its state.\r\n 3. Status file 1.ready is created.\r\n\r\nAt this point, the archiver will look for 3.ready next. If it finds\r\n3.ready, it'll look for 4.ready. Let's say it keeps finding status\r\nfiles up until 1000000.ready. In this case, the archiver won't go\r\nback and archive segment 1 until we've archived ~1M files. I'll admit\r\nthis is a contrived example, but I think it demonstrates how certain\r\nassumptions could fail with this approach.\r\n\r\nI think Horiguchi-san made a good point that the .ready file creators\r\nshould ideally not need to understand archiving details. However, I\r\nthink this approach requires them to be inextricably linked. In the\r\nhappy case, the archiver will follow the simple path of processing\r\neach consecutive WAL file without incurring a directory scan. Any\r\ntime there is something other than a regular WAL file to archive, we\r\nneed to take special action to make sure it is picked up.\r\n\r\nThis sort of problem doesn't really show up in the always-use-\r\ndirectory-scan approaches. 
If you imagine the .ready file creators as\r\nthrowing status files over a fence at random times and in no\r\nparticular order, directory scans are ideal because you are\r\nessentially starting with a clean slate each time. The logic to\r\nprioritize timeline history files is nice to have, but even if it\r\nwasn't there, the archiver would still pick it up eventually. IOW\r\nthere's no situation (except perhaps infinite timeline history file\r\ngeneration) that puts us in danger of skipping files indefinitely.\r\nEven if we started creating a completely new type of status file, the\r\ndirectory scan approaches would probably work without any changes.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 24 Aug 2021 17:26:20 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Aug 24, 2021 at 1:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I think Horiguchi-san made a good point that the .ready file creators\n> should ideally not need to understand archiving details. However, I\n> think this approach requires them to be inextricably linked. In the\n> happy case, the archiver will follow the simple path of processing\n> each consecutive WAL file without incurring a directory scan. Any\n> time there is something other than a regular WAL file to archive, we\n> need to take special action to make sure it is picked up.\n\nI think they should be inextricably linked, really. If we know\nsomething - like that there's a file ready to be archived - then it\nseems like we should not throw that information away and force\nsomebody else to rediscover it through an expensive process. The whole\nproblem here comes from the fact that we're using the filesystem as an\nIPC mechanism, and it's sometimes a very inefficient one.\n\nI can't quite decide whether the problems we're worrying about here\nare real issues or just kind of hypothetical. 
I mean, today, it seems\nto be possible that we fail to mark some file ready for archiving,\nemit a log message, and then a huge amount of time could go by before\nwe try again to mark it ready for archiving. Are the problems we're\ntalking about here objectively worse than that, or just different? Is\nit a problem in practice, or just in theory?\n\nI really want to avoid getting backed into a corner where we decide\nthat the status quo is the best we can do, because I'm pretty sure\nthat has to be the wrong conclusion. If we think that\nget-a-bunch-of-files-per-readdir approach is better than the\nkeep-trying-the-next-file approach, I mean that's OK with me; I just\nwant to do something about this. I am not sure whether or not that's\nthe right course of action.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Aug 2021 15:08:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/24/21, 12:09 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I can't quite decide whether the problems we're worrying about here\r\n> are real issues or just kind of hypothetical. I mean, today, it seems\r\n> to be possible that we fail to mark some file ready for archiving,\r\n> emit a log message, and then a huge amount of time could go by before\r\n> we try again to mark it ready for archiving. Are the problems we're\r\n> talking about here objectively worse than that, or just different? Is\r\n> it a problem in practice, or just in theory?\r\n\r\nIf a .ready file is created out of order, the directory scan logic\r\nwill pick it up about as soon as possible based on its priority. If\r\nthe archiver is keeping up relatively well, there's a good chance such\r\na file will have the highest archival priority and will be picked up\r\nthe next time the archiver looks for a file to archive. 
With the\r\npatch proposed in this thread, an out-of-order .ready file has no such\r\nguarantee. As long as the archiver never has to fall back to a\r\ndirectory scan, it won't be archived. The proposed patch handles the\r\ncase where RemoveOldXlogFiles() creates missing .ready files by\r\nforcing a directory scan, but I'm not sure this is enough. I think we\r\nhave to check the archiver state each time we create a .ready file to\r\nsee whether we're creating one out-of-order.\r\n\r\nWhile this may be an extremely rare problem in practice, archiving\r\nsomething after the next checkpoint completes seems better than never\r\narchiving it at all. IMO this isn't an area where there is much space\r\nto take risks.\r\n\r\n> I really want to avoid getting backed into a corner where we decide\r\n> that the status quo is the best we can do, because I'm pretty sure\r\n> that has to be the wrong conclusion. If we think that\r\n> get-a-bunch-of-files-per-readdir approach is better than the\r\n> keep-trying-the-next-file approach, I mean that's OK with me; I just\r\n> want to do something about this. I am not sure whether or not that's\r\n> the right course of action.\r\n\r\nI certainly think we can do better. The get-a-bunch-of-files-per-\r\nreaddir approach can help us cut down on the directory scans by one or\r\ntwo orders of magnitude, which is still a huge win. 
Plus, such an\r\napproach retains much of the resilience of the current implementation\r\n(although there may be bit more delay for the special cases).\r\n\r\nThat being said, I still think the keep-trying-the-next-file approach\r\nis worth exploring, but I think it's really important to consider that\r\nthere is no guarantee that a directory scan will happen anytime soon.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 24 Aug 2021 22:31:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "> If a .ready file is created out of order, the directory scan logic\n> will pick it up about as soon as possible based on its priority. If\n> the archiver is keeping up relatively well, there's a good chance such\n> a file will have the highest archival priority and will be picked up\n> the next time the archiver looks for a file to archive. With the\n> patch proposed in this thread, an out-of-order .ready file has no such\n> guarantee. As long as the archiver never has to fall back to a\n> directory scan, it won't be archived. The proposed patch handles the\n> case where RemoveOldXlogFiles() creates missing .ready files by\n> forcing a directory scan, but I'm not sure this is enough. I think we\n> have to check the archiver state each time we create a .ready file to\n> see whether we're creating one out-of-order.\n\nWe can handle the scenario where .ready file is created out of order\nin XLogArchiveNotify(). This way we can avoid making an explicit call\nto enable directory scan from different code paths which may result\ninto creating an out of order .ready file.\n\nArchiver can store the segment number corresponding to the last or most\nrecent .ready file found. 
When a .ready file is created in\nXLogArchiveNotify(),\nthe log segment number of the current .ready file can be compared with the\nsegment number of the last .ready file found at archiver to detect if this\nfile is\ncreated out of order. A directory scan can be forced if required.\n\nI have incorporated these changes in patch v11.\n\n> While this may be an extremely rare problem in practice, archiving\n> something after the next checkpoint completes seems better than never\n> archiving it at all. IMO this isn't an area where there is much space\n> to take risks.\n\nAn alternate approach could be to force a directory scan at checkpoint to\nbreak the infinite wait for a .ready file which is being missed due to the\nfact that it is created out of order. This will make sure that the file\ngets archived within the checkpoint boundaries.\n\nThoughts?\n\nPlease find attached patch v11.\n\nThanks,\nDipesh", "msg_date": "Wed, 25 Aug 2021 16:41:03 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/25/21, 4:11 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n> Please find attached patch v11.\r\n\r\nApologies for the delay. I still intend to review this.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 31 Aug 2021 06:52:08 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 8/25/21, 4:11 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n> An alternate approach could be to force a directory scan at checkpoint to\r\n> break the infinite wait for a .ready file which is being missed due to the \r\n> fact that it is created out of order. 
This will make sure that the file\r\n> gets archived within the checkpoint boundaries.\r\n\r\nI think this is a good idea.\r\n\r\n> Please find attached patch v11.\r\n\r\nThanks for the new version of the patch.\r\n\r\n+\t/*\r\n+\t * History files or a .ready file created out of order requires archiver to\r\n+\t * perform a full directory scan.\r\n+\t */\r\n+\tif (IsTLHistoryFileName(xlog) || IsBackupHistoryFileName(xlog) ||\r\n+\t\t\tfileOutOfOrder)\r\n+\t\tPgArchEnableDirScan();\r\n\r\nI think we should force a directory scan for everything that isn't a\r\nregular WAL file. IOW we can use !IsXLogFileName(xlog) instead of\r\nenumerating all the different kinds of files we might want to archive.\r\n\r\n+\t/*\r\n+\t * Segment number of the most recent .ready file found by archiver,\r\n+\t * protected by WALArchiveLock.\r\n+\t */\r\n+\tXLogSegNo\tlastReadySegNo;\r\n } PgArchData;\r\n \r\n+/*\r\n+ * Segment number and timeline ID to identify the next file in a WAL sequence\r\n+ */\r\n+typedef struct readyXLogState\r\n+{\r\n+\tXLogSegNo\tlastSegNo;\r\n+\tTimeLineID\tlastTLI;\r\n+} readyXLogState;\r\n\r\nlastSegNo and lastReadySegNo appear to be the same thing. Couldn't we\r\njust use the value in PgArchData?\r\n\r\n+\treturn (curSegNo < lastSegNo) ? true : false;\r\n\r\nI think this needs to be <=. If the two values are equal,\r\npgarch_readyXlog() may have just completed a directory scan and might\r\nbe just about to set PgArch->lastSegNo to a greater value.\r\n\r\n+\tLWLockAcquire(WALArchiveLock, LW_EXCLUSIVE);\r\n+\tPgArch->lastReadySegNo = segNo;\r\n+\tLWLockRelease(WALArchiveLock);\r\n\r\nIMO we should just use a spinlock instead of introducing a new LWLock.\r\nIt looks like you really only need the lock for a couple of simple\r\nfunctions. 
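Those functions might boil down to something like this (a self-contained model, with a plain pthread mutex standing in for a PostgreSQL spinlock; the PgArch names are borrowed from the patch, the rest is illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/*
 * Self-contained model of the shared archiver state discussed above.
 * In the actual patch this struct lives in shared memory and the lock
 * would be a PostgreSQL slock_t; a pthread mutex stands in for it here.
 */
static struct
{
    pthread_mutex_t arch_lck;
    bool            dirScan;        /* a full directory scan was requested */
    unsigned long   lastReadySegNo; /* last segment found by the archiver */
} PgArch = {PTHREAD_MUTEX_INITIALIZER, false, 0};

/* Counterpart of PgArchForceDirScan() from the patch under review. */
static void
PgArchForceDirScan(void)
{
    pthread_mutex_lock(&PgArch.arch_lck);
    PgArch.dirScan = true;
    pthread_mutex_unlock(&PgArch.arch_lck);
}

/*
 * Archiver side: consume the request.  The flag is read and cleared
 * under the same lock, so a request that arrives afterwards is simply
 * seen on the next call.
 */
static bool
PgArchCheckAndClearDirScan(void)
{
    bool        forced;

    pthread_mutex_lock(&PgArch.arch_lck);
    forced = PgArch.dirScan;
    PgArch.dirScan = false;
    pthread_mutex_unlock(&PgArch.arch_lck);
    return forced;
}
```

The important property is that setting and consuming the flag happen under one lock, which keeps the logic easy to reason about.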
I still think protecting PgArch->dirScan with a spinlock\r\nis a good idea, if for no other reason than it makes it easier to\r\nreason about this logic.\r\n\r\n+\t\tif (stat(xlogready, &st) == 0)\r\n\r\nI think we should ERROR if stat() fails for any other reason than\r\nENOENT.\r\n\r\n+\t\tishistory = IsTLHistoryFileName(basename) ||\r\n+\t\t\tIsBackupHistoryFileName(basename);\r\n\r\nI suspect we still want to prioritize timeline history files over\r\nbackup history files. TBH I find the logic below this point for\r\nprioritizing history files to be difficult to follow, and I think we\r\nshould refactor it into some kind of archive priority comparator\r\nfunction.\r\n\r\n+\t\t\t/*\r\n+\t\t\t * Reset the flag only when we found a regular WAL file to make\r\n+\t\t\t * sure that we are done with processing history files.\r\n+\t\t\t */\r\n+\t\t\tPgArch->dirScan = false;\r\n\r\nI think we really want to unset dirScan before we start the directory\r\nscan, and then we set it to true afterwards if we didn't find a\r\nregular WAL file. If someone requests a directory scan in the middle\r\nof an ongoing directory scan, we don't want to lose that request.\r\n\r\nI attached two patches that demonstrate what I'm thinking this change\r\nshould look like. One is my take on the keep-trying-the-next-file\r\napproach, and the other is a new version of the multiple-files-per-\r\nreaddir approach (with handling for \"cheating\" archive commands). I\r\npersonally feel that the multiple-files-per-readdir approach winds up\r\nbeing a bit cleaner and more resilient than the keep-trying-the-next-\r\nfile approach. 
However, the keep-trying-the-next-file approach will\r\ncertainly be more efficient (especially for the extreme cases\r\ndiscussed in this thread), and I don't have any concrete concerns with\r\nthis approach that seem impossible to handle.\r\n\r\nNathan", "msg_date": "Wed, 1 Sep 2021 23:44:15 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback.\n\n> I attached two patches that demonstrate what I'm thinking this change\n> should look like. One is my take on the keep-trying-the-next-file\n> approach, and the other is a new version of the multiple-files-per-\n> readdir approach (with handling for \"cheating\" archive commands). I\n> personally feel that the multiple-files-per-readdir approach winds up\n> being a bit cleaner and more resilient than the keep-trying-the-next-\n> file approach. However, the keep-trying-the-next-file approach will\n> certainly be more efficient (especially for the extreme cases\n> discussed in this thread), and I don't have any concrete concerns with\n> this approach that seem impossible to handle.\n\nI agree that multiple-files-per-readdir is cleaner and has the resilience\nof the\ncurrent implementation. However, I have a few suggestions on the\nkeep-trying-the-next-file approach patch shared in the previous thread.\n\n+ /* force directory scan the first time we call pgarch_readyXlog() */\n+ PgArchForceDirScan();\n+\n\nWe should not force a directory scan in pgarch_ArchiverCopyLoop(). This gets\ncalled\nwhenever the archiver wakes up from the wait state. 
This will result in a\nsituation where the archiver performs a full directory scan despite having\naccurate information about the next anticipated log segment.\nInstead we can check if lastSegNo is initialized and continue the directory\nscan\nuntil it gets initialized in pgarch_readyXlog().\n\n+ return lastSegNo;\nWe should return "true" here.\n\nI am thinking we can add a log message for files which are\narchived as part of a directory scan. This will be useful for diagnostic\npurposes\nto check if the desired files get archived as part of a directory scan in\nspecial\nscenarios. I also think that we should add a few comments in\npgarch_readyXlog().\n\nI have incorporated these changes and attached a patch\nv1-0001-keep-trying-the-next-file-approach.patch.\n\n+ /*\n+ * We must use <= because the archiver may have just completed a\n+ * directory scan and found a later segment (but hasn't updated\n+ * shared memory yet).\n+ */\n+ if (this_segno <= arch_segno)\n+ PgArchForceDirScan();\n\nI still think that we should use the '<' operator here because\narch_segno represents the segment number of the most recent\n.ready file found by the archiver. This gets updated in shared\nmemory only if the archiver has successfully found a .ready file.\nIn a normal scenario this_segno will be greater than arch_segno,\nwhereas in cases where a .ready file is created out of order\nthis_segno may be less than arch_segno. I am wondering\nif there is a scenario where arch_segno is equal to this_segno,\nunless I am missing something.\n\nThanks,\nDipesh", "msg_date": "Thu, 2 Sep 2021 18:50:48 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/2/21, 6:22 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n> I agree that multiple-files-per-readdir is cleaner and has the resilience of the\r\n> current implementation. 
However, I have a few suggestion on keep-trying-the\r\n> -next-file approach patch shared in previous thread.\r\n\r\nWhich approach do you think we should use? I think we have decent\r\npatches for both approaches at this point, so perhaps we should see if\r\nwe can get some additional feedback from the community on which one we\r\nshould pursue further.\r\n\r\n> + /* force directory scan the first time we call pgarch_readyXlog() */\r\n> + PgArchForceDirScan();\r\n> +\r\n>\r\n> We should not force a directory in pgarch_ArchiverCopyLoop(). This gets called\r\n> whenever archiver wakes up from the wait state. This will result in a\r\n> situation where the archiver performs a full directory scan despite having the\r\n> accurate information about the next anticipated log segment. \r\n> Instead we can check if lastSegNo is initialized and continue directory scan \r\n> until it gets initialized in pgarch_readyXlog().\r\n\r\nThe problem I see with this is that pgarch_archiveXlog() might end up\r\nfailing. If it does, we won't retry archiving the file until we do a\r\ndirectory scan. I think we could try to avoid forcing a directory\r\nscan outside of these failure cases and archiver startup, but I'm not\r\nsure it's really worth it. When pgarch_readyXlog() returns false, it\r\nmost likely means that there are no .ready files present, so I'm not\r\nsure we are gaining a whole lot by avoiding a directory scan in that\r\ncase. I guess it might help a bit if there are a ton of .done files,\r\nthough.\r\n\r\n> + return lastSegNo;\r\n> We should return \"true\" here.\r\n\r\nOops. Good catch.\r\n\r\n> I am thinking if we can add a log message for files which are \r\n> archived as part of directory scan. This will be useful for diagnostic purpose\r\n> to check if desired files gets archived as part of directory scan in special \r\n> scenarios. 
I also think that we should add a few comments in pgarch_readyXlog().\r\n\r\nI agree, but it should probably be something like DEBUG3 instead of\r\nLOG.\r\n\r\n> + /*\r\n> + * We must use <= because the archiver may have just completed a\r\n> + * directory scan and found a later segment (but hasn't updated\r\n> + * shared memory yet).\r\n> + */\r\n> + if (this_segno <= arch_segno)\r\n> + PgArchForceDirScan();\r\n>\r\n> I still think that we should use '<' operator here because\r\n> arch_segno represents the segment number of the most recent\r\n> .ready file found by the archiver. This gets updated in shared \r\n> memory only if archiver has successfully found a .ready file.\r\n> In a normal scenario this_segno will be greater than arch_segno \r\n> whereas in cases where a .ready file is created out of order \r\n> this_segno may be less than arch_segno. I am wondering\r\n> if there is a scenario where arch_segno is equal to this_segno\r\n> unless I am missing something.\r\n\r\nThe pg_readyXlog() logic looks a bit like this:\r\n\r\n 1. Try to skip directory scan. If that succeeds, we're done.\r\n 2. Do a directory scan.\r\n 3. If we found a regular WAL file, update PgArch and return\r\n what we found.\r\n\r\nLet's say step 1 looks for WAL file 10, but 10.ready doesn't exist\r\nyet. The following directory scan ends up finding 11.ready. Just\r\nbefore we update the PgArch state, XLogArchiveNotify() is called and\r\ncreates 10.ready. However, pg_readyXlog() has already decided to\r\nreturn WAL segment 11 and update the state to look for 12 next. If we\r\njust used '<', we won't force a directory scan, and segment 10 will\r\nnot be archived until the next one happens. If we use '<=', I don't\r\nthink we have the same problem.\r\n\r\nI've also thought about another similar scenario. Let's say step 1\r\nlooks for WAL file 10, but it doesn't exist yet (just like the\r\nprevious example). 
The following directory scan ends up finding\r\n12.ready, but just before we update PgArch, we create 11.ready. In\r\nthis case, we'll still skip forcing a directory scan until 10.ready is\r\ncreated later on. I believe it all eventually works out as long as we\r\ncan safely assume that all files that should have .ready files will\r\neventually get them.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Sep 2021 21:52:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback.\n\n> Which approach do you think we should use? I think we have decent\n> patches for both approaches at this point, so perhaps we should see if\n> we can get some additional feedback from the community on which one we\n> should pursue further.\n\nIn my opinion both the approaches have benefits over the current implementation.\nI think in the keep-trying-the-next-file approach we have handled all rare and\nspecific\nscenarios which require us to force a directory scan to archive the\ndesired files.\nIn addition to this, with the recent change to force a directory scan at\ncheckpoint,\nwe can avoid an infinite wait for a file which is still being missed out\ndespite\nhandling the special scenarios. It is also more efficient in extreme\nscenarios\nas discussed in this thread. However, the multiple-files-per-readdir approach\nis\ncleaner, with the resilience of the current implementation.\n\nI agree that we should decide on which approach to pursue further based on\nadditional feedback from the community.\n\n> The problem I see with this is that pgarch_archiveXlog() might end up\n> failing. If it does, we won't retry archiving the file until we do a\n> directory scan. I think we could try to avoid forcing a directory\n> scan outside of these failure cases and archiver startup, but I'm not\n> sure it's really worth it. 
When pgarch_readyXlog() returns false, it\n> most likely means that there are no .ready files present, so I'm not\n> sure we are gaining a whole lot by avoiding a directory scan in that\n> case. I guess it might help a bit if there are a ton of .done files,\n> though.\n\nYes, I think it will be useful when we have a bunch of .done files and\nthe frequency of .ready files is such that the archiver goes to wait\nstate before the next WAL file is ready for archival.\n\n> I agree, but it should probably be something like DEBUG3 instead of\n> LOG.\n\nI will update it in the next patch.\n\nThanks,\nDipesh", "msg_date": "Fri, 3 Sep 2021 18:31:46 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Fri, 3 Sep 2021 18:31:46 +0530, Dipesh Pandit <dipesh.pandit@gmail.com> wrote in \n> Hi,\n> \n> Thanks for the feedback.\n> \n> > Which approach do you think we should use? I think we have decent\n> > patches for both approaches at this point, so perhaps we should see if\n> > we can get some additional feedback from the community on which one we\n> > should pursue further.\n> \n> In my opinion both the approaches have benefits over current implementation.\n> I think in keep-trying-the-next-file approach we have handled all rare and\n> specific\n> scenarios which requires us to force a directory scan to archive the\n> desired files.\n> In addition to this with the recent change to force a directory scan at\n> checkpoint\n> we can avoid an infinite wait for a file which is still being missed out\n> despite\n> handling the special scenarios. It is also more efficient in extreme\n> scenarios\n> as discussed in this thread. 
However, multiple-files-per-readdir approach\n> is\n> cleaner with resilience of current implementation.\n> \n> I agree that we should decide on which approach to pursue further based on\n> additional feedback from the community.\n\n\nI was thinking that the multiple-files approach would work efficiently\nbut the patch still runs directory scans every 64 files. As\nRobert mentioned it is still O(N^2). I'm not sure the reason for the\nlimit, but if it were to lower memory consumption or the cost to sort,\nwe can resolve that issue by taking the trying-the-next approach ignoring\nthe case of having many gaps (discussed below). If it were to cause\nvoluntary checking of out-of-order files, almost the same can be\nachieved by running directory scans every 64 files in the\ntrying-the-next approach (and we would suffer O(N^2) again). On the\nother hand, if archiving is delayed by several segments, the\nmultiple-files method might reduce the cost to scan the status\ndirectory but it won't matter since the directory contains only\nseveral files. (I think that it might be better that we don't go to\nthe trying-the-next path if we found only several files by running a\ndirectory scan.) The multiple-files approach reduces the number of\ndirectory scans if there were many gaps in the WAL file\nsequence. Although theoretically the last max_backend(+alpha?)\nsegments could be written out-of-order, I suppose we only have\ngaps among the several latest files in reality. I'm not sure,\nthough.\n\nIn short, the trying-the-next approach seems to me to be the way to\ngo, for the reason that it is simpler but it can cover the possible\nfailures with almost the same measures as the multiple-files approach.\n\n> > The problem I see with this is that pgarch_archiveXlog() might end up\n> > failing. If it does, we won't retry archiving the file until we do a\n> > directory scan. 
I think we could try to avoid forcing a directory\n> > scan outside of these failure cases and archiver startup, but I'm not\n> > sure it's really worth it. When pgarch_readyXlog() returns false, it\n> > most likely means that there are no .ready files present, so I'm not\n> > sure we are gaining a whole lot by avoiding a directory scan in that\n> > case. I guess it might help a bit if there are a ton of .done files,\n> > though.\n> \n> Yes, I think it will be useful when we have a bunch of .done files and\n> the frequency of .ready files is such that the archiver goes to wait\n> state before the next WAL file is ready for archival.\n> \n> > I agree, but it should probably be something like DEBUG3 instead of\n> > LOG.\n> \n> I will update it in the next patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 07 Sep 2021 17:42:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/7/21, 1:42 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> I was thinking that the multple-files approch would work efficiently\r\n> but the the patch still runs directory scans every 64 files. As\r\n> Robert mentioned it is still O(N^2). I'm not sure the reason for the\r\n> limit, but if it were to lower memory consumption or the cost to sort,\r\n> we can resolve that issue by taking trying-the-next approach ignoring\r\n> the case of having many gaps (discussed below). If it were to cause\r\n> voluntary checking of out-of-order files, almost the same can be\r\n> achieved by running directory scans every 64 files in the\r\n> trying-the-next approach (and we would suffer O(N^2) again). 
On the\r\n> other hand, if archiving is delayed by several segments, the\r\n> multiple-files method might reduce the cost to scan the status\r\n> directory but it won't matter since the directory contains only\r\n> several files. (I think that it might be better that we don't go to\r\n> trying-the-next path if we found only several files by running a\r\n> directory scan.) The multiple-files approach reduces the number of\r\n> directory scans if there were many gaps in the WAL file\r\n> sequence. Alghouth theoretically the last max_backend(+alpha?)\r\n> segemnts could be written out-of-order, but I suppose we only have\r\n> gaps only among the several latest files in reality. I'm not sure,\r\n> though..\r\n>\r\n> In short, the trying-the-next approach seems to me to be the way to\r\n> go, for the reason that it is simpler but it can cover the possible\r\n> failures by almost the same measures with the muliple-files approach.\r\n\r\nThanks for chiming in. The limit of 64 in the multiple-files-per-\r\ndirectory-scan approach was mostly arbitrary. My earlier testing [0]\r\nwith different limits didn't reveal any significant difference, but\r\nusing a higher limit might yield a small improvement when there are\r\nseveral hundred thousand .ready files. IMO increasing the limit isn't\r\nreally worth it for this approach. For 500,000 .ready files,\r\nordinarily you'd need 500,000 directory scans. When 64 files are\r\narchived for each directory scan, you need ~8,000 directory scans.\r\nWith 128 files per directory scan, you need ~4,000. With 256, you\r\nneed ~2000. The difference between 8,000 directory scans and 500,000\r\nis quite significant. 
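That arithmetic can be sanity-checked with a throwaway helper (not part of either patch):

```c
#include <assert.h>

/*
 * Rough scan-count estimate for the numbers quoted above: with n_ready
 * .ready files and up to "batch" files archived per directory scan,
 * about ceil(n_ready / batch) scans are needed, versus n_ready scans
 * when every scan yields a single file.
 */
static long
scans_needed(long n_ready, long batch)
{
    return (n_ready + batch - 1) / batch;   /* ceiling division */
}
/* scans_needed(500000, 64) -> 7813 (~8,000); 128 -> 3907; 256 -> 1954 */
```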
The difference between 2,000 and 8,000 isn't\r\nnearly as significant in comparison.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/3ECC212F-88FD-4FB2-BAF1-C2DD1563E310%40amazon.com\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 17:28:45 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Sep 7, 2021 at 1:28 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Thanks for chiming in. The limit of 64 in the multiple-files-per-\n> directory-scan approach was mostly arbitrary. My earlier testing [0]\n> with different limits didn't reveal any significant difference, but\n> using a higher limit might yield a small improvement when there are\n> several hundred thousand .ready files. IMO increasing the limit isn't\n> really worth it for this approach. For 500,000 .ready files,\n> ordinarily you'd need 500,000 directory scans. When 64 files are\n> archived for each directory scan, you need ~8,000 directory scans.\n> With 128 files per directory scan, you need ~4,000. With 256, you\n> need ~2000. The difference between 8,000 directory scans and 500,000\n> is quite significant. The difference between 2,000 and 8,000 isn't\n> nearly as significant in comparison.\n\nThat's certainly true.\n\nI guess what I don't understand about the multiple-files-per-directory\nscan implementation is what happens when something happens that would\nrequire the keep-trying-the-next-file approach to perform a forced\nscan. 
It seems to me that you still need to force an immediate full\nscan, because if the idea is that you want to, say, prioritize\narchiving of new timeline files over any others, a cached list of\nfiles that you should archive next doesn't accomplish that, just like\nkeeping on trying the next file in sequence doesn't accomplish that.\n\nSo I'm wondering if in the end the two approaches converge somewhat,\nso that with either patch you get (1) some kind of optimization to\nscan the directory less often, plus (2) some kind of notification\nmechanism to tell you when you need to avoid applying that\noptimization. If you wanted to, (1) could even include both batching\nand then, when the batch is exhausted, trying files in sequence. I'm\nnot saying that's the way to go, but you could. In the end, it seems\nless important that we do any particular thing here and more important\nthat we do something - but if prioritizing timeline history files is\nimportant, then we have to preserve that behavior.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Sep 2021 13:52:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/7/21, 10:54 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I guess what I don't understand about the multiple-files-per-dirctory\r\n> scan implementation is what happens when something happens that would\r\n> require the keep-trying-the-next-file approach to perform a forced\r\n> scan. It seems to me that you still need to force an immediate full\r\n> scan, because if the idea is that you want to, say, prioritize\r\n> archiving of new timeline files over any others, a cached list of\r\n> files that you should archive next doesn't accomplish that, just like\r\n> keeping on trying the next file in sequence doesn't accomplish that.\r\n\r\nRight. The latest patch for that approach [0] does just that. 
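Conceptually the trigger is just a filename check: a regular WAL segment name is 24 hexadecimal digits, and anything else forces a scan. Below is a simplified stand-in for the server's IsXLogFileName(), illustrative only:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/*
 * Simplified stand-in for the server's IsXLogFileName(): a regular WAL
 * segment name is exactly 24 hexadecimal digits.  Timeline history
 * files ("00000002.history") and backup history files fail this check,
 * which is what would trigger the forced directory scan.
 */
static bool
is_regular_wal_name(const char *fname)
{
    if (strlen(fname) != 24)
        return false;
    for (const char *p = fname; *p != '\0'; p++)
    {
        if (!isxdigit((unsigned char) *p))
            return false;
    }
    return true;
}
```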
In\r\nfact, I think timeline files are the only files for which we need to\r\nforce an immediate directory scan in the multiple-files-per-scan\r\napproach. For the keep-trying-the-next-file approach, we have to\r\nforce a directory scan for anything but a regular WAL file that is\r\nahead of our archiver state.\r\n\r\n> So I'm wondering if in the end the two approaches converge somewhat,\r\n> so that with either patch you get (1) some kind of optimization to\r\n> scan the directory less often, plus (2) some kind of notification\r\n> mechanism to tell you when you need to avoid applying that\r\n> optimization. If you wanted to, (1) could even include both batching\r\n> and then, when the batch is exhausted, trying files in sequence. I'm\r\n> not saying that's the way to go, but you could. In the end, it seems\r\n> less important that we do any particular thing here and more important\r\n> that we do something - but if prioritizing timeline history files is\r\n> important, then we have to preserve that behavior.\r\n\r\nYeah, I would agree that the approaches basically converge into some\r\nform of \"do fewer directory scans.\"\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/attachment/125980/0001-Improve-performance-of-pgarch_readyXlog-with-many-st.patch\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 18:13:39 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Tue, Sep 7, 2021 at 2:13 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Right. The latest patch for that approach [0] does just that. In\n> fact, I think timeline files are the only files for which we need to\n> force an immediate directory scan in the multiple-files-per-scan\n> approach. 
For the keep-trying-the-next-file approach, we have to\n> force a directory scan for anything but a regular WAL file that is\n> ahead of our archiver state.\n\nYeah, that makes sense.\n\n> Yeah, I would agree that the approaches basically converge into some\n> form of \"do fewer directory scans.\"\n\nI guess we still have to pick one or the other, but I don't really\nknow how to do that, since both methods seem to be relatively fine,\nand the scenarios where one is better than the other all feel a little\nbit contrived. I guess if no clear consensus emerges in the next week\nor so, I'll just pick one and commit it. Not quite sure yet how I'll\ndo the picking, but we seem to all agree that something is better than\nnothing, so hopefully nobody will be too sad if I make an arbitrary\ndecision. And if some clear agreement emerges before then, even\nbetter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Sep 2021 14:30:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/7/21, 11:31 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I guess we still have to pick one or the other, but I don't really\r\n> know how to do that, since both methods seem to be relatively fine,\r\n> and the scenarios where one is better than the other all feel a little\r\n> bit contrived. I guess if no clear consensus emerges in the next week\r\n> or so, I'll just pick one and commit it. Not quite sure yet how I'll\r\n> do the picking, but we seem to all agree that something is better than\r\n> nothing, so hopefully nobody will be too sad if I make an arbitrary\r\n> decision. 
And if some clear agreement emerges before then, even\r\n> better.\r\n\r\nI will be happy to see this fixed either way.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 18:40:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Tue, 7 Sep 2021 18:40:24 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/7/21, 11:31 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n> > I guess we still have to pick one or the other, but I don't really\n> > know how to do that, since both methods seem to be relatively fine,\n> > and the scenarios where one is better than the other all feel a little\n> > bit contrived. I guess if no clear consensus emerges in the next week\n> > or so, I'll just pick one and commit it. Not quite sure yet how I'll\n> > do the picking, but we seem to all agree that something is better than\n> > nothing, so hopefully nobody will be too sad if I make an arbitrary\n> > decision. And if some clear agreement emerges before then, even\n> > better.\n> \n> I will be happy to see this fixed either way.\n\n+1. I agree with Nathan's estimation of the performance gain.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Sep 2021 15:16:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "> > I guess we still have to pick one or the other, but I don't really\n> > know how to do that, since both methods seem to be relatively fine,\n> > and the scenarios where one is better than the other all feel a little\n> > bit contrived. I guess if no clear consensus emerges in the next week\n> > or so, I'll just pick one and commit it. 
Not quite sure yet how I'll\n> > do the picking, but we seem to all agree that something is better than\n> > nothing, so hopefully nobody will be too sad if I make an arbitrary\n> > decision. And if some clear agreement emerges before then, even\n> > better.\n>\n> I will be happy to see this fixed either way.\n\n+1\n\n> > I agree, but it should probably be something like DEBUG3 instead of\n> > LOG.\n>\n> I will update it in the next patch.\n\nUpdated log level to DEBUG3 and rebased the patch. PFA patch.\n\nThanks,\nDipesh", "msg_date": "Wed, 8 Sep 2021 23:15:24 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/8/21, 10:49 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n> Updated log level to DEBUG3 and rebased the patch. PFA patch.\r\n\r\nThanks for the new patch.\r\n\r\n+ * by checking the availability of next WAL file. \"xlogState\" specifies the\r\n+ * segment number and timeline ID corresponding to the next WAL file.\r\n\r\n\"xlogState\" probably needs to be updated here.\r\n\r\nAs noted before [0], I think we need to force a directory scan at the\r\nbeginning of pgarch_MainLoop() and when pgarch_ArchiverCopyLoop()\r\nreturns before we exit the \"while\" loop. Else, there's probably a\r\nrisk that we skip archiving a file until the next directory scan. IMO\r\nforcing a directory scan at the beginning of pgarch_ArchiverCopyLoop()\r\nis a simpler way to do roughly the same thing. 
I'm skeptical that\r\npersisting the next-anticipated state between calls to\r\npgarch_ArchiverCopyLoop() is worth the complexity.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/AC78607B-9DA6-41F4-B253-840D3DD964BF%40amazon.com\r\n\r\n", "msg_date": "Thu, 9 Sep 2021 22:24:30 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback.\n\n> + * by checking the availability of next WAL file. \"xlogState\" specifies\nthe\n> + * segment number and timeline ID corresponding to the next WAL file.\n>\n> \"xlogState\" probably needs to be updated here.\n\nYes, I updated the comment.\n\n> As noted before [0], I think we need to force a directory scan at the\n> beginning of pgarch_MainLoop() and when pgarch_ArchiverCopyLoop()\n> returns before we exit the \"while\" loop. Else, there's probably a\n> risk that we skip archiving a file until the next directory scan. IMO\n> forcing a directory scan at the beginning of pgarch_ArchiverCopyLoop()\n> is a simpler way to do roughly the same thing. I'm skeptical that\n> persisting the next-anticipated state between calls to\n> pgarch_ArchiverCopyLoop() is worth the complexity.\n\nI think if we force a directory scan in pgarch_ArchiverCopyLoop() when it\nreturns before we exit the \"while\" loop or outside the loop then it may\nresult in directory scan for all WAL files in one of the scenarios that I\ncan think of.\n\nThere could be two possible scenarios, first scenario in which the archiver\nis always lagging and the second scenario in which archiver is in sync or\nahead with the rate at which WAL files are generated.\n\nIf we focus on the second scenario, then consider a case where the archiver\nhas\njust archived file 1.ready and is about to check the availability of\n2.ready but the\nfile 2.ready is not available in archive status directory. 
Archiver\nperforms a directory\nscan as a fall-back mechanism and goes to wait state. (The current\nimplementation\nrelies on notifying the archiver by creating a .ready file on disk. It may\nhappen that\nthe file is ready for archival but due to the slow notification mechanism\nthere is a delay\nin notification and the archiver goes to wait state.) When file 2.ready is\ncreated on disk\nthe archiver is notified, it wakes up and calls pgarch_ArchiverCopyLoop(). Now\nif we\nunconditionally force a directory scan in pgarch_ArchiverCopyLoop() then it\nmay\nresult in a directory scan for all WAL files in this scenario. In this case\nwe have the\nnext anticipated log segment number and we can prevent an additional\ndirectory\nscan. I have tested this with a small setup by creating ~2000 WAL files and\nit\nresulted in a directory scan for each file.\n\nI agree that the failure scenario discussed in [0] will require a WAL\nfile to\nwait until the next directory scan. However, this can be avoided by forcing\na\ndirectory scan in pgarch_ArchiverCopyLoop() only in the case of a failure\nscenario.\nThis will make sure that when the archiver wakes up for the next cycle it\nperforms a full directory scan, leaving out any risk of missing a file due to\narchive\nfailure. Additionally, it will also avoid the additional directory scans\nmentioned in\nthe above scenario.\n\nI have incorporated the changes and updated a new patch. PFA patch.\n\nThanks,\nDipesh\n\n[0]\nhttps://www.postgresql.org/message-id/AC78607B-9DA6-41F4-B253-840D3DD964BF%40amazon.com", "msg_date": "Mon, 13 Sep 2021 20:18:23 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Sep 2, 2021 at 5:52 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> The pg_readyXlog() logic looks a bit like this:\n>\n> 1. Try to skip directory scan. If that succeeds, we're done.\n> 2. Do a directory scan.\n> 3. 
If we found a regular WAL file, update PgArch and return\n> what we found.\n>\n> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist\n> yet. The following directory scan ends up finding 11.ready. Just\n> before we update the PgArch state, XLogArchiveNotify() is called and\n> creates 10.ready. However, pg_readyXlog() has already decided to\n> return WAL segment 11 and update the state to look for 12 next. If we\n> just used '<', we won't force a directory scan, and segment 10 will\n> not be archived until the next one happens. If we use '<=', I don't\n> think we have the same problem.\n\nThe latest post on this thread contained a link to this one, and it\nmade me want to rewind to this point in the discussion. Suppose we\nhave the following alternative scenario:\n\nLet's say step 1 looks for WAL file 10, but 10.ready doesn't exist\nyet. The following directory scan ends up finding 12.ready. Just\nbefore we update the PgArch state, XLogArchiveNotify() is called and\ncreates 11.ready. However, pg_readyXlog() has already decided to\nreturn WAL segment 12 and update the state to look for 13 next.\n\nNow, if I'm not mistaken, using <= doesn't help at all.\n\nIn my opinion, the problem here is that the natural way to ask \"is\nthis file being archived out of order?\" is to ask yourself \"is the\nfile that I'm marking as ready for archiving now the one that\nimmediately follows the last one I marked as ready for archiving?\" and\nthen invert the result. That is, if I last marked 10 as ready, and now\nI'm marking 11 as ready, then it's in order, but if I'm now marking\nanything else whatsoever, then it's out of order. But that's not what\nthis does. Instead of comparing what it's doing now to what it did\nlast, it compares what it did now to what the archiver did last.\n\nAnd it's really not obvious that that's correct. 
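A toy model of the two comparisons makes the difference visible (hypothetical names; the real code tracks XLogSegNo in shared memory): comparing against the notifier's own last segment catches the gap, while comparing against a stale view of the archiver's position can miss it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model -- hypothetical names, not the patch code.  Segment numbers
 * are plain integers standing in for XLogSegNo. */

/* (a) Compare against the last segment *this* notifier marked ready. */
static uint64_t last_marked_ready;

static bool
notify_is_in_order(uint64_t this_seg)
{
    bool in_order = (this_seg == last_marked_ready + 1);

    last_marked_ready = this_seg;
    return in_order;
}

/* (b) Compare against the archiver's published position instead: force
 * a scan only when the new .ready file is at or behind the segment the
 * archiver says it is looking for (the discussed '<=' check). */
static bool
vs_archiver_needs_scan(uint64_t archiver_looking_for, uint64_t this_seg)
{
    return this_seg <= archiver_looking_for;
}
```

In the scenario above, (a) flags 12 (and then 11) as out of order, while (b), consulted while the archiver's published state still says "looking for 10", decides 11 needs no scan -- and 11 is skipped.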
I think that the\nabove argument actually demonstrates a flaw in the logic, but even if\nnot, or even if it's too small a flaw to be a problem in practice, it\nseems a lot harder to reason about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:13:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/13/21, 1:14 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Thu, Sep 2, 2021 at 5:52 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist\r\n>> yet. The following directory scan ends up finding 11.ready. Just\r\n>> before we update the PgArch state, XLogArchiveNotify() is called and\r\n>> creates 10.ready. However, pg_readyXlog() has already decided to\r\n>> return WAL segment 11 and update the state to look for 12 next. If we\r\n>> just used '<', we won't force a directory scan, and segment 10 will\r\n>> not be archived until the next one happens. If we use '<=', I don't\r\n>> think we have the same problem.\r\n>\r\n> The latest post on this thread contained a link to this one, and it\r\n> made me want to rewind to this point in the discussion. Suppose we\r\n> have the following alternative scenario:\r\n>\r\n> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist\r\n> yet. The following directory scan ends up finding 12.ready. Just\r\n> before we update the PgArch state, XLogArchiveNotify() is called and\r\n> creates 11.ready. However, pg_readyXlog() has already decided to\r\n> return WAL segment 12 and update the state to look for 13 next.\r\n>\r\n> Now, if I'm not mistaken, using <= doesn't help at all.\r\n\r\nI think this is the scenario I was trying to touch on in the paragraph\r\nimmediately following the one you mentioned. 
My theory was that we'll\r\nstill skip forcing a directory scan until 10.ready is created, so it\r\nwould eventually work out as long as we can safely assume that all\r\n.ready files that should be created eventually will be. Thinking\r\nfurther, I don't think that's right. We might've already renamed\r\n10.ready to 10.done and removed it long ago, so there's a chance that\r\nwe wouldn't go back and pick up 11.ready until one of our \"fallback\"\r\ndirectory scans forced by the checkpointer. So, yes, I think you are\r\nright.\r\n\r\n> In my opinion, the problem here is that the natural way to ask \"is\r\n> this file being archived out of order?\" is to ask yourself \"is the\r\n> file that I'm marking as ready for archiving now the one that\r\n> immediately follows the last one I marked as ready for archiving?\" and\r\n> then invert the result. That is, if I last marked 10 as ready, and now\r\n> I'm marking 11 as ready, then it's in order, but if I'm now marking\r\n> anything else whatsoever, then it's out of order. But that's not what\r\n> this does. Instead of comparing what it's doing now to what it did\r\n> last, it compares what it did now to what the archiver did last.\r\n>\r\n> And it's really not obvious that that's correct. I think that the\r\n> above argument actually demonstrates a flaw in the logic, but even if\r\n> not, or even if it's too small a flaw to be a problem in practice, it\r\n> seems a lot harder to reason about.\r\n\r\nI certainly agree that it's harder to reason about. If we were to go\r\nthe keep-trying-the-next-file route, we could probably minimize a lot\r\nof the handling for these rare cases by banking on the \"fallback\"\r\ndirectory scans. 
Provided we believe these situations are extremely\r\nrare, some extra delay for an archive every once in a while might be\r\nacceptable.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 13 Sep 2021 22:22:36 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Thanks for the feedback.\n\n> The latest post on this thread contained a link to this one, and it\n> made me want to rewind to this point in the discussion. Suppose we\n> have the following alternative scenario:\n>\n> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist\n> yet. The following directory scan ends up finding 12.ready. Just\n> before we update the PgArch state, XLogArchiveNotify() is called and\n> creates 11.ready. However, pg_readyXlog() has already decided to\n> return WAL segment 12 and update the state to look for 13 next.\n>\n> Now, if I'm not mistaken, using <= doesn't help at all.\n>\n> In my opinion, the problem here is that the natural way to ask \"is\n> this file being archived out of order?\" is to ask yourself \"is the\n> file that I'm marking as ready for archiving now the one that\n> immediately follows the last one I marked as ready for archiving?\" and\n> then invert the result. That is, if I last marked 10 as ready, and now\n> I'm marking 11 as ready, then it's in order, but if I'm now marking\n> anything else whatsoever, then it's out of order. But that's not what\n> this does. Instead of comparing what it's doing now to what it did\n> last, it compares what it did now to what the archiver did last.\n\nI agree that when we are creating a .ready file we should compare\nthe current .ready file with the last .ready file to check if this file is\ncreated out of order. We can store the state of the last .ready file\nin shared memory and compare it with the current .ready file. 
I\nbelieve that archiver specific shared memory area can be used\nto store the state of the last .ready file unless I am missing\nsomething and this needs to be stored in a separate shared\nmemory area.\n\nWith this change, we have the flexibility to move the current archiver\nstate out of shared memory and keep it local to archiver. I have\nincorporated these changes and updated a new patch.\n\n\n> > And it's really not obvious that that's correct. I think that the\n> > above argument actually demonstrates a flaw in the logic, but even if\n> > not, or even if it's too small a flaw to be a problem in practice, it\n> > seems a lot harder to reason about.\n>\n> I certainly agree that it's harder to reason about. If we were to go\n> the keep-trying-the-next-file route, we could probably minimize a lot\n> of the handling for these rare cases by banking on the \"fallback\"\n> directory scans. Provided we believe these situations are extremely\n> rare, some extra delay for an archive every once in a while might be\n> acceptable.\n\n+1. We are forcing a directory scan at the checkpoint and it will make sure\nthat any missing file gets archived within the checkpoint boundaries.\n\nPlease find the attached patch.\n\nThanks,\nDipesh", "msg_date": "Tue, 14 Sep 2021 19:50:43 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/14/21, 7:23 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n> I agree that when we are creating a .ready file we should compare \r\n> the current .ready file with the last .ready file to check if this file is \r\n> created out of order. We can store the state of the last .ready file \r\n> in shared memory and compare it with the current .ready file. 
I\r\n> believe that archiver specific shared memory area can be used\r\n> to store the state of the last .ready file unless I am missing\r\n> something and this needs to be stored in a separate shared\r\n> memory area.\r\n>\r\n> With this change, we have the flexibility to move the current archiver\r\n> state out of shared memory and keep it local to archiver. I have \r\n> incorporated these changes and updated a new patch.\r\n\r\nI wonder if this can be simplified even further. If we don't bother\r\ntrying to catch out-of-order .ready files in XLogArchiveNotify() and\r\njust depend on the per-checkpoint/restartpoint directory scans, we can\r\nprobably remove lastReadySegNo from archiver state completely.\r\n\r\n+\t/* Force a directory scan if we are archiving anything but a regular\r\n+\t * WAL file or if this WAL file is being created out-of-order.\r\n+\t */\r\n+\tif (!IsXLogFileName(xlog))\r\n+\t\tPgArchForceDirScan();\r\n+\telse\r\n+\t{\r\n+\t\tTimeLineID tli;\r\n+\t\tXLogSegNo last_segno;\r\n+\t\tXLogSegNo this_segno;\r\n+\r\n+\t\tlast_segno = PgArchGetLastReadySegNo();\r\n+\t\tXLogFromFileName(xlog, &tli, &this_segno, wal_segment_size);\r\n+\r\n+\t\t/*\r\n+\t\t * Force a directory scan in case if this .ready file created out of\r\n+\t\t * order.\r\n+\t\t */\r\n+\t\tlast_segno++;\r\n+\t\tif (last_segno != this_segno)\r\n+\t\t\tPgArchForceDirScan();\r\n+\r\n+\t\tPgArchSetLastReadySegNo(this_segno);\r\n+\t}\r\n\r\nThis is an interesting idea, but the \"else\" block here seems prone to\r\nrace conditions. I think we'd have to hold arch_lck to prevent that.\r\nBut as I mentioned above, if we are okay with depending on the\r\nfallback directory scans, I think we can remove the \"else\" block\r\ncompletely.\r\n\r\n+\t/* Initialize the current state of archiver */\r\n+\txlogState.lastSegNo = MaxXLogSegNo;\r\n+\txlogState.lastTli = MaxTimeLineID;\r\n\r\nIt looks like we have two ways to force a directory scan. 
We can\r\neither set force_dir_scan to true, or lastSegNo can be set to\r\nMaxXLogSegNo. Why not just set force_dir_scan to true here so that we\r\nonly have one way to force a directory scan?\r\n\r\n+\t\t\t\t\t/*\r\n+\t\t\t\t\t * Failed to archive, make sure that archiver performs a\r\n+\t\t\t\t\t * full directory scan in the next cycle to avoid missing\r\n+\t\t\t\t\t * the WAL file which could not be archived due to some\r\n+\t\t\t\t\t * failure in current cycle.\r\n+\t\t\t\t\t */\r\n+\t\t\t\t\tPgArchForceDirScan();\r\n\r\nDon't we also need to force a directory scan in the other cases we\r\nreturn early from pgarch_ArchiverCopyLoop()? We will have already\r\nadvanced the archiver state in pgarch_readyXlog(), so I think we'd end\r\nup skipping files if we didn't. For example, if archive_command isn't\r\nset, we'll just return, and the next call to pgarch_readyXlog() might\r\nreturn the next file.\r\n\r\n+\t\t\t/* Continue directory scan until we find a regular WAL file */\r\n+\t\t\tSpinLockAcquire(&PgArch->arch_lck);\r\n+\t\t\tPgArch->force_dir_scan = true;\r\n+\t\t\tSpinLockRelease(&PgArch->arch_lck);\r\n\r\nnitpick: I think we should just call PgArchForceDirScan() here.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 14 Sep 2021 16:18:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/14/21, 9:18 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> This is an interesting idea, but the \"else\" block here seems prone to\r\n> race conditions. 
I think we'd have to hold arch_lck to prevent that.\r\n> But as I mentioned above, if we are okay with depending on the\r\n> fallback directory scans, I think we can remove the \"else\" block\r\n> completely.\r\n\r\nThinking further, we probably need to hold a lock even when we are\r\ncreating the .ready file to avoid race conditions.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 14 Sep 2021 18:07:31 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "At Tue, 14 Sep 2021 18:07:31 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/14/21, 9:18 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n> > This is an interesting idea, but the \"else\" block here seems prone to\n> > race conditions. I think we'd have to hold arch_lck to prevent that.\n> > But as I mentioned above, if we are okay with depending on the\n> > fallback directory scans, I think we can remove the \"else\" block\n> > completely.\n> \n> Thinking further, we probably need to hold a lock even when we are\n> creating the .ready file to avoid race conditions.\n\nThe race condition surely happens, but even if that happens, all\ncompeting processes except one of them detect out-of-order and will\nenforce directory scan. But I'm not sure how it behaves under more\ncomplex situation so I'm not sure I like that behavior.\n\nWe could just use another lock for the logic there, but instead\ncouldn't we merge PgArchGetLastReadySegNo and PgArchSetLastReadySegNo\ninto one atomic test-and-(check-and-)set function? 
Like this.\n\n====\n XLogFromFileName(xlog, &tli, &this_segno, wal_segment_size);\n if (!PgArchReadySegIsInOrder(this_segno))\n PgArchForceDirScan();\n\nbool\nPgArchReadySegIsInOrder(XLogSegNo this_segno)\n{\n bool in_order = true;\n\n SpinLockAcquire(&PgArch->arch_lck);\n if (PgArch->lastReadySegNo + 1 != this_segno)\n in_order = false;\n PgArch->lastReadySegNo = this_segno;\n SpinLockRelease(&PgArch->arch_lck);\n\n return in_order;\n}\n====\n\nBy the way, it seems to me that we only need to force directory scan\nwhen notification seg number steps back. If this is correct, we can\nreduce the number how many times we enforce directory scans.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 15 Sep 2021 10:47:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\nThanks for the feedback.\n\n> I wonder if this can be simplified even further. If we don't bother\n> trying to catch out-of-order .ready files in XLogArchiveNotify() and\n> just depend on the per-checkpoint/restartpoint directory scans, we can\n> probably remove lastReadySegNo from archiver state completely.\n\nIf we agree that some extra delay in archiving these files is acceptable\nthen we don't require any special handling for this scenario otherwise\nwe may need to handle it separately.\n\n> + /* Initialize the current state of archiver */\n> + xlogState.lastSegNo = MaxXLogSegNo;\n> + xlogState.lastTli = MaxTimeLineID;\n>\n> It looks like we have two ways to force a directory scan. We can\n> either set force_dir_scan to true, or lastSegNo can be set to\n> MaxXLogSegNo. Why not just set force_dir_scan to true here so that we\n> only have one way to force a directory scan?\n\nmake sense, I have updated it.\n\n> Don't we also need to force a directory scan in the other cases we\n> return early from pgarch_ArchiverCopyLoop()? 
We will have already\n> advanced the archiver state in pgarch_readyXlog(), so I think we'd end\n> up skipping files if we didn't. For example, if archive_command isn't\n> set, we'll just return, and the next call to pgarch_readyXlog() might\n> return the next file.\n\nI agree, we should do it for all early return paths.\n\n> nitpick: I think we should just call PgArchForceDirScan() here.\n\nYes, that's right.\n\n> > > This is an interesting idea, but the \"else\" block here seems prone to\n> > > race conditions. I think we'd have to hold arch_lck to prevent that.\n> > > But as I mentioned above, if we are okay with depending on the\n> > > fallback directory scans, I think we can remove the \"else\" block\n> > > completely.\n\nOhh I didn't realize the race condition here. The competing processes\ncan read the same value of lastReadySegNo.\n\n> > Thinking further, we probably need to hold a lock even when we are\n> > creating the .ready file to avoid race conditions.\n>\n> The race condition surely happens, but even if that happens, all\n> competing processes except one of them detect out-of-order and will\n> enforce directory scan. But I'm not sure how it behaves under more\n> complex situation so I'm not sure I like that behavior.\n>\n> We could just use another lock for the logic there, but instead\n> couldn't we merge PgArchGetLastReadySegNo and PgArchSetLastReadySegNo\n> into one atomic test-and-(check-and-)set function? Like this.\n\nI agree that we can merge the existing \"Get\" and \"Set\" functions into\nan atomic test-and-check-and-set function to avoid a race condition.\n\nI have incorporated these changes and updated a new patch. 
PFA patch.\n\nThanks,\nDipesh", "msg_date": "Wed, 15 Sep 2021 16:57:41 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/15/21, 4:28 AM, \"Dipesh Pandit\" <dipesh.pandit@gmail.com> wrote:\r\n> I have incorporated these changes and updated a new patch. PFA patch.\r\n\r\nThanks for the new patch.\r\n\r\nI've attached my take on the latest version. My main focus this time\r\nwas simplifying the patch for this approach as much as possible.\r\nSpecifically, I've done the following:\r\n\r\n 1. I've removed several calls to PgArchForceDirScan() in favor of\r\n calling it at the top of pgarch_ArchiverCopyLoop(). I believe\r\n there is some disagreement about this change, but I don't think\r\n we gain enough to justify the complexity. The main reason we\r\n exit pgarch_ArchiverCopyLoop() should ordinarily be that we've\r\n run out of files to archive, so incurring a directory scan the\r\n next time it is called doesn't seem like it would normally be too\r\n bad. I'm sure there are exceptions (e.g., lots of .done files,\r\n archive failures), but the patch is still not making things any\r\n worse than they presently are for these cases.\r\n 2. I removed all the logic that attempted to catch out-of-order\r\n .ready files. Instead, XLogArchiveNotify() only forces a\r\n directory scan for files other than regular WAL files, and we\r\n depend on our periodic directory scans to pick up anything that's\r\n been left behind.\r\n 3. I moved the logic that forces directory scans every once in a\r\n while. We were doing that in the checkpoint/restartpoint logic,\r\n which, upon further thought, might not be the best idea. The\r\n checkpoint interval can vary widely, and IIRC we won't bother\r\n creating checkpoints at all if database activity stops. 
Instead,\r\n I've added logic in pgarch_readyXlog() that forces a directory\r\n scan if one hasn't happened in a few minutes.\r\n 4. Finally, I've tried to ensure comments are clear and that the\r\n logic is generally easy to reason about.\r\n\r\nWhat do you think?\r\n\r\nNathan", "msg_date": "Thu, 16 Sep 2021 23:26:55 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "Hi,\n\n> 1. I've removed several calls to PgArchForceDirScan() in favor of\n> calling it at the top of pgarch_ArchiverCopyLoop(). I believe\n> there is some disagreement about this change, but I don't think\n> we gain enough to justify the complexity. The main reason we\n> exit pgarch_ArchiverCopyLoop() should ordinarily be that we've\n> run out of files to archive, so incurring a directory scan the\n> next time it is called doesn't seem like it would normally be too\n> bad. I'm sure there are exceptions (e.g., lots of .done files,\n> archive failures), but the patch is still not making things any\n> worse than they presently are for these cases.\n\nYes, I think when archiver is lagging behind then a call to force\ndirectory scan at the top of pgarch_ArchiverCopyLoop() does not\nhave any impact. This may result into a directory scan in next cycle\nonly when the archiver is ahead or in sync but in that case also a\ndirectory scan may not incur too much cost since the archiver is\nahead.I agree that we can remove the separate calls to force a\ndirectory scan in failure scenarios with a single call at the top of\nPgArchForceDirScan().\n\n> 2. I removed all the logic that attempted to catch out-of-order\n> .ready files. Instead, XLogArchiveNotify() only forces a\n> directory scan for files other than regular WAL files, and we\n> depend on our periodic directory scans to pick up anything that's\n> been left behind.\n> 3. 
I moved the logic that forces directory scans every once in a\n> while. We were doing that in the checkpoint/restartpoint logic,\n> which, upon further thought, might not be the best idea. The\n> checkpoint interval can vary widely, and IIRC we won't bother\n> creating checkpoints at all if database activity stops. Instead,\n> I've added logic in pgarch_readyXlog() that forces a directory\n> scan if one hasn't happened in a few minutes.\n> 4. Finally, I've tried to ensure comments are clear and that the\n> logic is generally easy to reason about.\n>\n> What do you think?\n\nI agree, If we force a periodic directory scan then we may not\nrequire any special logic for handling scenarios where a .ready file\nis created out of order in XLogArchiveNotify(). We need to force a\ndirectory scan only in case of a non-regular WAL file in\nXLogArchiveNotify().\n\nOverall I think the periodic directory scan simplifies the patch and\nmakes sure that any missing file gets archived within a few mins.\n\nThanks,\nDipesh\n\nHi,> 1. I've removed several calls to PgArchForceDirScan() in favor of\n>     calling it at the top of pgarch_ArchiverCopyLoop().  I believe\n>     there is some disagreement about this change, but I don't think\n>     we gain enough to justify the complexity.  The main reason we\n>     exit pgarch_ArchiverCopyLoop() should ordinarily be that we've\n>     run out of files to archive, so incurring a directory scan the\n>     next time it is called doesn't seem like it would normally be too\n>     bad.  I'm sure there are exceptions (e.g., lots of .done files,\n>     archive failures), but the patch is still not making things any\n>     worse than they presently are for these cases.Yes, I think when archiver is lagging behind then a call to force directory scan at the top of pgarch_ArchiverCopyLoop() does nothave any impact. 
This may result into a directory scan in next cycleonly when the archiver is ahead or in sync but in that case also adirectory scan may not incur too much cost since the archiver is ahead.I agree that we can remove the separate calls to force a directory scan in failure scenarios with a single call at the top of PgArchForceDirScan(). > 2. I removed all the logic that attempted to catch out-of-order\n>     .ready files.  Instead, XLogArchiveNotify() only forces a\n>     directory scan for files other than regular WAL files, and we\n>     depend on our periodic directory scans to pick up anything that's\n>     been left behind.\n> 3. I moved the logic that forces directory scans every once in a\n>     while.  We were doing that in the checkpoint/restartpoint logic,\n>     which, upon further thought, might not be the best idea.  The\n>     checkpoint interval can vary widely, and IIRC we won't bother\n>     creating checkpoints at all if database activity stops.  Instead,\n>     I've added logic in pgarch_readyXlog() that forces a directory\n>     scan if one hasn't happened in a few minutes.> 4. Finally, I've tried to ensure comments are clear and that the\n>     logic is generally easy to reason about.\n>\n> What do you think?I agree, If we force a periodic directory scan then we may not require any special logic for handling scenarios where a .ready file is created out of order in XLogArchiveNotify(). We need to force a directory scan only in case of a non-regular WAL file inXLogArchiveNotify().Overall I think the periodic directory scan simplifies the patch and makes sure that any missing file gets archived within a few mins. Thanks,Dipesh", "msg_date": "Mon, 20 Sep 2021 13:04:57 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> 1. 
I've removed several calls to PgArchForceDirScan() in favor of\n> calling it at the top of pgarch_ArchiverCopyLoop(). I believe\n> there is some disagreement about this change, but I don't think\n> we gain enough to justify the complexity. The main reason we\n> exit pgarch_ArchiverCopyLoop() should ordinarily be that we've\n> run out of files to archive, so incurring a directory scan the\n> next time it is called doesn't seem like it would normally be too\n> bad. I'm sure there are exceptions (e.g., lots of .done files,\n> archive failures), but the patch is still not making things any\n> worse than they presently are for these cases.\n\nI was thinking that this might increase the number of directory scans\nby a pretty large amount when we repeatedly catch up, then 1 new file\ngets added, then we catch up, etc.\n\nBut I guess your thought process is that such directory scans, even if\nthey happen many times per second, can't really be that expensive,\nsince the directory can't have much in it. Which seems like a fair\npoint. 
I wonder if there are any situations in which there's not much\nto archive but the archive_status directory still contains tons of\nfiles.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Sep 2021 16:25:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 2021-Sep-20, Robert Haas wrote:\n\n> I was thinking that this might increase the number of directory scans\n> by a pretty large amount when we repeatedly catch up, then 1 new file\n> gets added, then we catch up, etc.\n\nI was going to say that perhaps we can avoid repeated scans by having a\nbitmap of future files that were found by a scan; so if we need to do\none scan, we keep track of the presence of the next (say) 64 files in\nour timeline, and then we only have to do another scan when we need to\narchive a file that wasn't present the last time we scanned. However:\n\n> But I guess your thought process is that such directory scans, even if\n> they happen many times per second, can't really be that expensive,\n> since the directory can't have much in it. Which seems like a fair\n> point. I wonder if there are any situations in which there's not much\n> to archive but the archive_status directory still contains tons of\n> files.\n\n(If we take this stance, which seems reasonable to me, then we don't\nneed to optimize.) 
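For what it's worth, a minimal sketch of that bitmap idea might look like this (illustrative only, not proposed patch code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal sketch of the "bitmap of future files" idea -- illustrative
 * only.  One scan records which of the next 64 segments already had
 * .ready files; another scan is needed only for segments the bitmap
 * knows nothing about. */
static uint64_t bitmap_base;    /* segment corresponding to bit 0 */
static uint64_t bitmap_bits;    /* bit i set => (base + i).ready was seen */

static void
record_scan(uint64_t base, const uint64_t *segs, int nsegs)
{
    bitmap_base = base;
    bitmap_bits = 0;
    for (int i = 0; i < nsegs; i++)
        if (segs[i] >= base && segs[i] < base + 64)
            bitmap_bits |= UINT64_C(1) << (segs[i] - base);
}

/* true => seg can be archived without rescanning the directory */
static bool
seen_in_last_scan(uint64_t seg)
{
    if (seg < bitmap_base || seg >= bitmap_base + 64)
        return false;           /* outside the remembered window */
    return (bitmap_bits >> (seg - bitmap_base)) & 1;
}
```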
But perhaps we should complain if we find extraneous\nfiles in archive_status -- Then it'd be on the users' heads not to leave\ntons of files that would slow down the scan.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n", "msg_date": "Mon, 20 Sep 2021 17:42:26 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/20/21, 1:42 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Sep-20, Robert Haas wrote:\r\n>\r\n>> I was thinking that this might increase the number of directory scans\r\n>> by a pretty large amount when we repeatedly catch up, then 1 new file\r\n>> gets added, then we catch up, etc.\r\n>\r\n> I was going to say that perhaps we can avoid repeated scans by having a\r\n> bitmap of future files that were found by a scan; so if we need to do\r\n> one scan, we keep track of the presence of the next (say) 64 files in\r\n> our timeline, and then we only have to do another scan when we need to\r\n> archive a file that wasn't present the last time we scanned. However:\r\n\r\nThis sounds a bit like the other approach discussed earlier in this\r\nthread [0].\r\n\r\n>> But I guess your thought process is that such directory scans, even if\r\n>> they happen many times per second, can't really be that expensive,\r\n>> since the directory can't have much in it. Which seems like a fair\r\n>> point. I wonder if there are any situations in which there's not much\r\n>> to archive but the archive_status directory still contains tons of\r\n>> files.\r\n>\r\n> (If we take this stance, which seems reasonable to me, then we don't\r\n> need to optimize.) 
But perhaps we should complain if we find extraneous\r\n> files in archive_status -- Then it'd be on the users' heads not to leave\r\n> tons of files that would slow down the scan.\r\n\r\nThe simplest situation I can think of that might be a problem is when\r\ncheckpointing is stuck and the .done files are adding up. However,\r\nafter the lengthy directory scan, you should still be able to archive\r\nseveral files without a scan of archive_status. And if you are\r\nrepeatedly catching up, the extra directory scans probably aren't\r\nhurting anything. At the very least, this patch doesn't make things\r\nany worse in this area.\r\n\r\nBTW I attached a new version of the patch with a couple of small\r\nchanges. Specifically, I adjusted some of the comments and moved the\r\nassignment of last_dir_scan to after the directory scan completes.\r\nBefore, we were resetting it before the directory scan, so if the\r\ndirectory scan took too long, you'd still end up scanning\r\narchive_status for every file. 
I think that's still possible if your\r\narchive_command is especially slow, but archiving isn't going to keep\r\nup anyway in that case.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/attachment/125980/0001-Improve-performance-of-pgarch_readyXlog-with-many-st.patch", "msg_date": "Mon, 20 Sep 2021 22:49:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Mon, Sep 20, 2021 at 4:42 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I was going to say that perhaps we can avoid repeated scans by having a\n> bitmap of future files that were found by a scan; so if we need to do\n> one scan, we keep track of the presence of the next (say) 64 files in\n> our timeline, and then we only have to do another scan when we need to\n> archive a file that wasn't present the last time we scanned.\n\nThere are two different proposed patches on this thread. One of them\nworks exactly that way, and the other one tries to optimize by\nassuming that if we just optimized WAL file N, we likely will next\nwant to archive WAL file N+1. It's been hard to decide which way is\nbetter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Sep 2021 13:24:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> What do you think?\n\nI think this is committable. I also went back and looked at your\nprevious proposal to do files in batches, and I think that's also\ncommittable. 
After some reflection, I think I have a slight preference\nfor the batching approach.\nIt seems like it might lend itself to archiving multiple files in a\nsingle invocation of the archive_command, and Alvaro just suggested it\nagain apparently not having realized that it had been previously\nproposed by Andres, so I guess it has the further advantage of being\nthe thing that several committers intuitively feel like we ought to be\ndoing to solve this problem.\n\nSo what I am inclined to do is commit\nv1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.\nHowever, v6-0001-Do-fewer-directory-scans-of-archive_status.patch has\nperhaps evolved a bit more than the other one, so I thought I should\nfirst ask whether any of those changes have influenced your thinking\nabout the batching approach and whether you want to make any updates\nto that patch first. I don't really see that this is needed, but I\nmight be missing something.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Sep 2021 12:28:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/24/21 12:28 PM, Robert Haas wrote:\n> On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>> What do you think?\n> \n> I think this is committable. I also went back and looked at your\n> previous proposal to do files in batches, and I think that's also\n> committable. 
After some reflection, I think I have a slight preference\n> for the batching approach.\n> It seems like it might lend itself to archiving multiple files in a\n> single invocation of the archive_command, and Alvaro just suggested it\n> again apparently not having realized that it had been previously\n> proposed by Andres, so I guess it has the further advantage of being\n> the thing that several committers intuitively feel like we ought to be\n> doing to solve this problem.\n\nI also prefer this approach. Reducing directory scans is an excellent \noptimization, but from experience I know that execution time for the \narchive_command can also be a significant bottleneck. Being able to \narchive multiple segments per execution would be a big win in certain \nscenarios.\n\n> So what I am inclined to do is commit\n> v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.\n\nI read the patch and it looks good to me.\n\nI do wish we had a way to test that history files get archived first, \nbut as I recall I was not able to figure out how to do this reliably for [1] \nwithout writing a custom archive_command just for testing.
That is \nsomething we might want to consider as we make this logic more complex.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b981df4cc09aca978c5ce55e437a74913d09cccc\n\n\n", "msg_date": "Fri, 24 Sep 2021 16:27:42 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/24/21, 9:29 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> So what I am inclined to do is commit\r\n> v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.\r\n> However, v6-0001-Do-fewer-directory-scans-of-archive_status.patch has\r\n> perhaps evolved a bit more than the other one, so I thought I should\r\n> first ask whether any of those changes have influenced your thinking\r\n> about the batching approach and whether you want to make any updates\r\n> to that patch first. I don't really see that this is needed, but I\r\n> might be missing something.\r\n\r\nBesides sprucing up the comments a bit, I don't think there is\r\nanything that needs to be changed. The only other thing I considered\r\nwas getting rid of the arch_files array. Instead, I would swap the\r\ncomparator function the heap uses with a reverse one, rebuild the\r\nheap, and then have pgarch_readyXlog() return files via\r\nbinaryheap_remove_first(). 
However, this seemed like a bit more\r\ncomplexity than necessary.\r\n\r\nAttached is a new version of the patch with some expanded comments.\r\n\r\nNathan", "msg_date": "Mon, 27 Sep 2021 18:03:33 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 9/27/21, 11:06 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 9/24/21, 9:29 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n>> So what I am inclined to do is commit\r\n>> v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.\r\n>> However, v6-0001-Do-fewer-directory-scans-of-archive_status.patch has\r\n>> perhaps evolved a bit more than the other one, so I thought I should\r\n>> first ask whether any of those changes have influenced your thinking\r\n>> about the batching approach and whether you want to make any updates\r\n>> to that patch first. I don't really see that this is needed, but I\r\n>> might be missing something.\r\n>\r\n> Besides sprucing up the comments a bit, I don't think there is\r\n> anything that needs to be changed. The only other thing I considered\r\n> was getting rid of the arch_files array. Instead, I would swap the\r\n> comparator function the heap uses with a reverse one, rebuild the\r\n> heap, and then have pgarch_readyXlog() return files via\r\n> binaryheap_remove_first(). However, this seemed like a bit more\r\n> complexity than necessary.\r\n>\r\n> Attached is a new version of the patch with some expanded comments.\r\n\r\nI just wanted to gently bump this thread in case there is any\r\nadditional feedback. 
I should have some time to work on it this week.\r\nAlso, it's looking more and more like this patch will nicely assist\r\nthe batching/loadable backup module stuff [0].\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/E9035E94-EC76-436E-B6C9-1C03FBD8EF54%40amazon.com\r\n\r\n", "msg_date": "Tue, 5 Oct 2021 19:19:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Fri, Sep 24, 2021 at 12:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Sep 16, 2021 at 7:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > What do you think?\n>\n> I think this is committable. I also went back and looked at your\n> previous proposal to do files in batches, and I think that's also\n> committable. After some reflection, I think I have a slight preference\n> for the batching approach.\n> It seems like it might lend itself to archiving multiple files in a\n> single invocation of the archive_command, and Alvaro just suggested it\n> again apparently not having realized that it had been previously\n> proposed by Andres, so I guess it has the further advantage of being\n> the thing that several committers intuitively feel like we ought to be\n> doing to solve this problem.\n>\n> So what I am inclined to do is commit\n> v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.\n> However, v6-0001-Do-fewer-directory-scans-of-archive_status.patch has\n> perhaps evolved a bit more than the other one, so I thought I should\n> first ask whether any of those changes have influenced your thinking\n> about the batching approach and whether you want to make any updates\n> to that patch first. I don't really see that this is needed, but I\n> might be missing something.\n\nNathan, I just realized we never closed the loop on this. 
Do you have\nany thoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 08:58:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 10/19/21, 5:59 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> Nathan, I just realized we never closed the loop on this. Do you have\r\n> any thoughts?\r\n\r\nIMO the patch is in decent shape. Happy to address any feedback you\r\nmight have on the latest patch [0].\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/attachment/126789/v3-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 14:50:14 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 10/19/21, 7:53 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 10/19/21, 5:59 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n>> Nathan, I just realized we never closed the loop on this. Do you have\r\n>> any thoughts?\r\n>\r\n> IMO the patch is in decent shape. Happy to address any feedback you\r\n> might have on the latest patch [0].\r\n\r\nThis thread seems to have lost traction. The cfbot entry for the\r\nlatest patch [0] is still all green, so I think it is still good to\r\ngo. 
I'm happy to address any additional feedback, though.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/attachment/126789/v3-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch\r\n\r\n", "msg_date": "Thu, 11 Nov 2021 15:37:12 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Nov 11, 2021 at 10:37 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> On 10/19/21, 7:53 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n> > On 10/19/21, 5:59 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n> >> Nathan, I just realized we never closed the loop on this. Do you have\n> >> any thoughts?\n> >\n> > IMO the patch is in decent shape. Happy to address any feedback you\n> > might have on the latest patch [0].\n>\n> This thread seems to have lost traction. The cfbot entry for the\n> latest patch [0] is still all green, so I think it is still good to\n> go. I'm happy to address any additional feedback, though.\n\nSomehow I didn't see your October 19th response previously. The\nthreading in gmail seems to have gotten broken, which may have\ncontributed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Nov 2021 14:49:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Nov 11, 2021 at 2:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Somehow I didn't see your October 19th response previously. The\n> threading in gmail seems to have gotten broken, which may have\n> contributed.\n\nAnd actually I also missed the September 27th email where you sent v3. 
Oops.\n\nCommitted now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Nov 2021 15:22:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On 11/11/21, 12:23 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Thu, Nov 11, 2021 at 2:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n>> Somehow I didn't see your October 19th response previously. The\r\n>> threading in gmail seems to have gotten broken, which may have\r\n>> contributed.\r\n>\r\n> And actually I also missed the September 27th email where you sent v3. Oops.\r\n>\r\n> Committed now.\r\n\r\nThanks! I figured it was something like that. Sorry if I caused the\r\nthread breakage.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 11 Nov 2021 20:58:33 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: .ready and .done files considered harmful" }, { "msg_contents": "On Thu, Nov 11, 2021 at 3:58 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Thanks! I figured it was something like that. Sorry if I caused the\n> thread breakage.\n\nI think it was actually that the thread went over 100 emails ... which\nusually causes Google to break it, but I don't know why it broke it\ninto three pieces instead of two, or why I missed the new ones.\nAnyway, I don't think it was your fault, but no worries either way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Nov 2021 16:12:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: .ready and .done files considered harmful" } ]
[ { "msg_contents": "While reviewing some logical replication code I stumbled across a\nvariable usage that looks suspicious to me.\n\nNote that the AlterSubscription_refresh function (unlike other\nfunctions in the subscriptioncmds.c) is using the global variable\n\"wrconn\" instead of a local stack variable of the same name. I was\nunable to think of any good reason why it would be deliberately doing\nthis, so my guess is that it is simply an accidental mistake that has\ngone unnoticed because the compiler was silently equally happy just\nusing the global var.\n\nApparently, this is not causing any reported problems because it seems\nlike the code has been this way for ~4 years [1].\n\nEven so, it doesn't look intentional to me and I felt that there may\nbe unknown consequences (e.g. resource leakage?) of just blatting over\nthe global var. So, PSA a small patch to make this\nAlterSubscription_refresh function use a stack variable consistent\nwith the other nearby functions.\n\nThoughts?\n\n------\n[1] https://github.com/postgres/postgres/commit/7c4f52409a8c7d85ed169bbbc1f6092274d03920#\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 4 May 2021 09:29:42 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Tue, May 4, 2021 at 5:00 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> While reviewing some logical replication code I stumbled across a\n> variable usage that looks suspicious to me.\n>\n> Note that the AlterSubscription_refresh function (unlike other\n> functions in the subscriptioncmds.c) is using the global variable\n> \"wrconn\" instead of a local stack variable of the same name. 
I was\n> unable to think of any good reason why it would be deliberately doing\n> this, so my guess is that it is simply an accidental mistake that has\n> gone unnoticed because the compiler was silently equally happy just\n> using the global var.\n>\n> Apparently, this is not causing any reported problems because it seems\n> like the code has been this way for ~4 years [1].\n>\n> Even so, it doesn't look intentional to me and I felt that there may\n> be unknown consequences (e.g. resource leakage?) of just blatting over\n> the global var. So, PSA a small patch to make this\n> AlterSubscription_refresh function use a stack variable consistent\n> with the other nearby functions.\n>\n> Thoughts?\n\n+1. It looks like the global variable wrconn defined/declared in\nworker_internal.h/worker.c, is for logical apply/table sync worker and\nit doesn't make sense to use it for CREATE/ALTER subscription refresh\ncode that runs on a backend. And I couldn't think of any unknown\nconsequences/resource leakage, because that global variable is being\nused by different processes which have their own copy.\n\nAnd, the patch basically looks good to me, except a bit of rewording\nthe commit message to something like \"Use local variable wrconn in\nAlterSubscription_refresh instead of global a variable with the same\nname which is meant to be used for logical apply/table sync worker.\nHaving the wrconn global variable in AlterSubscription_refresh doesn't\ncause any real issue as such but it keeps the code in\nsubscriptioncmds.c inconsistent with other functions which use a local\nvariable named wrconn.\" or some other better wording?\n\nRegression tests were passed on my dev system with the patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 09:26:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong 
variable?" }, { "msg_contents": "Hi,\n\nOn 2021-05-04 09:29:42 +1000, Peter Smith wrote:\n> While reviewing some logical replication code I stumbled across a\n> variable usage that looks suspicious to me.\n\n> Note that the AlterSubscription_refresh function (unlike other\n> functions in the subscriptioncmds.c) is using the global variable\n> \"wrconn\" instead of a local stack variable of the same name. I was\n> unable to think of any good reason why it would be deliberately doing\n> this, so my guess is that it is simply an accidental mistake that has\n> gone unnoticed because the compiler was silently equally happy just\n> using the global var.\n\n> Apparently, this is not causing any reported problems because it seems\n> like the code has been this way for ~4 years [1].\n\nThis sounded vaguely familiar. After a bit of searching I found that's\nbecause I debugged a crash related to it:\nhttps://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi%40alap3.anarazel.de\n\nPeter?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 21:31:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Tue, May 4, 2021 at 1:56 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 4, 2021 at 5:00 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > While reviewing some logical replication code I stumbled across a\n> > variable usage that looks suspicious to me.\n> >\n> > Note that the AlterSubscription_refresh function (unlike other\n> > functions in the subscriptioncmds.c) is using the global variable\n> > \"wrconn\" instead of a local stack variable of the same name. 
I was\n> > unable to think of any good reason why it would be deliberately doing\n> > this, so my guess is that it is simply an accidental mistake that has\n> > gone unnoticed because the compiler was silently equally happy just\n> > using the global var.\n> >\n> > Apparently, this is not causing any reported problems because it seems\n> > like the code has been this way for ~4 years [1].\n> >\n> > Even so, it doesn't look intentional to me and I felt that there may\n> > be unknown consequences (e.g. resource leakage?) of just blatting over\n> > the global var. So, PSA a small patch to make this\n> > AlterSubscription_refresh function use a stack variable consistent\n> > with the other nearby functions.\n> >\n> > Thoughts?\n>\n> +1. It looks like the global variable wrconn defined/declared in\n> worker_internal.h/worker.c, is for logical apply/table sync worker and\n> it doesn't make sense to use it for CREATE/ALTER subscription refresh\n> code that runs on a backend. And I couldn't think of any unknown\n> consequences/resource leakage, because that global variable is being\n> used by different processes which have their own copy.\n>\n> And, the patch basically looks good to me, except a bit of rewording\n> the commit message to something like \"Use local variable wrconn in\n> AlterSubscription_refresh instead of global a variable with the same\n> name which is meant to be used for logical apply/table sync worker.\n> Having the wrconn global variable in AlterSubscription_refresh doesn't\n> cause any real issue as such but it keeps the code in\n> subscriptioncmds.c inconsistent with other functions which use a local\n> variable named wrconn.\" or some other better wording?\n>\n> Regression tests were passed on my dev system with the patch.\n>\n\nThanks for your feedback.\n\nI can post another patch (or same patch with an improved commit\ncomment) later, but I will just wait a day first in case there is more\ninformation to say about it. e.g. 
my suspicion that there would be\n\"consequences\" seems to have come to fruition after all [1] although I\nnever would have thought of that tricky trigger / refresh scenario.\n\n------\n[1] https://www.postgresql.org/message-id/20210504043149.vg4w66vuh4qjrbph%40alap3.anarazel.de\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 May 2021 15:08:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Tue, May 4, 2021 at 2:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-05-04 09:29:42 +1000, Peter Smith wrote:\n> > While reviewing some logical replication code I stumbled across a\n> > variable usage that looks suspicious to me.\n>\n> > Note that the AlterSubscription_refresh function (unlike other\n> > functions in the subscriptioncmds.c) is using the global variable\n> > \"wrconn\" instead of a local stack variable of the same name. I was\n> > unable to think of any good reason why it would be deliberately doing\n> > this, so my guess is that it is simply an accidental mistake that has\n> > gone unnoticed because the compiler was silently equally happy just\n> > using the global var.\n>\n> > Apparently, this is not causing any reported problems because it seems\n> > like the code has been this way for ~4 years [1].\n>\n> This sounded vaguely familiar. After a bit of searching I found that's\n> because I debugged a crash related to it:\n> https://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi%40alap3.anarazel.de\n>\n\nOh! 
No wonder it sounded familiar.\n\nIt looks like I've just re-discovered the identical problem 5 months\nafter your post.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 May 2021 15:13:24 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "PSA v2 of this patch - it has the same content, but an improved commit comment.\n\nI have also added a commitfest entry, https://commitfest.postgresql.org/33/3109/\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia", "msg_date": "Wed, 5 May 2021 10:15:58 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n> The global wrconn is only meant to be used for logical apply/tablesync worker.\n> Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n\nHm. I would actually place the blame for this on whoever thought\nit was okay to name a global variable something as generic as\n\"wrconn\". Let's rename that while we're at it, say to something\nlike \"tablesync_wrconn\" (feel free to bikeshed).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 May 2021 22:35:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" 
}, { "msg_contents": "On Tue, May 04, 2021 at 10:35:02PM -0400, Tom Lane wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n> \n> Hm. I would actually place the blame for this on whoever thought\n> it was okay to name a global variable something as generic as\n> \"wrconn\". Let's rename that while we're at it, say to something\n> like \"tablesync_wrconn\" (feel free to bikeshed).\n\nYea, I think global vars should have at least 1) an underscore, or 2) a\ncapital, and in any case be 3) longer than 6 chars.\n\nThere's very few which violate both \"arms\" of that rule - should anything else\nbe renamed, too ?\n\n$ git grep -E '^static [^(=]*\\<[[:lower:]]{,6}(;$| =)' src/backend/'*.c'\nsrc/backend/access/heap/vacuumlazy.c:static int elevel = -1;\nsrc/backend/access/transam/xloginsert.c:static XLogRecData *rdatas;\nsrc/backend/bootstrap/bootstrap.c:static MemoryContext nogc = NULL; /* special no-gc mem context */\nsrc/backend/libpq/be-fsstubs.c:static MemoryContext fscxt = NULL;\nsrc/backend/replication/walreceiver.c:static WalReceiverConn *wrconn = NULL;\nsrc/backend/replication/walsender.c:static StringInfoData tmpbuf;\nsrc/backend/storage/file/fd.c:static int nfile = 0;\nsrc/backend/utils/misc/sampling.c:static ReservoirStateData oldrs;\n\npryzbyj@pryzbyj:~/src/postgres$ git grep -lE '^static [^(=]*\\<[[:lower:]]{,6}(;$| =)' src/backend/'*.c' |xargs wc -l |sort -nr\n 4326 src/backend/access/heap/vacuumlazy.c\n 3781 src/backend/storage/file/fd.c\n 3698 
src/backend/replication/walsender.c\n 1428 src/backend/replication/walreceiver.c\n 1227 src/backend/access/transam/xloginsert.c\n 1155 src/backend/bootstrap/bootstrap.c\n 864 src/backend/libpq/be-fsstubs.c\n 296 src/backend/utils/misc/sampling.c\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 May 2021 22:56:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Wed, May 5, 2021 at 8:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n>\n> Hm. I would actually place the blame for this on whoever thought\n> it was okay to name a global variable something as generic as\n> \"wrconn\". Let's rename that while we're at it, say to something\n> like \"tablesync_wrconn\" (feel free to bikeshed).\n\nI don't think \"tablesync_wrconn\" is the right name, because wrconn is\nalso being used in logical replication apply worker. So something like\n\"apply_worker_wrconn\" would be more meaningful.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 10:50:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" 
}, { "msg_contents": "On Wed, May 5, 2021 at 3:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 5, 2021 at 8:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> > > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n> > > The global wrconn is only meant to be used for logical apply/tablesync worker.\n> > > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n> >\n> > Hm. I would actually place the blame for this on whoever thought\n> > it was okay to name a global variable something as generic as\n> > \"wrconn\". Let's rename that while we're at it, say to something\n> > like \"tablesync_wrconn\" (feel free to bikeshed).\n>\n\nOK, I am happy to change this but firstly just need some consensus on\nthe new name to use. I hope to avoid changing it, and then changing it\n5 more times.\n\n> I don't think \"tablesync_wrconn\" is the right name, because wrconn is\n> also being used in logical replication apply worker. So something like\n> \"apply_worker_wrconn\" would be more meaningful.\n>\n\nYes. that is better except I wonder if \"apply_worker_wrconn\" might\nseem unusual when used by the tablesync worker.\n\nMy suggestion is \"lrep_worker_wrconn\" which seems ok for both apply /\ntablesync workers.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 5 May 2021 15:38:12 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?"
}, { "msg_contents": "On Wed, May 5, 2021 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n>\n> Hm. I would actually place the blame for this on whoever thought\n> it was okay to name a global variable something as generic as\n> \"wrconn\". Let's rename that while we're at it, say to something\n> like \"tablesync_wrconn\" (feel free to bikeshed).\n\nPSA v3 of the patch. Same as before, but now also renames the global\nvariable from \"wrconn\" to \"lrep_worker_wrconn\".\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia", "msg_date": "Thu, 6 May 2021 19:08:49 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "\nOn Thu, 06 May 2021 at 17:08, Peter Smith <smithpb2250@gmail.com> wrote:\n> On Wed, May 5, 2021 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Peter Smith <smithpb2250@gmail.com> writes:\n>> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n>> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n>> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. 
see [1]\n>>\n>> Hm. I would actually place the blame for this on whoever thought\n>> it was okay to name a global variable something as generic as\n>> \"wrconn\". Let's rename that while we're at it, say to something\n>> like \"tablesync_wrconn\" (feel free to bikeshed).\n>\n> PSA v3 of the patch. Same as before, but now also renames the global\n> variable from \"wrconn\" to \"lrep_worker_wrconn\".\n>\n\nThanks for updating patch. I'm confused why we move the walrcv_connect() out of\nPG_TRY() block?\n+ /* Try to connect to the publisher. */\n+ wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n+ if (!wrconn)\n+ ereport(ERROR,\n+ (errmsg(\"could not connect to the publisher: %s\", err)));\n+\n PG_TRY();\n {\n- /* Try to connect to the publisher. */\n- wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n- if (!wrconn)\n- ereport(ERROR,\n- (errmsg(\"could not connect to the publisher: %s\", err)));\n-\n /* Get the table list from publisher. */\n pubrel_names = fetch_table_list(wrconn, sub->publications);\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 06 May 2021 17:17:59 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Thu, May 6, 2021 at 7:18 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Thu, 06 May 2021 at 17:08, Peter Smith <smithpb2250@gmail.com> wrote:\n> > On Wed, May 5, 2021 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> Peter Smith <smithpb2250@gmail.com> writes:\n> >> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. 
DropSubscription).\n> >> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n> >> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n> >>\n> >> Hm. I would actually place the blame for this on whoever thought\n> >> it was okay to name a global variable something as generic as\n> >> \"wrconn\". Let's rename that while we're at it, say to something\n> >> like \"tablesync_wrconn\" (feel free to bikeshed).\n> >\n> > PSA v3 of the patch. Same as before, but now also renames the global\n> > variable from \"wrconn\" to \"lrep_worker_wrconn\".\n> >\n>\n> Thanks for updating patch. I'm confused why we move the walrcv_connect() out of\n> PG_TRY() block?\n> + /* Try to connect to the publisher. */\n> + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> + if (!wrconn)\n> + ereport(ERROR,\n> + (errmsg(\"could not connect to the publisher: %s\", err)));\n> +\n> PG_TRY();\n> {\n> - /* Try to connect to the publisher. */\n> - wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> - if (!wrconn)\n> - ereport(ERROR,\n> - (errmsg(\"could not connect to the publisher: %s\", err)));\n> -\n> /* Get the table list from publisher. */\n> pubrel_names = fetch_table_list(wrconn, sub->publications);\n\nThanks for your review. Reason for moving that out of the PG_TRY are:\n\na) It makes code now consistent with other functions using wrconn. See\nCreateSubscription, DropSubscription etc\n\nb) It means don't need the wrconn NULL check anymore in the PG_FINALLY\nso it simplifies the disconnect.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 6 May 2021 19:30:23 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" 
}, { "msg_contents": "\nOn Thu, 06 May 2021 at 17:30, Peter Smith <smithpb2250@gmail.com> wrote:\n> On Thu, May 6, 2021 at 7:18 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Thu, 06 May 2021 at 17:08, Peter Smith <smithpb2250@gmail.com> wrote:\n>> > On Wed, May 5, 2021 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >>\n>> >> Peter Smith <smithpb2250@gmail.com> writes:\n>> >> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n>> >> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n>> >> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n>> >>\n>> >> Hm. I would actually place the blame for this on whoever thought\n>> >> it was okay to name a global variable something as generic as\n>> >> \"wrconn\". Let's rename that while we're at it, say to something\n>> >> like \"tablesync_wrconn\" (feel free to bikeshed).\n>> >\n>> > PSA v3 of the patch. Same as before, but now also renames the global\n>> > variable from \"wrconn\" to \"lrep_worker_wrconn\".\n>> >\n>>\n>> Thanks for updating patch. I'm confused why we move the walrcv_connect() out of\n>> PG_TRY() block?\n>> + /* Try to connect to the publisher. */\n>> + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n>> + if (!wrconn)\n>> + ereport(ERROR,\n>> + (errmsg(\"could not connect to the publisher: %s\", err)));\n>> +\n>> PG_TRY();\n>> {\n>> - /* Try to connect to the publisher. */\n>> - wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n>> - if (!wrconn)\n>> - ereport(ERROR,\n>> - (errmsg(\"could not connect to the publisher: %s\", err)));\n>> -\n>> /* Get the table list from publisher. 
*/\n>> pubrel_names = fetch_table_list(wrconn, sub->publications);\n>\n> Thanks for your review. Reason for moving that out of the PG_TRY are:\n>\n> a) It makes code now consistent with other functions using wrconn. See\n> CreateSubscription, DropSubscription etc\n>\n> b) It means don't need the wrconn NULL check anymore in the PG_FINALLY\n> so it simplifies the disconnect.\n>\n\nThanks for your explanation!\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 06 May 2021 17:42:31 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Thu, May 6, 2021 at 3:00 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 7:18 PM Japin Li <japinli@hotmail.com> wrote:\n> >\n> >\n> > On Thu, 06 May 2021 at 17:08, Peter Smith <smithpb2250@gmail.com> wrote:\n> > > On Wed, May 5, 2021 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>\n> > >> Peter Smith <smithpb2250@gmail.com> writes:\n> > >> > This patch replaces the global \"wrconn\" in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).\n> > >> > The global wrconn is only meant to be used for logical apply/tablesync worker.\n> > >> > Using the global/incorrect wrconn in AlterSubscription_refresh doesn't normally cause any problems, but harm is still posslble if the apply worker ever manages to do a subscription refresh. e.g. see [1]\n> > >>\n> > >> Hm. I would actually place the blame for this on whoever thought\n> > >> it was okay to name a global variable something as generic as\n> > >> \"wrconn\". Let's rename that while we're at it, say to something\n> > >> like \"tablesync_wrconn\" (feel free to bikeshed).\n> > >\n> > > PSA v3 of the patch. 
Same as before, but now also renames the global\n> > > variable from \"wrconn\" to \"lrep_worker_wrconn\".\n> > >\n> >\n> > Thanks for updating patch. I'm confused why we move the walrcv_connect() out of\n> > PG_TRY() block?\n> > + /* Try to connect to the publisher. */\n> > + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > + if (!wrconn)\n> > + ereport(ERROR,\n> > + (errmsg(\"could not connect to the publisher: %s\", err)));\n> > +\n> > PG_TRY();\n> > {\n> > - /* Try to connect to the publisher. */\n> > - wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > - if (!wrconn)\n> > - ereport(ERROR,\n> > - (errmsg(\"could not connect to the publisher: %s\", err)));\n> > -\n> > /* Get the table list from publisher. */\n> > pubrel_names = fetch_table_list(wrconn, sub->publications);\n>\n> Thanks for your review. Reason for moving that out of the PG_TRY are:\n>\n> a) It makes code now consistent with other functions using wrconn. See\n> CreateSubscription, DropSubscription etc\n>\n> b) It means don't need the wrconn NULL check anymore in the PG_FINALLY\n> so it simplifies the disconnect.\n\nAnd even if any error occurs after the connection is established and\nwhile libpqrcv_PQexec is being done in libpqrcv_connect, we reach\nPG_FINALLY() block to disconnect the connection, so no connection leak\ncan occur.\n\nPatch looks good to me except for the comments in the commit message:\n1) it crosses 80 char limit 2) a typo : \"posslble\"\n\nPlease add it to the current commitfest if not done already so that we\ndon't lose track of it and the patch gets a chance to be tested.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 15:39:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" 
}, { "msg_contents": "On 2021-May-06, Peter Smith wrote:\n\n> PSA v3 of the patch. Same as before, but now also renames the global\n> variable from \"wrconn\" to \"lrep_worker_wrconn\".\n\nI think there are two patches here -- the changes to\nAlterSubscription_refresh are a backpatchable bugfix, and the rest of it\ncan just be applied to master.\n\nIn my mind we make a bit of a distinction for global variables by using\nCamelCase rather than undercore_separated_words. There are plenty that\nviolate that \"rule\" of course, but ISTM that makes them stand more and\nit's less likely we've made this mistake. So I would name the variable\nLogRepWALRcvConn or something like that. My €0.02.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n", "msg_date": "Thu, 6 May 2021 16:58:28 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-06, Peter Smith wrote:\n>> PSA v3 of the patch. Same as before, but now also renames the global\n>> variable from \"wrconn\" to \"lrep_worker_wrconn\".\n\n> I think there are two patches here -- the changes to\n> AlterSubscription_refresh are a backpatchable bugfix, and the rest of it\n> can just be applied to master.\n\nThe rename of that variable is just cosmetic, true, but I'd still be\ninclined to back-patch it. If we don't do so then I'm afraid that\nfuture back-patched fixes might be bitten by the same confusion,\npossibly introducing new real bugs.\n\nHaving said that, keeping the two aspects in separate patches might\nease review and testing.\n\n> In my mind we make a bit of a distinction for global variables by using\n> CamelCase rather than undercore_separated_words.\n\nI think it's about 50/50, TBH. 
I'd stick with whichever style is\nbeing used in nearby code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 May 2021 17:08:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Fri, May 7, 2021 at 7:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-May-06, Peter Smith wrote:\n> >> PSA v3 of the patch. Same as before, but now also renames the global\n> >> variable from \"wrconn\" to \"lrep_worker_wrconn\".\n>\n> > I think there are two patches here -- the changes to\n> > AlterSubscription_refresh are a backpatchable bugfix, and the rest of it\n> > can just be applied to master.\n>\n> The rename of that variable is just cosmetic, true, but I'd still be\n> inclined to back-patch it. If we don't do so then I'm afraid that\n> future back-patched fixes might be bitten by the same confusion,\n> possibly introducing new real bugs.\n>\n> Having said that, keeping the two aspects in separate patches might\n> ease review and testing.\n\nDone.\n\n>\n> > In my mind we make a bit of a distinction for global variables by using\n> > CamelCase rather than undercore_separated_words.\n>\n> I think it's about 50/50, TBH. I'd stick with whichever style is\n> being used in nearby code.\n>\n\nThe nearby code was a random mixture of Camels and Snakes, so instead\nof flipping a coin I went with the suggestion from Alvaro.\n\n~~\n\nPSA v4 of the patch.\n\n0001 - Fixes the AlterSubscription_refresh as before.\n0002 - Renames the global var \"wrconn\" -> \"LogRepWorkerWalRcvConn\" as suggested.\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia", "msg_date": "Fri, 7 May 2021 18:09:05 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" 
}, { "msg_contents": "On Fri, May 7, 2021 at 6:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 7:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > On 2021-May-06, Peter Smith wrote:\n> > >> PSA v3 of the patch. Same as before, but now also renames the global\n> > >> variable from \"wrconn\" to \"lrep_worker_wrconn\".\n> >\n> > > I think there are two patches here -- the changes to\n> > > AlterSubscription_refresh are a backpatchable bugfix, and the rest of it\n> > > can just be applied to master.\n> >\n> > The rename of that variable is just cosmetic, true, but I'd still be\n> > inclined to back-patch it. If we don't do so then I'm afraid that\n> > future back-patched fixes might be bitten by the same confusion,\n> > possibly introducing new real bugs.\n> >\n> > Having said that, keeping the two aspects in separate patches might\n> > ease review and testing.\n>\n> Done.\n>\n> >\n> > > In my mind we make a bit of a distinction for global variables by using\n> > > CamelCase rather than undercore_separated_words.\n> >\n> > I think it's about 50/50, TBH. I'd stick with whichever style is\n> > being used in nearby code.\n> >\n>\n> The nearby code was a random mixture of Camels and Snakes, so instead\n> of flipping a coin I went with the suggestion from Alvaro.\n>\n> ~~\n>\n> PSA v4 of the patch.\n>\n> 0001 - Fixes the AlterSubscription_refresh as before.\n> 0002 - Renames the global var \"wrconn\" -> \"LogRepWorkerWalRcvConn\" as suggested.\n>\n\nIt seems that the 0001 part of this patch was pushed in the weekend [1]. Thanks!\n\nBut what about the 0002 part? 
If there is no immediate plan to push\nthat also then I will post a v5 just to stop the cfbot complaining.\n\n--------\n[1] https://github.com/postgres/postgres/commit/4e8c0f1a0d0d095a749a329a216c88a340a455b6\n\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Mon, 10 May 2021 12:20:00 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On Mon, May 10, 2021 at 7:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > 0001 - Fixes the AlterSubscription_refresh as before.\n> > 0002 - Renames the global var \"wrconn\" -> \"LogRepWorkerWalRcvConn\" as suggested.\n>\n> It seems that the 0001 part of this patch was pushed in the weekend [1]. Thanks!\n>\n> But what about the 0002 part? If there is no immediate plan to push\n> that also then I will post a v5 just to stop the cfbot complaining.\n\nI think the 0002 patch can be posted here, if it looks good, it can be\nmade \"Ready For Committer\".\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 09:23:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "PSA v5 of the patch. It is the same as v4 but with the v4-0001 part\nomitted because that was already pushed.\n\n(reposted to keep cfbot happy).\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia", "msg_date": "Mon, 10 May 2021 16:15:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On 2021-May-10, Peter Smith wrote:\n\n> PSA v5 of the patch. 
It is the same as v4 but with the v4-0001 part\n> omitted because that was already pushed.\n\nI made a few whitespace adjustments on Friday that I didn't get time to\npush, so I left the whole set to after the minors are finalized this\nweek. I'll get them pushed on Wednesday or so. (The back branches have\na few conflicts, on every release, but I see no reason to post those and\nit'd upset the cfbot).\n\n-- \nÁlvaro Herrera Valdivia, Chile", "msg_date": "Mon, 10 May 2021 14:19:17 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" }, { "msg_contents": "On 2021-May-10, Peter Smith wrote:\n\n> PSA v5 of the patch. It is the same as v4 but with the v4-0001 part\n> omitted because that was already pushed.\n\nThanks, I have pushed this.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Wed, 12 May 2021 19:19:27 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: AlterSubscription_refresh \"wrconn\" wrong variable?" } ]
[ { "msg_contents": "Hi\n\nI am doing some test and I have few ideas:\n\n1. print server version to the output of pgbench. Now only client side\nversion is printed\n\n2. can we generate some output in structured format - XML, JSON ?\n\nRegards\n\nPavel", "msg_date": "Tue, 4 May 2021 10:23:21 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "few ideas for pgbench" }, { "msg_contents": "\nHello Pavel,\n\n> 1. print server version to the output of pgbench. Now only client side\n> version is printed\n\nIt is easy enough and makes sense. Maybe only if it differs from the \nclient side version ?\n\n> 2. can we generate some output in structured format - XML, JSON ?\n\nIt is obviously possible, but that would mean some code. ISTM that the \nvarious outputs are easy enough to parse and convert to anything without \nneeding a special format? Is there some particular part you have in mind?\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 4 May 2021 20:59:12 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: few ideas for pgbench" }, { "msg_contents": "út 4. 5. 2021 v 20:59 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n\n>\n> Hello Pavel,\n>\n> > 1. print server version to the output of pgbench. Now only client side\n> > version is printed\n>\n> It is easy enough and makes sense. Maybe only if it differs from the\n> client side version ?\n>\n\nThis is not a simple question. Personally I prefer to show this info every\ntime, although it can be redundant. Just for check and for more simple\nautomatic processing.\n\nWhen I run pgbench, I usually work with more releases together, so the\nserver version is important info.\n\n\n> > 2. 
can we generate some output in structured format - XML, JSON ?\n>\n> It is obviously possible, but that would mean some code. ISTM that the\n> various outputs are easy enough to parse and convert to anything without\n> needing a special format? Is there some particular part you have in mind?\n>\n\nI thought about something what I can simply import to Postgres or to R.\nBut maybe XML or JSON is a bad idea.\n\nWhat about CSV? Any run can produce one row.\n\n\n\n> --\n> Fabien.\n>\n\n", "msg_date": "Tue, 4 May 2021 21:10:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: few ideas for pgbench" }, { "msg_contents": "\nHello Pavel,\n\n> This is not a simple question. Personally I prefer to show this info every\n> time, although it can be redundant. 
Just for check and for more simple\n> automatic processing.\n>\n> When I run pgbench, I usually work with more releases together, so the\n> server version is important info.\n\nOk. Yes.\n\n>>> 2. can ve generate some output in structured format - XML, JSON ?\n>>\n>> It is obviously possible, but that would mean some code. ISTM that the\n>> various outputs are easy enough to parse and convert to anything without\n>> needing a special format? Is there some particular part you have in mind?\n>>\n>\n> I thought about something what I can simply import to Postgres or to R.\n> But maybe XML or JSON is a bad idea.\n>\n> What about CSV? Any run can produce one row.\n\nYep, CSV is simple and nice. It depends on what information you would \nlike. For instance for progress report (-P 1) or logs/sampling (-l) would \nbe relevant candidates for CSV. Not so much for the final report, though.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 5 May 2021 11:55:32 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: few ideas for pgbench" }, { "msg_contents": "st 5. 5. 2021 v 11:55 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n\n>\n> Hello Pavel,\n>\n> > This is not a simple question. Personally I prefer to show this info\n> every\n> > time, although it can be redundant. Just for check and for more simple\n> > automatic processing.\n> >\n> > When I run pgbench, I usually work with more releases together, so the\n> > server version is important info.\n>\n> Ok. Yes.\n>\n> >>> 2. can ve generate some output in structured format - XML, JSON ?\n> >>\n> >> It is obviously possible, but that would mean some code. ISTM that the\n> >> various outputs are easy enough to parse and convert to anything without\n> >> needing a special format? 
Is there some particular part you have in\n> mind?\n> >>\n> >\n> > I thought about something what I can simply import to Postgres or to R.\n> > But maybe XML or JSON is a bad idea.\n> >\n> > What about CSV? Any run can produce one row.\n>\n> Yep, CSV is simple and nice. It depends on what information you would\n> like. For instance for progress report (-P 1) or logs/sampling (-l) would\n> be relevant candidates for CSV. Not so much for the final report, though.\n>\n\nI think so there can be almost all information. We have to ensure\nconsistency of columns.\n\nThe basic usage can be\n\nfor ....\ndo\n pg_bench ... >> logfile\ndone\n\nand log file can looks like\n\nstart time, rowno, serverver, clientver, connections, scale, readonly,\njobs, tps, latency, ...\n\nThe header row can be optional\n\n>\n> --\n> Fabien.\n>\n\nst 5. 5. 2021 v 11:55 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\nHello Pavel,\n\n> This is not a simple question. Personally I prefer to show this info every\n> time, although it can be redundant. Just for check and for more simple\n> automatic processing.\n>\n> When I run pgbench, I usually work with more releases together, so the\n> server version is important info.\n\nOk. Yes.\n\n>>> 2. can ve generate some output in structured format - XML, JSON ?\n>>\n>> It is obviously possible, but that would mean some code. ISTM that the\n>> various outputs are easy enough to parse and convert to anything without\n>> needing a special format? Is there some particular part you have in mind?\n>>\n>\n> I thought about something what I can simply import to  Postgres or to R.\n> But maybe XML or JSON is a bad idea.\n>\n> What about CSV? Any run can produce one row.\n\nYep, CSV is simple and nice. It depends on what information you would \nlike. For instance for progress report (-P 1) or logs/sampling (-l) would \nbe relevant candidates for CSV. Not so much for the final report, though.I think so there can be almost all information. 
We have to ensure consistency of columns.The basic usage can be for ....do  pg_bench ... >> logfiledoneand log file can looks likestart time, rowno, serverver, clientver, connections, scale, readonly, jobs, tps, latency, ...The header row can be optional\n\n-- \nFabien.", "msg_date": "Wed, 5 May 2021 12:22:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: few ideas for pgbench" }, { "msg_contents": "pá 7. 5. 2021 v 9:46 odesílatel Fabien COELHO <\nfabien.coelho@mines-paristech.fr> napsal:\n\n>\n> Hello,\n>\n> >>> When I run pgbench, I usually work with more releases together, so the\n> >>> server version is important info.\n> >>\n> >> Ok. Yes.\n>\n> Here is a putative patch for this simple part.\n>\n\n+1\n\n\n> >>> What about CSV? Any run can produce one row.\n> >>\n> >> Yep, CSV is simple and nice. It depends on what information you would\n> >> like. For instance for progress report (-P 1) or logs/sampling (-l)\n> would\n> >> be relevant candidates for CSV. Not so much for the final report,\n> though.\n> >\n> > I think so there can be almost all information. We have to ensure\n> > consistency of columns.\n> >\n> > The basic usage can be\n> >\n> > for ....\n> > do\n> > pg_bench ... >> logfile\n> > done\n> >\n> > and log file can looks like\n> >\n> > start time, rowno, serverver, clientver, connections, scale, readonly,\n> > jobs, tps, latency, ...\n> >\n> > The header row can be optional\n>\n> Hmmm. 
It is less clear how to do that with minimal code impact on the\n> code, as some options which change the report structure, eg when using\n> multiple scripts (-b/-f) or having detailed per-op informations (-r), as\n> show below:\n>\n> sh> pgbench -P 1 -T 10 -M prepared -c 2 -b se@9 -b si -r\n> starting vacuum...end.\n> progress: 1.0 s, 10666.9 tps, lat 0.186 ms stddev 0.454\n> progress: 2.0 s, 9928.0 tps, lat 0.201 ms stddev 0.466\n> progress: 3.0 s, 10314.8 tps, lat 0.193 ms stddev 0.469\n> progress: 4.0 s, 10042.7 tps, lat 0.198 ms stddev 0.466\n> progress: 5.0 s, 11084.3 tps, lat 0.180 ms stddev 0.408\n> progress: 6.0 s, 9804.1 tps, lat 0.203 ms stddev 0.474\n> progress: 7.0 s, 10271.5 tps, lat 0.194 ms stddev 0.463\n> progress: 8.0 s, 10511.5 tps, lat 0.190 ms stddev 0.424\n> progress: 9.0 s, 10005.7 tps, lat 0.199 ms stddev 0.501\n> progress: 10.0 s, 10512.4 tps, lat 0.190 ms stddev 0.428\n> pgbench (PostgreSQL) 14.0\n> server version: 13.2\n> transaction type: multiple scripts\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 2\n> number of threads: 1\n> duration: 10 s\n> number of transactions actually processed: 103144\n> latency average = 0.193 ms\n> latency stddev = 0.455 ms\n> initial connection time = 5.043 ms\n> tps = 10319.361549 (without initial connection time)\n> SQL script 1: <builtin: select only>\n> - weight: 9 (targets 90.0% of total)\n> - 92654 transactions (89.8% of total, tps = 9269.856947)\n> - latency average = 0.052 ms\n> - latency stddev = 0.018 ms\n> - statement latencies in milliseconds:\n> 0.000 \\set aid random(1, 100000 * :scale)\n> 0.052 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> SQL script 2: <builtin: simple update>\n> - weight: 1 (targets 10.0% of total)\n> - 10490 transactions (10.2% of total, tps = 1049.504602)\n> - latency average = 1.436 ms\n> - latency stddev = 0.562 ms\n> - statement latencies in milliseconds:\n> 0.001 \\set aid random(1, 100000 * :scale)\n> 0.000 \\set bid random(1, 1 * 
:scale)\n> 0.000 \set tid random(1, 10 * :scale)\n> 0.000 \set delta random(-5000, 5000)\n> 0.027 BEGIN;\n> 0.065 UPDATE pgbench_accounts SET abalance = abalance + :delta\n> WHERE aid = :aid;\n> 0.045 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> 0.048 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)\n> VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> 1.249 END\n>\n> The nature of columns would change depending on options, eg \"initial\n> connection time\" does not make sense under -C, so that under a loop around\n> pgbench scenario rows would not necessarily be consistent…\n>\n> Also, I'm not sure whether such a report can/should include all inputs\n> options.\n>\n> Finally it is unclear how to add such a feature with minimal impact on the\n> source code.\n\n\nIt is a question if this is possible without more changes or without\ncompatibility break :( Probably not. All output should be centralized.\n\n\n> What I usually do is to put each pgbench run output in a separate file and\n> write a small shell/perl/python script to process these, possibly\n> generating CSV on the way.\n>\n\nThe goal of my proposal was a reduction of necessity to write auxiliary\nscripts. The produced document should not be \"nice\", but should be very\neasy to import it to some analytical tools.\n\nThere is an analogy with Postgres's CSV logs. It is the same. We can see\nthe result of pgbench like some log.\n\nPavel\n\n\n\n>\n> --\n> Fabien.", "msg_date": "Fri, 7 May 2021 09:58:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: few ideas for pgbench" }, { "msg_contents": "\n\n>> Finally it is unclear how to add such a feature with minimal impact on the\n>> source code.\n>\n>\n> It is a question if this is possible without more changes or without\n> compatibility break :( Probably not. All output should be centralized.\n\nYes and no.\n\nFor some things we could have \"void report_sometype(file, name, data)\" \nwhich append \"data,\" under csv but \"name = data\\n\" under text, but this \ndoes not work for nested data (eg -r -b/-f), which would rather require \nsome json/yaml/whatever format which can embed a hierarchy.\n\n>> What I usually do is to put each pgbench run output in a separate file and \n>> write a small shell/perl/python script to process these, possibly \n>> generating CSV on the way.\n>\n> The goal of my proposal was a reduction of necessity to write auxiliary\n> scripts. The produced document should not be \"nice\", but should be very\n> easy to import it to some analytical tools.\n\nYes, I understood that. I tend to use CSV for that, import results in pg \nor sqlite and analyse with SQL.\n\n> There is an analogy with Postgres's CSV logs. It is the same. We can see\n> the result of pgbench like some log.\n\nSure, but this works for simple flat data, not changing structures.\n\n-- \nFabien.", "msg_date": "Fri, 7 May 2021 11:28:49 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: few ideas for pgbench" }, { "msg_contents": "pá 7. 5. 2021 v 11:28 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n\n>\n> >> Finally it is unclear how to add such a feature with minimal impact on\n> the\n> >> source code.\n> >\n> >\n> > It is a question if this is possible without more changes or without\n> > compatibility break :( Probably not. 
All output should be centralized.\n>\n> Yes and no.\n>\n> For some things we could have \"void report_sometype(file, name, data)\"\n> which append \"data,\" under csv but \"name = data\\n\" under text, but this\n> does not work for nested data (eg -r -b/-f), which would rather require\n> some json/yaml/whatever format which can embed a hierarchy.\n>\n\nIt can work with nested data too, but the result should be denormalized.\n\n\n\n> >> What I usually do is to put each pgbench run output in a separate file\n> and\n> >> write a small shell/perl/python script to process these, possibly\n> >> generating CSV on the way.\n> >\n> > The goal of my proposal was a reduction of necessity to write auxiliary\n> > scripts. The produced document should not be \"nice\", but should be very\n> > easy to import it to some analytical tools.\n>\n> Yes, I understood that. I tend to use CSV for that, import results in pg\n> or sqlite and analyse with SQL.\n>\n> > There is an analogy with Postgres's CSV logs. It is the same. We can see\n> > the result of pgbench like some log.\n>\n> Sure, but this works for simple flat data, not changing structures.\n>\n\nDenormalized tables are common. Although it can be ugly, it should work.\n\n\n\n> --\n> Fabien.\n>", "msg_date": "Fri, 7 May 2021 11:42:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: few ideas for pgbench" } ]
[ { "msg_contents": "Hi,\n\nThis new thread is a follow-up of [1].\n\n\n Problem description:\n\nWe have occasionally observed objects having an orphaned dependency, the \nmost common case we have seen (if not the only one) is functions not \nlinked to any namespaces.\nA patch has been initially proposed to fix this particular \n(function-to-namespace) dependency (see [1]), but there could be much \nmore scenarios (like the function-to-datatype one highlighted by Gilles \nin [1] that could lead to a function having an invalid parameter datatype).\nAs Tom said there are dozens more cases that would need to be \nconsidered, and a global approach to avoid those race conditions should \nbe considered instead.\n\nThe attached patch is avoiding those race conditions globally by \nchanging the dependency mechanism: we are using a dirty snapshot any \ntime we’re about to create a pg_depend or pg_shdepend entry.\nThat way we can check if there is in-flight transactions that are \naffecting the dependency: if that’s the case, an error is being reported.\n\nThis approach has been chosen over another one that would have make use \nof the locking machinery.\nThe reason for this choice is to avoid possible slow down of typical DDL \ncommand, risk of deadlock, number of locks taken by transaction...\n\n\n Implementation overview:\n\n * A new catalog snapshot is added: DirtyCatalogSnapshot.\n * This catalog snapshot is a dirty one to be able to look for\n in-flight dependencies.\n * Its usage is controlled by a new UseDirtyCatalogSnapshot variable.\n * Any time this variable is being set to true, then the next call to\n GetNonHistoricCatalogSnapshot() is returning the dirty snapshot.\n * This snapshot is being used to check for in-flight dependencies and\n also to get the objects description to generate the error messages.\n\n*Testing:*\n\n\n Test 1\n\nSession1:\n\n|postgres=# create schema tobeorph; CREATE SCHEMA postgres=# create \ntable tobeorph.bdt (a int); CREATE TABLE 
postgres=# begin;\nBEGIN\npostgres=*# CREATE OR REPLACE FUNCTION tobeorph.bdttime() RETURNS TIMESTAMP AS $$\nDECLARE outTS TIMESTAMP;\nBEGIN\n perform pg_sleep(10);\n RETURN CURRENT_TIMESTAMP;\nEND;\n$$ LANGUAGE plpgsql volatile;\nCREATE FUNCTION |\n\nSession 1 does not commit, then session 2:\n\n|postgres=# drop schema tobeorph;\nERROR: cannot drop schema tobeorph because other objects depend on it\nDETAIL: table tobeorph.bdt depends on schema tobeorph\nfunction tobeorph.bdttime() (not yet committed) depends on schema tobeorph\nHINT: DROP and DROP CASCADE won't work when there are uncommitted dependencies. |\n\n\n Test 2\n\nSession 1:\n\n|postgres=# create schema toinsert;\nCREATE SCHEMA\npostgres=# begin;\nBEGIN\npostgres=*# drop schema toinsert;\nDROP SCHEMA |\n\nSession 1 does not commit, then session 2:\n\n|postgres=# CREATE OR REPLACE FUNCTION toinsert.bdttime() RETURNS TIMESTAMP AS $$\nDECLARE outTS TIMESTAMP;\nBEGIN\n perform pg_sleep(10);\n RETURN CURRENT_TIMESTAMP;\nEND;\n$$ LANGUAGE plpgsql volatile;\nERROR: cannot create function toinsert.bdttime() because it depends of other objects uncommitted dependencies\nDETAIL: function toinsert.bdttime() depends on schema toinsert (dependency not yet committed)\nHINT: CREATE won't work as long as there is uncommitted dependencies. |\n\n\n Test3\n\n|Session1: psql -U toorph postgres\npsql (14devel)\nType \"help\" for help.\npostgres=> begin;\nBEGIN\npostgres=*> CREATE OR REPLACE FUNCTION bdttime() RETURNS TIMESTAMP AS $$\nDECLARE outTS TIMESTAMP;\nBEGIN\n perform pg_sleep(10);\n RETURN CURRENT_TIMESTAMP;\nEND;\n$$ LANGUAGE plpgsql volatile;\nCREATE FUNCTION |\n\nSession 1 does not commit, then session 2:\n\n|postgres=# drop owned by toorph;\nERROR: cannot drop objects owned by role toorph because other uncommitted objects depend on it\nDETAIL: function public.bdttime() (not yet committed) depends on role toorph\nHINT: Commit or rollback function public.bdttime() creation. 
|\n\nI'm creating a new commitfest entry for this patch.\n\nThanks\n\nBertrand\n\n||\n\n[1]: \nhttps://www.postgresql.org/message-id/flat/a4f55089-7cbd-fe5d-a9bb-19adc6418ae9%40darold.net#9af5cdaa9e80879beb1def3604c976e8", "msg_date": "Tue, 4 May 2021 11:55:43 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Patch to avoid orphaned dependencies" }, { "msg_contents": "Hello Bertrand,\n\nLe mardi 4 mai 2021, 11:55:43 CEST Drouvot, Bertrand a écrit :\n> \n> Implementation overview:\n> \n> * A new catalog snapshot is added: DirtyCatalogSnapshot.\n> * This catalog snapshot is a dirty one to be able to look for\n> in-flight dependencies.\n> * Its usage is controlled by a new UseDirtyCatalogSnapshot variable.\n> * Any time this variable is being set to true, then the next call to\n> GetNonHistoricCatalogSnapshot() is returning the dirty snapshot.\n> * This snapshot is being used to check for in-flight dependencies and\n> also to get the objects description to generate the error messages.\n> \n\nI quickly tested the patch, it behaves as advertised, and passes tests.\n\nIsolation tests should be added to demonstrate the issues it is solving.\n\nHowever, I am bit wary of this behaviour of setting the DirtyCatalogSnapshot \nglobal variable which is then reset after each snapshot acquisition: I'm \nhaving trouble understanding all the implications of that, if it would be \npossible to introduce an unforeseen snapshot before the one we actually want \nto be dirty. \n\nI don't want to derail this thread, but couldn't predicate locks on the \npg_depend index pages corresponding to the dropped object / referenced objects \nwork as a different approach ? 
I'm not familiar enough with them so maybe there \nis some fundamental misunderstanding on my end.\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Fri, 17 Sep 2021 10:09:00 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Hi Ronan,\n\nOn 9/17/21 10:09 AM, Ronan Dunklau wrote:\n> Hello Bertrand,\n>\n> Le mardi 4 mai 2021, 11:55:43 CEST Drouvot, Bertrand a écrit :\n>> Implementation overview:\n>>\n>> * A new catalog snapshot is added: DirtyCatalogSnapshot.\n>> * This catalog snapshot is a dirty one to be able to look for\n>> in-flight dependencies.\n>> * Its usage is controlled by a new UseDirtyCatalogSnapshot variable.\n>> * Any time this variable is being set to true, then the next call to\n>> GetNonHistoricCatalogSnapshot() is returning the dirty snapshot.\n>> * This snapshot is being used to check for in-flight dependencies and\n>> also to get the objects description to generate the error messages.\n>>\n> I quickly tested the patch, it behaves as advertised, and passes tests.\n\nThanks for looking at it!\n\n>\n> Isolation tests should be added to demonstrate the issues it is solving.\n\nGood point. 
I'll have a look.\n\n>\n> However, I am bit wary of this behaviour of setting the DirtyCatalogSnapshot\n> global variable which is then reset after each snapshot acquisition: I'm\n> having trouble understanding all the implications of that, if it would be\n> possible to introduce an unforeseen snapshot before the one we actually want\n> to be dirty.\n\nI don't think that could be possible as long as:\n\n- this is a per backend variable\n\n- we pay attention where we set it to true\n\nBut i might be missing something.\n\nDo you have any corner cases in mind?\n\n> I don't want to derail this thread, but couldn't predicate locks on the\n> pg_depend index pages corresponding to the dropped object / referenced objects\n> work as a different approach ?\n\nI'm fine to have a look at another approach if needed, but does it mean \nwe are not happy with the current approach proposal?\n\nThanks\n\nBertrand\n\n\n\n", "msg_date": "Mon, 20 Sep 2021 12:50:30 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "> On 20 Sep 2021, at 12:50, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> \n> Hi Ronan,\n> \n> On 9/17/21 10:09 AM, Ronan Dunklau wrote:\n>> Hello Bertrand,\n>> \n>> Le mardi 4 mai 2021, 11:55:43 CEST Drouvot, Bertrand a écrit :\n>>> Implementation overview:\n>>> \n>>> * A new catalog snapshot is added: DirtyCatalogSnapshot.\n>>> * This catalog snapshot is a dirty one to be able to look for\n>>> in-flight dependencies.\n>>> * Its usage is controlled by a new UseDirtyCatalogSnapshot variable.\n>>> * Any time this variable is being set to true, then the next call to\n>>> GetNonHistoricCatalogSnapshot() is returning the dirty snapshot.\n>>> * This snapshot is being used to check for in-flight dependencies and\n>>> also to get the objects description to generate the error messages.\n>>> \n>> I quickly tested the patch, it behaves as advertised, and passes 
tests.\n> \n> Thanks for looking at it!\n> \n>> \n>> Isolation tests should be added to demonstrate the issues it is solving.\n> \n> Good point. I'll have a look.\n> \n>> \n>> However, I am bit wary of this behaviour of setting the DirtyCatalogSnapshot\n>> global variable which is then reset after each snapshot acquisition: I'm\n>> having trouble understanding all the implications of that, if it would be\n>> possible to introduce an unforeseen snapshot before the one we actually want\n>> to be dirty.\n> \n> I don't think that could be possible as long as:\n> \n> - this is a per backend variable\n> \n> - we pay attention where we set it to true\n> \n> But i might be missing something.\n> \n> Do you have any corner cases in mind?\n> \n>> I don't want to derail this thread, but couldn't predicate locks on the\n>> pg_depend index pages corresponding to the dropped object / referenced objects\n>> work as a different approach ?\n> \n> I'm fine to have a look at another approach if needed, but does it mean we are not happy with the current approach proposal?\n\nThis patch fails to apply as a whole, with the parts applying showing quite\nlarge offsets. 
Have you had the chance to look at the isolation test asked for\nabove?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 14:25:07 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Hi,\n\nOn 11/17/21 2:25 PM, Daniel Gustafsson wrote:\n>\n> This patch fails to apply as a whole, with the parts applying showing quite\n> large offsets.\nThanks for the warning, please find attached a rebase of it.\n> Have you had the chance to look at the isolation test asked for\n> above?\n\nNot yet, but I'll look at it for sure.\n\nThanks\n\nBertrand", "msg_date": "Tue, 23 Nov 2021 16:22:19 +0100", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Hi,\n\nOn 11/23/21 4:22 PM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 11/17/21 2:25 PM, Daniel Gustafsson wrote:\n>>\n>> This patch fails to apply as a whole, with the parts applying showing \n>> quite\n>> large offsets.\n> Thanks for the warning, please find attached a rebase of it.\n>> Have you had the chance to look at the isolation test asked for\n>> above?\n>\n> Not yet, but I'll look at it for sure.\n>\nPlease find enclosed v1-0003-orphaned-dependencies.patch, that contains:\n\n- a mandatory rebase\n\n- a few isolation tests added in src/test/modules/test_dependencies (but \nI'm not sure at all that's the right place to add them, is it?)\n\nThanks\n\nBertrand", "msg_date": "Fri, 17 Dec 2021 14:19:18 +0100", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Hi,\n\nOn 2021-12-17 14:19:18 +0100, Drouvot, Bertrand wrote:\n> Please find enclosed v1-0003-orphaned-dependencies.patch, that contains:\n> \n> - a mandatory rebase\n> \n> - a few isolation tests added in 
src/test/modules/test_dependencies (but I'm\n> not sure at all that's the right place to add them, is it?)\n\nThis fails on windows w/ msvc:\n\nhttps://cirrus-ci.com/task/5368174125252608?logs=configure#L102\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.157904#L12\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Dec 2021 22:03:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Hi,\n\nOn 12/31/21 7:03 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-12-17 14:19:18 +0100, Drouvot, Bertrand wrote:\n>> Please find enclosed v1-0003-orphaned-dependencies.patch, that contains:\n>>\n>> - a mandatory rebase\n>>\n>> - a few isolation tests added in src/test/modules/test_dependencies (but I'm\n>> not sure at all that's the right place to add them, is it?)\n> This fails on windows w/ msvc:\n>\n> https://cirrus-ci.com/task/5368174125252608?logs=configure#L102\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.157904#L12\n\nThanks Andres for the warning.\n\nPlease find enclosed v1-0004-orphaned-dependencies.patch that addresses \nthe issue.\n\nThanks\n\nBertrand", "msg_date": "Tue, 4 Jan 2022 10:28:27 +0100", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Bertand, do you think this has any chance of making it into v15? 
If not,\nare you alright with adjusting the commitfest entry to v16 and moving it to\nthe next commitfest?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:21:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Bertand, do you think this has any chance of making it into v15? If not,\n> are you alright with adjusting the commitfest entry to v16 and moving it to\n> the next commitfest?\n\nI looked this over briefly, and IMO it should have no chance of being\ncommitted in anything like this form.\n\nThe lesser problem is that (as already noted) the reliance on a global\nvariable that changes catalog lookup semantics is just unbelievably\nscary. An example of the possible consequences here is that a syscache\nentry could get made while that's set, containing a row that we should\nnot be able to see yet, and indeed might never get committed at all.\nPerhaps that could be addressed by abandoning the patch's ambition to tell\nyou the details of an uncommitted object (which would have race conditions\nanyway), so that *only* reads of pg_[sh]depend itself need be dirty.\n\nThe bigger problem is that it fails to close the race condition that\nit's intending to solve. 
This patch will catch a race like this:\n\nSession doing DROP Session doing CREATE\n\nDROP begins\n\n CREATE makes a dependent catalog entry\n\nDROP scans for dependent objects\n\n CREATE commits\n\nDROP removes catalog entry\n\nDROP commits\n\nHowever, it will not catch this slightly different timing:\n\nSession doing DROP Session doing CREATE\n\nDROP begins\n\nDROP scans for dependent objects\n\n CREATE makes a dependent catalog entry\n\n CREATE commits\n\nDROP removes catalog entry\n\nDROP commits\n\nSo I don't see that we've moved the goalposts very far at all.\n\nRealistically, if we want to prevent this type of problem, then all\ncreation DDL will have to take a lock on each referenced object that'd\nconflict with a lock taken by DROP. This might not be out of reach:\nI think we do already take such locks while dropping objects. The\nreference-side lock could be taken by the recordDependency mechanism\nitself, ensuring that we don't miss anything; and that would also\nallow us to not bother taking such a lock on pinned objects, which'd\ngreatly cut the cost (though not to zero).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Mar 2022 12:49:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "On Wed, Mar 23, 2022 at 12:49:06PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Bertand, do you think this has any chance of making it into v15? 
If not,\n>> are you alright with adjusting the commitfest entry to v16 and moving it to\n>> the next commitfest?\n> \n> I looked this over briefly, and IMO it should have no chance of being\n> committed in anything like this form.\n\nI marked the commitfest entry as waiting-on-author, set the target version\nto v16, and moved it to the next commitfest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 12:18:38 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3106/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 14:03:39 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 23, 2022 at 12:49:06PM -0400, Tom Lane wrote:\n> Realistically, if we want to prevent this type of problem, then all\n> creation DDL will have to take a lock on each referenced object that'd\n> conflict with a lock taken by DROP. This might not be out of reach:\n> I think we do already take such locks while dropping objects. 
The\n> reference-side lock could be taken by the recordDependency mechanism\n> itself, ensuring that we don't miss anything; and that would also\n> allow us to not bother taking such a lock on pinned objects, which'd\n> greatly cut the cost (though not to zero).\n\nThanks for the idea (and sorry for the delay replying to it)! I had a look at it\nand just created a new thread [1] based on your proposal.\n\n[1]: https://www.postgresql.org/message-id/flat/ZiYjn0eVc7pxVY45%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 08:49:16 +0000", "msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to avoid orphaned dependencies" } ]
[ { "msg_contents": "Hackers,\n\nDuring the version 14 development period, a few checks of toasted attributes were written but never committed. For the version 15 development cycle, I'd like to consider extending the checks of toasted attributes. First, no toasted attribute should ever have a rawsize larger than the 1GB varlena limit. Second, no compressed toasted attribute should have an extsize indicating that the toast expanded during toasting. Such a extsize could mean the compression code is malfunctioning, or that the extsize or rawsize fields are corrupt. Third, any compressed attribute should have a valid compression method ID.\n\nThese checks are cheap. Actually retrieving the compressed toasted data and checking that it uncompresses correctly would have very different performance implications, but that is not included in this patch.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 4 May 2021 09:20:00 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Extending amcheck to check toast size and compression" }, { "msg_contents": "+ /* Oversized toasted attributes should never be stored */\n+ if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)\n+ report_corruption(ctx,\n+ psprintf(\"toast value %u rawsize %u exceeds limit %u\",\n+ toast_pointer.va_valueid,\n+ toast_pointer.va_rawsize,\n+ VARLENA_SIZE_LIMIT));\n+\n\nI think the comment sounds wrong since toast is precisely for storage of\n\"oversized\" attributes.\n\nhttps://www.postgresql.org/docs/current/storage-toast.html\n| This section provides an overview of TOAST (The Oversized-Attribute Storage Technique).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 May 2021 11:43:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "> On May 4, 2021, at 9:43 AM, Justin 
Pryzby <pryzby@telsasoft.com> wrote:\n> \n> + /* Oversized toasted attributes should never be stored */\n> + if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)\n> + report_corruption(ctx,\n> + psprintf(\"toast value %u rawsize %u exceeds limit %u\",\n> + toast_pointer.va_valueid,\n> + toast_pointer.va_rawsize,\n> + VARLENA_SIZE_LIMIT));\n> +\n> \n> I think the comment sounds wrong since toast is precisely for storage of\n> \"oversized\" attributes.\n> \n> https://www.postgresql.org/docs/current/storage-toast.html\n> | This section provides an overview of TOAST (The Oversized-Attribute Storage Technique).\n\nThanks for reviewing! Changed to:\n\n+ /* Toasted attributes too large to be untoasted should never be stored */\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 4 May 2021 11:24:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nThis patch looks good to me. Considering a positive response from another reviewer, status change to \"Ready for Committer\" seems to be appropriate.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 12 May 2021 15:19:40 +0000", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "Hi hackers,\n\n> make installcheck-world: tested, failed\n> Implements feature: tested, failed\n> Spec compliant: tested, failed\n> Documentation: tested, failed\n\nVery sorry about these \"failed\" checkboxes. Didn't use the commitfest\nwebapp for a while. 
The patch is fine.\n\n> The new status of this patch is: Ready for Committer\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 12 May 2021 18:25:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "> @@ -30,6 +30,9 @@ PG_FUNCTION_INFO_V1(verify_heapam);\n> /* The number of columns in tuples returned by verify_heapam */\n> #define HEAPCHECK_RELATION_COLS 4\n> \n> +/* The largest valid toast va_rawsize */\n> +#define VARLENA_SIZE_LIMIT 0x3FFFFFFF\n> +\n\nHmm, a toasted datum cannot be larger than MaxAllocSize, because it's \nreconstituted in a palloc'd datum, right?\n\n- Heikki\n\n\n", "msg_date": "Wed, 14 Jul 2021 13:33:59 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "\n\n> On Jul 14, 2021, at 3:33 AM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n>> +/* The largest valid toast va_rawsize */\n>> +#define VARLENA_SIZE_LIMIT 0x3FFFFFFF\n>> +\n> \n> Hmm, a toasted datum cannot be larger than MaxAllocSize, because it's reconstituted in a palloc'd datum, right?\n\nNo datum size exceeds MaxAllocSize, and no datum expands when compressed (because for those that do expand under any particular compression algorithm, we opt to instead store the datum uncompressed), so no valid toast pointer should contain a va_rawsize field greater than MaxAllocSize. 
Any toast pointers that have larger va_rawsize fields are therefore corrupt.\n\nVARLENA_SIZE_LIMIT is defined here equal to MaxAllocSize:\n\n src/include/utils/memutils.h:#define MaxAllocSize ((Size) 0x3fffffff) /* 1 gigabyte - 1 */\n\nEarlier versions of the patch used MaxAllocSize rather than defining VARLENA_SIZE_LIMIT, but review comments suggested that was less clear.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 14 Jul 2021 07:57:47 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "\n\n> On Jul 14, 2021, at 7:57 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> so no valid toast pointer should contain a va_rawsize field greater than MaxAllocSize\n\n... nor should any valid toast pointer contain a va_extinfo field encoding a va_extsize greater than va_rawsize - VARHDRSZ.\n\nViolations of either of these properties suggest either a bug in the code which wrote the toast pointer, or that the toast pointer has been corrupted since being written, or that the page of data being read is being interpreted incorrectly, perhaps due to catalog corruption, or because the page is just random noise and not part of a valid table, etc. The amcheck code is not focused specifically on whether the toasted value can be detoasted so much as deducing that the data cannot be correct.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 14 Jul 2021 08:38:54 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "Right so here's a review.\n\nI think the patch is committable as is. It's an improvement and it\ndoes the job as promised. 
I do have some comments but I don't think\nthey're serious issues and would actually be pretty happy committing\nit as is. Fwiw I didn't realize how short the patch was at first and\nit probably doesn't need yet another review.\n\n\n/* Toasted attributes too large to be untoasted should never be stored */\nif (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)\n\n1) I know this used to say MaxAlloc -- personally I would probably go\nwith that but either is fine. But if you're going to have a separate\nconstant there could be more of a comment explaining why that's the\nmaximum -- probably with a pointer to MaxAlloc and postgres.h's\nVARSIZE macros.\n\nThe switch statement at line 1443 seems a bit ... baroque. Is it\nclearer than a simple \"if cmid != TOAST_PGLZ_COMPRESSION_ID && cmid !=\nTOAST_LZ4_COMPRESSION_ID)\" ? I mean, I see this is easier to add more\ncases to but I found dealing with a case that falls through and no\ndefault is a lot of cognitive overhead to understand what is in the\nend just effectively a simple branch.\n\nFwiw compilers aren't always the best at optimizing switch statements.\nIt's entirely possible a compiler may end up building a whole lookup\ntable of jumps for this thing. Not that it's performance critical but\n...\n\nBut all that's more words than necessary for a minor style comment.\n\n\nFwiw I spent a few minutes thinking about and writing up this\nsuggestion and then only afterwards realized the code in question\nwasn't from this patch. I'll mention it anyways but it's not relevant\nto this patch review I guess :)\n\nI found the whole expected_chunk_seq parameter thing a bit confusing\nand less useful than possible. I would instead suggestion:\n\nAllocate an array of the expected number of chunk numbers before\ncalling check_toast_tuple and then just gather the chunk_seq that are\nreturned. When it's finished you can do things like: a) Check if\nthey're all ascending and report index corruption if not. 
b) Check if\nany numbers are missing and report which ones are missing and/or how\nmany. c) Check if there are duplicates and report that. These would\nall be easier for a user to interpret than \"index scan returned chunk\n5 when expecting chunk 9\".\n\n\n\nOn Tue, 4 May 2021 at 12:20, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> Hackers,\n>\n> During the version 14 development period, a few checks of toasted attributes were written but never committed. For the version 15 development cycle, I'd like to consider extending the checks of toasted attributes. First, no toasted attribute should ever have a rawsize larger than the 1GB varlena limit. Second, no compressed toasted attribute should have an extsize indicating that the toast expanded during toasting. Such a extsize could mean the compression code is malfunctioning, or that the extsize or rawsize fields are corrupt. Third, any compressed attribute should have a valid compression method ID.\n>\n> These checks are cheap. Actually retrieving the compressed toasted data and checking that it uncompresses correctly would have very different performance implications, but that is not included in this patch.\n>\n>\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n\n-- \ngreg\n\nOn Wed, 14 Jul 2021 at 10:58, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jul 14, 2021, at 3:33 AM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> >> +/* The largest valid toast va_rawsize */\n> >> +#define VARLENA_SIZE_LIMIT 0x3FFFFFFF\n> >> +\n> >\n> > Hmm, a toasted datum cannot be larger than MaxAllocSize, because it's reconstituted in a palloc'd datum, right?\n>\n> No datum size exceeds MaxAllocSize, and no datum expands when compressed (because for those that do expand under any particular compression algorithm, we opt to instead store the datum uncompressed), so no valid toast pointer should contain a va_rawsize field greater than 
MaxAllocSize. Any toast pointers that have larger va_rawsize fields are therefore corrupt.\n>\n> VARLENA_SIZE_LIMIT is defined here equal to MaxAllocSize:\n>\n> src/include/utils/memutils.h:#define MaxAllocSize ((Size) 0x3fffffff) /* 1 gigabyte - 1 */\n>\n> Earlier versions of the patch used MaxAllocSize rather than defining VARLENA_SIZE_LIMIT, but review comments suggested that was less clear.\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n>\n>\n\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 19 Oct 2021 16:58:47 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "\n\n> On Oct 19, 2021, at 1:58 PM, Greg Stark <stark@mit.edu> wrote:\n> \n> Right so here's a review.\n> \n> I think the patch is committable as is. It's an improvement and it\n> does the job as promised. I do have some comments but I don't think\n> they're serious issues and would actually be pretty happy committing\n> it as is. Fwiw I didn't realize how short the patch was at first and\n> it probably doesn't need yet another review.\n\nThanks for reviewing!\n\n> /* Toasted attributes too large to be untoasted should never be stored */\n> if (toast_pointer.va_rawsize > VARLENA_SIZE_LIMIT)\n> \n> 1) I know this used to say MaxAlloc -- personally I would probably go\n> with that but either is fine. But if you're going to have a separate\n> constant there could be more of a comment explaining why that's the\n> maximum -- probably with a pointer to MaxAlloc and postgres.h's\n> VARSIZE macros.\n\nI find the comment a bit verbose that way, but maybe people like it better? 
How does this look:\n\ndiff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c\nindex 774a70f63d..988e104d8e 100644\n--- a/contrib/amcheck/verify_heapam.c\n+++ b/contrib/amcheck/verify_heapam.c\n@@ -30,7 +30,11 @@ PG_FUNCTION_INFO_V1(verify_heapam);\n /* The number of columns in tuples returned by verify_heapam */\n #define HEAPCHECK_RELATION_COLS 4\n \n-/* The largest valid toast va_rawsize */\n+/*\n+ * The largest valid toast va_rawsize. This is the same as the MaxAllocSize\n+ * constant from memutils.h, and is the largest size that can fit in a varlena\n+ * va_header's 30-bit size field.\n+ */\n #define VARLENA_SIZE_LIMIT 0x3FFFFFFF\n \n /*\n\n\n> The switch statement at line 1443 seems a bit ... baroque. Is it\n> clearer than a simple \"if cmid != TOAST_PGLZ_COMPRESSION_ID && cmid !=\n> TOAST_LZ4_COMPRESSION_ID)\" ? I mean, I see this is easier to add more\n> cases to but I found dealing with a case that falls through and no\n> default is a lot of cognitive overhead to understand what is in the\n> end just effectively a simple branch.\n\nThe variable cmid (which stands for compression method identifier), is of enum type ToastCompressionId. From toast_compression.h:\n\ntypedef enum ToastCompressionId\n{\n TOAST_PGLZ_COMPRESSION_ID = 0,\n TOAST_LZ4_COMPRESSION_ID = 1,\n TOAST_INVALID_COMPRESSION_ID = 2\n} ToastCompressionId;\n\nThere is clearly room for one more compression algorithm in that list without overflowing the 2 bits reserved for such values, and I'd not like to gamble on some future hacker who adds TOAST_MY_FANCY_COMPRESSION_ID = 3 remembering to update contrib/amcheck. I used a switch statement to trigger a compiler warning in such an event. \n\n> Fwiw compilers aren't always the best at optimizing switch statements.\n> It's entirely possible a compiler may end up building a whole lookup\n> table of jumps for this thing. 
Not that it's performance critical but\n> ...\n\nThat may be a fair argument, but I'm a huge fan of using enums and switch statements to elicit the compiler's help in future modifications to the code. This is the first time I've heard a complaint of this sort and I'm unsure how to respond. How common is this optimization problem on modern compilers?\n\n> But all that's more words than necessary for a minor style comment.\n\nOk.\n\n> Fwiw I spent a few minutes thinking about and writing up this\n> suggestion and then only afterwards realized the code in question\n> wasn't from this patch. I'll mention it anyways but it's not relevant\n> to this patch review I guess :)\n\nSure, we can discuss it.\n\n> I found the whole expected_chunk_seq parameter thing a bit confusing\n> and less useful than possible. I would instead suggestion:\n> \n> Allocate an array of the expected number of chunk numbers before\n> calling check_toast_tuple and then just gather the chunk_seq that are\n> returned.\n\nOf course, you might get more chunks back than you expected, and overflow your array. But assuming you realloc, and assuming the checker avoids going into an infinite loop, that is one option.\n\n> When it's finished you can do things like: a) Check if\n> they're all ascending and report index corruption if not. b) Check if\n> any numbers are missing and report which ones are missing and/or how\n> many. c) Check if there are duplicates and report that. These would\n> all be easier for a user to interpret than \"index scan returned chunk\n> 5 when expecting chunk 9\".\n\nThis was reworked multiple times. The problem is how to think about the missing or extra chunks. One interpretation is that the chunks themselves are corrupt, but another interpretation is that the toast index is corrupt and causing the index scan over the toast table to visit the same chunk multiple times, or in the wrong order, etc. The index scan itself might bomb out with a segfault, or go into an infinite loop. 
It's hard to predict such things in the face of corruption, especially when considering that the index scan code might be modified in the future. I'm not claiming there is no room for improvement here -- likely there is -- but it is not simple, and the patch that would result would be larger than the patch actually being reviewed. I'd rather leave such a project for another day.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 20 Oct 2021 09:41:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "On Wed., Oct. 20, 2021, 12:41 Mark Dilger, <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n> I used a switch statement to trigger a compiler warning in such an event.\n>\n\nCatching better compiler diagnostics is an excellent reason to choose this\nstructure. I guess all I could ask is that the comment saying no default\nbranch say this is the motivation.\n\n>\n\nOn Wed., Oct. 20, 2021, 12:41 Mark Dilger, <mark.dilger@enterprisedb.com> wrote:I used a switch statement to trigger a compiler warning in such an event.Catching better compiler diagnostics is an excellent reason to choose this structure. I guess all I could ask is that the comment saying no default branch say this is the motivation.", "msg_date": "Wed, 20 Oct 2021 14:42:35 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "\n\n> On Oct 20, 2021, at 11:42 AM, Greg Stark <stark@mit.edu> wrote:\n> \n> \n> \n> On Wed., Oct. 20, 2021, 12:41 Mark Dilger, <mark.dilger@enterprisedb.com> wrote:\n> \n> I used a switch statement to trigger a compiler warning in such an event.\n> \n> Catching better compiler diagnostics is an excellent reason to choose this structure. 
I guess all I could ask is that the comment saying no default\nbranch say this is the motivation.\n", "msg_date": "Wed, 20 Oct 2021 14:42:35 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "\n\n> On Oct 20, 2021, at 11:42 AM, Greg Stark <stark@mit.edu> wrote:\n> \n> \n> \n> On Wed., Oct. 20, 2021, 12:41 Mark Dilger, <mark.dilger@enterprisedb.com> wrote:\n> \n> I used a switch statement to trigger a compiler warning in such an event.\n> \n> Catching better compiler diagnostics is an excellent reason to choose this structure. 
How about:\n\nDone that way.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 3 Nov 2021 15:56:41 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "On Wed, Nov 3, 2021 at 6:56 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Done that way.\n\nI agree with what others have said: this looks fine.\n\nBut, is it plausible to add test coverage for the new checks, or is\nthat going to be too much of a pain?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Nov 2021 10:53:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "> On Nov 4, 2021, at 7:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> But, is it plausible to add test coverage for the new checks, or is\n> that going to be too much of a pain?\n\nIt only takes about 20 additional lines in the regression test to check the code paths for raw sizes which are too large and too small, so I've done that in this next version. 
Testing corrupt compressed data in a deterministic, cross platform manner with a compact, easy to maintain regression test has eluded me and is not included here.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 4 Nov 2021 15:58:16 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "On Thu, Nov 4, 2021 at 6:58 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> It only takes about 20 additional lines in the regression test to check the code paths for raw sizes which are too large and too small, so I've done that in this next version. Testing corrupt compressed data in a deterministic, cross platform manner with a compact, easy to maintain regression test has eluded me and is not included here.\n\nOK, I've committed this version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 09:34:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> OK, I've committed this version.\n\nSome of the buildfarm is unimpressed with this --- looks like the test\noutput is less stable than you thought.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Nov 2021 23:56:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extending amcheck to check toast size and compression" }, { "msg_contents": "\n\n> On Nov 5, 2021, at 8:56 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Some of the buildfarm is unimpressed with this --- looks like the test\n> output is less stable than you thought.\n\nYes, it does. 
I had to play with it a bit to be sure the test itself is faulty, and I believe that it is.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 6 Nov 2021 01:04:48 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Extending amcheck to check toast size and compression" } ]
[ { "msg_contents": "Hi all\n\nThe attached patch adds support for running any temp-install based tests\n(check, isolationcheck, src/test/recovery, etc) under the control of\nvalgrind with a simple\n\n make USE_VALGRIND=1 check\n\nIt's based on a script I've been using for some time to run faster, simpler\nValgrind checks on the codebase with much less irrelevant noise than the\nusual approaches.\n\nThere are no C code changes at all in this patch, it only touches\nMakefile.global and adds a new src/tools/valgrind_wrapper tool.\n\nWhen you specify USE_VALGRIND=1, the PATH used by $(with_temp_install) is\nprefixed with a tmp_install/bin_valgrind_wrapper/ directory. Each binary in\n$(bindir) gets a corresponding wrapper script in bin_valgrind_wrapper in\nthe temp install. The wrappers intercept execution of every command in the\nbindir and exec them under the control of valgrind (or skip valgrind and\nexec that target directly, if desired\n\nThis has many advantages over the usual approaches of an installcheck-based\nvalgrind run or \"valgrind make check\":\n\n* There's no custom setup, it works out of the box\n* It produces much less irrelevant log output and runs a lot faster because\nit only runs postgres-related binaries under valgrind, not irrelevant noise\nlike perl interpreters, make, shellscripts, etc.\n* It's much more targeted and selective - if you're only interested in some\nextension or new backend feature, you can trivially set it to target just\nthe backend, skip checking of initdb, and skip checking of psql, so you get\nmore relevant log output and faster runs.\n\nI'll follow up to this post with some timing and log output numbers but\nwanted to share what I had first.\n\n-DUSE_VALGRIND is also added to CFLAGS at compile time when USE_VALGRIND=1\nis passed to make. 
This gets rid of the need to hack pg_config_manual.h or\nfiddle with configure re-runs when you want to build with valgrind support.\nArguably it'd be better to add a --enable-valgrind option to configure. LMK\nif that's preferable.\n\nNote that there's a bit of a hack in the wrapper script to work around\nValgrind's inability to set the argv[0] of a process run under valgrind to\nanything other than the exact command-name to be executed. I have a patch\nfor valgrind pending to add that capability (like \"exec -a\" in bash) but a\nworkaround is necessary for now. It's made a bit more complicated by\nPostgreSQL's determination to canonicalize paths and follow symlinks in\nfind_my_exec(). The script's hardlink based workarounds for this could be\nremoved if we could agree to support a debug env-var or command-line option\nthat could be used to supply an override path to be returned by\nfind_my_exec() instead of performing normal discovery. If you'd prefer that\napproach to the current workaround in the script let me know.\n\nI'm also willing to add valgrind-support-detection logic that will cause\nvalgrind launched via \"make USE_VALGRIND=1\" to refuse to run if it detects\nthat the target postgres was not built with -DUSE_VALGRIND for proper\ninstrumentation. This can be done with the valgrind --require-text-symbol\noption and a dummy export symbol added to the symbol table only when\ncompiled with -DUSE_VALGRIND. If that's desirable let me know, it should be\nquick to add.\n\nYou can find more detail in the patch commit message (attached) and in the\nsrc/test/valgrind_wrapper script it adds. 
If you're wondering why the\nvalgrind options --trace-children=yes --trace-children-skip=pattern\n--trace-children-skip-by-arg=pattern don't solve this problem, read the\nscript's comments.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 5 May 2021 15:29:48 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "[PATCH] Faster, easier valgrind runs with make USE_VALGRIND=1 check" } ]
[ { "msg_contents": "Hi all\n\nI'm thinking of piggy-backing on the approach used in the \"Get memory\ncontexts of an arbitrary backend process\" patch in order to provide access\nto detailed reorder buffer content statistics from walsenders on request.\n\nRight now the reorder buffer is mostly a black-box. I mostly rely on gdb or\non dynamic probes (perf, systemtap) to see what it's doing. I intend a\npatch soon to add a couple of fields to struct WalSnd to report some very\ncoarse reorder buffer stats - at least oldest buffered xid, number of\nbuffered txns, total bytes of buffered txns in memory, total bytes of\nbuffered txns spilled to disk.\n\nBut sometimes what I really want is details on the txns that're in the\nreorder buffer, and that's not feasible to export via always-enabled\nreporting like struct WalSnd. So I'm thinking that the same approach used\nfor the memory context stats patch might work well for asking the walsender\nfor a detailed dump of reorder buffer contents. Something like a\nper-buffered-txn table of txn topxid, start-lsn, most recent change lsn,\nnumber of changes, number of subxids, number of invalidations, number of\ncatalog changes, buffer size in memory, buffer size spilled to disk.\n\nAnyone drastically opposed to the idea?\n\n(I know I have to finish up with the LWLock tracepoint patchset first, this\nis a RFC at this stage).\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 5 May 2021 18:33:27 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "RFC: Detailed reorder buffer stats dumps" }, { "msg_contents": "On Wed, May 5, 2021 at 4:03 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> Hi all\n>\n> I'm thinking of piggy-backing on the approach used in the \"Get memory contexts of an arbitrary backend process\" patch in order to provide access to detailed reorder buffer content statistics from walsenders on request.\n>\n> Right now the reorder buffer is mostly a black-box. I mostly rely on gdb or on dynamic probes (perf, systemtap) to see what it's doing. 
I intend a patch soon to add a couple of fields to struct WalSnd to report some very coarse reorder buffer stats - at least oldest buffered xid, number of buffered txns, total bytes of buffered txns in memory, total bytes of buffered txns spilled to disk.\n>\n> But sometimes what I really want is details on the txns that're in the reorder buffer, and that's not feasible to export via always-enabled reporting like struct WalSnd. So I'm thinking that the same approach used for the memory context stats patch might work well for asking the walsender for a detailed dump of reorder buffer contents. Something like a per-buffered-txn table of txn topxid, start-lsn, most recent change lsn, number of changes, number of subxids, number of invalidations, number of catalog changes, buffer size in memory, buffer size spilled to disk.\n>\n\n+1.\n\nWill be really useful to troubleshoot what's going on in the ReorderBuffer.\n\nIf we put that in WalSnd it will not be useful for the connections\nwhich are using normal backends to get logical changes through\nbuilt-in functions.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 5 May 2021 19:09:13 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Detailed reorder buffer stats dumps" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 18:33:27 +0800, Craig Ringer wrote:\n> I'm thinking of piggy-backing on the approach used in the \"Get memory\n> contexts of an arbitrary backend process\" patch in order to provide access\n> to detailed reorder buffer content statistics from walsenders on request.\n> \n> Right now the reorder buffer is mostly a black-box. I mostly rely on gdb or\n> on dynamic probes (perf, systemtap) to see what it's doing. 
I intend a\n> patch soon to add a couple of fields to struct WalSnd to report some very\n> coarse reorder buffer stats - at least oldest buffered xid, number of\n> buffered txns, total bytes of buffered txns in memory, total bytes of\n> buffered txns spilled to disk.\n> \n> But sometimes what I really want is details on the txns that're in the\n> reorder buffer, and that's not feasible to export via always-enabled\n> reporting like struct WalSnd. So I'm thinking that the same approach used\n> for the memory context stats patch might work well for asking the walsender\n> for a detailed dump of reorder buffer contents. Something like a\n> per-buffered-txn table of txn topxid, start-lsn, most recent change lsn,\n> number of changes, number of subxids, number of invalidations, number of\n> catalog changes, buffer size in memory, buffer size spilled to disk.\n> \n> Anyone drastically opposed to the idea?\n\nI am doubtful. The likelihood of ending with effectively unused code\nseems very substantial here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 11:28:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: RFC: Detailed reorder buffer stats dumps" }, { "msg_contents": "On Thu, 6 May 2021 at 02:28, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-05-05 18:33:27 +0800, Craig Ringer wrote:\n> > I'm thinking of piggy-backing on the approach used in the \"Get memory\n> > contexts of an arbitrary backend process\" patch in order to provide\n> access\n> > to detailed reorder buffer content statistics from walsenders on request.\n> >\n> > Right now the reorder buffer is mostly a black-box. I mostly rely on gdb\n> or\n> > on dynamic probes (perf, systemtap) to see what it's doing. 
I intend a\n> > patch soon to add a couple of fields to struct WalSnd to report some very\n> > coarse reorder buffer stats - at least oldest buffered xid, number of\n> > buffered txns, total bytes of buffered txns in memory, total bytes of\n> > buffered txns spilled to disk.\n> >\n> > But sometimes what I really want is details on the txns that're in the\n> > reorder buffer, and that's not feasible to export via always-enabled\n> > reporting like struct WalSnd. So I'm thinking that the same approach used\n> > for the memory context stats patch might work well for asking the\n> walsender\n> > for a detailed dump of reorder buffer contents. Something like a\n> > per-buffered-txn table of txn topxid, start-lsn, most recent change lsn,\n> > number of changes, number of subxids, number of invalidations, number of\n> > catalog changes, buffer size in memory, buffer size spilled to disk.\n> >\n> > Anyone drastically opposed to the idea?\n>\n> I am doubtful. The likelihood of ending with effectively unused code\n> seems very substantial here.\n>\n\nI can't rule that out, but the motivation for this proposal isn't\ndevelopment convenience. It's production support and operations. The\nreorder buffer is a black box right now, and when you're trying to answer\nthe questions \"what is the walsender doing,\" \"is meaningful progress being\nmade,\" and \"what is slowing down replication\" it's ... not easy.\n\nI currently rely on some fairly hairy gdb scripts, which I'm not keen on\nrunning on production systems if I can avoid it.\n\nI'm far from set on the approach of asking the walsender to dump a reorder\nbuffer state summary to a file. But I don't think the current state of\naffairs is much fun for production use. 
Especially since we prevent the\npg_stat_replication sent_lsn from going backwards, so reorder buffering can\ncause replication to appear to completely cease to progress for long\nperiods unless you identify the socket and monitor traffic on it, or you\nintrude on the process with gdb.\n\nAt the least it'd be helpful to have pg_stat_replication (or a new related\nauxiliary view like pg_stat_logical_decoding) show\n\n- walsender total bytes sent this session\n- number of txns processed this session\n- number of txns filtered out by output plugin this session\n- oldest xid in reorder buffer\n- reorder buffer number of txns\n- reorder buffer total size (in-memory and total inc spilled)\n- reorderbuffercommit current xid, last change lsn, total buffered size of\ncurrent tx, total bytes of buffer processed so far within the current txn,\nand commit lsn if known, only when currently streaming a txn from\nreorderbuffercommit\n\nThat way it'd be possible to tell if a logical walsender is currently\nprocessing a commit and get a much better sense of its progress within the\ncommit.\n\nPerhaps output plugins could do some of this and expose their own custom\nviews. But then each plugin would have to add its own. Plus they don't get\na particularly good view into the reorder buffer state; they'd have a hard\ntime maintaining good running stats.\n\nSome basic monitoring exposed for logical decoding and reorder buffering\nwould help a lot. 
Does that sound more palatable?\n\nIf so, I'd probably still want to be able to hook a few places in logical\ndecoding to allow an extension to instrument it when greater insight into\nthe inner workings is required.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 6 May 2021 12:24:38 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Detailed reorder buffer stats dumps" }, { "msg_contents": "On Thu, May 6, 2021 at 9:54 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> On Thu, 6 May 2021 at 02:28, Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2021-05-05 18:33:27 +0800, Craig Ringer wrote:\n>> > I'm thinking of piggy-backing on the approach used in the \"Get memory\n>> > contexts of an arbitrary backend process\" patch in order to provide access\n>> > to detailed reorder buffer content statistics from walsenders on request.\n>> >\n>> > Right now the reorder buffer is mostly a black-box. I mostly rely on gdb or\n>> > on dynamic probes (perf, systemtap) to see what it's doing. I intend a\n>> > patch soon to add a couple of fields to struct WalSnd to report some very\n>> > coarse reorder buffer stats - at least oldest buffered xid, number of\n>> > buffered txns, total bytes of buffered txns in memory, total bytes of\n>> > buffered txns spilled to disk.\n>> >\n>> > But sometimes what I really want is details on the txns that're in the\n>> > reorder buffer, and that's not feasible to export via always-enabled\n>> > reporting like struct WalSnd. So I'm thinking that the same approach used\n>> > for the memory context stats patch might work well for asking the walsender\n>> > for a detailed dump of reorder buffer contents. 
Something like a\n>> > per-buffered-txn table of txn topxid, start-lsn, most recent change lsn,\n>> > number of changes, number of subxids, number of invalidations, number of\n>> > catalog changes, buffer size in memory, buffer size spilled to disk.\n>> >\n>> > Anyone drastically opposed to the idea?\n>>\n>> I am doubtful. The likelihood of ending with effectively unused code\n>> seems very substantial here.\n>\n>\n> I can't rule that out, but the motivation for this proposal isn't development convenience. It's production support and operations. The reorder buffer is a black box right now, and when you're trying to answer the questions \"what is the walsender doing,\" \"is meaningful progress being made,\" and \"what is slowing down replication\" it's ... not easy.\n>\n> I currently rely on some fairly hairy gdb scripts, which I'm not keen on running on production systems if I can avoid it.\n>\n> I'm far from set on the approach of asking the walsender to dump a reorder buffer state summary to a file. But I don't think the current state of affairs is much fun for production use. 
Especially since we prevent the pg_stat_replication sent_lsn from going backwards, so reorder buffering can cause replication to appear to completely cease to progress for long periods unless you identify the socket and monitor traffic on it, or you intrude on the process with gdb.\n>\n> At the least it'd be helpful to have pg_stat_replication (or a new related auxiliary view like pg_stat_logical_decoding) show\n>\n> - walsender total bytes sent this session\n> - number of txns processed this txn\n>\n\nYou might be able to derive some of the above sorts of stats from the\nnewly added pg_stat_replication_slots [1].\n\n> - number txns filtered out by output plugin this session\n> - oldest xid in reorder buffer\n> - reorder buffer number of txns\n> - reorder buffer total size (in-memory and total inc spilled)\n> - reorderbuffercommit current xid, last change lsn, total buffered size of current tx, total bytes of buffer processed so far within the current txn, and commit lsn if known, only when currently streaming a txn from reorderbuffercommit\n>\n> That way it'd be possible to tell if a logical walsender is currently processing a commit and get a much better sense of its progress within the commit.\n>\n> Perhaps output plugins could do some of this and expose their own custom views. But then each plugin would have to add its own. Plus they don't get a particularly good view into the reorder buffer state; they'd have a hard time maintaining good running stats.\n>\n> Some basic monitoring exposed for logical decoding and reorder buffering would help a lot. 
Does that sound more palatable?\n>\n\nCan't we think of enhancing existing views or introduce a new view to\nprovide such information?\n\n[1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 May 2021 17:09:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Detailed reorder buffer stats dumps" }, { "msg_contents": "On Thu, 6 May 2021 at 19:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, May 6, 2021 at 9:54 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n> >\n\n> At the least it'd be helpful to have pg_stat_replication (or a new\n> related auxiliary view like pg_stat_logical_decoding) show\n> >\n> > - walsender total bytes sent this session\n> > - number of txns processed this txn\n> >\n>\n> You might be able to derive some of the above sorts of stats from the\n> newly added pg_stat_replication_slots [1].\n>\n>\nThat's a huge improvement that I managed to totally miss the discussion of\nand work on. Thanks. It'll be a significant help.\n'\n\n> > - number txns filtered out by output plugin this session\n> > - oldest xid in reorder buffer\n> > - reorder buffer number of txns\n> > - reorder buffer total size (in-memory and total inc spilled)\n> > - reorderbuffercommit current xid, last change lsn, total buffered size\n> of current tx, total bytes of buffer processed so far within the current\n> txn, and commit lsn if known, only when currently streaming a txn from\n> reorderbuffercommit\n>\n\nThese are less statistical in nature, and more about the current activity\nof the walsender and logical decoding state. 
I'm not sure if it'd make much\nsense to tack them on to pg_stat_replication_slots, especially as that'd\nalso mean they were quite delayed.\n\nBut it probably isn't worth the effort of exposing this info in a new view.\n\nWith the higher level info now available in pg_stat_replication_slots, I\nthink I might look into exposing these finer details via trace markers for\nuse with perf / systemtap / etc instead.\n\nA digression, but: It's a real shame that such tools don't give us a way to\nread specific tagged regions of memory with the same ease they let us probe\nfunction calls though. You generally need gdb to read the value of a\nglobal, or a moderately funky systemtap script. There's no convenient\nequivalent to SDT markers (TRACE_FOO) to tag variables. Wouldn't it be nice\nif we could\n\n perf watch postgres:walsender_reorderbuffer_oldest_xid\n\nor something like that?", "msg_date": "Tue, 29 Jun 2021 14:01:21 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Detailed reorder buffer stats dumps" } ]
[ { "msg_contents": "You might want to use pg_receivewal to save all of your WAL segments\nsomewhere instead of relying on archive_command. It has, at the least,\nthe advantage of working on the byte level rather than the segment\nlevel. But it seems to me that it is not entirely suitable as a\nsubstitute for archiving, for a couple of reasons. One is that as soon\nas it runs into a problem, it exits, which is not really what you want\nout of a daemon that's critical to the future availability of your\nsystem. Another is that you can't monitor it aside from looking at\nwhat it prints out, which is also not really what you want for a piece\nof critical infrastructure.\n\nThe first problem seems somewhat more straightforward. Suppose we add\na new command-line option, perhaps --daemon but we can bikeshed. If\nthis option is specified, then it tries to keep going when it hits a\nproblem, rather than just giving up. There's some fuzziness in my mind\nabout exactly what this should mean. If the problem we hit is that we\nlost the connection to the remote server, then we should try to\nreconnect. But if the problem is something like a failure inside\nopen_walfile() or close_walfile(), like a failed open() or fsync() or\nclose() or something, it's a little less clear what to do. Maybe one\nidea would be to have a parent process and a child process, where the\nchild process does all the work and the parent process just keeps\nre-launching it if it dies. 
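Just to illustrate the shape I mean, the outer loop would be something like this (a sketch only, in Python for brevity since the real thing would of course be C inside pg_receivewal, and the restart policy shown is arbitrary rather than a proposal):

```python
import subprocess
import sys
import time

def supervise(argv, max_restarts=None, delay=1.0):
    '''Re-launch argv each time it exits with a nonzero status.

    max_restarts=None means retry forever, which is presumably what a
    --daemon mode would want; the bound exists mainly so the sketch can
    be exercised.  Returns the number of restarts performed.
    '''
    restarts = 0
    while True:
        rc = subprocess.call(argv)
        if rc == 0:
            return restarts
        restarts += 1
        if max_restarts is not None and restarts > max_restarts:
            raise RuntimeError('worker kept failing; giving up')
        print('worker exited with status %d; restarting' % rc,
              file=sys.stderr)
        time.sleep(delay)

# Hypothetical usage -- the flags are real pg_receivewal options, but the
# directory and slot name here are made up:
#   supervise(['pg_receivewal', '-D', '/archive/wal', '-S', 'archive_slot'])
```

The hard part is everything this glosses over: deciding which kinds of failure are worth retrying at all, and how aggressively to back off.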
It's not entirely clear that this is a\nsuitable way of recovering from, say, an fsync() failure, given\nprevious discussions claiming that - and I might be exaggerating a bit\nhere - there is essentially no way to recover from a failed fsync()\nbecause the kernel might have already thrown out your data and you\nmight as well just set the data center on fire - but perhaps a retry\nsystem that can't cope with certain corner cases is better than not\nhaving one at all, and perhaps we could revise the logic here and\nthere to have the process doing the work take some action other than\nexiting when that's an intelligent approach.\n\nThe second problem is a bit more complex. If you were transferring WAL\nto another PostgreSQL instance rather than to a frontend process, you\ncould log to some place other than standard output, like for example a\nfile, and you could periodically rotate that file, or alternatively\nyou could log to syslog or the Windows event log. Even better, you\ncould connect to PostgreSQL and run SQL queries against monitoring\nviews and see what results you get. If the existing monitoring views\ndon't give users what they need, we can improve them, but the whole\ninfrastructure needed for this kind of thing is altogether lacking for\nany frontend program. It does not seem very appealing to reinvent log\nrotation, connection management, and monitoring views inside\npg_receivewal, let alone in every frontend process where similar\nmonitoring might be useful. But at least for me, without such\ncapabilities, it is a little hard to take pg_receivewal seriously.\n\nI wonder first of all whether other people agree with these concerns,\nand secondly what they think we ought to do about it. One option is -\ndo nothing. This could be based either on the idea that pg_receivewal\nis hopeless, or else on the idea that pg_receivewal can be restarted\nby some external system when required and monitored well enough as\nthings stand. 
A second option is to start building out capabilities in\npg_receivewal to turn it into something closer to what you'd expect of\na normal daemon, with the addition of a retry capability as probably\nthe easiest improvement. A third option is to somehow move towards a\nworld where you can use the server to move WAL around even if you\ndon't really want to run the server. Imagine a server running with no\ndata directory and only a minimal set of running processes, just (1) a\npostmaster and (2) a walreceiver that writes to an archive directory\nand (3) non-database-connected backends that are just smart enough to\nhandle queries for status information. This has the same problem that\nI mentioned on the thread about monitoring the recovery process,\nnamely that we haven't got pg_authid. But against that, you get a lot\nof infrastructure for free: configuration files, process management,\nconnection management, an existing wire protocol, memory contexts,\nrich error reporting, etc.\n\nI am curious to hear what other people think about the usefulness (or\nlack thereof) of pg_receivewal as things stand today, as well as ideas\nabout future direction.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 11:04:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "pg_receivewal makes a bad daemon" }, { "msg_contents": "On Wed, 2021-05-05 at 11:04 -0400, Robert Haas wrote:\n> You might want to use pg_receivewal to save all of your WAL segments\n> somewhere instead of relying on archive_command. It has, at the least,\n> the advantage of working on the byte level rather than the segment\n> level. But it seems to me that it is not entirely suitable as a\n> substitute for archiving, for a couple of reasons. One is that as soon\n> as it runs into a problem, it exits, which is not really what you want\n> out of a daemon that's critical to the future availability of your\n> system. 
Another is that you can't monitor it aside from looking at\n> what it prints out, which is also not really what you want for a piece\n> of critical infrastructure.\n> \n> The first problem seems somewhat more straightforward. Suppose we add\n> a new command-line option, perhaps --daemon but we can bikeshed. If\n> this option is specified, then it tries to keep going when it hits a\n> problem, rather than just giving up. [...]\n\nThat sounds like a good idea.\n\nI don't know what it takes to make that perfect (if such a thing exists),\nbut simply trying to re-establish database connections and dying when\nwe hit an I/O problem seems like a clear improvement.\n\n> The second problem is a bit more complex. [...]\n\nIf I wanted to monitor pg_receivewal, I'd have it use a replication\nslot and monitor \"pg_replication_slots\" on the primary. That way I see\nif there is a WAL sender process, and I can measure the lag in bytes.\n\nWhat more could you want?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 05 May 2021 17:48:19 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "On Wed, May 5, 2021 at 5:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> You might want to use pg_receivewal to save all of your WAL segments\n> somewhere instead of relying on archive_command. It has, at the least,\n> the advantage of working on the byte level rather than the segment\n> level. But it seems to me that it is not entirely suitable as a\n> substitute for archiving, for a couple of reasons. One is that as soon\n> as it runs into a problem, it exits, which is not really what you want\n> out of a daemon that's critical to the future availability of your\n> system. 
Another is that you can't monitor it aside from looking at\n> what it prints out, which is also not really what you want for a piece\n> of critical infrastructure.\n>\n> The first problem seems somewhat more straightforward. Suppose we add\n> a new command-line option, perhaps --daemon but we can bikeshed. If\n> this option is specified, then it tries to keep going when it hits a\n> problem, rather than just giving up. There's some fuzziness in my mind\n> about exactly what this should mean. If the problem we hit is that we\n> lost the connection to the remote server, then we should try to\n> reconnect. But if the problem is something like a failure inside\n> open_walfile() or close_walfile(), like a failed open() or fsync() or\n> close() or something, it's a little less clear what to do. Maybe one\n> idea would be to have a parent process and a child process, where the\n> child process does all the work and the parent process just keeps\n> re-launching it if it dies. It's not entirely clear that this is a\n> suitable way of recovering from, say, an fsync() failure, given\n> previous discussions claiming that - and I might be exaggerating a bit\n> here - there is essentially no way to recover from a failed fsync()\n> because the kernel might have already thrown out your data and you\n> might as well just set the data center on fire - but perhaps an retry\n> system that can't cope with certain corner cases is better than not\n> having one at all, and perhaps we could revise the logic here and\n> there to have the process doing the work take some action other than\n> exiting when that's an intelligent approach.\n\nIs this really a problem we should fix ourselves? Most daemon-managers\ntoday will happily be configured to automatically restart a daemon on\nfailure with a single setting since a long time now. E.g. 
in systemd\n(which most linuxen uses now) you just set Restart=on-failure (or\nmaybe even Restart=always) and something like RestartSec=10.\n\nThat said, it wouldn't cover an fsync() error -- they will always\nrestart. The way to handle that is for the operator to capture the\nerror message perhaps, and just \"deal with it\"?\n\nWhat could be more interesting there in a \"systemd world\" would be to\nadd watchdog support. That'd obviously only be interesting on systemd\nplatforms, but we already have some of that basic notification support\nin the postmaster for those.\n\n> The second problem is a bit more complex. If you were transferring WAL\n> to another PostgreSQL instance rather than to a frontend process, you\n> could log to some place other than standard output, like for example a\n> file, and you could periodically rotate that file, or alternatively\n> you could log to syslog or the Windows event log. Even better, you\n> could connect to PostgreSQL and run SQL queries against monitoring\n> views and see what results you get. If the existing monitoring views\n> don't give users what they need, we can improve them, but the whole\n> infrastructure needed for this kind of thing is altogether lacking for\n> any frontend program. It does not seem very appealing to reinvent log\n> rotation, connection management, and monitoring views inside\n> pg_receivewal, let alone in every frontend process where similar\n> monitoring might be useful. But at least for me, without such\n> capabilities, it is a little hard to take pg_receivewal seriously.\n\nAgain, isn't this the job of the daemon runner? At least in cases\nwhere it's not Windows :)? That is, taking the output and putting it\nin a log, and interfacing with log rotation.\n\nNow, having some sort of statistics *other* than parsing a log would\ndefinitely be useful. 
But perhaps that could be something as simple as\nhaving a --statsfile=/foo/bar parameter and then updating that one at\nregular intervals with \"whatever is the current state\"?\n\nAnd of course, the other point to monitor is the replication slot on\nthe server it's connected to -- but I agree that being able to monitor\nboth sides there would be good.\n\n\n> I wonder first of all whether other people agree with these concerns,\n> and secondly what they think we ought to do about it. One option is -\n> do nothing. This could be based either on the idea that pg_receivewal\n> is hopeless, or else on the idea that pg_receivewal can be restarted\n> by some external system when required and monitored well enough as\n> things stand. A second option is to start building out capabilities in\n> pg_receivewal to turn it into something closer to what you'd expect of\n> a normal daemon, with the addition of a retry capability as probably\n> the easiest improvement. A third option is to somehow move towards a\n> world where you can use the server to move WAL around even if you\n> don't really want to run the server. Imagine a server running with no\n> data directory and only a minimal set of running processes, just (1) a\n> postmaster and (2) a walreceiver that writes to an archive directory\n> and (3) non-database-connected backends that are just smart enough to\n> handle queries for status information. This has the same problem that\n> I mentioned on the thread about monitoring the recovery process,\n> namely that we haven't got pg_authid. 
But against that, you get a lot\n> of infrastructure for free: configuration files, process management,\n> connection management, an existing wire protocol, memory contexts,\n> rich error reporting, etc.\n>\n> I am curious to hear what other people think about the usefulness (or\n> lack thereof) of pg_receivewal as thing stand today, as well as ideas\n> about future direction.\n\nPer above, I'm thinking maybe our efforts are better directed at\ndocumenting ways to do it now?\n\nAlso, all the above also apply to pg_recvlogical, right? So if we do\nwant to invent our own daemon-init-system, we should probably do one\nmore generic that can handle both.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 5 May 2021 18:34:36 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "On Wed, May 5, 2021 at 12:34 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Is this really a problem we should fix ourselves? Most daemon-managers\n> today will happily be configured to automatically restart a daemon on\n> failure with a single setting since a long time now. E.g. in systemd\n> (which most linuxen uses now) you just set Restart=on-failure (or\n> maybe even Restart=always) and something like RestartSec=10.\n>\n> That said, it wouldn't cover an fsync() error -- they will always\n> restart. The way to handle that is for the operator to capture the\n> error message perhaps, and just \"deal with it\"?\n\nMaybe, but if that's really a non-problem, why does postgres itself\nrestart, and have facilities to write and rotate log files? I feel\nlike this argument boils down to \"a manual transmission ought to be\ngood enough for anyone, let's not have automatics.\" But over the years\npeople have found that automatics are a lot easier to drive. 
It may be\ntrue that if you know just how to configure your system's daemon\nmanager, you can make all of this work, but it's not like we document\nhow to do any of that, and it's probably not the same on every\nplatform - Windows in particular - and, really, why should people have\nto do this much work? If I want to run postgres in the background I\ncan just type 'pg_ctl start'. I could even put 'pg_ctl start' in my\ncrontab to make sure it gets restarted within a few minutes even if\nthe postmaster dies. If I want to keep pg_receivewal running all the\ntime ... I need a whole pile of extra mechanism to work around its\ninherent fragility. Documenting how that's typically done on modern\nsystems, as you propose further on, would be great, but I can't do it,\nbecause I don't know how to make it work. Hence the thread.\n\n> Also, all the above also apply to pg_recvlogical, right? So if we do\n> want to invent our own daemon-init-system, we should probably do one\n> more generic that can handle both.\n\nYeah. And I'm not really 100% convinced that trying to patch this\nfunctionality into pg_receive{wal,logical} is the best way forward ...\nbut I'm not entirely convinced that it isn't, either. I think one of\nthe basic problems with trying to deploy PostgreSQL in 2021 is that it\nneeds so much supporting infrastructure and so much babysitting.\narchive_command has to be a complicated, almost magical program we\ndon't provide, and we don't even tell you in the documentation that\nyou need it. If you don't want to use that, you can stream with\npg_receivewal instead, but now you need a complicated daemon-runner\nmechanism that we don't provide or document the need for. You also\nprobably need a connection pooler that we don't provide, a failover\nmanager that we don't provide, and backup management software that we\ndon't provide. And the interfaces that those tools have to work with\nare so awkward and primitive that even the tool authors can't always\nget it right. 
So I'm sort of unimpressed by any arguments that boil\ndown to \"what we have is good enough\" or \"that's the job of some other\npiece of software\". Too many things are the job of some piece of\nsoftware that doesn't really exist, or is only available on certain\nplatforms, or that has some other problem that makes it not usable for\neveryone. People want to be able to download and use PostgreSQL\nwithout needing a whole library of other bits and pieces from around\nthe Internet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 13:12:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "On Wed, May 05, 2021 at 01:12:03PM -0400, Robert Haas wrote:\n> On Wed, May 5, 2021 at 12:34 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > Is this really a problem we should fix ourselves? Most daemon-managers\n> > today will happily be configured to automatically restart a daemon on\n> > failure with a single setting since a long time now. E.g. in systemd\n> > (which most linuxen uses now) you just set Restart=on-failure (or\n> > maybe even Restart=always) and something like RestartSec=10.\n> >\n> > That said, it wouldn't cover an fsync() error -- they will always\n> > restart. The way to handle that is for the operator to capture the\n> > error message perhaps, and just \"deal with it\"?\n> \n> Maybe, but if that's really a non-problem, why does postgres itself\n> restart, and have facilities to write and rotate log files? I feel\n> like this argument boils down to \"a manual transmission ought to be\n> good enough for anyone, let's not have automatics.\" But over the years\n> people have found that automatics are a lot easier to drive. 
It may be\n> true that if you know just how to configure your system's daemon\n> manager, you can make all of this work, but it's not like we document\n> how to do any of that, and it's probably not the same on every\n> platform - Windows in particular - and, really, why should people have\n> to do this much work? If I want to run postgres in the background I\n> can just type 'pg_ctl start'. I could even put 'pg_ctl start' in my\n> crontab to make sure it gets restarted within a few minutes even if\n> the postmaster dies. If I want to keep pg_receivewal running all the\n> time ... I need a whole pile of extra mechanism to work around its\n> inherent fragility. Documenting how that's typically done on modern\n> systems, as you propose further on, would be great, but I can't do it,\n> because I don't know how to make it work. Hence the thread.\n> \n> > Also, all the above also apply to pg_recvlogical, right? So if we do\n> > want to invent our own daemon-init-system, we should probably do one\n> > more generic that can handle both.\n> \n> Yeah. And I'm not really 100% convinced that trying to patch this\n> functionality into pg_receive{wal,logical} is the best way forward ...\n> but I'm not entirely convinced that it isn't, either. I think one of\n> the basic problems with trying to deploy PostgreSQL in 2021 is that it\n> needs so much supporting infrastructure and so much babysitting.\n> archive_command has to be a complicated, almost magical program we\n> don't provide, and we don't even tell you in the documentation that\n> you need it. If you don't want to use that, you can stream with\n> pg_receivewal instead, but now you need a complicated daemon-runner\n> mechanism that we don't provide or document the need for. You also\n> probably need a connection pooler that we don't provide, a failover\n> manager that we don't provide, and backup management software that we\n> don't provide. 
And the interfaces that those tools have to work\n> are so awkward and primitive that even the tool authors can't always\n> get it right. So I'm sort of unimpressed by any arguments that boil\n> down to \"what we have is good enough\" or \"that's the job of some other\n> piece of software\". Too many things are the job of some piece of\n> software that doesn't really exist, or is only available on certain\n> platforms, or that has some other problem that makes it not usable for\n> everyone. People want to be able to download and use PostgreSQL\n> without needing a whole library of other bits and pieces from around\n> the Internet.\n\nWe do use at least one bit and piece from around the internet to make\nour software usable, namely libreadline, the absence of which makes\npsql pretty much unusable.\n\nThat out of the way, am I understanding correctly that you're\nproposing that we make tools for daemon-izing, logging, connection\nmanagement, and failover, and ship same with PostgreSQL? I can see the\nappeal for people shipping proprietary forks of PostgreSQL,\nespecially ones under restrictive licenses, and I guess we could make\na pretty good case for continuing to center those interests as we have\nsince the Berkeley days. Rather than, or maybe as a successor to,\nwiring such things into each tool we ship that requires them, I'd\npicture something along the lines of .sos that could then be\nrepurposed, modified, etc., as we provide with the distribution as it\nis now.\n\nAnother possibility would be to look around for mature capabilities\nthat are cross-platform in the sense that they work on all the\nplatforms we do. While I don't think it's likely we'd find them for\nall the above use cases under compatible licenses, it's probably worth\na look.
At worst, we'd get some idea of how (not) to design the APIs\nto them.\n\nI'm going to guess that anything with an incompatible license will\nupset people who are accustomed to ensuring that we have what legally\namounts to an MIT license clean distribution, but I'm thinking that\noption is at least worth discussing, even if the immediate consensus\nis, \"libreadline is bad enough. We went to a lot of trouble to purge\nthat other stuff back in the bad old days. Let's not make that mistake\nagain.\"\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 6 May 2021 02:42:27 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "On Wed, May 5, 2021 at 10:42 PM David Fetter <david@fetter.org> wrote:\n> We do use at least one bit and piece from around the internet to make\n> our software usable, namely libreadline, the absence of which make\n> psql pretty much unusable.\n\nI'm not talking about dependent libraries. We obviously have to depend\non some external libraries; it would be crazy to write our own\nversions of libreadline, zlib, glibc, and everything else we use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 23:43:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "On Wed, May 5, 2021 at 7:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, May 5, 2021 at 12:34 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > Is this really a problem we should fix ourselves? Most daemon-managers\n> > today will happily be configured to automatically restart a daemon on\n> > failure with a single setting since a long time now. E.g. 
in systemd\n> > (which most linuxen uses now) you just set Restart=on-failure (or\n> > maybe even Restart=always) and something like RestartSec=10.\n> >\n> > That said, it wouldn't cover an fsync() error -- they will always\n> > restart. The way to handle that is for the operator to capture the\n> > error message perhaps, and just \"deal with it\"?\n>\n> Maybe, but if that's really a non-problem, why does postgres itself\n> restart, and have facilities to write and rotate log files? I feel\n> like this argument boils down to \"a manual transmission ought to be\n> good enough for anyone, let's not have automatics.\" But over the years\n> people have found that automatics are a lot easier to drive. It may be\n> true that if you know just how to configure your system's daemon\n> manager, you can make all of this work, but it's not like we document\n> how to do any of that, and it's probably not the same on every\n> platform - Windows in particular - and, really, why should people have\n> to do this much work? If I want to run postgres in the background I\n> can just type 'pg_ctl start'. I could even put 'pg_ctl start' in my\n> crontab to make sure it gets restarted within a few minutes even if\n> the postmaster dies. If I want to keep pg_receivewal running all the\n> time ... I need a whole pile of extra mechanism to work around its\n> inherent fragility. Documenting how that's typically done on modern\n> systems, as you propose further on, would be great, but I can't do it,\n> because I don't know how to make it work. Hence the thread.\n\nIf PostgreSQL was built today, I'm not sure we would've built that\nfunctionality TBH.\n\nThe vast majority of people are not interested in manually starting\npostgres and then putting in a crontab to \"restart it if it fails\".\nThat's not how anybody runs a server and hasn't for a long time.\n\nIt might be interesting for us as developers, but not to the vast\nmajority of our users. 
Most of those get their startup scripts from\nour packagers -- so maybe we should encourage packagers to provide it,\nlike they do for PostgreSQL itself. But I don't think adding log\nrotations and other independent functionality to pg_receivexyz would\nhelp almost anybody in our user base.\n\nIn relation to the other thread about pid 1 handling and containers --\nif anything, I bet a larger portion of our users would be interested\nin running pg_receivewal in a dedicated container, than would want to\nstart it manually and verify it's running using crontab... By a large\nmargin.\n\nIt is true that Windows is a special case in this. But it is, I'd say,\nequally true that adding something akin to \"pg_ctl start\" for\npg_receivewal would be equally useless on Windows.\n\nWe can certainly build and add such functionality. But my feeling is\nthat it's going to be added complexity for very little practical gain.\nMuch of the server world moved to \"we don't want every single daemon\nto implement it it's own way, ever so slightly different\".\n\nI like your car analogy though. But I'd consider it more like \"we used\nto have to mix the right amount of oil into the gasoline manually. But\nmodern engines don't really require us to do that anymore, so most\npeople have stopped, only those who want very special cars do\". Or\nsomething along that line. (Reality is probably somewhere in between,\nand I suck at car analogies)\n\n\n> > Also, all the above also apply to pg_recvlogical, right? So if we do\n> > want to invent our own daemon-init-system, we should probably do one\n> > more generic that can handle both.\n>\n> Yeah. And I'm not really 100% convinced that trying to patch this\n> functionality into pg_receive{wal,logical} is the best way forward ...\n\nIt does in a lot of ways amount to basically a daemon-init system. It\nmight be easier to just vendor one of the existing ones :) Or more\nrealistically, suggest they use something that's already on their\nsystem. 
On linux that'll be systemd, on *bsd it'll probably be\nsomething like supervisord, on mac it'll be launchd. But this is\nreally more a function of the operating system/distribution.\n\nWindows is again the one that stands out. But PostgreSQL *already*\ndoes a pretty weak job of solving that problem on Windows, so\nduplicating that is not that strong a win..\n\n\n> but I'm not entirely convinced that it isn't, either. I think one of\n> the basic problems with trying to deploy PostgreSQL in 2021 is that it\n> needs so much supporting infrastructure and so much babysitting.\n> archive_command has to be a complicated, almost magical program we\n> don't provide, and we don't even tell you in the documentation that\n> you need it. If you don't want to use that, you can stream with\n> pg_receivewal instead, but now you need a complicated daemon-runner\n> mechanism that we don't provide or document the need for. You also\n> probably need a connection pooler that we don't provide, a failover\n> manager that we don't provide, and backup management software that we\n> don't provide. And the interfaces that those tools have to work with\n> are so awkward and primitive that even the tool authors can't always\n> get it right. So I'm sort of unimpressed by any arguments that boil\n> down to \"what we have is good enough\" or \"that's the job of some other\n> piece of software\". Too many things are the job of some piece of\n> software that doesn't really exist, or is only available on certain\n> platforms, or that has some other problem that makes it not usable for\n> everyone. People want to be able to download and use PostgreSQL\n> without needing a whole library of other bits and pieces from around\n> the Internet.\n\nI definitely don't think what we have is good enough, and I agree with\nyour general description of the problem.\n\nI just don't think turning a simple tool into a more complicated\ndaemon is going to help with that in any material way.
You still\nneed some sort of *backup management* on that side, otherwise your\npg_receivewal will now be the one that fills your disk along with the\noutputs of your pg_basebackups. So we'd be better off providing that\nmanagement tool, which could then drive the lower level tools as\nnecessary.\n\nOr maybe the better solution in that case would perhaps be to actually\nbless one of the existing solutions out there by making it the\nofficial one.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 7 May 2021 12:03:36 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "On Thu, May 6, 2021 at 5:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, May 5, 2021 at 10:42 PM David Fetter <david@fetter.org> wrote:\n> > We do use at least one bit and piece from around the internet to make\n> > our software usable, namely libreadline, the absence of which make\n> > psql pretty much unusable.\n\nFWIW, we did go with the idea of using readline. Which doesn't work\nproperly on Windows. So this is an excellent example of how we're\nalready not solving the problem for Windows users, but are apparently\nOK with it in this case.\n\n\n> I'm not talking about dependent libraries. We obviously have to depend\n> on some external libraries; it would be crazy to write our own\n> versions of libreadline, zlib, glibc, and everything else we use.\n\nWhy is that more crazy than building our own limited version of\nsupervisord? 
readline and glibc might be one thing, but zlib (at least\nthe parts we use) is probably less complex than building our own cross\nplatform daemon-management.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 7 May 2021 12:05:18 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "\nOn 05.05.21 19:12, Robert Haas wrote:\n> Maybe, but if that's really a non-problem, why does postgres itself\n> restart, and have facilities to write and rotate log files?\n\nI think because those were invented at a time when the operating system \nfacilities were less useful. And the log management facilities aren't \neven very good, because there is no support for remote logging.\n\n> It may be\n> true that if you know just how to configure your system's daemon\n> manager, you can make all of this work, but it's not like we document\n> how to do any of that, and it's probably not the same on every\n> platform - Windows in particular - and, really, why should people have\n> to do this much work? If I want to run postgres in the background I\n> can just type 'pg_ctl start'.\n\nNot really a solution, because systemd will kill it when you log out.\n\n> Documenting how that's typically done on modern\n> systems, as you propose further on, would be great, but I can't do it,\n> because I don't know how to make it work. 
Hence the thread.\n\nThat is probably effort better spent.\n\nI think the issues that you alluded to, what should be done in case of \nwhat error, is important to work out in detail and document in any case, \nbecause it will be the foundation of any of the other solutions.\n\n\n", "msg_date": "Tue, 11 May 2021 11:46:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 18:34:36 +0200, Magnus Hagander wrote:\n> Is this really a problem we should fix ourselves? Most daemon-managers\n> today will happily be configured to automatically restart a daemon on\n> failure with a single setting since a long time now. E.g. in systemd\n> (which most linuxen uses now) you just set Restart=on-failure (or\n> maybe even Restart=always) and something like RestartSec=10.\n\nI'm not convinced by this. For two main reasons:\n\n1) Our own code can know a lot more about the different error types than\n we can signal to systemd. The retry timeouts for e.g. a connection\n failure (whatever) is different than for fsync failing (alarm\n alarm). If we run out of space we might want to clean up space /\n invoke a command to do so, but there's nothing equivalent for\n systemd.\n\n2) Do we really want to either implement at least 3 different ways to do\n this kind of thing, or force users to do it over and over again?\n\nThat's not to say that there's no space for handling \"unexpected\" errors\noutside of postgres binaries, but I think it's pretty obvious that that\ndoesn't cover somewhat predictable types of errors.\n\n\nAnd looking at the server side of things - it is *not* the same for\nsystemd to restart postgres, as postmaster doing so internally. The\nlatter can hold on onto shared memory. Which e.g. with simple huge_pages\nconfigurations is crucial, because it prevents other processes to use\nthat shared memory. 
And it accelerates restart by a lot - the kernel\nneeding to zero shared memory on first access (or allocation) can be a\nvery significant penalty.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 13:10:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 12:03:36 +0200, Magnus Hagander wrote:\n> It might be interesting for us as developers, but not to the vast\n> majority of our users. Most of those get their startup scripts from\n> our packagers -- so maybe we should encourage packagers to provide it,\n> like they do for PostgreSQL itself.\n\nI think that's the entirely wrong direction to go. A lot of the\nusability problems around postgres precisely stem from us doing this\nkind of thing, where the user experience then ends up wildly varying,\nincomplete and incomprehensible.\n\nThat's not to say that we need to reimplement everything just for a\nconsistent experience. But just punting crucial things like how a\narchiving can be made reliable in face of normal-ish errors, and how it\ncan be monitored is just going to further force people to move purely\nonto managed services.\n\n\n> Or maybe the better solution in that case would perhaps be to actually\n> bless one of the existing solutions out there by making it the\n> official one.\n\nWhich existing system currently does provide an archiving solution that\ndoes not imply the very significant overhead of archive_command? Even if\nan archiving solution internally batches things, the fsyncs, filesystem\nmetadata operations for .ready .done are a *significant* cost and all\nthe forks are not cheap either.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 13:22:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal makes a bad daemon" } ]
[ { "msg_contents": "Problem: I have to set IntervalStyle in separate statement to convert\ninterval type to ISO8601 string.\n\nThis isn't well supported by ORMs or similar tools, requiring us to\nset it globally (per role or per database).\n\nProblem #2 (exotic): you can't mix two output styles in a single query.\n\nProposed solution: special case to_char function to accept values accepted\nby\nSET intervalstyle to 'XYZ'\n\nSo:\n\nSELECT to_char(INTERVAL '8 minutes', 'iso_8601')\n\nwill act similar to\n\nSET intervalstyle TO 'iso_8601';\nSELECT (INTERVAL '8 minutes')::text\nRESET interval_style;", "msg_date": "Wed, 5 May 2021 17:06:44 +0200", "msg_from": "Michał Wadas <michalwadas@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: per expression intervalstyle" } ]
[ { "msg_contents": "Hi,\n\n From time to time, I need to deal with bizarrely delimited text files,\nhaving to use tools such as awk/sed/perl to reformat the files\nso that they can be copied into PostgreSQL.\n\nIf such files could be imported to a table with a single text column,\nwe could then use PostgreSQL's now really fast regex-engine\nto do data cleaning/reformatting, instead of having to rely on external awk-ward tools.\n\nFurthermore, sometimes you don't want to clean/reformat the data at all,\nbut simply import the text lines \"as is\" without modifications,\nsuch as when wanting to import unformatted log files,\nwhere the log lines can contain any characters.\n\nCould it be an idea to exploit the fact that DELIMITER E'\\n' is currently an error?\n\n ERROR: COPY delimiter cannot be newline or carriage return\n\nThat is, to change E'\\n' to be a valid delimiter, which would simply read each line\ndelimited by newlines, as a single column.\n\nThe hack I'm currently abusing is to find some one-byte character that is not present anywhere in the text file,\nand then to use that character as a delimiter. 
This doesn't work when needing to deal with a text file\nwhich content is unknown at the time when writing the code though, so it's mostly useful for throwaway one-off queries.\n\nThoughts?\n\n/Joel\n\nMySQL seems to already support using \\n as a delimiter (I haven't verified it myself though) [1]\n\n[1] https://stackoverflow.com/questions/18394620/postgres-import-file-that-has-columns-separated-by-new-lines", "msg_date": "Wed, 05 May 2021 17:30:30 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "COPY table_name (single_column) FROM 'unknown.txt' DELIMITER E'\\n'" }, { "msg_contents": "On Wed, May 5, 2021 at 8:31 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> Could it be an idea to exploit the fact that DELIMITER E'\\n' is currently\n> an error?\n>\n>\nWhy not just allow: \"DELIMITER NONE\" to be valid syntax meaning exactly\nwhat it says and does exactly what you desire?\n\nDavid J.", "msg_date": "Wed, 5 May 2021 10:02:33 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On 05/05/21 13:02, David G. Johnston wrote:\n> Why not just allow: \"DELIMITER NONE\" to be valid syntax meaning exactly\n> what it says and does exactly what you desire?\n\nWhat would it mean?
That you get one column, multiple rows of text\ncorresponding to \"lines\" delimited by something, or that you get one\ncolumn, one row of text for the entire content of the file?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 5 May 2021 13:23:01 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, 5 May 2021 at 13:23, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 05/05/21 13:02, David G. Johnston wrote:\n> > Why not just allow: \"DELIMITER NONE\" to be valid syntax meaning exactly\n> > what it says and does exactly what you desire?\n>\n> What would it mean? That you get one column, multiple rows of text\n> corresponding to \"lines\" delimited by something, or that you get one\n> column, one row of text for the entire content of the file?\n>\n\nIt means no column delimiter. In other words, there is no character\nwhich marks the end of a data value, so the entire line is a single data\nvalue.\n\nWould DELIMITER NULL make sense? The existing values are literal strings so\nNULL fits with that. Do we already have NONE as a keyword somewhere? It's\nlisted in the keyword appendix to the documentation but I can't think of\nwhere it is used off the top of my head.", "msg_date": "Wed, 5 May 2021 13:34:18 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, May 5, 2021 at 10:34 AM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Wed, 5 May 2021 at 13:23, Chapman Flack <chap@anastigmatix.net> wrote:\n>\n>> On 05/05/21 13:02, David G. Johnston wrote:\n>> > Why not just allow: \"DELIMITER NONE\" to be valid syntax meaning exactly\n>> > what it says and does exactly what you desire?\n>>\n>> What would it mean? That you get one column, multiple rows of text\n>> corresponding to \"lines\" delimited by something, or that you get one\n>> column, one row of text for the entire content of the file?\n>>\n>\n> It means no column delimiter. In other words, there is no character\n> which marks the end of a data value, so the entire line is a single data\n> value.\n>\n>\nThis. When dealing with COPY it's expected that each line becomes its own\nrow. On the server you can do pg_read_file() if you need the entire file\nto be considered a single value. psql (\\I and variables) is a bit more\nhackey, but I'd rather see that improved directly anyway if the goal is to\ntry and make getting the \"whole document\" easier - copy isn't the right API\nfor that IMO.\n\nDavid J.", "msg_date": "Wed, 5 May 2021 10:58:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, May 5, 2021, at 19:58, David G. Johnston wrote:\n> On Wed, May 5, 2021 at 10:34 AM Isaac Morland <isaac.morland@gmail.com> wrote:\n>> On Wed, 5 May 2021 at 13:23, Chapman Flack <chap@anastigmatix.net> wrote:\n>>> On 05/05/21 13:02, David G. Johnston wrote:\n>>> > Why not just allow: \"DELIMITER NONE\" to be valid syntax meaning exactly\n>>> > what it says and does exactly what you desire?\n>>> \n>>> What would it mean? That you get one column, multiple rows of text\n>>> corresponding to \"lines\" delimited by something, or that you get one\n>>> column, one row of text for the entire content of the file?\n>> \n>> It means no column delimiter. In other words, there is no character which marks the end of a data value, so the entire line is a single data value.\n>> \n> \n> This. When dealing with COPY it's expected that each line becomes its own row. On the server you can do pg_read_file() if you need the entire file to be considered a single value.
psql (\\I and variables) is a bit more hackey, but I'd rather see that improved directly anyway if the goal is to try and make getting the \"whole document\" easier - copy isn't the right API for that IMO.\n\nI think you misunderstood the problem.\nI don't want the entire file to be considered a single value.\nI want each line to become its own row, just a row with a single column.\n\nSo I actually think COPY seems like a perfect match for the job,\nsince it does precisely that, except there is no delimiter in this case.\n\nI'm currently using the pg_read_file()-hack in a project,\nand even though it can read files up to 1GB,\nusing e.g. regexp_split_to_table() to split on E'\\n'\nseems to need 4x as much memory, so it only\nworks with files less than ~256MB.\n\nSELECT COUNT(*) FROM regexp_split_to_table(repeat(E'\\n',1000000000),E'\\n');\nERROR:  invalid memory alloc request size 4000000004\nTime: 4151.374 ms (00:04.151)\n\n/Joel", "msg_date": "Wed, 05 May 2021 20:35:03 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, May 5, 2021, at 19:34, Isaac Morland wrote:\n> Would DELIMITER NULL make sense? The existing values are literal strings so NULL fits with that. Do we already have NONE as a keyword somewhere? It's listed in the keyword appendix to the documentation but I can't think of where it is used off the top of my head.\n\n+1 to using some keyword. NULL or NONE seems fine to me. Or maybe WITHOUT DELIMITER?\n\n/Joel", "msg_date": "Wed, 05 May 2021 20:38:51 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> I think you misunderstood the problem.\n> I don't want the entire file to be considered a single value.\n> I want each line to become its own row, just a row with a single column.\n\n> So I actually think COPY seems like a perfect match for the job,\n> since it does precisely that, except there is no delimiter in this case.\n\nWell, there's more to it than just the column delimiter.\n\n* What about \\N being converted to NULL?\n* What about \\. being treated as EOF?\n* Do you want to turn off the special behavior of backslash (ESCAPE)\n  altogether?\n* What about newline conversions (\\r\\n being seen as just \\n, etc)?\n\nI'm inclined to think that \"use pg_read_file and then split at newlines\"\nmight be a saner answer than delving into all these fine points.\nNot least because people yell when you add cycles to the COPY\ninner loops.\n\n> I'm currently using the pg_read_file()-hack in a project,\n> and even though it can read files up to 1GB,\n> using e.g. regexp_split_to_table() to split on E'\\n'\n> seems to need 4x as much memory, so it only\n> works with files less than ~256MB.\n\nYeah, that's because of the conversion to \"chr\". But a regexp\nis overkill for that anyway.
Don't we have something that will\nsplit on simple substring matches?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 14:45:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re:_COPY_table=5Fname_(single=5Fcolumn)_FROM_'unknown.txt'_DEL?=\n =?UTF-8?Q?IMITER_E'\\n'?=" }, { "msg_contents": "\nOn 5/5/21 2:45 PM, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n>> I think you misunderstood the problem.\n>> I don't want the entire file to be considered a single value.\n>> I want each line to become its own row, just a row with a single column.\n>> So I actually think COPY seems like a perfect match for the job,\n>> since it does precisely that, except there is no delimiter in this case.\n> Well, there's more to it than just the column delimiter.\n>\n> * What about \\N being converted to NULL?\n> * What about \\. being treated as EOF?\n> * Do you want to turn off the special behavior of backslash (ESCAPE)\n> altogether?\n> * What about newline conversions (\\r\\n being seen as just \\n, etc)?\n>\n> I'm inclined to think that \"use pg_read_file and then split at newlines\"\n> might be a saner answer than delving into all these fine points.\n> Not least because people yell when you add cycles to the COPY\n> inner loops.\n\n\n\n+1\n\n\nAlso we have generally been resistant to supporting odd formats. FDWs\ncan help here (e.g. file_text_array), but they can't use STDIN IIRC.\n\n\n>\n>> I'm currently using the pg_read_file()-hack in a project,\n>> and even though it can read files up to 1GB,\n>> using e.g. regexp_split_to_table() to split on E'\\n'\n>> seems to need 4x as much memory, so it only\n>> works with files less than ~256MB.\n> Yeah, that's because of the conversion to \"chr\". But a regexp\n> is overkill for that anyway. Don't we have something that will\n> split on simple substring matches?\n>\n> \t\t\t\n\n\n\nNot that I know of. 
There is split_part but I don't think that's fit for\npurpose here. Do we need one, or have I missed something?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 May 2021 15:22:17 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, May 05, 2021 at 02:45:41PM -0400, Tom Lane wrote:\n> > I'm currently using the pg_read_file()-hack in a project,\n> > and even though it can read files up to 1GB,\n> > using e.g. regexp_split_to_table() to split on E'\\n'\n> > seems to need 4x as much memory, so it only\n> > works with files less than ~256MB.\n> \n> Yeah, that's because of the conversion to \"chr\". But a regexp\n> is overkill for that anyway. Don't we have something that will\n> split on simple substring matches?\n\nFor v14\n\ncommit 66f163068030b5c5fe792a0daee27822dac43791\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Sep 2 18:23:56 2020 -0400\n\n Add string_to_table() function.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 5 May 2021 14:36:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "\nOn 5/5/21 3:36 PM, Justin Pryzby wrote:\n> On Wed, May 05, 2021 at 02:45:41PM -0400, Tom Lane wrote:\n>>> I'm currently using the pg_read_file()-hack in a project,\n>>> and even though it can read files up to 1GB,\n>>> using e.g. regexp_split_to_table() to split on E'\\n'\n>>> seems to need 4x as much memory, so it only\n>>> works with files less than ~256MB.\n>> Yeah, that's because of the conversion to \"chr\". But a regexp\n>> is overkill for that anyway. 
Don't we have something that will\n>> split on simple substring matches?\n> For v14\n>\n> commit 66f163068030b5c5fe792a0daee27822dac43791\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Wed Sep 2 18:23:56 2020 -0400\n>\n> Add string_to_table() function.\n>\n\nHa! just in time :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 May 2021 15:51:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/5/21 2:45 PM, Tom Lane wrote:\n>> Yeah, that's because of the conversion to \"chr\". But a regexp\n>> is overkill for that anyway. Don't we have something that will\n>> split on simple substring matches?\n\n> Not that I know of. There is split_part but I don't think that's fit for\n> purpose here. Do we need one, or have I missed something?\n\n[ checks manual ... ]\n\nstring_to_array or string_to_table would do, I think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 15:51:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, May 5, 2021, at 21:51, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net <mailto:andrew%40dunslane.net>> writes:\n> > On 5/5/21 2:45 PM, Tom Lane wrote:\n> >> Yeah, that's because of the conversion to \"chr\". But a regexp\n> >> is overkill for that anyway. Don't we have something that will\n> >> split on simple substring matches?\n> \n> > Not that I know of. There is split_part but I don't think that's fit for\n> > purpose here. Do we need one, or have I missed something?\n> \n> [ checks manual ... 
]\n> \n> string_to_array or string_to_table would do, I think.\n> \n> regards, tom lane\n> \n\nThanks for these new functions, they seem really useful for a lot of cases.\n\nHowever, I see two problems with using string_to_table() for this particular use-case.\n \n- Doesn't work with files larger than 1GB, due to pg_read_file()'s limit.\n- 68% slower than using the COPY-hack.\n\n% ls -lah foo.txt\n-rw-r--r--  1 joel  staff   623M May  6 07:31 foo.txt\n\n% wc -l foo.txt\n6771864 foo.txt\n\n# \\d txt\n               Table \"public.txt\"\nColumn | Type | Collation | Nullable | Default\n--------+------+-----------+----------+---------\nline   | text |           |          |\n\n# COPY txt (line) FROM 'foo.txt' DELIMITER '\"';\nCOPY 6771864\nTime: 9829.707 ms (00:09.830)\nTime: 9552.286 ms (00:09.552)\nTime: 9483.115 ms (00:09.483)\n\n# TRUNCATE txt;\nTRUNCATE TABLE\n\n# INSERT INTO txt (line) SELECT string_to_table(pg_read_file('foo.txt'),E'\\n');\nINSERT 0 6771865\nTime: 16556.078 ms (00:16.556)\nTime: 14720.343 ms (00:14.720)\nTime: 17266.088 ms (00:17.266)\n\n/Joel", "msg_date": "Thu, 06 May 2021 07:52:11 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_COPY_table=5Fname_(single=5Fcolumn)_FROM_'unknown.txt'_DEL?=\n =?UTF-8?Q?IMITER_E'\\n'?=" }, { "msg_contents": "Joel Jacobson schrieb am 05.05.2021 um 17:30:\n> Could it be an idea to exploit the fact that DELIMITER E'\\n' is currently an error?\n>\n>     ERROR:  COPY delimiter cannot be newline or carriage return\n>\n> That is, to change E'\\n' to be a valid delimiter, which would simply read each line\n> delimited by newlines, as a single column.\n>\n> The hack I'm currently abusing is to find some one-byte character that is not present anywhere in the text file,\n> and then to use that character as a delimiter. 
This doesn't work when needing to deal with a text file\n> which content is unknown at the time when writing the code though, so it's mostly useful for throwaway one-off queries.\n\nWhat about\n\n delimiter E'\\1'\n\nThe probability that a file contains the ASCII \"character\" 1 seems rather low.\n\nThomas\n\n\n\n\n", "msg_date": "Thu, 6 May 2021 08:10:20 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Wed, May 5, 2021, at 20:45, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org <mailto:joel%40compiler.org>> writes:\n> > I think you misunderstood the problem.\n> > I don't want the entire file to be considered a single value.\n> > I want each line to become its own row, just a row with a single column.\n> \n> > So I actually think COPY seems like a perfect match for the job,\n> > since it does precisely that, except there is no delimiter in this case.\n> \n> Well, there's more to it than just the column delimiter.\n> \n> * What about \\N being converted to NULL?\n> * What about \\. 
being treated as EOF?\n> * Do you want to turn off the special behavior of backslash (ESCAPE)\n>   altogether?\n> * What about newline conversions (\\r\\n being seen as just \\n, etc)?\n> \n> I'm inclined to think that \"use pg_read_file and then split at newlines\"\n> might be a saner answer than delving into all these fine points.\n> Not least because people yell when you add cycles to the COPY\n> inner loops.\n\nThanks for providing strong arguments why the COPY approach is a dead-end, I agree.\n\nHowever, as demonstrated in my previous email, using\n\n   string_to_table(pg_read_file( filename ), E'\\n')\n\nhas its performance as well as max size issues.\n\nMaybe these two problems could be solved by combining the two functions into one?\n\n   file_to_table ( filename text, delimiter text [, null_string text ] ) → setof text\n\nI'm thinking thanks to returning \"setof text\", such a function could read a stream,\nand return a line as soon as a delimiter is encountered, not having to keep\nthe entire file in memory at any time.\n\n/Joel", "msg_date": "Thu, 06 May 2021 08:13:48 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_COPY_table=5Fname_(single=5Fcolumn)_FROM_'unknown.txt'_DEL?=\n =?UTF-8?Q?IMITER_E'\\n'?=" }, { "msg_contents": "I have similar problems and what is really needed is a way to get a file\nfrom client side into a server side object that can be dealt with later.\nThe most popular way is COPY and it is built into the psql tool. In general\nit supports \\copy wrapper, and there is COPY FROM STDIN. However, it is not\navailable to the files that are not following the csv-like structure. I had\nto use it for XML and huge JSON files before, and it's always `sed` before\nthe import and a replace() after.\n\npg_read_file does not help on cloud and managed installs of postgres here.\n\nWhat I would prefer is some new COPY mode like RAW that will just push\nwhatever it gets on the stdin/input into the cell on the server side. 
This\nway it can be proxied by psql, utilize existing infra for passing streams\nand be used in shell scripting.\n\n\n\nOn Thu, May 6, 2021 at 9:14 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Wed, May 5, 2021, at 20:45, Tom Lane wrote:\n>\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > I think you misunderstood the problem.\n> > I don't want the entire file to be considered a single value.\n> > I want each line to become its own row, just a row with a single column.\n>\n> > So I actually think COPY seems like a perfect match for the job,\n> > since it does precisely that, except there is no delimiter in this case.\n>\n> Well, there's more to it than just the column delimiter.\n>\n> * What about \\N being converted to NULL?\n> * What about \\. being treated as EOF?\n> * Do you want to turn off the special behavior of backslash (ESCAPE)\n> altogether?\n> * What about newline conversions (\\r\\n being seen as just \\n, etc)?\n>\n> I'm inclined to think that \"use pg_read_file and then split at newlines\"\n> might be a saner answer than delving into all these fine points.\n> Not least because people yell when you add cycles to the COPY\n> inner loops.\n>\n>\n> Thanks for providing strong arguments why the COPY approach is a dead-end,\n> I agree.\n>\n> However, as demonstrated in my previous email, using\n>\n> string_to_table(pg_read_file( filename ), E'\\n')\n>\n> has its performance as well as max size issues.\n>\n> Maybe these two problems could be solved by combining the two functions\n> into one?\n>\n> file_to_table ( filename text, delimiter text [, null_string text ] ) →\n> setof text\n>\n> I'm thinking thanks to returning \"setof text\", such a function could read\n> a stream,\n> and return a line as soon as a delimiter is encountered, not having to keep\n> the entire file in memory at any time.\n>\n> /Joel\n>\n\n\n-- \nDarafei \"Komяpa\" Praliaskouski\nOSM BY Team - http://openstreetmap.by/\n\nI have similar problems and what is really needed is a 
way to get a file from client side into a server side object that can be dealt with later.The most popular way is COPY and it is built into the psql tool. In general it supports \\copy wrapper, and there is COPY FROM STDIN. However, it is not available to the files that are not following the csv-like structure. I had to use it for XML and huge JSON files before, and it's always `sed` before the import and a replace() after.pg_read_file does not help on cloud and managed installs of postgres here.What I would prefer is some new COPY mode like RAW that will just push whatever it gets on the stdin/input into the cell on the server side. This way it can be proxied by psql, utilize existing infra for passing streams and be used in shell scripting.On Thu, May 6, 2021 at 9:14 AM Joel Jacobson <joel@compiler.org> wrote:On Wed, May 5, 2021, at 20:45, Tom Lane wrote:\"Joel Jacobson\" <joel@compiler.org> writes:> I think you misunderstood the problem.> I don't want the entire file to be considered a single value.> I want each line to become its own row, just a row with a single column.> So I actually think COPY seems like a perfect match for the job,> since it does precisely that, except there is no delimiter in this case.Well, there's more to it than just the column delimiter.* What about \\N being converted to NULL?* What about \\. 
being treated as EOF?* Do you want to turn off the special behavior of backslash (ESCAPE)  altogether?* What about newline conversions (\\r\\n being seen as just \\n, etc)?I'm inclined to think that \"use pg_read_file and then split at newlines\"might be a saner answer than delving into all these fine points.Not least because people yell when you add cycles to the COPYinner loops.Thanks for providing strong arguments why the COPY approach is a dead-end, I agree.However, as demonstrated in my previous email, using   string_to_table(pg_read_file( filename ), E'\\n')has its performance as well as max size issues.Maybe these two problems could be solved by combining the two functions into one?   file_to_table ( filename text, delimiter text [, null_string text ] ) → setof textI'm thinking thanks to returning \"setof text\", such a function could read a stream,and return a line as soon as a delimiter is encountered, not having to keepthe entire file in memory at any time./Joel-- Darafei \"Komяpa\" PraliaskouskiOSM BY Team - http://openstreetmap.by/", "msg_date": "Thu, 6 May 2021 09:21:26 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Thu, 6 May 2021 at 02:21, Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>\nwrote:\n\n\n> What I would prefer is some new COPY mode like RAW that will just push\n> whatever it gets on the stdin/input into the cell on the server side. This\n> way it can be proxied by psql, utilize existing infra for passing streams\n> and be used in shell scripting.\n>\n\nYes! A significant missing feature is “take this arbitrary bucket of bits\nand move it to/from the database from/to this file without modification of\nany kind”. 
There are all sorts of tutorials on the Web about how to almost\ndo it or fake it or convert to hex or whatever but I’ve never seen a nice\nsimple explanation of “here is the command:”.\n\nOf course there is a lot to think about. Coming out of the database, the\nquery result must be exactly one row containing exactly one column; how do\nwe handle other database results? Coming into the database, only one data\nvalue can come from a single file; so how do we populate the other columns\nof whatever table we copy to?\n\nOn Thu, 6 May 2021 at 02:21, Darafei \"Komяpa\" Praliaskouski <me@komzpa.net> wrote: What I would prefer is some new COPY mode like RAW that will just push whatever it gets on the stdin/input into the cell on the server side. This way it can be proxied by psql, utilize existing infra for passing streams and be used in shell scripting.Yes! A significant missing feature is “take this arbitrary bucket of bits and move it to/from the database from/to this file without modification of any kind”. There are all sorts of tutorials on the Web about how to almost do it or fake it or convert to hex or whatever but I’ve never seen a nice simple explanation of “here is the command:”.Of course there is a lot to think about. Coming out of the database, the query result must be exactly one row containing exactly one column; how do we handle other database results? 
Coming into the database, only one data value can come from a single file; so how do we populate the other columns of whatever table we copy to?", "msg_date": "Thu, 6 May 2021 07:41:18 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "On Thu, May 6, 2021, at 13:41, Isaac Morland wrote:\n> On Thu, 6 May 2021 at 02:21, Darafei \"Komяpa\" Praliaskouski <me@komzpa.net> wrote:\n> \n>> What I would prefer is some new COPY mode like RAW that will just push whatever it gets on the stdin/input into the cell on the server side. This way it can be proxied by psql, utilize existing infra for passing streams and be used in shell scripting.\n> \n> Yes! A significant missing feature is “take this arbitrary bucket of bits and move it to/from the database from/to this file without modification of any kind”. There are all sorts of tutorials on the Web about how to almost do it or fake it or convert to hex or whatever but I’ve never seen a nice simple explanation of “here is the command:”.\n> \n> Of course there is a lot to think about. Coming out of the database, the query result must be exactly one row containing exactly one column; how do we handle other database results? Coming into the database, only one data value can come from a single file; so how do we populate the other columns of whatever table we copy to?\n> \n\nIf the file is on the server, you can use pg_read_binary_file() for that purpose.\n\n/Joel\nOn Thu, May 6, 2021, at 13:41, Isaac Morland wrote:On Thu, 6 May 2021 at 02:21, Darafei \"Komяpa\" Praliaskouski <me@komzpa.net> wrote: What I would prefer is some new COPY mode like RAW that will just push whatever it gets on the stdin/input into the cell on the server side. This way it can be proxied by psql, utilize existing infra for passing streams and be used in shell scripting.Yes! 
A significant missing feature is “take this arbitrary bucket of bits and move it to/from the database from/to this file without modification of any kind”. There are all sorts of tutorials on the Web about how to almost do it or fake it or convert to hex or whatever but I’ve never seen a nice simple explanation of “here is the command:”.Of course there is a lot to think about. Coming out of the database, the query result must be exactly one row containing exactly one column; how do we handle other database results? Coming into the database, only one data value can come from a single file; so how do we populate the other columns of whatever table we copy to?If the file is on the server, you can use pg_read_binary_file() for that purpose./Joel", "msg_date": "Thu, 06 May 2021 18:02:30 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_COPY_table=5Fname_(single=5Fcolumn)_FROM_'unknown.txt'_DEL?=\n =?UTF-8?Q?IMITER_E'\\n'?=" }, { "msg_contents": "On Thu, 6 May 2021 at 12:02, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Thu, May 6, 2021, at 13:41, Isaac Morland wrote:\n>\n> Yes! A significant missing feature is “take this arbitrary bucket of bits\n> and move it to/from the database from/to this file without modification of\n> any kind”. There are all sorts of tutorials on the Web about how to almost\n> do it or fake it or convert to hex or whatever but I’ve never seen a nice\n> simple explanation of “here is the command:”.\n>\n> Of course there is a lot to think about. Coming out of the database, the\n> query result must be exactly one row containing exactly one column; how do\n> we handle other database results? 
Coming into the database, only one data\n> value can come from a single file; so how do we populate the other columns\n> of whatever table we copy to?\n>\n> If the file is on the server, you can use pg_read_binary_file() for that\n> purpose.\n>\n\nYes, sorry, I should have explicitly said “on the client” somewhere up\nthere. Getting files from the DB server into the DB is no problem.\n\nOn Thu, 6 May 2021 at 12:02, Joel Jacobson <joel@compiler.org> wrote:On Thu, May 6, 2021, at 13:41, Isaac Morland wrote:Yes! A significant missing feature is “take this arbitrary bucket of bits and move it to/from the database from/to this file without modification of any kind”. There are all sorts of tutorials on the Web about how to almost do it or fake it or convert to hex or whatever but I’ve never seen a nice simple explanation of “here is the command:”.Of course there is a lot to think about. Coming out of the database, the query result must be exactly one row containing exactly one column; how do we handle other database results? Coming into the database, only one data value can come from a single file; so how do we populate the other columns of whatever table we copy to?If the file is on the server, you can use pg_read_binary_file() for that purpose.Yes, sorry, I should have explicitly said “on the client” somewhere up there. Getting files from the DB server into the DB is no problem.", "msg_date": "Thu, 6 May 2021 12:37:39 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "\nOn 5/6/21 7:41 AM, Isaac Morland wrote:\n> On Thu, 6 May 2021 at 02:21, Darafei \"Komяpa\" Praliaskouski\n> <me@komzpa.net <mailto:me@komzpa.net>> wrote:\n>  \n>\n> What I would prefer is some new COPY mode like RAW that will just\n> push whatever it gets on the stdin/input into the cell on the\n> server side. 
This way it can be proxied by psql, utilize existing\n> infra for passing streams and be used in shell scripting.\n>\n>\n> Yes! A significant missing feature is “take this arbitrary bucket of\n> bits and move it to/from the database from/to this file without\n> modification of any kind”. There are all sorts of tutorials on the Web\n> about how to almost do it or fake it or convert to hex or whatever but\n> I’ve never seen a nice simple explanation of “here is the command:”.\n>\n> Of course there is a lot to think about. Coming out of the database,\n> the query result must be exactly one row containing exactly one\n> column; how do we handle other database results? Coming into the\n> database, only one data value can come from a single file; so how do\n> we populate the other columns of whatever table we copy to?\n>\n\n\nWhat I'd like is something a bit more general. We could extend the FDW\nAPI to allow for a CopyStdin handler or some such. The input could be\nsent unfiltered to the handler, which would do whatever it liked with\nit. That way FDWs like file_fdw and file_text_array_fdw could read from\nstdin, for example. I'm pretty sure it would handle the OP's use case.\nThe downside is you'd need to write an FDW handler, but that's not too\nhard, and there are lots of examples.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 6 May 2021 16:08:16 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "\tDarafei \"Komяpa\" Praliaskouski wrote:\n\n> What I would prefer is some new COPY mode like RAW that will just push\n> whatever it gets on the stdin/input into the cell on the server side. 
This\n> way it can be proxied by psql, utilize existing infra for passing streams\n> and be used in shell scripting.\n\nCOPY RAW has been proposed and discussed quite a bit previously:\nhttps://commitfest.postgresql.org/12/676/\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Sun, 09 May 2021 15:39:36 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: COPY table_name (single_column) FROM 'unknown.txt' DELIMITER\n E'\\n'" }, { "msg_contents": "What about:\n\n COPY ... FROM ... WITH PATTERN 'regexp_pattern'\n\nWhere the columns would be matched with the capture groups.\n\nThis could handle the quite common case of varying white-space as column separators:\n\n COPY log (col1, col2, col3) FROM 'log.txt' WITH PATTERN '^(\\S+)\\s+(\\S+)\\s+(\\S+)$'\n\nThis could also handle $SUBJECT:\n\n COPY table_name (single_column) FROM 'unknown.txt' WITH PATTERN '^(.*)$';\n\nAnd lots of other more complex use-cases.\n\n/Joel\nWhat about:   COPY ... FROM ... WITH PATTERN 'regexp_pattern'Where the columns would be matched with the capture groups.This could handle the quite common case of varying white-space as column separators:   COPY log (col1, col2, col3) FROM 'log.txt' WITH PATTERN '^(\\S+)\\s+(\\S+)\\s+(\\S+)$'This could also handle $SUBJECT:   COPY table_name (single_column) FROM 'unknown.txt' WITH PATTERN '^(.*)$';And lots of other more complex use-cases./Joel", "msg_date": "Mon, 10 May 2021 08:12:47 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_COPY_table=5Fname_(single=5Fcolumn)_FROM_'unknown.txt'_DEL?=\n =?UTF-8?Q?IMITER_E'\\n'?=" } ]
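
To recap, the two line-splitting workarounds quoted in the thread above boil down to a couple of statements. This is only a sketch assembled from the commands already posted (the file path is illustrative, and the E'\1' delimiter trick from Thomas Kellerer's mail only works if that byte never occurs in the file):

```sql
-- Single-column target table, as in the benchmark messages above.
CREATE TABLE txt (line text);

-- Workaround 1, the "COPY hack": pick a delimiter byte that cannot
-- appear in the file, so each whole line lands in the single column.
COPY txt (line) FROM '/tmp/foo.txt' DELIMITER E'\1';

-- Workaround 2 (v14+): read the file server-side and split on newlines.
-- Limited to files under 1GB by pg_read_file(), and slower per the
-- timings quoted above.
INSERT INTO txt (line)
SELECT string_to_table(pg_read_file('/tmp/foo.txt'), E'\n');
```

Both run entirely server-side; neither addresses the client-side-file case raised later in the thread, which is what the COPY RAW proposal was about.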
[ { "msg_contents": "It's getting to be time to think about these steps for v14:\n\n* Renumber any manually-assigned OIDs between 8000 and 9999\n to lower numbers, using renumber_oids.pl (see notes in bki.sgml)\n\n* pgindent, perltidy, reformat-dat-files\n\n* Update config.guess and config.sub\n (from https://savannah.gnu.org/projects/config)\n\n* Update Unicode data: Edit UNICODE_VERSION and CLDR_VERSION in\n src/Makefile.global.in, run make update-unicode, and commit.\n\nIt looks like Peter already took care of the last two. Barring\nobjections, I'll plan to do the first two next Wednesday or so\n(after the back-branch-release dust has settled).\n\nI notice that we also list this as a pre-beta task in\nsrc/tools/RELEASE_CHANGES:\n\n* Update inet/cidr data types with newest Bind patches\n\nHowever, I can't recall that anyone has ever done any such thing;\nand at this point, any attempt to re-sync that code would likely\nbe a rather major task. Should we take that off the checklist?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 14:18:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "v14 mechanical code beautification patches" }, { "msg_contents": "On Wed, May 5, 2021 at 02:18:04PM -0400, Tom Lane wrote:\n> I notice that we also list this as a pre-beta task in\n> src/tools/RELEASE_CHANGES:\n> \n> * Update inet/cidr data types with newest Bind patches\n> \n> However, I can't recall that anyone has ever done any such thing;\n> and at this point, any attempt to re-sync that code would likely\n> be a rather major task. Should we take that off the checklist?\n\nI think it is related to these files:\n\n\tsrc/backend/utils/adt/inet_cidr_ntop.c\n\tsrc/backend/utils/adt/inet_net_pton.c\n\nwhich have at the top:\n\n * Copyright (c) 2004 by Internet Systems Consortium, Inc. 
(\"ISC\")\n * Copyright (c) 1996,1999 by Internet Software Consortium.\n\nbut I am not sure we still need to update those, so I would remove it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 May 2021 18:55:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: v14 mechanical code beautification patches" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, May 5, 2021 at 02:18:04PM -0400, Tom Lane wrote:\n>> I notice that we also list this as a pre-beta task in\n>> src/tools/RELEASE_CHANGES:\n>> \t* Update inet/cidr data types with newest Bind patches\n>> However, I can't recall that anyone has ever done any such thing;\n>> and at this point, any attempt to re-sync that code would likely\n>> be a rather major task. Should we take that off the checklist?\n\n> I think it is related to these files:\n> \tsrc/backend/utils/adt/inet_cidr_ntop.c\n> \tsrc/backend/utils/adt/inet_net_pton.c\n> which have at the top:\n> * Copyright (c) 2004 by Internet Systems Consortium, Inc. 
(\"ISC\")\n> * Copyright (c) 1996,1999 by Internet Software Consortium.\n> but I am not sure we still need to update those, so I would remove it.\n\nI dug in the archives and found the thread that prompted you to\nadd that bullet item:\n\nhttps://www.postgresql.org/message-id/200502021700.j12H05j20872%40candle.pha.pa.us\n\nwhich made the point that those were moving targets back in 2005.\nI doubt they still are, so I don't see much point in keeping this\nin the checklist.\n\n(There may or may not be value in doing a one-time check to see\nif we've missed anything.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 19:08:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: v14 mechanical code beautification patches" }, { "msg_contents": "On Wed, May 5, 2021 at 07:08:35PM -0400, Tom Lane wrote:\n> > I think it is related to these files:\n> > \tsrc/backend/utils/adt/inet_cidr_ntop.c\n> > \tsrc/backend/utils/adt/inet_net_pton.c\n> > which have at the top:\n> > * Copyright (c) 2004 by Internet Systems Consortium, Inc.
(\"ISC\")\n> > * Copyright (c) 1996,1999 by Internet Software Consortium.\n> > but I am not sure we still need to update those, so I would remove it.\n> \n> I dug in the archives and found the thread that prompted you to\n> add that bullet item:\n> \n> https://www.postgresql.org/message-id/200502021700.j12H05j20872%40candle.pha.pa.us\n> \n> which made the point that those were moving targets back in 2005.\n> I doubt they still are, so I don't see much point in keeping this\n> in the checklist.\n> \n> (There may or may not be value in doing a one-time check to see\n> if we've missed anything.)\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 May 2021 20:01:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: v14 mechanical code beautification patches" }, { "msg_contents": "I wrote:\n> I dug in the archives and found the thread that prompted you to\n> add that bullet item:\n> https://www.postgresql.org/message-id/200502021700.j12H05j20872%40candle.pha.pa.us\n> which made the point that those were moving targets back in 2005.\n> I doubt they still are, so I don't see much point in keeping this\n> in the checklist.\n> (There may or may not be value in doing a one-time check to see\n> if we've missed anything.)\n\nI located the \"current\" versions of those files in libbind 6.0.\n(I put \"current\" in quotes because the file dates seem to be\n2005-2008, so indeed development came to a stop a long time ago.)\n\nThey are *very* different from what we have, though.
Some of it\nis visibly cosmetic, but other parts have been rewritten quite a bit,\nso it's hard to tell if the functionality is identical.\n\nIn the absence of a reason to think we have bugs that we need to\nfix, I'm not sure it's worth analyzing the differences in detail.\nI definitely wouldn't just adopt all the diffs blindly.\n\nIn any case, that RELEASE_CHANGES item is clearly a dead letter\nnow, so I'll go remove it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 22:45:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: v14 mechanical code beautification patches" }, { "msg_contents": "On Wed, May 5, 2021 at 10:45:01PM -0400, Tom Lane wrote:\n> I located the \"current\" versions of those files in libbind 6.0.\n> (I put \"current\" in quotes because the file dates seem to be\n> 2005-2008, so indeed development came to a stop a long time ago.)\n> \n> They are *very* different from what we have, though. Some of it\n> is visibly cosmetic, but other parts have been rewritten quite a bit,\n> so it's hard to tell if the functionality is identical.\n> \n> In the absence of a reason to think we have bugs that we need to\n> fix, I'm not sure it's worth analyzing the differences in detail.\n> I definitely wouldn't just adopt all the diffs blindly.\n> \n> In any case, that RELEASE_CHANGES item is clearly a dead letter\n> now, so I'll go remove it.\n\nOK, thanks for checking. I think there was some concern about IPv6\nchanges at the time we adopted this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 6 May 2021 06:28:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: v14 mechanical code beautification patches" } ]
[ { "msg_contents": "I noticed this recent crash on lorikeet:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2021-05-05%2009%3A19%3A29\n\nThe relevant bits of the log seem to be\n\n2021-05-05 05:36:22.011 EDT [60926716.bb24:1] ERROR: could not map dynamic shared memory segment\n...\n2021-05-05 05:36:22.013 EDT [609266c5.b793:4] LOG: background worker \"parallel worker\" (PID 47908) exited with exit code 1\n...\nTRAP: FailedAssertion(\"BackgroundWorkerData->parallel_register_count - BackgroundWorkerData->parallel_terminate_count <= MAX_PARALLEL_WORKER_LIMIT\", File: \"/home/andrew/bf/root/REL_13_STABLE/pgsql.build/../pgsql/src/backend/postmaster/bgworker.c\", Line: 1016)\n*** starting debugger for pid 47743, tid 1264\n2021-05-05 05:36:26.629 EDT [609266c5.b793:5] LOG: server process (PID 47743) exited with exit code 127\n\nSo we had a parallel worker fail to start, whereupon its leader went down\nwith an assertion failure. I know that the parallel-worker code is held\ntogether with chewing gum and baling wire, but that's a bit much.\n\nLooking at the indicated code, we find\n\n /*\n * If this is a parallel worker, check whether there are already too many\n * parallel workers; if so, don't register another one. Our view of\n * parallel_terminate_count may be slightly stale, but that doesn't really\n * matter: we would have gotten the same result if we'd arrived here\n * slightly earlier anyway.
There's no help for it, either, since the\n * postmaster must not take locks; a memory barrier wouldn't guarantee\n * anything useful.\n */\n if (parallel && (BackgroundWorkerData->parallel_register_count -\n BackgroundWorkerData->parallel_terminate_count) >=\n max_parallel_workers)\n {\n Assert(BackgroundWorkerData->parallel_register_count -\n BackgroundWorkerData->parallel_terminate_count <=\n MAX_PARALLEL_WORKER_LIMIT);\n LWLockRelease(BackgroundWorkerLock);\n return false;\n }\n\nI would like to know on what grounds that Assert isn't insanity?\nWe just finished pointing out that we might see an old\nparallel_terminate_count, which ISTM says specifically that\nparallel_register_count minus parallel_terminate_count might\nbe larger than expected.\n\nAdmittedly, it seems unlikely that the difference could exceed\nMAX_PARALLEL_WORKER_LIMIT = 1024 in a regression test run where\nthe limit on number of parallel workers is only 8. What I think is\nmore likely, given that these counters are unsigned, is that the\ndifference was actually negative. Which could be a bug, or it could\nbe an expectable race condition, or it could just be some flakiness\non lorikeet's part (that machine has had a lot of issues lately).\n\nI trawled the buildfarm logs going back 180 days, and found no\nother instances of this assertion, which seems to be evidence\nin favor of the \"lorikeet got flaky\" theory. But it's not proof.\n\nIn any case, I see zero value in this assertion, so I propose\nwe remove it. If we don't remove it, it needs serious revision,\nbecause it seems absolutely obvious to me that it could trigger\nwhen there is nothing wrong.
A system pushing the limit of\nnumber of parallel workers would be at considerable risk.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 15:46:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Dubious assertion in RegisterDynamicBackgroundWorker" }, { "msg_contents": "On Wed, May 5, 2021 at 3:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Admittedly, it seems unlikely that the difference could exceed\n> MAX_PARALLEL_WORKER_LIMIT = 1024 in a regression test run where\n> the limit on number of parallel workers is only 8. What I think is\n> more likely, given that these counters are unsigned, is that the\n> difference was actually negative. Which could be a bug, or it could\n> be an expectable race condition, or it could just be some flakiness\n> on lorikeet's part (that machine has had a lot of issues lately).\n\nI think that assertion was added by me, and I think the thought\nprocess was that the value shouldn't go negative and that if it does\nit's probably a bug which we might want to fix. But since the values\nare unsigned I could hardly check for < 0, so I did it this way\ninstead.\n\nBut since there's no memory barrier between the two loads, I guess\nthere's no guarantee that they have the expected relationship, even if\nthere is a memory barrier on the store side. I wonder if it's worth\ntrying to tighten that up so that the assertion is more meaningful, or\njust give up and rip it out. I'm afraid that if we do have (or\ndevelop) bugs in this area, someone will discover that the effective\nmax_parallel_workers value on their system slowly drifts up or down\nfrom the configured value, and we'll have no clue where things are\ngoing wrong.
The assertion was intended to give us a chance of\nnoticing that sort of problem in the buildfarm or on a developer's\nmachine before the code gets out into the real world.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 16:35:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Dubious assertion in RegisterDynamicBackgroundWorker" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, May 5, 2021 at 3:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Admittedly, it seems unlikely that the difference could exceed\n>> MAX_PARALLEL_WORKER_LIMIT = 1024 in a regression test run where\n>> the limit on number of parallel workers is only 8. What I think is\n>> more likely, given that these counters are unsigned, is that the\n>> difference was actually negative. Which could be a bug, or it could\n>> be an expectable race condition, or it could just be some flakiness\n>> on lorikeet's part (that machine has had a lot of issues lately).\n\n> I think that assertion was added by me, and I think the thought\n> process was that the value shouldn't go negative and that if it does\n> it's probably a bug which we might want to fix. But since the values\n> are unsigned I could hardly check for < 0, so I did it this way\n> instead.\n\n> But since there's no memory barrier between the two loads, I guess\n> there's no guarantee that they have the expected relationship, even if\n> there is a memory barrier on the store side. I wonder if it's worth\n> trying to tighten that up so that the assertion is more meaningful, or\n> just give up and rip it out. I'm afraid that if we do have (or\n> develop) bugs in this area, someone will discover that the effective\n> max_parallel_workers value on their system slowly drifts up or down\n> from the configured value, and we'll have no clue where things are\n> going wrong.
The assertion was intended to give us a chance of\n> noticing that sort of problem in the buildfarm or on a developer's\n> machine before the code gets out into the real world.\n\nI follow your concern, but I'm not convinced that this assertion is\na useful aid; first because the asynchrony involved makes the edge\ncases rather squishy, and second because allowing 1024 bogus\nincrements before complaining will likely mean that developer test\nruns will not last long enough to trigger the assertion, and third\nbecause if it does fire it's too far removed from the perpetrator\nto be much help in figuring out what went wrong, or even if\nanything *is* wrong.\n\nI've not tried to trace the code, but I'm now a bit suspicious\nthat there is indeed a design bug here. I gather from the\ncomments that parallel_register_count is incremented by the\nworker processes, which of course implies that a worker that\nfails to reattach to shared memory won't do that. But\nparallel_terminate_count is incremented by the postmaster.\nIf the postmaster will do that even in the case of a worker that\nfailed at startup, then lorikeet's symptoms are neatly explained.\n\nI'd be more comfortable with this code if the increments and\ndecrements were handled by the same process.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 May 2021 17:22:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dubious assertion in RegisterDynamicBackgroundWorker" }, { "msg_contents": "On Wed, May 5, 2021 at 5:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I follow your concern, but I'm not convinced that this assertion is\n> a useful aid; first because the asynchrony involved makes the edge\n> cases rather squishy, and second because allowing 1024 bogus\n> increments before complaining will likely mean that developer test\n> runs will not last long enough to trigger the assertion, and third\n> because if it does fire it's too far removed from the perpetrator\n> to
be much help in figuring out what went wrong, or even if\n> anything *is* wrong.\n\nWell, it allows 1024 bogus increments in one direction, but a negative\nvalue will trip it PDQ.\n\n> I've not tried to trace the code, but I'm now a bit suspicious\n> that there is indeed a design bug here. I gather from the\n> comments that parallel_register_count is incremented by the\n> worker processes, which of course implies that a worker that\n> fails to reattach to shared memory won't do that. But\n> parallel_terminate_count is incremented by the postmaster.\n> If the postmaster will do that even in the case of a worker that\n> failed at startup, then lorikeet's symptoms are neatly explained.\n\nparallel_register_count is incremented by\nRegisterDynamicBackgroundWorker, i.e. when the worker process is\nrequested, not after it starts. To try to do it from the worker after\nit's launched would, as you suppose, be completely busted. The more\ngeneral point here is that we should be adjusting these at the same\npoint we allocate and free the worker slots. Wherever that happens is\nthe right place to do this, because the goal of the mechanism is\nprecisely to limit the number of such slots that can be used by\nparallel query.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 20:35:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Dubious assertion in RegisterDynamicBackgroundWorker" }, { "msg_contents": "I wrote:\n> I've not tried to trace the code, but I'm now a bit suspicious\n> that there is indeed a design bug here. I gather from the\n> comments that parallel_register_count is incremented by the\n> worker processes, which of course implies that a worker that\n> fails to reattach to shared memory won't do that.
But\n> parallel_terminate_count is incremented by the postmaster.\n> If the postmaster will do that even in the case of a worker that\n> failed at startup, then lorikeet's symptoms are neatly explained.\n\nThat theory seems to be nonsense. After a bit more study of the\ncode, I see that parallel_register_count is incremented by the *leader*\nprocess, when it reserves a BackgroundWorkerSlot for the worker.\nAnd parallel_terminate_count is incremented by the postmaster when\nit releases the slot; so it's darn hard to see how\nparallel_terminate_count could get ahead of parallel_register_count.\n\nI noticed that lorikeet's worker didn't fail at shared memory reattach,\nas I first thought, anyway. It failed at\n\tERROR: could not map dynamic shared memory segment\nwhich means we ought to be able to reproduce the symptoms by faking\nfailure of dsm_attach(), as I did in the quick hack attached.\nWhat I get is a lot of \"parallel worker failed to initialize\" and\n\"lost connection to parallel worker\" errors, but no assertion.\n(I also tried this with an EXEC_BACKEND build, just in case that'd\nchange the behavior, but it didn't.)
So it seems like the \"lorikeet\nis flaky\" theory is looking pretty plausible.\n\nI do see what seems to be a bug-let in ForgetBackgroundWorker.\nBackgroundWorkerStateChange is careful to do this when freeing\na slot:\n\n /*\n * We need a memory barrier here to make sure that the load of\n * bgw_notify_pid and the update of parallel_terminate_count\n * complete before the store to in_use.\n */\n notify_pid = slot->worker.bgw_notify_pid;\n if ((slot->worker.bgw_flags & BGWORKER_CLASS_PARALLEL) != 0)\n BackgroundWorkerData->parallel_terminate_count++;\n pg_memory_barrier();\n slot->pid = 0;\n slot->in_use = false;\n\nbut the mainline case in ForgetBackgroundWorker is a lot less\nparanoid:\n\n Assert(rw->rw_shmem_slot < max_worker_processes);\n slot = &BackgroundWorkerData->slot[rw->rw_shmem_slot];\n if ((rw->rw_worker.bgw_flags & BGWORKER_CLASS_PARALLEL) != 0)\n BackgroundWorkerData->parallel_terminate_count++;\n\n slot->in_use = false;\n\nOne of these functions is mistaken. However, I can't construct\na theory whereby that explains lorikeet's symptoms, mainly because\nIntel chips don't do out-of-order stores so the messing with\nparallel_terminate_count should be done before in_use is cleared,\neven without an explicit memory barrier.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 05 May 2021 21:10:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dubious assertion in RegisterDynamicBackgroundWorker" }, { "msg_contents": "On Wed, May 05, 2021 at 03:46:43PM -0400, Tom Lane wrote:\n> I know that the parallel-worker code is held\n> together with chewing gum and baling wire\n\nHaving spent time in that code, I didn't notice such weakness. I'm sure it\nhas bugs, but I doubt its bug density is unusual for PostgreSQL.\n\n\n", "msg_date": "Wed, 5 May 2021 23:45:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Dubious assertion in RegisterDynamicBackgroundWorker" } ]
[ { "msg_contents": "Per sqlsmith.\n\npostgres=# SELECT pg_get_statisticsobjdef_expressions(123);\nERROR: cache lookup failed for statistics object 123\npostgres=# \\errverbose \nERROR: XX000: cache lookup failed for statistics object 123\nLOCATION: pg_get_statisticsobjdef_expressions, ruleutils.c:1762\n\nThe expectation is that sql callable functions should return null rather than\nhitting elog().\n\nIn the 003 patch, I wonder if this part should be updated, too:\n\n| ... which can greatly improve query plans that use the expression index.\n\nIt can improve queries even that don't use the index, right ?\nSay, if a query has f(x) = 11, and the MCV list for the expression shows that\n50% of the table has f(x)=11, then the query might decide to *not* use an index\nscan.\n\n-- \nJustin", "msg_date": "Wed, 5 May 2021 16:09:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "cache lookup failed for statistics object 123" }, { "msg_contents": "Hi,\n\nOn 5/5/21 11:09 PM, Justin Pryzby wrote:\n> Per sqlsmith.\n> \n> postgres=# SELECT pg_get_statisticsobjdef_expressions(123);\n> ERROR: cache lookup failed for statistics object 123\n> postgres=# \\errverbose\n> ERROR: XX000: cache lookup failed for statistics object 123\n> LOCATION: pg_get_statisticsobjdef_expressions, ruleutils.c:1762\n> \n> The expectation is that sql callable functions should return null rather than\n> hitting elog().\n> \n\nRight, thanks for noticing this.\n\n> In the 003 patch, I wonder if this part should be updated, too:\n> \n> | ...
which can greatly improve query plans that use the expression index.\n> \n> It can improve queries even that don't use the index, right ?\n >\n> Say, if a query has f(x) = 11, and the MCV list for the expression shows that\n> 50% of the table has f(x)=11, then the query might decide to *not* use an index\n> scan.\n\nYeah, it should talk about improving estimates, it's mostly unrelated to \nusing indexes.\n\nregards\n\n\n", "msg_date": "Thu, 6 May 2021 22:25:06 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: cache lookup failed for statistics object 123" }, { "msg_contents": "I've pushed all three patches, with some better commit messages etc.\n\nthanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 May 2021 17:19:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: cache lookup failed for statistics object 123" } ]
[ { "msg_contents": "Folks,\n\nI was writing up a query on pg_constraint, and the columns whose\ndescriptions I've changed here were pretty hard to puzzle out, as they\nwere only distinct up to the difference between F and P, which isn't\nalways easy to see. Please find attached a patch to disambiguate\nthem.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Wed, 5 May 2021 22:52:33 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": true, "msg_subject": "Make some column descriptions easier to distinguish visually" } ]
[ { "msg_contents": "The attached patch implements ALTER TABLE ... SET ACCESS METHOD.\n\nFor simplicity, I used the normal alter table path, ATRewriteTable(),\nwhich does not follow the stricter isolation semantics that VACUUM FULL\nfollows. If someone thinks that's unacceptable, please let me know.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 05 May 2021 20:45:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "[v15] ALTER TABLE ... SET ACCESS METHOD" }, { "msg_contents": "Hi,\n\nOn 2021-05-05 20:45:50 -0700, Jeff Davis wrote:\n> The attached patch implements ALTER TABLE ... SET ACCESS METHOD.\n\nI think it'd be good to have a basic test for this. See create_am.sql\nfor how to do so without actually needing a second AM implementation.\n\n\n> For simplicity, I used the normal alter table path, ATRewriteTable(),\n> which does not follow the stricter isolation semantics that VACUUM FULL\n> follows. If someone thinks that's unacceptable, please let me know.\n\nI think it'd be good to have something better for tablecmd.c table\nrewrites - but I don't think it's this patch's responsibility.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Wed, 5 May 2021 20:59:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [v15] ALTER TABLE ... SET ACCESS METHOD" }, { "msg_contents": "On Wed, May 05, 2021 at 08:45:50PM -0700, Jeff Davis wrote:\n> The attached patch implements ALTER TABLE ... SET ACCESS METHOD.\n> \n> For simplicity, I used the normal alter table path, ATRewriteTable(),\n> which does not follow the stricter isolation semantics that VACUUM FULL\n> follows.
If someone thinks that's unacceptable, please let me know.\n\nWhy does your patch say v15?\nIt's nearly the same as my pre-existing patch, so should merge them.\n\nhttps://commitfest.postgresql.org/33/3020/\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 5 May 2021 23:40:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [v15] ALTER TABLE ... SET ACCESS METHOD" }, { "msg_contents": "On Wed, 2021-05-05 at 23:40 -0500, Justin Pryzby wrote:\n> Why does your patch say v15?\n> It's nearly the same as my pre-existing patch, so should merge them.\n\nSorry, I completely missed your patch. I retract mine and we'll\ncontinue with yours.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 May 2021 21:56:13 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: [v15] ALTER TABLE ... SET ACCESS METHOD" } ]
[ { "msg_contents": "Hi,\n\nI wanted to apply https://postgr.es/m/CAGRY4nwaiPJc8wO0G7WZCgBmATC3GJVgvBoADZHDbCzhj8zTPw@mail.gmail.com\nand noticed that there's not just Gen_dummy_probes.sed but also a\nGen_dummy_probes.pl.\n\nI understand why we don't want to rely on sed because of windows - but\nit's far from obvious why we can't just use the .pl variant all the\ntime?\n\nThe perl version was introduced in\n\ncommit 5d0320105699c253fe19b8b42ae1bffb67785b02\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: 2016-03-19 18:36:35 -0400\n\n Remove dependency on psed for MSVC builds.\n\n Modern Perl has removed psed from its core distribution, so it might not\n be readily available on some build platforms. We therefore replace its\n use with a Perl script generated by s2p, which is equivalent to the sed\n script. The latter is retained for non-MSVC builds to avoid creating a\n new hard dependency on Perl for non-Windows tarball builds.\n\n Backpatch to all live branches.\n\n Michael Paquier and me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 20:56:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I understand why we don't want to rely on sed because of windows - but\n> it's far from obvious why we can't just use the .pl variant all the\n> time?\n\nPerl is not considered a hard build requirement on non-Windows.\nWe could dodge that by shipping a pre-built dummy probes.h,\nbut that doesn't really seem like a cleaner way than what's\nthere now.\n\nAlso, as I read it, Gen_dummy_probes.sed is useful in any case as\nbeing the \"source code\" for Gen_dummy_probes.pl.
You'd need some\nother form of documentation if you removed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 May 2021 00:18:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Hi,\n\nOn 2021-05-06 00:18:12 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I understand why we don't want to rely on sed because of windows - but\n> > it's far from obvious why we can't just use the .pl variant all the\n> > time?\n> \n> Perl is not considered a hard build requirement on non-Windows.\n\nOops, forgot that.\n\n\n> We could dodge that by shipping a pre-built dummy probes.h,\n> but that doesn't really seem like a cleaner way than what's\n> there now.\n\nI tried to regenerate Gen_dummy_probes.pl using s2p - which doesn't seem\nto exist for modern versions of perl anymore :(\n\n\n> Also, as I read it, Gen_dummy_probes.sed is useful in any case as\n> being the \"source code\" for Gen_dummy_probes.pl. You'd need some\n> other form of documentation if you removed it.\n\n:/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 May 2021 21:59:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?"
}, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> I tried to regenerate Gen_dummy_probes.pl using s2p - which doesn't seem\n> to exist for modern versions of perl anymore :(\n\nIt still exists, it's just not part of the core Perl distribution any\nmore (since 5.22, released in 2015):\n\n https://metacpan.org/pod/perl5220delta#find2perl,-s2p-and-a2p-removal\n https://metacpan.org/release/App-s2p.\n\nYou can install it with `cpan App::s2p`.\n\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n", "msg_date": "Thu, 06 May 2021 11:13:28 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/6/21 12:59 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-06 00:18:12 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I understand why we don't want to rely on sed because of windows - but\n>>> it's far from obvious why we can't just use the .pl variant all the\n>>> time?\n>> Perl is not considered a hard build requirement on non-Windows.\n> Oops, forgot that.\n>\n>\n>> We could dodge that by shipping a pre-built dummy probes.h,\n>> but that doesn't really seem like a cleaner way than what's\n>> there now.\n> I tried to regenerate Gen_dummy_probes.pl using s2p - which doesn't seem\n> to exist for modern versions of perl anymore :(\n>\n>\n>> Also, as I read it, Gen_dummy_probes.sed is useful in any case as\n>> being the \"source code\" for Gen_dummy_probes.pl.
You'd need some\n>> other form of documentation if you removed it.\n\n\nI suggest we add a README that sets out\n\n\na) why we do things this way\n\nb) that the sed script is what's authoritative\n\nc) how to regenerate the perl script if you change the sed script,\nincluding where to get s2p\n\n\nI can do that.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 6 May 2021 09:55:34 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "On 5/6/21 9:55 AM, Andrew Dunstan wrote:\n> On 5/6/21 12:59 AM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-05-06 00:18:12 -0400, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> I understand why we don't want to rely on sed because of windows - but\n>>>> it's far from obvious why we can't just use the .pl variant all the\n>>>> time?\n>>> Perl is not considered a hard build requirement on non-Windows.\n>> Oops, forgot that.\n>>\n>>\n>>> We could dodge that by shipping a pre-built dummy probes.h,\n>>> but that doesn't really seem like a cleaner way than what's\n>>> there now.\n>> I tried to regenerate Gen_dummy_probes.pl using s2p - which doesn't seem\n>> to exist for modern versions of perl anymore :(\n>>\n>>\n>>> Also, as I read it, Gen_dummy_probes.sed is useful in any case as\n>>> being the \"source code\" for Gen_dummy_probes.pl. You'd need some\n>>> other form of documentation if you removed it.\n>\n> I suggest we add a README that sets out\n>\n>\n> a) why we do things this way\n>\n> b) that the sed script is what's authoritative\n>\n> c) how to regenerate the perl script if you change the sed script,\n> including where to get s2p\n>\n>\n> I can do that.\n>\n>\n\n\nHere's a patch that adds the README and also adds a Makefile recipe for\nregenerating Gen_dummy_probes.pl after the sed script is changed.
On my\nsystem at least the recipe is idempotent.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 7 May 2021 11:19:02 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Here's a patch that adds the README and also adds a Makefile recipe for\n> regenerating Gen_dummy_probes.pl after the sed script is changed. On my\n> system at least the recipe is idempotent.\n\nI've not tested the Makefile recipe, but the README looks good.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 13:04:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 11:19:02 -0400, Andrew Dunstan wrote:\n> Here's a patch that adds the README and also adds a Makefile recipe for\n> regenerating Gen_dummy_probes.pl after the sed script is changed. On my\n> system at least the recipe is idempotent.\n\nNice! Thanks for this work.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 May 2021 10:20:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?"
}, { "msg_contents": "Hi,\n\nOn 2021-05-06 11:13:28 +0100, Dagfinn Ilmari Manns�ker wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> \n> > I tried to regenerate Gen_dummy_probes.pl using s2p - which doesn't seem\n> > to exist for modern versions of perl anymore :(\n> \n> It still exists, it's just not part of the core Perl distribution any\n> more (since 5.22, released in 2015):\n> \n> https://metacpan.org/pod/perl5220delta#find2perl,-s2p-and-a2p-removal\n> https://metacpan.org/release/App-s2p.\n\nOh, I got confused because the cpan link at the top of\nhttps://perldoc.perl.org/5.6.2/s2p is dead, and because I forgot all I\nknew about perl a long time ago.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 May 2021 10:36:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/7/21 1:20 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-07 11:19:02 -0400, Andrew Dunstan wrote:\n>> Here's a patch that adds the README and also adds a Makefile recipe for\n>> regenerating Gen_dummy_probes.pl after the sed script is changed. On my\n>> system at least the recipe is idempotent.\n> Nice! Thanks for this work.\n>\n\n\nde nada. pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 7 May 2021 14:31:43 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "On 07.05.21 20:31, Andrew Dunstan wrote:\n> On 5/7/21 1:20 PM, Andres Freund wrote:\n>> On 2021-05-07 11:19:02 -0400, Andrew Dunstan wrote:\n>>> Here's a patch that adds the README and also adds a Makefile recipe for\n>>> regenerating Gen_dummy_probes.pl after the sed script is changed. On my\n>>> system at least the recipe is idempotent.\n>> Nice! Thanks for this work.\n> \n> de nada. 
pushed.\n\nThis recipe doesn't produce a Gen_dummy_probes.pl that matches exactly \nthe one that is there now. If this is going to be the preferred method, \nthen we should generate it once so that it matches going forward.\n\n\n", "msg_date": "Mon, 10 May 2021 11:44:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 07.05.21 20:31, Andrew Dunstan wrote:\n>> On 5/7/21 1:20 PM, Andres Freund wrote:\n>>> On 2021-05-07 11:19:02 -0400, Andrew Dunstan wrote:\n>>>> Here's a patch that adds the README and also adds a Makefile recipe for\n>>>> regenerating Gen_dummy_probes.pl after the sed script is changed. On my\n>>>> system at least the recipe is idempotent.\n>>> Nice! Thanks for this work.\n>>\n>> de nada. pushed.\n>\n> This recipe doesn't produce a Gen_dummy_probes.pl that matches exactly\n> the one that is there now. If this is going to be the preferred method,\n> then we should generate it once so that it matches going forward.\n\nWhich version of perltidy do you have installed? For me it generates\nidentical versions using any of 20170521 (per src/tools/pgindent/README),\n20201207 (what I happened to have installed before), and 20210402 (the\nlatest).\n\nAlso, what does the difference look like?\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Mon, 10 May 2021 12:16:12 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" 
}, { "msg_contents": "\nOn 5/10/21 7:16 AM, Dagfinn Ilmari Mannsåker wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>\n>> On 07.05.21 20:31, Andrew Dunstan wrote:\n>>> On 5/7/21 1:20 PM, Andres Freund wrote:\n>>>> On 2021-05-07 11:19:02 -0400, Andrew Dunstan wrote:\n>>>>> Here's a patch that adds the README and also adds a Makefile recipe for\n>>>>> regenerating Gen_dummy_probes.pl after the sed script is changed. On my\n>>>>> system at least the recipe is idempotent.\n>>>> Nice! Thanks for this work.\n>>> de nada. pushed.\n>> This recipe doesn't produce a Gen_dummy_probes.pl that matches exactly\n>> the one that is there now. If this is going to be the preferred method,\n>> then we should generate it once so that it matches going forward.\n> Which version of perltidy do you have installed? For me it generates\n> identical versions using any of 20170521 (per src/tools/pgindent/README),\n> 20201207 (what I happened to have installed before), and 20210402 (the\n> latest).\n>\n> Also, what does the difference look like?\n>\n\nYep:\n\n andrew@emma:utils $ touch Gen_dummy_probes.sed\n andrew@emma:utils $ touch ../../../src/Makefile.global\n andrew@emma:utils $ make top_srcdir=../../.. Gen_dummy_probes.pl\n perl -ni -e ' print; exit if /^\\$0/;' Gen_dummy_probes.pl\n s2p -f Gen_dummy_probes.sed  | sed -e 1,4d -e '/# #/d' -e '$d' >>\n Gen_dummy_probes.pl\n perltidy --profile=../../tools/pgindent/perltidyrc Gen_dummy_probes.pl\n perl -pi -e '!$lb && ( /^\\t+#/  || /^# prototypes/ ) && print qq{\\n};'\\\n     -e '$lb = m/^\\n/; ' Gen_dummy_probes.pl\n andrew@emma:utils $ git diff\n andrew@emma:utils $ perltidy --version\n This is perltidy, v20170521\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 10 May 2021 08:29:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" 
}, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/10/21 7:16 AM, Dagfinn Ilmari Mannsåker wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> This recipe doesn't produce a Gen_dummy_probes.pl that matches exactly\n>>> the one that is there now. If this is going to be the preferred method,\n>>> then we should generate it once so that it matches going forward.\n\n>> Which version of perltidy do you have installed? For me it generates\n>> identical versions using any of 20170521 (per src/tools/pgindent/README),\n>> 20201207 (what I happened to have installed before), and 20210402 (the\n>> latest).\n\n> Yep:\n\nFor me, using App-s2p-1.003 and perltidy v20170521, it works\nas long as I start with the previous version of\nGen_dummy_probes.pl in place. I first tried to test this by\n\"rm Gen_dummy_probes.pl; make Gen_dummy_probes.pl\", and what\nI got was a script without all the initial commentary nor\nthe first line of actual Perl code.\n\nI don't think this is good practice; it implies that any\naccidental corruption of the commentary would be carried\nforward. I think we should be extracting the commentary\nfrom Gen_dummy_probes.sed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 12:07:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "On 5/10/21 12:07 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 5/10/21 7:16 AM, Dagfinn Ilmari Mannsåker wrote:\n>>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>>> This recipe doesn't produce a Gen_dummy_probes.pl that matches exactly\n>>>> the one that is there now. If this is going to be the preferred method,\n>>>> then we should generate it once so that it matches going forward.\n>>> Which version of perltidy do you have installed? 
For me it generates\n>>> identical versions using any of 20170521 (per src/tools/pgindent/README),\n>>> 20201207 (what I happened to have installed before), and 20210402 (the\n>>> latest).\n>> Yep:\n> For me, using App-s2p-1.003 and perltidy v20170521, it works\n> as long as I start with the previous version of\n> Gen_dummy_probes.pl in place. I first tried to test this by\n> \"rm Gen_dummy_probes.pl; make Gen_dummy_probes.pl\", and what\n> I got was a script without all the initial commentary nor\n> the first line of actual Perl code.\n>\n> I don't think this is good practice; it implies that any\n> accidental corruption of the commentary would be carried\n> forward. I think we should be extracting the commentary\n> from Gen_dummy_probes.sed.\n>\n> \t\t\t\n\n\nI don't know how likely accidental corruption is, but OK, let's not make\nthe next generation dependent on the current generation of the file. The\nsimplest way around that seems to me to cache the perl prolog, as in the\nattached patch Is that more to your liking? I also adjusted it so we\npick up the first line of code from s2p rather than from the prolog,\nwhich is now just comments and the #! line.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 11 May 2021 10:32:55 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/10/21 12:07 PM, Tom Lane wrote:\n>> I don't think this is good practice; it implies that any\n>> accidental corruption of the commentary would be carried\n>> forward. I think we should be extracting the commentary\n>> from Gen_dummy_probes.sed.\n\n> I don't know how likely accidental corruption is, but OK, let's not make\n> the next generation dependent on the current generation of the file. 
The\n> simplest way around that seems to me to cache the perl prolog, as in the\n> attached patch Is that more to your liking? I also adjusted it so we\n> pick up the first line of code from s2p rather than from the prolog,\n> which is now just comments and the #! line.\n\nWorks for me. One other thought --- do we care whether this works\nin a VPATH build, and if so does it? The $< and $@ references should\nbe OK, but I'm betting you need $(srcdir)/Gen_dummy_probes.pl.prolog\nor the like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 10:52:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/11/21 10:52 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 5/10/21 12:07 PM, Tom Lane wrote:\n>>> I don't think this is good practice; it implies that any\n>>> accidental corruption of the commentary would be carried\n>>> forward. I think we should be extracting the commentary\n>>> from Gen_dummy_probes.sed.\n>> I don't know how likely accidental corruption is, but OK, let's not make\n>> the next generation dependent on the current generation of the file. The\n>> simplest way around that seems to me to cache the perl prolog, as in the\n>> attached patch Is that more to your liking? I also adjusted it so we\n>> pick up the first line of code from s2p rather than from the prolog,\n>> which is now just comments and the #! line.\n> Works for me. One other thought --- do we care whether this works\n> in a VPATH build, and if so does it? The $< and $@ references should\n> be OK, but I'm betting you need $(srcdir)/Gen_dummy_probes.pl.prolog\n> or the like.\n>\n> \t\t\t\n\n\n\nWhy would we? It's only used in Windows builds, and there's no VPATH\nthere (sadly). In fact, building the file isn't part of any standard\nbuild procedure. 
I think this is probably in the same boat as the SSL\ncerts we make in src/test/ssl - I don't think those recipes are meant\nfor use in VPATH builds either.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 11 May 2021 11:44:22 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 10:52:22 -0400, Tom Lane wrote:\n> Works for me. One other thought --- do we care whether this works\n> in a VPATH build, and if so does it? The $< and $@ references should\n> be OK, but I'm betting you need $(srcdir)/Gen_dummy_probes.pl.prolog\n> or the like.\n\nIt doesn't work in a VPATH build right now, FWIW. $@, $< will point to a\nlocal file in the build directory, right now. And the path to perltidyrc\ndoesn't work either. It seems to work after the following modifications\n\ndiff --git i/src/backend/utils/Makefile w/src/backend/utils/Makefile\nindex bcf9dd41adf..ca733d12dce 100644\n--- i/src/backend/utils/Makefile\n+++ w/src/backend/utils/Makefile\n@@ -92,10 +92,10 @@ $(top_builddir)/src/include/utils/probes.h: probes.h\n # Nothing depends on it, so it will never be called unless explicitly requested\n # The last two lines of the recipe format the script according to our\n # standard and put back some blank lines for improved readability.\n-Gen_dummy_probes.pl: Gen_dummy_probes.sed\n+$(top_srcdir)/src/backend/utils/Gen_dummy_probes.pl: $(top_srcdir)/src/backend/utils/Gen_dummy_probes.sed\n perl -ni -e ' print; exit if /^\\$$0/;' $@\n s2p -f $< | sed -e 1,4d -e '/# #/d' -e '$$d' >> $@\n- perltidy --profile=../../tools/pgindent/perltidyrc $@\n+ perltidy --profile=$(top_srcdir)/src/tools/pgindent/perltidyrc $@\n perl -pi -e '!$$lb && ( /^\\t+#/ || /^# prototypes/ ) && print qq{\\n};'\\\n -e '$$lb = m/^\\n/; ' $@\n \ndiff --git i/src/test/regress/parallel_schedule 
w/src/test/regress/parallel_schedule\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 10:21:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 11:44:22 -0400, Andrew Dunstan wrote:\n> Why would we? It's only used in Windows builds, and there's no VPATH\n> there (sadly).\n\nIs that really relevant? We'll need to update the file on any platform\nwhen modifying the .sed, not just in windows.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 10:22:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/11/21 1:21 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-11 10:52:22 -0400, Tom Lane wrote:\n>> Works for me. One other thought --- do we care whether this works\n>> in a VPATH build, and if so does it? The $< and $@ references should\n>> be OK, but I'm betting you need $(srcdir)/Gen_dummy_probes.pl.prolog\n>> or the like.\n> It doesn't work in a VPATH build right now, FWIW. $@, $< will point to a\n> local file in the build directory, right now. And the path to perltidyrc\n> doesn't work either. 
It seems to work after the following modifications\n>\n> diff --git i/src/backend/utils/Makefile w/src/backend/utils/Makefile\n> index bcf9dd41adf..ca733d12dce 100644\n> --- i/src/backend/utils/Makefile\n> +++ w/src/backend/utils/Makefile\n> @@ -92,10 +92,10 @@ $(top_builddir)/src/include/utils/probes.h: probes.h\n> # Nothing depends on it, so it will never be called unless explicitly requested\n> # The last two lines of the recipe format the script according to our\n> # standard and put back some blank lines for improved readability.\n> -Gen_dummy_probes.pl: Gen_dummy_probes.sed\n> +$(top_srcdir)/src/backend/utils/Gen_dummy_probes.pl: $(top_srcdir)/src/backend/utils/Gen_dummy_probes.sed\n> perl -ni -e ' print; exit if /^\\$$0/;' $@\n> s2p -f $< | sed -e 1,4d -e '/# #/d' -e '$$d' >> $@\n> - perltidy --profile=../../tools/pgindent/perltidyrc $@\n> + perltidy --profile=$(top_srcdir)/src/tools/pgindent/perltidyrc $@\n> perl -pi -e '!$$lb && ( /^\\t+#/ || /^# prototypes/ ) && print qq{\\n};'\\\n> -e '$$lb = m/^\\n/; ' $@\n> \n\n\nYeah, but this will create the perl file in the vpath directory where it\nwon't ever be used anyway. 
You really want this back in the source\ndirectory where you can check it in etc.\n\nI came up with this:\n\n\nGen_dummy_probes.pl: $(top_srcdir)/$(subdir)/Gen_dummy_probes.sed $(top_srcdir)/$(subdir)/Gen_dummy_probes.pl.prolog\n    cp $(top_srcdir)/$(subdir)/Gen_dummy_probes.pl.prolog $(top_srcdir)/$(subdir)/$@\n    s2p -f $<  | sed -e 1,3d -e '/# #/ d' -e '$$d' >> $(top_srcdir)/$(subdir)/$@\n    perltidy --profile=$(top_srcdir)/$(subdir)/../../tools/pgindent/perltidyrc $(top_srcdir)/$(subdir)/$@\n    perl -pi -e '!$$lb && ( /^\\t+#/  || /^# prototypes/ ) && print qq{\\n};'\\\n        -e '$$lb = m/^\\n/; ' $(top_srcdir)/$(subdir)/$@\n\n\nI'm not aware of any other case where we generate an in-tree file from a\nvpath, which is why it feels strange.\n\n\ncheers\n\n\nandrew\n\n\n\n\n", "msg_date": "Tue, 11 May 2021 14:30:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/11/21 2:30 PM, Andrew Dunstan wrote:\n> On 5/11/21 1:21 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-05-11 10:52:22 -0400, Tom Lane wrote:\n>>> Works for me. One other thought --- do we care whether this works\n>>> in a VPATH build, and if so does it? The $< and $@ references should\n>>> be OK, but I'm betting you need $(srcdir)/Gen_dummy_probes.pl.prolog\n>>> or the like.\n>> It doesn't work in a VPATH build right now, FWIW. $@, $< will point to a\n>> local file in the build directory, right now. And the path to perltidyrc\n>> doesn't work either. 
It seems to work after the following modifications\n>>\n>> diff --git i/src/backend/utils/Makefile w/src/backend/utils/Makefile\n>> index bcf9dd41adf..ca733d12dce 100644\n>> --- i/src/backend/utils/Makefile\n>> +++ w/src/backend/utils/Makefile\n>> @@ -92,10 +92,10 @@ $(top_builddir)/src/include/utils/probes.h: probes.h\n>> # Nothing depends on it, so it will never be called unless explicitly requested\n>> # The last two lines of the recipe format the script according to our\n>> # standard and put back some blank lines for improved readability.\n>> -Gen_dummy_probes.pl: Gen_dummy_probes.sed\n>> +$(top_srcdir)/src/backend/utils/Gen_dummy_probes.pl: $(top_srcdir)/src/backend/utils/Gen_dummy_probes.sed\n>> perl -ni -e ' print; exit if /^\\$$0/;' $@\n>> s2p -f $< | sed -e 1,4d -e '/# #/d' -e '$$d' >> $@\n>> - perltidy --profile=../../tools/pgindent/perltidyrc $@\n>> + perltidy --profile=$(top_srcdir)/src/tools/pgindent/perltidyrc $@\n>> perl -pi -e '!$$lb && ( /^\\t+#/ || /^# prototypes/ ) && print qq{\\n};'\\\n>> -e '$$lb = m/^\\n/; ' $@\n>> \n>\n> Yeah, but this will create the perl file in the vpath directory where it\n> won't ever be used anyway. 
You really want this back in the source\n> directory where you can check it in etc.\n>\n> I came up with this:\n>\n>\n> Gen_dummy_probes.pl: $(top_srcdir)/$(subdir)/Gen_dummy_probes.sed $(top_srcdir)/$(subdir)/Gen_dummy_probes.pl.prolog\n>     cp $(top_srcdir)/$(subdir)/Gen_dummy_probes.pl.prolog $(top_srcdir)/$(subdir)/$@\n>     s2p -f $<  | sed -e 1,3d -e '/# #/ d' -e '$$d' >> $(top_srcdir)/$(subdir)/$@\n>     perltidy --profile=$(top_srcdir)/$(subdir)/../../tools/pgindent/perltidyrc $(top_srcdir)/$(subdir)/$@\n>     perl -pi -e '!$$lb && ( /^\\t+#/  || /^# prototypes/ ) && print qq{\\n};'\\\n>         -e '$$lb = m/^\\n/; ' $(top_srcdir)/$(subdir)/$@\n>\n>\n> I'm not aware of any other case where we generate an in-tree file from a\n> vpath, which is why it feels strange.\n\n\n\nSimplified version:\n\n\nGen_dummy_probes.pl: $(srcdir)/Gen_dummy_probes.sed $(srcdir)/Gen_dummy_probes.pl.prolog\n    cp $(srcdir)/Gen_dummy_probes.pl.prolog $(srcdir)/$@\n    s2p -f $<  | sed -e 1,3d -e '/# #/ d' -e '$$d' >> $(srcdir)/$@\n    perltidy --profile=$(srcdir)/../../tools/pgindent/perltidyrc $(srcdir)/$@\n    perl -pi -e '!$$lb && ( /^\\t+#/  || /^# prototypes/ ) && print qq{\\n};'\\\n        -e '$$lb = m/^\\n/; ' $(srcdir)/$@\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 11 May 2021 15:46:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 14:30:10 -0400, Andrew Dunstan wrote:\n> On 5/11/21 1:21 PM, Andres Freund wrote:\n> > It doesn't work in a VPATH build right now, FWIW. $@, $< will point to a\n> > local file in the build directory, right now. And the path to perltidyrc\n> > doesn't work either. 
It seems to work after the following modifications\n> >\n> > diff --git i/src/backend/utils/Makefile w/src/backend/utils/Makefile\n> > index bcf9dd41adf..ca733d12dce 100644\n> > --- i/src/backend/utils/Makefile\n> > +++ w/src/backend/utils/Makefile\n> > @@ -92,10 +92,10 @@ $(top_builddir)/src/include/utils/probes.h: probes.h\n> > # Nothing depends on it, so it will never be called unless explicitly requested\n> > # The last two lines of the recipe format the script according to our\n> > # standard and put back some blank lines for improved readability.\n> > -Gen_dummy_probes.pl: Gen_dummy_probes.sed\n> > +$(top_srcdir)/src/backend/utils/Gen_dummy_probes.pl: $(top_srcdir)/src/backend/utils/Gen_dummy_probes.sed\n> > perl -ni -e ' print; exit if /^\\$$0/;' $@\n> > s2p -f $< | sed -e 1,4d -e '/# #/d' -e '$$d' >> $@\n> > - perltidy --profile=../../tools/pgindent/perltidyrc $@\n> > + perltidy --profile=$(top_srcdir)/src/tools/pgindent/perltidyrc $@\n> > perl -pi -e '!$$lb && ( /^\\t+#/ || /^# prototypes/ ) && print qq{\\n};'\\\n> > -e '$$lb = m/^\\n/; ' $@\n> > \n> \n> \n> Yeah, but this will create the perl file in the vpath directory where it\n> won't ever be used anyway. You really want this back in the source\n> directory where you can check it in etc.\n\nHm, why would it? 
Both $< and $@ will be in the source tree, as\nformulated above?\n\nandres@awork3:~/src/postgresql$ git diff src/backend/utils/Gen_dummy_probes.pl\nandres@awork3:~/src/postgresql$ echo foo >> src/backend/utils/Gen_dummy_probes.pl\nandres@awork3:~/src/postgresql$ git diff src/backend/utils/Gen_dummy_probes.pl\ndiff --git i/src/backend/utils/Gen_dummy_probes.pl w/src/backend/utils/Gen_dummy_probes.pl\nindex 4852103daf4..1b724bf1038 100644\n--- i/src/backend/utils/Gen_dummy_probes.pl\n+++ w/src/backend/utils/Gen_dummy_probes.pl\n@@ -257,3 +257,4 @@ sub printQ()\n }\n undef(@Q);\n }\n+foo\nandres@awork3:~/src/postgresql$ touch src/backend/utils/Gen_dummy_probes.sed && make -C /home/andres/build/postgres/dev-assert/vpath/src/backend/utils Gen_dummy_probes.pl\nperl -ni -e ' print; exit if /^\\$0/;' /home/andres/src/postgresql/src/backend/utils/Gen_dummy_probes.pl\ns2p -f /home/andres/src/postgresql/src/backend/utils/Gen_dummy_probes.sed | sed -e 1,4d -e '/# #/d' -e '$d' >> /home/andres/src/postgresql/src/backend/utils/Gen_dummy_probes.pl\nperltidy --profile=/home/andres/src/postgresql/src/tools/pgindent/perltidyrc /home/andres/src/postgresql/src/backend/utils/Gen_dummy_probes.pl\nperl -pi -e '!$lb && ( /^\\t+#/ || /^# prototypes/ ) && print qq{\\n};'\\\n\t-e '$lb = m/^\\n/; ' /home/andres/src/postgresql/src/backend/utils/Gen_dummy_probes.pl\nandres@awork3:~/src/postgresql$ git diff src/backend/utils/Gen_dummy_probes.pl\nandres@awork3:~/src/postgresql$ ls /home/andres/build/postgres/dev-assert/vpath/src/backend/utils/Gen_dummy_probes*\n/home/andres/build/postgres/dev-assert/vpath/src/backend/utils/Gen_dummy_probes.pl.tdy\n\nSo only a temp file ends up in the build dir?\n\n\n
> I came up with this:\n> \n> \n> Gen_dummy_probes.pl: $(top_srcdir)/$(subdir)/Gen_dummy_probes.sed $(top_srcdir)/$(subdir)/Gen_dummy_probes.pl.prolog\n>     cp $(top_srcdir)/$(subdir)/Gen_dummy_probes.pl.prolog $(top_srcdir)/$(subdir)/$@\n>     s2p -f $<  | sed -e 1,3d -e '/# #/ d' -e '$$d' >> $(top_srcdir)/$(subdir)/$@\n>     perltidy --profile=$(top_srcdir)/$(subdir)/../../tools/pgindent/perltidyrc $(top_srcdir)/$(subdir)/$@\n>     perl -pi -e '!$$lb && ( /^\\t+#/  || /^# prototypes/ ) && print qq{\\n};'\\\n>         -e '$$lb = m/^\\n/; ' $(top_srcdir)/$(subdir)/$@\n> \n> \n> I'm not aware of any other case where we generate an in-tree file from a\n> vpath, which is why it feels strange.\n\nYea, it is a bit odd, agreed. We don't have many generated sources\ninside the git repo (vs in the tarball). The most prominent one is\nconfigure, obviously...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 12:47:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?"
}, { "msg_contents": "On 2021-May-11, Andrew Dunstan wrote:\n\n> Yeah, but this will create the perl file in the vpath directory where it\n> won't ever be used anyway. You really want this back in the source\n> directory where you can check it in etc.\n\nHmm ... no, it doesn't. As far as I can see it works as intended. The\npath to the prolog file is broken, yes, but that's it. However, maybe\nI'm doing something wrong but if you define the rule like that, then it\nno longer works in normal (non-vpath) mode. This works:\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W", "msg_date": "Tue, 11 May 2021 15:53:06 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?"
}, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-11 14:30:10 -0400, Andrew Dunstan wrote:\n>> I'm not aware of any other case where we generate an in-tree file from a\n>> vpath, which is why it feels strange.\n\n> Yea, it is a bit odd, agreed. We don't have many generated sources\n> inside the git repo (vs in the tarball). 
The most prominent one is\n> configure, obviously...\n\nI think this is overly cute. As a counterexample, the rules to regenerate\ngram.c and similar files don't bend over backwards like that to force the\noutput to be in the srcdir.\n\nI haven't dug in the gmake manual to be sure, but I think that in a VPATH\nbuild, $@ will refer to the file in the srcdir if the file exists there\nbut is out-of-date. So if you go with the straightforward use of $< and\n$@, I believe it will in fact work. The only way to make it fail under\nVPATH would be to do\n\trm path/to/srcdir/Gen_dummy_probes.pl; make Gen_dummy_probes.pl\nwhich I think is sufficiently unlikely to not be a problem. In fact,\none could argue that building Gen_dummy_probes.pl in the VPATH dir\nis exactly what the user is trying to make happen if she does this.\n\nIn short: don't be cuter than the longstanding bison/flex rules are.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 16:01:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/11/21 4:01 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2021-05-11 14:30:10 -0400, Andrew Dunstan wrote:\n>>> I'm not aware of any other case where we generate an in-tree file from a\n>>> vpath, which is why it feels strange.\n>> Yea, it is a bit odd, agreed. We don't have many generated sources\n>> inside the git repo (vs in the tarball). The most prominent one is\n>> configure, obviously...\n> I think this is overly cute. As a counterexample, the rules to regenerate\n> gram.c and similar files don't bend over backwards like that to force the\n> output to be in the srcdir.\n>\n> I haven't dug in the gmake manual to be sure, but I think that in a VPATH\n> build, $@ will refer to the file in the srcdir if the file exists there\n> but is out-of-date. 
So if you go with the straightforward use of $< and\n> $@, I believe it will in fact work. The only way to make it fail under\n> VPATH would be to do\n> \trm path/to/srcdir/Gen_dummy_probes.pl; make Gen_dummy_probes.pl\n> which I think is sufficiently unlikely to not be a problem. In fact,\n> one could argue that building Gen_dummy_probes.pl in the VPATH dir\n> is exactly what the user is trying to make happen if she does this.\n>\n> In short: don't be cuter than the longstanding bison/flex rules are.\n>\n> \t\t\t\n\nWhat will she do with it? gram.c generated in a vpath build is 100%\nusable where it's generated. Also. it's not a file we keep in the git repo.\n\nNot gonna fight, there's been way too much energy spent on this. I'll\njust do what Alvaro suggested. But I won't be surprised if some future\ncommit is missing the perl update.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:43:29 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" }, { "msg_contents": "\nOn 5/11/21 5:43 PM, Andrew Dunstan wrote:\n> On 5/11/21 4:01 PM, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2021-05-11 14:30:10 -0400, Andrew Dunstan wrote:\n>>>> I'm not aware of any other case where we generate an in-tree file from a\n>>>> vpath, which is why it feels strange.\n>>> Yea, it is a bit odd, agreed. We don't have many generated sources\n>>> inside the git repo (vs in the tarball). The most prominent one is\n>>> configure, obviously...\n>> I think this is overly cute. 
As a counterexample, the rules to regenerate\n>> gram.c and similar files don't bend over backwards like that to force the\n>> output to be in the srcdir.\n>>\n>> I haven't dug in the gmake manual to be sure, but I think that in a VPATH\n>> build, $@ will refer to the file in the srcdir if the file exists there\n>> but is out-of-date. So if you go with the straightforward use of $< and\n>> $@, I believe it will in fact work. The only way to make it fail under\n>> VPATH would be to do\n>> \trm path/to/srcdir/Gen_dummy_probes.pl; make Gen_dummy_probes.pl\n>> which I think is sufficiently unlikely to not be a problem. In fact,\n>> one could argue that building Gen_dummy_probes.pl in the VPATH dir\n>> is exactly what the user is trying to make happen if she does this.\n>>\n>> In short: don't be cuter than the longstanding bison/flex rules are.\n>>\n>> \t\t\t\n> What will she do with it? gram.c generated in a vpath build is 100%\n> usable where it's generated. Also. it's not a file we keep in the git repo.\n>\n> Not gonna fight, there's been way too much energy spent on this. I'll\n> just do what Alvaro suggested. But I won't be surprised if some future\n> commit is missing the perl update.\n>\n>\n\n\nBelay that. His patch does what I tried to do but does it right. I'll\nfigure it out.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:54:33 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why do we have perl and sed versions of Gen_dummy_probes?" } ]
[ { "msg_contents": "Hi hackers,\n With PG 13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), I tested\nsubtransaction with parallel scan, I got a subtransaction coredump as below:\n\n(gdb) bt\n#0 0x00001517ce61f7ff in raise () from /lib64/libc.so.6\n#1 0x00001517ce609c35 in abort () from /lib64/libc.so.6\n#2 0x0000000000aaf93f in ExceptionalCondition (conditionName=0xb4c920\n\"TransactionIdFollowsOrEquals(xid, TransactionXmin)\", errorType=0xb4c796\n\"FailedAssertion\", \n fileName=0xb4c738\n\"/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/\ntransam/subtrans.c\", lineNumber=156)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/er\nror/assert.c:67\n#3 0x0000000000563111 in SubTransGetTopmostTransaction (xid=196963)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/t\nransam/subtrans.c:156\n#4 0x0000000000b05206 in XidInMVCCSnapshot (xid=196963, snapshot=0x2f8ec58)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/ti\nme/snapmgr.c:2293\n#5 0x00000000004ff2bc in HeapTupleSatisfiesMVCC (htup=0x7ffc21807120,\nsnapshot=0x2f8ec58, buffer=1946)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/h\neap/heapam_visibility.c:1073\n#6 0x0000000000500363 in HeapTupleSatisfiesVisibility (tup=0x7ffc21807120,\nsnapshot=0x2f8ec58, buffer=1946)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/h\neap/heapam_visibility.c:1695\n#7 0x00000000004e423b in heapgetpage (sscan=0x2f8e840, page=1685) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/h\neap/heapam.c:447\n#8 0x00000000004e68bf in heapgettup_pagemode (scan=0x2f8e840,\ndir=ForwardScanDirection, nkeys=0, key=0x0)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/h\neap/heapam.c:1077\n#9 0x00000000004e6eb6 in heap_getnextslot (sscan=0x2f8e840,\ndirection=ForwardScanDirection, slot=0x2fd8a38)\n 
at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/h\neap/heapam.c:1333\n#10 0x000000000075350e in table_scan_getnextslot (sscan=0x2f8e840,\ndirection=ForwardScanDirection, slot=0x2fd8a38)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/include/access/t\nableam.h:906\n#11 0x00000000007535d6 in SeqNext (node=0x2fd86a8) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/nodeSeqscan.c:80\n#12 0x000000000071af3c in ExecScanFetch (node=0x2fd86a8, accessMtd=0x753542\n<SeqNext>, recheckMtd=0x7535e7 <SeqRecheck>)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execScan.c:133\n#13 0x000000000071afdd in ExecScan (node=0x2fd86a8, accessMtd=0x753542\n<SeqNext>, recheckMtd=0x7535e7 <SeqRecheck>)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execScan.c:199\n#14 0x0000000000753631 in ExecSeqScan (pstate=0x2fd86a8) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/nodeSeqscan.c:112\n#15 0x00000000007173af in ExecProcNodeFirst (node=0x2fd86a8) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execProcnode.c:450\n#16 0x000000000070b7f7 in ExecProcNode (node=0x2fd86a8) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/include/executor\n/executor.h:248\n#17 0x000000000070e302 in ExecutePlan (estate=0x2fd7ca0,\nplanstate=0x2fd86a8, use_parallel_mode=false, operation=CMD_SELECT,\nsendTuples=true, numberTuples=0, \n direction=ForwardScanDirection, dest=0x2f8ddd8, execute_once=true) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execMain.c:1632\n#18 0x000000000070be22 in standard_ExecutorRun (queryDesc=0x2f8e7a8,\ndirection=ForwardScanDirection, count=0, execute_once=true)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execMain.c:350\n#19 0x000000000070bc50 in ExecutorRun 
(queryDesc=0x2f8e7a8,\ndirection=ForwardScanDirection, count=0, execute_once=true)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execMain.c:294\n#20 0x00000000007131d5 in ParallelQueryMain (seg=0x2ef30b8,\ntoc=0x1517cf85c000)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor\n/execParallel.c:1448\n#21 0x000000000055f70c in ParallelWorkerMain (main_arg=897455922) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/t\nransam/parallel.c:1470\n#22 0x000000000086e255 in StartBackgroundWorker () at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmast\ner/bgworker.c:879\n#23 0x000000000088192c in do_start_bgworker (rw=0x2f221c0) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmast\ner/postmaster.c:5870\n#24 0x0000000000881cd8 in maybe_start_bgworkers () at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmast\ner/postmaster.c:6095\n#25 0x0000000000880d14 in sigusr1_handler (postgres_signal_arg=10) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmast\ner/postmaster.c:5255\n#26 <signal handler called>\n#27 0x00001517ce6dc4bb in select () from /lib64/libc.so.6\n#28 0x000000000087c867 in ServerLoop () at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmast\ner/postmaster.c:1703\n#29 0x000000000087c232 in PostmasterMain (argc=3, argv=0x2ef1070) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmast\ner/postmaster.c:1412\n#30 0x0000000000783418 in main (argc=3, argv=0x2ef1070) at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/main/mai\nn.c:210\n(gdb) f 3\n#3 0x0000000000563111 in SubTransGetTopmostTransaction (xid=196963)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/t\nransam/subtrans.c:325\n325 Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));\n(gdb) p xid\n$1 
= 196963\n(gdb) f 4\n#4 0x0000000000b05206 in XidInMVCCSnapshot (xid=196963, snapshot=0x2f8ec58)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/ti\nme/snapmgr.c:2293\n2293 xid = SubTransGetTopmostTransaction(xid);\n(gdb) p *snapshot\n$2 = {snapshot_type = SNAPSHOT_MVCC, xmin = 196962, xmax = 210314, xip =\n0x2f8ecc0, xcnt = 82, subxip = 0x0, subxcnt = 0, suboverflowed = true, \n takenDuringRecovery = false, copied = true, curcid = 14, speculativeToken\n= 2139062143, active_count = 0, regd_count = 1, ph_node = {first_child =\n0x0, \n next_sibling = 0xf65ca0 <CatalogSnapshotData+64>, prev_or_parent =\n0x2f8dbc8}, whenTaken = 0, lsn = 0}\n(gdb) p TransactionXmin\n$3 = 196992\n(gdb) f 3\n#3 0x0000000000563111 in SubTransGetTopmostTransaction (xid=196963)\n at\n/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/t\nransam/subtrans.c:325\n325 Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));\n(gdb) p xid\n$4 = 196963\n(gdb) p TransactionXmin\n$5 = 196992\n\nAfter a simple analysis, I think this is a bug:\n\n1. The main process first gets the active snapshot (xmin 196962, xmax 210314)\nand pushes it as the active snapshot. 
\n2. The main process calls InitializeParallelDSM, gets a transaction snapshot\n(xmin 196992, xmax 210320), and serializes it under\nPARALLEL_KEY_TRANSACTION_SNAPSHOT.\n3. The main process calls\nExecParallelInitializeDSM->ExecSeqScanInitializeDSM->table_parallelscan_initialize,\nwhich serializes the active snapshot (xmin 196962, xmax 210314).\n4. The main process launches the parallel worker processes.\n\nParallel worker process work flow:\n1. ParallelWorkerMain->BackgroundWorkerInitializeConnectionByOid->\nGetTransactionSnapshot gets a snapshot (xmin 196992, xmax 210320) and sets\nTransactionXmin to 196992.\n2. ParallelWorkerMain->RestoreTransactionSnapshot->SetTransactionSnapshot->\nProcArrayInstallRestoredXmin sets TransactionXmin to 196992.\n3. ParallelWorkerMain->ExecParallelInitializeWorker->ExecSeqScanInitializeWorker->\ntable_beginscan_parallel restores the active snapshot (xmin 196962, xmax 210314)\nfrom the main process.\n4. Coredump in\nParallelWorkerMain->ParallelQueryMain->ExecutorRun...ExecScan->heapgetpage->\nHeapTupleSatisfiesMVCC->XidInMVCCSnapshot->SubTransGetTopmostTransaction\n\nSo the root cause is that the parallel worker process sets TransactionXmin\nfrom the later transaction snapshot, while the parallel scan itself uses the\nolder active snapshot. That mismatch trips the subtrans assertion and the\ncoredump. I cannot come up with a better solution; are there any ideas?\n", "msg_date": "Thu, 6 May 2021 17:34:08 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "PG 13.2 Parallel scan with SubTransGetTopmostTransaction assert\n coredump" } ]
[ { "msg_contents": "Hi,\n\nWhile looking for something else, I noticed thorntail has failed twice\nlike this, on REL_12_STABLE:\n\nTRAP: FailedAssertion(\"!(CritSectionCount == 0 ||\n(context)->allowInCritSection)\", File:\n\"/home/nm/farm/sparc64_deb10_gcc_64_ubsan/REL_12_STABLE/pgsql.build/../pgsql/src/backend/utils/mmgr/mcxt.c\",\nLine: 931)\n\nIn both cases, the last thing that process said was:\n\n2021-05-05 20:25:51.650 MSK [3442631:6] 008_fsm_truncation.pl LOG:\nreceived replication command: BASE_BACKUP LABEL 'pg_basebackup base\nbackup' FAST NOWAIT\n\nUnfortunately there is no libbacktrace in that release, and for some\nreason we don't see a core being analysed... (gdb not installed,\nlooking for wrong core file pattern, ...?)\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-05-05%2017%3A08%3A00\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-04-22%2016%3A59%3A01\n\n\n", "msg_date": "Thu, 6 May 2021 22:35:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> While looking for something else, I noticed thorntail has failed twice\n> like this, on REL_12_STABLE:\n> TRAP: FailedAssertion(\"!(CritSectionCount == 0 ||\n> (context)->allowInCritSection)\", File:\n> \"/home/nm/farm/sparc64_deb10_gcc_64_ubsan/REL_12_STABLE/pgsql.build/../pgsql/src/backend/utils/mmgr/mcxt.c\",\n> Line: 931)\n\nAfter failing to reproduce this locally, I went so far as to sign up\nfor a gcc compile farm account so I could try to reproduce it on the\nmachine running thorntail. 
I succeeded, after more than a few tries,\nand here is the smoking gun:\n\n#3 0x00000100007f792c in ExceptionalCondition (\n conditionName=0x10000a38b80 \"!(CritSectionCount == 0 || (context)->allowInCritSection)\", errorType=0x1000087fb20 \"FailedAssertion\", \n fileName=0x10000a38908 \"mcxt.c\", lineNumber=<optimized out>) at assert.c:54\n#4 0x00000100008422f4 in palloc (size=64) at mcxt.c:931\n#5 0x00000100001f5cec in XLogFileNameP (tli=<optimized out>, segno=1)\n at xlog.c:10209\n#6 0x00000100001f6220 in issue_xlog_fsync (fd=<optimized out>, segno=1)\n at xlog.c:10186\n#7 0x00000100001f6784 in XLogWrite (WriteRqst=..., flexible=<optimized out>)\n at xlog.c:2607\n#8 0x00000100001f793c in XLogFlush (record=23717128) at xlog.c:2926\n#9 XLogFlush (record=23717128) at xlog.c:2802\n#10 0x00000100001fe71c in XLogReportParameters () at xlog.c:9525\n#11 StartupXLOG () at xlog.c:7805\n#12 0x0000010000552d30 in StartupProcessMain () at startup.c:226\n#13 0x0000010000215c1c in AuxiliaryProcessMain (argc=2, argv=0x7feffdc2f80)\n at bootstrap.c:451\n\nThe interesting part of this is frame 6, which points here:\n\n case SYNC_METHOD_FDATASYNC:\n if (pg_fdatasync(fd) != 0)\n ereport(PANIC,\n (errcode_for_file_access(),\n errmsg(\"could not fdatasync file \\\"%s\\\": %m\",\n XLogFileNameP(ThisTimeLineID, segno))));\n\nSo fdatasync() failed, and the code attempting to report that is not\ncritical-section-safe because it includes a palloc. Checking the state\nof elog.c's error stack shows that the failure was errno = 5, or EIO.\n\nConclusions:\n\n1. No wonder we could not reproduce it anywhere else. I've warned\nthe cfarm admins that their machine may be having hardware issues.\n\n2. We evidently need to put a bit more effort into this error\nreporting logic. More generally, I wonder how we could audit\nthe code for similar hazards elsewhere, because I bet there are\nsome. (Or ... could it be sane to run functions included in\nthe ereport's arguments in ErrorContext?)\n\n3. 
One might wonder why we're getting an fdatasync failure at\nall, when thorntail is configured to run with fsync = off.\nThe answer to that one is that 008_fsm_truncation.pl takes it\nupon itself to force fsync = on, overriding the express wishes\nof the buildfarm owner, not to mention general project policy.\nAFAICT that was added with little if any thought in the initial\ncreation of 008_fsm_truncation.pl, and I think we should take\nit out. There's certainly no visible reason for this one\nTAP script to be running with fsync on when no others do.\n\n> Unfortunately there is no libbacktrace in that release, and for some\n> reason we don't see a core being analysed... (gdb not installed,\n> looking for wrong core file pattern, ...?)\n\nThat I'm not sure about. gdb is certainly installed, and thorntail is\nvisibly running the current buildfarm client and is configured with the\ncorrect core_file_glob, and I can report that the crash did leave a 'core'\nfile in the data directory (so it's not a case of systemd commandeering\nthe core dump). Seems like core-file collection should've worked\n... unless maybe it's not covering TAP tests at all?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 May 2021 21:43:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Thu, May 06, 2021 at 09:43:32PM -0400, Tom Lane wrote:\n> 2. We evidently need to put a bit more effort into this error\n> reporting logic. More generally, I wonder how we could audit\n> the code for similar hazards elsewhere, because I bet there are\n> some. (Or ... could it be sane to run functions included in\n> the ereport's arguments in ErrorContext?)\n\nSeems reasonable. I don't have good ideas for auditing; just making the\npalloc work may be easier.\n\n> 3. 
One might wonder why we're getting an fdatasync failure at\n> all, when thorntail is configured to run with fsync = off.\n> The answer to that one is that 008_fsm_truncation.pl takes it\n> upon itself to force fsync = on, overriding the express wishes\n> of the buildfarm owner, not to mention general project policy.\n> AFAICT that was added with little if any thought in the initial\n> creation of 008_fsm_truncation.pl, and I think we should take\n> it out. There's certainly no visible reason for this one\n> TAP script to be running with fsync on when no others do.\n\nI've caught that one test taking ~10min due to its fsync use. If fsync=on\nisn't important to the test, +1 for removing it.\n\n\n", "msg_date": "Thu, 6 May 2021 19:28:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Hi,\n\nOn 2021-05-06 21:43:32 -0400, Tom Lane wrote:\n> 2. We evidently need to put a bit more effort into this error\n> reporting logic. More generally, I wonder how we could audit\n> the code for similar hazards elsewhere, because I bet there are\n> some. (Or ... could it be sane to run functions included in\n> the ereport's arguments in ErrorContext?)\n\nI have wondered about that before myself. It's pretty awkward to solve\nthese kind of things at the caller level, and we have a perfectly good\ncontext to do this in, that we know is going to be reset. However - we\ndon't reset ErrorContext for DEBUG messages, I believe. So there'd be a\nnoticeable increase in leaking into ErrorContext, unless we change how\nwe do that?\n\nI guess I could see only switching to another memory context for >=\nERROR, but it does seem a bit odd. But for PANIC etc it's quite annoying\nto loose the actual error message on the buildfarm.\n\n\n> > Unfortunately there is no libbacktrace in that release, and for some\n> > reason we don't see a core being analysed... 
(gdb not installed,\n> > looking for wrong core file pattern, ...?)\n> \n> That I'm not sure about. gdb is certainly installed, and thorntail is\n> visibly running the current buildfarm client and is configured with the\n> correct core_file_glob, and I can report that the crash did leave a 'core'\n> file in the data directory (so it's not a case of systemd commandeering\n> the core dump). Seems like core-file collection should've worked\n> ... unless maybe it's not covering TAP tests at all?\n\nI suspect that is it - there's not really a good way for the buildfarm\nclient to even know where there could be data directories :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 20:50:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-06 21:43:32 -0400, Tom Lane wrote:\n>> That I'm not sure about. gdb is certainly installed, and thorntail is\n>> visibly running the current buildfarm client and is configured with the\n>> correct core_file_glob, and I can report that the crash did leave a 'core'\n>> file in the data directory (so it's not a case of systemd commandeering\n>> the core dump). Seems like core-file collection should've worked\n>> ... unless maybe it's not covering TAP tests at all?\n\n> I suspect that is it - there's not really a good way for the buildfarm\n> client to even know where there could be data directories :(.\n\nDoes it need to? 
I'm envisioning \"find tmp_check -name '$core_file_glob'\"\nor something along that line.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 00:30:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 00:30:11 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-05-06 21:43:32 -0400, Tom Lane wrote:\n> >> That I'm not sure about. gdb is certainly installed, and thorntail is\n> >> visibly running the current buildfarm client and is configured with the\n> >> correct core_file_glob, and I can report that the crash did leave a 'core'\n> >> file in the data directory (so it's not a case of systemd commandeering\n> >> the core dump). Seems like core-file collection should've worked\n> >> ... unless maybe it's not covering TAP tests at all?\n> \n> > I suspect that is it - there's not really a good way for the buildfarm\n> > client to even know where there could be data directories :(.\n> \n> Does it need to? I'm envisioning \"find tmp_check -name '$core_file_glob'\"\n> or something along that line.\n\nYea, it'd be doable that way. It'd be a bit harder to associate the core\nfiles with specific tests though. 
But I now checked, and it indeed\nchecks for core files in a specific subset of tests, and that that test\nonly globs inside the passed-in datadir.\n\nsub get_stack_trace\n{\n return get_stack_trace_cygwin(@_) if ($ENV{CYGWIN});\n\n my $bindir = shift;\n my $pgdata = shift;\n\n # no core = no result\n my @cores = glob(\"$pgdata/$core_file_glob\");\n return () unless @cores;\n...\n\nandres@awork3:~/src/pgbuildfarm-client$ ack get_stack_trace\nrun_build.pl\n1601:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n1637:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n1711:\t\tmy @trace = get_stack_trace(\"$installdir/bin\", \"$installdir/data\");\n1750:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n1798:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n2010:\t\tmy @trace = get_stack_trace(\"$binloc$installdir/bin\", \"$base/data\");\n2061:\t\t get_stack_trace(\"$base/install$installdir/bin\", \"$base/data\");\n\nPGBuild/Utils.pm\n30: get_stack_trace cleanlogs writelog\n148:sub get_stack_trace_cygwin\n173:sub get_stack_trace\n175:\treturn get_stack_trace_cygwin(@_) if ($ENV{CYGWIN});\n\nPGBuild/Modules/RedisFDW.pm\n214:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n\nPGBuild/Modules/TestUpgrade.pm\n148:\t\tmy @trace = get_stack_trace(\"$tmp_bin_dir\", \"$tmp_data_dir\");\n\nPGBuild/Modules/TestCollateLinuxUTF8.pm\n122:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n\nPGBuild/Modules/TestICU.pm\n110:\t\t get_stack_trace(\"$installdir/bin\", \"$installdir/data-$locale\");\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 21:38:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Fri, May 7, 2021 at 1:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The 
interesting part of this is frame 6, which points here:\n>\n> case SYNC_METHOD_FDATASYNC:\n> if (pg_fdatasync(fd) != 0)\n> ereport(PANIC,\n> (errcode_for_file_access(),\n> errmsg(\"could not fdatasync file \\\"%s\\\": %m\",\n> XLogFileNameP(ThisTimeLineID, segno))));\n\nOh, and I see that 13 has 9989d37d \"Remove XLogFileNameP() from the\ntree\" to fix this exact problem.\n\n\n", "msg_date": "Fri, 7 May 2021 16:42:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, May 7, 2021 at 1:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The interesting part of this is frame 6, which points here:\n\n> Oh, and I see that 13 has 9989d37d \"Remove XLogFileNameP() from the\n> tree\" to fix this exact problem.\n\nHah, so that maybe explains why thorntail has only shown this in\nthe v12 branch. Should we consider back-patching that?\n\nThe more general issue of how to detect, or else make safe,\npallocs in critical error reports remains. But I bet any answer\nwe think of for that will not be back-patchable. So back-patching\na localized fix for the specific bug we know of might be worth doing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 00:53:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "I wrote:\n> 1. No wonder we could not reproduce it anywhere else. I've warned\n> the cfarm admins that their machine may be having hardware issues.\n\nI heard back from the machine's admin. 
The time of the crash I observed\nmatches exactly to these events in the kernel log:\n\nMay 07 03:31:39 gcc202 kernel: dm-0: writeback error on inode 2148294407, offset 0, sector 159239256\nMay 07 03:31:39 gcc202 kernel: sunvdc: vdc_tx_trigger() failure, err=-11\nMay 07 03:31:39 gcc202 kernel: blk_update_request: I/O error, dev vdiskc, sector 157618896 op 0x1:(WRITE) flags 0x4800 phys_seg 16 prio class 0\n\nSo it's not a mirage. The admin seems to think it might be a kernel\nbug though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 10:29:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "\nOn 5/7/21 12:38 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-07 00:30:11 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2021-05-06 21:43:32 -0400, Tom Lane wrote:\n>>>> That I'm not sure about. gdb is certainly installed, and thorntail is\n>>>> visibly running the current buildfarm client and is configured with the\n>>>> correct core_file_glob, and I can report that the crash did leave a 'core'\n>>>> file in the data directory (so it's not a case of systemd commandeering\n>>>> the core dump). Seems like core-file collection should've worked\n>>>> ... unless maybe it's not covering TAP tests at all?\n>>> I suspect that is it - there's not really a good way for the buildfarm\n>>> client to even know where there could be data directories :(.\n>> Does it need to? I'm envisioning \"find tmp_check -name '$core_file_glob'\"\n>> or something along that line.\n> Yea, it'd be doable that way. It'd be a bit harder to associate the core\n> files with specific tests though. 
But I now checked, and it indeed\n> checks for core files in a specific subset of tests, and that that test\n> only globs inside the passed-in datadir.\n>\n\nworking on it ...\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 7 May 2021 11:27:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Oh, and I see that 13 has 9989d37d \"Remove XLogFileNameP() from the\n>> tree\" to fix this exact problem.\n\n> Hah, so that maybe explains why thorntail has only shown this in\n> the v12 branch. Should we consider back-patching that?\n\nRealizing that 9989d37d prevents the assertion failure, I went\nto see if thorntail had shown EIO failures without assertions.\nLooking back 180 days, I found these:\n\n sysname | branch | snapshot | stage | l \n-----------+---------------+---------------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------\n thorntail | HEAD | 2021-03-19 21:28:15 | recoveryCheck | 2021-03-20 00:48:48.117 MSK [4089174:11] 008_fsm_truncation.pl PANIC: could not fdatasync file \"000000010000000000000002\": Input/output error\n thorntail | HEAD | 2021-04-06 16:08:10 | recoveryCheck | 2021-04-06 19:30:54.103 MSK [3355008:11] 008_fsm_truncation.pl PANIC: could not fdatasync file \"000000010000000000000002\": Input/output error\n thorntail | REL9_6_STABLE | 2021-04-12 02:38:04 | pg_basebackupCheck | pg_basebackup: could not fsync file \"000000010000000000000013\": Input/output error\n\nSo indeed the kernel-or-hardware problem is affecting other branches.\nI suspect that the lack of reports in the pre-v12 branches is mostly\ndown to there having been many fewer runs on those branches 
within\nthe past couple months.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 13:18:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "\nOn 5/7/21 11:27 AM, Andrew Dunstan wrote:\n> On 5/7/21 12:38 AM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-05-07 00:30:11 -0400, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> On 2021-05-06 21:43:32 -0400, Tom Lane wrote:\n>>>>> That I'm not sure about. gdb is certainly installed, and thorntail is\n>>>>> visibly running the current buildfarm client and is configured with the\n>>>>> correct core_file_glob, and I can report that the crash did leave a 'core'\n>>>>> file in the data directory (so it's not a case of systemd commandeering\n>>>>> the core dump). Seems like core-file collection should've worked\n>>>>> ... unless maybe it's not covering TAP tests at all?\n>>>> I suspect that is it - there's not really a good way for the buildfarm\n>>>> client to even know where there could be data directories :(.\n>>> Does it need to? I'm envisioning \"find tmp_check -name '$core_file_glob'\"\n>>> or something along that line.\n>> Yea, it'd be doable that way. It'd be a bit harder to associate the core\n>> files with specific tests though. 
But I now checked, and it indeed\n>> checks for core files in a specific subset of tests, and that that test\n>> only globs inside the passed-in datadir.\n>>\n> working on it ...\n>\n>\n> cheers\n>\n>\n\n\nsee\n<https://github.com/PGBuildFarm/client-code/commit/5361bb5ff01ee47e8998f5f8f44732ccc5a01183>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 7 May 2021 15:47:28 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 10:29:58 -0400, Tom Lane wrote:\n> I wrote:\n> > 1. No wonder we could not reproduce it anywhere else. I've warned\n> > the cfarm admins that their machine may be having hardware issues.\n> \n> I heard back from the machine's admin. The time of the crash I observed\n> matches exactly to these events in the kernel log:\n> \n> May 07 03:31:39 gcc202 kernel: dm-0: writeback error on inode 2148294407, offset 0, sector 159239256\n> May 07 03:31:39 gcc202 kernel: sunvdc: vdc_tx_trigger() failure, err=-11\n> May 07 03:31:39 gcc202 kernel: blk_update_request: I/O error, dev vdiskc, sector 157618896 op 0x1:(WRITE) flags 0x4800 phys_seg 16 prio class 0\n> \n> So it's not a mirage. The admin seems to think it might be a kernel\n> bug though.\n\nIsn't this a good reason to have at least some tests run with fsync=on?\n\nIt makes a ton of sense for buildfarm animals to disable fsync to\nachieve acceptable performance. 
Having something in there that\nnevertheless does some light exercise of the fsync code doesn't seem\nbad?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 May 2021 12:49:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Isn't this a good reason to have at least some tests run with fsync=on?\n\nWhy?\n\nI can certainly see an argument for running some buildfarm animals\nwith fsync on (for all tests). I don't see a reason for forcing\nthem all to run some tests that way; and if I were going to do that,\nI doubt that 008_fsm_truncation.pl would be the one I would pick.\nI think it's nothing but sloppiness that that one is out of step with\nall the rest.\n\nIMO, if a buildfarm owner sets fsync = off, they mean off.\nThey don't mean \"maybe\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 16:30:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Fri, May 07, 2021 at 01:18:19PM -0400, Tom Lane wrote:\n> Realizing that 9989d37d prevents the assertion failure, I went\n> to see if thorntail had shown EIO failures without assertions.\n> Looking back 180 days, I found these:\n> \n> sysname | branch | snapshot | stage | l \n> -----------+---------------+---------------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------\n> thorntail | HEAD | 2021-03-19 21:28:15 | recoveryCheck | 2021-03-20 00:48:48.117 MSK [4089174:11] 008_fsm_truncation.pl PANIC: could not fdatasync file \"000000010000000000000002\": Input/output error\n> thorntail | HEAD | 2021-04-06 16:08:10 | recoveryCheck | 2021-04-06 
19:30:54.103 MSK [3355008:11] 008_fsm_truncation.pl PANIC: could not fdatasync file \"000000010000000000000002\": Input/output error\n> thorntail | REL9_6_STABLE | 2021-04-12 02:38:04 | pg_basebackupCheck | pg_basebackup: could not fsync file \"000000010000000000000013\": Input/output error\n> \n> So indeed the kernel-or-hardware problem is affecting other branches.\n\nHaving a flaky buildfarm member is bad news. I'll LD_PRELOAD the attached to\nprevent fsync from reaching the kernel. Hopefully, that will make the\nhardware-or-kernel trouble unreachable. (Changing 008_fsm_truncation.pl\nwouldn't avoid this, because fsync=off doesn't affect syncs outside the\nbackend.)", "msg_date": "Fri, 7 May 2021 17:14:18 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Fri, May 07, 2021 at 04:30:00PM -0400, Tom Lane wrote:\n> I can certainly see an argument for running some buildfarm animals\n> with fsync on (for all tests). I don't see a reason for forcing\n> them all to run some tests that way; and if I were going to do that,\n> I doubt that 008_fsm_truncation.pl would be the one I would pick.\n> I think it's nothing but sloppiness that that one is out of step with\n> all the rest.\n\nMy take on this point is that using the configuration that can be\nenforced for each animal would be enough. 
I manage a small animal and\nthis stuff can take a while to flush some data.\n\nWorth noting that using fsync=on has not been discussed on the\noriginal thread, and I don't see why that's necessary:\nhttps://www.postgresql.org/message-id/flat/CABOikdNr5vKucqyZH9s1Mh0XebLs_jRhKv6eJfNnD2wxTn%3D_9A%40mail.gmail.com\nSo I would vote for removing it in this case.\n--\nMichael", "msg_date": "Sat, 8 May 2021 10:33:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Fri, May 07, 2021 at 04:42:46PM +1200, Thomas Munro wrote:\n> Oh, and I see that 13 has 9989d37d \"Remove XLogFileNameP() from the\n> tree\" to fix this exact problem.\n\nI don't see that we'd be able to get a redesign of this area safe\nenough for a backpatch, but perhaps we (I?) had better put some extra\neffort in back-patching this commit while keeping XLogFileNameP()\naround for compatibility? How do people feel about that?\n--\nMichael", "msg_date": "Sat, 8 May 2021 10:38:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 17:14:18 -0700, Noah Misch wrote:\n> Having a flaky buildfarm member is bad news. I'll LD_PRELOAD the attached to\n> prevent fsync from reaching the kernel. Hopefully, that will make the\n> hardware-or-kernel trouble unreachable. (Changing 008_fsm_truncation.pl\n> wouldn't avoid this, because fsync=off doesn't affect syncs outside the\n> backend.)\n\nNot sure how reliable that is - there's other paths that could return an\nerror, I think. If the root cause is the disk responding weirdly to\nwrite cache flushes, you could tell the kernel that that the disk has no\nwrite cache (e.g. 
echo write through > /sys/block/sda/queue/write_cache).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 May 2021 19:08:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-07 17:14:18 -0700, Noah Misch wrote:\n>> Having a flaky buildfarm member is bad news. I'll LD_PRELOAD the attached to\n>> prevent fsync from reaching the kernel. Hopefully, that will make the\n>> hardware-or-kernel trouble unreachable. (Changing 008_fsm_truncation.pl\n>> wouldn't avoid this, because fsync=off doesn't affect syncs outside the\n>> backend.)\n\n> Not sure how reliable that is - there's other paths that could return an\n> error, I think. If the root cause is the disk responding weirdly to\n> write cache flushes, you could tell the kernel that that the disk has no\n> write cache (e.g. echo write through > /sys/block/sda/queue/write_cache).\n\nI seriously doubt Noah has root on that machine.\n\nMore to the point, the admin told me it's a VM (or LDOM, whatever that is)\nunder a Solaris host, so there's no direct hardware access going on\nanyway. He didn't say in so many words, but I suspect the reason he's\nsuspecting kernel bugs is that there's nothing going wrong so far as the\nhost OS is concerned.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 22:18:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Fri, May 07, 2021 at 10:18:14PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-05-07 17:14:18 -0700, Noah Misch wrote:\n> >> Having a flaky buildfarm member is bad news. I'll LD_PRELOAD the attached to\n> >> prevent fsync from reaching the kernel. 
Hopefully, that will make the\n> >> hardware-or-kernel trouble unreachable. (Changing 008_fsm_truncation.pl\n> >> wouldn't avoid this, because fsync=off doesn't affect syncs outside the\n> >> backend.)\n> \n> > Not sure how reliable that is - there's other paths that could return an\n> > error, I think.\n\nYep, one can imagine a failure at close() or something. All the non-HEAD\nbuildfarm failures are at some *sync call, so I'm optimistic about getting\nmileage from this. (I didn't check the more-numerous HEAD failures.) If it's\nnot enough, I may move the farm directory to tmpfs.\n\n> > If the root cause is the disk responding weirdly to\n> > write cache flushes, you could tell the kernel that that the disk has no\n> > write cache (e.g. echo write through > /sys/block/sda/queue/write_cache).\n> \n> I seriously doubt Noah has root on that machine.\n\nIf I can make the case for that setting being a good thing for the VM's users\ngenerally, I probably can file a ticket and get it done.\n\n> More to the point, the admin told me it's a VM (or LDOM, whatever that is)\n> under a Solaris host, so there's no direct hardware access going on\n> anyway. 
He didn't say in so many words, but I suspect the reason he's\n> suspecting kernel bugs is that there's nothing going wrong so far as the\n> host OS is concerned.\n\n\n", "msg_date": "Fri, 7 May 2021 20:30:44 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Sat, May 8, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> May 07 03:31:39 gcc202 kernel: sunvdc: vdc_tx_trigger() failure, err=-11\n\nThat's -EAGAIN (assuming errnos match x86) and I guess it indicates\nthat VDC_MAX_RETRIES is exceeded here:\n\nhttps://github.com/torvalds/linux/blob/master/drivers/block/sunvdc.c#L451\nhttps://github.com/torvalds/linux/blob/master/drivers/block/sunvdc.c#L526\n\nOne theory is that the hypervisor/host is occasionally too swamped to\nservice the request queue fast enough over a ~10ms period, given that\nvio_ldc_send() itself retries 1000 times with a 1us sleep, the outer\nloop tries ten times, and ldc.c's write_nonraw() reports -EAGAIN when\nthere is no space for the message. 
(Alternatively, it's trying to\nsend a message that's too big for the channel, the channel is\ncorrupted by bugs, or my fly-by of this code I'd never heard of before\nnow is just way off...)\n\n\n", "msg_date": "Sat, 8 May 2021 16:57:54 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" }, { "msg_contents": "On Sat, May 08, 2021 at 04:57:54PM +1200, Thomas Munro wrote:\n> On Sat, May 8, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > May 07 03:31:39 gcc202 kernel: sunvdc: vdc_tx_trigger() failure, err=-11\n> \n> That's -EAGAIN (assuming errnos match x86) and I guess it indicates\n> that VDC_MAX_RETRIES is exceeded here:\n> \n> https://github.com/torvalds/linux/blob/master/drivers/block/sunvdc.c#L451\n> https://github.com/torvalds/linux/blob/master/drivers/block/sunvdc.c#L526\n> \n> One theory is that the hypervisor/host is occasionally too swamped to\n> service the request queue fast enough over a ~10ms period, given that\n> vio_ldc_send() itself retries 1000 times with a 1us sleep, the outer\n> loop tries ten times, and ldc.c's write_nonraw() reports -EAGAIN when\n> there is no space for the message. (Alternatively, it's trying to\n> send a message that's too big for the channel, the channel is\n> corrupted by bugs, or my fly-by of this code I'd never heard of before\n> now is just way off...)\n\nNice discovery. From\nhttps://github.com/torvalds/linux/commit/a11f6ca9aef989b56cd31ff4ee2af4fb31a172ec\nI see those details are 2.5 years old, somewhat young relative to the driver\nas a whole. I don't know which part should change, though.\n\n\n", "msg_date": "Sat, 8 May 2021 09:55:07 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Anti-critical-section assertion failure in mcxt.c reached by\n walsender" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing one of the 'Table partitioning' related patches, I found\nthat Postgres does not support multiple column based LIST partitioning.\nBased on this understanding, I have started working on this feature. I also\nfeel that 'Multi-Column List Partitioning' can be benefited to the Postgres\nusers in future.\n\nI am attaching the WIP patch for this feature here. It supports\n'Multi-Column List Partitioning', however some tasks are still pending. I\nwould like to know your thoughts about this, So that I can continue the\nwork with improvising the current patch.\n\nFollowing things are handled in the patch.\n1. Syntax\n\nCREATE TABLE table_name (attrs) PARTITION BY LIST(list_of_columns);\n\nEarlier there was no provision to mention multiple columns as part of the\n'list_of_columns' clause. Now we can mention the list of columns separated\nby comma.\n\nCREATE TABLE table_name_p1 PARTITION OF table_name FOR VALUES IN\nlist_of_values.\n\nWhereas list_of_columns can be\na. (value [,...])\nb. (value [,...]) [,...]\n\nI would like to list a few examples here for better understanding.\nEx-1:\nCREATE TABLE t1(a int) PARTITION BY LIST(a);\nCREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN (1, 2, 10, 5, 7);\n\nEx-2:\nCREATE TABLE t2(a int, b int) PARTITION BY LIST(a,b);\nCREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN (1, 2), (1, 5), (2, 2),(2,\n10);\n\nPlease share if any changes are required in the above syntax.\n\n2. Modified transformation logic to support above syntax.\n\n3. Modified the data structures to store the information caused by above\nsyntax. Also modified the searching logic to route the tuple to the\nappropriate partition.\n\n4. Done a few basic testing and verified CREATE TABLE, INSERT INTO and\nSELECT are working fine.\n\n\nFollowing items are pending and I am working on it.\n\n1. Handling of 'NULL' values.\n\n2. Support multi column case in partition pruning.\n\n3. 
Add test cases to the regression test suite.\n\nPlease share your thoughts.\n\n\nThanks & Regards,\nNitin Jadhav", "msg_date": "Thu, 6 May 2021 19:32:45 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Multi-Column List Partitioning" }, { "msg_contents": "> While reviewing one of the 'Table partitioning' related patches,\n> I found that Postgres does not support multiple column based LIST\n> partitioning. Based on this understanding, I have started working on\n> this feature. I also feel that 'Multi-Column List Partitioning' can\n> be benefited to the Postgres users in future.\n\n+1 for the feature. I also think this can help users deal with some\nuseful cases.\n\n\n> CREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN (1, 2), (1, 5), (2,\n> 2),(2, 10);\n\nIMHO, listing every single tuple like this might be a bit cumbersome for\nthe user. What about something like this:\n\n...FOR VALUES IN (1, 2, 3, 4), (11, 22, 33, 44), where the first set\nis the list for values of column A and second list is for column B. We\ncan treat these lists as A X B possible values or simply (a1, b1), (a2,\nb2) internally. However I see other proprietary databases already have\nsyntax something similar that you are proposing here. So, I leave it\nopen for the thoughts from experts. Also, though what I propose might be\neasy from a user perspective, but might not be that easy for\nimplementation, given that for a larger number of columns in partition list\ne.g. A X B X C X D lists become unmanageable.\n\nI did not review the patch in detail, but a quick look at it leaves me\nwith following comments:\n\n1.\n> + * list. 
Then this function will continue the serach and return the\nindex of\nTypo:\ns/serach/search\n\n2.\nA compiler warning:\npartprune.c: In function ‘get_matching_list_bounds’:\npartprune.c:2731:20: error: passing argument 5 of ‘partition_list_bsearch’\nmakes pointer from integer without a cast [-Werror=int-conversion]\n 2731 | nvalues, value, &is_equal);\n | ^~~~~\n | |\n | Datum {aka long unsigned int}\nIn file included from partprune.c:53:\n../../../src/include/partitioning/partbounds.h:120:32: note: expected\n‘Datum *’ {aka ‘long unsigned int *’} but argument is of type ‘Datum’ {aka\n‘long unsigned int’}\n 120 | int nvalues, Datum *value, bool *is_equal);\n | ~~~~~~~^~~~~\n\n3.\nAnd, a server crash with following case:\npostgres=# CREATE TABLE t1 (a int) PARTITION BY LIST (a);\nCREATE TABLE\npostgres=# CREATE TABLE t1p1 PARTITION OF t1 FOR VALUES IN (1, 2, 3);\nCREATE TABLE\npostgres=# \\d+ t1p1\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!?>\n\nStacktrace:\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f5d273c5859 in __GI_abort () at abort.c:79\n#2 0x000055779d2eb69d in ExceptionalCondition\n(conditionName=0x55779d4978d8 \"ptr == NULL || nodeTag(ptr) == type\",\nerrorType=0x55779d4978c3 \"FailedAssertion\",\n fileName=0x55779d4978a0 \"../../../src/include/nodes/nodes.h\",\nlineNumber=603) at assert.c:69\n#3 0x000055779d03a684 in castNodeImpl (type=T_Const, ptr=0x55779e457b18)\nat ../../../src/include/nodes/nodes.h:603\n#4 0x000055779d04368a in get_qual_for_list (parent=0x7f5d1df829b8,\nspec=0x55779e457950) at partbounds.c:4155\n#5 0x000055779d03ac60 in get_qual_from_partbound (rel=0x7f5d1df82570,\nparent=0x7f5d1df829b8, spec=0x55779e457950) at partbounds.c:272\n#6 0x000055779d2cf630 in generate_partition_qual (rel=0x7f5d1df82570) at\npartcache.c:379\n#7 0x000055779d2cf468 in get_partition_qual_relid (relid=32771) at\npartcache.c:308\n#8 0x000055779d2592bf in pg_get_partition_constraintdef\n(fcinfo=0x55779e44ee50) at ruleutils.c:2019\n#9 0x000055779cec7221 in ExecInterpExpr (state=0x55779e44dfb0,\necontext=0x55779e407fe8, isnull=0x7ffddf9b109f) at execExprInterp.c:744\n#10 0x000055779cec954f in ExecInterpExprStillValid (state=0x55779e44dfb0,\necontext=0x55779e407fe8, isNull=0x7ffddf9b109f) at execExprInterp.c:1819\n#11 0x000055779cf1d58a in ExecEvalExprSwitchContext (state=0x55779e44dfb0,\necontext=0x55779e407fe8, isNull=0x7ffddf9b109f)\n at ../../../src/include/executor/executor.h:338\n#12 0x000055779cf1d602 in ExecProject (projInfo=0x55779e44dfa8) at\n../../../src/include/executor/executor.h:372\n#13 0x000055779cf1db2f in ExecNestLoop (pstate=0x55779e407ed0) at\nnodeNestloop.c:241\n#14 0x000055779cedf136 in ExecProcNodeFirst (node=0x55779e407ed0) at\nexecProcnode.c:462\n#15 0x000055779ced3053 in ExecProcNode (node=0x55779e407ed0) at\n../../../src/include/executor/executor.h:257\n#16 0x000055779ced5a87 in ExecutePlan 
(estate=0x55779e407c80,\nplanstate=0x55779e407ed0, use_parallel_mode=false, operation=CMD_SELECT,\nsendTuples=true, numberTuples=0,\n direction=ForwardScanDirection, dest=0x55779e425a88, execute_once=true)\nat execMain.c:1551\n#17 0x000055779ced372d in standard_ExecutorRun (queryDesc=0x55779e453520,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:361\n#18 0x000055779ced353c in ExecutorRun (queryDesc=0x55779e453520,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:305\n#19 0x000055779d13d287 in PortalRunSelect (portal=0x55779e398800,\nforward=true, count=0, dest=0x55779e425a88) at pquery.c:912\n#20 0x000055779d13cec0 in PortalRun (portal=0x55779e398800,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\ndest=0x55779e425a88, altdest=0x55779e425a88,\n qc=0x7ffddf9b14f0) at pquery.c:756\n#21 0x000055779d1361ce in exec_simple_query (\n query_string=0x55779e3367a0 \"SELECT inhparent::pg_catalog.regclass,\\n\n pg_catalog.pg_get_expr(c.relpartbound, c.oid),\\n inhdetachpending,\\n\n pg_catalog.pg_get_partition_constraintdef(c.oid)\\nFROM pg_catalog.pg_class\nc JOIN pg_catalo\"...) 
at postgres.c:1214\n#22 0x000055779d13ad8b in PostgresMain (argc=1, argv=0x7ffddf9b1710,\ndbname=0x55779e3626f8 \"postgres\", username=0x55779e3626d8 \"hadoop\") at\npostgres.c:4476\n#23 0x000055779d0674d3 in BackendRun (port=0x55779e358380) at\npostmaster.c:4488\n#24 0x000055779d066d8c in BackendStartup (port=0x55779e358380) at\npostmaster.c:4210\n#25 0x000055779d062f9b in ServerLoop () at postmaster.c:1742\n#26 0x000055779d062734 in PostmasterMain (argc=3, argv=0x55779e3308b0) at\npostmaster.c:1414\n#27 0x000055779cf5805f in main (argc=3, argv=0x55779e3308b0) at main.c:209\n\nRegards,\nJeevan Ladhe\n\nOn Thu, May 6, 2021 at 7:33 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> Hi,\n>\n> While reviewing one of the 'Table partitioning' related patches, I found\n> that Postgres does not support multiple column based LIST partitioning.\n> Based on this understanding, I have started working on this feature. I also\n> feel that 'Multi-Column List Partitioning' can be benefited to the Postgres\n> users in future.\n>\n> I am attaching the WIP patch for this feature here. It supports\n> 'Multi-Column List Partitioning', however some tasks are still pending. I\n> would like to know your thoughts about this, So that I can continue the\n> work with improvising the current patch.\n>\n> Following things are handled in the patch.\n> 1. Syntax\n>\n> CREATE TABLE table_name (attrs) PARTITION BY LIST(list_of_columns);\n>\n> Earlier there was no provision to mention multiple columns as part of the\n> 'list_of_columns' clause. Now we can mention the list of columns separated\n> by comma.\n>\n> CREATE TABLE table_name_p1 PARTITION OF table_name FOR VALUES IN\n> list_of_values.\n>\n> Whereas list_of_columns can be\n> a. (value [,...])\n> b. 
(value [,...]) [,...]\n>\n> I would like to list a few examples here for better understanding.\n> Ex-1:\n> CREATE TABLE t1(a int) PARTITION BY LIST(a);\n> CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN (1, 2, 10, 5, 7);\n>\n> Ex-2:\n> CREATE TABLE t2(a int, b int) PARTITION BY LIST(a,b);\n> CREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN (1, 2), (1, 5), (2,\n> 2),(2, 10);\n>\n> Please share if any changes are required in the above syntax.\n>\n> 2. Modified transformation logic to support above syntax.\n>\n> 3. Modified the data structures to store the information caused by above\n> syntax. Also modified the searching logic to route the tuple to the\n> appropriate partition.\n>\n> 4. Done a few basic testing and verified CREATE TABLE, INSERT INTO and\n> SELECT are working fine.\n>\n>\n> Following items are pending and I am working on it.\n>\n> 1. Handling of 'NULL' values.\n>\n> 2. Support multi column case in partition pruning.\n>\n> 3. Add test cases to the regression test suite.\n>\n> Please share your thoughts.\n>\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n>\n>\n>\n>\n>\n
", "msg_date": "Fri, 7 May 2021 19:35:51 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks Jeevan for looking into this thread.\n\n> I did not review the patch in detail, but a quick look at it leaves me\n> with following comments:\n\nI will incorporate these changes.\n\n> ...FOR VALUES IN (1, 2, 3, 4), (11, 22, 33, 44), where the first set\n> is the list for values of column A and second list is for column B. We\n> can treat these lists as A X B possible values or simply (a1, b1), (a2,\n> b2) internally. However I see other proprietary databases already have\n> syntax something similar that you are proposing here. So, I leave it\n> open for the thoughts from experts. Also, though what I propose might be\n> easy from a user perspective, but might not be that easy for\n> implementation, given that for a larger number of columns in partition\nlist\n> e.g. A X B X C X D lists become unmanageable.\n\nI feel this is also not easy from a user's perspective. For example\nfor a partition
I also think this can help users deal with some\n> useful cases.\n>\n>\n> > CREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN (1, 2), (1, 5), (2,\n> > 2),(2, 10);\n>\n> IMHO, listing every single tuple like this might be a bit cumbersome for\n> the user. What about something like this:\n>\n> ...FOR VALUES IN (1, 2, 3, 4), (11, 22, 33, 44), where the first set\n> is the list for values of column A and second list is for column B. We\n> can treat these lists as A X B possible values or simply (a1, b1), (a2,\n> b2) internally. However I see other proprietary databases already have\n> syntax something similar that you are proposing here. So, I leave it\n> open for the thoughts from experts. Also, though what I propose might be\n> easy from a user perspective, but might not be that easy for\n> implementation, given that for a larger number of columns in partition list\n> e.g. A X B X C X D lists become unmanageable.\n>\n> I did not review the patch in detail, but a quick look at it leaves me\n> with following comments:\n>\n> 1.\n> > + * list. 
Then this function will continue the serach and return the\n> index of\n> Typo:\n> s/serach/search\n>\n> 2.\n> A compiler warning:\n> partprune.c: In function ‘get_matching_list_bounds’:\n> partprune.c:2731:20: error: passing argument 5 of ‘partition_list_bsearch’\n> makes pointer from integer without a cast [-Werror=int-conversion]\n> 2731 | nvalues, value, &is_equal);\n> | ^~~~~\n> | |\n> | Datum {aka long unsigned int}\n> In file included from partprune.c:53:\n> ../../../src/include/partitioning/partbounds.h:120:32: note: expected\n> ‘Datum *’ {aka ‘long unsigned int *’} but argument is of type ‘Datum’ {aka\n> ‘long unsigned int’}\n> 120 | int nvalues, Datum *value, bool *is_equal);\n> | ~~~~~~~^~~~~\n>\n> 3.\n> And, a server crash with following case:\n> postgres=# CREATE TABLE t1 (a int) PARTITION BY LIST (a);\n> CREATE TABLE\n> postgres=# CREATE TABLE t1p1 PARTITION OF t1 FOR VALUES IN (1, 2, 3);\n> CREATE TABLE\n> postgres=# \\d+ t1p1\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !?>\n>\n> Stacktrace:\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007f5d273c5859 in __GI_abort () at abort.c:79\n> #2 0x000055779d2eb69d in ExceptionalCondition\n> (conditionName=0x55779d4978d8 \"ptr == NULL || nodeTag(ptr) == type\",\n> errorType=0x55779d4978c3 \"FailedAssertion\",\n> fileName=0x55779d4978a0 \"../../../src/include/nodes/nodes.h\",\n> lineNumber=603) at assert.c:69\n> #3 0x000055779d03a684 in castNodeImpl (type=T_Const, ptr=0x55779e457b18)\n> at ../../../src/include/nodes/nodes.h:603\n> #4 0x000055779d04368a in get_qual_for_list (parent=0x7f5d1df829b8,\n> spec=0x55779e457950) at partbounds.c:4155\n> #5 0x000055779d03ac60 in get_qual_from_partbound (rel=0x7f5d1df82570,\n> parent=0x7f5d1df829b8, spec=0x55779e457950) at partbounds.c:272\n> #6 0x000055779d2cf630 in generate_partition_qual (rel=0x7f5d1df82570) at\n> partcache.c:379\n> #7 0x000055779d2cf468 in get_partition_qual_relid (relid=32771) at\n> partcache.c:308\n> #8 0x000055779d2592bf in pg_get_partition_constraintdef\n> (fcinfo=0x55779e44ee50) at ruleutils.c:2019\n> #9 0x000055779cec7221 in ExecInterpExpr (state=0x55779e44dfb0,\n> econtext=0x55779e407fe8, isnull=0x7ffddf9b109f) at execExprInterp.c:744\n> #10 0x000055779cec954f in ExecInterpExprStillValid (state=0x55779e44dfb0,\n> econtext=0x55779e407fe8, isNull=0x7ffddf9b109f) at execExprInterp.c:1819\n> #11 0x000055779cf1d58a in ExecEvalExprSwitchContext (state=0x55779e44dfb0,\n> econtext=0x55779e407fe8, isNull=0x7ffddf9b109f)\n> at ../../../src/include/executor/executor.h:338\n> #12 0x000055779cf1d602 in ExecProject (projInfo=0x55779e44dfa8) at\n> ../../../src/include/executor/executor.h:372\n> #13 0x000055779cf1db2f in ExecNestLoop (pstate=0x55779e407ed0) at\n> nodeNestloop.c:241\n> #14 0x000055779cedf136 in ExecProcNodeFirst (node=0x55779e407ed0) at\n> execProcnode.c:462\n> #15 0x000055779ced3053 in ExecProcNode (node=0x55779e407ed0) at\n> 
../../../src/include/executor/executor.h:257\n> #16 0x000055779ced5a87 in ExecutePlan (estate=0x55779e407c80,\n> planstate=0x55779e407ed0, use_parallel_mode=false, operation=CMD_SELECT,\n> sendTuples=true, numberTuples=0,\n> direction=ForwardScanDirection, dest=0x55779e425a88,\n> execute_once=true) at execMain.c:1551\n> #17 0x000055779ced372d in standard_ExecutorRun (queryDesc=0x55779e453520,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:361\n> #18 0x000055779ced353c in ExecutorRun (queryDesc=0x55779e453520,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:305\n> #19 0x000055779d13d287 in PortalRunSelect (portal=0x55779e398800,\n> forward=true, count=0, dest=0x55779e425a88) at pquery.c:912\n> #20 0x000055779d13cec0 in PortalRun (portal=0x55779e398800,\n> count=9223372036854775807, isTopLevel=true, run_once=true,\n> dest=0x55779e425a88, altdest=0x55779e425a88,\n> qc=0x7ffddf9b14f0) at pquery.c:756\n> #21 0x000055779d1361ce in exec_simple_query (\n> query_string=0x55779e3367a0 \"SELECT inhparent::pg_catalog.regclass,\\n\n> pg_catalog.pg_get_expr(c.relpartbound, c.oid),\\n inhdetachpending,\\n\n> pg_catalog.pg_get_partition_constraintdef(c.oid)\\nFROM pg_catalog.pg_class\n> c JOIN pg_catalo\"...) 
at postgres.c:1214\n> #22 0x000055779d13ad8b in PostgresMain (argc=1, argv=0x7ffddf9b1710,\n> dbname=0x55779e3626f8 \"postgres\", username=0x55779e3626d8 \"hadoop\") at\n> postgres.c:4476\n> #23 0x000055779d0674d3 in BackendRun (port=0x55779e358380) at\n> postmaster.c:4488\n> #24 0x000055779d066d8c in BackendStartup (port=0x55779e358380) at\n> postmaster.c:4210\n> #25 0x000055779d062f9b in ServerLoop () at postmaster.c:1742\n> #26 0x000055779d062734 in PostmasterMain (argc=3, argv=0x55779e3308b0) at\n> postmaster.c:1414\n> #27 0x000055779cf5805f in main (argc=3, argv=0x55779e3308b0) at main.c:209\n>\n> Regards,\n> Jeevan Ladhe\n>\n> On Thu, May 6, 2021 at 7:33 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\n> wrote:\n>\n>> Hi,\n>>\n>> While reviewing one of the 'Table partitioning' related patches, I found\n>> that Postgres does not support multiple column based LIST partitioning.\n>> Based on this understanding, I have started working on this feature. I also\n>> feel that 'Multi-Column List Partitioning' can be benefited to the Postgres\n>> users in future.\n>>\n>> I am attaching the WIP patch for this feature here. It supports\n>> 'Multi-Column List Partitioning', however some tasks are still pending. I\n>> would like to know your thoughts about this, So that I can continue the\n>> work with improvising the current patch.\n>>\n>> Following things are handled in the patch.\n>> 1. Syntax\n>>\n>> CREATE TABLE table_name (attrs) PARTITION BY LIST(list_of_columns);\n>>\n>> Earlier there was no provision to mention multiple columns as part of the\n>> 'list_of_columns' clause. Now we can mention the list of columns separated\n>> by comma.\n>>\n>> CREATE TABLE table_name_p1 PARTITION OF table_name FOR VALUES IN\n>> list_of_values.\n>>\n>> Whereas list_of_columns can be\n>> a. (value [,...])\n>> b. 
(value [,...]) [,...]\n>>\n>> I would like to list a few examples here for better understanding.\n>> Ex-1:\n>> CREATE TABLE t1(a int) PARTITION BY LIST(a);\n>> CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN (1, 2, 10, 5, 7);\n>>\n>> Ex-2:\n>> CREATE TABLE t2(a int, b int) PARTITION BY LIST(a,b);\n>> CREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN (1, 2), (1, 5), (2,\n>> 2),(2, 10);\n>>\n>> Please share if any changes are required in the above syntax.\n>>\n>> 2. Modified transformation logic to support above syntax.\n>>\n>> 3. Modified the data structures to store the information caused by above\n>> syntax. Also modified the searching logic to route the tuple to the\n>> appropriate partition.\n>>\n>> 4. Done a few basic testing and verified CREATE TABLE, INSERT INTO and\n>> SELECT are working fine.\n>>\n>>\n>> Following items are pending and I am working on it.\n>>\n>> 1. Handling of 'NULL' values.\n>>\n>> 2. Support multi column case in partition pruning.\n>>\n>> 3. Add test cases to the regression test suite.\n>>\n>> Please share your thoughts.\n>>\n>>\n>> Thanks & Regards,\n>> Nitin Jadhav\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n\nThanks Jeevan for looking into this thread.> I did not review the patch in detail, but a quick look at it leaves me> with following comments:I will incorporate these changes.> ...FOR VALUES IN (1, 2, 3, 4), (11, 22, 33, 44), where the first set> is the list for values of column A and second list is for column B. We> can treat these lists as A X B possible values or simply (a1, b1), (a2,> b2) internally. However I see other proprietary databases already have> syntax something similar that you are proposing here. So, I leave it> open for the thoughts from experts. Also, though what I propose might be> easy from a user perspective, but might not be that easy for> implementation, given that for a larger number of columns in partition list> e.g. A X B X C X D lists become unmanageable.I feel this is also not easy from a user's perspective. 
For example for a partition with 2 partition keys (a,b) for values like (1,1), (1,2), (1,3), (1,4),(1,5). This would be converted to (1,1,1,1,1), (1,2,3,4,5). It is difficult to match the values of column 'a' to 'b'. Anyways let's wait for the other's opinion about this.Thanks & Regards,Nitin Jadhav
", "msg_date": "Sat, 8 May 2021 14:12:20 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hello Nitin,\n\nOn Thu, May 6, 2021 at 11:03 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While reviewing one of the 'Table partitioning' related patches, I found that Postgres does not support multiple column based LIST partitioning. Based on this understanding, I have started working on this feature. I also feel that 'Multi-Column List Partitioning' can be benefited to the Postgres users in future.\n\nYes, it would be nice to have this. Thanks for picking this up.\n\n> I am attaching the WIP patch for this feature here. It supports 'Multi-Column List Partitioning', however some tasks are still pending. I would like to know your thoughts about this, So that I can continue the work with improvising the current patch.\n>\n> Following things are handled in the patch.\n> 1. Syntax\n>\n> CREATE TABLE table_name (attrs) PARTITION BY LIST(list_of_columns);\n>\n> Earlier there was no provision to mention multiple columns as part of the 'list_of_columns' clause. Now we can mention the list of columns separated by comma.\n>\n> CREATE TABLE table_name_p1 PARTITION OF table_name FOR VALUES IN list_of_values.\n>\n> Whereas list_of_columns can be\n> a. (value [,...])\n> b. (value [,...]) [,...]\n>\n> I would like to list a few examples here for better understanding.\n> Ex-1:\n> CREATE TABLE t1(a int) PARTITION BY LIST(a);\n> CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN (1, 2, 10, 5, 7);\n>\n> Ex-2:\n> CREATE TABLE t2(a int, b int) PARTITION BY LIST(a,b);\n> CREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN (1, 2), (1, 5), (2, 2),(2, 10);\n\nHmm, why not have parentheses around these lists, that is: (\n(list_of_values) [, ...] 
)\n\nSo your example would look like this:\n\nCREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN ((1, 2), (1, 5), (2,\n2), (2, 10));\n\nIMO, it is not such a bad syntax from a user's PoV. It's not hard to\nunderstand from this syntax that the partition constraint is something\nlike (a, b) = (1, 2) OR (a, b) = (1, 5) OR ..., where the = performs\nrow-wise comparison.\n\nI will now take a look at the patch itself.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 13:02:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Fri, May 21, 2021 at 1:02 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I will now take a look at the patch itself.\n\nSome quick observations:\n\n* I get a lot of instances of the following 2 warnings when compiling\nthe patched code:\n\nWarning #1:\n\npartprune.c: In function ‘get_matching_list_bounds’:\npartprune.c:2731:11: warning: passing argument 5 of\n‘partition_list_bsearch’ makes pointer from integer without a cast\n[enabled by default]\n nvalues, value, &is_equal);\n ^\nIn file included from partprune.c:53:0:\n../../../src/include/partitioning/partbounds.h:117:12: note: expected\n‘Datum *’ but argument is of type ‘Datum’\n extern int partition_list_bsearch(FmgrInfo *partsupfunc,\n\nWarning #2:\n\npartprune.c:2781:12: warning: incompatible integer to pointer\nconversion passing 'Datum'\n (aka 'unsigned long') to parameter of type 'Datum *' (aka\n'unsigned long *'); take the\n address with & [-Wint-conversion]\n\n value, &is_equal);\n\n ^~~~~\n\n &\n../../../src/include/partitioning/partbounds.h:120:32: note: passing\nargument to parameter 'value'\n here\n ...int nvalues, Datum *value, bool *is_equal);\n\n* I think this code:\n\n===\n /* Get the only column's name in case we need to output an error */\n if (key->partattrs[0] != 0)\n colname = get_attname(RelationGetRelid(parent),\n 
key->partattrs[0], false);\n else\n colname = deparse_expression((Node *) linitial(partexprs),\n\ndeparse_context_for(RelationGetRelationName(parent),\n\nRelationGetRelid(parent)),\n false, false);\n /* Need its type data too */\n coltype = get_partition_col_typid(key, 0);\n coltypmod = get_partition_col_typmod(key, 0);\n partcollation = get_partition_col_collation(key, 0);\n===\n\nbelongs in the new function transformPartitionListBounds that you\nadded, because without doing so, any errors having to do with\npartitioning columns other than the first one will report the first\ncolumn's name in the error message:\n\npostgres=# create table foo (a bool, b bool) partition by list (a, b);\nCREATE TABLE\n\n-- this is fine!\npostgres=# create table foo_true_true partition of foo for values in (1, true);\nERROR: specified value cannot be cast to type boolean for column \"a\"\nLINE 1: ...able foo_true_true partition of foo for values in (1, true);\n\n-- not this!\npostgres=# create table foo_true_true partition of foo for values in (true, 1);\nERROR: specified value cannot be cast to type boolean for column \"a\"\nLINE 1: ...able foo_true_true partition of foo for values in (true, 1);\n\n* The following prototype of transformPartitionListBounds() means that\nall values in a given bound list are analyzed with the first\npartitioning column's colname, type, typmod, etc., which is wrong:\n\n+static List *\n+transformPartitionListBounds(ParseState *pstate, PartitionBoundSpec *spec,\n+ char *colname, Oid coltype, int32 coltypmod,\n+ Oid partcollation, int partnatts)\n+{\n\nAn example of wrong behavior because of that:\n\npostgres=# create table foo (a bool, b text) partition by list (a, b);\nCREATE TABLE\nTime: 3.967 ms\npostgres=# create table foo_true_true partition of foo for values in\n(true, 'whatever');\nERROR: invalid input syntax for type boolean: \"whatever\"\nLINE 1: ...o_true_true partition of foo for values in (true, 'whatever'...\n\n\"whatever\" should've been 
accepted but because it's checked with a's\ntype, it is wrongly flagged.\n\nPlease take a look at how transformPartitionRangeBound() handles this,\nespecially how it uses the correct partitioning column's info to\nanalyze the corresponding bound value expression.\n\nI will continue looking next week.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 21:16:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> Yes, it would be nice to have this. Thanks for picking this up.\n\nThanks for confirming.\n\n> Some quick observations:\n\nThanks for providing the comments. I will handle these cases.\n\n> Hmm, why not have parentheses around these lists, that is: (\n> (list_of_values) [, ...] )\n>\n> So your example would look like this:\n>\n> CREATE TABLE t2_1 PARTITION OF t2 FOR VALUES IN ((1, 2), (1, 5), (2,\n> 2), (2, 10));\n\nI am ok with this syntax. This would be more appropriate.\n\n> IMO, it is not such a bad syntax from a user's PoV. It's not hard to\n> understand from this syntax that the partition constraint is something\n> like (a, b) = (1, 2) OR (a, b) = (1, 5) OR ..., where the = performs\n> row-wise comparison.\n\nThanks for suggesting to use row-wise comparison. I have few queries\nwith respect to handling of NULL values.\n\n1. What should be the partition constraint for the above case. AFAIK,\nrow-wise comparison wont work with NULL values as shown in [1]. I mean\ntwo rows are considered equal if all their corresponding members are\nnon-null and equal. The rows are unequal if any corresponding members\nare non-null and unequal. Otherwise the result of the row comparison\nis unknown (null). 
So we should generate different types of\nconstraints for NULL values.\n\nEx:\nCREATE TABLE t(a int, b int) PARTITION BY LIST(a,b);\nCREATE TABLE t_1 PARTITION OF t FOR VALUES IN (1, 1), (1, NULL),\n(NULL, 1), (NULL, NULL);\n\nAs per my knowledge, we should consider creating partition constraints\nfor the above example as given below.\n\n(a, b) = (1, 1) OR ((a = 1) AND (b IS NULL)) OR ((a IS NULL) AND (b =\n1)) OR ((a is NULL) AND (b is NULL)).\n\nKindly correct me if I am wrong.\n\n2. In the current code we don't put the NULL value in the 'datums'\nfield of 'PartitionBoundInfoData' structure [2]. Since there can be\nonly one NULL value, we directly store the corresponding index value\nin the 'null_index' field. Now we have to handle multiple NULL values\nin case of Multi-Column List Partitioning. So the question is how to\nhandle this scenario. Following are the 2 approaches to handle this.\n\nApproach-1:\nAdd another field 'bool **isnull' in [2] and mark the corresponding\nelement to TRUE if it has NULL value and the corresponding location in\n'datums' contains empty/No value. For example, If a partition bound is\n(1, NULL), then\n\ndatums[0][0] = 1\ndatums[0][1] = Not assigned any value\nisnull[0][0] = FALSE\nis null[0][1] = TRUE\n\nSo now we have an entry in the 'datums' field for a bound containing\nNULL value, so we should handle this in all the scenarios where we are\nmanipulating 'datums' in order to support NULL values and avoid crash.\n\nApproach-2:\nDon't add the bound information to 'datums' field of [2] if any of the\nvalue is NULL. Store this information separately in the structures\nmentioned in [3] and process accordingly.\n\nI feel approach-1 is the better solution as this requires less code\nchanges and easy to implement than approach-2. 
Kindly share your\nthoughts about the approaches and please share if you have any better\nsolution than the above 2.\n\n[1]:\npostgres@15890=#SELECT ROW(1, 2) = ROW(1, 2);\n ?column?\n----------\n t\n(1 row)\n\npostgres@15890=#SELECT ROW(1, 2) = ROW(1, 1);\n ?column?\n----------\n f\n(1 row)\n\npostgres@15890=#SELECT ROW(1, NULL) = ROW(1, NULL);\n ?column?\n----------\n\n(1 row)\n\npostgres@15890=#SELECT ROW(1, 2) = ROW(1, NULL);\n ?column?\n----------\n\n(1 row)\n\n[2] :\ntypedef struct PartitionBoundInfoData\n{\n char strategy; /* hash, list or range? */\n int ndatums; /* Length of the datums[] array */\n Datum **datums;\n PartitionRangeDatumKind **kind; /* The kind of each range bound datum;\n * NULL for hash and list partitioned\n * tables */\n int nindexes; /* Length of the indexes[] array */\n int *indexes; /* Partition indexes */\n int null_index; /* Index of the null-accepting partition; -1\n * if there isn't one */\n int default_index; /* Index of the default partition; -1 if there\n * isn't one */\n} PartitionBoundInfoData;\n\n[3]:\ntypedef struct NullBoundDatumInfo\n{\n Datum *datum;\n int col_index;\n int. bound_index;\n} NullBoundDatumInfo;\n\ntypedef struct NullBoundIsNullInfo\n{\n int col_index;\n int. 
bound_index;\n} NullBoundIsNullInfo;\n\nAdd 2 fields of type 'NullBoundDatumInfo' and 'NullBoundIsNullInfo' to\nthe structure [2].\n\n--\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, May 21, 2021 at 5:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, May 21, 2021 at 1:02 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I will now take a look at the patch itself.\n>\n> Some quick observations:\n>\n> * I get a lot of instances of the following 2 warnings when compiling\n> the patched code:\n>\n> Warning #1:\n>\n> partprune.c: In function ‘get_matching_list_bounds’:\n> partprune.c:2731:11: warning: passing argument 5 of\n> ‘partition_list_bsearch’ makes pointer from integer without a cast\n> [enabled by default]\n> nvalues, value, &is_equal);\n> ^\n> In file included from partprune.c:53:0:\n> ../../../src/include/partitioning/partbounds.h:117:12: note: expected\n> ‘Datum *’ but argument is of type ‘Datum’\n> extern int partition_list_bsearch(FmgrInfo *partsupfunc,\n>\n> Warning #2:\n>\n> partprune.c:2781:12: warning: incompatible integer to pointer\n> conversion passing 'Datum'\n> (aka 'unsigned long') to parameter of type 'Datum *' (aka\n> 'unsigned long *'); take the\n> address with & [-Wint-conversion]\n>\n> value, &is_equal);\n>\n> ^~~~~\n>\n> &\n> ../../../src/include/partitioning/partbounds.h:120:32: note: passing\n> argument to parameter 'value'\n> here\n> ...int nvalues, Datum *value, bool *is_equal);\n>\n> * I think this code:\n>\n> ===\n> /* Get the only column's name in case we need to output an error */\n> if (key->partattrs[0] != 0)\n> colname = get_attname(RelationGetRelid(parent),\n> key->partattrs[0], false);\n> else\n> colname = deparse_expression((Node *) linitial(partexprs),\n>\n> deparse_context_for(RelationGetRelationName(parent),\n>\n> RelationGetRelid(parent)),\n> false, false);\n> /* Need its type data too */\n> coltype = get_partition_col_typid(key, 0);\n> coltypmod = get_partition_col_typmod(key, 0);\n> partcollation = 
get_partition_col_collation(key, 0);\n> ===\n>\n> belongs in the new function transformPartitionListBounds that you\n> added, because without doing so, any errors having to do with\n> partitioning columns other than the first one will report the first\n> column's name in the error message:\n>\n> postgres=# create table foo (a bool, b bool) partition by list (a, b);\n> CREATE TABLE\n>\n> -- this is fine!\n> postgres=# create table foo_true_true partition of foo for values in (1, true);\n> ERROR: specified value cannot be cast to type boolean for column \"a\"\n> LINE 1: ...able foo_true_true partition of foo for values in (1, true);\n>\n> -- not this!\n> postgres=# create table foo_true_true partition of foo for values in (true, 1);\n> ERROR: specified value cannot be cast to type boolean for column \"a\"\n> LINE 1: ...able foo_true_true partition of foo for values in (true, 1);\n>\n> * The following prototype of transformPartitionListBounds() means that\n> all values in a given bound list are analyzed with the first\n> partitioning column's colname, type, typmod, etc., which is wrong:\n>\n> +static List *\n> +transformPartitionListBounds(ParseState *pstate, PartitionBoundSpec *spec,\n> + char *colname, Oid coltype, int32 coltypmod,\n> + Oid partcollation, int partnatts)\n> +{\n>\n> An example of wrong behavior because of that:\n>\n> postgres=# create table foo (a bool, b text) partition by list (a, b);\n> CREATE TABLE\n> Time: 3.967 ms\n> postgres=# create table foo_true_true partition of foo for values in\n> (true, 'whatever');\n> ERROR: invalid input syntax for type boolean: \"whatever\"\n> LINE 1: ...o_true_true partition of foo for values in (true, 'whatever'...\n>\n> \"whatever\" should've been accepted but because it's checked with a's\n> type, it is wrongly flagged.\n>\n> Please take a look at how transformPartitionRangeBound() handles this,\n> especially how it uses the correct partitioning column's info to\n> analyze the corresponding bound value 
expression.\n>\n> I will continue looking next week.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 23 May 2021 15:19:07 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Sun, May 23, 2021 at 6:49 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > IMO, it is not such a bad syntax from a user's PoV. It's not hard to\n> > understand from this syntax that the partition constraint is something\n> > like (a, b) = (1, 2) OR (a, b) = (1, 5) OR ..., where the = performs\n> > row-wise comparison.\n>\n> Thanks for suggesting to use row-wise comparison.\n\nActually, I was just describing how the *users* may want to visualize\nthe partition constraint...\n\n> I have few queries\n> with respect to handling of NULL values.\n>\n> 1. What should be the partition constraint for the above case. AFAIK,\n> row-wise comparison wont work with NULL values as shown in [1]. I mean\n> two rows are considered equal if all their corresponding members are\n> non-null and equal. The rows are unequal if any corresponding members\n> are non-null and unequal. Otherwise the result of the row comparison\n> is unknown (null). 
So we should generate different types of\n> constraints for NULL values.\n>\n> Ex:\n> CREATE TABLE t(a int, b int) PARTITION BY LIST(a,b);\n> CREATE TABLE t_1 PARTITION OF t FOR VALUES IN (1, 1), (1, NULL),\n> (NULL, 1), (NULL, NULL);\n>\n> As per my knowledge, we should consider creating partition constraints\n> for the above example as given below.\n>\n> (a, b) = (1, 1) OR ((a = 1) AND (b IS NULL)) OR ((a IS NULL) AND (b =\n> 1)) OR ((a is NULL) AND (b is NULL)).\n\nYeah, something like that should do the trick.\n\nAgain, I was not actually suggesting that you write code to implement\nthe constraint using something like RowCompareExpr, only that the\nusers might want to view the constraint as doing row-wise comparison\nof the partitioning columns and the specified value lists.\n\n> 2. In the current code we don't put the NULL value in the 'datums'\n> field of 'PartitionBoundInfoData' structure [2]. Since there can be\n> only one NULL value, we directly store the corresponding index value\n> in the 'null_index' field. Now we have to handle multiple NULL values\n> in case of Multi-Column List Partitioning. So the question is how to\n> handle this scenario. Following are the 2 approaches to handle this.\n>\n> Approach-1:\n> Add another field 'bool **isnull' in [2] and mark the corresponding\n> element to TRUE if it has NULL value and the corresponding location in\n> 'datums' contains empty/No value. For example, If a partition bound is\n> (1, NULL), then\n>\n> datums[0][0] = 1\n> datums[0][1] = Not assigned any value\n> isnull[0][0] = FALSE\n> is null[0][1] = TRUE\n>\n> So now we have an entry in the 'datums' field for a bound containing\n> NULL value, so we should handle this in all the scenarios where we are\n> manipulating 'datums' in order to support NULL values and avoid crash.\n>\n> Approach-2:\n> Don't add the bound information to 'datums' field of [2] if any of the\n> value is NULL. 
Store this information separately in the structures\n> mentioned in [3] and process accordingly.\n>\n> I feel approach-1 is the better solution as this requires less code\n> changes and easy to implement than approach-2. Kindly share your\n> thoughts about the approaches and please share if you have any better\n> solution than the above 2.\n\nApproach 1 sounds better. It sounds like approach 1 might help us\nimplement support for allowing NULLs in range partition bounds in the\nfuture, if at all. For now, it might be better to not allocate the\nisnull array except for list partitioning.\n\nI'll wait for you to post a new patch addressing at least the comments\nin my earlier email. Also, please make sure to run `make check`\nsuccessfully before posting the patch. :)\n\nThanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 13:57:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> Approach 1 sounds better. It sounds like approach 1 might help us\n> implement support for allowing NULLs in range partition bounds in the\n> future, if at all. For now, it might be better to not allocate the\n> isnull array except for list partitioning.\n\nThanks for confirming.\n\n> I'll wait for you to post a new patch addressing at least the comments\n> in my earlier email. Also, please make sure to run `make check`\n> successfully before posting the patch. :)\n\nI have fixed all of the review comments given by you and Jeevan in the\nattached patch and also the attached patch contains more changes\ncompared to the previous patch. Following are the implementation\ndetails.\n\n1. Regarding syntax, the existing syntax will work fine for the\nsingle-column list partitioning. However I have used the new syntax\nfor the multi-column list partitioning as we discussed earlier. 
I have\nused a combination of 'AND' and 'OR' logic for the partition\nconstraints as given in the below example.\n\npostgres@17503=#create table t(a int, b text) partition by list(a,b);\nCREATE TABLE\npostgres@17503=#create table t1 partition of t for values in ((1,'a'),\n(NULL,'b'));\nCREATE TABLE\npostgres@17503=#\\d+ t\n Partitioned table \"public.t\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n a | integer | | | | plain |\n | |\n b | text | | | | extended |\n | |\nPartition key: LIST (a, b)\nPartitions: t1 FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n\npostgres@17503=#\\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n a | integer | | | | plain |\n | |\n b | text | | | | extended |\n | |\nPartition of: t FOR VALUES IN ((1, 'a'), (NULL, 'b'))\nPartition constraint: (((a = 1) AND (b = 'a'::text)) OR ((a IS NULL)\nAND (b = 'b'::text)))\nAccess method: heap\n\n2. In the existing code, NULL values were handled differently. It was\nnot added to the 'datums' variable, rather used to store the partition\nindex directly in the 'null_index' variable. Now there is a\npossibility of multiple NULL values, hence introducing a new member\n'isnulls' in the 'PartitionBoundInfoData' struct which indicates\nwhether the corresponding element in the 'datums' is NULL. Now\n'null_index' cannot be used directly to store the partition index, so\nremoved it and made the necessary changes in multiple places.\n\n3. I have added test cases for 'create table' and 'insert' statements\nrelated to multi-column list partitioning and these are working fine\nwith 'make check'.\n\n4. 
Handled the partition pruning code to accommodate these changes for\nsingle-column list partitioning. However it is pending for\nmulti-column list partitioning.\n\n5. I have done necessary changes in partition wise join related code\nto accommodate for single-column list partitioning. However it is\npending for multi-column list partitioning.\n\nKindly review the patch and let me know if any changes are required.\n\nPending items:\n1. Support of partition pruning for multi-column list partitioning.\n2. Support of partition wise join for multi-column list partitioning.\n\nI will continue to work on the above 2 items.\nKindly let me know if I am missing something.\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Wed, May 26, 2021 at 10:27 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Sun, May 23, 2021 at 6:49 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > > IMO, it is not such a bad syntax from a user's PoV. It's not hard to\n> > > understand from this syntax that the partition constraint is something\n> > > like (a, b) = (1, 2) OR (a, b) = (1, 5) OR ..., where the = performs\n> > > row-wise comparison.\n> >\n> > Thanks for suggesting to use row-wise comparison.\n>\n> Actually, I was just describing how the *users* may want to visualize\n> the partition constraint...\n>\n> > I have few queries\n> > with respect to handling of NULL values.\n> >\n> > 1. What should be the partition constraint for the above case. AFAIK,\n> > row-wise comparison wont work with NULL values as shown in [1]. I mean\n> > two rows are considered equal if all their corresponding members are\n> > non-null and equal. The rows are unequal if any corresponding members\n> > are non-null and unequal. Otherwise the result of the row comparison\n> > is unknown (null). 
So we should generate different types of\n> > constraints for NULL values.\n> >\n> > Ex:\n> > CREATE TABLE t(a int, b int) PARTITION BY LIST(a,b);\n> > CREATE TABLE t_1 PARTITION OF t FOR VALUES IN (1, 1), (1, NULL),\n> > (NULL, 1), (NULL, NULL);\n> >\n> > As per my knowledge, we should consider creating partition constraints\n> > for the above example as given below.\n> >\n> > (a, b) = (1, 1) OR ((a = 1) AND (b IS NULL)) OR ((a IS NULL) AND (b =\n> > 1)) OR ((a is NULL) AND (b is NULL)).\n>\n> Yeah, something like that should do the trick.\n>\n> Again, I was not actually suggesting that you write code to implement\n> the constraint using something like RowCompareExpr, only that the\n> users might want to view the constraint as doing row-wise comparison\n> of the partitioning columns and the specified value lists.\n>\n> > 2. In the current code we don't put the NULL value in the 'datums'\n> > field of 'PartitionBoundInfoData' structure [2]. Since there can be\n> > only one NULL value, we directly store the corresponding index value\n> > in the 'null_index' field. Now we have to handle multiple NULL values\n> > in case of Multi-Column List Partitioning. So the question is how to\n> > handle this scenario. Following are the 2 approaches to handle this.\n> >\n> > Approach-1:\n> > Add another field 'bool **isnull' in [2] and mark the corresponding\n> > element to TRUE if it has NULL value and the corresponding location in\n> > 'datums' contains empty/No value. 
For example, If a partition bound is\n> > (1, NULL), then\n> >\n> > datums[0][0] = 1\n> > datums[0][1] = Not assigned any value\n> > isnull[0][0] = FALSE\n> > is null[0][1] = TRUE\n> >\n> > So now we have an entry in the 'datums' field for a bound containing\n> > NULL value, so we should handle this in all the scenarios where we are\n> > manipulating 'datums' in order to support NULL values and avoid crash.\n> >\n> > Approach-2:\n> > Don't add the bound information to 'datums' field of [2] if any of the\n> > value is NULL. Store this information separately in the structures\n> > mentioned in [3] and process accordingly.\n> >\n> > I feel approach-1 is the better solution as this requires less code\n> > changes and easy to implement than approach-2. Kindly share your\n> > thoughts about the approaches and please share if you have any better\n> > solution than the above 2.\n>\n> Approach 1 sounds better. It sounds like approach 1 might help us\n> implement support for allowing NULLs in range partition bounds in the\n> future, if at all. For now, it might be better to not allocate the\n> isnull array except for list partitioning.\n>\n> I'll wait for you to post a new patch addressing at least the comments\n> in my earlier email. Also, please make sure to run `make check`\n> successfully before posting the patch. :)\n>\n> Thanks.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Jun 2021 20:15:09 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi Nitin,\n\nOn Thu, Jun 3, 2021 at 11:45 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > I'll wait for you to post a new patch addressing at least the comments\n> > in my earlier email. Also, please make sure to run `make check`\n> > successfully before posting the patch. 
:)\n>\n> I have fixed all of the review comments given by you and Jeevan in the\n> attached patch and also the attached patch contains more changes\n> compared to the previous patch. Following are the implementation\n> details.\n\nThanks for the updated version.\n\n> 1. Regarding syntax, the existing syntax will work fine for the\n> single-column list partitioning. However I have used the new syntax\n> for the multi-column list partitioning as we discussed earlier. I have\n> used a combination of 'AND' and 'OR' logic for the partition\n> constraints as given in the below example.\n>\n> postgres@17503=#create table t(a int, b text) partition by list(a,b);\n> CREATE TABLE\n> postgres@17503=#create table t1 partition of t for values in ((1,'a'),\n> (NULL,'b'));\n> CREATE TABLE\n> postgres@17503=#\\d+ t\n> Partitioned table \"public.t\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n> a | integer | | | | plain |\n> | |\n> b | text | | | | extended |\n> | |\n> Partition key: LIST (a, b)\n> Partitions: t1 FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n>\n> postgres@17503=#\\d+ t1\n> Table \"public.t1\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n> a | integer | | | | plain |\n> | |\n> b | text | | | | extended |\n> | |\n> Partition of: t FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n> Partition constraint: (((a = 1) AND (b = 'a'::text)) OR ((a IS NULL)\n> AND (b = 'b'::text)))\n> Access method: heap\n\nThe constraint expressions seem to come out correctly, though I\nhaven't checked your implementation closely yet.\n\n> 2. In the existing code, NULL values were handled differently. 
It was\n> not added to the 'datums' variable, rather used to store the partition\n> index directly in the 'null_index' variable. Now there is a\n> possibility of multiple NULL values, hence introducing a new member\n> 'isnulls' in the 'PartitionBoundInfoData' struct which indicates\n> whether the corresponding element in the 'datums' is NULL. Now\n> 'null_index' cannot be used directly to store the partition index, so\n> removed it and made the necessary changes in multiple places.\n>\n> 3. I have added test cases for 'create table' and 'insert' statements\n> related to multi-column list partitioning and these are working fine\n> with 'make check'.\n>\n> 4. Handled the partition pruning code to accommodate these changes for\n> single-column list partitioning. However it is pending for\n> multi-column list partitioning.\n>\n> 5. I have done necessary changes in partition wise join related code\n> to accommodate for single-column list partitioning. However it is\n> pending for multi-column list partitioning.\n>\n> Kindly review the patch and let me know if any changes are required.\n\nThe new list bound binary search and related comparison support\nfunction look a bit too verbose to me. I was expecting\npartition_list_bsearch() to look very much like\npartition_range_datum_bsearch(), but that is not the case. The\nspecial case code that you wrote in partition_list_bsearch() seems\nunnecessary, at least in that function. I'm talking about the code\nfragment starting with this comment:\n\n /*\n * Once we find the matching for the first column but if it does not\n * match for the any of the other columns, then the binary search\n * will not work in all the cases. 
We should traverse just below\n * and above the mid index until we find the match or we reach the\n * first mismatch.\n */\n\nI guess you're perhaps trying to address the case where the caller\ndoes not specify the values for all of the partition key columns,\nwhich can happen when the partition pruning code needs to handle a set\nof clauses matching only some of the partition key columns. But\nthat's a concern of the partition pruning code and so the special case\nshould be handled there (if at all), not in the binary search function\nthat is shared with other callers. Regarding that, I'm wondering if\nwe should require clauses matching all of the partition key columns to\nbe found for the pruning code to call the binary search, so do\nsomething like get_matching_hash_bounds() does:\n\n /*\n * For hash partitioning we can only perform pruning based on equality\n * clauses to the partition key or IS NULL clauses. We also can only\n * prune if we got values for all keys.\n */\n if (nvalues + bms_num_members(nullkeys) == partnatts)\n {\n /* code to compute matching hash bound offset */\n }\n else\n {\n /* Report all valid offsets into the boundinfo->indexes array. */\n result->bound_offsets = bms_add_range(NULL, 0,\n boundinfo->nindexes - 1);\n }\n\nDo you think that trying to match list partitions even with fewer keys\nis worth the complexity of the implementation? That is, is the use\ncase to search for only a subset of partition key columns common\nenough with list partitioning?\n\nIf we do decide to implement the special case, remember that to do\nthat efficiently, we'd need to require that the subset of matched key\ncolumns constitutes a prefix, because of the way the datums are\nsorted. That is, match all partitions when the query only contains a\nclause for b when the partition key is (a, b, c), but engage the\nspecial case of pruning if the query contains clauses for a, or for a\nand b.\n\nI will look at other parts of the patch next week hopefully. 
For\nnow, attached is a delta patch that applies on top of your v1, which\ndoes:\n\n* Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n* Make qsort_partition_list_value_cmp simply call\npartition_lbound_datum_cmp() instead of having its own logic to\ncompare input bounds\n* Move partition_lbound_datum_cmp() into partbounds.c as a static\nfunction (export seems unnecessary)\n* Add a comment for PartitionBoundInfo.isnulls and remove that for null_index\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Jun 2021 12:37:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Fri, Jun 11, 2021 at 12:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I will look at other parts of the patch next week hopefully. For\n> now, attached is a delta patch that applies on top of your v1, which\n> does:\n>\n> * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n> * Make qsort_partition_list_value_cmp simply call\n> partition_lbound_datum_cmp() instead of having its own logic to\n> compare input bounds\n> * Move partition_lbound_datum_cmp() into partbounds.c as a static\n> function (export seems unnecessary)\n> * Add a comment for PartitionBoundInfo.isnulls and remove that for null_index\n\nOne more:\n\n* Add all columns of newly added test query in insert.sql to the order\nby clause to get predictably ordered output\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:26:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Thu, Jun 10, 2021 at 8:38 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi Nitin,\n>\n> On Thu, Jun 3, 2021 at 11:45 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > > I'll wait for you to post a new patch addressing 
at least the comments\n> > > in my earlier email. Also, please make sure to run `make check`\n> > > successfully before posting the patch. :)\n> >\n> > I have fixed all of the review comments given by you and Jeevan in the\n> > attached patch and also the attached patch contains more changes\n> > compared to the previous patch. Following are the implementation\n> > details.\n>\n> Thanks for the updated version.\n>\n> > 1. Regarding syntax, the existing syntax will work fine for the\n> > single-column list partitioning. However I have used the new syntax\n> > for the multi-column list partitioning as we discussed earlier. I have\n> > used a combination of 'AND' and 'OR' logic for the partition\n> > constraints as given in the below example.\n> >\n> > postgres@17503=#create table t(a int, b text) partition by list(a,b);\n> > CREATE TABLE\n> > postgres@17503=#create table t1 partition of t for values in ((1,'a'),\n> > (NULL,'b'));\n> > CREATE TABLE\n> > postgres@17503=#\\d+ t\n> > Partitioned table \"public.t\"\n> > Column | Type | Collation | Nullable | Default | Storage |\n> > Compression | Stats target | Description\n> >\n> --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n> > a | integer | | | | plain |\n> > | |\n> > b | text | | | | extended |\n> > | |\n> > Partition key: LIST (a, b)\n> > Partitions: t1 FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n> >\n> > postgres@17503=#\\d+ t1\n> > Table \"public.t1\"\n> > Column | Type | Collation | Nullable | Default | Storage |\n> > Compression | Stats target | Description\n> >\n> --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n> > a | integer | | | | plain |\n> > | |\n> > b | text | | | | extended |\n> > | |\n> > Partition of: t FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n> > Partition constraint: (((a = 1) AND (b = 'a'::text)) OR ((a IS NULL)\n> > AND (b = 'b'::text)))\n> > Access method: heap\n>\n> The constraint 
expressions seem to come out correctly, though I\n> haven't checked your implementation closely yet.\n>\n> > 2. In the existing code, NULL values were handled differently. It was\n> > not added to the 'datums' variable, rather used to store the partition\n> > index directly in the 'null_index' variable. Now there is a\n> > possibility of multiple NULL values, hence introducing a new member\n> > 'isnulls' in the 'PartitionBoundInfoData' struct which indicates\n> > whether the corresponding element in the 'datums' is NULL. Now\n> > 'null_index' cannot be used directly to store the partition index, so\n> > removed it and made the necessary changes in multiple places.\n> >\n> > 3. I have added test cases for 'create table' and 'insert' statements\n> > related to multi-column list partitioning and these are working fine\n> > with 'make check'.\n> >\n> > 4. Handled the partition pruning code to accommodate these changes for\n> > single-column list partitioning. However it is pending for\n> > multi-column list partitioning.\n> >\n> > 5. I have done necessary changes in partition wise join related code\n> > to accommodate for single-column list partitioning. However it is\n> > pending for multi-column list partitioning.\n> >\n> > Kindly review the patch and let me know if any changes are required.\n>\n> The new list bound binary search and related comparison support\n> function look a bit too verbose to me. I was expecting\n> partition_list_bsearch() to look very much like\n> partition_range_datum_bsearch(), but that is not the case. The\n> special case code that you wrote in partition_list_bsearch() seems\n> unnecessary, at least in that function. I'm talking about the code\n> fragment starting with this comment:\n>\n> /*\n> * Once we find the matching for the first column but if it does\n> not\n> * match for the any of the other columns, then the binary search\n> * will not work in all the cases. 
We should traverse just below\n> * and above the mid index until we find the match or we reach\n> the\n> * first mismatch.\n> */\n>\n> I guess you're perhaps trying to address the case where the caller\n> does not specify the values for all of the partition key columns,\n> which can happen when the partition pruning code needs to handle a set\n> of clauses matching only some of the partition key columns. But\n> that's a concern of the partition pruning code and so the special case\n> should be handled there (if at all), not in the binary search function\n> that is shared with other callers. Regarding that, I'm wondering if\n> we should require clauses matching all of the partition key columns to\n> be found for the pruning code to call the binary search, so do\n> something like get_matching_hash_bounds() does:\n>\n> /*\n> * For hash partitioning we can only perform pruning based on equality\n> * clauses to the partition key or IS NULL clauses. We also can only\n> * prune if we got values for all keys.\n> */\n> if (nvalues + bms_num_members(nullkeys) == partnatts)\n> {\n> /* code to compute matching hash bound offset */\n> }\n> else\n> {\n> /* Report all valid offsets into the boundinfo->indexes array. */\n> result->bound_offsets = bms_add_range(NULL, 0,\n> boundinfo->nindexes - 1);\n> }\n>\n> Do you think that trying to match list partitions even with fewer keys\n> is worth the complexity of the implementation? That is, is the use\n> case to search for only a subset of partition key columns common\n> enough with list partitioning?\n>\n> If we do decide to implement the special case, remember that to do\n> that efficiently, we'd need to require that the subset of matched key\n> columns constitutes a prefix, because of the way the datums are\n> sorted. 
That is, match all partitions when the query only contains a\n> clause for b when the partition key is (a, b, c), but engage the\n> special case of pruning if the query contains clauses for a, or for a\n> and b.\n>\n> I will look at other parts of the patch next week hopefully. For\n> now, attached is a delta patch that applies on top of your v1, which\n> does:\n>\n> * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n> * Make qsort_partition_list_value_cmp simply call\n> partition_lbound_datum_cmp() instead of having its own logic to\n> compare input bounds\n> * Move partition_lbound_datum_cmp() into partbounds.c as a static\n> function (export seems unnecessary)\n> * Add a comment for PartitionBoundInfo.isnulls and remove that for\n> null_index\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\nHi, Amit:\n\n+ * isnulls is an array of boolean-tuples with key->partnatts booleans\nvalues\n+ * each. Currently only used for list partitioning, it stores whether a\n\nI think 'booleans' should be 'boolean'.\nThe trailing word 'each' is unnecessary.\n\nCheers", "msg_date": "Fri, 11 Jun 2021 10:32:25 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> The new list bound binary search and related comparison support\n> function look a bit too verbose to me. I was expecting\n> partition_list_bsearch() to look very much like\n> partition_range_datum_bsearch(), but that is not the case. The\n> special case code that you wrote in partition_list_bsearch() seems\n> unnecessary, at least in that function. I'm talking about the code\n> fragment starting with this comment:\n>\n> I will look at other parts of the patch next week hopefully. 
For\n> now, attached is a delta patch that applies on top of your v1, which\n> does:\n>\n> * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n> * Make qsort_partition_list_value_cmp simply call\n> partition_lbound_datum_cmp() instead of having its own logic to\n> compare input bounds\n> * Move partition_lbound_datum_cmp() into partbounds.c as a static\n> function (export seems unnecessary)\n> * Add a comment for PartitionBoundInfo.isnulls and remove that for null_index\n\nYes. You are right. The extra code added in partition_list_bsearch()\nis not required and thanks for sharing the delta patch. It looks good\nto me and I have incorporated the changes in the attached patch.\n\n> I guess you're perhaps trying to address the case where the caller\n> does not specify the values for all of the partition key columns,\n> which can happen when the partition pruning code needs to handle a set\n> of clauses matching only some of the partition key columns. But\n> that's a concern of the partition pruning code and so the special case\n> should be handled there (if at all), not in the binary search function\n> that is shared with other callers. Regarding that, I'm wondering if\n> we should require clauses matching all of the partition key columns to\n> be found for the pruning code to call the binary search, so do\n> something like get_matching_hash_bounds() does:\n>\n> Do you think that trying to match list partitions even with fewer keys\n> is worth the complexity of the implementation? That is, is the use\n> case to search for only a subset of partition key columns common\n> enough with list partitioning?\n>\n> If we do decide to implement the special case, remember that to do\n> that efficiently, we'd need to require that the subset of matched key\n> columns constitutes a prefix, because of the way the datums are\n> sorted. 
That is, match all partitions when the query only contains a\n> clause for b when the partition key is (a, b, c), but engage the\n> special case of pruning if the query contains clauses for a, or for a\n> and b.\n\nThanks for the suggestion. Below is the implementation details for the\npartition pruning for multi column list partitioning.\n\nIn the existing code (For single column list partitioning)\n1. In gen_partprune_steps_internal(), we try to match the where\nclauses provided by the user with the partition key data using\nmatch_clause_to_partition_key(). Based on the match, this function can\nreturn many values like PARTCLAUSE_MATCH_CLAUSE,\nPARTCLAUSE_MATCH_NULLNESS, PARTCLAUSE_NOMATCH, etc.\n2. In case of PARTCLAUSE_MATCH_CLAUSE, we generate steps using\ngen_prune_steps_from_opexps() (strategy-2) which generate and return a\nlist of PartitionPruneStepOp that are based on OpExpr and BooleanTest\nclauses that have been matched to the partition key and it also takes\ncare handling prefix of the partition keys.\n3. In case of PARTCLAUSE_MATCH_NULLNESS, we generate steps using\ngen_prune_step_op() (strategy-1) which generates single\nPartitionPruneStepOp since the earlier list partitioning supports\nsingle column and there can be only one NULL value. In\nget_matching_list_bounds(), if the nullkeys is not empty, we fetch the\npartition index which accepts null and we used to return from here.\n\nIn case of multi column list partitioning, we have columns more than\none and hence there is a possibility of more than one NULL values in\nthe where clauses. The above mentioned steps are modified like below.\n\n1. Modified the match_clause_to_partition_key() to generate an object\nof PartClauseInfo structure and return PARTCLAUSE_MATCH_CLAUSE even in\ncase of clauses related to NULL. The information required to generate\nPartClauseInfo is populated here like the constant expression\nconsisting of (Datum) 0, op_strategy, op_is_ne, etc.\n2. 
Since I am returning PARTCLAUSE_MATCH_CLAUSE, now we use strategy-2\n(gen_prune_steps_from_opexps) to generate partition pruning steps.\nThis function takes care of generating a list of pruning steps if\nthere are multiple clauses and also takes care of handling prefixes.\n3. Modified perform_pruning_base_step() to generate the datum values\nand isnulls data of the where clauses. If any of the keys contains a\nNULL value, then the corresponding datum value is 0.\n4. Modified get_matching_list_bounds() to generate the minimum offset\nand/or maximum offset of the matched values based on the different\noperator strategies. Now, since the NULL-containing bound values are\npart of 'boundinfo', changed the code accordingly to include or exclude\nthe NULL-containing partitions in different scenarios like\nInvalidStrategy, etc.\n\nI have done some cosmetic changes to\nv1_multi_column_list_partitioning.patch. So all the above code changes\nrelated to partition pruning are merged with the previous patch and\nalso included the delta patch shared by you. Hence sharing a single\npatch.\n\nKindly have a look and share your thoughts.\n\n\n\nOn Fri, Jun 11, 2021 at 10:57 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Thu, Jun 10, 2021 at 8:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> Hi Nitin,\n>>\n>> On Thu, Jun 3, 2021 at 11:45 PM Nitin Jadhav\n>> <nitinjadhavpostgres@gmail.com> wrote:\n>> > > I'll wait for you to post a new patch addressing at least the comments\n>> > > in my earlier email. Also, please make sure to run `make check`\n>> > > successfully before posting the patch. :)\n>> >\n>> > I have fixed all of the review comments given by you and Jeevan in the\n>> > attached patch and also the attached patch contains more changes\n>> > compared to the previous patch. Following are the implementation\n>> > details.\n>>\n>> Thanks for the updated version.\n>>\n>> > 1. 
Regarding syntax, the existing syntax will work fine for the\n>> > single-column list partitioning. However I have used the new syntax\n>> > for the multi-column list partitioning as we discussed earlier. I have\n>> > used a combination of 'AND' and 'OR' logic for the partition\n>> > constraints as given in the below example.\n>> >\n>> > postgres@17503=#create table t(a int, b text) partition by list(a,b);\n>> > CREATE TABLE\n>> > postgres@17503=#create table t1 partition of t for values in ((1,'a'),\n>> > (NULL,'b'));\n>> > CREATE TABLE\n>> > postgres@17503=#\\d+ t\n>> > Partitioned table \"public.t\"\n>> > Column | Type | Collation | Nullable | Default | Storage |\n>> > Compression | Stats target | Description\n>> > --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n>> > a | integer | | | | plain |\n>> > | |\n>> > b | text | | | | extended |\n>> > | |\n>> > Partition key: LIST (a, b)\n>> > Partitions: t1 FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n>> >\n>> > postgres@17503=#\\d+ t1\n>> > Table \"public.t1\"\n>> > Column | Type | Collation | Nullable | Default | Storage |\n>> > Compression | Stats target | Description\n>> > --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n>> > a | integer | | | | plain |\n>> > | |\n>> > b | text | | | | extended |\n>> > | |\n>> > Partition of: t FOR VALUES IN ((1, 'a'), (NULL, 'b'))\n>> > Partition constraint: (((a = 1) AND (b = 'a'::text)) OR ((a IS NULL)\n>> > AND (b = 'b'::text)))\n>> > Access method: heap\n>>\n>> The constraint expressions seem to come out correctly, though I\n>> haven't checked your implementation closely yet.\n>>\n>> > 2. In the existing code, NULL values were handled differently. It was\n>> > not added to the 'datums' variable, rather used to store the partition\n>> > index directly in the 'null_index' variable. 
Now there is a\n>> > possibility of multiple NULL values, hence introducing a new member\n>> > 'isnulls' in the 'PartitionBoundInfoData' struct which indicates\n>> > whether the corresponding element in the 'datums' is NULL. Now\n>> > 'null_index' cannot be used directly to store the partition index, so\n>> > removed it and made the necessary changes in multiple places.\n>> >\n>> > 3. I have added test cases for 'create table' and 'insert' statements\n>> > related to multi-column list partitioning and these are working fine\n>> > with 'make check'.\n>> >\n>> > 4. Handled the partition pruning code to accommodate these changes for\n>> > single-column list partitioning. However it is pending for\n>> > multi-column list partitioning.\n>> >\n>> > 5. I have done necessary changes in partition wise join related code\n>> > to accommodate for single-column list partitioning. However it is\n>> > pending for multi-column list partitioning.\n>> >\n>> > Kindly review the patch and let me know if any changes are required.\n>>\n>> The new list bound binary search and related comparison support\n>> function look a bit too verbose to me. I was expecting\n>> partition_list_bsearch() to look very much like\n>> partition_range_datum_bsearch(), but that is not the case. The\n>> special case code that you wrote in partition_list_bsearch() seems\n>> unnecessary, at least in that function. I'm talking about the code\n>> fragment starting with this comment:\n>>\n>> /*\n>> * Once we find the matching for the first column but if it does not\n>> * match for the any of the other columns, then the binary search\n>> * will not work in all the cases. 
We should traverse just below\n>> * and above the mid index until we find the match or we reach the\n>> * first mismatch.\n>> */\n>>\n>> I guess you're perhaps trying to address the case where the caller\n>> does not specify the values for all of the partition key columns,\n>> which can happen when the partition pruning code needs to handle a set\n>> of clauses matching only some of the partition key columns. But\n>> that's a concern of the partition pruning code and so the special case\n>> should be handled there (if at all), not in the binary search function\n>> that is shared with other callers. Regarding that, I'm wondering if\n>> we should require clauses matching all of the partition key columns to\n>> be found for the pruning code to call the binary search, so do\n>> something like get_matching_hash_bounds() does:\n>>\n>> /*\n>> * For hash partitioning we can only perform pruning based on equality\n>> * clauses to the partition key or IS NULL clauses. We also can only\n>> * prune if we got values for all keys.\n>> */\n>> if (nvalues + bms_num_members(nullkeys) == partnatts)\n>> {\n>> /* code to compute matching hash bound offset */\n>> }\n>> else\n>> {\n>> /* Report all valid offsets into the boundinfo->indexes array. */\n>> result->bound_offsets = bms_add_range(NULL, 0,\n>> boundinfo->nindexes - 1);\n>> }\n>>\n>> Do you think that trying to match list partitions even with fewer keys\n>> is worth the complexity of the implementation? That is, is the use\n>> case to search for only a subset of partition key columns common\n>> enough with list partitioning?\n>>\n>> If we do decide to implement the special case, remember that to do\n>> that efficiently, we'd need to require that the subset of matched key\n>> columns constitutes a prefix, because of the way the datums are\n>> sorted. 
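Translating the gate quoted above from get_matching_hash_bounds() to the list case could look roughly like this — a sketch with simplified types, where matched[i] stands for "a usable clause (value or IS NULL) was found for key column i":

```c
#include <stdbool.h>

/*
 * Gate modeled on get_matching_hash_bounds(): binary search only when
 * every key column got a value or an IS NULL clause; otherwise the
 * caller falls back to selecting all bounds, i.e. the
 * bms_add_range(NULL, 0, nindexes - 1) case.
 */
static bool
can_use_list_bsearch(const bool *matched, int partnatts)
{
    int         nmatched = 0;

    for (int i = 0; i < partnatts; i++)
        if (matched[i])
            nmatched++;
    return nmatched == partnatts;
}

/*
 * Looser rule: allow a prefix of the key columns, e.g. clauses on (a)
 * or (a, b) for a key (a, b, c), but not on b alone.  Returns the
 * prefix length, or -1 if the matched columns do not form a prefix.
 */
static int
matched_prefix_len(const bool *matched, int partnatts)
{
    int         len = 0;

    while (len < partnatts && matched[len])
        len++;
    for (int i = len; i < partnatts; i++)
        if (matched[i])
            return -1;          /* gap: not a prefix */
    return len;
}
```

Both helper names are inventions for the example; the point is only that the prefix test is cheap, so the decision can be made before ever touching the bounds.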
That is, match all partitions when the query only contains a\n>> clause for b when the partition key is (a, b, c), but engage the\n>> special case of pruning if the query contains clauses for a, or for a\n>> and b.\n>>\n>> I will look at other parts of the patch next week hopefully. For\n>> now, attached is a delta patch that applies on top of your v1, which\n>> does:\n>>\n>> * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n>> * Make qsort_partition_list_value_cmp simply call\n>> partition_lbound_datum_cmp() instead of having its own logic to\n>> compare input bounds\n>> * Move partition_lbound_datum_cmp() into partbounds.c as a static\n>> function (export seems unnecessary)\n>> * Add a comment for PartitionBoundInfo.isnulls and remove that for null_index\n>>\n>> --\n>> Amit Langote\n>> EDB: http://www.enterprisedb.com\n>\n>\n> Hi, Amit:\n>\n> + * isnulls is an array of boolean-tuples with key->partnatts booleans values\n> + * each. Currently only used for list partitioning, it stores whether a\n>\n> I think 'booleans' should be 'boolean'.\n> The trailing word 'each' is unnecessary.\n>\n> Cheers", "msg_date": "Wed, 25 Aug 2021 18:12:09 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Wed, Aug 25, 2021 at 5:41 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > The new list bound binary search and related comparison support\n> > function look a bit too verbose to me. I was expecting\n> > partition_list_bsearch() to look very much like\n> > partition_range_datum_bsearch(), but that is not the case. The\n> > special case code that you wrote in partition_list_bsearch() seems\n> > unnecessary, at least in that function. I'm talking about the code\n> > fragment starting with this comment:\n> >\n> > I will look at other parts of the patch next week hopefully. 
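For reference, the shape being asked for — a search that parallels partition_range_datum_bsearch() — can be sketched over two-column int tuples as follows (a simplified stand-in, not the real PartitionBoundInfo machinery):

```c
#include <stdbool.h>

#define NKEYS 2

/* Lexicographic comparison of two NKEYS-column tuples (no NULLs here). */
static int
tuple_cmp(const int *a, const int *b)
{
    for (int i = 0; i < NKEYS; i++)
        if (a[i] != b[i])
            return (a[i] < b[i]) ? -1 : 1;
    return 0;
}

/*
 * Return the greatest index whose bound sorts <= probe, or -1 if all
 * bounds are greater; *is_equal reports an exact match.  'bounds' must
 * be sorted ascending.  Same contract as partition_range_datum_bsearch.
 */
static int
list_bound_bsearch(int bounds[][NKEYS], int nbounds,
                   const int *probe, bool *is_equal)
{
    int         lo = -1,
                hi = nbounds - 1;

    *is_equal = false;
    while (lo < hi)
    {
        int         mid = (lo + hi + 1) / 2;
        int         cmp = tuple_cmp(bounds[mid], probe);

        if (cmp <= 0)
        {
            lo = mid;
            if (cmp == 0)
            {
                *is_equal = true;
                break;
            }
        }
        else
            hi = mid - 1;
    }
    return lo;
}
```

Nothing in the loop depends on the number of key columns — only the comparator does — which is why no special traversal around the mid index should be needed.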
For\n> > now, attached is a delta patch that applies on top of your v1, which\n> > does:\n> >\n> > * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n> > * Make qsort_partition_list_value_cmp simply call\n> > partition_lbound_datum_cmp() instead of having its own logic to\n> > compare input bounds\n> > * Move partition_lbound_datum_cmp() into partbounds.c as a static\n> > function (export seems unnecessary)\n> > * Add a comment for PartitionBoundInfo.isnulls and remove that for\n> null_index\n>\n> Yes. You are right. The extra code added in partition_list_bsearch()\n> is not required and thanks for sharing the delta patch. It looks good\n> to me and I have incorporated the changes in the attached patch.\n>\n> > I guess you're perhaps trying to address the case where the caller\n> > does not specify the values for all of the partition key columns,\n> > which can happen when the partition pruning code needs to handle a set\n> > of clauses matching only some of the partition key columns. But\n> > that's a concern of the partition pruning code and so the special case\n> > should be handled there (if at all), not in the binary search function\n> > that is shared with other callers. Regarding that, I'm wondering if\n> > we should require clauses matching all of the partition key columns to\n> > be found for the pruning code to call the binary search, so do\n> > something like get_matching_hash_bounds() does:\n> >\n> > Do you think that trying to match list partitions even with fewer keys\n> > is worth the complexity of the implementation? That is, is the use\n> > case to search for only a subset of partition key columns common\n> > enough with list partitioning?\n> >\n> > If we do decide to implement the special case, remember that to do\n> > that efficiently, we'd need to require that the subset of matched key\n> > columns constitutes a prefix, because of the way the datums are\n> > sorted. 
That is, match all partitions when the query only contains a\n> > clause for b when the partition key is (a, b, c), but engage the\n> > special case of pruning if the query contains clauses for a, or for a\n> > and b.\n>\n> Thanks for the suggestion. Below is the implementation details for the\n> partition pruning for multi column list partitioning.\n>\n> In the existing code (For single column list partitioning)\n> 1. In gen_partprune_steps_internal(), we try to match the where\n> clauses provided by the user with the partition key data using\n> match_clause_to_partition_key(). Based on the match, this function can\n> return many values like PARTCLAUSE_MATCH_CLAUSE,\n> PARTCLAUSE_MATCH_NULLNESS, PARTCLAUSE_NOMATCH, etc.\n> 2. In case of PARTCLAUSE_MATCH_CLAUSE, we generate steps using\n> gen_prune_steps_from_opexps() (strategy-2) which generate and return a\n> list of PartitionPruneStepOp that are based on OpExpr and BooleanTest\n> clauses that have been matched to the partition key and it also takes\n> care handling prefix of the partition keys.\n> 3. In case of PARTCLAUSE_MATCH_NULLNESS, we generate steps using\n> gen_prune_step_op() (strategy-1) which generates single\n> PartitionPruneStepOp since the earlier list partitioning supports\n> single column and there can be only one NULL value. In\n> get_matching_list_bounds(), if the nullkeys is not empty, we fetch the\n> partition index which accepts null and we used to return from here.\n>\n> In case of multi column list partitioning, we have columns more than\n> one and hence there is a possibility of more than one NULL values in\n> the where clauses. The above mentioned steps are modified like below.\n>\n> 1. Modified the match_clause_to_partition_key() to generate an object\n> of PartClauseInfo structure and return PARTCLAUSE_MATCH_CLAUSE even in\n> case of clauses related to NULL. 
The information required to generate\n> PartClauseInfo is populated here like the constant expression\n> consisting of (Datum) 0, op_strategy, op_is_ne, etc.\n> 2. Since I am returning PARTCLAUSE_MATCH_CLAUSE, now we use strategy-2\n> (gen_prune_steps_from_opexps) to generate partition pruning steps.\n> This function takes care of generating a list of pruning steps if\n> there are multiple clauses and also takes care of handling prefixes.\n> 3. Modified perform_pruning_base_step() to generate the datum values\n> and isnulls data of the where clauses. In case if any of the key\n> contains NULL value then the corresponding datum value is 0.\n> 4. Modified get_matching_list_bounds() to generate the minimum offset\n> and/or maximum offset of the matched values based on the difference\n> operation strategies. Now since the NULL containing bound values are\n> part of 'boundinfo', changed the code accordingly to include the NULL\n> containing partitions or not in different scenarios like\n> InvalidStrategy, etc.\n>\n> I have done some cosmetic changes to\n> v1_multi_column_list_partitioning.patch. So all the above code changes\n> related to partition pruning are merged with the previous patch and\n> also included the delta patch shared by you. Hence sharing a single\n> patch.\n>\n> Kindly have a look and share your thoughts.\n>\n>\n> Hi,\n\nbq. Supported new syantx to allow mentioning multiple key information.\n\nsyantx -> syntax\n\n+ isDuplicate = checkForDuplicates(result, values);\n+ if (isDuplicate)\n+ continue;\n\nIt seems the variable isDuplicate is not needed. The if statement can\ndirectly check the return value from checkForDuplicates().\n\n+ //TODO: Handle for multi-column cases\n+ for (j = 0; j < 1; j++)\n\nIs this part going to be updated in the next patch?\n\nCheers\n\nOn Wed, Aug 25, 2021 at 5:41 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:> The new list bound binary search and related comparison support\n> function look a bit too verbose to me.  
I was expecting\n> partition_list_bsearch() to look very much like\n> partition_range_datum_bsearch(), but that is not the case.  The\n> special case code that you wrote in partition_list_bsearch() seems\n> unnecessary, at least in that function.  I'm talking about the code\n> fragment starting with this comment:\n>\n> I will look at other parts of the patch next week hopefully.   For\n> now, attached is a delta patch that applies on top of your v1, which\n> does:\n>\n> * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n> * Make qsort_partition_list_value_cmp simply call\n> partition_lbound_datum_cmp() instead of having its own logic to\n> compare input bounds\n> * Move partition_lbound_datum_cmp() into partbounds.c as a static\n> function (export seems unnecessary)\n> * Add a comment for PartitionBoundInfo.isnulls and remove that for null_index\n\nYes. You are right. The extra code added in partition_list_bsearch()\nis not required and thanks for sharing the delta patch. It looks good\nto me and I have incorporated the changes in the attached patch.\n\n> I guess you're perhaps trying to address the case where the caller\n> does not specify the values for all of the partition key columns,\n> which can happen when the partition pruning code needs to handle a set\n> of clauses matching only some of the partition key columns.  But\n> that's a concern of the partition pruning code and so the special case\n> should be handled there (if at all), not in the binary search function\n> that is shared with other callers.  Regarding that, I'm wondering if\n> we should require clauses matching all of the partition key columns to\n> be found for the pruning code to call the binary search, so do\n> something like get_matching_hash_bounds() does:\n>\n> Do you think that trying to match list partitions even with fewer keys\n> is worth the complexity of the implementation?  
That is, is the use\n> case to search for only a subset of partition key columns common\n> enough with list partitioning?\n>\n> If we do decide to implement the special case, remember that to do\n> that efficiently, we'd need to require that the subset of matched key\n> columns constitutes a prefix, because of the way the datums are\n> sorted.  That is, match all partitions when the query only contains a\n> clause for b when the partition key is (a, b, c), but engage the\n> special case of pruning if the query contains clauses for a, or for a\n> and b.\n\nThanks for the suggestion. Below is the implementation details for the\npartition pruning for multi column list partitioning.\n\nIn the existing code (For single column list partitioning)\n1. In gen_partprune_steps_internal(), we try to match the where\nclauses provided by the user with the partition key data using\nmatch_clause_to_partition_key(). Based on the match, this function can\nreturn many values like PARTCLAUSE_MATCH_CLAUSE,\nPARTCLAUSE_MATCH_NULLNESS, PARTCLAUSE_NOMATCH, etc.\n2. In case of PARTCLAUSE_MATCH_CLAUSE, we generate steps using\ngen_prune_steps_from_opexps() (strategy-2) which generate and return a\nlist of PartitionPruneStepOp that are based on OpExpr and BooleanTest\nclauses that have been matched to the partition key and it also takes\ncare handling prefix of the partition keys.\n3. In case of PARTCLAUSE_MATCH_NULLNESS, we generate steps using\ngen_prune_step_op() (strategy-1) which generates single\nPartitionPruneStepOp since the earlier list partitioning supports\nsingle column and there can be only one NULL value. In\nget_matching_list_bounds(), if the nullkeys is not empty, we fetch the\npartition index which accepts null and we used to return from here.\n\nIn case of multi column list partitioning, we have columns more than\none and hence there is a possibility of more than one NULL values in\nthe where clauses. The above mentioned steps are modified like below.\n\n1.  
Modified the match_clause_to_partition_key() to generate an object\nof PartClauseInfo structure and return PARTCLAUSE_MATCH_CLAUSE even in\ncase of clauses related to NULL. The information required to generate\nPartClauseInfo is populated here like the constant expression\nconsisting of (Datum) 0, op_strategy, op_is_ne, etc.\n2. Since I am returning PARTCLAUSE_MATCH_CLAUSE, now we use strategy-2\n(gen_prune_steps_from_opexps) to generate partition pruning steps.\nThis function takes care of generating a list of pruning steps if\nthere are multiple clauses and also takes care of handling prefixes.\n3. Modified perform_pruning_base_step() to generate the datum values\nand isnulls data of the where clauses. In case if any of the key\ncontains NULL value then the corresponding datum value is 0.\n4. Modified get_matching_list_bounds() to generate the minimum offset\nand/or maximum offset of the matched values based on the difference\noperation strategies. Now since the NULL containing bound values are\npart of 'boundinfo', changed the code accordingly to include the NULL\ncontaining partitions or not in different scenarios like\nInvalidStrategy, etc.\n\nI have done some cosmetic changes to\nv1_multi_column_list_partitioning.patch. So all the above code changes\nrelated to partition pruning are merged with the previous patch and\nalso included the delta patch shared by you. Hence sharing a single\npatch.\n\nKindly have a look and share your thoughts.\nHi,bq. Supported new syantx to allow mentioning multiple key information. syantx -> syntax+       isDuplicate = checkForDuplicates(result, values);+       if (isDuplicate)+           continue;It seems the variable isDuplicate is not needed. 
The if statement can directly check the return value from checkForDuplicates().+       //TODO: Handle for multi-column cases+       for (j = 0; j < 1; j++)Is this part going to be updated in the next patch?Cheers", "msg_date": "Wed, 25 Aug 2021 14:10:33 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> + * isnulls is an array of boolean-tuples with key->partnatts booleans values\n> + * each. Currently only used for list partitioning, it stores whether a\n>\n> I think 'booleans' should be 'boolean'.\n> The trailing word 'each' is unnecessary.\n\n> bq. Supported new syantx to allow mentioning multiple key information.\n>\n> syantx -> syntax\n\n> + isDuplicate = checkForDuplicates(result, values);\n> + if (isDuplicate)\n> + continue;\n>\n> It seems the variable isDuplicate is not needed. The if statement can directly check the return value from checkForDuplicates().\n\nI agree that isDuplicate is not required.\nThanks for sharing the comments. I will take care of these comments in\nthe next patch.\n\n> + //TODO: Handle for multi-column cases\n> + for (j = 0; j < 1; j++)\n>\n> Is this part going to be updated in the next patch?\n\nYes. The code changes related to partition-wise join are in progress.\nI will handle these in the next patch.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Aug 26, 2021 at 2:40 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Wed, Aug 25, 2021 at 5:41 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:\n>>\n>> > The new list bound binary search and related comparison support\n>> > function look a bit too verbose to me. I was expecting\n>> > partition_list_bsearch() to look very much like\n>> > partition_range_datum_bsearch(), but that is not the case. The\n>> > special case code that you wrote in partition_list_bsearch() seems\n>> > unnecessary, at least in that function. 
I'm talking about the code\n>> > fragment starting with this comment:\n>> >\n>> > I will look at other parts of the patch next week hopefully. For\n>> > now, attached is a delta patch that applies on top of your v1, which\n>> > does:\n>> >\n>> > * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n>> > * Make qsort_partition_list_value_cmp simply call\n>> > partition_lbound_datum_cmp() instead of having its own logic to\n>> > compare input bounds\n>> > * Move partition_lbound_datum_cmp() into partbounds.c as a static\n>> > function (export seems unnecessary)\n>> > * Add a comment for PartitionBoundInfo.isnulls and remove that for null_index\n>>\n>> Yes. You are right. The extra code added in partition_list_bsearch()\n>> is not required and thanks for sharing the delta patch. It looks good\n>> to me and I have incorporated the changes in the attached patch.\n>>\n>> > I guess you're perhaps trying to address the case where the caller\n>> > does not specify the values for all of the partition key columns,\n>> > which can happen when the partition pruning code needs to handle a set\n>> > of clauses matching only some of the partition key columns. But\n>> > that's a concern of the partition pruning code and so the special case\n>> > should be handled there (if at all), not in the binary search function\n>> > that is shared with other callers. Regarding that, I'm wondering if\n>> > we should require clauses matching all of the partition key columns to\n>> > be found for the pruning code to call the binary search, so do\n>> > something like get_matching_hash_bounds() does:\n>> >\n>> > Do you think that trying to match list partitions even with fewer keys\n>> > is worth the complexity of the implementation? 
That is, is the use\n>> > case to search for only a subset of partition key columns common\n>> > enough with list partitioning?\n>> >\n>> > If we do decide to implement the special case, remember that to do\n>> > that efficiently, we'd need to require that the subset of matched key\n>> > columns constitutes a prefix, because of the way the datums are\n>> > sorted. That is, match all partitions when the query only contains a\n>> > clause for b when the partition key is (a, b, c), but engage the\n>> > special case of pruning if the query contains clauses for a, or for a\n>> > and b.\n>>\n>> Thanks for the suggestion. Below is the implementation details for the\n>> partition pruning for multi column list partitioning.\n>>\n>> In the existing code (For single column list partitioning)\n>> 1. In gen_partprune_steps_internal(), we try to match the where\n>> clauses provided by the user with the partition key data using\n>> match_clause_to_partition_key(). Based on the match, this function can\n>> return many values like PARTCLAUSE_MATCH_CLAUSE,\n>> PARTCLAUSE_MATCH_NULLNESS, PARTCLAUSE_NOMATCH, etc.\n>> 2. In case of PARTCLAUSE_MATCH_CLAUSE, we generate steps using\n>> gen_prune_steps_from_opexps() (strategy-2) which generate and return a\n>> list of PartitionPruneStepOp that are based on OpExpr and BooleanTest\n>> clauses that have been matched to the partition key and it also takes\n>> care handling prefix of the partition keys.\n>> 3. In case of PARTCLAUSE_MATCH_NULLNESS, we generate steps using\n>> gen_prune_step_op() (strategy-1) which generates single\n>> PartitionPruneStepOp since the earlier list partitioning supports\n>> single column and there can be only one NULL value. 
In\n>> get_matching_list_bounds(), if the nullkeys is not empty, we fetch the\n>> partition index which accepts null and we used to return from here.\n>>\n>> In case of multi column list partitioning, we have columns more than\n>> one and hence there is a possibility of more than one NULL values in\n>> the where clauses. The above mentioned steps are modified like below.\n>>\n>> 1. Modified the match_clause_to_partition_key() to generate an object\n>> of PartClauseInfo structure and return PARTCLAUSE_MATCH_CLAUSE even in\n>> case of clauses related to NULL. The information required to generate\n>> PartClauseInfo is populated here like the constant expression\n>> consisting of (Datum) 0, op_strategy, op_is_ne, etc.\n>> 2. Since I am returning PARTCLAUSE_MATCH_CLAUSE, now we use strategy-2\n>> (gen_prune_steps_from_opexps) to generate partition pruning steps.\n>> This function takes care of generating a list of pruning steps if\n>> there are multiple clauses and also takes care of handling prefixes.\n>> 3. Modified perform_pruning_base_step() to generate the datum values\n>> and isnulls data of the where clauses. In case if any of the key\n>> contains NULL value then the corresponding datum value is 0.\n>> 4. Modified get_matching_list_bounds() to generate the minimum offset\n>> and/or maximum offset of the matched values based on the difference\n>> operation strategies. Now since the NULL containing bound values are\n>> part of 'boundinfo', changed the code accordingly to include the NULL\n>> containing partitions or not in different scenarios like\n>> InvalidStrategy, etc.\n>>\n>> I have done some cosmetic changes to\n>> v1_multi_column_list_partitioning.patch. So all the above code changes\n>> related to partition pruning are merged with the previous patch and\n>> also included the delta patch shared by you. Hence sharing a single\n>> patch.\n>>\n>> Kindly have a look and share your thoughts.\n>>\n>>\n> Hi,\n>\n> bq. 
Supported new syantx to allow mentioning multiple key information.\n>\n> syantx -> syntax\n>\n> + isDuplicate = checkForDuplicates(result, values);\n> + if (isDuplicate)\n> + continue;\n>\n> It seems the variable isDuplicate is not needed. The if statement can directly check the return value from checkForDuplicates().\n>\n> + //TODO: Handle for multi-column cases\n> + for (j = 0; j < 1; j++)\n>\n> Is this part going to be updated in the next patch?\n>\n> Cheers\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:54:09 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi Nitin.\n\nI have been testing these patches. Patches applied cleanly on the head.\nWhile testing I found below a case where update row movement is not working\nproperly.\nPlease find the test case below.\n\npostgres=# create table p0 (a int, b text, c bool) partition by list\n(a,b,c);\nCREATE TABLE\npostgres=# create table p01 partition of p0 for values in ((1,1,true));\nCREATE TABLE\npostgres=# create table p02 partition of p0 for values in ((1,NULL,false));\nCREATE TABLE\npostgres=# insert into p0 values (1,'1',true);\nINSERT 0 1\npostgres=# insert into p0 values (1,NULL,false);\nINSERT 0 1\npostgres=# select tableoid::regclass,* from p0;\n tableoid | a | b | c\n----------+---+---+---\n p01 | 1 | 1 | t\n p02 | 1 | | f\n(2 rows)\n\npostgres=# update p0 set b = NULL;\nUPDATE 2\npostgres=# select tableoid::regclass,* from p0;\n tableoid | a | b | c\n----------+---+---+---\n p01 | 1 | | t\n p02 | 1 | | f\n(2 rows)\n\nI think this update should fail as there is no partition satisfying update\nrow (1,NULL,true).\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\nOn Fri, Aug 27, 2021 at 12:53 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > + * isnulls is an array of boolean-tuples with key->partnatts booleans\n> values\n> > + * each. 
Currently only used for list partitioning, it stores whether a\n> >\n> > I think 'booleans' should be 'boolean'.\n> > The trailing word 'each' is unnecessary.\n>\n> > bq. Supported new syantx to allow mentioning multiple key information.\n> >\n> > syantx -> syntax\n>\n> > + isDuplicate = checkForDuplicates(result, values);\n> > + if (isDuplicate)\n> > + continue;\n> >\n> > It seems the variable isDuplicate is not needed. The if statement can\n> directly check the return value from checkForDuplicates().\n>\n> I agree that isDuplicate is not required.\n> Thanks for sharing the comments. I will take care of these comments in\n> the next patch.\n>\n> > + //TODO: Handle for multi-column cases\n> > + for (j = 0; j < 1; j++)\n> >\n> > Is this part going to be updated in the next patch?\n>\n> Yes. The code changes related to partition-wise join are in progress.\n> I will handle these in the next patch.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Thu, Aug 26, 2021 at 2:40 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> >\n> >\n> > On Wed, Aug 25, 2021 at 5:41 AM Nitin Jadhav <\n> nitinjadhavpostgres@gmail.com> wrote:\n> >>\n> >> > The new list bound binary search and related comparison support\n> >> > function look a bit too verbose to me. I was expecting\n> >> > partition_list_bsearch() to look very much like\n> >> > partition_range_datum_bsearch(), but that is not the case. The\n> >> > special case code that you wrote in partition_list_bsearch() seems\n> >> > unnecessary, at least in that function. I'm talking about the code\n> >> > fragment starting with this comment:\n> >> >\n> >> > I will look at other parts of the patch next week hopefully. 
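Coming back to the isDuplicate comment quoted above, the suggested simplification just folds the temporary into the condition. A sketch with the helper reduced to int pairs (checkForDuplicates() here is a hypothetical stand-in for the patch's function over real bound datums):

```c
#include <stdbool.h>

#define NKEYS 2

/*
 * Hypothetical stand-in for the patch's checkForDuplicates(): has the
 * NKEYS-column tuple 'values' already been collected among the first
 * 'nresult' bound tuples in 'result'?
 */
static bool
checkForDuplicates(int result[][NKEYS], int nresult, const int *values)
{
    for (int i = 0; i < nresult; i++)
    {
        bool        same = true;

        for (int j = 0; j < NKEYS; j++)
        {
            if (result[i][j] != values[j])
            {
                same = false;
                break;
            }
        }
        if (same)
            return true;
    }
    return false;
}
```

At the call site the return value is then used directly, with no temporary: `if (checkForDuplicates(result, nresult, values)) continue;`.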
For\n> >> > now, attached is a delta patch that applies on top of your v1, which\n> >> > does:\n> >> >\n> >> > * Simplify partition_list_bsearch() and partition_lbound_datum_cmp()\n> >> > * Make qsort_partition_list_value_cmp simply call\n> >> > partition_lbound_datum_cmp() instead of having its own logic to\n> >> > compare input bounds\n> >> > * Move partition_lbound_datum_cmp() into partbounds.c as a static\n> >> > function (export seems unnecessary)\n> >> > * Add a comment for PartitionBoundInfo.isnulls and remove that for\n> null_index\n> >>\n> >> Yes. You are right. The extra code added in partition_list_bsearch()\n> >> is not required and thanks for sharing the delta patch. It looks good\n> >> to me and I have incorporated the changes in the attached patch.\n> >>\n> >> > I guess you're perhaps trying to address the case where the caller\n> >> > does not specify the values for all of the partition key columns,\n> >> > which can happen when the partition pruning code needs to handle a set\n> >> > of clauses matching only some of the partition key columns. But\n> >> > that's a concern of the partition pruning code and so the special case\n> >> > should be handled there (if at all), not in the binary search function\n> >> > that is shared with other callers. Regarding that, I'm wondering if\n> >> > we should require clauses matching all of the partition key columns to\n> >> > be found for the pruning code to call the binary search, so do\n> >> > something like get_matching_hash_bounds() does:\n> >> >\n> >> > Do you think that trying to match list partitions even with fewer keys\n> >> > is worth the complexity of the implementation? 
That is, is the use\n> >> > case to search for only a subset of partition key columns common\n> >> > enough with list partitioning?\n> >> >\n> >> > If we do decide to implement the special case, remember that to do\n> >> > that efficiently, we'd need to require that the subset of matched key\n> >> > columns constitutes a prefix, because of the way the datums are\n> >> > sorted. That is, match all partitions when the query only contains a\n> >> > clause for b when the partition key is (a, b, c), but engage the\n> >> > special case of pruning if the query contains clauses for a, or for a\n> >> > and b.\n> >>\n> >> Thanks for the suggestion. Below is the implementation details for the\n> >> partition pruning for multi column list partitioning.\n> >>\n> >> In the existing code (For single column list partitioning)\n> >> 1. In gen_partprune_steps_internal(), we try to match the where\n> >> clauses provided by the user with the partition key data using\n> >> match_clause_to_partition_key(). Based on the match, this function can\n> >> return many values like PARTCLAUSE_MATCH_CLAUSE,\n> >> PARTCLAUSE_MATCH_NULLNESS, PARTCLAUSE_NOMATCH, etc.\n> >> 2. In case of PARTCLAUSE_MATCH_CLAUSE, we generate steps using\n> >> gen_prune_steps_from_opexps() (strategy-2) which generate and return a\n> >> list of PartitionPruneStepOp that are based on OpExpr and BooleanTest\n> >> clauses that have been matched to the partition key and it also takes\n> >> care handling prefix of the partition keys.\n> >> 3. In case of PARTCLAUSE_MATCH_NULLNESS, we generate steps using\n> >> gen_prune_step_op() (strategy-1) which generates single\n> >> PartitionPruneStepOp since the earlier list partitioning supports\n> >> single column and there can be only one NULL value. 
In\n> >> get_matching_list_bounds(), if the nullkeys is not empty, we fetch the\n> >> partition index which accepts null and we used to return from here.\n> >>\n> >> In case of multi column list partitioning, we have columns more than\n> >> one and hence there is a possibility of more than one NULL values in\n> >> the where clauses. The above mentioned steps are modified like below.\n> >>\n> >> 1. Modified the match_clause_to_partition_key() to generate an object\n> >> of PartClauseInfo structure and return PARTCLAUSE_MATCH_CLAUSE even in\n> >> case of clauses related to NULL. The information required to generate\n> >> PartClauseInfo is populated here like the constant expression\n> >> consisting of (Datum) 0, op_strategy, op_is_ne, etc.\n> >> 2. Since I am returning PARTCLAUSE_MATCH_CLAUSE, now we use strategy-2\n> >> (gen_prune_steps_from_opexps) to generate partition pruning steps.\n> >> This function takes care of generating a list of pruning steps if\n> >> there are multiple clauses and also takes care of handling prefixes.\n> >> 3. Modified perform_pruning_base_step() to generate the datum values\n> >> and isnulls data of the where clauses. In case if any of the key\n> >> contains NULL value then the corresponding datum value is 0.\n> >> 4. Modified get_matching_list_bounds() to generate the minimum offset\n> >> and/or maximum offset of the matched values based on the difference\n> >> operation strategies. Now since the NULL containing bound values are\n> >> part of 'boundinfo', changed the code accordingly to include the NULL\n> >> containing partitions or not in different scenarios like\n> >> InvalidStrategy, etc.\n> >>\n> >> I have done some cosmetic changes to\n> >> v1_multi_column_list_partitioning.patch. So all the above code changes\n> >> related to partition pruning are merged with the previous patch and\n> >> also included the delta patch shared by you. 
Hence sharing a single\n> >> patch.\n> >>\n> >> Kindly have a look and share your thoughts.\n> >>\n> >>\n> > Hi,\n> >\n> > bq. Supported new syantx to allow mentioning multiple key information.\n> >\n> > syantx -> syntax\n> >\n> > + isDuplicate = checkForDuplicates(result, values);\n> > + if (isDuplicate)\n> > + continue;\n> >\n> > It seems the variable isDuplicate is not needed. The if statement can\n> directly check the return value from checkForDuplicates().\n> >\n> > + //TODO: Handle for multi-column cases\n> > + for (j = 0; j < 1; j++)\n> >\n> > Is this part going to be updated in the next patch?\n> >\n> > Cheers\n>\n>\n>\n", "msg_date": "Mon, 30 Aug 2021 13:21:21 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Mon, Aug 30, 2021 at 4:51 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> Hi Nitin.\n>\n> I have been testing these patches. 
Patches applied cleanly on the head. While testing I found below a case where update row movement is not working properly.\n> Please find the test case below.\n>\n> postgres=# create table p0 (a int, b text, c bool) partition by list (a,b,c);\n> CREATE TABLE\n> postgres=# create table p01 partition of p0 for values in ((1,1,true));\n> CREATE TABLE\n> postgres=# create table p02 partition of p0 for values in ((1,NULL,false));\n> CREATE TABLE\n> postgres=# insert into p0 values (1,'1',true);\n> INSERT 0 1\n> postgres=# insert into p0 values (1,NULL,false);\n> INSERT 0 1\n> postgres=# select tableoid::regclass,* from p0;\n> tableoid | a | b | c\n> ----------+---+---+---\n> p01 | 1 | 1 | t\n> p02 | 1 | | f\n> (2 rows)\n>\n> postgres=# update p0 set b = NULL;\n> UPDATE 2\n> postgres=# select tableoid::regclass,* from p0;\n> tableoid | a | b | c\n> ----------+---+---+---\n> p01 | 1 | | t\n> p02 | 1 | | f\n> (2 rows)\n>\n> I think this update should fail as there is no partition satisfying update row (1,NULL,true).\n\nYeah, contrary to my earlier assessment, it seems the partition\nconstraint on each of those partitions fails to explicitly include an\nIS NOT NULL test for each column that has a non-NULL value assigned.\nSo, for example, the constraint of p01 should actually be:\n\n(a IS NOT NULL) AND (a = 1) AND (b IS NOT NULL) AND (b = 1) AND (c IS\nNOT NULL) AND (c 
= true)\n\nAs per the patch's current implementation, tuple (1, NULL, true)\npasses p01's partition constraint, because only (b = 1) is not\nsufficient to reject a NULL value being assigned to b.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Aug 2021 13:06:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> I have been testing these patches. Patches applied cleanly on the head. While testing I found below a case where update row movement is not working properly.\n> Please find the test case below.\n\nThanks for testing and sharing the details of the issue.\n\n> Yeah, contrary to my earlier assessment, it seems the partition\n> constraint on each of those partitions fails to explicitly include an\n> IS NOT NULL test for each column that has a non-NULL value assigned.\n> So, for example, the constraint of p01 should actually be:\n>\n> (a IS NOT NULL) AND (a = 1) AND (b IS NOT NULL) AND (b = 1) AND (c IS\n> NOT NULL) AND (c = true)\n\nYes. It should add an IS NOT NULL test for each column. I have\nmodified the patch accordingly and verified with the test case shared\nby Rajkumar.\n\n> + * isnulls is an array of boolean-tuples with key->partnatts booleans values\n> + * each. Currently only used for list partitioning, it stores whether a\n>\n> I think 'booleans' should be 'boolean'.\n> The trailing word 'each' is unnecessary.\n>\n> bq. Supported new syantx to allow mentioning multiple key information.\n>\n> syantx -> syntax\n>\n> + isDuplicate = checkForDuplicates(result, values);\n> + if (isDuplicate)\n> + continue;\n>\n> It seems the variable isDuplicate is not needed. 
The if statement can directly check the return value from checkForDuplicates().\n\nThe attached patch also fixes the above comments.\n\nThanks & Regards,\nNitin Jadhav\nOn Tue, Aug 31, 2021 at 9:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 4:51 PM Rajkumar Raghuwanshi\n> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n> >\n> > Hi Nitin.\n> >\n> > I have been testing these patches. Patches applied cleanly on the head. While testing I found below a case where update row movement is not working properly.\n> > Please find the test case below.\n> >\n> > postgres=# create table p0 (a int, b text, c bool) partition by list (a,b,c);\n> > CREATE TABLE\n> > postgres=# create table p01 partition of p0 for values in ((1,1,true));\n> > CREATE TABLE\n> > postgres=# create table p02 partition of p0 for values in ((1,NULL,false));\n> > CREATE TABLE\n> > postgres=# insert into p0 values (1,'1',true);\n> > INSERT 0 1\n> > postgres=# insert into p0 values (1,NULL,false);\n> > INSERT 0 1\n> > postgres=# select tableoid::regclass,* from p0;\n> > tableoid | a | b | c\n> > ----------+---+---+---\n> > p01 | 1 | 1 | t\n> > p02 | 1 | | f\n> > (2 rows)\n> >\n> > postgres=# update p0 set b = NULL;\n> > UPDATE 2\n> > postgres=# select tableoid::regclass,* from p0;\n> > tableoid | a | b | c\n> > ----------+---+---+---\n> > p01 | 1 | | t\n> > p02 | 1 | | f\n> > (2 rows)\n> >\n> > I think this update should fail as there is no partition satisfying update row (1,NULL,true).\n>\n> Yeah, contrary to my earlier assessment, it seems the partition\n> constraint on each of those partitions fails to explicitly include an\n> IS NOT NULL test for each column that has a non-NULL value assigned.\n> So, for example, the constraint of p01 should actually be:\n>\n> (a IS NOT NULL) AND (a = 1) AND (b IS NOT NULL) AND (b = 1) AND (c IS\n> NOT NULL) AND (c = true)\n>\n> As per the patch's current implementation, tuple (1, NULL, true)\n> passes p01's partition constraint, 
because only (b = 1) is not\n> sufficient to reject a NULL value being assigned to b.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com", "msg_date": "Tue, 31 Aug 2021 16:32:59 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi Nitin,\n\nOn Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> The attached patch also fixes the above comments.\n\nI noticed that multi-column list partitions containing NULLs don't\nwork correctly with partition pruning yet.\n\ncreate table p0 (a int, b text, c bool) partition by list (a, b, c);\ncreate table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\ncreate table p02 partition of p0 for values in ((1, NULL, false));\nexplain select * from p0 where a is null;\n QUERY PLAN\n--------------------------------------------------------\n Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n Filter: (a IS NULL)\n(2 rows)\n\nI guess that may be due to the following newly added code being incomplete:\n\n+/*\n+ * get_partition_bound_null_index\n+ *\n+ * Returns the partition index of the partition bound which accepts NULL.\n+ */\n+int\n+get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n+{\n+ int i = 0;\n+ int j = 0;\n+\n+ if (!boundinfo->isnulls)\n+ return -1;\n\n- if (!val->constisnull)\n- count++;\n+ for (i = 0; i < boundinfo->ndatums; i++)\n+ {\n+ //TODO: Handle for multi-column cases\n+ for (j = 0; j < 1; j++)\n+ {\n+ if (boundinfo->isnulls[i][j])\n+ return boundinfo->indexes[i];\n }\n }\n\n+ return -1;\n+}\n\nMaybe this function needs to return a \"bitmapset\" of indexes, because\nmultiple partitions can now contain NULL values.\n\nSome other issues I noticed and suggestions for improvement:\n\n+/*\n+ * checkForDuplicates\n+ *\n+ * Returns TRUE if the list bound element is already present in the list of\n+ * list bounds, FALSE otherwise.\n+ 
*/\n+static bool\n+checkForDuplicates(List *source, List *searchElem)\n\nThis function name may be too generic. Given that it is specific to\nimplementing list bound de-duplication, maybe the following signature\nis more appropriate:\n\nstatic bool\ncheckListBoundDuplicated(List *list_bounds, List *new_bound)\n\nAlso, better if the function comment mentions those parameter names, like:\n\n\"Returns TRUE if the list bound element 'new_bound' is already present\nin the target list 'list_bounds', FALSE otherwise.\"\n\n+/*\n+ * transformPartitionListBounds\n+ *\n+ * Converts the expressions of list partition bounds from the raw grammar\n+ * representation.\n\nA sentence about the result format would be helpful, like:\n\nThe result is a List of Lists of Const nodes to account for the\npartition key possibly containing more than one column.\n\n+ int i = 0;\n+ int j = 0;\n\nBetter to initialize such loop counters closer to the loop.\n\n+ colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n+ colname[i] = get_attname(RelationGetRelid(parent),\n+ key->partattrs[i], false);\n\nThe palloc in the 1st statement is wasteful, because the 2nd statement\noverwrites its pointer by the pointer to the string palloc'd by\nget_attname().\n\n+ ListCell *cell2 = NULL;\n\nNo need to explicitly initialize the loop variable.\n\n+ RowExpr *rowexpr = NULL;\n+\n+ if (!IsA(expr, RowExpr))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n+ errmsg(\"Invalid list bound specification\"),\n+ parser_errposition(pstate, exprLocation((Node\n*) spec))));\n+\n+ rowexpr = (RowExpr *) expr;\n\nIt's okay to assign rowexpr at the top here instead of the dummy\nNULL-initialization and write the condition as:\n\n if (!IsA(rowexpr, RowExpr))\n\n+ if (isDuplicate)\n+ continue;\n+\n+ result = lappend(result, values);\n\nI can see you copied this style from the existing code, but how about\nwriting this simply as:\n\n if (!isDuplicate)\n result = lappend(result, values);\n\n-/* One value 
coming from some (index'th) list partition */\n+/* One bound of a list partition */\n typedef struct PartitionListValue\n {\n int index;\n- Datum value;\n+ Datum *values;\n+ bool *isnulls;\n } PartitionListValue;\n\nGiven that this is a locally-defined struct, I wonder if it makes\nsense to rename the struct while we're at it. Call it, say,\nPartitionListBound?\n\nAlso, please keep part of the existing comment that says that the\nbound belongs to index'th partition.\n\nWill send more comments in a bit...\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Sep 2021 14:31:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > The attached patch also fixes the above comments.\n>\n> I noticed that multi-column list partitions containing NULLs don't\n> work correctly with partition pruning yet.\n>\n> create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n> create table p02 partition of p0 for values in ((1, NULL, false));\n> explain select * from p0 where a is null;\n> QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n> Filter: (a IS NULL)\n> (2 rows)\n>\n> I guess that may be due to the following newly added code being incomplete:\n>\n> +/*\n> + * get_partition_bound_null_index\n> + *\n> + * Returns the partition index of the partition bound which accepts NULL.\n> + */\n> +int\n> +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n> +{\n> + int i = 0;\n> + int j = 0;\n> +\n> + if (!boundinfo->isnulls)\n> + return -1;\n>\n> - if (!val->constisnull)\n> - count++;\n> + for (i = 0; i 
< boundinfo->ndatums; i++)\n> + {\n> + //TODO: Handle for multi-column cases\n> + for (j = 0; j < 1; j++)\n> + {\n> + if (boundinfo->isnulls[i][j])\n> + return boundinfo->indexes[i];\n> }\n> }\n>\n> + return -1;\n> +}\n>\n> Maybe this function needs to return a \"bitmapset\" of indexes, because\n> multiple partitions can now contain NULL values.\n>\n> Some other issues I noticed and suggestions for improvement:\n>\n> +/*\n> + * checkForDuplicates\n> + *\n> + * Returns TRUE if the list bound element is already present in the list of\n> + * list bounds, FALSE otherwise.\n> + */\n> +static bool\n> +checkForDuplicates(List *source, List *searchElem)\n>\n> This function name may be too generic. Given that it is specific to\n> implementing list bound de-duplication, maybe the following signature\n> is more appropriate:\n>\n> static bool\n> checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>\n> Also, better if the function comment mentions those parameter names, like:\n>\n> \"Returns TRUE if the list bound element 'new_bound' is already present\n> in the target list 'list_bounds', FALSE otherwise.\"\n>\n> +/*\n> + * transformPartitionListBounds\n> + *\n> + * Converts the expressions of list partition bounds from the raw grammar\n> + * representation.\n>\n> A sentence about the result format would be helpful, like:\n>\n> The result is a List of Lists of Const nodes to account for the\n> partition key possibly containing more than one column.\n>\n> + int i = 0;\n> + int j = 0;\n>\n> Better to initialize such loop counters closer to the loop.\n>\n> + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> + colname[i] = get_attname(RelationGetRelid(parent),\n> + key->partattrs[i], false);\n>\n> The palloc in the 1st statement is wasteful, because the 2nd statement\n> overwrites its pointer by the pointer to the string palloc'd by\n> get_attname().\n>\n> + ListCell *cell2 = NULL;\n>\n> No need to explicitly initialize the loop variable.\n>\n> + RowExpr 
*rowexpr = NULL;\n> +\n> + if (!IsA(expr, RowExpr))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> + errmsg(\"Invalid list bound specification\"),\n> + parser_errposition(pstate, exprLocation((Node\n> *) spec))));\n> +\n> + rowexpr = (RowExpr *) expr;\n>\n> It's okay to assign rowexpr at the top here instead of the dummy\n> NULL-initialization and write the condition as:\n>\n> if (!IsA(rowexpr, RowExpr))\n>\n> + if (isDuplicate)\n> + continue;\n> +\n> + result = lappend(result, values);\n>\n> I can see you copied this style from the existing code, but how about\n> writing this simply as:\n>\n> if (!isDuplicate)\n> result = lappend(result, values);\n>\n> -/* One value coming from some (index'th) list partition */\n> +/* One bound of a list partition */\n> typedef struct PartitionListValue\n> {\n> int index;\n> - Datum value;\n> + Datum *values;\n> + bool *isnulls;\n> } PartitionListValue;\n>\n> Given that this is a locally-defined struct, I wonder if it makes\n> sense to rename the struct while we're at it. Call it, say,\n> PartitionListBound?\n>\n> Also, please keep part of the existing comment that says that the\n> bound belongs to index'th partition.\n>\n> Will send more comments in a bit...\n\n+ * partition_bound_accepts_nulls\n+ *\n+ * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n */\n\nI suggest slight rewording, as follows:\n\n\"Returns TRUE if any of the partition bounds contains a NULL value,\nFALSE otherwise.\"\n\n- PartitionListValue *all_values;\n+ PartitionListValue **all_values;\n...\n- all_values = (PartitionListValue *)\n- palloc(ndatums * sizeof(PartitionListValue));\n+ ndatums = get_list_datum_count(boundspecs, nparts);\n+ all_values = (PartitionListValue **)\n+ palloc(ndatums * sizeof(PartitionListValue *));\n\nI don't see the need to redefine all_values's pointer type. 
No need\nto palloc PartitionListValue repeatedly for every datum as done\nfurther down as follows:\n\n+ all_values[j] = (PartitionListValue *)\npalloc(sizeof(PartitionListValue));\n\nYou do need the following two though:\n\n+ all_values[j]->values = (Datum *) palloc0(key->partnatts *\nsizeof(Datum));\n+ all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\nsizeof(bool));\n\nIf you change the above the way I suggest, you'd also need to revert\nthe following change:\n\n- qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n+ qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n qsort_partition_list_value_cmp, (void *) key);\n\n+ int orig_index = all_values[i]->index;\n+ boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n\nMissing a newline between these two statements.\n\nBTW, I noticed that the boundDatums variable is no longer used in\ncreate_list_bounds. I traced back its origin and found that a recent\ncommit 53d86957e98 introduced it to implement an idea to reduce the\nfiner-grained pallocs that were being done in create_list_bounds(). I\ndon't think that this patch needs to throw away that work. You can\nmake it work as the attached delta patch that applies on top of v3.\nPlease check.\n\n@@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n*parttyplen, bool *parttypbyval,\n if (b1->nindexes != b2->nindexes)\n return false;\n\n- if (b1->null_index != b2->null_index)\n+ if (get_partition_bound_null_index(b1) !=\nget_partition_bound_null_index(b2))\n\nAs mentioned in the last message, this bit in partition_bounds_equal()\nneeds to be comparing \"bitmapsets\" of null bound indexes, that is\nafter fixing get_partition_bound_null_index() as previously mentioned.\n\nBut...\n\n@@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n*parttyplen, bool *parttypbyval,\n * context. 
datumIsEqual() should be simple enough to be\n * safe.\n */\n- if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n+ if (b1->isnulls)\n+ b1_isnull = b1->isnulls[i][j];\n+ if (b2->isnulls)\n+ b2_isnull = b2->isnulls[i][j];\n+\n+ /*\n+ * If any of the partition bound has NULL value, then check\n+ * equality for the NULL value instead of comparing the datums\n+ * as it does not contain valid value in case of NULL.\n+ */\n+ if (b1_isnull || b2_isnull)\n+ {\n+ if (b1_isnull != b2_isnull)\n+ return false;\n+ }\n\n...if you have this in the main loop, I don't think we need the above\ncode stanza which appears to implement a short-cut for this long-form\nlogic.\n\n+ (key->strategy != PARTITION_STRATEGY_LIST ||\n+ !src->isnulls[i][j]))\n\nI think it's better to write this condition as follows just like the\naccompanying condition involving src->kind:\n\n (src->nulls == NULL || !src->isnulls[i][j])\n\n(Skipped looking at merge_list_bounds() and related changes for now as\nI see a lot of TODOs remain to be done.)\n\nIn check_new_partition_bound():\n\n+ Datum *values = (Datum *)\npalloc0(key->partnatts * sizeof(Datum));\n+ bool *isnulls = (bool *)\npalloc0(key->partnatts * sizeof(bool));\n\nDoesn't seem like a bad idea to declare these as:\n\n Datum values[PARTITION_MAX_KEYS];\n bool isnulls[PARTITION_MAX_KEYS];\n\n\nI looked at get_qual_for_list_multi_column() and immediately thought\nthat it may be a bad idea. I think it's better to integrate the logic\nfor multi-column case into the existing function even if that makes\nthe function appear more complex. Having two functions with the same\ngoal and mostly the same code is not a good idea mainly because it\nbecomes a maintenance burden.\n\nI have attempted a rewrite such that get_qual_for_list() now handles\nboth the single-column and multi-column cases. Changes included in\nthe delta patch. 
The patch updates some outputs of the newly added\ntests for multi-column list partitions, because the new code emits the\nIS NOT NULL tests a bit differently than\nget_qual_for_list_multi_column() would. Notably, the old approach\nwould emit IS NOT NULL for every non-NULL datum matched to a given\ncolumn, not just once for the column. However, the patch makes a few\nother tests fail, mainly because I had to fix\npartition_bound_accepts_nulls() to handle the multi-column case,\nthough didn't bother to update all callers of it to also handle the\nmulti-column case correctly. I guess that's a TODO you're going to\ndeal with at some point anyway. :)\n\nI still have more than half of v3 left to look at, so will continue\nlooking. In the meantime, please check the changes I suggested,\nincluding the delta patch, and let me know your thoughts.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Sep 2021 22:46:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On PG head + Nitin's v3 patch + Amit's Delta patch. 
Make check is failing\nwith below errors.\n\n--inherit.sql is failing with error :\"ERROR: negative bitmapset member not\nallowed\"\nupdate mlparted_tab mlp set c = 'xxx'\nfrom\n (select a from some_tab union all select a+1 from some_tab) ss (a)\nwhere (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\nERROR: negative bitmapset member not allowed\n\n--partition_join.sql is crashing with enable_partitionwise_join set to true.\nCREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\nCREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001',\n'0003');\nCREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004',\n'0006');\nCREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008',\n'0009');\nINSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\ngenerate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\nANALYZE plt1_adv;\nCREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\nCREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002',\n'0003');\nCREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004',\n'0006');\nCREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007',\n'0009');\nINSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\ngenerate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\nANALYZE plt2_adv;\n-- inner join\nEXPLAIN (COSTS OFF)\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON\n(t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\n\n--stack-trace\nCore was generated by `postgres: edb regression [local] EXPLAIN\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-222.el7.x86_64 
keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64\nlibgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64\nopenssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\nzlib-1.2.7-17.el7.x86_64\n(gdb) bt\n#0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n#1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n#2 0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10\n\"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe\n\"partbounds.c\", lineNumber=1957)\n at assert.c:69\n#3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0, part_index=-1)\nat partbounds.c:1957\n#4 0x00000000008919bd in merge_list_bounds (partnatts=1,\npartsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\ninner_rel=0x1922938, jointype=JOIN_INNER,\n outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\npartbounds.c:1529\n#5 0x00000000008910de in partition_bounds_merge (partnatts=1,\npartsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\ninner_rel=0x1922938, jointype=JOIN_INNER,\n outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\npartbounds.c:1223\n#6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0,\nrel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\nparent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n parts2=0x7fffd67751a8) at joinrels.c:1644\n#7 0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0,\nrel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\nparent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n at joinrels.c:1402\n#8 0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0,\nrel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0,\nrestrictlist=0x1ab3318)\n at joinrels.c:926\n#9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0,\nrel2=0x1922938) at joinrels.c:760\n#10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0,\nold_rel=0x19b37c0, 
other_rels_list=0x1ab2970, other_rels=0x1ab2990) at\njoinrels.c:312\n#11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0, level=2)\nat joinrels.c:123\n#12 0x000000000080c566 in standard_join_search (root=0x1a19ed0,\nlevels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n#13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0,\njoinlist=0x199d538) at allpaths.c:2951\n#14 0x000000000080816b in make_one_rel (root=0x1a19ed0, joinlist=0x199d538)\nat allpaths.c:228\n#15 0x000000000084491d in query_planner (root=0x1a19ed0,\nqp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at\nplanmain.c:276\n#16 0x0000000000847040 in grouping_planner (root=0x1a19ed0,\ntuple_fraction=0) at planner.c:1447\n#17 0x0000000000846709 in subquery_planner (glob=0x19b39d8,\nparse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\nplanner.c:1025\n#18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\nt2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\nt2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0)\nat planner.c:406\n#19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\nt2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\nt2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0)\nat planner.c:277\n#20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\nt2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\nt2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0)\nat postgres.c:847\n#21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290,\ncursorOptions=2048, into=0x0, es=0x19b36f0,\n queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\nt2.c FROM plt1_adv t1 
INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\nt2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n params=0x0, queryEnv=0x0) at explain.c:397\n#22 0x0000000000693351 in ExplainQuery (pstate=0x197c410, stmt=0x1aaa0b0,\nparams=0x0, dest=0x197c378) at explain.c:281\n#23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\nt2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\nt2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\nqueryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n#24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\nt2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\nt2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\nqueryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n#25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40,\npstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378,\nqc=0x7fffd6775f90) at pquery.c:1147\n#26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40,\nisTopLevel=true) at pquery.c:1026\n#27 0x000000000097ed11 in PortalRun (portal=0x1893b40,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8,\naltdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n#28 0x0000000000978aa5 in exec_simple_query (\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\nOn Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > > The attached patch also fixes the above comments.\n> >\n> > I noticed that multi-column list partitions containing NULLs don't\n> > work correctly with 
partition pruning yet.\n> >\n> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1,\n> false));\n> > create table p02 partition of p0 for values in ((1, NULL, false));\n> > explain select * from p0 where a is null;\n> > QUERY PLAN\n> > --------------------------------------------------------\n> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n> > Filter: (a IS NULL)\n> > (2 rows)\n> >\n> > I guess that may be due to the following newly added code being\n> incomplete:\n> >\n> > +/*\n> > + * get_partition_bound_null_index\n> > + *\n> > + * Returns the partition index of the partition bound which accepts\n> NULL.\n> > + */\n> > +int\n> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n> > +{\n> > + int i = 0;\n> > + int j = 0;\n> > +\n> > + if (!boundinfo->isnulls)\n> > + return -1;\n> >\n> > - if (!val->constisnull)\n> > - count++;\n> > + for (i = 0; i < boundinfo->ndatums; i++)\n> > + {\n> > + //TODO: Handle for multi-column cases\n> > + for (j = 0; j < 1; j++)\n> > + {\n> > + if (boundinfo->isnulls[i][j])\n> > + return boundinfo->indexes[i];\n> > }\n> > }\n> >\n> > + return -1;\n> > +}\n> >\n> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n> > multiple partitions can now contain NULL values.\n> >\n> > Some other issues I noticed and suggestions for improvement:\n> >\n> > +/*\n> > + * checkForDuplicates\n> > + *\n> > + * Returns TRUE if the list bound element is already present in the\n> list of\n> > + * list bounds, FALSE otherwise.\n> > + */\n> > +static bool\n> > +checkForDuplicates(List *source, List *searchElem)\n> >\n> > This function name may be too generic. 
Given that it is specific to\n> > implementing list bound de-duplication, maybe the following signature\n> > is more appropriate:\n> >\n> > static bool\n> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n> >\n> > Also, better if the function comment mentions those parameter names,\n> like:\n> >\n> > \"Returns TRUE if the list bound element 'new_bound' is already present\n> > in the target list 'list_bounds', FALSE otherwise.\"\n> >\n> > +/*\n> > + * transformPartitionListBounds\n> > + *\n> > + * Converts the expressions of list partition bounds from the raw\n> grammar\n> > + * representation.\n> >\n> > A sentence about the result format would be helpful, like:\n> >\n> > The result is a List of Lists of Const nodes to account for the\n> > partition key possibly containing more than one column.\n> >\n> > + int i = 0;\n> > + int j = 0;\n> >\n> > Better to initialize such loop counters closer to the loop.\n> >\n> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> > + colname[i] = get_attname(RelationGetRelid(parent),\n> > + key->partattrs[i], false);\n> >\n> > The palloc in the 1st statement is wasteful, because the 2nd statement\n> > overwrites its pointer by the pointer to the string palloc'd by\n> > get_attname().\n> >\n> > + ListCell *cell2 = NULL;\n> >\n> > No need to explicitly initialize the loop variable.\n> >\n> > + RowExpr *rowexpr = NULL;\n> > +\n> > + if (!IsA(expr, RowExpr))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> > + errmsg(\"Invalid list bound specification\"),\n> > + parser_errposition(pstate, exprLocation((Node\n> > *) spec))));\n> > +\n> > + rowexpr = (RowExpr *) expr;\n> >\n> > It's okay to assign rowexpr at the top here instead of the dummy\n> > NULL-initialization and write the condition as:\n> >\n> > if (!IsA(rowexpr, RowExpr))\n> >\n> > + if (isDuplicate)\n> > + continue;\n> > +\n> > + result = lappend(result, values);\n> >\n> > I can see you copied this style from the existing 
code, but how about\n> > writing this simply as:\n> >\n> > if (!isDuplicate)\n> > result = lappend(result, values);\n> >\n> > -/* One value coming from some (index'th) list partition */\n> > +/* One bound of a list partition */\n> > typedef struct PartitionListValue\n> > {\n> > int index;\n> > - Datum value;\n> > + Datum *values;\n> > + bool *isnulls;\n> > } PartitionListValue;\n> >\n> > Given that this is a locally-defined struct, I wonder if it makes\n> > sense to rename the struct while we're at it. Call it, say,\n> > PartitionListBound?\n> >\n> > Also, please keep part of the existing comment that says that the\n> > bound belongs to index'th partition.\n> >\n> > Will send more comments in a bit...\n>\n> + * partition_bound_accepts_nulls\n> + *\n> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n> */\n>\n> I suggest slight rewording, as follows:\n>\n> \"Returns TRUE if any of the partition bounds contains a NULL value,\n> FALSE otherwise.\"\n>\n> - PartitionListValue *all_values;\n> + PartitionListValue **all_values;\n> ...\n> - all_values = (PartitionListValue *)\n> - palloc(ndatums * sizeof(PartitionListValue));\n> + ndatums = get_list_datum_count(boundspecs, nparts);\n> + all_values = (PartitionListValue **)\n> + palloc(ndatums * sizeof(PartitionListValue *));\n>\n> I don't see the need to redefine all_values's pointer type. 
No need\n> to palloc PartitionListValue repeatedly for every datum as done\n> further down as follows:\n>\n> + all_values[j] = (PartitionListValue *)\n> palloc(sizeof(PartitionListValue));\n>\n> You do need the following two though:\n>\n> + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n> sizeof(Datum));\n> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n> sizeof(bool));\n>\n> If you change the above the way I suggest, you'd also need to revert\n> the following change:\n>\n> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n> qsort_partition_list_value_cmp, (void *) key);\n>\n> + int orig_index = all_values[i]->index;\n> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n> sizeof(Datum));\n>\n> Missing a newline between these two statements.\n>\n> BTW, I noticed that the boundDatums variable is no longer used in\n> create_list_bounds. I traced back its origin and found that a recent\n> commit 53d86957e98 introduced it to implement an idea to reduce the\n> finer-grained pallocs that were being done in create_list_bounds(). I\n> don't think that this patch needs to throw away that work. You can\n> make it work as the attached delta patch that applies on top of v3.\n> Please check.\n>\n> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n> if (b1->nindexes != b2->nindexes)\n> return false;\n>\n> - if (b1->null_index != b2->null_index)\n> + if (get_partition_bound_null_index(b1) !=\n> get_partition_bound_null_index(b2))\n>\n> As mentioned in the last message, this bit in partition_bounds_equal()\n> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n> after fixing get_partition_bound_null_index() as previously mentioned.\n>\n> But...\n>\n> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n> * context. 
datumIsEqual() should be simple enough to be\n> * safe.\n> */\n> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> + if (b1->isnulls)\n> + b1_isnull = b1->isnulls[i][j];\n> + if (b2->isnulls)\n> + b2_isnull = b2->isnulls[i][j];\n> +\n> + /*\n> + * If any of the partition bound has NULL value, then check\n> + * equality for the NULL value instead of comparing the\n> datums\n> + * as it does not contain valid value in case of NULL.\n> + */\n> + if (b1_isnull || b2_isnull)\n> + {\n> + if (b1_isnull != b2_isnull)\n> + return false;\n> + }\n>\n> ...if you have this in the main loop, I don't think we need the above\n> code stanza which appears to implement a short-cut for this long-form\n> logic.\n>\n> + (key->strategy != PARTITION_STRATEGY_LIST ||\n> + !src->isnulls[i][j]))\n>\n> I think it's better to write this condition as follows just like the\n> accompanying condition involving src->kind:\n>\n> (src->nulls == NULL || !src->isnulls[i][j])\n>\n> (Skipped looking at merge_list_bounds() and related changes for now as\n> I see a lot of TODOs remain to be done.)\n>\n> In check_new_partition_bound():\n>\n> + Datum *values = (Datum *)\n> palloc0(key->partnatts * sizeof(Datum));\n> + bool *isnulls = (bool *)\n> palloc0(key->partnatts * sizeof(bool));\n>\n> Doesn't seem like a bad idea to declare these as:\n>\n> Datum values[PARTITION_MAX_KEYS];\n> bool isnulls[PARTITION_MAX_KEYS];\n>\n>\n> I looked at get_qual_for_list_multi_column() and immediately thought\n> that it may be a bad idea. I think it's better to integrate the logic\n> for multi-column case into the existing function even if that makes\n> the function appear more complex. Having two functions with the same\n> goal and mostly the same code is not a good idea mainly because it\n> becomes a maintenance burden.\n>\n> I have attempted a rewrite such that get_qual_for_list() now handles\n> both the single-column and multi-column cases. Changes included in\n> the delta patch. 
The patch updates some outputs of the newly added\n> tests for multi-column list partitions, because the new code emits the\n> IS NOT NULL tests a bit differently than\n> get_qual_for_list_mutli_column() would. Notably, the old approach\n> would emit IS NOT NULL for every non-NULL datum matched to a given\n> column, not just once for the column. However, the patch makes a few\n> other tests fail, mainly because I had to fix\n> partition_bound_accepts_nulls() to handle the multi-column case,\n> though didn't bother to update all callers of it to also handle the\n> multi-column case correctly. I guess that's a TODO you're going to\n> deal with at some point anyway. :)\n>\n> I still have more than half of v3 left to look at, so will continue\n> looking. In the meantime, please check the changes I suggested,\n> including the delta patch, and let me know your thoughts.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n\nOn PG head + Nitin's v3 patch + Amit's Delta patch.  Make check is failing with below errors.--inherit.sql is failing with error :\"ERROR:  negative bitmapset member not allowed\"  update mlparted_tab mlp set c = 'xxx'from  (select a from some_tab union all select a+1 from some_tab) ss (a)where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;ERROR:  negative bitmapset member not allowed--partition_join.sql is crashing with enable_partitionwise_join set to true.CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001', '0003');CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004', '0006');CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008', '0009');INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);ANALYZE plt1_adv;CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002', 
'0003');CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004', '0006');CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007', '0009');INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);ANALYZE plt2_adv;-- inner joinEXPLAIN (COSTS OFF)SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;server closed the connection unexpectedly        This probably means the server terminated abnormally        before or while processing the request.connection to server was lost--stack-traceCore was generated by `postgres: edb regression [local] EXPLAIN                                      '.Program terminated with signal 6, Aborted.#0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6Missing separate debuginfos, use: debuginfo-install glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64(gdb) bt#0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6#1  0x00007f7d339bb968 in abort () from /lib64/libc.so.6#2  0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)    at assert.c:69#3  0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0, part_index=-1) at partbounds.c:1957#4  0x00000000008919bd in merge_list_bounds (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1529#5  0x00000000008910de in partition_bounds_merge (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, 
jointype=JOIN_INNER,     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1223#6  0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,     parts2=0x7fffd67751a8) at joinrels.c:1644#7  0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)    at joinrels.c:1402#8  0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)    at joinrels.c:926#9  0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938) at joinrels.c:760#10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0, old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at joinrels.c:312#11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0, level=2) at joinrels.c:123#12 0x000000000080c566 in standard_join_search (root=0x1a19ed0, levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020#13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:2951#14 0x000000000080816b in make_one_rel (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:228#15 0x000000000084491d in query_planner (root=0x1a19ed0, qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at planmain.c:276#16 0x0000000000847040 in grouping_planner (root=0x1a19ed0, tuple_fraction=0) at planner.c:1447#17 0x0000000000846709 in subquery_planner (glob=0x19b39d8, parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1025#18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 
ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:406#19 0x0000000000844ce9 in planner (parse=0x1aaa290,     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:277#20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at postgres.c:847#21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290, cursorOptions=2048, into=0x0, es=0x19b36f0,     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",     params=0x0, queryEnv=0x0) at explain.c:397#22 0x0000000000693351 in ExplainQuery (pstate=0x197c410, stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281#23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845#24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527#25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40, pstmt=0x1a0bfc8, isTopLevel=true, 
setHoldSnapshot=true, dest=0x197c378, qc=0x7fffd6775f90) at pquery.c:1147#26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40, isTopLevel=true) at pquery.c:1026#27 0x000000000097ed11 in PortalRun (portal=0x1893b40, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8, altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758#28 0x0000000000978aa5 in exec_simple_query (Thanks & Regards,Rajkumar RaghuwanshiOn Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > The attached patch also fixes the above comments.\n>\n> I noticed that multi-column list partitions containing NULLs don't\n> work correctly with partition pruning yet.\n>\n> create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n> create table p02 partition of p0 for values in ((1, NULL, false));\n> explain select * from p0 where a is null;\n>                        QUERY PLAN\n> --------------------------------------------------------\n>  Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)\n>    Filter: (a IS NULL)\n> (2 rows)\n>\n> I guess that may be due to the following newly added code being incomplete:\n>\n> +/*\n> + * get_partition_bound_null_index\n> + *\n> + * Returns the partition index of the partition bound which accepts NULL.\n> + */\n> +int\n> +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n> +{\n> +   int i = 0;\n> +   int j = 0;\n> +\n> +   if (!boundinfo->isnulls)\n> +       return -1;\n>\n> -           if (!val->constisnull)\n> -               count++;\n> +   for (i = 0; i < boundinfo->ndatums; i++)\n> +   {\n> +       //TODO: Handle for multi-column cases\n> +       for (j = 0; j < 1; j++)\n> +       {\n> +           if (boundinfo->isnulls[i][j])\n> +              
 return boundinfo->indexes[i];\n>         }\n>     }\n>\n> +   return -1;\n> +}\n>\n> Maybe this function needs to return a \"bitmapset\" of indexes, because\n> multiple partitions can now contain NULL values.\n>\n> Some other issues I noticed and suggestions for improvement:\n>\n> +/*\n> + * checkForDuplicates\n> + *\n> + * Returns TRUE if the list bound element is already present in the list of\n> + * list bounds, FALSE otherwise.\n> + */\n> +static bool\n> +checkForDuplicates(List *source, List *searchElem)\n>\n> This function name may be too generic.  Given that it is specific to\n> implementing list bound de-duplication, maybe the following signature\n> is more appropriate:\n>\n> static bool\n> checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>\n> Also, better if the function comment mentions those parameter names, like:\n>\n> \"Returns TRUE if the list bound element 'new_bound' is already present\n> in the target list 'list_bounds', FALSE otherwise.\"\n>\n> +/*\n> + * transformPartitionListBounds\n> + *\n> + * Converts the expressions of list partition bounds from the raw grammar\n> + * representation.\n>\n> A sentence about the result format would be helpful, like:\n>\n> The result is a List of Lists of Const nodes to account for the\n> partition key possibly containing more than one column.\n>\n> +   int             i = 0;\n> +   int             j = 0;\n>\n> Better to initialize such loop counters closer to the loop.\n>\n> +           colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> +           colname[i] = get_attname(RelationGetRelid(parent),\n> +                                    key->partattrs[i], false);\n>\n> The palloc in the 1st statement is wasteful, because the 2nd statement\n> overwrites its pointer by the pointer to the string palloc'd by\n> get_attname().\n>\n> +           ListCell   *cell2 = NULL;\n>\n> No need to explicitly initialize the loop variable.\n>\n> +           RowExpr     *rowexpr = NULL;\n> +\n> +        
   if (!IsA(expr, RowExpr))\n> +               ereport(ERROR,\n> +                       (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +                       errmsg(\"Invalid list bound specification\"),\n> +                       parser_errposition(pstate, exprLocation((Node\n> *) spec))));\n> +\n> +           rowexpr = (RowExpr *) expr;\n>\n> It's okay to assign rowexpr at the top here instead of the dummy\n> NULL-initialization and write the condition as:\n>\n>     if (!IsA(rowexpr, RowExpr))\n>\n> +       if (isDuplicate)\n> +           continue;\n> +\n> +       result = lappend(result, values);\n>\n> I can see you copied this style from the existing code, but how about\n> writing this simply as:\n>\n>     if (!isDuplicate)\n>         result = lappend(result, values);\n>\n> -/* One value coming from some (index'th) list partition */\n> +/* One bound of a list partition */\n>  typedef struct PartitionListValue\n>  {\n>     int         index;\n> -   Datum       value;\n> +   Datum      *values;\n> +   bool       *isnulls;\n>  } PartitionListValue;\n>\n> Given that this is a locally-defined struct, I wonder if it makes\n> sense to rename the struct while we're at it.  
Call it, say,\n> PartitionListBound?\n>\n> Also, please keep part of the existing comment that says that the\n> bound belongs to index'th partition.\n>\n> Will send more comments in a bit...\n\n+ * partition_bound_accepts_nulls\n+ *\n+ * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n  */\n\nI suggest slight rewording, as follows:\n\n\"Returns TRUE if any of the partition bounds contains a NULL value,\nFALSE otherwise.\"\n\n-   PartitionListValue *all_values;\n+   PartitionListValue **all_values;\n...\n-   all_values = (PartitionListValue *)\n-       palloc(ndatums * sizeof(PartitionListValue));\n+   ndatums = get_list_datum_count(boundspecs, nparts);\n+   all_values = (PartitionListValue **)\n+       palloc(ndatums * sizeof(PartitionListValue *));\n\nI don't see the need to redefine all_values's pointer type.  No need\nto palloc PartitionListValue repeatedly for every datum as done\nfurther down as follows:\n\n+           all_values[j] = (PartitionListValue *)\npalloc(sizeof(PartitionListValue));\n\nYou do need the following two though:\n\n+           all_values[j]->values = (Datum *) palloc0(key->partnatts *\nsizeof(Datum));\n+           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\nsizeof(bool));\n\nIf you change the above the way I suggest, you'd also need to revert\nthe following change:\n\n-   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n+   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n              qsort_partition_list_value_cmp, (void *) key);\n\n+       int         orig_index = all_values[i]->index;\n+       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n\nMissing a newline between these two statements.\n\nBTW, I noticed that the boundDatums variable is no longer used in\ncreate_list_bounds.  
I traced back its origin and found that a recent\ncommit 53d86957e98 introduced it to implement an idea to reduce the\nfiner-grained pallocs that were being done in create_list_bounds().  I\ndon't think that this patch needs to throw away that work.  You can\nmake it work as the attached delta patch that applies on top of v3.\nPlease check.\n\n@@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n*parttyplen, bool *parttypbyval,\n    if (b1->nindexes != b2->nindexes)\n        return false;\n\n-   if (b1->null_index != b2->null_index)\n+   if (get_partition_bound_null_index(b1) !=\nget_partition_bound_null_index(b2))\n\nAs mentioned in the last message, this bit in partition_bounds_equal()\nneeds to be comparing \"bitmapsets\" of null bound indexes, that is\nafter fixing get_partition_bound_null_index() as previously mentioned.\n\nBut...\n\n@@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n*parttyplen, bool *parttypbyval,\n                 * context.  datumIsEqual() should be simple enough to be\n                 * safe.\n                 */\n-               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n+               if (b1->isnulls)\n+                   b1_isnull = b1->isnulls[i][j];\n+               if (b2->isnulls)\n+                   b2_isnull = b2->isnulls[i][j];\n+\n+               /*\n+                * If any of the partition bound has NULL value, then check\n+                * equality for the NULL value instead of comparing the datums\n+                * as it does not contain valid value in case of NULL.\n+                */\n+               if (b1_isnull || b2_isnull)\n+               {\n+                   if (b1_isnull != b2_isnull)\n+                       return false;\n+               }\n\n...if you have this in the main loop, I don't think we need the above\ncode stanza which appears to implement a short-cut for this long-form\nlogic.\n\n+               (key->strategy != PARTITION_STRATEGY_LIST ||\n+          
      !src->isnulls[i][j]))\n\nI think it's better to write this condition as follows just like the\naccompanying condition involving src->kind:\n\n    (src->nulls == NULL || !src->isnulls[i][j])\n\n(Skipped looking at merge_list_bounds() and related changes for now as\nI see a lot of TODOs remain to be done.)\n\nIn check_new_partition_bound():\n\n+                       Datum      *values = (Datum *)\npalloc0(key->partnatts * sizeof(Datum));\n+                       bool       *isnulls = (bool *)\npalloc0(key->partnatts * sizeof(bool));\n\nDoesn't seem like a bad idea to declare these as:\n\n    Datum    values[PARTITION_MAX_KEYS];\n    bool        isnulls[PARTITION_MAX_KEYS];\n\n\nI looked at get_qual_for_list_multi_column() and immediately thought\nthat it may be a bad idea.  I think it's better to integrate the logic\nfor multi-column case into the existing function even if that makes\nthe function appear more complex.  Having two functions with the same\ngoal and mostly the same code is not a good idea mainly because it\nbecomes a maintenance burden.\n\nI have attempted a rewrite such that get_qual_for_list() now handles\nboth the single-column and multi-column cases.  Changes included in\nthe delta patch.  The patch updates some outputs of the newly added\ntests for multi-column list partitions, because the new code emits the\nIS NOT NULL tests a bit differently than\nget_qual_for_list_mutli_column() would.  Notably, the old approach\nwould emit IS NOT NULL for every non-NULL datum matched to a given\ncolumn, not just once for the column.  However, the patch makes a few\nother tests fail, mainly because I had to fix\npartition_bound_accepts_nulls() to handle the multi-column case,\nthough didn't bother to update all callers of it to also handle the\nmulti-column case correctly.  I guess that's a TODO you're going to\ndeal with at some point anyway. :)\n\nI still have more than half of v3 left to look at, so will continue\nlooking.   
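By the way, to spell out the long-form NULL comparison I'm suggesting
for the main loop of partition_bounds_equal(), here is a minimal
standalone sketch (plain C; ints stand in for Datums and for
datumIsEqual(), so this is only an illustration, not the actual code):

```c
#include <stdbool.h>

/*
 * Standalone sketch of the per-column comparison discussed above: two
 * multi-column list bounds match only if, for every key column, either
 * both sides are NULL or both are non-NULL and their values are equal.
 * Plain ints stand in for Datums and for datumIsEqual() here.
 */
static bool
list_bounds_match(int partnatts,
				  const int *vals1, const bool *isnull1,
				  const int *vals2, const bool *isnull2)
{
	int			j;

	for (j = 0; j < partnatts; j++)
	{
		if (isnull1[j] || isnull2[j])
		{
			/* NULL on only one side means the bounds differ */
			if (isnull1[j] != isnull2[j])
				return false;
			/* both NULL: the stored datums are not meaningful, skip them */
			continue;
		}
		if (vals1[j] != vals2[j])
			return false;
	}
	return true;
}
```

The point is simply that a NULL on one side can never match a non-NULL
value on the other, and that the datum stored for a NULL column must be
ignored rather than compared.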
In the meantime, please check the changes I suggested,\nincluding the delta patch, and let me know your thoughts.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Sep 2021 15:47:36 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 13, 2021 at 7:17 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n> On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is failing with below errors.\n\nThanks Rajkumar for testing.\n\n> --inherit.sql is failing with error :\"ERROR: negative bitmapset member not allowed\"\n> update mlparted_tab mlp set c = 'xxx'\n> from\n> (select a from some_tab union all select a+1 from some_tab) ss (a)\n> where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> ERROR: negative bitmapset member not allowed\n>\n> --partition_join.sql is crashing with enable_partitionwise_join set to true.\n\nHere's a v2 of the delta patch that should fix both of these test\nfailures. As I mentioned in my last reply, my delta patch fixed what\nI think were problems in Nitin's v3 patch but were not complete by\nthemselves. 
Especially, I hadn't bothered to investigate various /*\nTODO: handle multi-column list partitioning */ sites to deal with my\nown changes.\n\nIn the attached updated version, I've dealt with some of those such\nthat at least the existing cases exercising partition pruning and\npartition wise joins now pass.\n\nI thought about sending a v4 of the main patch with my proposed\nchanges so far integrated, but decided to just post a delta_v2 for\nnow.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 1 Oct 2021 22:55:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Fri, Oct 1, 2021 at 6:56 AM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi,\n>\n> On Mon, Sep 13, 2021 at 7:17 PM Rajkumar Raghuwanshi\n> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n> > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n> failing with below errors.\n>\n> Thanks Rajkumar for testing.\n>\n> > --inherit.sql is failing with error :\"ERROR: negative bitmapset member\n> not allowed\"\n> > update mlparted_tab mlp set c = 'xxx'\n> > from\n> > (select a from some_tab union all select a+1 from some_tab) ss (a)\n> > where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> > ERROR: negative bitmapset member not allowed\n> >\n> > --partition_join.sql is crashing with enable_partitionwise_join set to\n> true.\n>\n> Here's a v2 of the delta patch that should fix both of these test\n> failures. As I mentioned in my last reply, my delta patch fixed what\n> I think were problems in Nitin's v3 patch but were not complete by\n> themselves. 
Especially, I hadn't bothered to investigate various /*
> TODO: handle multi-column list partitioning */ sites to deal with my
> own changes.
>
> In the attached updated version, I've dealt with some of those such
> that at least the existing cases exercising partition pruning and
> partition wise joins now pass.
>
> I thought about sending a v4 of the main patch with my proposed
> changes so far integrated, but decided to just post a delta_v2 for
> now.
>
> --
> Amit Langote
> EDB: http://www.enterprisedb.com

Hi,

+ for (i = 0; i < partnatts; i++)
+ {
+ if (outer_isnull[i])
+ outer_has_null = true;

We can come out of the loop once outer_has_null is assigned.
Similar comment for the loop inside if (inner_isnull).

+ * For the multi-column case, we must make an BoolExpr that

an BoolExpr -> a BoolExpr

Cheers", "msg_date": "Fri, 1 Oct 2021 08:09:07 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is failing with below errors.
>
> Thanks Rajkumar for testing.
>
> Here's a v2 of the delta patch that should fix both of these test
> failures. As I mentioned in my last reply, my delta patch fixed what
> I think were problems in Nitin's v3 patch but were not complete by
> themselves. Especially, I hadn't bothered to investigate various /*
> TODO: handle multi-column list partitioning */ sites to deal with my
> own changes.

Thanks Rajkumar for testing and Thank you Amit for working on v2 of
the delta patch. Actually I had done the code changes related to
partition-wise join and I was in the middle of fixing the review
comments, So I could not share the patch. 
Anyway, thanks for your
efforts.

> I noticed that multi-column list partitions containing NULLs don't
> work correctly with partition pruning yet.
>
> create table p0 (a int, b text, c bool) partition by list (a, b, c);
> create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));
> create table p02 partition of p0 for values in ((1, NULL, false));
> explain select * from p0 where a is null;
> QUERY PLAN
> --------------------------------------------------------
> Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)
> Filter: (a IS NULL)
> (2 rows)
>
> In the attached updated version, I've dealt with some of those such
> that at least the existing cases exercising partition pruning and
> partition wise joins now pass.

Regarding partition pruning, I have checked the output of the above case
both with and without the v2 version of the delta patch. The output
remains the same. Kindly let me know if I am missing something. But I feel
the above output is correct, as the partition p01 is the only partition
which contains a NULL value for column a, hence it is showing \"Seq scan
on p01\" in the output. Kindly correct me if I am wrong. I feel the
code changes related to 'null_keys' are not required, hence I have not
incorporated them in the attached patch.

Regarding partition-wise join, I ran the regression test (with the new
cases related to partition-wise join) on v2 of the delta patch and observed
the crash. Hence I have not incorporated the partition-wise join
related code from v2 of the delta patch into the main v4 patch. Instead I have
added the partition-wise join related code done by me in the attached
patch. Please share your thoughts and, if possible, we can improve the
code further. 
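Coming back to the pruning example quoted above, here is a small
standalone sketch of the behaviour I expect for a clause like
'a IS NULL' (plain C; an int array stands in for the planner's
Bitmapset and plain arrays stand in for PartitionBoundInfo, so this is
only an illustration of my understanding, not the actual code):

```c
#include <stdbool.h>

/*
 * Standalone sketch of the lookup expected when pruning with a clause
 * like 'col IS NULL' on multi-column list bounds: every partition that
 * has at least one bound with NULL in that key column must be selected.
 * A plain array of partition indexes stands in for the Bitmapset used
 * by the real planner code; returns the number of partitions selected.
 */
static int
partitions_accepting_null(int ndatums, const bool *const *isnulls,
						  const int *part_indexes, int keycol,
						  int *result)
{
	int			nresult = 0;
	int			i;

	for (i = 0; i < ndatums; i++)
	{
		if (isnulls[i][keycol])
		{
			bool		seen = false;
			int			k;

			/* do not report the same partition twice */
			for (k = 0; k < nresult; k++)
				if (result[k] == part_indexes[i])
					seen = true;
			if (!seen)
				result[nresult++] = part_indexes[i];
		}
	}
	return nresult;
}
```

With the bounds from the example above, 'a IS NULL' should select only
p01 and 'b IS NULL' should select only p02, which matches the plan
shown in the quoted output.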
Rest of the changes looks good to me and I have incorporated\nthat in the attached patch.\n\n\n> I guess that may be due to the following newly added code being incomplete:\n> Maybe this function needs to return a \"bitmapset\" of indexes, because\n> multiple partitions can now contain NULL values.\n\nI feel this function is not required at all as we are not separating\nthe non null and null partitions now. Removed in the attached patch.\nAlso removed the \"scan_null' variable from the structure\n\"PruneStepResult\" and cleaned up the corresponding code blocks.\n\n\n> This function name may be too generic. Given that it is specific to\n> implementing list bound de-duplication, maybe the following signature\n> is more appropriate:\n>\n> static bool\n> checkListBoundDuplicated(List *list_bounds, List *new_bound)\n\nYes. The function name looks more generic. How about using\n\"isListBoundDuplicated()\"? I have used this name in the patch. Please\nlet me know if that does not look correct.\n\n\n> Also, better if the function comment mentions those parameter names, like:\n>\n> \"Returns TRUE if the list bound element 'new_bound' is already present\n> in the target list 'list_bounds', FALSE otherwise.\"\n\nFixed.\n\n\n> +/*\n> + * transformPartitionListBounds\n> + *\n> + * Converts the expressions of list partition bounds from the raw grammar\n> + * representation.\n>\n> A sentence about the result format would be helpful, like:\n>\n> The result is a List of Lists of Const nodes to account for the\n> partition key possibly containing more than one column.\n\nFixed.\n\n\n> + int i = 0;\n> + int j = 0;\n>\n> Better to initialize such loop counters closer to the loop.\n\nFixed in all the places.\n\n\n> + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> + colname[i] = get_attname(RelationGetRelid(parent),\n> + key->partattrs[i], false);\n>\n> The palloc in the 1st statement is wasteful, because the 2nd statement\n> overwrites its pointer by the pointer to the 
string palloc'd by\n> get_attname().\n\nRemoved the 1st statement as it is not required.\n\n\n> + ListCell *cell2 = NULL;\n>\n> No need to explicitly initialize the loop variable.\n\nFixed in all the places.\n\n\n> + RowExpr *rowexpr = NULL;\n> +\n> + if (!IsA(expr, RowExpr))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> + errmsg(\"Invalid list bound specification\"),\n> + parser_errposition(pstate, exprLocation((Node\n> *) spec))));\n> +\n> + rowexpr = (RowExpr *) expr;\n>\n> It's okay to assign rowexpr at the top here instead of the dummy\n> NULL-initialization and write the condition as:\n>\n> if (!IsA(rowexpr, RowExpr))\n\nFixed.\n\n\n> + if (isDuplicate)\n> + continue;\n> +\n> + result = lappend(result, values);\n>\n> I can see you copied this style from the existing code, but how about\n> writing this simply as:\n>\n> if (!isDuplicate)\n> result = lappend(result, values);\n\nThis looks good. I have changed in the patch.\n\n\n> -/* One value coming from some (index'th) list partition */\n> +/* One bound of a list partition */\n> typedef struct PartitionListValue\n> {\n> int index;\n> - Datum value;\n> + Datum *values;\n> + bool *isnulls;\n> } PartitionListValue;\n>\n> Given that this is a locally-defined struct, I wonder if it makes\n> sense to rename the struct while we're at it. Call it, say,\n> PartitionListBound?\n\nYes. 
PartitionListBound looks more appropriate and it also matches the\nsimilar structures of the other partition strategies.\n\n> Also, please keep part of the existing comment that says that the\n> bound belongs to index'th partition.\n\nRetained the old comment.\n\n\n> + * partition_bound_accepts_nulls\n> + *\n> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n> */\n>\n> I suggest slight rewording, as follows:\n>\n> \"Returns TRUE if any of the partition bounds contains a NULL value,\n> FALSE otherwise.\"\n\nFixed.\n\n\n> - PartitionListValue *all_values;\n> + PartitionListValue **all_values;\n> ...\n> - all_values = (PartitionListValue *)\n> - palloc(ndatums * sizeof(PartitionListValue));\n> + ndatums = get_list_datum_count(boundspecs, nparts);\n> + all_values = (PartitionListValue **)\n> + palloc(ndatums * sizeof(PartitionListValue *));\n>\n> I don't see the need to redefine all_values's pointer type. No need\n> to palloc PartitionListValue repeatedly for every datum as done\n> further down as follows:\n>\n> + all_values[j] = (PartitionListValue *)\n> palloc(sizeof(PartitionListValue));\n>\n> You do need the following two though:\n>\n> + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n> sizeof(Datum));\n> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n> sizeof(bool));\n>\n> If you change the above the way I suggest, you'd also need to revert\n> the following change:\n>\n> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n> qsort_partition_list_value_cmp, (void *) key);\n>\n> + int orig_index = all_values[i]->index;\n> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>\n> Missing a newline between these two statements.\n\nFixed. 
Made necessary changes to keep the intent of existing code.


> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16
> *parttyplen, bool *parttypbyval,
> if (b1->nindexes != b2->nindexes)
> return false;
>
> - if (b1->null_index != b2->null_index)
> + if (get_partition_bound_null_index(b1) !=
> get_partition_bound_null_index(b2))
>
> As mentioned in the last message, this bit in partition_bounds_equal()
> needs to be comparing \"bitmapsets\" of null bound indexes, that is
> after fixing get_partition_bound_null_index() as previously mentioned.

As mentioned earlier, I removed the functionality of
get_partition_bound_null_index(), hence the above condition is no longer
required and has been removed.

> But...
>
> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16
> *parttyplen, bool *parttypbyval,
> * context. datumIsEqual() should be simple enough to be
> * safe.
> */
> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],
> + if (b1->isnulls)
> + b1_isnull = b1->isnulls[i][j];
> + if (b2->isnulls)
> + b2_isnull = b2->isnulls[i][j];
> +
> + /*
> + * If any of the partition bound has NULL value, then check
> + * equality for the NULL value instead of comparing the datums
> + * as it does not contain valid value in case of NULL.
> + */
> + if (b1_isnull || b2_isnull)
> + {
> + if (b1_isnull != b2_isnull)
> + return false;
> + }
>
> ...if you have this in the main loop, I don't think we need the above
> code stanza which appears to implement a short-cut for this long-form
> logic.

Yes. Maybe we could have ignored the above code stanza if we had
compared the null indexes using get_partition_bound_null_index() at
the beginning of the function. 
But hence we are not separating the\nnon null partitions and null partitions, I would like to keep the\nlogic in the inner loop as we are doing it for non null bound values\nin the above code stanza, just to give a feel that null bound values\nare also handled the same way as non null values. Please correct me if\nI am wrong.\n\n\n> + (key->strategy != PARTITION_STRATEGY_LIST ||\n> + !src->isnulls[i][j]))\n>\n> I think it's better to write this condition as follows just like the\n> accompanying condition involving src->kind:\n>\n> (src->nulls == NULL || !src->isnulls[i][j])\n\nFixed.\n\n\n> In check_new_partition_bound():\n>\n> + Datum *values = (Datum *)\n> palloc0(key->partnatts * sizeof(Datum));\n> + bool *isnulls = (bool *)\n> palloc0(key->partnatts * sizeof(bool));\n>\n> Doesn't seem like a bad idea to declare these as:\n>\n> Datum values[PARTITION_MAX_KEYS];\n> bool isnulls[PARTITION_MAX_KEYS];\n\nThanks for the suggestion. I have changed as above.\n\n> I looked at get_qual_for_list_multi_column() and immediately thought\n> that it may be a bad idea. I think it's better to integrate the logic\n> for multi-column case into the existing function even if that makes\n> the function appear more complex. Having two functions with the same\n> goal and mostly the same code is not a good idea mainly because it\n> becomes a maintenance burden.\n\nActually I had written a separate function because of the complexity.\nNow I have understood that since the objective is same, it should be\ndone in a single function irrespective of complexity.\n\n> I have attempted a rewrite such that get_qual_for_list() now handles\n> both the single-column and multi-column cases. Changes included in\n> the delta patch. The patch updates some outputs of the newly added\n> tests for multi-column list partitions, because the new code emits the\n> IS NOT NULL tests a bit differently than\n> get_qual_for_list_mutli_column() would. 
Notably, the old approach\n> would emit IS NOT NULL for every non-NULL datum matched to a given\n> column, not just once for the column. However, the patch makes a few\n> other tests fail, mainly because I had to fix\n> partition_bound_accepts_nulls() to handle the multi-column case,\n> though didn't bother to update all callers of it to also handle the\n> multi-column case correctly. I guess that's a TODO you're going to\n> deal with at some point anyway. :)\n\nThank you very much for your efforts. The changes looks good to me and\nI have incorporated these changes in the attached patch.\n\nI have completed the coding for all the TODOs and hence removed in the\npatch. The naming conventions used for function/variable names varies\nacross the files. Some places it is like 'namesLikeThis' and in some\nplace it is like 'names_like_this'. I have used the naming conventions\nbased on the surrounding styles used. I am happy to change those if\nrequired.\n\nI have verified 'make check' with the attached patch and it is working fine.\n\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> On PG head + Nitin's v3 patch + Amit's Delta patch. 
Make check is failing with below errors.\n>\n> --inherit.sql is failing with error :\"ERROR: negative bitmapset member not allowed\"\n> update mlparted_tab mlp set c = 'xxx'\n> from\n> (select a from some_tab union all select a+1 from some_tab) ss (a)\n> where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> ERROR: negative bitmapset member not allowed\n>\n> --partition_join.sql is crashing with enable_partitionwise_join set to true.\n> CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001', '0003');\n> CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008', '0009');\n> INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n> ANALYZE plt1_adv;\n> CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002', '0003');\n> CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007', '0009');\n> INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n> ANALYZE plt2_adv;\n> -- inner join\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\n>\n> --stack-trace\n> Core was generated by `postgres: edb regression [local] EXPLAIN '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install 
glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64\n> (gdb) bt\n> #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> #1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)\n> at assert.c:69\n> #3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0, part_index=-1) at partbounds.c:1957\n> #4 0x00000000008919bd in merge_list_bounds (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n> outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1529\n> #5 0x00000000008910de in partition_bounds_merge (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n> outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1223\n> #6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n> parts2=0x7fffd67751a8) at joinrels.c:1644\n> #7 0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n> at joinrels.c:1402\n> #8 0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)\n> at joinrels.c:926\n> #9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938) at joinrels.c:760\n> #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0, 
old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at joinrels.c:312\n> #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0, level=2) at joinrels.c:123\n> #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0, levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n> #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:2951\n> #14 0x000000000080816b in make_one_rel (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:228\n> #15 0x000000000084491d in query_planner (root=0x1a19ed0, qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at planmain.c:276\n> #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0, tuple_fraction=0) at planner.c:1447\n> #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8, parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1025\n> #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n> query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:406\n> #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n> query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:277\n> #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n> query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at postgres.c:847\n> #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290, cursorOptions=2048, into=0x0, es=0x19b36f0,\n> queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, 
t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n> params=0x0, queryEnv=0x0) at explain.c:397\n> #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410, stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n> #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n> queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n> readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n> #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n> queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n> readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n> #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40, pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378, qc=0x7fffd6775f90) at pquery.c:1147\n> #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40, isTopLevel=true) at pquery.c:1026\n> #27 0x000000000097ed11 in PortalRun (portal=0x1893b40, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8, altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n> #28 0x0000000000978aa5 in exec_simple_query (\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n> On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>> > <nitinjadhavpostgres@gmail.com> wrote:\n>> > > The attached patch also fixes the above comments.\n>> >\n>> > I noticed that multi-column list partitions 
containing NULLs don't\n>> > work correctly with partition pruning yet.\n>> >\n>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n>> > create table p02 partition of p0 for values in ((1, NULL, false));\n>> > explain select * from p0 where a is null;\n>> > QUERY PLAN\n>> > --------------------------------------------------------\n>> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>> > Filter: (a IS NULL)\n>> > (2 rows)\n>> >\n>> > I guess that may be due to the following newly added code being incomplete:\n>> >\n>> > +/*\n>> > + * get_partition_bound_null_index\n>> > + *\n>> > + * Returns the partition index of the partition bound which accepts NULL.\n>> > + */\n>> > +int\n>> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>> > +{\n>> > + int i = 0;\n>> > + int j = 0;\n>> > +\n>> > + if (!boundinfo->isnulls)\n>> > + return -1;\n>> >\n>> > - if (!val->constisnull)\n>> > - count++;\n>> > + for (i = 0; i < boundinfo->ndatums; i++)\n>> > + {\n>> > + //TODO: Handle for multi-column cases\n>> > + for (j = 0; j < 1; j++)\n>> > + {\n>> > + if (boundinfo->isnulls[i][j])\n>> > + return boundinfo->indexes[i];\n>> > }\n>> > }\n>> >\n>> > + return -1;\n>> > +}\n>> >\n>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>> > multiple partitions can now contain NULL values.\n>> >\n>> > Some other issues I noticed and suggestions for improvement:\n>> >\n>> > +/*\n>> > + * checkForDuplicates\n>> > + *\n>> > + * Returns TRUE if the list bound element is already present in the list of\n>> > + * list bounds, FALSE otherwise.\n>> > + */\n>> > +static bool\n>> > +checkForDuplicates(List *source, List *searchElem)\n>> >\n>> > This function name may be too generic. 
Given that it is specific to\n>> > implementing list bound de-duplication, maybe the following signature\n>> > is more appropriate:\n>> >\n>> > static bool\n>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>> >\n>> > Also, better if the function comment mentions those parameter names, like:\n>> >\n>> > \"Returns TRUE if the list bound element 'new_bound' is already present\n>> > in the target list 'list_bounds', FALSE otherwise.\"\n>> >\n>> > +/*\n>> > + * transformPartitionListBounds\n>> > + *\n>> > + * Converts the expressions of list partition bounds from the raw grammar\n>> > + * representation.\n>> >\n>> > A sentence about the result format would be helpful, like:\n>> >\n>> > The result is a List of Lists of Const nodes to account for the\n>> > partition key possibly containing more than one column.\n>> >\n>> > + int i = 0;\n>> > + int j = 0;\n>> >\n>> > Better to initialize such loop counters closer to the loop.\n>> >\n>> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n>> > + colname[i] = get_attname(RelationGetRelid(parent),\n>> > + key->partattrs[i], false);\n>> >\n>> > The palloc in the 1st statement is wasteful, because the 2nd statement\n>> > overwrites its pointer by the pointer to the string palloc'd by\n>> > get_attname().\n>> >\n>> > + ListCell *cell2 = NULL;\n>> >\n>> > No need to explicitly initialize the loop variable.\n>> >\n>> > + RowExpr *rowexpr = NULL;\n>> > +\n>> > + if (!IsA(expr, RowExpr))\n>> > + ereport(ERROR,\n>> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> > + errmsg(\"Invalid list bound specification\"),\n>> > + parser_errposition(pstate, exprLocation((Node\n>> > *) spec))));\n>> > +\n>> > + rowexpr = (RowExpr *) expr;\n>> >\n>> > It's okay to assign rowexpr at the top here instead of the dummy\n>> > NULL-initialization and write the condition as:\n>> >\n>> > if (!IsA(rowexpr, RowExpr))\n>> >\n>> > + if (isDuplicate)\n>> > + continue;\n>> > +\n>> > + result = lappend(result, values);\n>> >\n>> 
> I can see you copied this style from the existing code, but how about\n>> > writing this simply as:\n>> >\n>> > if (!isDuplicate)\n>> > result = lappend(result, values);\n>> >\n>> > -/* One value coming from some (index'th) list partition */\n>> > +/* One bound of a list partition */\n>> > typedef struct PartitionListValue\n>> > {\n>> > int index;\n>> > - Datum value;\n>> > + Datum *values;\n>> > + bool *isnulls;\n>> > } PartitionListValue;\n>> >\n>> > Given that this is a locally-defined struct, I wonder if it makes\n>> > sense to rename the struct while we're at it. Call it, say,\n>> > PartitionListBound?\n>> >\n>> > Also, please keep part of the existing comment that says that the\n>> > bound belongs to index'th partition.\n>> >\n>> > Will send more comments in a bit...\n>>\n>> + * partition_bound_accepts_nulls\n>> + *\n>> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>> */\n>>\n>> I suggest slight rewording, as follows:\n>>\n>> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>> FALSE otherwise.\"\n>>\n>> - PartitionListValue *all_values;\n>> + PartitionListValue **all_values;\n>> ...\n>> - all_values = (PartitionListValue *)\n>> - palloc(ndatums * sizeof(PartitionListValue));\n>> + ndatums = get_list_datum_count(boundspecs, nparts);\n>> + all_values = (PartitionListValue **)\n>> + palloc(ndatums * sizeof(PartitionListValue *));\n>>\n>> I don't see the need to redefine all_values's pointer type. 
No need\n>> to palloc PartitionListValue repeatedly for every datum as done\n>> further down as follows:\n>>\n>> + all_values[j] = (PartitionListValue *)\n>> palloc(sizeof(PartitionListValue));\n>>\n>> You do need the following two though:\n>>\n>> + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>> sizeof(Datum));\n>> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>> sizeof(bool));\n>>\n>> If you change the above the way I suggest, you'd also need to revert\n>> the following change:\n>>\n>> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>> qsort_partition_list_value_cmp, (void *) key);\n>>\n>> + int orig_index = all_values[i]->index;\n>> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>>\n>> Missing a newline between these two statements.\n>>\n>> BTW, I noticed that the boundDatums variable is no longer used in\n>> create_list_bounds. I traced back its origin and found that a recent\n>> commit 53d86957e98 introduced it to implement an idea to reduce the\n>> finer-grained pallocs that were being done in create_list_bounds(). I\n>> don't think that this patch needs to throw away that work. 
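The pattern from that commit — one allocation for the bound structs plus one contiguous block for all the datums, sorted with a single qsort over the struct array itself — can be sketched stand-alone in plain C. `Datum`, `ListBound`, and `build_sorted_bounds` below are hypothetical stand-ins for the PostgreSQL types and palloc, not the actual patch code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for PostgreSQL's Datum and PartitionListValue. */
typedef long Datum;

typedef struct ListBound
{
	int		index;			/* which partition this bound belongs to */
	Datum  *values;			/* one slot per partition key column */
} ListBound;

/* Sort on the first key column only, to keep the sketch short. */
static int
bound_cmp(const void *a, const void *b)
{
	const ListBound *b1 = (const ListBound *) a;
	const ListBound *b2 = (const ListBound *) b;

	return (b1->values[0] > b2->values[0]) - (b1->values[0] < b2->values[0]);
}

/*
 * Build and sort bounds for 'ndatums' rows of 'natts' columns each:
 * one allocation for the struct array and one contiguous block for all
 * the datums, instead of one small allocation per bound.
 */
static ListBound *
build_sorted_bounds(const Datum *src, int ndatums, int natts)
{
	ListBound  *all = malloc(ndatums * sizeof(ListBound));
	Datum	   *datums = malloc(ndatums * natts * sizeof(Datum));

	memcpy(datums, src, ndatums * natts * sizeof(Datum));
	for (int i = 0; i < ndatums; i++)
	{
		all[i].index = i;
		all[i].values = datums + i * natts; /* carve out of the big block */
	}
	/* element size is the struct itself, not a pointer to it */
	qsort(all, ndatums, sizeof(ListBound), bound_cmp);
	return all;
}
```

This layout is also why, if all_values stays an array of structs rather than of pointers, the qsort_arg element size must remain sizeof(PartitionListValue) and not sizeof(PartitionListValue *).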
You can\n>> make it work as the attached delta patch that applies on top of v3.\n>> Please check.\n>>\n>> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>> if (b1->nindexes != b2->nindexes)\n>> return false;\n>>\n>> - if (b1->null_index != b2->null_index)\n>> + if (get_partition_bound_null_index(b1) !=\n>> get_partition_bound_null_index(b2))\n>>\n>> As mentioned in the last message, this bit in partition_bounds_equal()\n>> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>> after fixing get_partition_bound_null_index() as previously mentioned.\n>>\n>> But...\n>>\n>> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>> * context. datumIsEqual() should be simple enough to be\n>> * safe.\n>> */\n>> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>> + if (b1->isnulls)\n>> + b1_isnull = b1->isnulls[i][j];\n>> + if (b2->isnulls)\n>> + b2_isnull = b2->isnulls[i][j];\n>> +\n>> + /*\n>> + * If any of the partition bound has NULL value, then check\n>> + * equality for the NULL value instead of comparing the datums\n>> + * as it does not contain valid value in case of NULL.\n>> + */\n>> + if (b1_isnull || b2_isnull)\n>> + {\n>> + if (b1_isnull != b2_isnull)\n>> + return false;\n>> + }\n>>\n>> ...if you have this in the main loop, I don't think we need the above\n>> code stanza which appears to implement a short-cut for this long-form\n>> logic.\n>>\n>> + (key->strategy != PARTITION_STRATEGY_LIST ||\n>> + !src->isnulls[i][j]))\n>>\n>> I think it's better to write this condition as follows just like the\n>> accompanying condition involving src->kind:\n>>\n>> (src->nulls == NULL || !src->isnulls[i][j])\n>>\n>> (Skipped looking at merge_list_bounds() and related changes for now as\n>> I see a lot of TODOs remain to be done.)\n>>\n>> In check_new_partition_bound():\n>>\n>> + Datum *values = (Datum *)\n>> palloc0(key->partnatts * 
sizeof(Datum));\n>> + bool *isnulls = (bool *)\n>> palloc0(key->partnatts * sizeof(bool));\n>>\n>> Doesn't seem like a bad idea to declare these as:\n>>\n>> Datum values[PARTITION_MAX_KEYS];\n>> bool isnulls[PARTITION_MAX_KEYS];\n>>\n>>\n>> I looked at get_qual_for_list_multi_column() and immediately thought\n>> that it may be a bad idea. I think it's better to integrate the logic\n>> for multi-column case into the existing function even if that makes\n>> the function appear more complex. Having two functions with the same\n>> goal and mostly the same code is not a good idea mainly because it\n>> becomes a maintenance burden.\n>>\n>> I have attempted a rewrite such that get_qual_for_list() now handles\n>> both the single-column and multi-column cases. Changes included in\n>> the delta patch. The patch updates some outputs of the newly added\n>> tests for multi-column list partitions, because the new code emits the\n>> IS NOT NULL tests a bit differently than\n>> get_qual_for_list_multi_column() would. Notably, the old approach\n>> would emit IS NOT NULL for every non-NULL datum matched to a given\n>> column, not just once for the column. However, the patch makes a few\n>> other tests fail, mainly because I had to fix\n>> partition_bound_accepts_nulls() to handle the multi-column case,\n>> though didn't bother to update all callers of it to also handle the\n>> multi-column case correctly. I guess that's a TODO you're going to\n>> deal with at some point anyway. :)\n>>\n>> I still have more than half of v3 left to look at, so will continue\n>> looking. 
In the meantime, please check the changes I suggested,\n>> including the delta patch, and let me know your thoughts.\n>>\n>> --\n>> Amit Langote\n>> EDB: http://www.enterprisedb.com", "msg_date": "Sun, 3 Oct 2021 01:53:26 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks Nitin,\n\nv4 patches applied cleanly and make check is passing now. While testing\nfurther I observed that if multiple values are given for a single\ncolumn list partition it is not giving error instead it is changing values\nitself. Please find the example below.\n\npostgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);\nCREATE TABLE\npostgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n(('0001','0001'),('0002','0002'));\nCREATE TABLE\npostgres=# \\d+ plt1;\n Partitioned table \"public.plt1\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n a | integer | | | | plain |\n | |\n b | character varying | | | | extended |\n | |\nPartition key: LIST (b)\nPartitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')\n\nI think it should throw an error as the partition by list has only 1 column\nbut we are giving 2 values.\nalso if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\ninstead of ('0001','0001').\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\n\nOn Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n> failing with below errors.\n> >\n> > Thanks Rajkumar for testing.\n> >\n> > Here's a v2 of the delta patch that should fix both of these test\n> > failures. 
As I mentioned in my last reply, my delta patch fixed what\n> > I think were problems in Nitin's v3 patch but were not complete by\n> > themselves. Especially, I hadn't bothered to investigate various /*\n> > TODO: handle multi-column list partitioning */ sites to deal with my\n> > own changes.\n>\n> Thanks Rajkumar for testing and Thank you Amit for working on v2 of\n> the delta patch. Actually I had done the code changes related to\n> partition-wise join and I was in the middle of fixing the review\n> comments, So I could not share the patch. Anyways thanks for your\n> efforts.\n>\n> > I noticed that multi-column list partitions containing NULLs don't\n> > work correctly with partition pruning yet.\n> >\n> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1,\n> false));\n> > create table p02 partition of p0 for values in ((1, NULL, false));\n> > explain select * from p0 where a is null;\n> > QUERY PLAN\n> > --------------------------------------------------------\n> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n> > Filter: (a IS NULL)\n> > (2 rows)\n> >\n> > In the attached updated version, I've dealt with some of those such\n> > that at least the existing cases exercising partition pruning and\n> > partition wise joins now pass.\n>\n> wrt partition pruning, I have checked the output of the above case\n> with the v2 version of the delta patch and without that. The output\n> remains same. Kindly let me know if I am missing something. But I feel\n> the above output is correct as the partition p01 is the only partition\n> which contains NULL value for column a, hence it is showing \"Seq scan\n> on p01\" in the output. Kindly correct me if I am wrong. 
I feel the\n> code changes related to 'null_keys' are not required, hence I have not\n> incorporated them in the attached patch.\n>\n> wrt partition-wise join, I had run the regression test (with new cases\n> related to partition-wise join) on v2 of the delta patch and observed\n> the crash. Hence I have not incorporated the partition-wise join\n> related code from v2 of the delta patch into the main v4 patch. Instead I have\n> added the partition-wise join related code done by me in the attached\n> patch. Please share your thoughts and if possible we can improve the\n> code. The rest of the changes look good to me and I have incorporated\n> them in the attached patch.\n>\n>\n> > I guess that may be due to the following newly added code being\n> incomplete:\n> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n> > multiple partitions can now contain NULL values.\n>\n> I feel this function is not required at all as we are not separating\n> the non-null and null partitions now. Removed it in the attached patch.\n> Also removed the \"scan_null\" variable from the structure\n> \"PruneStepResult\" and cleaned up the corresponding code blocks.\n>\n>\n> > This function name may be too generic. Given that it is specific to\n> > implementing list bound de-duplication, maybe the following signature\n> > is more appropriate:\n> >\n> > static bool\n> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>\n> Yes. The existing function name was too generic. How about using\n> \"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\n> let me know if that does not look correct.\n>\n>\n> > Also, better if the function comment mentions those parameter names,\n> like:\n> >\n> > \"Returns TRUE if the list bound element 'new_bound' is already present\n> > in the target list 'list_bounds', FALSE otherwise.\"\n>\n> Fixed.\n>\n>\n> > +/*\n> > + * transformPartitionListBounds\n> > + *\n> > + * Converts the expressions of list partition bounds from the raw\n> grammar\n> > + * representation.\n> >\n> > A sentence about the result format would be helpful, like:\n> >\n> > The result is a List of Lists of Const nodes to account for the\n> > partition key possibly containing more than one column.\n>\n> Fixed.\n>\n>\n> > + int i = 0;\n> > + int j = 0;\n> >\n> > Better to initialize such loop counters closer to the loop.\n>\n> Fixed in all the places.\n>\n>\n> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> > + colname[i] = get_attname(RelationGetRelid(parent),\n> > + key->partattrs[i], false);\n> >\n> > The palloc in the 1st statement is wasteful, because the 2nd statement\n> > overwrites its pointer by the pointer to the string palloc'd by\n> > get_attname().\n>\n> Removed the 1st statement as it is not required.\n>\n>\n> > + ListCell *cell2 = NULL;\n> >\n> > No need to explicitly initialize the loop variable.\n>\n> Fixed in all the places.\n>\n>\n> > + RowExpr *rowexpr = NULL;\n> > +\n> > + if (!IsA(expr, RowExpr))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> > + errmsg(\"Invalid list bound specification\"),\n> > + parser_errposition(pstate, exprLocation((Node\n> > *) spec))));\n> > +\n> > + rowexpr = (RowExpr *) expr;\n> >\n> > It's okay to assign rowexpr at the top here instead of the dummy\n> > NULL-initialization and write the condition as:\n> >\n> > if (!IsA(rowexpr, RowExpr))\n>\n> Fixed.\n>\n>\n> > + if (isDuplicate)\n> > + continue;\n> > +\n> > + result = lappend(result, values);\n> >\n> > I can see you copied this style from the existing code, 
but how about\n> > writing this simply as:\n> >\n> > if (!isDuplicate)\n> > result = lappend(result, values);\n>\n> This looks good. I have changed in the patch.\n>\n>\n> > -/* One value coming from some (index'th) list partition */\n> > +/* One bound of a list partition */\n> > typedef struct PartitionListValue\n> > {\n> > int index;\n> > - Datum value;\n> > + Datum *values;\n> > + bool *isnulls;\n> > } PartitionListValue;\n> >\n> > Given that this is a locally-defined struct, I wonder if it makes\n> > sense to rename the struct while we're at it. Call it, say,\n> > PartitionListBound?\n>\n> Yes. PartitionListBound looks more appropriate and it also matches the\n> similar structures of the other partition strategies.\n>\n> > Also, please keep part of the existing comment that says that the\n> > bound belongs to index'th partition.\n>\n> Retained the old comment.\n>\n>\n> > + * partition_bound_accepts_nulls\n> > + *\n> > + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n> > */\n> >\n> > I suggest slight rewording, as follows:\n> >\n> > \"Returns TRUE if any of the partition bounds contains a NULL value,\n> > FALSE otherwise.\"\n>\n> Fixed.\n>\n>\n> > - PartitionListValue *all_values;\n> > + PartitionListValue **all_values;\n> > ...\n> > - all_values = (PartitionListValue *)\n> > - palloc(ndatums * sizeof(PartitionListValue));\n> > + ndatums = get_list_datum_count(boundspecs, nparts);\n> > + all_values = (PartitionListValue **)\n> > + palloc(ndatums * sizeof(PartitionListValue *));\n> >\n> > I don't see the need to redefine all_values's pointer type. 
No need\n> > to palloc PartitionListValue repeatedly for every datum as done\n> > further down as follows:\n> >\n> > + all_values[j] = (PartitionListValue *)\n> > palloc(sizeof(PartitionListValue));\n> >\n> > You do need the following two though:\n> >\n> > + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n> > sizeof(Datum));\n> > + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n> > sizeof(bool));\n> >\n> > If you change the above the way I suggest, you'd also need to revert\n> > the following change:\n> >\n> > - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n> > + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n> > qsort_partition_list_value_cmp, (void *) key);\n> >\n> > + int orig_index = all_values[i]->index;\n> > + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n> sizeof(Datum));\n> >\n> > Missing a newline between these two statements.\n>\n> Fixed. Made necessary changes to keep the intent of existing code.\n>\n>\n> > @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n> > *parttyplen, bool *parttypbyval,\n> > if (b1->nindexes != b2->nindexes)\n> > return false;\n> >\n> > - if (b1->null_index != b2->null_index)\n> > + if (get_partition_bound_null_index(b1) !=\n> > get_partition_bound_null_index(b2))\n> >\n> > As mentioned in the last message, this bit in partition_bounds_equal()\n> > needs to be comparing \"bitmapsets\" of null bound indexes, that is\n> > after fixing get_partition_bound_null_index() as previously mentioned.\n>\n> As mentioned earlier, removed the functionality of\n> get_partition_bound_null_index(), hence the above condition is not\n> required and removed.\n>\n> > But...\n> >\n> > @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n> > *parttyplen, bool *parttypbyval,\n> > * context. 
datumIsEqual() should be simple enough to be\n> > * safe.\n> > */\n> > - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> > + if (b1->isnulls)\n> > + b1_isnull = b1->isnulls[i][j];\n> > + if (b2->isnulls)\n> > + b2_isnull = b2->isnulls[i][j];\n> > +\n> > + /*\n> > + * If any of the partition bound has NULL value, then\n> check\n> > + * equality for the NULL value instead of comparing the\n> datums\n> > + * as it does not contain valid value in case of NULL.\n> > + */\n> > + if (b1_isnull || b2_isnull)\n> > + {\n> > + if (b1_isnull != b2_isnull)\n> > + return false;\n> > + }\n> >\n> > ...if you have this in the main loop, I don't think we need the above\n> > code stanza which appears to implement a short-cut for this long-form\n> > logic.\n>\n> Yes. Maybe we could have dropped the above code stanza if we had\n> compared the null indexes using get_partition_bound_null_index()\n> at the beginning of the function. But since we are no longer separating\n> the non-null and null partitions, I would like to keep the\n> logic in the inner loop, just as we do for non-null bound values\n> in the above code stanza, to make it clear that null bound values\n> are handled the same way as non-null values. Please correct me if\n> I am wrong.\n>\n>\n> > + (key->strategy != PARTITION_STRATEGY_LIST ||\n> > + !src->isnulls[i][j]))\n> >\n> > I think it's better to write this condition as follows just like the\n> > accompanying condition involving src->kind:\n> >\n> > (src->nulls == NULL || !src->isnulls[i][j])\n>\n> Fixed.\n>\n>\n> > In check_new_partition_bound():\n> >\n> > + Datum *values = (Datum *)\n> > palloc0(key->partnatts * sizeof(Datum));\n> > + bool *isnulls = (bool *)\n> > palloc0(key->partnatts * sizeof(bool));\n> >\n> > Doesn't seem like a bad idea to declare these as:\n> >\n> > Datum values[PARTITION_MAX_KEYS];\n> > bool isnulls[PARTITION_MAX_KEYS];\n>\n> Thanks for the suggestion. 
I have changed as above.\n>\n> > I looked at get_qual_for_list_multi_column() and immediately thought\n> > that it may be a bad idea. I think it's better to integrate the logic\n> > for multi-column case into the existing function even if that makes\n> > the function appear more complex. Having two functions with the same\n> > goal and mostly the same code is not a good idea mainly because it\n> > becomes a maintenance burden.\n>\n> Actually I had written a separate function because of the complexity.\n> Now I have understood that since the objective is the same, it should be\n> done in a single function irrespective of the complexity.\n>\n> > I have attempted a rewrite such that get_qual_for_list() now handles\n> > both the single-column and multi-column cases. Changes included in\n> > the delta patch. The patch updates some outputs of the newly added\n> > tests for multi-column list partitions, because the new code emits the\n> > IS NOT NULL tests a bit differently than\n> > get_qual_for_list_multi_column() would. Notably, the old approach\n> > would emit IS NOT NULL for every non-NULL datum matched to a given\n> > column, not just once for the column. However, the patch makes a few\n> > other tests fail, mainly because I had to fix\n> > partition_bound_accepts_nulls() to handle the multi-column case,\n> > though didn't bother to update all callers of it to also handle the\n> > multi-column case correctly. I guess that's a TODO you're going to\n> > deal with at some point anyway. :)\n>\n> Thank you very much for your efforts. The changes look good to me and\n> I have incorporated them in the attached patch.\n>\n> I have completed the coding for all the TODOs and hence removed them in the\n> patch. The naming conventions used for function/variable names vary\n> across the files. In some places it is like 'namesLikeThis' and in\n> others it is like 'names_like_this'. I have used the naming conventions\n> based on the surrounding styles used. 
I am happy to change those if\n> required.\n>\n> I have verified 'make check' with the attached patch and it is working\n> fine.\n>\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n>\n> On Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n> >\n> > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n> failing with below errors.\n> >\n> > --inherit.sql is failing with error :\"ERROR: negative bitmapset member\n> not allowed\"\n> > update mlparted_tab mlp set c = 'xxx'\n> > from\n> > (select a from some_tab union all select a+1 from some_tab) ss (a)\n> > where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> > ERROR: negative bitmapset member not allowed\n> >\n> > --partition_join.sql is crashing with enable_partitionwise_join set to\n> true.\n> > CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n> > CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001',\n> '0003');\n> > CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004',\n> '0006');\n> > CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008',\n> '0009');\n> > INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n> generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n> > ANALYZE plt1_adv;\n> > CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n> > CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002',\n> '0003');\n> > CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004',\n> '0006');\n> > CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007',\n> '0009');\n> > INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n> generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n> > ANALYZE plt2_adv;\n> > -- inner join\n> > EXPLAIN (COSTS OFF)\n> > SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON\n> (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n> > server closed 
the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > connection to server was lost\n> >\n> >\n> > --stack-trace\n> > Core was generated by `postgres: edb regression [local] EXPLAIN\n> '.\n> > Program terminated with signal 6, Aborted.\n> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> > Missing separate debuginfos, use: debuginfo-install\n> glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\n> krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64\n> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64\n> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\n> zlib-1.2.7-17.el7.x86_64\n> > (gdb) bt\n> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> > #1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n> > #2 0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10\n> \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe\n> \"partbounds.c\", lineNumber=1957)\n> > at assert.c:69\n> > #3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0,\n> part_index=-1) at partbounds.c:1957\n> > #4 0x00000000008919bd in merge_list_bounds (partnatts=1,\n> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n> inner_rel=0x1922938, jointype=JOIN_INNER,\n> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n> partbounds.c:1529\n> > #5 0x00000000008910de in partition_bounds_merge (partnatts=1,\n> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n> inner_rel=0x1922938, jointype=JOIN_INNER,\n> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n> partbounds.c:1223\n> > #6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0,\n> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n> parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n> > parts2=0x7fffd67751a8) at joinrels.c:1644\n> > #7 0x000000000082bc34 in try_partitionwise_join 
(root=0x1a19ed0,\n> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n> parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n> > at joinrels.c:1402\n> > #8 0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0,\n> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0,\n> restrictlist=0x1ab3318)\n> > at joinrels.c:926\n> > #9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0,\n> rel2=0x1922938) at joinrels.c:760\n> > #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0,\n> old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at\n> joinrels.c:312\n> > #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0,\n> level=2) at joinrels.c:123\n> > #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0,\n> levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n> > #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0,\n> joinlist=0x199d538) at allpaths.c:2951\n> > #14 0x000000000080816b in make_one_rel (root=0x1a19ed0,\n> joinlist=0x199d538) at allpaths.c:228\n> > #15 0x000000000084491d in query_planner (root=0x1a19ed0,\n> qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at\n> planmain.c:276\n> > #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0,\n> tuple_fraction=0) at planner.c:1447\n> > #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8,\n> parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\n> planner.c:1025\n> > #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n> boundParams=0x0) at planner.c:406\n> > #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n> t2.a, t2.c FROM plt1_adv t1 INNER JOIN 
plt2_adv t2 ON (t1.a = t2.a AND t1.c\n> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n> boundParams=0x0) at planner.c:277\n> > #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n> boundParams=0x0) at postgres.c:847\n> > #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290,\n> cursorOptions=2048, into=0x0, es=0x19b36f0,\n> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\n> t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\n> t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n> > params=0x0, queryEnv=0x0) at explain.c:397\n> > #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410,\n> stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n> > #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\n> t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\n> t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n> > #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a,\n> t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c =\n> t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n> > #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40,\n> pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378,\n> qc=0x7fffd6775f90) at pquery.c:1147\n> > #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40,\n> 
isTopLevel=true) at pquery.c:1026
> > #27 0x000000000097ed11 in PortalRun (portal=0x1893b40, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8, altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758
> > #28 0x0000000000978aa5 in exec_simple_query (
> >
> > Thanks & Regards,
> > Rajkumar Raghuwanshi
> >
> >
> > On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:
> >>
> >> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:
> >> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav
> >> > <nitinjadhavpostgres@gmail.com> wrote:
> >> > > The attached patch also fixes the above comments.
> >> >
> >> > I noticed that multi-column list partitions containing NULLs don't
> >> > work correctly with partition pruning yet.
> >> >
> >> > create table p0 (a int, b text, c bool) partition by list (a, b, c);
> >> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));
> >> > create table p02 partition of p0 for values in ((1, NULL, false));
> >> > explain select * from p0 where a is null;
> >> >                        QUERY PLAN
> >> > --------------------------------------------------------
> >> >  Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)
> >> >    Filter: (a IS NULL)
> >> > (2 rows)
> >> >
> >> > I guess that may be due to the following newly added code being incomplete:
> >> >
> >> > +/*
> >> > + * get_partition_bound_null_index
> >> > + *
> >> > + * Returns the partition index of the partition bound which accepts NULL.
> >> > + */
> >> > +int
> >> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)
> >> > +{
> >> > +   int i = 0;
> >> > +   int j = 0;
> >> > +
> >> > +   if (!boundinfo->isnulls)
> >> > +       return -1;
> >> >
> >> > -           if (!val->constisnull)
> >> > -               count++;
> >> > +   for (i = 0; i < boundinfo->ndatums; i++)
> >> > +   {
> >> > +       //TODO: Handle for multi-column cases
> >> > +       for (j = 0; j < 1; j++)
> >> > +       {
> >> > +           if (boundinfo->isnulls[i][j])
> >> > +               return boundinfo->indexes[i];
> >> >         }
> >> >     }
> >> >
> >> > +   return -1;
> >> > +}
> >> >
> >> > Maybe this function needs to return a "bitmapset" of indexes, because
> >> > multiple partitions can now contain NULL values.
> >> >
> >> > Some other issues I noticed and suggestions for improvement:
> >> >
> >> > +/*
> >> > + * checkForDuplicates
> >> > + *
> >> > + * Returns TRUE if the list bound element is already present in the list of
> >> > + * list bounds, FALSE otherwise.
> >> > + */
> >> > +static bool
> >> > +checkForDuplicates(List *source, List *searchElem)
> >> >
> >> > This function name may be too generic.  Given that it is specific to
> >> > implementing list bound de-duplication, maybe the following signature
> >> > is more appropriate:
> >> >
> >> > static bool
> >> > checkListBoundDuplicated(List *list_bounds, List *new_bound)
> >> >
> >> > Also, better if the function comment mentions those parameter names, like:
> >> >
> >> > "Returns TRUE if the list bound element 'new_bound' is already present
> >> > in the target list 'list_bounds', FALSE otherwise."
> >> >
> >> > +/*
> >> > + * transformPartitionListBounds
> >> > + *
> >> > + * Converts the expressions of list partition bounds from the raw grammar
> >> > + * representation.
> >> >
> >> > A sentence about the result format would be helpful, like:
> >> >
> >> > The result is a List of Lists of Const nodes to account for the
> >> > partition key possibly containing more than one column.
> >> >
> >> > + int i = 0;
> >> > + int j = 0;
> >> >
> >> > Better to initialize such loop counters closer to the loop.
> >> >
> >> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));
> >> > + colname[i] = get_attname(RelationGetRelid(parent),
> >> > +                          key->partattrs[i], false);
> >> >
> >> > The palloc in the 1st statement is wasteful, because the 2nd statement
> >> > overwrites its pointer by the pointer to the string palloc'd by
> >> > get_attname().
> >> >
> >> > + ListCell *cell2 = NULL;
> >> >
> >> > No need to explicitly initialize the loop variable.
> >> >
> >> > + RowExpr *rowexpr = NULL;
> >> > +
> >> > + if (!IsA(expr, RowExpr))
> >> > +     ereport(ERROR,
> >> > +             (errcode(ERRCODE_INVALID_TABLE_DEFINITION),
> >> > +             errmsg("Invalid list bound specification"),
> >> > +             parser_errposition(pstate, exprLocation((Node *) spec))));
> >> > +
> >> > + rowexpr = (RowExpr *) expr;
> >> >
> >> > It's okay to assign rowexpr at the top here instead of the dummy
> >> > NULL-initialization and write the condition as:
> >> >
> >> >     if (!IsA(rowexpr, RowExpr))
> >> >
> >> > + if (isDuplicate)
> >> > +     continue;
> >> > +
> >> > + result = lappend(result, values);
> >> >
> >> > I can see you copied this style from the existing code, but how about
> >> > writing this simply as:
> >> >
> >> >     if (!isDuplicate)
> >> >         result = lappend(result, values);
> >> >
> >> > -/* One value coming from some (index'th) list partition */
> >> > +/* One bound of a list partition */
> >> > typedef struct PartitionListValue
> >> > {
> >> >     int index;
> >> > -   Datum value;
> >> > +   Datum *values;
> >> > +   bool *isnulls;
> >> > } PartitionListValue;
> >> >
> >> > Given that this is a locally-defined struct, I wonder if it makes
> >> > sense to rename the struct while we're at it.  Call it, say,
> >> > PartitionListBound?
> >> >
> >> > Also, please keep part of the existing comment that says that the
> >> > bound belongs to index'th partition.
> >> >
> >> > Will send more comments in a bit...
> >>
> >> + * partition_bound_accepts_nulls
> >> + *
> >> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.
> >>   */
> >>
> >> I suggest slight rewording, as follows:
> >>
> >> "Returns TRUE if any of the partition bounds contains a NULL value,
> >> FALSE otherwise."
> >>
> >> - PartitionListValue *all_values;
> >> + PartitionListValue **all_values;
> >> ...
> >> - all_values = (PartitionListValue *)
> >> -     palloc(ndatums * sizeof(PartitionListValue));
> >> + ndatums = get_list_datum_count(boundspecs, nparts);
> >> + all_values = (PartitionListValue **)
> >> +     palloc(ndatums * sizeof(PartitionListValue *));
> >>
> >> I don't see the need to redefine all_values's pointer type.  No need
> >> to palloc PartitionListValue repeatedly for every datum as done
> >> further down as follows:
> >>
> >> + all_values[j] = (PartitionListValue *) palloc(sizeof(PartitionListValue));
> >>
> >> You do need the following two though:
> >>
> >> + all_values[j]->values = (Datum *) palloc0(key->partnatts * sizeof(Datum));
> >> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts * sizeof(bool));
> >>
> >> If you change the above the way I suggest, you'd also need to revert
> >> the following change:
> >>
> >> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),
> >> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),
> >>             qsort_partition_list_value_cmp, (void *) key);
> >>
> >> + int orig_index = all_values[i]->index;
> >> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));
> >>
> >> Missing a newline between these two statements.
> >>
> >> BTW, I noticed that the boundDatums variable is no longer used in
> >> create_list_bounds.  I traced back its origin and found that a recent
> >> commit 53d86957e98 introduced it to implement an idea to reduce the
> >> finer-grained pallocs that were being done in create_list_bounds().  I
> >> don't think that this patch needs to throw away that work.  You can
> >> make it work as the attached delta patch that applies on top of v3.
> >> Please check.
> >>
> >> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval,
> >>     if (b1->nindexes != b2->nindexes)
> >>         return false;
> >>
> >> -   if (b1->null_index != b2->null_index)
> >> +   if (get_partition_bound_null_index(b1) != get_partition_bound_null_index(b2))
> >>
> >> As mentioned in the last message, this bit in partition_bounds_equal()
> >> needs to be comparing "bitmapsets" of null bound indexes, that is
> >> after fixing get_partition_bound_null_index() as previously mentioned.
> >>
> >> But...
> >>
> >> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval,
> >>                  * context.  datumIsEqual() should be simple enough to be
> >>                  * safe.
> >>                  */
> >> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],
> >> +               if (b1->isnulls)
> >> +                   b1_isnull = b1->isnulls[i][j];
> >> +               if (b2->isnulls)
> >> +                   b2_isnull = b2->isnulls[i][j];
> >> +
> >> +               /*
> >> +                * If any of the partition bound has NULL value, then check
> >> +                * equality for the NULL value instead of comparing the datums
> >> +                * as it does not contain valid value in case of NULL.
> >> +                */
> >> +               if (b1_isnull || b2_isnull)
> >> +               {
> >> +                   if (b1_isnull != b2_isnull)
> >> +                       return false;
> >> +               }
> >>
> >> ...if you have this in the main loop, I don't think we need the above
> >> code stanza which appears to implement a short-cut for this long-form
> >> logic.
> >>
> >> +               (key->strategy != PARTITION_STRATEGY_LIST ||
> >> +                !src->isnulls[i][j]))
> >>
> >> I think it's better to write this condition as follows just like the
> >> accompanying condition involving src->kind:
> >>
> >>     (src->nulls == NULL || !src->isnulls[i][j])
> >>
> >> (Skipped looking at merge_list_bounds() and related changes for now as
> >> I see a lot of TODOs remain to be done.)
> >>
> >> In check_new_partition_bound():
> >>
> >> + Datum *values = (Datum *) palloc0(key->partnatts * sizeof(Datum));
> >> + bool *isnulls = (bool *) palloc0(key->partnatts * sizeof(bool));
> >>
> >> Doesn't seem like a bad idea to declare these as:
> >>
> >>     Datum values[PARTITION_MAX_KEYS];
> >>     bool  isnulls[PARTITION_MAX_KEYS];
> >>
> >>
> >> I looked at get_qual_for_list_multi_column() and immediately thought
> >> that it may be a bad idea.  I think it's better to integrate the logic
> >> for multi-column case into the existing function even if that makes
> >> the function appear more complex.  Having two functions with the same
> >> goal and mostly the same code is not a good idea mainly because it
> >> becomes a maintenance burden.
> >>
> >> I have attempted a rewrite such that get_qual_for_list() now handles
> >> both the single-column and multi-column cases.  Changes included in
> >> the delta patch.  The patch updates some outputs of the newly added
> >> tests for multi-column list partitions, because the new code emits the
> >> IS NOT NULL tests a bit differently than
> >> get_qual_for_list_mutli_column() would.  Notably, the old approach
> >> would emit IS NOT NULL for every non-NULL datum matched to a given
> >> column, not just once for the column.  However, the patch makes a few
> >> other tests fail, mainly because I had to fix
> >> partition_bound_accepts_nulls() to handle the multi-column case,
> >> though didn't bother to update all callers of it to also handle the
> >> multi-column case correctly.  I guess that's a TODO you're going to
> >> deal with at some point anyway. :)
> >>
> >> I still have more than half of v3 left to look at, so will continue
> >> looking.  In the meantime, please check the changes I suggested,
> >> including the delta patch, and let me know your thoughts.
> >>
> >> --
> >> Amit Langote
> >> EDB: http://www.enterprisedb.com
>
Please find the example below.postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);CREATE TABLEpostgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'));CREATE TABLEpostgres=# \\d+ plt1;                                          Partitioned table \"public.plt1\" Column |       Type        | Collation | Nullable | Default | Storage  | Compression | Stats target | Description --------+-------------------+-----------+----------+---------+----------+-------------+--------------+------------- a      | integer           |           |          |         | plain    |             |              |  b      | character varying |           |          |         | extended |             |              | Partition key: LIST (b)Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')I think it should throw an error as the partition by list has only 1 column but we are giving 2 values. also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’ instead of ('0001','0001').Thanks & Regards,Rajkumar RaghuwanshiOn Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:> > On PG head + Nitin's v3 patch + Amit's Delta patch.  Make check is failing with below errors.\n>\n> Thanks Rajkumar for testing.\n>\n> Here's a v2 of the delta patch that should fix both of these test\n> failures.  As I mentioned in my last reply, my delta patch fixed what\n> I think were problems in Nitin's v3 patch but were not complete by\n> themselves.  Especially, I hadn't bothered to investigate various /*\n> TODO: handle multi-column list partitioning */ sites to deal with my\n> own changes.\n\nThanks Rajkumar for testing and Thank you Amit for working on v2 of\nthe delta patch. Actually I had done the code changes related to\npartition-wise join and I was in the middle of fixing the review\ncomments, So I could not share the patch. 
Anyways thanks for your\nefforts.\n\n> I noticed that multi-column list partitions containing NULLs don't\n> work correctly with partition pruning yet.\n>\n> create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n> create table p02 partition of p0 for values in ((1, NULL, false));\n> explain select * from p0 where a is null;\n>                        QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)\n>  Filter: (a IS NULL)\n> (2 rows)\n>\n> In the attached updated version, I've dealt with some of those such\n> that at least the existing cases exercising partition pruning and\n> partition wise joins now pass.\n\nwrt partition pruning, I have checked the output of the above case\nwith the v2 version of the delta patch and without that. The output\nremains same. Kindly let me know if I am missing something. But I feel\nthe above output is correct as the partition p01 is the only partition\nwhich contains NULL value for column a, hence it is showing \"Seq scan\non p01\" in the output. Kindly correct me if I am wrong. I feel the\ncode changes related to 'null_keys' is not required, hence not\nincorporated that in the attached patch.\n\nwrt partition-wise join, I had run the regression test (with new cases\nrelated to partition-wise join) on v2 of the delta patch and observed\nthe crash. Hence I have not incorporated the partition-wise join\nrelated code from v2 of delta patch to main v4 patch. Instead I have\nadded the partition-wise join related code done by me in the attached\npatch. Please share your thoughts and if possible we can improvise the\ncode. 
Rest of the changes looks good to me and I have incorporated\nthat in the attached patch.\n\n\n> I guess that may be due to the following newly added code being incomplete:\n> Maybe this function needs to return a \"bitmapset\" of indexes, because\n> multiple partitions can now contain NULL values.\n\nI feel this function is not required at all as we are not separating\nthe non null and null partitions now. Removed in the attached patch.\nAlso removed the \"scan_null' variable from the structure\n\"PruneStepResult\" and cleaned up the corresponding code blocks.\n\n\n> This function name may be too generic.  Given that it is specific to\n> implementing list bound de-duplication, maybe the following signature\n> is more appropriate:\n>\n> static bool\n> checkListBoundDuplicated(List *list_bounds, List *new_bound)\n\nYes. The function name looks more generic. How about using\n\"isListBoundDuplicated()\"? I have used this name in the patch. Please\nlet me know if that does not look correct.\n\n\n> Also, better if the function comment mentions those parameter names, like:\n>\n> \"Returns TRUE if the list bound element 'new_bound' is already present\n> in the target list 'list_bounds', FALSE otherwise.\"\n\nFixed.\n\n\n> +/*\n> + * transformPartitionListBounds\n> + *\n> + * Converts the expressions of list partition bounds from the raw grammar\n> + * representation.\n>\n> A sentence about the result format would be helpful, like:\n>\n> The result is a List of Lists of Const nodes to account for the\n> partition key possibly containing more than one column.\n\nFixed.\n\n\n> +   int             i = 0;\n> +   int             j = 0;\n>\n> Better to initialize such loop counters closer to the loop.\n\nFixed in all the places.\n\n\n> +           colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> +           colname[i] = get_attname(RelationGetRelid(parent),\n> +                                    key->partattrs[i], false);\n>\n> The palloc in the 1st statement is 
wasteful, because the 2nd statement\n> overwrites its pointer by the pointer to the string palloc'd by\n> get_attname().\n\nRemoved the 1st statement as it is not required.\n\n\n> +           ListCell   *cell2 = NULL;\n>\n> No need to explicitly initialize the loop variable.\n\nFixed in all the places.\n\n\n> +           RowExpr     *rowexpr = NULL;\n> +\n> +           if (!IsA(expr, RowExpr))\n> +               ereport(ERROR,\n> +                       (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +                       errmsg(\"Invalid list bound specification\"),\n> +                       parser_errposition(pstate, exprLocation((Node\n> *) spec))));\n> +\n> +           rowexpr = (RowExpr *) expr;\n>\n> It's okay to assign rowexpr at the top here instead of the dummy\n> NULL-initialization and write the condition as:\n>\n>    if (!IsA(rowexpr, RowExpr))\n\nFixed.\n\n\n> +       if (isDuplicate)\n> +           continue;\n> +\n> +       result = lappend(result, values);\n>\n> I can see you copied this style from the existing code, but how about\n> writing this simply as:\n>\n>    if (!isDuplicate)\n>        result = lappend(result, values);\n\nThis looks good. I have changed in the patch.\n\n\n> -/* One value coming from some (index'th) list partition */\n> +/* One bound of a list partition */\n> typedef struct PartitionListValue\n> {\n>    int         index;\n> -   Datum       value;\n> +   Datum      *values;\n> +   bool       *isnulls;\n>  } PartitionListValue;\n>\n> Given that this is a locally-defined struct, I wonder if it makes\n> sense to rename the struct while we're at it.  Call it, say,\n> PartitionListBound?\n\nYes. 
PartitionListBound looks more appropriate and it also matches the\nsimilar structures of the other partition strategies.\n\n> Also, please keep part of the existing comment that says that the\n> bound belongs to index'th partition.\n\nRetained the old comment.\n\n\n> + * partition_bound_accepts_nulls\n> + *\n> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>  */\n>\n> I suggest slight rewording, as follows:\n>\n> \"Returns TRUE if any of the partition bounds contains a NULL value,\n> FALSE otherwise.\"\n\nFixed.\n\n\n> -   PartitionListValue *all_values;\n> +   PartitionListValue **all_values;\n> ...\n> -   all_values = (PartitionListValue *)\n> -       palloc(ndatums * sizeof(PartitionListValue));\n> +   ndatums = get_list_datum_count(boundspecs, nparts);\n> +   all_values = (PartitionListValue **)\n> +       palloc(ndatums * sizeof(PartitionListValue *));\n>\n> I don't see the need to redefine all_values's pointer type.  No need\n> to palloc PartitionListValue repeatedly for every datum as done\n> further down as follows:\n>\n> +           all_values[j] = (PartitionListValue *)\n> palloc(sizeof(PartitionListValue));\n>\n> You do need the following two though:\n>\n> +           all_values[j]->values = (Datum *) palloc0(key->partnatts *\n> sizeof(Datum));\n> +           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n> sizeof(bool));\n>\n> If you change the above the way I suggest, you'd also need to revert\n> the following change:\n>\n> -   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n> +   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>              qsort_partition_list_value_cmp, (void *) key);\n>\n> +       int         orig_index = all_values[i]->index;\n> +       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>\n> Missing a newline between these two statements.\n\nFixed. 
Made necessary changes to keep the intent of existing code.\n\n\n> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n>    if (b1->nindexes != b2->nindexes)\n>        return false;\n>\n> -   if (b1->null_index != b2->null_index)\n> +   if (get_partition_bound_null_index(b1) !=\n> get_partition_bound_null_index(b2))\n>\n> As mentioned in the last message, this bit in partition_bounds_equal()\n> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n> after fixing get_partition_bound_null_index() as previously mentioned.\n\nAs mentioned earlier, removed the functionality of\nget_partition_bound_null_index(), hence the above condition is not\nrequired and removed.\n\n> But...\n>\n> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n>                 * context.  datumIsEqual() should be simple enough to be\n>                 * safe.\n>                 */\n> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> +               if (b1->isnulls)\n> +                   b1_isnull = b1->isnulls[i][j];\n> +               if (b2->isnulls)\n> +                   b2_isnull = b2->isnulls[i][j];\n> +\n> +               /*\n> +                * If any of the partition bound has NULL value, then check\n> +                * equality for the NULL value instead of comparing the datums\n> +                * as it does not contain valid value in case of NULL.\n> +                */\n> +               if (b1_isnull || b2_isnull)\n> +               {\n> +                   if (b1_isnull != b2_isnull)\n> +                       return false;\n> +               }\n>\n> ...if you have this in the main loop, I don't think we need the above\n> code stanza which appears to implement a short-cut for this long-form\n> logic.\n\nYes. 
May be we could have ignored the above code stanza if we would\nhave comparing the null indexes using get_partition_bound_null_index()\nin the beginning of the function. But hence we are not separating the\nnon null partitions and null partitions, I would like to keep the\nlogic in the inner loop as we are doing it for non null bound values\nin the above code stanza, just to give a feel that null bound values\nare also handled the same way as non null values. Please correct me if\nI am wrong.\n\n\n> +               (key->strategy != PARTITION_STRATEGY_LIST ||\n> +                !src->isnulls[i][j]))\n>\n> I think it's better to write this condition as follows just like the\n> accompanying condition involving src->kind:\n>\n>    (src->nulls == NULL || !src->isnulls[i][j])\n\nFixed.\n\n\n> In check_new_partition_bound():\n>\n> +                       Datum      *values = (Datum *)\n> palloc0(key->partnatts * sizeof(Datum));\n> +                       bool       *isnulls = (bool *)\n> palloc0(key->partnatts * sizeof(bool));\n>\n> Doesn't seem like a bad idea to declare these as:\n>\n>     Datum    values[PARTITION_MAX_KEYS];\n>    bool        isnulls[PARTITION_MAX_KEYS];\n\nThanks for the suggestion. I have changed as above.\n\n> I looked at get_qual_for_list_multi_column() and immediately thought\n> that it may be a bad idea.  I think it's better to integrate the logic\n> for multi-column case into the existing function even if that makes\n> the function appear more complex.  Having two functions with the same\n> goal and mostly the same code is not a good idea mainly because it\n> becomes a maintenance burden.\n\nActually I had written a separate function because of the complexity.\nNow I have understood that since the objective is same, it should be\ndone in a single function irrespective of complexity.\n\n> I have attempted a rewrite such that get_qual_for_list() now handles\n> both the single-column and multi-column cases.  
Changes included in\n> the delta patch.  The patch updates some outputs of the newly added\n> tests for multi-column list partitions, because the new code emits the\n> IS NOT NULL tests a bit differently than\n> get_qual_for_list_mutli_column() would.  Notably, the old approach\n> would emit IS NOT NULL for every non-NULL datum matched to a given\n> column, not just once for the column.  However, the patch makes a few\n> other tests fail, mainly because I had to fix\n> partition_bound_accepts_nulls() to handle the multi-column case,\n> though didn't bother to update all callers of it to also handle the\n> multi-column case correctly.  I guess that's a TODO you're going to\n> deal with at some point anyway. :)\n\nThank you very much for your efforts. The changes looks good to me and\nI have incorporated these changes in the attached patch.\n\nI have completed the coding for all the TODOs and hence removed in the\npatch. The naming conventions used for function/variable names varies\nacross the files. Some places it is like 'namesLikeThis' and in some\nplace it is like 'names_like_this'. I have used the naming conventions\nbased on the surrounding styles used. I am happy to change those if\nrequired.\n\nI have verified 'make check' with the attached patch and it is working fine.\n\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> On PG head + Nitin's v3 patch + Amit's Delta patch.  
Make check is failing with below errors.\n>\n> --inherit.sql is failing with error :\"ERROR:  negative bitmapset member not allowed\"\n> update mlparted_tab mlp set c = 'xxx'\n> from\n>   (select a from some_tab union all select a+1 from some_tab) ss (a)\n> where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> ERROR:  negative bitmapset member not allowed\n>\n> --partition_join.sql is crashing with enable_partitionwise_join set to true.\n> CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001', '0003');\n> CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008', '0009');\n> INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n> ANALYZE plt1_adv;\n> CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002', '0003');\n> CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007', '0009');\n> INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n> ANALYZE plt2_adv;\n> -- inner join\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n> server closed the connection unexpectedly\n>         This probably means the server terminated abnormally\n>         before or while processing the request.\n> connection to server was lost\n>\n>\n> --stack-trace\n> Core was generated by `postgres: edb regression [local] EXPLAIN                                      '.\n> Program terminated with signal 6, Aborted.\n> #0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> Missing 
separate debuginfos, use: debuginfo-install glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64\n> (gdb) bt\n> #0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> #1  0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n> #2  0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)\n>     at assert.c:69\n> #3  0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0, part_index=-1) at partbounds.c:1957\n> #4  0x00000000008919bd in merge_list_bounds (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n>     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1529\n> #5  0x00000000008910de in partition_bounds_merge (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n>     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1223\n> #6  0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>     parts2=0x7fffd67751a8) at joinrels.c:1644\n> #7  0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n>     at joinrels.c:1402\n> #8  0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)\n>     at joinrels.c:926\n> #9  0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938) at 
joinrels.c:760\n> #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0, old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at joinrels.c:312\n> #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0, level=2) at joinrels.c:123\n> #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0, levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n> #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:2951\n> #14 0x000000000080816b in make_one_rel (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:228\n> #15 0x000000000084491d in query_planner (root=0x1a19ed0, qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at planmain.c:276\n> #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0, tuple_fraction=0) at planner.c:1447\n> #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8, parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1025\n> #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:406\n> #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:277\n> #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at postgres.c:847\n> #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290, 
cursorOptions=2048, into=0x0, es=0x19b36f0,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     params=0x0, queryEnv=0x0) at explain.c:397\n> #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410, stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n> #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n> #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n> #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40, pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378, qc=0x7fffd6775f90) at pquery.c:1147\n> #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40, isTopLevel=true) at pquery.c:1026\n> #27 0x000000000097ed11 in PortalRun (portal=0x1893b40, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8, altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n> #28 0x0000000000978aa5 in exec_simple_query (\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n> On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>> > 
No need\n>> to palloc PartitionListValue repeatedly for every datum as done\n>> further down as follows:\n>>\n>> +           all_values[j] = (PartitionListValue *)\n>> palloc(sizeof(PartitionListValue));\n>>\n>> You do need the following two though:\n>>\n>> +           all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>> sizeof(Datum));\n>> +           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>> sizeof(bool));\n>>\n>> If you change the above the way I suggest, you'd also need to revert\n>> the following change:\n>>\n>> -   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>> +   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>               qsort_partition_list_value_cmp, (void *) key);\n>>\n>> +       int         orig_index = all_values[i]->index;\n>> +       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>>\n>> Missing a newline between these two statements.\n>>\n>> BTW, I noticed that the boundDatums variable is no longer used in\n>> create_list_bounds.  I traced back its origin and found that a recent\n>> commit 53d86957e98 introduced it to implement an idea to reduce the\n>> finer-grained pallocs that were being done in create_list_bounds().  I\n>> don't think that this patch needs to throw away that work.  
You can\n>> make it work as the attached delta patch that applies on top of v3.\n>> Please check.\n>>\n>> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>>     if (b1->nindexes != b2->nindexes)\n>>         return false;\n>>\n>> -   if (b1->null_index != b2->null_index)\n>> +   if (get_partition_bound_null_index(b1) !=\n>> get_partition_bound_null_index(b2))\n>>\n>> As mentioned in the last message, this bit in partition_bounds_equal()\n>> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>> after fixing get_partition_bound_null_index() as previously mentioned.\n>>\n>> But...\n>>\n>> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>>                  * context.  datumIsEqual() should be simple enough to be\n>>                  * safe.\n>>                  */\n>> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>> +               if (b1->isnulls)\n>> +                   b1_isnull = b1->isnulls[i][j];\n>> +               if (b2->isnulls)\n>> +                   b2_isnull = b2->isnulls[i][j];\n>> +\n>> +               /*\n>> +                * If any of the partition bound has NULL value, then check\n>> +                * equality for the NULL value instead of comparing the datums\n>> +                * as it does not contain valid value in case of NULL.\n>> +                */\n>> +               if (b1_isnull || b2_isnull)\n>> +               {\n>> +                   if (b1_isnull != b2_isnull)\n>> +                       return false;\n>> +               }\n>>\n>> ...if you have this in the main loop, I don't think we need the above\n>> code stanza which appears to implement a short-cut for this long-form\n>> logic.\n>>\n>> +               (key->strategy != PARTITION_STRATEGY_LIST ||\n>> +                !src->isnulls[i][j]))\n>>\n>> I think it's better to write this condition as follows just like the\n>> 
accompanying condition involving src->kind:\n>>\n>>     (src->nulls == NULL || !src->isnulls[i][j])\n>>\n>> (Skipped looking at merge_list_bounds() and related changes for now as\n>> I see a lot of TODOs remain to be done.)\n>>\n>> In check_new_partition_bound():\n>>\n>> +                       Datum      *values = (Datum *)\n>> palloc0(key->partnatts * sizeof(Datum));\n>> +                       bool       *isnulls = (bool *)\n>> palloc0(key->partnatts * sizeof(bool));\n>>\n>> Doesn't seem like a bad idea to declare these as:\n>>\n>>     Datum    values[PARTITION_MAX_KEYS];\n>>     bool        isnulls[PARTITION_MAX_KEYS];\n>>\n>>\n>> I looked at get_qual_for_list_multi_column() and immediately thought\n>> that it may be a bad idea.  I think it's better to integrate the logic\n>> for multi-column case into the existing function even if that makes\n>> the function appear more complex.  Having two functions with the same\n>> goal and mostly the same code is not a good idea mainly because it\n>> becomes a maintenance burden.\n>>\n>> I have attempted a rewrite such that get_qual_for_list() now handles\n>> both the single-column and multi-column cases.  Changes included in\n>> the delta patch.  The patch updates some outputs of the newly added\n>> tests for multi-column list partitions, because the new code emits the\n>> IS NOT NULL tests a bit differently than\n>> get_qual_for_list_mutli_column() would.  Notably, the old approach\n>> would emit IS NOT NULL for every non-NULL datum matched to a given\n>> column, not just once for the column.  However, the patch makes a few\n>> other tests fail, mainly because I had to fix\n>> partition_bound_accepts_nulls() to handle the multi-column case,\n>> though didn't bother to update all callers of it to also handle the\n>> multi-column case correctly.  I guess that's a TODO you're going to\n>> deal with at some point anyway. :)\n>>\n>> I still have more than half of v3 left to look at, so will continue\n>> looking.   
In the meantime, please check the changes I suggested,\n>> including the delta patch, and let me know your thoughts.\n>>\n>> --\n>> Amit Langote\n>> EDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Oct 2021 16:04:52 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks Rajkumar for testing.\n\n> I think it should throw an error as the partition by list has only 1\ncolumn but we are giving 2 values.\n\nI also agree that it should throw an error in the above case. Fixed the\nissue in the attached patch. Also added related test cases to the\nregression test suite.\n\n\n> also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\ninstead of ('0001','0001').\n\nNow throwing errors in the initial stage, this case doesn't arise.\n\nPlease share if you find any other issues.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\n\n\nOn Thu, Oct 7, 2021 at 4:05 PM Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> wrote:\n\n> Thanks Nitin,\n>\n> v4 patches applied cleanly and make check is passing now. While testing\n> further I observed that if multiple values are given for a single\n> column list partition it is not giving error instead it is changing values\n> itself. 
Please find the example below.\n>\n> postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);\n> CREATE TABLE\n> postgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n> (('0001','0001'),('0002','0002'));\n> CREATE TABLE\n> postgres=# \\d+ plt1;\n> Partitioned table \"public.plt1\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n>\n> --------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n> a | integer | | | | plain |\n> | |\n> b | character varying | | | | extended |\n> | |\n> Partition key: LIST (b)\n> Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')\n>\n> I think it should throw an error as the partition by list has only 1\n> column but we are giving 2 values.\n> also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n> instead of ('0001','0001').\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n>\n> On Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\n> wrote:\n>\n>> > > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n>> failing with below errors.\n>> >\n>> > Thanks Rajkumar for testing.\n>> >\n>> > Here's a v2 of the delta patch that should fix both of these test\n>> > failures. As I mentioned in my last reply, my delta patch fixed what\n>> > I think were problems in Nitin's v3 patch but were not complete by\n>> > themselves. Especially, I hadn't bothered to investigate various /*\n>> > TODO: handle multi-column list partitioning */ sites to deal with my\n>> > own changes.\n>>\n>> Thanks Rajkumar for testing and Thank you Amit for working on v2 of\n>> the delta patch. Actually I had done the code changes related to\n>> partition-wise join and I was in the middle of fixing the review\n>> comments, So I could not share the patch. 
Anyways thanks for your\n>> efforts.\n>>\n>> > I noticed that multi-column list partitions containing NULLs don't\n>> > work correctly with partition pruning yet.\n>> >\n>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1,\n>> false));\n>> > create table p02 partition of p0 for values in ((1, NULL, false));\n>> > explain select * from p0 where a is null;\n>> > QUERY PLAN\n>> > --------------------------------------------------------\n>> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>> > Filter: (a IS NULL)\n>> > (2 rows)\n>> >\n>> > In the attached updated version, I've dealt with some of those such\n>> > that at least the existing cases exercising partition pruning and\n>> > partition wise joins now pass.\n>>\n>> wrt partition pruning, I have checked the output of the above case\n>> with the v2 version of the delta patch and without that. The output\n>> remains the same. Kindly let me know if I am missing something. But I feel\n>> the above output is correct as the partition p01 is the only partition\n>> which contains NULL value for column a, hence it is showing \"Seq scan\n>> on p01\" in the output. Kindly correct me if I am wrong. I feel the\n>> code changes related to 'null_keys' are not required, hence not\n>> incorporated that in the attached patch.\n>>\n>> wrt partition-wise join, I had run the regression test (with new cases\n>> related to partition-wise join) on v2 of the delta patch and observed\n>> the crash. Hence I have not incorporated the partition-wise join\n>> related code from v2 of delta patch to main v4 patch. Instead I have\n>> added the partition-wise join related code done by me in the attached\n>> patch. Please share your thoughts and if possible we can improve the\n>> code. 
Rest of the changes looks good to me and I have incorporated\n>> that in the attached patch.\n>>\n>>\n>> > I guess that may be due to the following newly added code being\n>> incomplete:\n>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>> > multiple partitions can now contain NULL values.\n>>\n>> I feel this function is not required at all as we are not separating\n>> the non null and null partitions now. Removed in the attached patch.\n>> Also removed the \"scan_null' variable from the structure\n>> \"PruneStepResult\" and cleaned up the corresponding code blocks.\n>>\n>>\n>> > This function name may be too generic. Given that it is specific to\n>> > implementing list bound de-duplication, maybe the following signature\n>> > is more appropriate:\n>> >\n>> > static bool\n>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>\n>> Yes. The function name looks more generic. How about using\n>> \"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\n>> let me know if that does not look correct.\n>>\n>>\n>> > Also, better if the function comment mentions those parameter names,\n>> like:\n>> >\n>> > \"Returns TRUE if the list bound element 'new_bound' is already present\n>> > in the target list 'list_bounds', FALSE otherwise.\"\n>>\n>> Fixed.\n>>\n>>\n>> > +/*\n>> > + * transformPartitionListBounds\n>> > + *\n>> > + * Converts the expressions of list partition bounds from the raw\n>> grammar\n>> > + * representation.\n>> >\n>> > A sentence about the result format would be helpful, like:\n>> >\n>> > The result is a List of Lists of Const nodes to account for the\n>> > partition key possibly containing more than one column.\n>>\n>> Fixed.\n>>\n>>\n>> > + int i = 0;\n>> > + int j = 0;\n>> >\n>> > Better to initialize such loop counters closer to the loop.\n>>\n>> Fixed in all the places.\n>>\n>>\n>> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n>> > + colname[i] = get_attname(RelationGetRelid(parent),\n>> > + key->partattrs[i], false);\n>> >\n>> > The palloc in the 1st statement is wasteful, because the 2nd statement\n>> > overwrites its pointer by the pointer to the string palloc'd by\n>> > get_attname().\n>>\n>> Removed the 1st statement as it is not required.\n>>\n>>\n>> > + ListCell *cell2 = NULL;\n>> >\n>> > No need to explicitly initialize the loop variable.\n>>\n>> Fixed in all the places.\n>>\n>>\n>> > + RowExpr *rowexpr = NULL;\n>> > +\n>> > + if (!IsA(expr, RowExpr))\n>> > + ereport(ERROR,\n>> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> > + errmsg(\"Invalid list bound specification\"),\n>> > + parser_errposition(pstate, exprLocation((Node\n>> > *) spec))));\n>> > +\n>> > + rowexpr = (RowExpr *) expr;\n>> >\n>> > It's okay to assign rowexpr at the top here instead of the dummy\n>> > NULL-initialization and write the condition as:\n>> >\n>> > if (!IsA(rowexpr, RowExpr))\n>>\n>> Fixed.\n>>\n>>\n>> > + if (isDuplicate)\n>> > + continue;\n>> > +\n>> > + result = 
lappend(result, values);\n>> >\n>> > I can see you copied this style from the existing code, but how about\n>> > writing this simply as:\n>> >\n>> > if (!isDuplicate)\n>> > result = lappend(result, values);\n>>\n>> This looks good. I have changed in the patch.\n>>\n>>\n>> > -/* One value coming from some (index'th) list partition */\n>> > +/* One bound of a list partition */\n>> > typedef struct PartitionListValue\n>> > {\n>> > int index;\n>> > - Datum value;\n>> > + Datum *values;\n>> > + bool *isnulls;\n>> > } PartitionListValue;\n>> >\n>> > Given that this is a locally-defined struct, I wonder if it makes\n>> > sense to rename the struct while we're at it. Call it, say,\n>> > PartitionListBound?\n>>\n>> Yes. PartitionListBound looks more appropriate and it also matches the\n>> similar structures of the other partition strategies.\n>>\n>> > Also, please keep part of the existing comment that says that the\n>> > bound belongs to index'th partition.\n>>\n>> Retained the old comment.\n>>\n>>\n>> > + * partition_bound_accepts_nulls\n>> > + *\n>> > + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>> > */\n>> >\n>> > I suggest slight rewording, as follows:\n>> >\n>> > \"Returns TRUE if any of the partition bounds contains a NULL value,\n>> > FALSE otherwise.\"\n>>\n>> Fixed.\n>>\n>>\n>> > - PartitionListValue *all_values;\n>> > + PartitionListValue **all_values;\n>> > ...\n>> > - all_values = (PartitionListValue *)\n>> > - palloc(ndatums * sizeof(PartitionListValue));\n>> > + ndatums = get_list_datum_count(boundspecs, nparts);\n>> > + all_values = (PartitionListValue **)\n>> > + palloc(ndatums * sizeof(PartitionListValue *));\n>> >\n>> > I don't see the need to redefine all_values's pointer type. 
No need\n>> > to palloc PartitionListValue repeatedly for every datum as done\n>> > further down as follows:\n>> >\n>> > + all_values[j] = (PartitionListValue *)\n>> > palloc(sizeof(PartitionListValue));\n>> >\n>> > You do need the following two though:\n>> >\n>> > + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>> > sizeof(Datum));\n>> > + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>> > sizeof(bool));\n>> >\n>> > If you change the above the way I suggest, you'd also need to revert\n>> > the following change:\n>> >\n>> > - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>> > + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>> > qsort_partition_list_value_cmp, (void *) key);\n>> >\n>> > + int orig_index = all_values[i]->index;\n>> > + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>> sizeof(Datum));\n>> >\n>> > Missing a newline between these two statements.\n>>\n>> Fixed. Made necessary changes to keep the intent of existing code.\n>>\n>>\n>> > @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>> > *parttyplen, bool *parttypbyval,\n>> > if (b1->nindexes != b2->nindexes)\n>> > return false;\n>> >\n>> > - if (b1->null_index != b2->null_index)\n>> > + if (get_partition_bound_null_index(b1) !=\n>> > get_partition_bound_null_index(b2))\n>> >\n>> > As mentioned in the last message, this bit in partition_bounds_equal()\n>> > needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>> > after fixing get_partition_bound_null_index() as previously mentioned.\n>>\n>> As mentioned earlier, removed the functionality of\n>> get_partition_bound_null_index(), hence the above condition is not\n>> required and removed.\n>>\n>> > But...\n>> >\n>> > @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>> > *parttyplen, bool *parttypbyval,\n>> > * context. 
datumIsEqual() should be simple enough to be\n>> > * safe.\n>> > */\n>> > - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>> > + if (b1->isnulls)\n>> > + b1_isnull = b1->isnulls[i][j];\n>> > + if (b2->isnulls)\n>> > + b2_isnull = b2->isnulls[i][j];\n>> > +\n>> > + /*\n>> > + * If any of the partition bound has NULL value, then\n>> check\n>> > + * equality for the NULL value instead of comparing the\n>> datums\n>> > + * as it does not contain valid value in case of NULL.\n>> > + */\n>> > + if (b1_isnull || b2_isnull)\n>> > + {\n>> > + if (b1_isnull != b2_isnull)\n>> > + return false;\n>> > + }\n>> >\n>> > ...if you have this in the main loop, I don't think we need the above\n>> > code stanza which appears to implement a short-cut for this long-form\n>> > logic.\n>>\n>> Yes. Maybe we could have ignored the above code stanza if we were\n>> comparing the null indexes using get_partition_bound_null_index()\n>> at the beginning of the function. But since we are not separating the\n>> non null partitions and null partitions, I would like to keep the\n>> logic in the inner loop as we are doing it for non null bound values\n>> in the above code stanza, just to give a feel that null bound values\n>> are also handled the same way as non null values. Please correct me if\n>> I am wrong.\n>>\n>>\n>> > + (key->strategy != PARTITION_STRATEGY_LIST ||\n>> > + !src->isnulls[i][j]))\n>> >\n>> > I think it's better to write this condition as follows just like the\n>> > accompanying condition involving src->kind:\n>> >\n>> > (src->nulls == NULL || !src->isnulls[i][j])\n>>\n>> Fixed.\n>>\n>>\n>> > In check_new_partition_bound():\n>> >\n>> > + Datum *values = (Datum *)\n>> > palloc0(key->partnatts * sizeof(Datum));\n>> > + bool *isnulls = (bool *)\n>> > palloc0(key->partnatts * sizeof(bool));\n>> >\n>> > Doesn't seem like a bad idea to declare these as:\n>> >\n>> > Datum values[PARTITION_MAX_KEYS];\n>> > bool isnulls[PARTITION_MAX_KEYS];\n>>\n>> Thanks for the suggestion. 
I have changed as above.\n>>\n>> > I looked at get_qual_for_list_multi_column() and immediately thought\n>> > that it may be a bad idea. I think it's better to integrate the logic\n>> > for multi-column case into the existing function even if that makes\n>> > the function appear more complex. Having two functions with the same\n>> > goal and mostly the same code is not a good idea mainly because it\n>> > becomes a maintenance burden.\n>>\n>> Actually I had written a separate function because of the complexity.\n>> Now I have understood that since the objective is same, it should be\n>> done in a single function irrespective of complexity.\n>>\n>> > I have attempted a rewrite such that get_qual_for_list() now handles\n>> > both the single-column and multi-column cases. Changes included in\n>> > the delta patch. The patch updates some outputs of the newly added\n>> > tests for multi-column list partitions, because the new code emits the\n>> > IS NOT NULL tests a bit differently than\n>> > get_qual_for_list_mutli_column() would. Notably, the old approach\n>> > would emit IS NOT NULL for every non-NULL datum matched to a given\n>> > column, not just once for the column. However, the patch makes a few\n>> > other tests fail, mainly because I had to fix\n>> > partition_bound_accepts_nulls() to handle the multi-column case,\n>> > though didn't bother to update all callers of it to also handle the\n>> > multi-column case correctly. I guess that's a TODO you're going to\n>> > deal with at some point anyway. :)\n>>\n>> Thank you very much for your efforts. The changes looks good to me and\n>> I have incorporated these changes in the attached patch.\n>>\n>> I have completed the coding for all the TODOs and hence removed in the\n>> patch. The naming conventions used for function/variable names varies\n>> across the files. Some places it is like 'namesLikeThis' and in some\n>> place it is like 'names_like_this'. 
I have used the naming conventions\n>> based on the surrounding styles used. I am happy to change those if\n>> required.\n>>\n>> I have verified 'make check' with the attached patch and it is working\n>> fine.\n>>\n>>\n>> Thanks & Regards,\n>> Nitin Jadhav\n>>\n>>\n>> On Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n>> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>> >\n>> > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n>> failing with below errors.\n>> >\n>> > --inherit.sql is failing with error :\"ERROR: negative bitmapset member\n>> not allowed\"\n>> > update mlparted_tab mlp set c = 'xxx'\n>> > from\n>> > (select a from some_tab union all select a+1 from some_tab) ss (a)\n>> > where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n>> > ERROR: negative bitmapset member not allowed\n>> >\n>> > --partition_join.sql is crashing with enable_partitionwise_join set to\n>> true.\n>> > CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n>> > CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001',\n>> '0003');\n>> > CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004',\n>> '0006');\n>> > CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008',\n>> '0009');\n>> > INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>> generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n>> > ANALYZE plt1_adv;\n>> > CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n>> > CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002',\n>> '0003');\n>> > CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004',\n>> '0006');\n>> > CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007',\n>> '0009');\n>> > INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>> generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n>> > ANALYZE plt2_adv;\n>> > -- inner join\n>> > EXPLAIN (COSTS OFF)\n>> > SELECT t1.a, t1.c, t2.a, t2.c 
FROM plt1_adv t1 INNER JOIN plt2_adv t2\n>> ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n>> > server closed the connection unexpectedly\n>> > This probably means the server terminated abnormally\n>> > before or while processing the request.\n>> > connection to server was lost\n>> >\n>> >\n>> > --stack-trace\n>> > Core was generated by `postgres: edb regression [local] EXPLAIN\n>> '.\n>> > Program terminated with signal 6, Aborted.\n>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>> > Missing separate debuginfos, use: debuginfo-install\n>> glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\n>> krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64\n>> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64\n>> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\n>> zlib-1.2.7-17.el7.x86_64\n>> > (gdb) bt\n>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>> > #1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n>> > #2 0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10\n>> \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe\n>> \"partbounds.c\", lineNumber=1957)\n>> > at assert.c:69\n>> > #3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0,\n>> part_index=-1) at partbounds.c:1957\n>> > #4 0x00000000008919bd in merge_list_bounds (partnatts=1,\n>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>> partbounds.c:1529\n>> > #5 0x00000000008910de in partition_bounds_merge (partnatts=1,\n>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>> partbounds.c:1223\n>> > #6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0,\n>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>> 
parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>> > parts2=0x7fffd67751a8) at joinrels.c:1644\n>> > #7 0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0,\n>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>> parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n>> > at joinrels.c:1402\n>> > #8 0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0,\n>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0,\n>> restrictlist=0x1ab3318)\n>> > at joinrels.c:926\n>> > #9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0,\n>> rel1=0x19b37c0, rel2=0x1922938) at joinrels.c:760\n>> > #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0,\n>> old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at\n>> joinrels.c:312\n>> > #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0,\n>> level=2) at joinrels.c:123\n>> > #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0,\n>> levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n>> > #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0,\n>> joinlist=0x199d538) at allpaths.c:2951\n>> > #14 0x000000000080816b in make_one_rel (root=0x1a19ed0,\n>> joinlist=0x199d538) at allpaths.c:228\n>> > #15 0x000000000084491d in query_planner (root=0x1a19ed0,\n>> qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at\n>> planmain.c:276\n>> > #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0,\n>> tuple_fraction=0) at planner.c:1447\n>> > #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8,\n>> parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\n>> planner.c:1025\n>> > #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>> boundParams=0x0) 
at planner.c:406\n>> > #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>> boundParams=0x0) at planner.c:277\n>> > #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>> boundParams=0x0) at postgres.c:847\n>> > #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290,\n>> cursorOptions=2048, into=0x0, es=0x19b36f0,\n>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>> > params=0x0, queryEnv=0x0) at explain.c:397\n>> > #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410,\n>> stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n>> > #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n>> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n>> > #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n>> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n>> > #25 0x000000000097f636 in 
PortalRunUtility (portal=0x1893b40,\n>> pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378,\n>> qc=0x7fffd6775f90) at pquery.c:1147\n>> > #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40,\n>> isTopLevel=true) at pquery.c:1026\n>> > #27 0x000000000097ed11 in PortalRun (portal=0x1893b40,\n>> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8,\n>> altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n>> > #28 0x0000000000978aa5 in exec_simple_query (\n>> >\n>> > Thanks & Regards,\n>> > Rajkumar Raghuwanshi\n>> >\n>> >\n>> > On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com>\n>> wrote:\n>> >>\n>> >> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com>\n>> wrote:\n>> >> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>> >> > <nitinjadhavpostgres@gmail.com> wrote:\n>> >> > > The attached patch also fixes the above comments.\n>> >> >\n>> >> > I noticed that multi-column list partitions containing NULLs don't\n>> >> > work correctly with partition pruning yet.\n>> >> >\n>> >> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>> >> > create table p01 partition of p0 for values in ((1, 1, true), (NULL,\n>> 1, false));\n>> >> > create table p02 partition of p0 for values in ((1, NULL, false));\n>> >> > explain select * from p0 where a is null;\n>> >> > QUERY PLAN\n>> >> > --------------------------------------------------------\n>> >> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>> >> > Filter: (a IS NULL)\n>> >> > (2 rows)\n>> >> >\n>> >> > I guess that may be due to the following newly added code being\n>> incomplete:\n>> >> >\n>> >> > +/*\n>> >> > + * get_partition_bound_null_index\n>> >> > + *\n>> >> > + * Returns the partition index of the partition bound which accepts\n>> NULL.\n>> >> > + */\n>> >> > +int\n>> >> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>> >> > +{\n>> >> > + int i = 0;\n>> >> > + int j = 0;\n>> >> > +\n>> >> 
> + if (!boundinfo->isnulls)\n>> >> > + return -1;\n>> >> >\n>> >> > - if (!val->constisnull)\n>> >> > - count++;\n>> >> > + for (i = 0; i < boundinfo->ndatums; i++)\n>> >> > + {\n>> >> > + //TODO: Handle for multi-column cases\n>> >> > + for (j = 0; j < 1; j++)\n>> >> > + {\n>> >> > + if (boundinfo->isnulls[i][j])\n>> >> > + return boundinfo->indexes[i];\n>> >> > }\n>> >> > }\n>> >> >\n>> >> > + return -1;\n>> >> > +}\n>> >> >\n>> >> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>> >> > multiple partitions can now contain NULL values.\n>> >> >\n>> >> > Some other issues I noticed and suggestions for improvement:\n>> >> >\n>> >> > +/*\n>> >> > + * checkForDuplicates\n>> >> > + *\n>> >> > + * Returns TRUE if the list bound element is already present in the\n>> list of\n>> >> > + * list bounds, FALSE otherwise.\n>> >> > + */\n>> >> > +static bool\n>> >> > +checkForDuplicates(List *source, List *searchElem)\n>> >> >\n>> >> > This function name may be too generic. 
Given that it is specific to\n>> >> > implementing list bound de-duplication, maybe the following signature\n>> >> > is more appropriate:\n>> >> >\n>> >> > static bool\n>> >> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>> >> >\n>> >> > Also, better if the function comment mentions those parameter names,\n>> like:\n>> >> >\n>> >> > \"Returns TRUE if the list bound element 'new_bound' is already\n>> present\n>> >> > in the target list 'list_bounds', FALSE otherwise.\"\n>> >> >\n>> >> > +/*\n>> >> > + * transformPartitionListBounds\n>> >> > + *\n>> >> > + * Converts the expressions of list partition bounds from the raw\n>> grammar\n>> >> > + * representation.\n>> >> >\n>> >> > A sentence about the result format would be helpful, like:\n>> >> >\n>> >> > The result is a List of Lists of Const nodes to account for the\n>> >> > partition key possibly containing more than one column.\n>> >> >\n>> >> > + int i = 0;\n>> >> > + int j = 0;\n>> >> >\n>> >> > Better to initialize such loop counters closer to the loop.\n>> >> >\n>> >> > + colname[i] = (char *) palloc0(NAMEDATALEN *\n>> sizeof(char));\n>> >> > + colname[i] = get_attname(RelationGetRelid(parent),\n>> >> > + key->partattrs[i], false);\n>> >> >\n>> >> > The palloc in the 1st statement is wasteful, because the 2nd\n>> statement\n>> >> > overwrites its pointer by the pointer to the string palloc'd by\n>> >> > get_attname().\n>> >> >\n>> >> > + ListCell *cell2 = NULL;\n>> >> >\n>> >> > No need to explicitly initialize the loop variable.\n>> >> >\n>> >> > + RowExpr *rowexpr = NULL;\n>> >> > +\n>> >> > + if (!IsA(expr, RowExpr))\n>> >> > + ereport(ERROR,\n>> >> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> >> > + errmsg(\"Invalid list bound specification\"),\n>> >> > + parser_errposition(pstate, exprLocation((Node\n>> >> > *) spec))));\n>> >> > +\n>> >> > + rowexpr = (RowExpr *) expr;\n>> >> >\n>> >> > It's okay to assign rowexpr at the top here instead of the dummy\n>> >> > NULL-initialization 
and write the condition as:\n>> >> >\n>> >> > if (!IsA(rowexpr, RowExpr))\n>> >> >\n>> >> > + if (isDuplicate)\n>> >> > + continue;\n>> >> > +\n>> >> > + result = lappend(result, values);\n>> >> >\n>> >> > I can see you copied this style from the existing code, but how about\n>> >> > writing this simply as:\n>> >> >\n>> >> > if (!isDuplicate)\n>> >> > result = lappend(result, values);\n>> >> >\n>> >> > -/* One value coming from some (index'th) list partition */\n>> >> > +/* One bound of a list partition */\n>> >> > typedef struct PartitionListValue\n>> >> > {\n>> >> > int index;\n>> >> > - Datum value;\n>> >> > + Datum *values;\n>> >> > + bool *isnulls;\n>> >> > } PartitionListValue;\n>> >> >\n>> >> > Given that this is a locally-defined struct, I wonder if it makes\n>> >> > sense to rename the struct while we're at it. Call it, say,\n>> >> > PartitionListBound?\n>> >> >\n>> >> > Also, please keep part of the existing comment that says that the\n>> >> > bound belongs to index'th partition.\n>> >> >\n>> >> > Will send more comments in a bit...\n>> >>\n>> >> + * partition_bound_accepts_nulls\n>> >> + *\n>> >> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>> >> */\n>> >>\n>> >> I suggest slight rewording, as follows:\n>> >>\n>> >> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>> >> FALSE otherwise.\"\n>> >>\n>> >> - PartitionListValue *all_values;\n>> >> + PartitionListValue **all_values;\n>> >> ...\n>> >> - all_values = (PartitionListValue *)\n>> >> - palloc(ndatums * sizeof(PartitionListValue));\n>> >> + ndatums = get_list_datum_count(boundspecs, nparts);\n>> >> + all_values = (PartitionListValue **)\n>> >> + palloc(ndatums * sizeof(PartitionListValue *));\n>> >>\n>> >> I don't see the need to redefine all_values's pointer type. 
No need\n>> >> to palloc PartitionListValue repeatedly for every datum as done\n>> >> further down as follows:\n>> >>\n>> >> + all_values[j] = (PartitionListValue *)\n>> >> palloc(sizeof(PartitionListValue));\n>> >>\n>> >> You do need the following two though:\n>> >>\n>> >> + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>> >> sizeof(Datum));\n>> >> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>> >> sizeof(bool));\n>> >>\n>> >> If you change the above the way I suggest, you'd also need to revert\n>> >> the following change:\n>> >>\n>> >> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>> >> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>> >> qsort_partition_list_value_cmp, (void *) key);\n>> >>\n>> >> + int orig_index = all_values[i]->index;\n>> >> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>> sizeof(Datum));\n>> >>\n>> >> Missing a newline between these two statements.\n>> >>\n>> >> BTW, I noticed that the boundDatums variable is no longer used in\n>> >> create_list_bounds. I traced back its origin and found that a recent\n>> >> commit 53d86957e98 introduced it to implement an idea to reduce the\n>> >> finer-grained pallocs that were being done in create_list_bounds(). I\n>> >> don't think that this patch needs to throw away that work. 
You can\n>> >> make it work as the attached delta patch that applies on top of v3.\n>> >> Please check.\n>> >>\n>> >> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>> >> *parttyplen, bool *parttypbyval,\n>> >> if (b1->nindexes != b2->nindexes)\n>> >> return false;\n>> >>\n>> >> - if (b1->null_index != b2->null_index)\n>> >> + if (get_partition_bound_null_index(b1) !=\n>> >> get_partition_bound_null_index(b2))\n>> >>\n>> >> As mentioned in the last message, this bit in partition_bounds_equal()\n>> >> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>> >> after fixing get_partition_bound_null_index() as previously mentioned.\n>> >>\n>> >> But...\n>> >>\n>> >> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>> >> *parttyplen, bool *parttypbyval,\n>> >> * context. datumIsEqual() should be simple enough to\n>> be\n>> >> * safe.\n>> >> */\n>> >> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>> >> + if (b1->isnulls)\n>> >> + b1_isnull = b1->isnulls[i][j];\n>> >> + if (b2->isnulls)\n>> >> + b2_isnull = b2->isnulls[i][j];\n>> >> +\n>> >> + /*\n>> >> + * If any of the partition bound has NULL value, then\n>> check\n>> >> + * equality for the NULL value instead of comparing\n>> the datums\n>> >> + * as it does not contain valid value in case of NULL.\n>> >> + */\n>> >> + if (b1_isnull || b2_isnull)\n>> >> + {\n>> >> + if (b1_isnull != b2_isnull)\n>> >> + return false;\n>> >> + }\n>> >>\n>> >> ...if you have this in the main loop, I don't think we need the above\n>> >> code stanza which appears to implement a short-cut for this long-form\n>> >> logic.\n>> >>\n>> >> + (key->strategy != PARTITION_STRATEGY_LIST ||\n>> >> + !src->isnulls[i][j]))\n>> >>\n>> >> I think it's better to write this condition as follows just like the\n>> >> accompanying condition involving src->kind:\n>> >>\n>> >> (src->nulls == NULL || !src->isnulls[i][j])\n>> >>\n>> >> (Skipped looking at merge_list_bounds() and related changes for 
now as\n>> >> I see a lot of TODOs remain to be done.)\n>> >>\n>> >> In check_new_partition_bound():\n>> >>\n>> >> + Datum *values = (Datum *)\n>> >> palloc0(key->partnatts * sizeof(Datum));\n>> >> + bool *isnulls = (bool *)\n>> >> palloc0(key->partnatts * sizeof(bool));\n>> >>\n>> >> Doesn't seem like a bad idea to declare these as:\n>> >>\n>> >> Datum values[PARTITION_MAX_KEYS];\n>> >> bool isnulls[PARTITION_MAX_KEYS];\n>> >>\n>> >>\n>> >> I looked at get_qual_for_list_multi_column() and immediately thought\n>> >> that it may be a bad idea. I think it's better to integrate the logic\n>> >> for multi-column case into the existing function even if that makes\n>> >> the function appear more complex. Having two functions with the same\n>> >> goal and mostly the same code is not a good idea mainly because it\n>> >> becomes a maintenance burden.\n>> >>\n>> >> I have attempted a rewrite such that get_qual_for_list() now handles\n>> >> both the single-column and multi-column cases. Changes included in\n>> >> the delta patch. The patch updates some outputs of the newly added\n>> >> tests for multi-column list partitions, because the new code emits the\n>> >> IS NOT NULL tests a bit differently than\n>> >> get_qual_for_list_mutli_column() would. Notably, the old approach\n>> >> would emit IS NOT NULL for every non-NULL datum matched to a given\n>> >> column, not just once for the column. However, the patch makes a few\n>> >> other tests fail, mainly because I had to fix\n>> >> partition_bound_accepts_nulls() to handle the multi-column case,\n>> >> though didn't bother to update all callers of it to also handle the\n>> >> multi-column case correctly. I guess that's a TODO you're going to\n>> >> deal with at some point anyway. :)\n>> >>\n>> >> I still have more than half of v3 left to look at, so will continue\n>> >> looking. 
In the meantime, please check the changes I suggested,\n>> >> including the delta patch, and let me know your thoughts.\n>> >>\n>> >> --\n>> >> Amit Langote\n>> >> EDB: http://www.enterprisedb.com\n>>\n>", "msg_date": "Thu, 7 Oct 2021 18:04:16 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks for the patch, it applied cleanly and fixed the reported issue. I\nobserved another case where\nIn case of multi-col list partition on the same column query is not picking\npartition wise join. Is this expected?\n\nCREATE TABLE plt1 (a int, b int, c varchar) PARTITION BY LIST(c,c);\nCREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n(('0001','0001'),('0002','0002'),('0003','0003'));\nCREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN\n(('0004','0004'),('0005','0005'),('0006','0006'));\nCREATE TABLE plt1_p3 PARTITION OF plt1 DEFAULT;\nINSERT INTO plt1 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM\ngenerate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\nANALYSE plt1;\nCREATE TABLE plt2 (a int, b int, c varchar) PARTITION BY LIST(c,c);\nCREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN\n(('0001','0001'),('0002','0002'),('0003','0003'));\nCREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN\n(('0004','0004'),('0005','0005'),('0006','0006'));\nCREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;\nINSERT INTO plt2 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM\ngenerate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\nANALYSE plt2;\nSET enable_partitionwise_join TO true;\nEXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN\nplt2 t2 ON t1.c = t2.c;\n\npostgres=# EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1\nINNER JOIN plt2 t2 ON t1.c = t2.c;\n QUERY PLAN\n--------------------------------------------\n Hash Join\n Hash Cond: ((t1.c)::text = (t2.c)::text)\n -> Append\n -> Seq Scan on plt1_p1 t1_1\n -> Seq Scan on plt1_p2 
t1_2\n -> Seq Scan on plt1_p3 t1_3\n -> Hash\n -> Append\n -> Seq Scan on plt2_p1 t2_1\n -> Seq Scan on plt2_p2 t2_2\n -> Seq Scan on plt2_p3 t2_3\n(11 rows)\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\n\nOn Thu, Oct 7, 2021 at 6:03 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> Thanks Rajkumar for testing.\n>\n> > I think it should throw an error as the partition by list has only 1\n> column but we are giving 2 values.\n>\n> I also agree that it should throw an error in the above case. Fixed the\n> issue in the attached patch. Also added related test cases to the\n> regression test suite.\n>\n>\n> > also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n> instead of ('0001','0001').\n>\n> Now throwing errors in the initial stage, this case doesn't arise.\n>\n> Please share if you find any other issues.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n>\n>\n>\n>\n> On Thu, Oct 7, 2021 at 4:05 PM Rajkumar Raghuwanshi <\n> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n>> Thanks Nitin,\n>>\n>> v4 patches applied cleanly and make check is passing now. While testing\n>> further I observed that if multiple values are given for a single\n>> column list partition it is not giving error instead it is changing\n>> values itself. 
Please find the example below.\n>>\n>> postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);\n>> CREATE TABLE\n>> postgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n>> (('0001','0001'),('0002','0002'));\n>> CREATE TABLE\n>> postgres=# \\d+ plt1;\n>> Partitioned table \"public.plt1\"\n>> Column | Type | Collation | Nullable | Default | Storage |\n>> Compression | Stats target | Description\n>>\n>> --------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n>> a | integer | | | | plain |\n>> | |\n>> b | character varying | | | | extended |\n>> | |\n>> Partition key: LIST (b)\n>> Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')\n>>\n>> I think it should throw an error as the partition by list has only 1\n>> column but we are giving 2 values.\n>> also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n>> instead of ('0001','0001').\n>>\n>> Thanks & Regards,\n>> Rajkumar Raghuwanshi\n>>\n>>\n>>\n>> On Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <\n>> nitinjadhavpostgres@gmail.com> wrote:\n>>\n>>> > > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n>>> failing with below errors.\n>>> >\n>>> > Thanks Rajkumar for testing.\n>>> >\n>>> > Here's a v2 of the delta patch that should fix both of these test\n>>> > failures. As I mentioned in my last reply, my delta patch fixed what\n>>> > I think were problems in Nitin's v3 patch but were not complete by\n>>> > themselves. Especially, I hadn't bothered to investigate various /*\n>>> > TODO: handle multi-column list partitioning */ sites to deal with my\n>>> > own changes.\n>>>\n>>> Thanks Rajkumar for testing and Thank you Amit for working on v2 of\n>>> the delta patch. Actually I had done the code changes related to\n>>> partition-wise join and I was in the middle of fixing the review\n>>> comments, So I could not share the patch. 
Anyways thanks for your\n>>> efforts.\n>>>\n>>> > I noticed that multi-column list partitions containing NULLs don't\n>>> > work correctly with partition pruning yet.\n>>> >\n>>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL,\n>>> 1, false));\n>>> > create table p02 partition of p0 for values in ((1, NULL, false));\n>>> > explain select * from p0 where a is null;\n>>> > QUERY PLAN\n>>> > --------------------------------------------------------\n>>> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>>> > Filter: (a IS NULL)\n>>> > (2 rows)\n>>> >\n>>> > In the attached updated version, I've dealt with some of those such\n>>> > that at least the existing cases exercising partition pruning and\n>>> > partition wise joins now pass.\n>>>\n>>> wrt partition pruning, I have checked the output of the above case\n>>> with the v2 version of the delta patch and without that. The output\n>>> remains same. Kindly let me know if I am missing something. But I feel\n>>> the above output is correct as the partition p01 is the only partition\n>>> which contains NULL value for column a, hence it is showing \"Seq scan\n>>> on p01\" in the output. Kindly correct me if I am wrong. I feel the\n>>> code changes related to 'null_keys' is not required, hence not\n>>> incorporated that in the attached patch.\n>>>\n>>> wrt partition-wise join, I had run the regression test (with new cases\n>>> related to partition-wise join) on v2 of the delta patch and observed\n>>> the crash. Hence I have not incorporated the partition-wise join\n>>> related code from v2 of delta patch to main v4 patch. Instead I have\n>>> added the partition-wise join related code done by me in the attached\n>>> patch. Please share your thoughts and if possible we can improvise the\n>>> code. 
The rest of the changes look good to me, and I have incorporated\n>>> that in the attached patch.\n>>>\n>>>\n>>> > I guess that may be due to the following newly added code being\n>>> incomplete:\n>>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>>> > multiple partitions can now contain NULL values.\n>>>\n>>> I feel this function is not required at all as we are not separating\n>>> the non null and null partitions now. Removed in the attached patch.\n>>> Also removed the \"scan_null\" variable from the structure\n>>> \"PruneStepResult\" and cleaned up the corresponding code blocks.\n>>>\n>>>\n>>> > This function name may be too generic. Given that it is specific to\n>>> > implementing list bound de-duplication, maybe the following signature\n>>> > is more appropriate:\n>>> >\n>>> > static bool\n>>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>>\n>>> Yes. The existing function name does look too generic. How about using\n>>> \"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\n>>> let me know if that does not look correct.\n>>>\n>>>\n>>> > Also, better if the function comment mentions those parameter names,\n>>> like:\n>>> >\n>>> > \"Returns TRUE if the list bound element 'new_bound' is already present\n>>> > in the target list 'list_bounds', FALSE otherwise.\"\n>>>\n>>> Fixed.\n>>>\n>>>\n>>> > +/*\n>>> > + * transformPartitionListBounds\n>>> > + *\n>>> > + * Converts the expressions of list partition bounds from the raw\n>>> grammar\n>>> > + * representation.\n>>> >\n>>> > A sentence about the result format would be helpful, like:\n>>> >\n>>> > The result is a List of Lists of Const nodes to account for the\n>>> > partition key possibly containing more than one column.\n>>>\n>>> Fixed.\n>>>\n>>>\n>>> > + int i = 0;\n>>> > + int j = 0;\n>>> >\n>>> > Better to initialize such loop counters closer to the loop.\n>>>\n>>> Fixed in all the places.\n>>>\n>>>\n>>> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n>>> > + colname[i] = get_attname(RelationGetRelid(parent),\n>>> > + key->partattrs[i], false);\n>>> >\n>>> > The palloc in the 1st statement is wasteful, because the 2nd statement\n>>> > overwrites its pointer by the pointer to the string palloc'd by\n>>> > get_attname().\n>>>\n>>> Removed the 1st statement as it is not required.\n>>>\n>>>\n>>> > + ListCell *cell2 = NULL;\n>>> >\n>>> > No need to explicitly initialize the loop variable.\n>>>\n>>> Fixed in all the places.\n>>>\n>>>\n>>> > + RowExpr *rowexpr = NULL;\n>>> > +\n>>> > + if (!IsA(expr, RowExpr))\n>>> > + ereport(ERROR,\n>>> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>>> > + errmsg(\"Invalid list bound specification\"),\n>>> > + parser_errposition(pstate, exprLocation((Node\n>>> > *) spec))));\n>>> > +\n>>> > + rowexpr = (RowExpr *) expr;\n>>> >\n>>> > It's okay to assign rowexpr at the top here instead of the dummy\n>>> > NULL-initialization and write the condition as:\n>>> >\n>>> > if (!IsA(rowexpr, RowExpr))\n>>>\n>>> Fixed.\n>>>\n>>>\n>>> > + 
if (isDuplicate)\n>>> > + continue;\n>>> > +\n>>> > + result = lappend(result, values);\n>>> >\n>>> > I can see you copied this style from the existing code, but how about\n>>> > writing this simply as:\n>>> >\n>>> > if (!isDuplicate)\n>>> > result = lappend(result, values);\n>>>\n>>> This looks good. I have changed in the patch.\n>>>\n>>>\n>>> > -/* One value coming from some (index'th) list partition */\n>>> > +/* One bound of a list partition */\n>>> > typedef struct PartitionListValue\n>>> > {\n>>> > int index;\n>>> > - Datum value;\n>>> > + Datum *values;\n>>> > + bool *isnulls;\n>>> > } PartitionListValue;\n>>> >\n>>> > Given that this is a locally-defined struct, I wonder if it makes\n>>> > sense to rename the struct while we're at it. Call it, say,\n>>> > PartitionListBound?\n>>>\n>>> Yes. PartitionListBound looks more appropriate and it also matches the\n>>> similar structures of the other partition strategies.\n>>>\n>>> > Also, please keep part of the existing comment that says that the\n>>> > bound belongs to index'th partition.\n>>>\n>>> Retained the old comment.\n>>>\n>>>\n>>> > + * partition_bound_accepts_nulls\n>>> > + *\n>>> > + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>> > */\n>>> >\n>>> > I suggest slight rewording, as follows:\n>>> >\n>>> > \"Returns TRUE if any of the partition bounds contains a NULL value,\n>>> > FALSE otherwise.\"\n>>>\n>>> Fixed.\n>>>\n>>>\n>>> > - PartitionListValue *all_values;\n>>> > + PartitionListValue **all_values;\n>>> > ...\n>>> > - all_values = (PartitionListValue *)\n>>> > - palloc(ndatums * sizeof(PartitionListValue));\n>>> > + ndatums = get_list_datum_count(boundspecs, nparts);\n>>> > + all_values = (PartitionListValue **)\n>>> > + palloc(ndatums * sizeof(PartitionListValue *));\n>>> >\n>>> > I don't see the need to redefine all_values's pointer type. 
No need\n>>> > to palloc PartitionListValue repeatedly for every datum as done\n>>> > further down as follows:\n>>> >\n>>> > + all_values[j] = (PartitionListValue *)\n>>> > palloc(sizeof(PartitionListValue));\n>>> >\n>>> > You do need the following two though:\n>>> >\n>>> > + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>>> > sizeof(Datum));\n>>> > + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>>> > sizeof(bool));\n>>> >\n>>> > If you change the above the way I suggest, you'd also need to revert\n>>> > the following change:\n>>> >\n>>> > - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>>> > + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>> > qsort_partition_list_value_cmp, (void *) key);\n>>> >\n>>> > + int orig_index = all_values[i]->index;\n>>> > + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>>> sizeof(Datum));\n>>> >\n>>> > Missing a newline between these two statements.\n>>>\n>>> Fixed. Made necessary changes to keep the intent of existing code.\n>>>\n>>>\n>>> > @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>>> > *parttyplen, bool *parttypbyval,\n>>> > if (b1->nindexes != b2->nindexes)\n>>> > return false;\n>>> >\n>>> > - if (b1->null_index != b2->null_index)\n>>> > + if (get_partition_bound_null_index(b1) !=\n>>> > get_partition_bound_null_index(b2))\n>>> >\n>>> > As mentioned in the last message, this bit in partition_bounds_equal()\n>>> > needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>>> > after fixing get_partition_bound_null_index() as previously mentioned.\n>>>\n>>> As mentioned earlier, removed the functionality of\n>>> get_partition_bound_null_index(), hence the above condition is not\n>>> required and removed.\n>>>\n>>> > But...\n>>> >\n>>> > @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>>> > *parttyplen, bool *parttypbyval,\n>>> > * context. 
datumIsEqual() should be simple enough to\n>>> be\n>>> > * safe.\n>>> > */\n>>> > - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>>> > + if (b1->isnulls)\n>>> > + b1_isnull = b1->isnulls[i][j];\n>>> > + if (b2->isnulls)\n>>> > + b2_isnull = b2->isnulls[i][j];\n>>> > +\n>>> > + /*\n>>> > + * If any of the partition bound has NULL value, then\n>>> check\n>>> > + * equality for the NULL value instead of comparing\n>>> the datums\n>>> > + * as it does not contain valid value in case of NULL.\n>>> > + */\n>>> > + if (b1_isnull || b2_isnull)\n>>> > + {\n>>> > + if (b1_isnull != b2_isnull)\n>>> > + return false;\n>>> > + }\n>>> >\n>>> > ...if you have this in the main loop, I don't think we need the above\n>>> > code stanza which appears to implement a short-cut for this long-form\n>>> > logic.\n>>>\n>>> Yes. May be we could have ignored the above code stanza if we would\n>>> have comparing the null indexes using get_partition_bound_null_index()\n>>> in the beginning of the function. But hence we are not separating the\n>>> non null partitions and null partitions, I would like to keep the\n>>> logic in the inner loop as we are doing it for non null bound values\n>>> in the above code stanza, just to give a feel that null bound values\n>>> are also handled the same way as non null values. 
Please correct me if\n>>> I am wrong.\n>>>\n>>>\n>>> > + (key->strategy != PARTITION_STRATEGY_LIST ||\n>>> > + !src->isnulls[i][j]))\n>>> >\n>>> > I think it's better to write this condition as follows just like the\n>>> > accompanying condition involving src->kind:\n>>> >\n>>> > (src->nulls == NULL || !src->isnulls[i][j])\n>>>\n>>> Fixed.\n>>>\n>>>\n>>> > In check_new_partition_bound():\n>>> >\n>>> > + Datum *values = (Datum *)\n>>> > palloc0(key->partnatts * sizeof(Datum));\n>>> > + bool *isnulls = (bool *)\n>>> > palloc0(key->partnatts * sizeof(bool));\n>>> >\n>>> > Doesn't seem like a bad idea to declare these as:\n>>> >\n>>> > Datum values[PARTITION_MAX_KEYS];\n>>> > bool isnulls[PARTITION_MAX_KEYS];\n>>>\n>>> Thanks for the suggestion. I have changed as above.\n>>>\n>>> > I looked at get_qual_for_list_multi_column() and immediately thought\n>>> > that it may be a bad idea. I think it's better to integrate the logic\n>>> > for multi-column case into the existing function even if that makes\n>>> > the function appear more complex. Having two functions with the same\n>>> > goal and mostly the same code is not a good idea mainly because it\n>>> > becomes a maintenance burden.\n>>>\n>>> Actually I had written a separate function because of the complexity.\n>>> Now I have understood that since the objective is same, it should be\n>>> done in a single function irrespective of complexity.\n>>>\n>>> > I have attempted a rewrite such that get_qual_for_list() now handles\n>>> > both the single-column and multi-column cases. Changes included in\n>>> > the delta patch. The patch updates some outputs of the newly added\n>>> > tests for multi-column list partitions, because the new code emits the\n>>> > IS NOT NULL tests a bit differently than\n>>> > get_qual_for_list_mutli_column() would. Notably, the old approach\n>>> > would emit IS NOT NULL for every non-NULL datum matched to a given\n>>> > column, not just once for the column. 
However, the patch makes a few\n>>> > other tests fail, mainly because I had to fix\n>>> > partition_bound_accepts_nulls() to handle the multi-column case,\n>>> > though didn't bother to update all callers of it to also handle the\n>>> > multi-column case correctly. I guess that's a TODO you're going to\n>>> > deal with at some point anyway. :)\n>>>\n>>> Thank you very much for your efforts. The changes looks good to me and\n>>> I have incorporated these changes in the attached patch.\n>>>\n>>> I have completed the coding for all the TODOs and hence removed in the\n>>> patch. The naming conventions used for function/variable names varies\n>>> across the files. Some places it is like 'namesLikeThis' and in some\n>>> place it is like 'names_like_this'. I have used the naming conventions\n>>> based on the surrounding styles used. I am happy to change those if\n>>> required.\n>>>\n>>> I have verified 'make check' with the attached patch and it is working\n>>> fine.\n>>>\n>>>\n>>> Thanks & Regards,\n>>> Nitin Jadhav\n>>>\n>>>\n>>> On Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n>>> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>> >\n>>> > On PG head + Nitin's v3 patch + Amit's Delta patch. 
Make check is\n>>> failing with below errors.\n>>> >\n>>> > --inherit.sql is failing with error :\"ERROR: negative bitmapset\n>>> member not allowed\"\n>>> > update mlparted_tab mlp set c = 'xxx'\n>>> > from\n>>> > (select a from some_tab union all select a+1 from some_tab) ss (a)\n>>> > where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n>>> > ERROR: negative bitmapset member not allowed\n>>> >\n>>> > --partition_join.sql is crashing with enable_partitionwise_join set to\n>>> true.\n>>> > CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n>>> > CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001',\n>>> '0003');\n>>> > CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004',\n>>> '0006');\n>>> > CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008',\n>>> '0009');\n>>> > INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>>> generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n>>> > ANALYZE plt1_adv;\n>>> > CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n>>> > CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002',\n>>> '0003');\n>>> > CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004',\n>>> '0006');\n>>> > CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007',\n>>> '0009');\n>>> > INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>>> generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n>>> > ANALYZE plt2_adv;\n>>> > -- inner join\n>>> > EXPLAIN (COSTS OFF)\n>>> > SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2\n>>> ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n>>> > server closed the connection unexpectedly\n>>> > This probably means the server terminated abnormally\n>>> > before or while processing the request.\n>>> > connection to server was lost\n>>> >\n>>> >\n>>> > --stack-trace\n>>> > Core was generated by `postgres: edb regression [local] 
EXPLAIN\n>>> '.\n>>> > Program terminated with signal 6, Aborted.\n>>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>>> > Missing separate debuginfos, use: debuginfo-install\n>>> glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\n>>> krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64\n>>> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64\n>>> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\n>>> zlib-1.2.7-17.el7.x86_64\n>>> > (gdb) bt\n>>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>>> > #1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n>>> > #2 0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10\n>>> \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe\n>>> \"partbounds.c\", lineNumber=1957)\n>>> > at assert.c:69\n>>> > #3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0,\n>>> part_index=-1) at partbounds.c:1957\n>>> > #4 0x00000000008919bd in merge_list_bounds (partnatts=1,\n>>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>>> partbounds.c:1529\n>>> > #5 0x00000000008910de in partition_bounds_merge (partnatts=1,\n>>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>>> partbounds.c:1223\n>>> > #6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0,\n>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>> parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>>> > parts2=0x7fffd67751a8) at joinrels.c:1644\n>>> > #7 0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0,\n>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>> parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n>>> > at joinrels.c:1402\n>>> > #8 0x000000000082aea2 in 
populate_joinrel_with_paths (root=0x1a19ed0,\n>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0,\n>>> restrictlist=0x1ab3318)\n>>> > at joinrels.c:926\n>>> > #9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0,\n>>> rel1=0x19b37c0, rel2=0x1922938) at joinrels.c:760\n>>> > #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0,\n>>> old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at\n>>> joinrels.c:312\n>>> > #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0,\n>>> level=2) at joinrels.c:123\n>>> > #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0,\n>>> levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n>>> > #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0,\n>>> joinlist=0x199d538) at allpaths.c:2951\n>>> > #14 0x000000000080816b in make_one_rel (root=0x1a19ed0,\n>>> joinlist=0x199d538) at allpaths.c:228\n>>> > #15 0x000000000084491d in query_planner (root=0x1a19ed0,\n>>> qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at\n>>> planmain.c:276\n>>> > #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0,\n>>> tuple_fraction=0) at planner.c:1447\n>>> > #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8,\n>>> parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\n>>> planner.c:1025\n>>> > #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>> boundParams=0x0) at planner.c:406\n>>> > #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>> boundParams=0x0) 
at planner.c:277\n>>> > #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>> boundParams=0x0) at postgres.c:847\n>>> > #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290,\n>>> cursorOptions=2048, into=0x0, es=0x19b36f0,\n>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>> > params=0x0, queryEnv=0x0) at explain.c:397\n>>> > #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410,\n>>> stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n>>> > #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n>>> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n>>> > #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n>>> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n>>> > #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40,\n>>> pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378,\n>>> qc=0x7fffd6775f90) at pquery.c:1147\n>>> > #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40,\n>>> isTopLevel=true) at pquery.c:1026\n>>> > #27 0x000000000097ed11 in 
PortalRun (portal=0x1893b40,\n>>> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8,\n>>> altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n>>> > #28 0x0000000000978aa5 in exec_simple_query (\n>>> >\n>>> > Thanks & Regards,\n>>> > Rajkumar Raghuwanshi\n>>> >\n>>> >\n>>> > On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com>\n>>> wrote:\n>>> >>\n>>> >> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com>\n>>> wrote:\n>>> >> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>>> >> > <nitinjadhavpostgres@gmail.com> wrote:\n>>> >> > > The attached patch also fixes the above comments.\n>>> >> >\n>>> >> > I noticed that multi-column list partitions containing NULLs don't\n>>> >> > work correctly with partition pruning yet.\n>>> >> >\n>>> >> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>>> >> > create table p01 partition of p0 for values in ((1, 1, true),\n>>> (NULL, 1, false));\n>>> >> > create table p02 partition of p0 for values in ((1, NULL, false));\n>>> >> > explain select * from p0 where a is null;\n>>> >> > QUERY PLAN\n>>> >> > --------------------------------------------------------\n>>> >> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>>> >> > Filter: (a IS NULL)\n>>> >> > (2 rows)\n>>> >> >\n>>> >> > I guess that may be due to the following newly added code being\n>>> incomplete:\n>>> >> >\n>>> >> > +/*\n>>> >> > + * get_partition_bound_null_index\n>>> >> > + *\n>>> >> > + * Returns the partition index of the partition bound which\n>>> accepts NULL.\n>>> >> > + */\n>>> >> > +int\n>>> >> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>>> >> > +{\n>>> >> > + int i = 0;\n>>> >> > + int j = 0;\n>>> >> > +\n>>> >> > + if (!boundinfo->isnulls)\n>>> >> > + return -1;\n>>> >> >\n>>> >> > - if (!val->constisnull)\n>>> >> > - count++;\n>>> >> > + for (i = 0; i < boundinfo->ndatums; i++)\n>>> >> > + {\n>>> >> > + //TODO: Handle for multi-column cases\n>>> 
>> > + for (j = 0; j < 1; j++)\n>>> >> > + {\n>>> >> > + if (boundinfo->isnulls[i][j])\n>>> >> > + return boundinfo->indexes[i];\n>>> >> > }\n>>> >> > }\n>>> >> >\n>>> >> > + return -1;\n>>> >> > +}\n>>> >> >\n>>> >> > Maybe this function needs to return a \"bitmapset\" of indexes,\n>>> because\n>>> >> > multiple partitions can now contain NULL values.\n>>> >> >\n>>> >> > Some other issues I noticed and suggestions for improvement:\n>>> >> >\n>>> >> > +/*\n>>> >> > + * checkForDuplicates\n>>> >> > + *\n>>> >> > + * Returns TRUE if the list bound element is already present in\n>>> the list of\n>>> >> > + * list bounds, FALSE otherwise.\n>>> >> > + */\n>>> >> > +static bool\n>>> >> > +checkForDuplicates(List *source, List *searchElem)\n>>> >> >\n>>> >> > This function name may be too generic. Given that it is specific to\n>>> >> > implementing list bound de-duplication, maybe the following\n>>> signature\n>>> >> > is more appropriate:\n>>> >> >\n>>> >> > static bool\n>>> >> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>> >> >\n>>> >> > Also, better if the function comment mentions those parameter\n>>> names, like:\n>>> >> >\n>>> >> > \"Returns TRUE if the list bound element 'new_bound' is already\n>>> present\n>>> >> > in the target list 'list_bounds', FALSE otherwise.\"\n>>> >> >\n>>> >> > +/*\n>>> >> > + * transformPartitionListBounds\n>>> >> > + *\n>>> >> > + * Converts the expressions of list partition bounds from the raw\n>>> grammar\n>>> >> > + * representation.\n>>> >> >\n>>> >> > A sentence about the result format would be helpful, like:\n>>> >> >\n>>> >> > The result is a List of Lists of Const nodes to account for the\n>>> >> > partition key possibly containing more than one column.\n>>> >> >\n>>> >> > + int i = 0;\n>>> >> > + int j = 0;\n>>> >> >\n>>> >> > Better to initialize such loop counters closer to the loop.\n>>> >> >\n>>> >> > + colname[i] = (char *) palloc0(NAMEDATALEN *\n>>> sizeof(char));\n>>> >> > + colname[i] = 
get_attname(RelationGetRelid(parent),\n>>> >> > + key->partattrs[i], false);\n>>> >> >\n>>> >> > The palloc in the 1st statement is wasteful, because the 2nd\n>>> statement\n>>> >> > overwrites its pointer by the pointer to the string palloc'd by\n>>> >> > get_attname().\n>>> >> >\n>>> >> > + ListCell *cell2 = NULL;\n>>> >> >\n>>> >> > No need to explicitly initialize the loop variable.\n>>> >> >\n>>> >> > + RowExpr *rowexpr = NULL;\n>>> >> > +\n>>> >> > + if (!IsA(expr, RowExpr))\n>>> >> > + ereport(ERROR,\n>>> >> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>>> >> > + errmsg(\"Invalid list bound specification\"),\n>>> >> > + parser_errposition(pstate,\n>>> exprLocation((Node\n>>> >> > *) spec))));\n>>> >> > +\n>>> >> > + rowexpr = (RowExpr *) expr;\n>>> >> >\n>>> >> > It's okay to assign rowexpr at the top here instead of the dummy\n>>> >> > NULL-initialization and write the condition as:\n>>> >> >\n>>> >> > if (!IsA(rowexpr, RowExpr))\n>>> >> >\n>>> >> > + if (isDuplicate)\n>>> >> > + continue;\n>>> >> > +\n>>> >> > + result = lappend(result, values);\n>>> >> >\n>>> >> > I can see you copied this style from the existing code, but how\n>>> about\n>>> >> > writing this simply as:\n>>> >> >\n>>> >> > if (!isDuplicate)\n>>> >> > result = lappend(result, values);\n>>> >> >\n>>> >> > -/* One value coming from some (index'th) list partition */\n>>> >> > +/* One bound of a list partition */\n>>> >> > typedef struct PartitionListValue\n>>> >> > {\n>>> >> > int index;\n>>> >> > - Datum value;\n>>> >> > + Datum *values;\n>>> >> > + bool *isnulls;\n>>> >> > } PartitionListValue;\n>>> >> >\n>>> >> > Given that this is a locally-defined struct, I wonder if it makes\n>>> >> > sense to rename the struct while we're at it. 
Call it, say,\n>>> >> > PartitionListBound?\n>>> >> >\n>>> >> > Also, please keep part of the existing comment that says that the\n>>> >> > bound belongs to index'th partition.\n>>> >> >\n>>> >> > Will send more comments in a bit...\n>>> >>\n>>> >> + * partition_bound_accepts_nulls\n>>> >> + *\n>>> >> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>> >> */\n>>> >>\n>>> >> I suggest slight rewording, as follows:\n>>> >>\n>>> >> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>>> >> FALSE otherwise.\"\n>>> >>\n>>> >> - PartitionListValue *all_values;\n>>> >> + PartitionListValue **all_values;\n>>> >> ...\n>>> >> - all_values = (PartitionListValue *)\n>>> >> - palloc(ndatums * sizeof(PartitionListValue));\n>>> >> + ndatums = get_list_datum_count(boundspecs, nparts);\n>>> >> + all_values = (PartitionListValue **)\n>>> >> + palloc(ndatums * sizeof(PartitionListValue *));\n>>> >>\n>>> >> I don't see the need to redefine all_values's pointer type. No need\n>>> >> to palloc PartitionListValue repeatedly for every datum as done\n>>> >> further down as follows:\n>>> >>\n>>> >> + all_values[j] = (PartitionListValue *)\n>>> >> palloc(sizeof(PartitionListValue));\n>>> >>\n>>> >> You do need the following two though:\n>>> >>\n>>> >> + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>>> >> sizeof(Datum));\n>>> >> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>>> >> sizeof(bool));\n>>> >>\n>>> >> If you change the above the way I suggest, you'd also need to revert\n>>> >> the following change:\n>>> >>\n>>> >> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>>> >> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>> >> qsort_partition_list_value_cmp, (void *) key);\n>>> >>\n>>> >> + int orig_index = all_values[i]->index;\n>>> >> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>>> sizeof(Datum));\n>>> >>\n>>> >> Missing a newline between these two statements.\n>>> 
>>\n>>> >> BTW, I noticed that the boundDatums variable is no longer used in\n>>> >> create_list_bounds. I traced back its origin and found that a recent\n>>> >> commit 53d86957e98 introduced it to implement an idea to reduce the\n>>> >> finer-grained pallocs that were being done in create_list_bounds(). I\n>>> >> don't think that this patch needs to throw away that work. You can\n>>> >> make it work as the attached delta patch that applies on top of v3.\n>>> >> Please check.\n>>> >>\n>>> >> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>>> >> *parttyplen, bool *parttypbyval,\n>>> >> if (b1->nindexes != b2->nindexes)\n>>> >> return false;\n>>> >>\n>>> >> - if (b1->null_index != b2->null_index)\n>>> >> + if (get_partition_bound_null_index(b1) !=\n>>> >> get_partition_bound_null_index(b2))\n>>> >>\n>>> >> As mentioned in the last message, this bit in partition_bounds_equal()\n>>> >> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>>> >> after fixing get_partition_bound_null_index() as previously mentioned.\n>>> >>\n>>> >> But...\n>>> >>\n>>> >> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>>> >> *parttyplen, bool *parttypbyval,\n>>> >> * context. 
datumIsEqual() should be simple enough\n>>> to be\n>>> >> * safe.\n>>> >> */\n>>> >> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>>> >> + if (b1->isnulls)\n>>> >> + b1_isnull = b1->isnulls[i][j];\n>>> >> + if (b2->isnulls)\n>>> >> + b2_isnull = b2->isnulls[i][j];\n>>> >> +\n>>> >> + /*\n>>> >> + * If any of the partition bound has NULL value, then\n>>> check\n>>> >> + * equality for the NULL value instead of comparing\n>>> the datums\n>>> >> + * as it does not contain valid value in case of NULL.\n>>> >> + */\n>>> >> + if (b1_isnull || b2_isnull)\n>>> >> + {\n>>> >> + if (b1_isnull != b2_isnull)\n>>> >> + return false;\n>>> >> + }\n>>> >>\n>>> >> ...if you have this in the main loop, I don't think we need the above\n>>> >> code stanza which appears to implement a short-cut for this long-form\n>>> >> logic.\n>>> >>\n>>> >> + (key->strategy != PARTITION_STRATEGY_LIST ||\n>>> >> + !src->isnulls[i][j]))\n>>> >>\n>>> >> I think it's better to write this condition as follows just like the\n>>> >> accompanying condition involving src->kind:\n>>> >>\n>>> >> (src->nulls == NULL || !src->isnulls[i][j])\n>>> >>\n>>> >> (Skipped looking at merge_list_bounds() and related changes for now as\n>>> >> I see a lot of TODOs remain to be done.)\n>>> >>\n>>> >> In check_new_partition_bound():\n>>> >>\n>>> >> + Datum *values = (Datum *)\n>>> >> palloc0(key->partnatts * sizeof(Datum));\n>>> >> + bool *isnulls = (bool *)\n>>> >> palloc0(key->partnatts * sizeof(bool));\n>>> >>\n>>> >> Doesn't seem like a bad idea to declare these as:\n>>> >>\n>>> >> Datum values[PARTITION_MAX_KEYS];\n>>> >> bool isnulls[PARTITION_MAX_KEYS];\n>>> >>\n>>> >>\n>>> >> I looked at get_qual_for_list_multi_column() and immediately thought\n>>> >> that it may be a bad idea. I think it's better to integrate the logic\n>>> >> for multi-column case into the existing function even if that makes\n>>> >> the function appear more complex. 
Having two functions with the same
>> goal and mostly the same code is not a good idea mainly because it
>> becomes a maintenance burden.
>>
>> I have attempted a rewrite such that get_qual_for_list() now handles
>> both the single-column and multi-column cases.  Changes included in
>> the delta patch.  The patch updates some outputs of the newly added
>> tests for multi-column list partitions, because the new code emits the
>> IS NOT NULL tests a bit differently than
>> get_qual_for_list_mutli_column() would.  Notably, the old approach
>> would emit IS NOT NULL for every non-NULL datum matched to a given
>> column, not just once for the column.  However, the patch makes a few
>> other tests fail, mainly because I had to fix
>> partition_bound_accepts_nulls() to handle the multi-column case,
>> though didn't bother to update all callers of it to also handle the
>> multi-column case correctly.  I guess that's a TODO you're going to
>> deal with at some point anyway. :)
>>
>> I still have more than half of v3 left to look at, so will continue
>> looking.  In the meantime, please check the changes I suggested,
>> including the delta patch, and let me know your thoughts.
>>
>> --
>> Amit Langote
>> EDB: http://www.enterprisedb.com

Thanks for the patch, it applied cleanly and fixed the reported issue.
I observed another case where, in case of a multi-column list partition
on the same column, the query is not picking partition-wise join.
Is this expected?

CREATE TABLE plt1 (a int, b int, c varchar) PARTITION BY LIST(c,c);
CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'),('0003','0003'));
CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));
CREATE TABLE plt1_p3 PARTITION OF plt1 DEFAULT;
INSERT INTO plt1 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);
ANALYSE plt1;

CREATE TABLE plt2 (a int, b int, c varchar) PARTITION BY LIST(c,c);
CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN (('0001','0001'),('0002','0002'),('0003','0003'));
CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));
CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;
INSERT INTO plt2 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);
ANALYSE plt2;

SET enable_partitionwise_join TO true;
EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN plt2 t2 ON t1.c = t2.c;

postgres=# EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN plt2 t2 ON t1.c = t2.c;
                 QUERY PLAN
--------------------------------------------
 Hash Join
   Hash Cond: ((t1.c)::text = (t2.c)::text)
   ->  Append
         ->  Seq Scan on plt1_p1 t1_1
         ->  Seq Scan on plt1_p2 t1_2
         ->  Seq Scan on plt1_p3 t1_3
   ->  Hash
         ->  Append
               ->  Seq Scan on plt2_p1 t2_1
               ->  Seq Scan on plt2_p2 t2_2
               ->  Seq Scan on plt2_p3 t2_3
(11 rows)

Thanks & Regards,
Rajkumar Raghuwanshi

On Thu, Oct 7, 2021 at 6:03 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:

Thanks Rajkumar for testing.

> I think it should throw an error as the partition by list has only 1 column but we are giving 2 values.

I also agree that it should throw an error in the above case. Fixed
the issue in the attached patch. Also added related test cases to the
regression test suite.

> also if you see \d+ showing plt1_p1 partition value as '(0001,0001)' instead of ('0001','0001').

Now throwing errors in the initial stage, this case doesn't arise.

Please share if you find any other issues.

Thanks & Regards,
Nitin Jadhav

On Thu, Oct 7, 2021 at 4:05 PM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:

Thanks Nitin,

v4 patches applied cleanly and make check is passing now. While testing
further I observed that if multiple values are given for a single-column
list partition it is not giving an error; instead it is changing the
values itself. Please find the example below.

postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);
CREATE TABLE
postgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'));
CREATE TABLE
postgres=# \d+ plt1;
                                          Partitioned table "public.plt1"
 Column |       Type        | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
 a      | integer           |           |          |         | plain    |             |              |
 b      | character varying |           |          |         | extended |             |              |
Partition key: LIST (b)
Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')

I think it should throw an error as the partition by list has only 1
column but we are giving 2 values. Also, if you see, \d+ is showing the
plt1_p1 partition value as '(0001,0001)' instead of ('0001','0001').

Thanks & Regards,
Rajkumar Raghuwanshi

On Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:
> > On PG head + Nitin's v3 patch + Amit's Delta patch.
Make check is failing with below errors.\n>\n> Thanks Rajkumar for testing.\n>\n> Here's a v2 of the delta patch that should fix both of these test\n> failures.  As I mentioned in my last reply, my delta patch fixed what\n> I think were problems in Nitin's v3 patch but were not complete by\n> themselves.  Especially, I hadn't bothered to investigate various /*\n> TODO: handle multi-column list partitioning */ sites to deal with my\n> own changes.\n\nThanks Rajkumar for testing and Thank you Amit for working on v2 of\nthe delta patch. Actually I had done the code changes related to\npartition-wise join and I was in the middle of fixing the review\ncomments, So I could not share the patch. Anyways thanks for your\nefforts.\n\n> I noticed that multi-column list partitions containing NULLs don't\n> work correctly with partition pruning yet.\n>\n> create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n> create table p02 partition of p0 for values in ((1, NULL, false));\n> explain select * from p0 where a is null;\n>                        QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)\n>  Filter: (a IS NULL)\n> (2 rows)\n>\n> In the attached updated version, I've dealt with some of those such\n> that at least the existing cases exercising partition pruning and\n> partition wise joins now pass.\n\nwrt partition pruning, I have checked the output of the above case\nwith the v2 version of the delta patch and without that. The output\nremains same. Kindly let me know if I am missing something. But I feel\nthe above output is correct as the partition p01 is the only partition\nwhich contains NULL value for column a, hence it is showing \"Seq scan\non p01\" in the output. Kindly correct me if I am wrong. 
I feel the\ncode changes related to 'null_keys' is not required, hence not\nincorporated that in the attached patch.\n\nwrt partition-wise join, I had run the regression test (with new cases\nrelated to partition-wise join) on v2 of the delta patch and observed\nthe crash. Hence I have not incorporated the partition-wise join\nrelated code from v2 of delta patch to main v4 patch. Instead I have\nadded the partition-wise join related code done by me in the attached\npatch. Please share your thoughts and if possible we can improvise the\ncode. Rest of the changes looks good to me and I have incorporated\nthat in the attached patch.\n\n\n> I guess that may be due to the following newly added code being incomplete:\n> Maybe this function needs to return a \"bitmapset\" of indexes, because\n> multiple partitions can now contain NULL values.\n\nI feel this function is not required at all as we are not separating\nthe non null and null partitions now. Removed in the attached patch.\nAlso removed the \"scan_null' variable from the structure\n\"PruneStepResult\" and cleaned up the corresponding code blocks.\n\n\n> This function name may be too generic.  Given that it is specific to\n> implementing list bound de-duplication, maybe the following signature\n> is more appropriate:\n>\n> static bool\n> checkListBoundDuplicated(List *list_bounds, List *new_bound)\n\nYes. The function name looks more generic. How about using\n\"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\nlet me know if that does not look correct.\n\n\n> Also, better if the function comment mentions those parameter names, like:\n>\n> \"Returns TRUE if the list bound element 'new_bound' is already present\n> in the target list 'list_bounds', FALSE otherwise.\"\n\nFixed.\n\n\n> +/*\n> + * transformPartitionListBounds\n> + *\n> + * Converts the expressions of list partition bounds from the raw grammar\n> + * representation.\n>\n> A sentence about the result format would be helpful, like:\n>\n> The result is a List of Lists of Const nodes to account for the\n> partition key possibly containing more than one column.\n\nFixed.\n\n\n> +   int             i = 0;\n> +   int             j = 0;\n>\n> Better to initialize such loop counters closer to the loop.\n\nFixed in all the places.\n\n\n> +           colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> +           colname[i] = get_attname(RelationGetRelid(parent),\n> +                                    key->partattrs[i], false);\n>\n> The palloc in the 1st statement is wasteful, because the 2nd statement\n> overwrites its pointer by the pointer to the string palloc'd by\n> get_attname().\n\nRemoved the 1st statement as it is not required.\n\n\n> +           ListCell   *cell2 = NULL;\n>\n> No need to explicitly initialize the loop variable.\n\nFixed in all the places.\n\n\n> +           RowExpr     *rowexpr = NULL;\n> +\n> +           if (!IsA(expr, RowExpr))\n> +               ereport(ERROR,\n> +                       (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +                       errmsg(\"Invalid list bound specification\"),\n> +                       parser_errposition(pstate, exprLocation((Node\n> *) spec))));\n> +\n> +           rowexpr = (RowExpr *) expr;\n>\n> It's okay to assign rowexpr at the top here instead of the dummy\n> NULL-initialization and write the condition as:\n>\n>    if (!IsA(rowexpr, RowExpr))\n\nFixed.\n\n\n> +       if (isDuplicate)\n> +           continue;\n> +\n> +       
result = lappend(result, values);\n>\n> I can see you copied this style from the existing code, but how about\n> writing this simply as:\n>\n>    if (!isDuplicate)\n>        result = lappend(result, values);\n\nThis looks good. I have changed in the patch.\n\n\n> -/* One value coming from some (index'th) list partition */\n> +/* One bound of a list partition */\n> typedef struct PartitionListValue\n> {\n>    int         index;\n> -   Datum       value;\n> +   Datum      *values;\n> +   bool       *isnulls;\n>  } PartitionListValue;\n>\n> Given that this is a locally-defined struct, I wonder if it makes\n> sense to rename the struct while we're at it.  Call it, say,\n> PartitionListBound?\n\nYes. PartitionListBound looks more appropriate and it also matches the\nsimilar structures of the other partition strategies.\n\n> Also, please keep part of the existing comment that says that the\n> bound belongs to index'th partition.\n\nRetained the old comment.\n\n\n> + * partition_bound_accepts_nulls\n> + *\n> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>  */\n>\n> I suggest slight rewording, as follows:\n>\n> \"Returns TRUE if any of the partition bounds contains a NULL value,\n> FALSE otherwise.\"\n\nFixed.\n\n\n> -   PartitionListValue *all_values;\n> +   PartitionListValue **all_values;\n> ...\n> -   all_values = (PartitionListValue *)\n> -       palloc(ndatums * sizeof(PartitionListValue));\n> +   ndatums = get_list_datum_count(boundspecs, nparts);\n> +   all_values = (PartitionListValue **)\n> +       palloc(ndatums * sizeof(PartitionListValue *));\n>\n> I don't see the need to redefine all_values's pointer type.  
No need\n> to palloc PartitionListValue repeatedly for every datum as done\n> further down as follows:\n>\n> +           all_values[j] = (PartitionListValue *)\n> palloc(sizeof(PartitionListValue));\n>\n> You do need the following two though:\n>\n> +           all_values[j]->values = (Datum *) palloc0(key->partnatts *\n> sizeof(Datum));\n> +           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n> sizeof(bool));\n>\n> If you change the above the way I suggest, you'd also need to revert\n> the following change:\n>\n> -   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n> +   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>              qsort_partition_list_value_cmp, (void *) key);\n>\n> +       int         orig_index = all_values[i]->index;\n> +       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>\n> Missing a newline between these two statements.\n\nFixed. Made necessary changes to keep the intent of existing code.\n\n\n> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n>    if (b1->nindexes != b2->nindexes)\n>        return false;\n>\n> -   if (b1->null_index != b2->null_index)\n> +   if (get_partition_bound_null_index(b1) !=\n> get_partition_bound_null_index(b2))\n>\n> As mentioned in the last message, this bit in partition_bounds_equal()\n> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n> after fixing get_partition_bound_null_index() as previously mentioned.\n\nAs mentioned earlier, removed the functionality of\nget_partition_bound_null_index(), hence the above condition is not\nrequired and removed.\n\n> But...\n>\n> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n>                 * context.  
datumIsEqual() should be simple enough to be\n>                 * safe.\n>                 */\n> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> +               if (b1->isnulls)\n> +                   b1_isnull = b1->isnulls[i][j];\n> +               if (b2->isnulls)\n> +                   b2_isnull = b2->isnulls[i][j];\n> +\n> +               /*\n> +                * If any of the partition bound has NULL value, then check\n> +                * equality for the NULL value instead of comparing the datums\n> +                * as it does not contain valid value in case of NULL.\n> +                */\n> +               if (b1_isnull || b2_isnull)\n> +               {\n> +                   if (b1_isnull != b2_isnull)\n> +                       return false;\n> +               }\n>\n> ...if you have this in the main loop, I don't think we need the above\n> code stanza which appears to implement a short-cut for this long-form\n> logic.\n\nYes. May be we could have ignored the above code stanza if we would\nhave comparing the null indexes using get_partition_bound_null_index()\nin the beginning of the function. But hence we are not separating the\nnon null partitions and null partitions, I would like to keep the\nlogic in the inner loop as we are doing it for non null bound values\nin the above code stanza, just to give a feel that null bound values\nare also handled the same way as non null values. 
Please correct me if\nI am wrong.\n\n\n> +               (key->strategy != PARTITION_STRATEGY_LIST ||\n> +                !src->isnulls[i][j]))\n>\n> I think it's better to write this condition as follows just like the\n> accompanying condition involving src->kind:\n>\n>    (src->nulls == NULL || !src->isnulls[i][j])\n\nFixed.\n\n\n> In check_new_partition_bound():\n>\n> +                       Datum      *values = (Datum *)\n> palloc0(key->partnatts * sizeof(Datum));\n> +                       bool       *isnulls = (bool *)\n> palloc0(key->partnatts * sizeof(bool));\n>\n> Doesn't seem like a bad idea to declare these as:\n>\n>     Datum    values[PARTITION_MAX_KEYS];\n>    bool        isnulls[PARTITION_MAX_KEYS];\n\nThanks for the suggestion. I have changed it as above.\n\n> I looked at get_qual_for_list_multi_column() and immediately thought\n> that it may be a bad idea.  I think it's better to integrate the logic\n> for multi-column case into the existing function even if that makes\n> the function appear more complex.  Having two functions with the same\n> goal and mostly the same code is not a good idea mainly because it\n> becomes a maintenance burden.\n\nActually I had written a separate function because of the complexity.\nNow I understand that since the objective is the same, it should be\ndone in a single function irrespective of the complexity.\n\n> I have attempted a rewrite such that get_qual_for_list() now handles\n> both the single-column and multi-column cases.  Changes included in\n> the delta patch.  The patch updates some outputs of the newly added\n> tests for multi-column list partitions, because the new code emits the\n> IS NOT NULL tests a bit differently than\n> get_qual_for_list_multi_column() would.  Notably, the old approach\n> would emit IS NOT NULL for every non-NULL datum matched to a given\n> column, not just once for the column.  
However, the patch makes a few\n> other tests fail, mainly because I had to fix\n> partition_bound_accepts_nulls() to handle the multi-column case,\n> though didn't bother to update all callers of it to also handle the\n> multi-column case correctly.  I guess that's a TODO you're going to\n> deal with at some point anyway. :)\n\nThank you very much for your efforts. The changes look good to me and\nI have incorporated them in the attached patch.\n\nI have completed the coding for all the TODOs and hence removed the\nTODO markers from the patch. The naming conventions used for\nfunction/variable names vary across the files: in some places it is\n'namesLikeThis' and in others 'names_like_this'. I have followed the\nconvention of the surrounding code in each case, and I am happy to\nchange those if required.\n\nI have verified 'make check' with the attached patch and it is working fine.\n\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> On PG head + Nitin's v3 patch + Amit's Delta patch.  
Make check is failing with below errors.\n>\n> --inherit.sql is failing with error :\"ERROR:  negative bitmapset member not allowed\"\n> update mlparted_tab mlp set c = 'xxx'\n> from\n>   (select a from some_tab union all select a+1 from some_tab) ss (a)\n> where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> ERROR:  negative bitmapset member not allowed\n>\n> --partition_join.sql is crashing with enable_partitionwise_join set to true.\n> CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001', '0003');\n> CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008', '0009');\n> INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n> ANALYZE plt1_adv;\n> CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002', '0003');\n> CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007', '0009');\n> INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n> ANALYZE plt2_adv;\n> -- inner join\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n> server closed the connection unexpectedly\n>         This probably means the server terminated abnormally\n>         before or while processing the request.\n> connection to server was lost\n>\n>\n> --stack-trace\n> Core was generated by `postgres: edb regression [local] EXPLAIN                                      '.\n> Program terminated with signal 6, Aborted.\n> #0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> Missing 
separate debuginfos, use: debuginfo-install glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64\n> (gdb) bt\n> #0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> #1  0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n> #2  0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)\n>     at assert.c:69\n> #3  0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0, part_index=-1) at partbounds.c:1957\n> #4  0x00000000008919bd in merge_list_bounds (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n>     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1529\n> #5  0x00000000008910de in partition_bounds_merge (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n>     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1223\n> #6  0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>     parts2=0x7fffd67751a8) at joinrels.c:1644\n> #7  0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n>     at joinrels.c:1402\n> #8  0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)\n>     at joinrels.c:926\n> #9  0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938) at 
joinrels.c:760\n> #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0, old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at joinrels.c:312\n> #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0, level=2) at joinrels.c:123\n> #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0, levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n> #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:2951\n> #14 0x000000000080816b in make_one_rel (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:228\n> #15 0x000000000084491d in query_planner (root=0x1a19ed0, qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at planmain.c:276\n> #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0, tuple_fraction=0) at planner.c:1447\n> #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8, parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1025\n> #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:406\n> #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:277\n> #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at postgres.c:847\n> #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290, 
cursorOptions=2048, into=0x0, es=0x19b36f0,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     params=0x0, queryEnv=0x0) at explain.c:397\n> #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410, stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n> #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n> #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n> #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40, pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378, qc=0x7fffd6775f90) at pquery.c:1147\n> #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40, isTopLevel=true) at pquery.c:1026\n> #27 0x000000000097ed11 in PortalRun (portal=0x1893b40, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8, altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n> #28 0x0000000000978aa5 in exec_simple_query (\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n> On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>> > 
<nitinjadhavpostgres@gmail.com> wrote:\n>> > > The attached patch also fixes the above comments.\n>> >\n>> > I noticed that multi-column list partitions containing NULLs don't\n>> > work correctly with partition pruning yet.\n>> >\n>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n>> > create table p02 partition of p0 for values in ((1, NULL, false));\n>> > explain select * from p0 where a is null;\n>> >                        QUERY PLAN\n>> > --------------------------------------------------------\n>> >  Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)\n>> >    Filter: (a IS NULL)\n>> > (2 rows)\n>> >\n>> > I guess that may be due to the following newly added code being incomplete:\n>> >\n>> > +/*\n>> > + * get_partition_bound_null_index\n>> > + *\n>> > + * Returns the partition index of the partition bound which accepts NULL.\n>> > + */\n>> > +int\n>> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>> > +{\n>> > +   int i = 0;\n>> > +   int j = 0;\n>> > +\n>> > +   if (!boundinfo->isnulls)\n>> > +       return -1;\n>> >\n>> > -           if (!val->constisnull)\n>> > -               count++;\n>> > +   for (i = 0; i < boundinfo->ndatums; i++)\n>> > +   {\n>> > +       //TODO: Handle for multi-column cases\n>> > +       for (j = 0; j < 1; j++)\n>> > +       {\n>> > +           if (boundinfo->isnulls[i][j])\n>> > +               return boundinfo->indexes[i];\n>> >         }\n>> >     }\n>> >\n>> > +   return -1;\n>> > +}\n>> >\n>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>> > multiple partitions can now contain NULL values.\n>> >\n>> > Some other issues I noticed and suggestions for improvement:\n>> >\n>> > +/*\n>> > + * checkForDuplicates\n>> > + *\n>> > + * Returns TRUE if the list bound element is already present in the list of\n>> > + * list bounds, FALSE otherwise.\n>> > + */\n>> > +static 
bool\n>> > +checkForDuplicates(List *source, List *searchElem)\n>> >\n>> > This function name may be too generic.  Given that it is specific to\n>> > implementing list bound de-duplication, maybe the following signature\n>> > is more appropriate:\n>> >\n>> > static bool\n>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>> >\n>> > Also, better if the function comment mentions those parameter names, like:\n>> >\n>> > \"Returns TRUE if the list bound element 'new_bound' is already present\n>> > in the target list 'list_bounds', FALSE otherwise.\"\n>> >\n>> > +/*\n>> > + * transformPartitionListBounds\n>> > + *\n>> > + * Converts the expressions of list partition bounds from the raw grammar\n>> > + * representation.\n>> >\n>> > A sentence about the result format would be helpful, like:\n>> >\n>> > The result is a List of Lists of Const nodes to account for the\n>> > partition key possibly containing more than one column.\n>> >\n>> > +   int             i = 0;\n>> > +   int             j = 0;\n>> >\n>> > Better to initialize such loop counters closer to the loop.\n>> >\n>> > +           colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n>> > +           colname[i] = get_attname(RelationGetRelid(parent),\n>> > +                                    key->partattrs[i], false);\n>> >\n>> > The palloc in the 1st statement is wasteful, because the 2nd statement\n>> > overwrites its pointer by the pointer to the string palloc'd by\n>> > get_attname().\n>> >\n>> > +           ListCell   *cell2 = NULL;\n>> >\n>> > No need to explicitly initialize the loop variable.\n>> >\n>> > +           RowExpr     *rowexpr = NULL;\n>> > +\n>> > +           if (!IsA(expr, RowExpr))\n>> > +               ereport(ERROR,\n>> > +                       (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> > +                       errmsg(\"Invalid list bound specification\"),\n>> > +                       parser_errposition(pstate, exprLocation((Node\n>> > *) spec))));\n>> > 
+\n>> > +           rowexpr = (RowExpr *) expr;\n>> >\n>> > It's okay to assign rowexpr at the top here instead of the dummy\n>> > NULL-initialization and write the condition as:\n>> >\n>> >     if (!IsA(rowexpr, RowExpr))\n>> >\n>> > +       if (isDuplicate)\n>> > +           continue;\n>> > +\n>> > +       result = lappend(result, values);\n>> >\n>> > I can see you copied this style from the existing code, but how about\n>> > writing this simply as:\n>> >\n>> >     if (!isDuplicate)\n>> >         result = lappend(result, values);\n>> >\n>> > -/* One value coming from some (index'th) list partition */\n>> > +/* One bound of a list partition */\n>> >  typedef struct PartitionListValue\n>> >  {\n>> >     int         index;\n>> > -   Datum       value;\n>> > +   Datum      *values;\n>> > +   bool       *isnulls;\n>> >  } PartitionListValue;\n>> >\n>> > Given that this is a locally-defined struct, I wonder if it makes\n>> > sense to rename the struct while we're at it.  Call it, say,\n>> > PartitionListBound?\n>> >\n>> > Also, please keep part of the existing comment that says that the\n>> > bound belongs to index'th partition.\n>> >\n>> > Will send more comments in a bit...\n>>\n>> + * partition_bound_accepts_nulls\n>> + *\n>> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>   */\n>>\n>> I suggest slight rewording, as follows:\n>>\n>> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>> FALSE otherwise.\"\n>>\n>> -   PartitionListValue *all_values;\n>> +   PartitionListValue **all_values;\n>> ...\n>> -   all_values = (PartitionListValue *)\n>> -       palloc(ndatums * sizeof(PartitionListValue));\n>> +   ndatums = get_list_datum_count(boundspecs, nparts);\n>> +   all_values = (PartitionListValue **)\n>> +       palloc(ndatums * sizeof(PartitionListValue *));\n>>\n>> I don't see the need to redefine all_values's pointer type.  
No need\n>> to palloc PartitionListValue repeatedly for every datum as done\n>> further down as follows:\n>>\n>> +           all_values[j] = (PartitionListValue *)\n>> palloc(sizeof(PartitionListValue));\n>>\n>> You do need the following two though:\n>>\n>> +           all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>> sizeof(Datum));\n>> +           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>> sizeof(bool));\n>>\n>> If you change the above the way I suggest, you'd also need to revert\n>> the following change:\n>>\n>> -   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>> +   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>               qsort_partition_list_value_cmp, (void *) key);\n>>\n>> +       int         orig_index = all_values[i]->index;\n>> +       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>>\n>> Missing a newline between these two statements.\n>>\n>> BTW, I noticed that the boundDatums variable is no longer used in\n>> create_list_bounds.  I traced back its origin and found that a recent\n>> commit 53d86957e98 introduced it to implement an idea to reduce the\n>> finer-grained pallocs that were being done in create_list_bounds().  I\n>> don't think that this patch needs to throw away that work.  
You can\n>> make it work as the attached delta patch that applies on top of v3.\n>> Please check.\n>>\n>> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>>     if (b1->nindexes != b2->nindexes)\n>>         return false;\n>>\n>> -   if (b1->null_index != b2->null_index)\n>> +   if (get_partition_bound_null_index(b1) !=\n>> get_partition_bound_null_index(b2))\n>>\n>> As mentioned in the last message, this bit in partition_bounds_equal()\n>> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>> after fixing get_partition_bound_null_index() as previously mentioned.\n>>\n>> But...\n>>\n>> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>>                  * context.  datumIsEqual() should be simple enough to be\n>>                  * safe.\n>>                  */\n>> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>> +               if (b1->isnulls)\n>> +                   b1_isnull = b1->isnulls[i][j];\n>> +               if (b2->isnulls)\n>> +                   b2_isnull = b2->isnulls[i][j];\n>> +\n>> +               /*\n>> +                * If any of the partition bound has NULL value, then check\n>> +                * equality for the NULL value instead of comparing the datums\n>> +                * as it does not contain valid value in case of NULL.\n>> +                */\n>> +               if (b1_isnull || b2_isnull)\n>> +               {\n>> +                   if (b1_isnull != b2_isnull)\n>> +                       return false;\n>> +               }\n>>\n>> ...if you have this in the main loop, I don't think we need the above\n>> code stanza which appears to implement a short-cut for this long-form\n>> logic.\n>>\n>> +               (key->strategy != PARTITION_STRATEGY_LIST ||\n>> +                !src->isnulls[i][j]))\n>>\n>> I think it's better to write this condition as follows just like the\n>> 
accompanying condition involving src->kind:\n>>\n>>     (src->nulls == NULL || !src->isnulls[i][j])\n>>\n>> (Skipped looking at merge_list_bounds() and related changes for now as\n>> I see a lot of TODOs remain to be done.)\n>>\n>> In check_new_partition_bound():\n>>\n>> +                       Datum      *values = (Datum *)\n>> palloc0(key->partnatts * sizeof(Datum));\n>> +                       bool       *isnulls = (bool *)\n>> palloc0(key->partnatts * sizeof(bool));\n>>\n>> Doesn't seem like a bad idea to declare these as:\n>>\n>>     Datum    values[PARTITION_MAX_KEYS];\n>>     bool        isnulls[PARTITION_MAX_KEYS];\n>>\n>>\n>> I looked at get_qual_for_list_multi_column() and immediately thought\n>> that it may be a bad idea.  I think it's better to integrate the logic\n>> for multi-column case into the existing function even if that makes\n>> the function appear more complex.  Having two functions with the same\n>> goal and mostly the same code is not a good idea mainly because it\n>> becomes a maintenance burden.\n>>\n>> I have attempted a rewrite such that get_qual_for_list() now handles\n>> both the single-column and multi-column cases.  Changes included in\n>> the delta patch.  The patch updates some outputs of the newly added\n>> tests for multi-column list partitions, because the new code emits the\n>> IS NOT NULL tests a bit differently than\n>> get_qual_for_list_mutli_column() would.  Notably, the old approach\n>> would emit IS NOT NULL for every non-NULL datum matched to a given\n>> column, not just once for the column.  However, the patch makes a few\n>> other tests fail, mainly because I had to fix\n>> partition_bound_accepts_nulls() to handle the multi-column case,\n>> though didn't bother to update all callers of it to also handle the\n>> multi-column case correctly.  I guess that's a TODO you're going to\n>> deal with at some point anyway. :)\n>>\n>> I still have more than half of v3 left to look at, so will continue\n>> looking.   
In the meantime, please check the changes I suggested,\n>> including the delta patch, and let me know your thoughts.\n>>\n>> --\n>> Amit Langote\n>> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Oct 2021 11:05:50 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi Nitin,\n\nWhile testing further I got a crash with partition wise join enabled for\nmulti-col list partitions. please find test case & stack-trace below.\n\nSET enable_partitionwise_join TO on;\nCREATE TABLE plt1 (c varchar, d varchar) PARTITION BY LIST(c,d);\nCREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n(('0001','0001'),('0002','0002'),(NULL,NULL));\nCREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN\n(('0004','0004'),('0005','0005'),('0006','0006'));\nINSERT INTO plt1 SELECT to_char(i % 11, 'FM0000'), to_char(i % 11,\n'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10,3,7,8,9);\nINSERT INTO plt1 SELECT NULL,NULL FROM generate_series(0, 500) i WHERE i %\n11 IN (3);\nANALYSE plt1;\nCREATE TABLE plt2 (c varchar, d varchar) PARTITION BY LIST(c,d);\nCREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN\n(('0001','0001'),('0002','0002'));\nCREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN\n(('0004','0004'),('0005','0005'),('0006','0006'));\nCREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;\nINSERT INTO plt2 SELECT to_char(i % 11, 'FM0000'), to_char(i % 11,\n'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10,3);\nINSERT INTO plt2 SELECT NULL,NULL FROM generate_series(0, 500) i WHERE i %\n11 IN (3);\nANALYSE plt2;\n\nEXPLAIN (COSTS OFF)\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON\n(t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d =\nt3.d);\n\npostgres=# EXPLAIN (COSTS OFF)\npostgres-# SELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN\nplt2 t2 ON (t1.c = t2.c AND t1.d = 
t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c\nAND t2.d = t3.d);\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!?> \\q\n[edb@localhost bin]$ gdb -q -c data/core.66926 postgres\nReading symbols from\n/home/edb/WORK/pg_src/PG_TEMP/postgresql/inst/bin/postgres...done.\n[New LWP 66926]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: edb postgres [local] EXPLAIN\n '.\nProgram terminated with signal 11, Segmentation fault.\n#0 0x000000000082be39 in is_dummy_rel (rel=0x40) at joinrels.c:1221\n1221 if (rel->pathlist == NIL)\n(gdb) bt\n#0 0x000000000082be39 in is_dummy_rel (rel=0x40) at joinrels.c:1221\n#1 0x000000000089341c in is_dummy_partition (rel=0x2f86e88, part_index=2)\nat partbounds.c:1959\n#2 0x0000000000891d38 in merge_list_bounds (partnatts=2,\npartsupfunc=0x2f70058, partcollation=0x2fd3c98, outer_rel=0x2f86e88,\ninner_rel=0x2fd4368, jointype=JOIN_LEFT,\n outer_parts=0x7ffea91f8cc0, inner_parts=0x7ffea91f8cb8) at\npartbounds.c:1325\n#3 0x0000000000891991 in partition_bounds_merge (partnatts=2,\npartsupfunc=0x2f70058, partcollation=0x2fd3c98, outer_rel=0x2f86e88,\ninner_rel=0x2fd4368, jointype=JOIN_LEFT,\n outer_parts=0x7ffea91f8cc0, inner_parts=0x7ffea91f8cb8) at\npartbounds.c:1198\n#4 0x000000000082cc5a in compute_partition_bounds (root=0x2f9e910,\nrel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, parent_sjinfo=0x2f7dfa8,\nparts1=0x7ffea91f8cc0,\n parts2=0x7ffea91f8cb8) at joinrels.c:1644\n#5 0x000000000082c474 in try_partitionwise_join (root=0x2f9e910,\nrel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, parent_sjinfo=0x2f7dfa8,\nparent_restrictlist=0x2fae650)\n at joinrels.c:1402\n#6 0x000000000082b6e2 in populate_joinrel_with_paths (root=0x2f9e910,\nrel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, 
sjinfo=0x2f7dfa8,\nrestrictlist=0x2fae650) at joinrels.c:926\n#7 0x000000000082b135 in make_join_rel (root=0x2f9e910, rel1=0x2f86e88,\nrel2=0x2fd4368) at joinrels.c:760\n#8 0x000000000082a643 in make_rels_by_clause_joins (root=0x2f9e910,\nold_rel=0x2f86e88, other_rels_list=0x2f90148, other_rels=0x2f90160) at\njoinrels.c:312\n#9 0x000000000082a119 in join_search_one_level (root=0x2f9e910, level=3)\nat joinrels.c:123\n#10 0x000000000080cd97 in standard_join_search (root=0x2f9e910,\nlevels_needed=3, initial_rels=0x2f90148) at allpaths.c:3020\n#11 0x000000000080cd10 in make_rel_from_joinlist (root=0x2f9e910,\njoinlist=0x2fd7550) at allpaths.c:2951\n#12 0x000000000080899a in make_one_rel (root=0x2f9e910, joinlist=0x2fd7550)\nat allpaths.c:228\n#13 0x000000000084516a in query_planner (root=0x2f9e910,\nqp_callback=0x84ad85 <standard_qp_callback>, qp_extra=0x7ffea91f9140) at\nplanmain.c:276\n#14 0x000000000084788d in grouping_planner (root=0x2f9e910,\ntuple_fraction=0) at planner.c:1447\n#15 0x0000000000846f56 in subquery_planner (glob=0x2fa0c08,\nparse=0x2f56d30, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\nplanner.c:1025\n#16 0x000000000084578b in standard_planner (parse=0x2f56d30,\n query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\ncursorOptions=2048, boundParams=0x0) at planner.c:406\n#17 0x0000000000845536 in planner (parse=0x2f56d30,\n query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\ncursorOptions=2048, boundParams=0x0) at planner.c:277\n#18 0x0000000000978faf in pg_plan_query (querytree=0x2f56d30,\n query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = 
t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\ncursorOptions=2048, boundParams=0x0) at postgres.c:847\n#19 0x0000000000693e50 in ExplainOneQuery (query=0x2f56d30,\ncursorOptions=2048, into=0x0, es=0x2fa0920,\n queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\nparams=0x0, queryEnv=0x0) at explain.c:397\n#20 0x00000000006939a5 in ExplainQuery (pstate=0x2f9e0a0, stmt=0x2f56b50,\nparams=0x0, dest=0x2f9e008) at explain.c:281\n#21 0x0000000000981de8 in standard_ProcessUtility (pstmt=0x2fd2220,\n queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\nreadOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\nqueryEnv=0x0, dest=0x2f9e008, qc=0x7ffea91f9aa0) at utility.c:862\n#22 0x0000000000981585 in ProcessUtility (pstmt=0x2fd2220,\n queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\nreadOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\nqueryEnv=0x0, dest=0x2f9e008, qc=0x7ffea91f9aa0) at utility.c:527\n#23 0x00000000009801ba in PortalRunUtility (portal=0x2f10180,\npstmt=0x2fd2220, isTopLevel=true, setHoldSnapshot=true, dest=0x2f9e008,\nqc=0x7ffea91f9aa0) at pquery.c:1155\n#24 0x000000000097ff20 in FillPortalStore (portal=0x2f10180,\nisTopLevel=true) at pquery.c:1028\n#25 0x000000000097f883 in PortalRun (portal=0x2f10180,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2fd2310,\naltdest=0x2fd2310, qc=0x7ffea91f9c60) at pquery.c:760\n#26 0x00000000009795d1 in exec_simple_query (\n query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\nt1.c,t2.c,t3.c,t1.d,t2.d,t3.d 
FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\nt2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\")\nat postgres.c:1214\n#27 0x000000000097da8d in PostgresMain (dbname=0x2ed8068 \"postgres\",\nusername=0x2ed8048 \"edb\") at postgres.c:4497\n#28 0x00000000008b9699 in BackendRun (port=0x2ecfd00) at postmaster.c:4560\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\n\nOn Mon, Oct 11, 2021 at 11:05 AM Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> wrote:\n\n> Thanks for the patch, it applied cleanly and fixed the reported issue. I\n> observed another case where\n> In case of multi-col list partition on the same column query is not\n> picking partition wise join. Is this expected?\n>\n> CREATE TABLE plt1 (a int, b int, c varchar) PARTITION BY LIST(c,c);\n> CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n> (('0001','0001'),('0002','0002'),('0003','0003'));\n> CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN\n> (('0004','0004'),('0005','0005'),('0006','0006'));\n> CREATE TABLE plt1_p3 PARTITION OF plt1 DEFAULT;\n> INSERT INTO plt1 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM\n> generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\n> ANALYSE plt1;\n> CREATE TABLE plt2 (a int, b int, c varchar) PARTITION BY LIST(c,c);\n> CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN\n> (('0001','0001'),('0002','0002'),('0003','0003'));\n> CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN\n> (('0004','0004'),('0005','0005'),('0006','0006'));\n> CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;\n> INSERT INTO plt2 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM\n> generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\n> ANALYSE plt2;\n> SET enable_partitionwise_join TO true;\n> EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN\n> plt2 t2 ON t1.c = t2.c;\n>\n> postgres=# EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1\n> INNER JOIN plt2 t2 ON t1.c = t2.c;\n> QUERY PLAN\n> 
--------------------------------------------\n> Hash Join\n> Hash Cond: ((t1.c)::text = (t2.c)::text)\n> -> Append\n> -> Seq Scan on plt1_p1 t1_1\n> -> Seq Scan on plt1_p2 t1_2\n> -> Seq Scan on plt1_p3 t1_3\n> -> Hash\n> -> Append\n> -> Seq Scan on plt2_p1 t2_1\n> -> Seq Scan on plt2_p2 t2_2\n> -> Seq Scan on plt2_p3 t2_3\n> (11 rows)\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n>\n> On Thu, Oct 7, 2021 at 6:03 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\n> wrote:\n>\n>> Thanks Rajkumar for testing.\n>>\n>> > I think it should throw an error as the partition by list has only 1\n>> column but we are giving 2 values.\n>>\n>> I also agree that it should throw an error in the above case. Fixed the\n>> issue in the attached patch. Also added related test cases to the\n>> regression test suite.\n>>\n>>\n>> > also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n>> instead of ('0001','0001').\n>>\n>> Now throwing errors in the initial stage, this case doesn't arise.\n>>\n>> Please share if you find any other issues.\n>>\n>> Thanks & Regards,\n>> Nitin Jadhav\n>>\n>>\n>>\n>>\n>>\n>> On Thu, Oct 7, 2021 at 4:05 PM Rajkumar Raghuwanshi <\n>> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>\n>>> Thanks Nitin,\n>>>\n>>> v4 patches applied cleanly and make check is passing now. While testing\n>>> further I observed that if multiple values are given for a single\n>>> column list partition it is not giving error instead it is changing\n>>> values itself. 
Please find the example below.\n>>>\n>>> postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);\n>>> CREATE TABLE\n>>> postgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n>>> (('0001','0001'),('0002','0002'));\n>>> CREATE TABLE\n>>> postgres=# \\d+ plt1;\n>>> Partitioned table \"public.plt1\"\n>>> Column | Type | Collation | Nullable | Default | Storage\n>>> | Compression | Stats target | Description\n>>>\n>>> --------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n>>> a | integer | | | | plain\n>>> | | |\n>>> b | character varying | | | | extended\n>>> | | |\n>>> Partition key: LIST (b)\n>>> Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')\n>>>\n>>> I think it should throw an error as the partition by list has only 1\n>>> column but we are giving 2 values.\n>>> also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n>>> instead of ('0001','0001').\n>>>\n>>> Thanks & Regards,\n>>> Rajkumar Raghuwanshi\n>>>\n>>>\n>>>\n>>> On Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <\n>>> nitinjadhavpostgres@gmail.com> wrote:\n>>>\n>>>> > > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n>>>> failing with below errors.\n>>>> >\n>>>> > Thanks Rajkumar for testing.\n>>>> >\n>>>> > Here's a v2 of the delta patch that should fix both of these test\n>>>> > failures. As I mentioned in my last reply, my delta patch fixed what\n>>>> > I think were problems in Nitin's v3 patch but were not complete by\n>>>> > themselves. Especially, I hadn't bothered to investigate various /*\n>>>> > TODO: handle multi-column list partitioning */ sites to deal with my\n>>>> > own changes.\n>>>>\n>>>> Thanks Rajkumar for testing and Thank you Amit for working on v2 of\n>>>> the delta patch. Actually I had done the code changes related to\n>>>> partition-wise join and I was in the middle of fixing the review\n>>>> comments, So I could not share the patch. 
Anyways thanks for your
>>>> efforts.
>>>>
>>>> > I noticed that multi-column list partitions containing NULLs don't
>>>> > work correctly with partition pruning yet.
>>>> >
>>>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);
>>>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL,
>>>> 1, false));
>>>> > create table p02 partition of p0 for values in ((1, NULL, false));
>>>> > explain select * from p0 where a is null;
>>>> > QUERY PLAN
>>>> > --------------------------------------------------------
>>>> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)
>>>> > Filter: (a IS NULL)
>>>> > (2 rows)
>>>> >
>>>> > In the attached updated version, I've dealt with some of those such
>>>> > that at least the existing cases exercising partition pruning and
>>>> > partition wise joins now pass.
>>>>
>>>> wrt partition pruning, I have checked the output of the above case
>>>> with the v2 version of the delta patch and without that. The output
>>>> remains the same. Kindly let me know if I am missing something. But I
>>>> feel the above output is correct as the partition p01 is the only
>>>> partition which contains a NULL value for column a, hence it is
>>>> showing "Seq scan on p01" in the output. Kindly correct me if I am
>>>> wrong. I feel the code changes related to 'null_keys' are not
>>>> required, hence have not incorporated them in the attached patch.
>>>>
>>>> wrt partition-wise join, I had run the regression test (with new cases
>>>> related to partition-wise join) on v2 of the delta patch and observed
>>>> the crash. Hence I have not incorporated the partition-wise join
>>>> related code from the v2 delta patch into the main v4 patch. Instead I
>>>> have added the partition-wise join related code done by me in the
>>>> attached patch. Please share your thoughts and if possible we can
>>>> improve the
>>>> code. 
Rest of the changes looks good to me and I have incorporated\n>>>> that in the attached patch.\n>>>>\n>>>>\n>>>> > I guess that may be due to the following newly added code being\n>>>> incomplete:\n>>>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>>>> > multiple partitions can now contain NULL values.\n>>>>\n>>>> I feel this function is not required at all as we are not separating\n>>>> the non null and null partitions now. Removed in the attached patch.\n>>>> Also removed the \"scan_null' variable from the structure\n>>>> \"PruneStepResult\" and cleaned up the corresponding code blocks.\n>>>>\n>>>>\n>>>> > This function name may be too generic. Given that it is specific to\n>>>> > implementing list bound de-duplication, maybe the following signature\n>>>> > is more appropriate:\n>>>> >\n>>>> > static bool\n>>>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>>>\n>>>> Yes. The function name looks more generic. How about using\n>>>> \"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\n>>>> let me know if that does not look correct.\n>>>>\n>>>>\n>>>> > Also, better if the function comment mentions those parameter names,\n>>>> like:\n>>>> >\n>>>> > \"Returns TRUE if the list bound element 'new_bound' is already present\n>>>> > in the target list 'list_bounds', FALSE otherwise.\"\n>>>>\n>>>> Fixed.\n>>>>\n>>>>\n>>>> > +/*\n>>>> > + * transformPartitionListBounds\n>>>> > + *\n>>>> > + * Converts the expressions of list partition bounds from the raw\n>>>> grammar\n>>>> > + * representation.\n>>>> >\n>>>> > A sentence about the result format would be helpful, like:\n>>>> >\n>>>> > The result is a List of Lists of Const nodes to account for the\n>>>> > partition key possibly containing more than one column.\n>>>>\n>>>> Fixed.\n>>>>\n>>>>\n>>>> > + int i = 0;\n>>>> > + int j = 0;\n>>>> >\n>>>> > Better to initialize such loop counters closer to the loop.\n>>>>\n>>>> Fixed in all the places.\n>>>>\n>>>>\n>>>> > + colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n>>>> > + colname[i] = get_attname(RelationGetRelid(parent),\n>>>> > + key->partattrs[i], false);\n>>>> >\n>>>> > The palloc in the 1st statement is wasteful, because the 2nd statement\n>>>> > overwrites its pointer by the pointer to the string palloc'd by\n>>>> > get_attname().\n>>>>\n>>>> Removed the 1st statement as it is not required.\n>>>>\n>>>>\n>>>> > + ListCell *cell2 = NULL;\n>>>> >\n>>>> > No need to explicitly initialize the loop variable.\n>>>>\n>>>> Fixed in all the places.\n>>>>\n>>>>\n>>>> > + RowExpr *rowexpr = NULL;\n>>>> > +\n>>>> > + if (!IsA(expr, RowExpr))\n>>>> > + ereport(ERROR,\n>>>> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>>>> > + errmsg(\"Invalid list bound specification\"),\n>>>> > + parser_errposition(pstate, exprLocation((Node\n>>>> > *) spec))));\n>>>> > +\n>>>> > + rowexpr = (RowExpr *) expr;\n>>>> >\n>>>> > It's okay to assign rowexpr at the top here instead of the dummy\n>>>> > NULL-initialization and write the condition as:\n>>>> >\n>>>> 
> if (!IsA(rowexpr, RowExpr))\n>>>>\n>>>> Fixed.\n>>>>\n>>>>\n>>>> > + if (isDuplicate)\n>>>> > + continue;\n>>>> > +\n>>>> > + result = lappend(result, values);\n>>>> >\n>>>> > I can see you copied this style from the existing code, but how about\n>>>> > writing this simply as:\n>>>> >\n>>>> > if (!isDuplicate)\n>>>> > result = lappend(result, values);\n>>>>\n>>>> This looks good. I have changed in the patch.\n>>>>\n>>>>\n>>>> > -/* One value coming from some (index'th) list partition */\n>>>> > +/* One bound of a list partition */\n>>>> > typedef struct PartitionListValue\n>>>> > {\n>>>> > int index;\n>>>> > - Datum value;\n>>>> > + Datum *values;\n>>>> > + bool *isnulls;\n>>>> > } PartitionListValue;\n>>>> >\n>>>> > Given that this is a locally-defined struct, I wonder if it makes\n>>>> > sense to rename the struct while we're at it. Call it, say,\n>>>> > PartitionListBound?\n>>>>\n>>>> Yes. PartitionListBound looks more appropriate and it also matches the\n>>>> similar structures of the other partition strategies.\n>>>>\n>>>> > Also, please keep part of the existing comment that says that the\n>>>> > bound belongs to index'th partition.\n>>>>\n>>>> Retained the old comment.\n>>>>\n>>>>\n>>>> > + * partition_bound_accepts_nulls\n>>>> > + *\n>>>> > + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>>> > */\n>>>> >\n>>>> > I suggest slight rewording, as follows:\n>>>> >\n>>>> > \"Returns TRUE if any of the partition bounds contains a NULL value,\n>>>> > FALSE otherwise.\"\n>>>>\n>>>> Fixed.\n>>>>\n>>>>\n>>>> > - PartitionListValue *all_values;\n>>>> > + PartitionListValue **all_values;\n>>>> > ...\n>>>> > - all_values = (PartitionListValue *)\n>>>> > - palloc(ndatums * sizeof(PartitionListValue));\n>>>> > + ndatums = get_list_datum_count(boundspecs, nparts);\n>>>> > + all_values = (PartitionListValue **)\n>>>> > + palloc(ndatums * sizeof(PartitionListValue *));\n>>>> >\n>>>> > I don't see the need to redefine all_values's pointer type. 
No need\n>>>> > to palloc PartitionListValue repeatedly for every datum as done\n>>>> > further down as follows:\n>>>> >\n>>>> > + all_values[j] = (PartitionListValue *)\n>>>> > palloc(sizeof(PartitionListValue));\n>>>> >\n>>>> > You do need the following two though:\n>>>> >\n>>>> > + all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>>>> > sizeof(Datum));\n>>>> > + all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>>>> > sizeof(bool));\n>>>> >\n>>>> > If you change the above the way I suggest, you'd also need to revert\n>>>> > the following change:\n>>>> >\n>>>> > - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>>>> > + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>>> > qsort_partition_list_value_cmp, (void *) key);\n>>>> >\n>>>> > + int orig_index = all_values[i]->index;\n>>>> > + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>>>> sizeof(Datum));\n>>>> >\n>>>> > Missing a newline between these two statements.\n>>>>\n>>>> Fixed. Made necessary changes to keep the intent of existing code.\n>>>>\n>>>>\n>>>> > @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>>>> > *parttyplen, bool *parttypbyval,\n>>>> > if (b1->nindexes != b2->nindexes)\n>>>> > return false;\n>>>> >\n>>>> > - if (b1->null_index != b2->null_index)\n>>>> > + if (get_partition_bound_null_index(b1) !=\n>>>> > get_partition_bound_null_index(b2))\n>>>> >\n>>>> > As mentioned in the last message, this bit in partition_bounds_equal()\n>>>> > needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>>>> > after fixing get_partition_bound_null_index() as previously mentioned.\n>>>>\n>>>> As mentioned earlier, removed the functionality of\n>>>> get_partition_bound_null_index(), hence the above condition is not\n>>>> required and removed.\n>>>>\n>>>> > But...\n>>>> >\n>>>> > @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>>>> > *parttyplen, bool *parttypbyval,\n>>>> > * context. 
datumIsEqual() should be simple enough to\n>>>> be\n>>>> > * safe.\n>>>> > */\n>>>> > - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>>>> > + if (b1->isnulls)\n>>>> > + b1_isnull = b1->isnulls[i][j];\n>>>> > + if (b2->isnulls)\n>>>> > + b2_isnull = b2->isnulls[i][j];\n>>>> > +\n>>>> > + /*\n>>>> > + * If any of the partition bound has NULL value, then\n>>>> check\n>>>> > + * equality for the NULL value instead of comparing\n>>>> the datums\n>>>> > + * as it does not contain valid value in case of NULL.\n>>>> > + */\n>>>> > + if (b1_isnull || b2_isnull)\n>>>> > + {\n>>>> > + if (b1_isnull != b2_isnull)\n>>>> > + return false;\n>>>> > + }\n>>>> >\n>>>> > ...if you have this in the main loop, I don't think we need the above\n>>>> > code stanza which appears to implement a short-cut for this long-form\n>>>> > logic.\n>>>>\n>>>> Yes. May be we could have ignored the above code stanza if we would\n>>>> have comparing the null indexes using get_partition_bound_null_index()\n>>>> in the beginning of the function. But hence we are not separating the\n>>>> non null partitions and null partitions, I would like to keep the\n>>>> logic in the inner loop as we are doing it for non null bound values\n>>>> in the above code stanza, just to give a feel that null bound values\n>>>> are also handled the same way as non null values. 
Please correct me if\n>>>> I am wrong.\n>>>>\n>>>>\n>>>> > + (key->strategy != PARTITION_STRATEGY_LIST ||\n>>>> > + !src->isnulls[i][j]))\n>>>> >\n>>>> > I think it's better to write this condition as follows just like the\n>>>> > accompanying condition involving src->kind:\n>>>> >\n>>>> > (src->nulls == NULL || !src->isnulls[i][j])\n>>>>\n>>>> Fixed.\n>>>>\n>>>>\n>>>> > In check_new_partition_bound():\n>>>> >\n>>>> > + Datum *values = (Datum *)\n>>>> > palloc0(key->partnatts * sizeof(Datum));\n>>>> > + bool *isnulls = (bool *)\n>>>> > palloc0(key->partnatts * sizeof(bool));\n>>>> >\n>>>> > Doesn't seem like a bad idea to declare these as:\n>>>> >\n>>>> > Datum values[PARTITION_MAX_KEYS];\n>>>> > bool isnulls[PARTITION_MAX_KEYS];\n>>>>\n>>>> Thanks for the suggestion. I have changed as above.\n>>>>\n>>>> > I looked at get_qual_for_list_multi_column() and immediately thought\n>>>> > that it may be a bad idea. I think it's better to integrate the logic\n>>>> > for multi-column case into the existing function even if that makes\n>>>> > the function appear more complex. Having two functions with the same\n>>>> > goal and mostly the same code is not a good idea mainly because it\n>>>> > becomes a maintenance burden.\n>>>>\n>>>> Actually I had written a separate function because of the complexity.\n>>>> Now I have understood that since the objective is same, it should be\n>>>> done in a single function irrespective of complexity.\n>>>>\n>>>> > I have attempted a rewrite such that get_qual_for_list() now handles\n>>>> > both the single-column and multi-column cases. Changes included in\n>>>> > the delta patch. The patch updates some outputs of the newly added\n>>>> > tests for multi-column list partitions, because the new code emits the\n>>>> > IS NOT NULL tests a bit differently than\n>>>> > get_qual_for_list_mutli_column() would. 
Notably, the old approach\n>>>> > would emit IS NOT NULL for every non-NULL datum matched to a given\n>>>> > column, not just once for the column. However, the patch makes a few\n>>>> > other tests fail, mainly because I had to fix\n>>>> > partition_bound_accepts_nulls() to handle the multi-column case,\n>>>> > though didn't bother to update all callers of it to also handle the\n>>>> > multi-column case correctly. I guess that's a TODO you're going to\n>>>> > deal with at some point anyway. :)\n>>>>\n>>>> Thank you very much for your efforts. The changes looks good to me and\n>>>> I have incorporated these changes in the attached patch.\n>>>>\n>>>> I have completed the coding for all the TODOs and hence removed in the\n>>>> patch. The naming conventions used for function/variable names varies\n>>>> across the files. Some places it is like 'namesLikeThis' and in some\n>>>> place it is like 'names_like_this'. I have used the naming conventions\n>>>> based on the surrounding styles used. I am happy to change those if\n>>>> required.\n>>>>\n>>>> I have verified 'make check' with the attached patch and it is working\n>>>> fine.\n>>>>\n>>>>\n>>>> Thanks & Regards,\n>>>> Nitin Jadhav\n>>>>\n>>>>\n>>>> On Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n>>>> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>>> >\n>>>> > On PG head + Nitin's v3 patch + Amit's Delta patch. 
Make check is\n>>>> failing with below errors.\n>>>> >\n>>>> > --inherit.sql is failing with error :\"ERROR: negative bitmapset\n>>>> member not allowed\"\n>>>> > update mlparted_tab mlp set c = 'xxx'\n>>>> > from\n>>>> > (select a from some_tab union all select a+1 from some_tab) ss (a)\n>>>> > where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n>>>> > ERROR: negative bitmapset member not allowed\n>>>> >\n>>>> > --partition_join.sql is crashing with enable_partitionwise_join set\n>>>> to true.\n>>>> > CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n>>>> > CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001',\n>>>> '0003');\n>>>> > CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004',\n>>>> '0006');\n>>>> > CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008',\n>>>> '0009');\n>>>> > INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>>>> generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n>>>> > ANALYZE plt1_adv;\n>>>> > CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n>>>> > CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002',\n>>>> '0003');\n>>>> > CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004',\n>>>> '0006');\n>>>> > CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007',\n>>>> '0009');\n>>>> > INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>>>> generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n>>>> > ANALYZE plt2_adv;\n>>>> > -- inner join\n>>>> > EXPLAIN (COSTS OFF)\n>>>> > SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2\n>>>> ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n>>>> > server closed the connection unexpectedly\n>>>> > This probably means the server terminated abnormally\n>>>> > before or while processing the request.\n>>>> > connection to server was lost\n>>>> >\n>>>> >\n>>>> > --stack-trace\n>>>> > Core was 
generated by `postgres: edb regression [local] EXPLAIN\n>>>> '.\n>>>> > Program terminated with signal 6, Aborted.\n>>>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>>>> > Missing separate debuginfos, use: debuginfo-install\n>>>> glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\n>>>> krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64\n>>>> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64\n>>>> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\n>>>> zlib-1.2.7-17.el7.x86_64\n>>>> > (gdb) bt\n>>>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>>>> > #1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n>>>> > #2 0x0000000000b0fbc3 in ExceptionalCondition\n>>>> (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3\n>>>> \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)\n>>>> > at assert.c:69\n>>>> > #3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0,\n>>>> part_index=-1) at partbounds.c:1957\n>>>> > #4 0x00000000008919bd in merge_list_bounds (partnatts=1,\n>>>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>>>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>>>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>>>> partbounds.c:1529\n>>>> > #5 0x00000000008910de in partition_bounds_merge (partnatts=1,\n>>>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>>>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>>>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>>>> partbounds.c:1223\n>>>> > #6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0,\n>>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>>> parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>>>> > parts2=0x7fffd67751a8) at joinrels.c:1644\n>>>> > #7 0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0,\n>>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>>> parent_sjinfo=0x7fffd67752a0, 
parent_restrictlist=0x1ab3318)\n>>>> > at joinrels.c:1402\n>>>> > #8 0x000000000082aea2 in populate_joinrel_with_paths\n>>>> (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>>> sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)\n>>>> > at joinrels.c:926\n>>>> > #9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0,\n>>>> rel1=0x19b37c0, rel2=0x1922938) at joinrels.c:760\n>>>> > #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0,\n>>>> old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at\n>>>> joinrels.c:312\n>>>> > #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0,\n>>>> level=2) at joinrels.c:123\n>>>> > #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0,\n>>>> levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n>>>> > #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0,\n>>>> joinlist=0x199d538) at allpaths.c:2951\n>>>> > #14 0x000000000080816b in make_one_rel (root=0x1a19ed0,\n>>>> joinlist=0x199d538) at allpaths.c:228\n>>>> > #15 0x000000000084491d in query_planner (root=0x1a19ed0,\n>>>> qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at\n>>>> planmain.c:276\n>>>> > #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0,\n>>>> tuple_fraction=0) at planner.c:1447\n>>>> > #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8,\n>>>> parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\n>>>> planner.c:1025\n>>>> > #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>>> boundParams=0x0) at planner.c:406\n>>>> > #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN 
plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>>> boundParams=0x0) at planner.c:277\n>>>> > #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>>> boundParams=0x0) at postgres.c:847\n>>>> > #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290,\n>>>> cursorOptions=2048, into=0x0, es=0x19b36f0,\n>>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>>> > params=0x0, queryEnv=0x0) at explain.c:397\n>>>> > #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410,\n>>>> stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n>>>> > #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n>>>> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n>>>> > #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n>>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n>>>> queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n>>>> > #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40,\n>>>> pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378,\n>>>> qc=0x7fffd6775f90) 
at pquery.c:1147\n>>>> > #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40,\n>>>> isTopLevel=true) at pquery.c:1026\n>>>> > #27 0x000000000097ed11 in PortalRun (portal=0x1893b40,\n>>>> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8,\n>>>> altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n>>>> > #28 0x0000000000978aa5 in exec_simple_query (\n>>>> >\n>>>> > Thanks & Regards,\n>>>> > Rajkumar Raghuwanshi\n>>>> >\n>>>> >\n>>>> > On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com>\n>>>> wrote:\n>>>> >>\n>>>> >> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com>\n>>>> wrote:\n>>>> >> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>>>> >> > <nitinjadhavpostgres@gmail.com> wrote:\n>>>> >> > > The attached patch also fixes the above comments.\n>>>> >> >\n>>>> >> > I noticed that multi-column list partitions containing NULLs don't\n>>>> >> > work correctly with partition pruning yet.\n>>>> >> >\n>>>> >> > create table p0 (a int, b text, c bool) partition by list (a, b,\n>>>> c);\n>>>> >> > create table p01 partition of p0 for values in ((1, 1, true),\n>>>> (NULL, 1, false));\n>>>> >> > create table p02 partition of p0 for values in ((1, NULL, false));\n>>>> >> > explain select * from p0 where a is null;\n>>>> >> > QUERY PLAN\n>>>> >> > --------------------------------------------------------\n>>>> >> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>>>> >> > Filter: (a IS NULL)\n>>>> >> > (2 rows)\n>>>> >> >\n>>>> >> > I guess that may be due to the following newly added code being\n>>>> incomplete:\n>>>> >> >\n>>>> >> > +/*\n>>>> >> > + * get_partition_bound_null_index\n>>>> >> > + *\n>>>> >> > + * Returns the partition index of the partition bound which\n>>>> accepts NULL.\n>>>> >> > + */\n>>>> >> > +int\n>>>> >> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>>>> >> > +{\n>>>> >> > + int i = 0;\n>>>> >> > + int j = 0;\n>>>> >> > +\n>>>> >> > + if 
(!boundinfo->isnulls)\n>>>> >> > + return -1;\n>>>> >> >\n>>>> >> > - if (!val->constisnull)\n>>>> >> > - count++;\n>>>> >> > + for (i = 0; i < boundinfo->ndatums; i++)\n>>>> >> > + {\n>>>> >> > + //TODO: Handle for multi-column cases\n>>>> >> > + for (j = 0; j < 1; j++)\n>>>> >> > + {\n>>>> >> > + if (boundinfo->isnulls[i][j])\n>>>> >> > + return boundinfo->indexes[i];\n>>>> >> > }\n>>>> >> > }\n>>>> >> >\n>>>> >> > + return -1;\n>>>> >> > +}\n>>>> >> >\n>>>> >> > Maybe this function needs to return a \"bitmapset\" of indexes,\n>>>> because\n>>>> >> > multiple partitions can now contain NULL values.\n>>>> >> >\n>>>> >> > Some other issues I noticed and suggestions for improvement:\n>>>> >> >\n>>>> >> > +/*\n>>>> >> > + * checkForDuplicates\n>>>> >> > + *\n>>>> >> > + * Returns TRUE if the list bound element is already present in\n>>>> the list of\n>>>> >> > + * list bounds, FALSE otherwise.\n>>>> >> > + */\n>>>> >> > +static bool\n>>>> >> > +checkForDuplicates(List *source, List *searchElem)\n>>>> >> >\n>>>> >> > This function name may be too generic. 
Given that it is specific\n>>>> to\n>>>> >> > implementing list bound de-duplication, maybe the following\n>>>> signature\n>>>> >> > is more appropriate:\n>>>> >> >\n>>>> >> > static bool\n>>>> >> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>>> >> >\n>>>> >> > Also, better if the function comment mentions those parameter\n>>>> names, like:\n>>>> >> >\n>>>> >> > \"Returns TRUE if the list bound element 'new_bound' is already\n>>>> present\n>>>> >> > in the target list 'list_bounds', FALSE otherwise.\"\n>>>> >> >\n>>>> >> > +/*\n>>>> >> > + * transformPartitionListBounds\n>>>> >> > + *\n>>>> >> > + * Converts the expressions of list partition bounds from the raw\n>>>> grammar\n>>>> >> > + * representation.\n>>>> >> >\n>>>> >> > A sentence about the result format would be helpful, like:\n>>>> >> >\n>>>> >> > The result is a List of Lists of Const nodes to account for the\n>>>> >> > partition key possibly containing more than one column.\n>>>> >> >\n>>>> >> > + int i = 0;\n>>>> >> > + int j = 0;\n>>>> >> >\n>>>> >> > Better to initialize such loop counters closer to the loop.\n>>>> >> >\n>>>> >> > + colname[i] = (char *) palloc0(NAMEDATALEN *\n>>>> sizeof(char));\n>>>> >> > + colname[i] = get_attname(RelationGetRelid(parent),\n>>>> >> > + key->partattrs[i], false);\n>>>> >> >\n>>>> >> > The palloc in the 1st statement is wasteful, because the 2nd\n>>>> statement\n>>>> >> > overwrites its pointer by the pointer to the string palloc'd by\n>>>> >> > get_attname().\n>>>> >> >\n>>>> >> > + ListCell *cell2 = NULL;\n>>>> >> >\n>>>> >> > No need to explicitly initialize the loop variable.\n>>>> >> >\n>>>> >> > + RowExpr *rowexpr = NULL;\n>>>> >> > +\n>>>> >> > + if (!IsA(expr, RowExpr))\n>>>> >> > + ereport(ERROR,\n>>>> >> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>>>> >> > + errmsg(\"Invalid list bound specification\"),\n>>>> >> > + parser_errposition(pstate,\n>>>> exprLocation((Node\n>>>> >> > *) spec))));\n>>>> >> > +\n>>>> >> > + rowexpr = 
(RowExpr *) expr;\n>>>> >> >\n>>>> >> > It's okay to assign rowexpr at the top here instead of the dummy\n>>>> >> > NULL-initialization and write the condition as:\n>>>> >> >\n>>>> >> > if (!IsA(rowexpr, RowExpr))\n>>>> >> >\n>>>> >> > + if (isDuplicate)\n>>>> >> > + continue;\n>>>> >> > +\n>>>> >> > + result = lappend(result, values);\n>>>> >> >\n>>>> >> > I can see you copied this style from the existing code, but how\n>>>> about\n>>>> >> > writing this simply as:\n>>>> >> >\n>>>> >> > if (!isDuplicate)\n>>>> >> > result = lappend(result, values);\n>>>> >> >\n>>>> >> > -/* One value coming from some (index'th) list partition */\n>>>> >> > +/* One bound of a list partition */\n>>>> >> > typedef struct PartitionListValue\n>>>> >> > {\n>>>> >> > int index;\n>>>> >> > - Datum value;\n>>>> >> > + Datum *values;\n>>>> >> > + bool *isnulls;\n>>>> >> > } PartitionListValue;\n>>>> >> >\n>>>> >> > Given that this is a locally-defined struct, I wonder if it makes\n>>>> >> > sense to rename the struct while we're at it. 
Call it, say,\n>>>> >> > PartitionListBound?\n>>>> >> >\n>>>> >> > Also, please keep part of the existing comment that says that the\n>>>> >> > bound belongs to index'th partition.\n>>>> >> >\n>>>> >> > Will send more comments in a bit...\n>>>> >>\n>>>> >> + * partition_bound_accepts_nulls\n>>>> >> + *\n>>>> >> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>>> >> */\n>>>> >>\n>>>> >> I suggest slight rewording, as follows:\n>>>> >>\n>>>> >> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>>>> >> FALSE otherwise.\"\n>>>> >>\n>>>> >> - PartitionListValue *all_values;\n>>>> >> + PartitionListValue **all_values;\n>>>> >> ...\n>>>> >> - all_values = (PartitionListValue *)\n>>>> >> - palloc(ndatums * sizeof(PartitionListValue));\n>>>> >> + ndatums = get_list_datum_count(boundspecs, nparts);\n>>>> >> + all_values = (PartitionListValue **)\n>>>> >> + palloc(ndatums * sizeof(PartitionListValue *));\n>>>> >>\n>>>> >> I don't see the need to redefine all_values's pointer type. 
No need\n>>>> >> to palloc PartitionListValue repeatedly for every datum as done\n>>>> >> further down as follows:\n>>>> >>\n>>>> >> + all_values[j] = (PartitionListValue *)\n>>>> >> palloc(sizeof(PartitionListValue));\n>>>> >>\n>>>> >> You do need the following two though:\n>>>> >>\n>>>> >> + all_values[j]->values = (Datum *) palloc0(key->partnatts\n>>>> *\n>>>> >> sizeof(Datum));\n>>>> >> + all_values[j]->isnulls = (bool *) palloc0(key->partnatts\n>>>> *\n>>>> >> sizeof(bool));\n>>>> >>\n>>>> >> If you change the above the way I suggest, you'd also need to revert\n>>>> >> the following change:\n>>>> >>\n>>>> >> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>>>> >> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>>> >> qsort_partition_list_value_cmp, (void *) key);\n>>>> >>\n>>>> >> + int orig_index = all_values[i]->index;\n>>>> >> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>>>> sizeof(Datum));\n>>>> >>\n>>>> >> Missing a newline between these two statements.\n>>>> >>\n>>>> >> BTW, I noticed that the boundDatums variable is no longer used in\n>>>> >> create_list_bounds. I traced back its origin and found that a recent\n>>>> >> commit 53d86957e98 introduced it to implement an idea to reduce the\n>>>> >> finer-grained pallocs that were being done in create_list_bounds().\n>>>> I\n>>>> >> don't think that this patch needs to throw away that work. 
You can\n>>>> >> make it work as the attached delta patch that applies on top of v3.\n>>>> >> Please check.\n>>>> >>\n>>>> >> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>>>> >> *parttyplen, bool *parttypbyval,\n>>>> >> if (b1->nindexes != b2->nindexes)\n>>>> >> return false;\n>>>> >>\n>>>> >> - if (b1->null_index != b2->null_index)\n>>>> >> + if (get_partition_bound_null_index(b1) !=\n>>>> >> get_partition_bound_null_index(b2))\n>>>> >>\n>>>> >> As mentioned in the last message, this bit in\n>>>> partition_bounds_equal()\n>>>> >> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>>>> >> after fixing get_partition_bound_null_index() as previously\n>>>> mentioned.\n>>>> >>\n>>>> >> But...\n>>>> >>\n>>>> >> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>>>> >> *parttyplen, bool *parttypbyval,\n>>>> >> * context. datumIsEqual() should be simple enough\n>>>> to be\n>>>> >> * safe.\n>>>> >> */\n>>>> >> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>>>> >> + if (b1->isnulls)\n>>>> >> + b1_isnull = b1->isnulls[i][j];\n>>>> >> + if (b2->isnulls)\n>>>> >> + b2_isnull = b2->isnulls[i][j];\n>>>> >> +\n>>>> >> + /*\n>>>> >> + * If any of the partition bound has NULL value,\n>>>> then check\n>>>> >> + * equality for the NULL value instead of comparing\n>>>> the datums\n>>>> >> + * as it does not contain valid value in case of\n>>>> NULL.\n>>>> >> + */\n>>>> >> + if (b1_isnull || b2_isnull)\n>>>> >> + {\n>>>> >> + if (b1_isnull != b2_isnull)\n>>>> >> + return false;\n>>>> >> + }\n>>>> >>\n>>>> >> ...if you have this in the main loop, I don't think we need the above\n>>>> >> code stanza which appears to implement a short-cut for this long-form\n>>>> >> logic.\n>>>> >>\n>>>> >> + (key->strategy != PARTITION_STRATEGY_LIST ||\n>>>> >> + !src->isnulls[i][j]))\n>>>> >>\n>>>> >> I think it's better to write this condition as follows just like the\n>>>> >> accompanying condition involving src->kind:\n>>>> >>\n>>>> 
>> (src->nulls == NULL || !src->isnulls[i][j])\n>>>> >>\n>>>> >> (Skipped looking at merge_list_bounds() and related changes for now\n>>>> as\n>>>> >> I see a lot of TODOs remain to be done.)\n>>>> >>\n>>>> >> In check_new_partition_bound():\n>>>> >>\n>>>> >> + Datum *values = (Datum *)\n>>>> >> palloc0(key->partnatts * sizeof(Datum));\n>>>> >> + bool *isnulls = (bool *)\n>>>> >> palloc0(key->partnatts * sizeof(bool));\n>>>> >>\n>>>> >> Doesn't seem like a bad idea to declare these as:\n>>>> >>\n>>>> >> Datum values[PARTITION_MAX_KEYS];\n>>>> >> bool isnulls[PARTITION_MAX_KEYS];\n>>>> >>\n>>>> >>\n>>>> >> I looked at get_qual_for_list_multi_column() and immediately thought\n>>>> >> that it may be a bad idea. I think it's better to integrate the\n>>>> logic\n>>>> >> for multi-column case into the existing function even if that makes\n>>>> >> the function appear more complex. Having two functions with the same\n>>>> >> goal and mostly the same code is not a good idea mainly because it\n>>>> >> becomes a maintenance burden.\n>>>> >>\n>>>> >> I have attempted a rewrite such that get_qual_for_list() now handles\n>>>> >> both the single-column and multi-column cases. Changes included in\n>>>> >> the delta patch. The patch updates some outputs of the newly added\n>>>> >> tests for multi-column list partitions, because the new code emits\n>>>> the\n>>>> >> IS NOT NULL tests a bit differently than\n>>>> >> get_qual_for_list_mutli_column() would. Notably, the old approach\n>>>> >> would emit IS NOT NULL for every non-NULL datum matched to a given\n>>>> >> column, not just once for the column. However, the patch makes a few\n>>>> >> other tests fail, mainly because I had to fix\n>>>> >> partition_bound_accepts_nulls() to handle the multi-column case,\n>>>> >> though didn't bother to update all callers of it to also handle the\n>>>> >> multi-column case correctly. I guess that's a TODO you're going to\n>>>> >> deal with at some point anyway. 
:)
>>>> >>
>>>> >> I still have more than half of v3 left to look at, so will continue
>>>> >> looking. In the meantime, please check the changes I suggested,
>>>> >> including the delta patch, and let me know your thoughts.
>>>> >>
>>>> >> --
>>>> >> Amit Langote
>>>> >> EDB: http://www.enterprisedb.com
>>>>
>>>

Hi Nitin,

While testing further I got a crash with partition wise join enabled for multi-col list partitions. please find test case & stack-trace below.

SET enable_partitionwise_join TO on;

CREATE TABLE plt1 (c varchar, d varchar) PARTITION BY LIST(c,d);
CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'),(NULL,NULL));
CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));
INSERT INTO plt1 SELECT to_char(i % 11, 'FM0000'), to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10,3,7,8,9);
INSERT INTO plt1 SELECT NULL,NULL FROM generate_series(0, 500) i WHERE i % 11 IN (3);
ANALYSE plt1;

CREATE TABLE plt2 (c varchar, d varchar) PARTITION BY LIST(c,d);
CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN (('0001','0001'),('0002','0002'));
CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));
CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;
INSERT INTO plt2 SELECT to_char(i % 11, 'FM0000'), to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10,3);
INSERT INTO plt2 SELECT NULL,NULL FROM generate_series(0, 500) i WHERE i % 11 IN (3);
ANALYSE plt2;

EXPLAIN (COSTS OFF)
SELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);

postgres=# EXPLAIN (COSTS OFF)
postgres-# SELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);
server closed the connection unexpectedly
	This probably means the server
terminated abnormally\tbefore or while processing the request.The connection to the server was lost. Attempting reset: Failed.!?> \\q[edb@localhost bin]$ gdb -q -c data/core.66926 postgres Reading symbols from /home/edb/WORK/pg_src/PG_TEMP/postgresql/inst/bin/postgres...done.[New LWP 66926][Thread debugging using libthread_db enabled]Using host libthread_db library \"/lib64/libthread_db.so.1\".Core was generated by `postgres: edb postgres [local] EXPLAIN                            '.Program terminated with signal 11, Segmentation fault.#0  0x000000000082be39 in is_dummy_rel (rel=0x40) at joinrels.c:12211221\t\tif (rel->pathlist == NIL)(gdb) bt#0  0x000000000082be39 in is_dummy_rel (rel=0x40) at joinrels.c:1221#1  0x000000000089341c in is_dummy_partition (rel=0x2f86e88, part_index=2) at partbounds.c:1959#2  0x0000000000891d38 in merge_list_bounds (partnatts=2, partsupfunc=0x2f70058, partcollation=0x2fd3c98, outer_rel=0x2f86e88, inner_rel=0x2fd4368, jointype=JOIN_LEFT,     outer_parts=0x7ffea91f8cc0, inner_parts=0x7ffea91f8cb8) at partbounds.c:1325#3  0x0000000000891991 in partition_bounds_merge (partnatts=2, partsupfunc=0x2f70058, partcollation=0x2fd3c98, outer_rel=0x2f86e88, inner_rel=0x2fd4368, jointype=JOIN_LEFT,     outer_parts=0x7ffea91f8cc0, inner_parts=0x7ffea91f8cb8) at partbounds.c:1198#4  0x000000000082cc5a in compute_partition_bounds (root=0x2f9e910, rel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, parent_sjinfo=0x2f7dfa8, parts1=0x7ffea91f8cc0,     parts2=0x7ffea91f8cb8) at joinrels.c:1644#5  0x000000000082c474 in try_partitionwise_join (root=0x2f9e910, rel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, parent_sjinfo=0x2f7dfa8, parent_restrictlist=0x2fae650)    at joinrels.c:1402#6  0x000000000082b6e2 in populate_joinrel_with_paths (root=0x2f9e910, rel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, sjinfo=0x2f7dfa8, restrictlist=0x2fae650) at joinrels.c:926#7  0x000000000082b135 in make_join_rel (root=0x2f9e910, rel1=0x2f86e88, rel2=0x2fd4368) at 
joinrels.c:760#8  0x000000000082a643 in make_rels_by_clause_joins (root=0x2f9e910, old_rel=0x2f86e88, other_rels_list=0x2f90148, other_rels=0x2f90160) at joinrels.c:312#9  0x000000000082a119 in join_search_one_level (root=0x2f9e910, level=3) at joinrels.c:123#10 0x000000000080cd97 in standard_join_search (root=0x2f9e910, levels_needed=3, initial_rels=0x2f90148) at allpaths.c:3020#11 0x000000000080cd10 in make_rel_from_joinlist (root=0x2f9e910, joinlist=0x2fd7550) at allpaths.c:2951#12 0x000000000080899a in make_one_rel (root=0x2f9e910, joinlist=0x2fd7550) at allpaths.c:228#13 0x000000000084516a in query_planner (root=0x2f9e910, qp_callback=0x84ad85 <standard_qp_callback>, qp_extra=0x7ffea91f9140) at planmain.c:276#14 0x000000000084788d in grouping_planner (root=0x2f9e910, tuple_fraction=0) at planner.c:1447#15 0x0000000000846f56 in subquery_planner (glob=0x2fa0c08, parse=0x2f56d30, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1025#16 0x000000000084578b in standard_planner (parse=0x2f56d30,     query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\", cursorOptions=2048, boundParams=0x0) at planner.c:406#17 0x0000000000845536 in planner (parse=0x2f56d30,     query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\", cursorOptions=2048, boundParams=0x0) at planner.c:277#18 0x0000000000978faf in pg_plan_query (querytree=0x2f56d30,     query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\", cursorOptions=2048, boundParams=0x0) at postgres.c:847#19 0x0000000000693e50 in ExplainOneQuery (query=0x2f56d30, 
cursorOptions=2048, into=0x0, es=0x2fa0920,     queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\", params=0x0, queryEnv=0x0) at explain.c:397#20 0x00000000006939a5 in ExplainQuery (pstate=0x2f9e0a0, stmt=0x2f56b50, params=0x0, dest=0x2f9e008) at explain.c:281#21 0x0000000000981de8 in standard_ProcessUtility (pstmt=0x2fd2220,     queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x2f9e008, qc=0x7ffea91f9aa0) at utility.c:862#22 0x0000000000981585 in ProcessUtility (pstmt=0x2fd2220,     queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x2f9e008, qc=0x7ffea91f9aa0) at utility.c:527#23 0x00000000009801ba in PortalRunUtility (portal=0x2f10180, pstmt=0x2fd2220, isTopLevel=true, setHoldSnapshot=true, dest=0x2f9e008, qc=0x7ffea91f9aa0) at pquery.c:1155#24 0x000000000097ff20 in FillPortalStore (portal=0x2f10180, isTopLevel=true) at pquery.c:1028#25 0x000000000097f883 in PortalRun (portal=0x2f10180, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2fd2310, altdest=0x2fd2310, qc=0x7ffea91f9c60) at pquery.c:760#26 0x00000000009795d1 in exec_simple_query (    query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\") at postgres.c:1214#27 0x000000000097da8d in PostgresMain (dbname=0x2ed8068 
"postgres", username=0x2ed8048 "edb") at postgres.c:4497
#28 0x00000000008b9699 in BackendRun (port=0x2ecfd00) at postmaster.c:4560

Thanks & Regards,
Rajkumar Raghuwanshi

On Mon, Oct 11, 2021 at 11:05 AM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:

Thanks for the patch, it applied cleanly and fixed the reported issue. I observed another case where in case of multi-col list partition on the same column query is not picking partition wise join. Is this expected?

CREATE TABLE plt1 (a int, b int, c varchar) PARTITION BY LIST(c,c);
CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'),('0003','0003'));
CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));
CREATE TABLE plt1_p3 PARTITION OF plt1 DEFAULT;
INSERT INTO plt1 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT  IN (0,10);
ANALYSE plt1;

CREATE TABLE plt2 (a int, b int, c varchar) PARTITION BY LIST(c,c);
CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN (('0001','0001'),('0002','0002'),('0003','0003'));
CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));
CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;
INSERT INTO plt2 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT  IN (0,10);
ANALYSE plt2;

SET enable_partitionwise_join TO true;
EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN plt2 t2 ON t1.c = t2.c;

postgres=# EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN plt2 t2 ON t1.c = t2.c;
                 QUERY PLAN                 
--------------------------------------------
 Hash Join
   Hash Cond: ((t1.c)::text = (t2.c)::text)
   ->  Append
         ->  Seq Scan on plt1_p1 t1_1
         ->  Seq Scan on plt1_p2 t1_2
         ->  Seq Scan on plt1_p3 t1_3
   ->  Hash
         ->  Append
               ->  Seq Scan on plt2_p1 t2_1
               ->  Seq
Scan on plt2_p2 t2_2
               ->  Seq Scan on plt2_p3 t2_3
(11 rows)

Thanks & Regards,
Rajkumar Raghuwanshi

On Thu, Oct 7, 2021 at 6:03 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:

Thanks Rajkumar for testing.

> I think it should throw an error as the partition by list has only 1 column but we are giving 2 values.

I also agree that it should throw an error in the above case. Fixed the issue in the attached patch. Also added related test cases to the regression test suite.

> also if you see \d+ showing plt1_p1 partition value as ‘(0001,0001)’ instead of ('0001','0001').

Now throwing errors in the initial stage, this case doesn't arise. Please share if you find any other issues.

Thanks & Regards,
Nitin Jadhav

On Thu, Oct 7, 2021 at 4:05 PM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:

Thanks Nitin,

v4 patches applied cleanly and make check is passing now. While testing further I observed that if multiple values are given for a single column list partition it is not giving error instead it is changing values itself. Please find the example below.

postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);
CREATE TABLE
postgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'));
CREATE TABLE
postgres=# \d+ plt1;
                                          Partitioned table "public.plt1"
 Column |       Type        | Collation | Nullable | Default | Storage  | Compression | Stats target | Description 
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
 a      | integer           |           |          |         | plain    |             |              | 
 b      | character varying |           |          |         | extended |             |              | 
Partition key: LIST (b)
Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')

I think it should throw an error as the partition by list has only 1 column but we are giving 2 values. 
also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’ instead of ('0001','0001').Thanks & Regards,Rajkumar RaghuwanshiOn Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:> > On PG head + Nitin's v3 patch + Amit's Delta patch.  Make check is failing with below errors.\n>\n> Thanks Rajkumar for testing.\n>\n> Here's a v2 of the delta patch that should fix both of these test\n> failures.  As I mentioned in my last reply, my delta patch fixed what\n> I think were problems in Nitin's v3 patch but were not complete by\n> themselves.  Especially, I hadn't bothered to investigate various /*\n> TODO: handle multi-column list partitioning */ sites to deal with my\n> own changes.\n\nThanks Rajkumar for testing and Thank you Amit for working on v2 of\nthe delta patch. Actually I had done the code changes related to\npartition-wise join and I was in the middle of fixing the review\ncomments, So I could not share the patch. Anyways thanks for your\nefforts.\n\n> I noticed that multi-column list partitions containing NULLs don't\n> work correctly with partition pruning yet.\n>\n> create table p0 (a int, b text, c bool) partition by list (a, b, c);\n> create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n> create table p02 partition of p0 for values in ((1, NULL, false));\n> explain select * from p0 where a is null;\n>                        QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)\n>  Filter: (a IS NULL)\n> (2 rows)\n>\n> In the attached updated version, I've dealt with some of those such\n> that at least the existing cases exercising partition pruning and\n> partition wise joins now pass.\n\nwrt partition pruning, I have checked the output of the above case\nwith the v2 version of the delta patch and without that. The output\nremains same. Kindly let me know if I am missing something. 
But I feel\nthe above output is correct as the partition p01 is the only partition\nwhich contains NULL value for column a, hence it is showing \"Seq scan\non p01\" in the output. Kindly correct me if I am wrong. I feel the\ncode changes related to 'null_keys' is not required, hence not\nincorporated that in the attached patch.\n\nwrt partition-wise join, I had run the regression test (with new cases\nrelated to partition-wise join) on v2 of the delta patch and observed\nthe crash. Hence I have not incorporated the partition-wise join\nrelated code from v2 of delta patch to main v4 patch. Instead I have\nadded the partition-wise join related code done by me in the attached\npatch. Please share your thoughts and if possible we can improvise the\ncode. Rest of the changes looks good to me and I have incorporated\nthat in the attached patch.\n\n\n> I guess that may be due to the following newly added code being incomplete:\n> Maybe this function needs to return a \"bitmapset\" of indexes, because\n> multiple partitions can now contain NULL values.\n\nI feel this function is not required at all as we are not separating\nthe non null and null partitions now. Removed in the attached patch.\nAlso removed the \"scan_null' variable from the structure\n\"PruneStepResult\" and cleaned up the corresponding code blocks.\n\n\n> This function name may be too generic.  Given that it is specific to\n> implementing list bound de-duplication, maybe the following signature\n> is more appropriate:\n>\n> static bool\n> checkListBoundDuplicated(List *list_bounds, List *new_bound)\n\nYes. The function name looks more generic. How about using\n\"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\nlet me know if that does not look correct.\n\n\n> Also, better if the function comment mentions those parameter names, like:\n>\n> \"Returns TRUE if the list bound element 'new_bound' is already present\n> in the target list 'list_bounds', FALSE otherwise.\"\n\nFixed.\n\n\n> +/*\n> + * transformPartitionListBounds\n> + *\n> + * Converts the expressions of list partition bounds from the raw grammar\n> + * representation.\n>\n> A sentence about the result format would be helpful, like:\n>\n> The result is a List of Lists of Const nodes to account for the\n> partition key possibly containing more than one column.\n\nFixed.\n\n\n> +   int             i = 0;\n> +   int             j = 0;\n>\n> Better to initialize such loop counters closer to the loop.\n\nFixed in all the places.\n\n\n> +           colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n> +           colname[i] = get_attname(RelationGetRelid(parent),\n> +                                    key->partattrs[i], false);\n>\n> The palloc in the 1st statement is wasteful, because the 2nd statement\n> overwrites its pointer by the pointer to the string palloc'd by\n> get_attname().\n\nRemoved the 1st statement as it is not required.\n\n\n> +           ListCell   *cell2 = NULL;\n>\n> No need to explicitly initialize the loop variable.\n\nFixed in all the places.\n\n\n> +           RowExpr     *rowexpr = NULL;\n> +\n> +           if (!IsA(expr, RowExpr))\n> +               ereport(ERROR,\n> +                       (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +                       errmsg(\"Invalid list bound specification\"),\n> +                       parser_errposition(pstate, exprLocation((Node\n> *) spec))));\n> +\n> +           rowexpr = (RowExpr *) expr;\n>\n> It's okay to assign rowexpr at the top here instead of the dummy\n> NULL-initialization and write the condition as:\n>\n>    if (!IsA(rowexpr, RowExpr))\n\nFixed.\n\n\n> +       if (isDuplicate)\n> +           continue;\n> +\n> +       
result = lappend(result, values);\n>\n> I can see you copied this style from the existing code, but how about\n> writing this simply as:\n>\n>    if (!isDuplicate)\n>        result = lappend(result, values);\n\nThis looks good. I have changed in the patch.\n\n\n> -/* One value coming from some (index'th) list partition */\n> +/* One bound of a list partition */\n> typedef struct PartitionListValue\n> {\n>    int         index;\n> -   Datum       value;\n> +   Datum      *values;\n> +   bool       *isnulls;\n>  } PartitionListValue;\n>\n> Given that this is a locally-defined struct, I wonder if it makes\n> sense to rename the struct while we're at it.  Call it, say,\n> PartitionListBound?\n\nYes. PartitionListBound looks more appropriate and it also matches the\nsimilar structures of the other partition strategies.\n\n> Also, please keep part of the existing comment that says that the\n> bound belongs to index'th partition.\n\nRetained the old comment.\n\n\n> + * partition_bound_accepts_nulls\n> + *\n> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>  */\n>\n> I suggest slight rewording, as follows:\n>\n> \"Returns TRUE if any of the partition bounds contains a NULL value,\n> FALSE otherwise.\"\n\nFixed.\n\n\n> -   PartitionListValue *all_values;\n> +   PartitionListValue **all_values;\n> ...\n> -   all_values = (PartitionListValue *)\n> -       palloc(ndatums * sizeof(PartitionListValue));\n> +   ndatums = get_list_datum_count(boundspecs, nparts);\n> +   all_values = (PartitionListValue **)\n> +       palloc(ndatums * sizeof(PartitionListValue *));\n>\n> I don't see the need to redefine all_values's pointer type.  
No need\n> to palloc PartitionListValue repeatedly for every datum as done\n> further down as follows:\n>\n> +           all_values[j] = (PartitionListValue *)\n> palloc(sizeof(PartitionListValue));\n>\n> You do need the following two though:\n>\n> +           all_values[j]->values = (Datum *) palloc0(key->partnatts *\n> sizeof(Datum));\n> +           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n> sizeof(bool));\n>\n> If you change the above the way I suggest, you'd also need to revert\n> the following change:\n>\n> -   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n> +   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>              qsort_partition_list_value_cmp, (void *) key);\n>\n> +       int         orig_index = all_values[i]->index;\n> +       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>\n> Missing a newline between these two statements.\n\nFixed. Made necessary changes to keep the intent of existing code.\n\n\n> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n>    if (b1->nindexes != b2->nindexes)\n>        return false;\n>\n> -   if (b1->null_index != b2->null_index)\n> +   if (get_partition_bound_null_index(b1) !=\n> get_partition_bound_null_index(b2))\n>\n> As mentioned in the last message, this bit in partition_bounds_equal()\n> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n> after fixing get_partition_bound_null_index() as previously mentioned.\n\nAs mentioned earlier, removed the functionality of\nget_partition_bound_null_index(), hence the above condition is not\nrequired and removed.\n\n> But...\n>\n> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n> *parttyplen, bool *parttypbyval,\n>                 * context.  
datumIsEqual() should be simple enough to be\n>                 * safe.\n>                 */\n> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> +               if (b1->isnulls)\n> +                   b1_isnull = b1->isnulls[i][j];\n> +               if (b2->isnulls)\n> +                   b2_isnull = b2->isnulls[i][j];\n> +\n> +               /*\n> +                * If any of the partition bound has NULL value, then check\n> +                * equality for the NULL value instead of comparing the datums\n> +                * as it does not contain valid value in case of NULL.\n> +                */\n> +               if (b1_isnull || b2_isnull)\n> +               {\n> +                   if (b1_isnull != b2_isnull)\n> +                       return false;\n> +               }\n>\n> ...if you have this in the main loop, I don't think we need the above\n> code stanza which appears to implement a short-cut for this long-form\n> logic.\n\nYes. May be we could have ignored the above code stanza if we would\nhave comparing the null indexes using get_partition_bound_null_index()\nin the beginning of the function. But hence we are not separating the\nnon null partitions and null partitions, I would like to keep the\nlogic in the inner loop as we are doing it for non null bound values\nin the above code stanza, just to give a feel that null bound values\nare also handled the same way as non null values. 
Please correct me if\nI am wrong.\n\n\n> +               (key->strategy != PARTITION_STRATEGY_LIST ||\n> +                !src->isnulls[i][j]))\n>\n> I think it's better to write this condition as follows just like the\n> accompanying condition involving src->kind:\n>\n>    (src->nulls == NULL || !src->isnulls[i][j])\n\nFixed.\n\n\n> In check_new_partition_bound():\n>\n> +                       Datum      *values = (Datum *)\n> palloc0(key->partnatts * sizeof(Datum));\n> +                       bool       *isnulls = (bool *)\n> palloc0(key->partnatts * sizeof(bool));\n>\n> Doesn't seem like a bad idea to declare these as:\n>\n>     Datum    values[PARTITION_MAX_KEYS];\n>    bool        isnulls[PARTITION_MAX_KEYS];\n\nThanks for the suggestion. I have changed as above.\n\n> I looked at get_qual_for_list_multi_column() and immediately thought\n> that it may be a bad idea.  I think it's better to integrate the logic\n> for multi-column case into the existing function even if that makes\n> the function appear more complex.  Having two functions with the same\n> goal and mostly the same code is not a good idea mainly because it\n> becomes a maintenance burden.\n\nActually I had written a separate function because of the complexity.\nNow I have understood that since the objective is same, it should be\ndone in a single function irrespective of complexity.\n\n> I have attempted a rewrite such that get_qual_for_list() now handles\n> both the single-column and multi-column cases.  Changes included in\n> the delta patch.  The patch updates some outputs of the newly added\n> tests for multi-column list partitions, because the new code emits the\n> IS NOT NULL tests a bit differently than\n> get_qual_for_list_mutli_column() would.  Notably, the old approach\n> would emit IS NOT NULL for every non-NULL datum matched to a given\n> column, not just once for the column.  
However, the patch makes a few\n> other tests fail, mainly because I had to fix\n> partition_bound_accepts_nulls() to handle the multi-column case,\n> though didn't bother to update all callers of it to also handle the\n> multi-column case correctly.  I guess that's a TODO you're going to\n> deal with at some point anyway. :)\n\nThank you very much for your efforts. The changes looks good to me and\nI have incorporated these changes in the attached patch.\n\nI have completed the coding for all the TODOs and hence removed in the\npatch. The naming conventions used for function/variable names varies\nacross the files. Some places it is like 'namesLikeThis' and in some\nplace it is like 'names_like_this'. I have used the naming conventions\nbased on the surrounding styles used. I am happy to change those if\nrequired.\n\nI have verified 'make check' with the attached patch and it is working fine.\n\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> On PG head + Nitin's v3 patch + Amit's Delta patch.  
Make check is failing with below errors.\n>\n> --inherit.sql is failing with error :\"ERROR:  negative bitmapset member not allowed\"\n> update mlparted_tab mlp set c = 'xxx'\n> from\n>   (select a from some_tab union all select a+1 from some_tab) ss (a)\n> where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n> ERROR:  negative bitmapset member not allowed\n>\n> --partition_join.sql is crashing with enable_partitionwise_join set to true.\n> CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0001', '0003');\n> CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN ('0008', '0009');\n> INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n> ANALYZE plt1_adv;\n> CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002', '0003');\n> CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0004', '0006');\n> CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN ('0007', '0009');\n> INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n> ANALYZE plt2_adv;\n> -- inner join\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n> server closed the connection unexpectedly\n>         This probably means the server terminated abnormally\n>         before or while processing the request.\n> connection to server was lost\n>\n>\n> --stack-trace\n> Core was generated by `postgres: edb regression [local] EXPLAIN                                      '.\n> Program terminated with signal 6, Aborted.\n> #0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> Missing 
separate debuginfos, use: debuginfo-install glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64\n> (gdb) bt\n> #0  0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n> #1  0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n> #2  0x0000000000b0fbc3 in ExceptionalCondition (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3 \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)\n>     at assert.c:69\n> #3  0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0, part_index=-1) at partbounds.c:1957\n> #4  0x00000000008919bd in merge_list_bounds (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n>     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1529\n> #5  0x00000000008910de in partition_bounds_merge (partnatts=1, partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0, inner_rel=0x1922938, jointype=JOIN_INNER,\n>     outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at partbounds.c:1223\n> #6  0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>     parts2=0x7fffd67751a8) at joinrels.c:1644\n> #7  0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n>     at joinrels.c:1402\n> #8  0x000000000082aea2 in populate_joinrel_with_paths (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30, sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)\n>     at joinrels.c:926\n> #9  0x000000000082a8f5 in make_join_rel (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938) at 
joinrels.c:760\n> #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0, old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at joinrels.c:312\n> #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0, level=2) at joinrels.c:123\n> #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0, levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n> #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:2951\n> #14 0x000000000080816b in make_one_rel (root=0x1a19ed0, joinlist=0x199d538) at allpaths.c:228\n> #15 0x000000000084491d in query_planner (root=0x1a19ed0, qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at planmain.c:276\n> #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0, tuple_fraction=0) at planner.c:1447\n> #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8, parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1025\n> #18 0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:406\n> #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at planner.c:277\n> #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>     query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048, boundParams=0x0) at postgres.c:847\n> #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290, 
cursorOptions=2048, into=0x0, es=0x19b36f0,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     params=0x0, queryEnv=0x0) at explain.c:397\n> #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410, stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n> #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:845\n> #24 0x00000000009809ec in ProcessUtility (pstmt=0x1a0bfc8,\n>     queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>     readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at utility.c:527\n> #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40, pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378, qc=0x7fffd6775f90) at pquery.c:1147\n> #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40, isTopLevel=true) at pquery.c:1026\n> #27 0x000000000097ed11 in PortalRun (portal=0x1893b40, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8, altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n> #28 0x0000000000978aa5 in exec_simple_query (\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n> On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>> > 
<nitinjadhavpostgres@gmail.com> wrote:\n>> > > The attached patch also fixes the above comments.\n>> >\n>> > I noticed that multi-column list partitions containing NULLs don't\n>> > work correctly with partition pruning yet.\n>> >\n>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL, 1, false));\n>> > create table p02 partition of p0 for values in ((1, NULL, false));\n>> > explain select * from p0 where a is null;\n>> >                        QUERY PLAN\n>> > --------------------------------------------------------\n>> >  Seq Scan on p01 p0  (cost=0.00..22.50 rows=6 width=37)\n>> >    Filter: (a IS NULL)\n>> > (2 rows)\n>> >\n>> > I guess that may be due to the following newly added code being incomplete:\n>> >\n>> > +/*\n>> > + * get_partition_bound_null_index\n>> > + *\n>> > + * Returns the partition index of the partition bound which accepts NULL.\n>> > + */\n>> > +int\n>> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>> > +{\n>> > +   int i = 0;\n>> > +   int j = 0;\n>> > +\n>> > +   if (!boundinfo->isnulls)\n>> > +       return -1;\n>> >\n>> > -           if (!val->constisnull)\n>> > -               count++;\n>> > +   for (i = 0; i < boundinfo->ndatums; i++)\n>> > +   {\n>> > +       //TODO: Handle for multi-column cases\n>> > +       for (j = 0; j < 1; j++)\n>> > +       {\n>> > +           if (boundinfo->isnulls[i][j])\n>> > +               return boundinfo->indexes[i];\n>> >         }\n>> >     }\n>> >\n>> > +   return -1;\n>> > +}\n>> >\n>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>> > multiple partitions can now contain NULL values.\n>> >\n>> > Some other issues I noticed and suggestions for improvement:\n>> >\n>> > +/*\n>> > + * checkForDuplicates\n>> > + *\n>> > + * Returns TRUE if the list bound element is already present in the list of\n>> > + * list bounds, FALSE otherwise.\n>> > + */\n>> > +static 
bool\n>> > +checkForDuplicates(List *source, List *searchElem)\n>> >\n>> > This function name may be too generic.  Given that it is specific to\n>> > implementing list bound de-duplication, maybe the following signature\n>> > is more appropriate:\n>> >\n>> > static bool\n>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>> >\n>> > Also, better if the function comment mentions those parameter names, like:\n>> >\n>> > \"Returns TRUE if the list bound element 'new_bound' is already present\n>> > in the target list 'list_bounds', FALSE otherwise.\"\n>> >\n>> > +/*\n>> > + * transformPartitionListBounds\n>> > + *\n>> > + * Converts the expressions of list partition bounds from the raw grammar\n>> > + * representation.\n>> >\n>> > A sentence about the result format would be helpful, like:\n>> >\n>> > The result is a List of Lists of Const nodes to account for the\n>> > partition key possibly containing more than one column.\n>> >\n>> > +   int             i = 0;\n>> > +   int             j = 0;\n>> >\n>> > Better to initialize such loop counters closer to the loop.\n>> >\n>> > +           colname[i] = (char *) palloc0(NAMEDATALEN * sizeof(char));\n>> > +           colname[i] = get_attname(RelationGetRelid(parent),\n>> > +                                    key->partattrs[i], false);\n>> >\n>> > The palloc in the 1st statement is wasteful, because the 2nd statement\n>> > overwrites its pointer by the pointer to the string palloc'd by\n>> > get_attname().\n>> >\n>> > +           ListCell   *cell2 = NULL;\n>> >\n>> > No need to explicitly initialize the loop variable.\n>> >\n>> > +           RowExpr     *rowexpr = NULL;\n>> > +\n>> > +           if (!IsA(expr, RowExpr))\n>> > +               ereport(ERROR,\n>> > +                       (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> > +                       errmsg(\"Invalid list bound specification\"),\n>> > +                       parser_errposition(pstate, exprLocation((Node\n>> > *) spec))));\n>> > 
+\n>> > +           rowexpr = (RowExpr *) expr;\n>> >\n>> > It's okay to assign rowexpr at the top here instead of the dummy\n>> > NULL-initialization and write the condition as:\n>> >\n>> >     if (!IsA(rowexpr, RowExpr))\n>> >\n>> > +       if (isDuplicate)\n>> > +           continue;\n>> > +\n>> > +       result = lappend(result, values);\n>> >\n>> > I can see you copied this style from the existing code, but how about\n>> > writing this simply as:\n>> >\n>> >     if (!isDuplicate)\n>> >         result = lappend(result, values);\n>> >\n>> > -/* One value coming from some (index'th) list partition */\n>> > +/* One bound of a list partition */\n>> >  typedef struct PartitionListValue\n>> >  {\n>> >     int         index;\n>> > -   Datum       value;\n>> > +   Datum      *values;\n>> > +   bool       *isnulls;\n>> >  } PartitionListValue;\n>> >\n>> > Given that this is a locally-defined struct, I wonder if it makes\n>> > sense to rename the struct while we're at it.  Call it, say,\n>> > PartitionListBound?\n>> >\n>> > Also, please keep part of the existing comment that says that the\n>> > bound belongs to index'th partition.\n>> >\n>> > Will send more comments in a bit...\n>>\n>> + * partition_bound_accepts_nulls\n>> + *\n>> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>   */\n>>\n>> I suggest slight rewording, as follows:\n>>\n>> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>> FALSE otherwise.\"\n>>\n>> -   PartitionListValue *all_values;\n>> +   PartitionListValue **all_values;\n>> ...\n>> -   all_values = (PartitionListValue *)\n>> -       palloc(ndatums * sizeof(PartitionListValue));\n>> +   ndatums = get_list_datum_count(boundspecs, nparts);\n>> +   all_values = (PartitionListValue **)\n>> +       palloc(ndatums * sizeof(PartitionListValue *));\n>>\n>> I don't see the need to redefine all_values's pointer type.  
No need\n>> to palloc PartitionListValue repeatedly for every datum as done\n>> further down as follows:\n>>\n>> +           all_values[j] = (PartitionListValue *)\n>> palloc(sizeof(PartitionListValue));\n>>\n>> You do need the following two though:\n>>\n>> +           all_values[j]->values = (Datum *) palloc0(key->partnatts *\n>> sizeof(Datum));\n>> +           all_values[j]->isnulls = (bool *) palloc0(key->partnatts *\n>> sizeof(bool));\n>>\n>> If you change the above the way I suggest, you'd also need to revert\n>> the following change:\n>>\n>> -   qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>> +   qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>               qsort_partition_list_value_cmp, (void *) key);\n>>\n>> +       int         orig_index = all_values[i]->index;\n>> +       boundinfo->datums[i] = (Datum *) palloc(key->partnatts * sizeof(Datum));\n>>\n>> Missing a newline between these two statements.\n>>\n>> BTW, I noticed that the boundDatums variable is no longer used in\n>> create_list_bounds.  I traced back its origin and found that a recent\n>> commit 53d86957e98 introduced it to implement an idea to reduce the\n>> finer-grained pallocs that were being done in create_list_bounds().  I\n>> don't think that this patch needs to throw away that work.  
You can\n>> make it work as the attached delta patch that applies on top of v3.\n>> Please check.\n>>\n>> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>>     if (b1->nindexes != b2->nindexes)\n>>         return false;\n>>\n>> -   if (b1->null_index != b2->null_index)\n>> +   if (get_partition_bound_null_index(b1) !=\n>> get_partition_bound_null_index(b2))\n>>\n>> As mentioned in the last message, this bit in partition_bounds_equal()\n>> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>> after fixing get_partition_bound_null_index() as previously mentioned.\n>>\n>> But...\n>>\n>> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>> *parttyplen, bool *parttypbyval,\n>>                  * context.  datumIsEqual() should be simple enough to be\n>>                  * safe.\n>>                  */\n>> -               if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>> +               if (b1->isnulls)\n>> +                   b1_isnull = b1->isnulls[i][j];\n>> +               if (b2->isnulls)\n>> +                   b2_isnull = b2->isnulls[i][j];\n>> +\n>> +               /*\n>> +                * If any of the partition bound has NULL value, then check\n>> +                * equality for the NULL value instead of comparing the datums\n>> +                * as it does not contain valid value in case of NULL.\n>> +                */\n>> +               if (b1_isnull || b2_isnull)\n>> +               {\n>> +                   if (b1_isnull != b2_isnull)\n>> +                       return false;\n>> +               }\n>>\n>> ...if you have this in the main loop, I don't think we need the above\n>> code stanza which appears to implement a short-cut for this long-form\n>> logic.\n>>\n>> +               (key->strategy != PARTITION_STRATEGY_LIST ||\n>> +                !src->isnulls[i][j]))\n>>\n>> I think it's better to write this condition as follows just like the\n>> 
accompanying condition involving src->kind:\n>>\n>>     (src->nulls == NULL || !src->isnulls[i][j])\n>>\n>> (Skipped looking at merge_list_bounds() and related changes for now as\n>> I see a lot of TODOs remain to be done.)\n>>\n>> In check_new_partition_bound():\n>>\n>> +                       Datum      *values = (Datum *)\n>> palloc0(key->partnatts * sizeof(Datum));\n>> +                       bool       *isnulls = (bool *)\n>> palloc0(key->partnatts * sizeof(bool));\n>>\n>> Doesn't seem like a bad idea to declare these as:\n>>\n>>     Datum    values[PARTITION_MAX_KEYS];\n>>     bool        isnulls[PARTITION_MAX_KEYS];\n>>\n>>\n>> I looked at get_qual_for_list_multi_column() and immediately thought\n>> that it may be a bad idea.  I think it's better to integrate the logic\n>> for multi-column case into the existing function even if that makes\n>> the function appear more complex.  Having two functions with the same\n>> goal and mostly the same code is not a good idea mainly because it\n>> becomes a maintenance burden.\n>>\n>> I have attempted a rewrite such that get_qual_for_list() now handles\n>> both the single-column and multi-column cases.  Changes included in\n>> the delta patch.  The patch updates some outputs of the newly added\n>> tests for multi-column list partitions, because the new code emits the\n>> IS NOT NULL tests a bit differently than\n>> get_qual_for_list_mutli_column() would.  Notably, the old approach\n>> would emit IS NOT NULL for every non-NULL datum matched to a given\n>> column, not just once for the column.  However, the patch makes a few\n>> other tests fail, mainly because I had to fix\n>> partition_bound_accepts_nulls() to handle the multi-column case,\n>> though didn't bother to update all callers of it to also handle the\n>> multi-column case correctly.  I guess that's a TODO you're going to\n>> deal with at some point anyway. :)\n>>\n>> I still have more than half of v3 left to look at, so will continue\n>> looking.   
In the meantime, please check the changes I suggested,\n>> including the delta patch, and let me know your thoughts.\n>>\n>> --\n>> Amit Langote\n>> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Oct 2021 16:12:32 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi Rajkumar,\n\nOn Mon, Oct 11, 2021 at 2:36 PM Rajkumar Raghuwanshi\n<rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n> Thanks for the patch, it applied cleanly and fixed the reported issue. I observed another case where\n> In case of multi-col list partition on the same column query is not picking partition wise join. Is this expected?\n>\n> CREATE TABLE plt1 (a int, b int, c varchar) PARTITION BY LIST(c,c);\n> CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN (('0001','0001'),('0002','0002'),('0003','0003'));\n> CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));\n> CREATE TABLE plt1_p3 PARTITION OF plt1 DEFAULT;\n> INSERT INTO plt1 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\n> ANALYSE plt1;\n> CREATE TABLE plt2 (a int, b int, c varchar) PARTITION BY LIST(c,c);\n> CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN (('0001','0001'),('0002','0002'),('0003','0003'));\n> CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN (('0004','0004'),('0005','0005'),('0006','0006'));\n> CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;\n> INSERT INTO plt2 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\n> ANALYSE plt2;\n> SET enable_partitionwise_join TO true;\n> EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN plt2 t2 ON t1.c = t2.c;\n>\n> postgres=# EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN plt2 t2 ON t1.c = t2.c;\n> QUERY PLAN\n> 
--------------------------------------------\n> Hash Join\n> Hash Cond: ((t1.c)::text = (t2.c)::text)\n> -> Append\n> -> Seq Scan on plt1_p1 t1_1\n> -> Seq Scan on plt1_p2 t1_2\n> -> Seq Scan on plt1_p3 t1_3\n> -> Hash\n> -> Append\n> -> Seq Scan on plt2_p1 t2_1\n> -> Seq Scan on plt2_p2 t2_2\n> -> Seq Scan on plt2_p3 t2_3\n> (11 rows)\n\nInteresting test case.\n\nI think this might be an *existing* limitation of the code that\ncompares join clauses against the partition key(s) to determine if\npartition-wise join should be considered. The clause t1.c = t2.c\nshould have been matched with both of the partition keys (c, c), but\nit is not given the way have_partkey_equi_join() is currently coded.\nI suspect you'd get the same behavior if you'd used a RANGE\npartitioned table with keys (c, c). Not sure though if it'd be\nworthwhile to fix that coding to cater to this odd partition key\nsetting.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 Oct 2021 17:38:31 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> While testing further I got a crash with partition wise join enabled for\nmulti-col list partitions. please find test case & stack-trace below.\n\nThanks for sharing. I have fixed the issue in the attached patch.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Oct 11, 2021 at 4:12 PM Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> wrote:\n\n> Hi Nitin,\n>\n> While testing further I got a crash with partition wise join enabled for\n> multi-col list partitions. 
please find test case & stack-trace below.\n>\n> SET enable_partitionwise_join TO on;\n> CREATE TABLE plt1 (c varchar, d varchar) PARTITION BY LIST(c,d);\n> CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n> (('0001','0001'),('0002','0002'),(NULL,NULL));\n> CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN\n> (('0004','0004'),('0005','0005'),('0006','0006'));\n> INSERT INTO plt1 SELECT to_char(i % 11, 'FM0000'), to_char(i % 11,\n> 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10,3,7,8,9);\n> INSERT INTO plt1 SELECT NULL,NULL FROM generate_series(0, 500) i WHERE i %\n> 11 IN (3);\n> ANALYSE plt1;\n> CREATE TABLE plt2 (c varchar, d varchar) PARTITION BY LIST(c,d);\n> CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN\n> (('0001','0001'),('0002','0002'));\n> CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN\n> (('0004','0004'),('0005','0005'),('0006','0006'));\n> CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;\n> INSERT INTO plt2 SELECT to_char(i % 11, 'FM0000'), to_char(i % 11,\n> 'FM0000') FROM generate_series(0, 500) i WHERE i % 11 NOT IN (0,10,3);\n> INSERT INTO plt2 SELECT NULL,NULL FROM generate_series(0, 500) i WHERE i %\n> 11 IN (3);\n> ANALYSE plt2;\n>\n> EXPLAIN (COSTS OFF)\n> SELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON\n> (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d =\n> t3.d);\n>\n> postgres=# EXPLAIN (COSTS OFF)\n> postgres-# SELECT t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN\n> plt2 t2 ON (t1.c = t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c\n> AND t2.d = t3.d);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !?> \\q\n> [edb@localhost bin]$ gdb -q -c data/core.66926 postgres\n> Reading symbols from\n> /home/edb/WORK/pg_src/PG_TEMP/postgresql/inst/bin/postgres...done.\n> [New LWP 66926]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> Core was generated by `postgres: edb postgres [local] EXPLAIN\n> '.\n> Program terminated with signal 11, Segmentation fault.\n> #0 0x000000000082be39 in is_dummy_rel (rel=0x40) at joinrels.c:1221\n> 1221 if (rel->pathlist == NIL)\n> (gdb) bt\n> #0 0x000000000082be39 in is_dummy_rel (rel=0x40) at joinrels.c:1221\n> #1 0x000000000089341c in is_dummy_partition (rel=0x2f86e88, part_index=2)\n> at partbounds.c:1959\n> #2 0x0000000000891d38 in merge_list_bounds (partnatts=2,\n> partsupfunc=0x2f70058, partcollation=0x2fd3c98, outer_rel=0x2f86e88,\n> inner_rel=0x2fd4368, jointype=JOIN_LEFT,\n> outer_parts=0x7ffea91f8cc0, inner_parts=0x7ffea91f8cb8) at\n> partbounds.c:1325\n> #3 0x0000000000891991 in partition_bounds_merge (partnatts=2,\n> partsupfunc=0x2f70058, partcollation=0x2fd3c98, outer_rel=0x2f86e88,\n> inner_rel=0x2fd4368, jointype=JOIN_LEFT,\n> outer_parts=0x7ffea91f8cc0, inner_parts=0x7ffea91f8cb8) at\n> partbounds.c:1198\n> #4 0x000000000082cc5a in compute_partition_bounds (root=0x2f9e910,\n> rel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, parent_sjinfo=0x2f7dfa8,\n> parts1=0x7ffea91f8cc0,\n> parts2=0x7ffea91f8cb8) at joinrels.c:1644\n> #5 0x000000000082c474 in try_partitionwise_join (root=0x2f9e910,\n> rel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, parent_sjinfo=0x2f7dfa8,\n> parent_restrictlist=0x2fae650)\n> at joinrels.c:1402\n> #6 0x000000000082b6e2 in populate_joinrel_with_paths (root=0x2f9e910,\n> rel1=0x2f86e88, rel2=0x2fd4368, joinrel=0x2fae388, sjinfo=0x2f7dfa8,\n> restrictlist=0x2fae650) at joinrels.c:926\n> #7 0x000000000082b135 in make_join_rel (root=0x2f9e910, rel1=0x2f86e88,\n> rel2=0x2fd4368) at joinrels.c:760\n> #8 
0x000000000082a643 in make_rels_by_clause_joins (root=0x2f9e910,\n> old_rel=0x2f86e88, other_rels_list=0x2f90148, other_rels=0x2f90160) at\n> joinrels.c:312\n> #9 0x000000000082a119 in join_search_one_level (root=0x2f9e910, level=3)\n> at joinrels.c:123\n> #10 0x000000000080cd97 in standard_join_search (root=0x2f9e910,\n> levels_needed=3, initial_rels=0x2f90148) at allpaths.c:3020\n> #11 0x000000000080cd10 in make_rel_from_joinlist (root=0x2f9e910,\n> joinlist=0x2fd7550) at allpaths.c:2951\n> #12 0x000000000080899a in make_one_rel (root=0x2f9e910,\n> joinlist=0x2fd7550) at allpaths.c:228\n> #13 0x000000000084516a in query_planner (root=0x2f9e910,\n> qp_callback=0x84ad85 <standard_qp_callback>, qp_extra=0x7ffea91f9140) at\n> planmain.c:276\n> #14 0x000000000084788d in grouping_planner (root=0x2f9e910,\n> tuple_fraction=0) at planner.c:1447\n> #15 0x0000000000846f56 in subquery_planner (glob=0x2fa0c08,\n> parse=0x2f56d30, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\n> planner.c:1025\n> #16 0x000000000084578b in standard_planner (parse=0x2f56d30,\n> query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\n> cursorOptions=2048, boundParams=0x0) at planner.c:406\n> #17 0x0000000000845536 in planner (parse=0x2f56d30,\n> query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\n> cursorOptions=2048, boundParams=0x0) at planner.c:277\n> #18 0x0000000000978faf in pg_plan_query (querytree=0x2f56d30,\n> query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\n> cursorOptions=2048, boundParams=0x0) at 
postgres.c:847\n> #19 0x0000000000693e50 in ExplainOneQuery (query=0x2f56d30,\n> cursorOptions=2048, into=0x0, es=0x2fa0920,\n> queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\n> params=0x0, queryEnv=0x0) at explain.c:397\n> #20 0x00000000006939a5 in ExplainQuery (pstate=0x2f9e0a0, stmt=0x2f56b50,\n> params=0x0, dest=0x2f9e008) at explain.c:281\n> #21 0x0000000000981de8 in standard_ProcessUtility (pstmt=0x2fd2220,\n> queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\n> readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x2f9e008, qc=0x7ffea91f9aa0) at utility.c:862\n> #22 0x0000000000981585 in ProcessUtility (pstmt=0x2fd2220,\n> queryString=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\",\n> readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x2f9e008, qc=0x7ffea91f9aa0) at utility.c:527\n> #23 0x00000000009801ba in PortalRunUtility (portal=0x2f10180,\n> pstmt=0x2fd2220, isTopLevel=true, setHoldSnapshot=true, dest=0x2f9e008,\n> qc=0x7ffea91f9aa0) at pquery.c:1155\n> #24 0x000000000097ff20 in FillPortalStore (portal=0x2f10180,\n> isTopLevel=true) at pquery.c:1028\n> #25 0x000000000097f883 in PortalRun (portal=0x2f10180,\n> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2fd2310,\n> altdest=0x2fd2310, qc=0x7ffea91f9c60) at pquery.c:760\n> #26 0x00000000009795d1 in exec_simple_query (\n> query_string=0x2eadcd0 \"EXPLAIN (COSTS OFF)\\nSELECT\n> t1.c,t2.c,t3.c,t1.d,t2.d,t3.d FROM plt1 t1 INNER JOIN plt2 t2 ON (t1.c =\n> 
t2.c AND t1.d = t2.d) LEFT JOIN plt1 t3 on (t2.c = t3.c AND t2.d = t3.d);\")\n> at postgres.c:1214\n> #27 0x000000000097da8d in PostgresMain (dbname=0x2ed8068 \"postgres\",\n> username=0x2ed8048 \"edb\") at postgres.c:4497\n> #28 0x00000000008b9699 in BackendRun (port=0x2ecfd00) at postmaster.c:4560\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n>\n> On Mon, Oct 11, 2021 at 11:05 AM Rajkumar Raghuwanshi <\n> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n>> Thanks for the patch, it applied cleanly and fixed the reported issue. I\n>> observed another case where\n>> In case of multi-col list partition on the same column query is not\n>> picking partition wise join. Is this expected?\n>>\n>> CREATE TABLE plt1 (a int, b int, c varchar) PARTITION BY LIST(c,c);\n>> CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n>> (('0001','0001'),('0002','0002'),('0003','0003'));\n>> CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN\n>> (('0004','0004'),('0005','0005'),('0006','0006'));\n>> CREATE TABLE plt1_p3 PARTITION OF plt1 DEFAULT;\n>> INSERT INTO plt1 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM\n>> generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\n>> ANALYSE plt1;\n>> CREATE TABLE plt2 (a int, b int, c varchar) PARTITION BY LIST(c,c);\n>> CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN\n>> (('0001','0001'),('0002','0002'),('0003','0003'));\n>> CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN\n>> (('0004','0004'),('0005','0005'),('0006','0006'));\n>> CREATE TABLE plt2_p3 PARTITION OF plt2 DEFAULT;\n>> INSERT INTO plt2 SELECT i, i % 47, to_char(i % 11, 'FM0000') FROM\n>> generate_series(0, 500) i WHERE i % 11 NOT IN (0,10);\n>> ANALYSE plt2;\n>> SET enable_partitionwise_join TO true;\n>> EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1 INNER JOIN\n>> plt2 t2 ON t1.c = t2.c;\n>>\n>> postgres=# EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.a, t2.c FROM plt1 t1\n>> INNER JOIN plt2 t2 ON t1.c = t2.c;\n>> QUERY PLAN\n>> 
--------------------------------------------\n>> Hash Join\n>> Hash Cond: ((t1.c)::text = (t2.c)::text)\n>> -> Append\n>> -> Seq Scan on plt1_p1 t1_1\n>> -> Seq Scan on plt1_p2 t1_2\n>> -> Seq Scan on plt1_p3 t1_3\n>> -> Hash\n>> -> Append\n>> -> Seq Scan on plt2_p1 t2_1\n>> -> Seq Scan on plt2_p2 t2_2\n>> -> Seq Scan on plt2_p3 t2_3\n>> (11 rows)\n>>\n>> Thanks & Regards,\n>> Rajkumar Raghuwanshi\n>>\n>>\n>>\n>> On Thu, Oct 7, 2021 at 6:03 PM Nitin Jadhav <\n>> nitinjadhavpostgres@gmail.com> wrote:\n>>\n>>> Thanks Rajkumar for testing.\n>>>\n>>> > I think it should throw an error as the partition by list has only 1\n>>> column but we are giving 2 values.\n>>>\n>>> I also agree that it should throw an error in the above case. Fixed the\n>>> issue in the attached patch. Also added related test cases to the\n>>> regression test suite.\n>>>\n>>>\n>>> > also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n>>> instead of ('0001','0001').\n>>>\n>>> Now throwing errors in the initial stage, this case doesn't arise.\n>>>\n>>> Please share if you find any other issues.\n>>>\n>>> Thanks & Regards,\n>>> Nitin Jadhav\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> On Thu, Oct 7, 2021 at 4:05 PM Rajkumar Raghuwanshi <\n>>> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>>\n>>>> Thanks Nitin,\n>>>>\n>>>> v4 patches applied cleanly and make check is passing now. While testing\n>>>> further I observed that if multiple values are given for a single\n>>>> column list partition it is not giving error instead it is changing\n>>>> values itself. 
Please find the example below.\n>>>>\n>>>> postgres=# CREATE TABLE plt1 (a int, b varchar) PARTITION BY LIST(b);\n>>>> CREATE TABLE\n>>>> postgres=# CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN\n>>>> (('0001','0001'),('0002','0002'));\n>>>> CREATE TABLE\n>>>> postgres=# \\d+ plt1;\n>>>> Partitioned table\n>>>> \"public.plt1\"\n>>>> Column | Type | Collation | Nullable | Default | Storage\n>>>> | Compression | Stats target | Description\n>>>>\n>>>> --------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n>>>> a | integer | | | | plain\n>>>> | | |\n>>>> b | character varying | | | | extended\n>>>> | | |\n>>>> Partition key: LIST (b)\n>>>> Partitions: plt1_p1 FOR VALUES IN ('(0001,0001)', '(0002,0002)')\n>>>>\n>>>> I think it should throw an error as the partition by list has only 1\n>>>> column but we are giving 2 values.\n>>>> also if you see \\d+ showing plt1_p1 partition value as ‘(0001,0001)’\n>>>> instead of ('0001','0001').\n>>>>\n>>>> Thanks & Regards,\n>>>> Rajkumar Raghuwanshi\n>>>>\n>>>>\n>>>>\n>>>> On Sun, Oct 3, 2021 at 1:52 AM Nitin Jadhav <\n>>>> nitinjadhavpostgres@gmail.com> wrote:\n>>>>\n>>>>> > > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n>>>>> failing with below errors.\n>>>>> >\n>>>>> > Thanks Rajkumar for testing.\n>>>>> >\n>>>>> > Here's a v2 of the delta patch that should fix both of these test\n>>>>> > failures. As I mentioned in my last reply, my delta patch fixed what\n>>>>> > I think were problems in Nitin's v3 patch but were not complete by\n>>>>> > themselves. Especially, I hadn't bothered to investigate various /*\n>>>>> > TODO: handle multi-column list partitioning */ sites to deal with my\n>>>>> > own changes.\n>>>>>\n>>>>> Thanks Rajkumar for testing and Thank you Amit for working on v2 of\n>>>>> the delta patch. 
Actually I had done the code changes related to\n>>>>> partition-wise join and I was in the middle of fixing the review\n>>>>> comments, So I could not share the patch. Anyways thanks for your\n>>>>> efforts.\n>>>>>\n>>>>> > I noticed that multi-column list partitions containing NULLs don't\n>>>>> > work correctly with partition pruning yet.\n>>>>> >\n>>>>> > create table p0 (a int, b text, c bool) partition by list (a, b, c);\n>>>>> > create table p01 partition of p0 for values in ((1, 1, true), (NULL,\n>>>>> 1, false));\n>>>>> > create table p02 partition of p0 for values in ((1, NULL, false));\n>>>>> > explain select * from p0 where a is null;\n>>>>> > QUERY PLAN\n>>>>> > --------------------------------------------------------\n>>>>> > Seq Scan on p01 p0 (cost=0.00..22.50 rows=6 width=37)\n>>>>> > Filter: (a IS NULL)\n>>>>> > (2 rows)\n>>>>> >\n>>>>> > In the attached updated version, I've dealt with some of those such\n>>>>> > that at least the existing cases exercising partition pruning and\n>>>>> > partition wise joins now pass.\n>>>>>\n>>>>> wrt partition pruning, I have checked the output of the above case\n>>>>> with the v2 version of the delta patch and without that. The output\n>>>>> remains same. Kindly let me know if I am missing something. But I feel\n>>>>> the above output is correct as the partition p01 is the only partition\n>>>>> which contains NULL value for column a, hence it is showing \"Seq scan\n>>>>> on p01\" in the output. Kindly correct me if I am wrong. I feel the\n>>>>> code changes related to 'null_keys' is not required, hence not\n>>>>> incorporated that in the attached patch.\n>>>>>\n>>>>> wrt partition-wise join, I had run the regression test (with new cases\n>>>>> related to partition-wise join) on v2 of the delta patch and observed\n>>>>> the crash. Hence I have not incorporated the partition-wise join\n>>>>> related code from v2 of delta patch to main v4 patch. 
Instead I have\n>>>>> added the partition-wise join related code done by me in the attached\n>>>>> patch. Please share your thoughts and if possible we can improvise the\n>>>>> code. Rest of the changes looks good to me and I have incorporated\n>>>>> that in the attached patch.\n>>>>>\n>>>>>\n>>>>> > I guess that may be due to the following newly added code being\n>>>>> incomplete:\n>>>>> > Maybe this function needs to return a \"bitmapset\" of indexes, because\n>>>>> > multiple partitions can now contain NULL values.\n>>>>>\n>>>>> I feel this function is not required at all as we are not separating\n>>>>> the non null and null partitions now. Removed in the attached patch.\n>>>>> Also removed the \"scan_null' variable from the structure\n>>>>> \"PruneStepResult\" and cleaned up the corresponding code blocks.\n>>>>>\n>>>>>\n>>>>> > This function name may be too generic. Given that it is specific to\n>>>>> > implementing list bound de-duplication, maybe the following signature\n>>>>> > is more appropriate:\n>>>>> >\n>>>>> > static bool\n>>>>> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>>>>\n>>>>> Yes. The function name looks more generic. How about using\n>>>>> \"isListBoundDuplicated()\"? I have used this name in the patch. 
Please\n>>>>> let me know if that does not look correct.\n>>>>>\n>>>>>\n>>>>> > Also, better if the function comment mentions those parameter names,\n>>>>> like:\n>>>>> >\n>>>>> > \"Returns TRUE if the list bound element 'new_bound' is already\n>>>>> present\n>>>>> > in the target list 'list_bounds', FALSE otherwise.\"\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>\n>>>>> > +/*\n>>>>> > + * transformPartitionListBounds\n>>>>> > + *\n>>>>> > + * Converts the expressions of list partition bounds from the raw\n>>>>> grammar\n>>>>> > + * representation.\n>>>>> >\n>>>>> > A sentence about the result format would be helpful, like:\n>>>>> >\n>>>>> > The result is a List of Lists of Const nodes to account for the\n>>>>> > partition key possibly containing more than one column.\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>\n>>>>> > + int i = 0;\n>>>>> > + int j = 0;\n>>>>> >\n>>>>> > Better to initialize such loop counters closer to the loop.\n>>>>>\n>>>>> Fixed in all the places.\n>>>>>\n>>>>>\n>>>>> > + colname[i] = (char *) palloc0(NAMEDATALEN *\n>>>>> sizeof(char));\n>>>>> > + colname[i] = get_attname(RelationGetRelid(parent),\n>>>>> > + key->partattrs[i], false);\n>>>>> >\n>>>>> > The palloc in the 1st statement is wasteful, because the 2nd\n>>>>> statement\n>>>>> > overwrites its pointer by the pointer to the string palloc'd by\n>>>>> > get_attname().\n>>>>>\n>>>>> Removed the 1st statement as it is not required.\n>>>>>\n>>>>>\n>>>>> > + ListCell *cell2 = NULL;\n>>>>> >\n>>>>> > No need to explicitly initialize the loop variable.\n>>>>>\n>>>>> Fixed in all the places.\n>>>>>\n>>>>>\n>>>>> > + RowExpr *rowexpr = NULL;\n>>>>> > +\n>>>>> > + if (!IsA(expr, RowExpr))\n>>>>> > + ereport(ERROR,\n>>>>> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>>>>> > + errmsg(\"Invalid list bound specification\"),\n>>>>> > + parser_errposition(pstate, exprLocation((Node\n>>>>> > *) spec))));\n>>>>> > +\n>>>>> > + rowexpr = (RowExpr *) expr;\n>>>>> >\n>>>>> > It's okay to assign rowexpr at the top here 
instead of the dummy\n>>>>> > NULL-initialization and write the condition as:\n>>>>> >\n>>>>> > if (!IsA(rowexpr, RowExpr))\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>\n>>>>> > + if (isDuplicate)\n>>>>> > + continue;\n>>>>> > +\n>>>>> > + result = lappend(result, values);\n>>>>> >\n>>>>> > I can see you copied this style from the existing code, but how about\n>>>>> > writing this simply as:\n>>>>> >\n>>>>> > if (!isDuplicate)\n>>>>> > result = lappend(result, values);\n>>>>>\n>>>>> This looks good. I have changed in the patch.\n>>>>>\n>>>>>\n>>>>> > -/* One value coming from some (index'th) list partition */\n>>>>> > +/* One bound of a list partition */\n>>>>> > typedef struct PartitionListValue\n>>>>> > {\n>>>>> > int index;\n>>>>> > - Datum value;\n>>>>> > + Datum *values;\n>>>>> > + bool *isnulls;\n>>>>> > } PartitionListValue;\n>>>>> >\n>>>>> > Given that this is a locally-defined struct, I wonder if it makes\n>>>>> > sense to rename the struct while we're at it. Call it, say,\n>>>>> > PartitionListBound?\n>>>>>\n>>>>> Yes. 
PartitionListBound looks more appropriate and it also matches the\n>>>>> similar structures of the other partition strategies.\n>>>>>\n>>>>> > Also, please keep part of the existing comment that says that the\n>>>>> > bound belongs to index'th partition.\n>>>>>\n>>>>> Retained the old comment.\n>>>>>\n>>>>>\n>>>>> > + * partition_bound_accepts_nulls\n>>>>> > + *\n>>>>> > + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>>>> > */\n>>>>> >\n>>>>> > I suggest slight rewording, as follows:\n>>>>> >\n>>>>> > \"Returns TRUE if any of the partition bounds contains a NULL value,\n>>>>> > FALSE otherwise.\"\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>\n>>>>> > - PartitionListValue *all_values;\n>>>>> > + PartitionListValue **all_values;\n>>>>> > ...\n>>>>> > - all_values = (PartitionListValue *)\n>>>>> > - palloc(ndatums * sizeof(PartitionListValue));\n>>>>> > + ndatums = get_list_datum_count(boundspecs, nparts);\n>>>>> > + all_values = (PartitionListValue **)\n>>>>> > + palloc(ndatums * sizeof(PartitionListValue *));\n>>>>> >\n>>>>> > I don't see the need to redefine all_values's pointer type. 
No need\n>>>>> > to palloc PartitionListValue repeatedly for every datum as done\n>>>>> > further down as follows:\n>>>>> >\n>>>>> > + all_values[j] = (PartitionListValue *)\n>>>>> > palloc(sizeof(PartitionListValue));\n>>>>> >\n>>>>> > You do need the following two though:\n>>>>> >\n>>>>> > + all_values[j]->values = (Datum *) palloc0(key->partnatts\n>>>>> *\n>>>>> > sizeof(Datum));\n>>>>> > + all_values[j]->isnulls = (bool *) palloc0(key->partnatts\n>>>>> *\n>>>>> > sizeof(bool));\n>>>>> >\n>>>>> > If you change the above the way I suggest, you'd also need to revert\n>>>>> > the following change:\n>>>>> >\n>>>>> > - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>>>>> > + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>>>> > qsort_partition_list_value_cmp, (void *) key);\n>>>>> >\n>>>>> > + int orig_index = all_values[i]->index;\n>>>>> > + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>>>>> sizeof(Datum));\n>>>>> >\n>>>>> > Missing a newline between these two statements.\n>>>>>\n>>>>> Fixed. 
Made necessary changes to keep the intent of existing code.\n>>>>>\n>>>>>\n>>>>> > @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>>>>> > *parttyplen, bool *parttypbyval,\n>>>>> > if (b1->nindexes != b2->nindexes)\n>>>>> > return false;\n>>>>> >\n>>>>> > - if (b1->null_index != b2->null_index)\n>>>>> > + if (get_partition_bound_null_index(b1) !=\n>>>>> > get_partition_bound_null_index(b2))\n>>>>> >\n>>>>> > As mentioned in the last message, this bit in\n>>>>> partition_bounds_equal()\n>>>>> > needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>>>>> > after fixing get_partition_bound_null_index() as previously\n>>>>> mentioned.\n>>>>>\n>>>>> As mentioned earlier, removed the functionality of\n>>>>> get_partition_bound_null_index(), hence the above condition is not\n>>>>> required and removed.\n>>>>>\n>>>>> > But...\n>>>>> >\n>>>>> > @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>>>>> > *parttyplen, bool *parttypbyval,\n>>>>> > * context. datumIsEqual() should be simple enough\n>>>>> to be\n>>>>> > * safe.\n>>>>> > */\n>>>>> > - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>>>>> > + if (b1->isnulls)\n>>>>> > + b1_isnull = b1->isnulls[i][j];\n>>>>> > + if (b2->isnulls)\n>>>>> > + b2_isnull = b2->isnulls[i][j];\n>>>>> > +\n>>>>> > + /*\n>>>>> > + * If any of the partition bound has NULL value,\n>>>>> then check\n>>>>> > + * equality for the NULL value instead of comparing\n>>>>> the datums\n>>>>> > + * as it does not contain valid value in case of\n>>>>> NULL.\n>>>>> > + */\n>>>>> > + if (b1_isnull || b2_isnull)\n>>>>> > + {\n>>>>> > + if (b1_isnull != b2_isnull)\n>>>>> > + return false;\n>>>>> > + }\n>>>>> >\n>>>>> > ...if you have this in the main loop, I don't think we need the above\n>>>>> > code stanza which appears to implement a short-cut for this long-form\n>>>>> > logic.\n>>>>>\n>>>>> Yes. 
Maybe we could have ignored the above code stanza if we had\n>>>>> compared the null indexes using get_partition_bound_null_index()\n>>>>> at the beginning of the function. But since we are not separating the\n>>>>> non null partitions and null partitions, I would like to keep the\n>>>>> logic in the inner loop as we are doing it for non null bound values\n>>>>> in the above code stanza, just to give a feel that null bound values\n>>>>> are also handled the same way as non null values. Please correct me if\n>>>>> I am wrong.\n>>>>>\n>>>>>\n>>>>> > + (key->strategy != PARTITION_STRATEGY_LIST ||\n>>>>> > + !src->isnulls[i][j]))\n>>>>> >\n>>>>> > I think it's better to write this condition as follows just like the\n>>>>> > accompanying condition involving src->kind:\n>>>>> >\n>>>>> > (src->nulls == NULL || !src->isnulls[i][j])\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>\n>>>>> > In check_new_partition_bound():\n>>>>> >\n>>>>> > + Datum *values = (Datum *)\n>>>>> > palloc0(key->partnatts * sizeof(Datum));\n>>>>> > + bool *isnulls = (bool *)\n>>>>> > palloc0(key->partnatts * sizeof(bool));\n>>>>> >\n>>>>> > Doesn't seem like a bad idea to declare these as:\n>>>>> >\n>>>>> > Datum values[PARTITION_MAX_KEYS];\n>>>>> > bool isnulls[PARTITION_MAX_KEYS];\n>>>>>\n>>>>> Thanks for the suggestion. I have changed as above.\n>>>>>\n>>>>> > I looked at get_qual_for_list_multi_column() and immediately thought\n>>>>> > that it may be a bad idea. I think it's better to integrate the logic\n>>>>> > for multi-column case into the existing function even if that makes\n>>>>> > the function appear more complex. 
Having two functions with the same\n>>>>> > goal and mostly the same code is not a good idea mainly because it\n>>>>> > becomes a maintenance burden.\n>>>>>\n>>>>> Actually I had written a separate function because of the complexity.\n>>>>> Now I have understood that since the objective is the same, it should be\n>>>>> done in a single function irrespective of complexity.\n>>>>>\n>>>>> > I have attempted a rewrite such that get_qual_for_list() now handles\n>>>>> > both the single-column and multi-column cases. Changes included in\n>>>>> > the delta patch. The patch updates some outputs of the newly added\n>>>>> > tests for multi-column list partitions, because the new code emits the\n>>>>> > IS NOT NULL tests a bit differently than\n>>>>> > get_qual_for_list_multi_column() would. Notably, the old approach\n>>>>> > would emit IS NOT NULL for every non-NULL datum matched to a given\n>>>>> > column, not just once for the column. However, the patch makes a few\n>>>>> > other tests fail, mainly because I had to fix\n>>>>> > partition_bound_accepts_nulls() to handle the multi-column case,\n>>>>> > though didn't bother to update all callers of it to also handle the\n>>>>> > multi-column case correctly. I guess that's a TODO you're going to\n>>>>> > deal with at some point anyway. :)\n>>>>>\n>>>>> Thank you very much for your efforts. The changes look good to me and\n>>>>> I have incorporated these changes in the attached patch.\n>>>>>\n>>>>> I have completed the coding for all the TODOs and hence removed them in\n>>>>> the patch. The naming conventions used for function/variable names vary\n>>>>> across the files. Some places it is like 'namesLikeThis' and in some\n>>>>> place it is like 'names_like_this'. I have used the naming conventions\n>>>>> based on the surrounding styles used. 
I am happy to change those if\n>>>>> required.\n>>>>>\n>>>>> I have verified 'make check' with the attached patch and it is working\n>>>>> fine.\n>>>>>\n>>>>>\n>>>>> Thanks & Regards,\n>>>>> Nitin Jadhav\n>>>>>\n>>>>>\n>>>>> On Mon, Sep 13, 2021 at 3:47 PM Rajkumar Raghuwanshi\n>>>>> <rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>>>>> >\n>>>>> > On PG head + Nitin's v3 patch + Amit's Delta patch. Make check is\n>>>>> failing with below errors.\n>>>>> >\n>>>>> > --inherit.sql is failing with error :\"ERROR: negative bitmapset\n>>>>> member not allowed\"\n>>>>> > update mlparted_tab mlp set c = 'xxx'\n>>>>> > from\n>>>>> > (select a from some_tab union all select a+1 from some_tab) ss (a)\n>>>>> > where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3;\n>>>>> > ERROR: negative bitmapset member not allowed\n>>>>> >\n>>>>> > --partition_join.sql is crashing with enable_partitionwise_join set\n>>>>> to true.\n>>>>> > CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n>>>>> > CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN\n>>>>> ('0001', '0003');\n>>>>> > CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN\n>>>>> ('0004', '0006');\n>>>>> > CREATE TABLE plt1_adv_p3 PARTITION OF plt1_adv FOR VALUES IN\n>>>>> ('0008', '0009');\n>>>>> > INSERT INTO plt1_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>>>>> generate_series(1, 299) i WHERE i % 10 IN (1, 3, 4, 6, 8, 9);\n>>>>> > ANALYZE plt1_adv;\n>>>>> > CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n>>>>> > CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN\n>>>>> ('0002', '0003');\n>>>>> > CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN\n>>>>> ('0004', '0006');\n>>>>> > CREATE TABLE plt2_adv_p3 PARTITION OF plt2_adv FOR VALUES IN\n>>>>> ('0007', '0009');\n>>>>> > INSERT INTO plt2_adv SELECT i, i, to_char(i % 10, 'FM0000') FROM\n>>>>> generate_series(1, 299) i WHERE i % 10 IN (2, 3, 4, 6, 7, 9);\n>>>>> > ANALYZE plt2_adv;\n>>>>> > -- inner 
join\n>>>>> > EXPLAIN (COSTS OFF)\n>>>>> > SELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv\n>>>>> t2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\n>>>>> > server closed the connection unexpectedly\n>>>>> > This probably means the server terminated abnormally\n>>>>> > before or while processing the request.\n>>>>> > connection to server was lost\n>>>>> >\n>>>>> >\n>>>>> > --stack-trace\n>>>>> > Core was generated by `postgres: edb regression [local] EXPLAIN\n>>>>> '.\n>>>>> > Program terminated with signal 6, Aborted.\n>>>>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>>>>> > Missing separate debuginfos, use: debuginfo-install\n>>>>> glibc-2.17-222.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\n>>>>> krb5-libs-1.15.1-19.el7.x86_64 libcom_err-1.42.9-12.el7_5.x86_64\n>>>>> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-12.el7.x86_64\n>>>>> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\n>>>>> zlib-1.2.7-17.el7.x86_64\n>>>>> > (gdb) bt\n>>>>> > #0 0x00007f7d339ba277 in raise () from /lib64/libc.so.6\n>>>>> > #1 0x00007f7d339bb968 in abort () from /lib64/libc.so.6\n>>>>> > #2 0x0000000000b0fbc3 in ExceptionalCondition\n>>>>> (conditionName=0xcbda10 \"part_index >= 0\", errorType=0xcbd1c3\n>>>>> \"FailedAssertion\", fileName=0xcbd2fe \"partbounds.c\", lineNumber=1957)\n>>>>> > at assert.c:69\n>>>>> > #3 0x0000000000892aa1 in is_dummy_partition (rel=0x19b37c0,\n>>>>> part_index=-1) at partbounds.c:1957\n>>>>> > #4 0x00000000008919bd in merge_list_bounds (partnatts=1,\n>>>>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>>>>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>>>>> > outer_parts=0x7fffd67751b0, inner_parts=0x7fffd67751a8) at\n>>>>> partbounds.c:1529\n>>>>> > #5 0x00000000008910de in partition_bounds_merge (partnatts=1,\n>>>>> partsupfunc=0x1922798, partcollation=0x1922738, outer_rel=0x19b37c0,\n>>>>> inner_rel=0x1922938, jointype=JOIN_INNER,\n>>>>> > outer_parts=0x7fffd67751b0, 
inner_parts=0x7fffd67751a8) at\n>>>>> partbounds.c:1223\n>>>>> > #6 0x000000000082c41a in compute_partition_bounds (root=0x1a19ed0,\n>>>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>>>> parent_sjinfo=0x7fffd67752a0, parts1=0x7fffd67751b0,\n>>>>> > parts2=0x7fffd67751a8) at joinrels.c:1644\n>>>>> > #7 0x000000000082bc34 in try_partitionwise_join (root=0x1a19ed0,\n>>>>> rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>>>> parent_sjinfo=0x7fffd67752a0, parent_restrictlist=0x1ab3318)\n>>>>> > at joinrels.c:1402\n>>>>> > #8 0x000000000082aea2 in populate_joinrel_with_paths\n>>>>> (root=0x1a19ed0, rel1=0x19b37c0, rel2=0x1922938, joinrel=0x1ab7f30,\n>>>>> sjinfo=0x7fffd67752a0, restrictlist=0x1ab3318)\n>>>>> > at joinrels.c:926\n>>>>> > #9 0x000000000082a8f5 in make_join_rel (root=0x1a19ed0,\n>>>>> rel1=0x19b37c0, rel2=0x1922938) at joinrels.c:760\n>>>>> > #10 0x0000000000829e03 in make_rels_by_clause_joins (root=0x1a19ed0,\n>>>>> old_rel=0x19b37c0, other_rels_list=0x1ab2970, other_rels=0x1ab2990) at\n>>>>> joinrels.c:312\n>>>>> > #11 0x00000000008298d9 in join_search_one_level (root=0x1a19ed0,\n>>>>> level=2) at joinrels.c:123\n>>>>> > #12 0x000000000080c566 in standard_join_search (root=0x1a19ed0,\n>>>>> levels_needed=2, initial_rels=0x1ab2970) at allpaths.c:3020\n>>>>> > #13 0x000000000080c4df in make_rel_from_joinlist (root=0x1a19ed0,\n>>>>> joinlist=0x199d538) at allpaths.c:2951\n>>>>> > #14 0x000000000080816b in make_one_rel (root=0x1a19ed0,\n>>>>> joinlist=0x199d538) at allpaths.c:228\n>>>>> > #15 0x000000000084491d in query_planner (root=0x1a19ed0,\n>>>>> qp_callback=0x84a538 <standard_qp_callback>, qp_extra=0x7fffd6775630) at\n>>>>> planmain.c:276\n>>>>> > #16 0x0000000000847040 in grouping_planner (root=0x1a19ed0,\n>>>>> tuple_fraction=0) at planner.c:1447\n>>>>> > #17 0x0000000000846709 in subquery_planner (glob=0x19b39d8,\n>>>>> parse=0x1aaa290, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\n>>>>> planner.c:1025\n>>>>> > #18 
0x0000000000844f3e in standard_planner (parse=0x1aaa290,\n>>>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>>>> boundParams=0x0) at planner.c:406\n>>>>> > #19 0x0000000000844ce9 in planner (parse=0x1aaa290,\n>>>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>>>> boundParams=0x0) at planner.c:277\n>>>>> > #20 0x0000000000978483 in pg_plan_query (querytree=0x1aaa290,\n>>>>> > query_string=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\", cursorOptions=2048,\n>>>>> boundParams=0x0) at postgres.c:847\n>>>>> > #21 0x00000000006937fc in ExplainOneQuery (query=0x1aaa290,\n>>>>> cursorOptions=2048, into=0x0, es=0x19b36f0,\n>>>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>>>> > params=0x0, queryEnv=0x0) at explain.c:397\n>>>>> > #22 0x0000000000693351 in ExplainQuery (pstate=0x197c410,\n>>>>> stmt=0x1aaa0b0, params=0x0, dest=0x197c378) at explain.c:281\n>>>>> > #23 0x00000000009811fa in standard_ProcessUtility (pstmt=0x1a0bfc8,\n>>>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>>>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL,\n>>>>> params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at\n>>>>> utility.c:845\n>>>>> > #24 0x00000000009809ec in ProcessUtility 
(pstmt=0x1a0bfc8,\n>>>>> > queryString=0x1830fa0 \"EXPLAIN (COSTS OFF)\\nSELECT t1.a, t1.c,\n>>>>> t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv t2 ON (t1.a = t2.a AND t1.c\n>>>>> = t2.c) WHERE t1.b < 10 ORDER BY t1.a;\",\n>>>>> > readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL,\n>>>>> params=0x0, queryEnv=0x0, dest=0x197c378, qc=0x7fffd6775f90) at\n>>>>> utility.c:527\n>>>>> > #25 0x000000000097f636 in PortalRunUtility (portal=0x1893b40,\n>>>>> pstmt=0x1a0bfc8, isTopLevel=true, setHoldSnapshot=true, dest=0x197c378,\n>>>>> qc=0x7fffd6775f90) at pquery.c:1147\n>>>>> > #26 0x000000000097f3a5 in FillPortalStore (portal=0x1893b40,\n>>>>> isTopLevel=true) at pquery.c:1026\n>>>>> > #27 0x000000000097ed11 in PortalRun (portal=0x1893b40,\n>>>>> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1a0c0b8,\n>>>>> altdest=0x1a0c0b8, qc=0x7fffd6776150) at pquery.c:758\n>>>>> > #28 0x0000000000978aa5 in exec_simple_query (\n>>>>> >\n>>>>> > Thanks & Regards,\n>>>>> > Rajkumar Raghuwanshi\n>>>>> >\n>>>>> >\n>>>>> > On Fri, Sep 3, 2021 at 7:17 PM Amit Langote <amitlangote09@gmail.com>\n>>>>> wrote:\n>>>>> >>\n>>>>> >> On Wed, Sep 1, 2021 at 2:31 PM Amit Langote <\n>>>>> amitlangote09@gmail.com> wrote:\n>>>>> >> > On Tue, Aug 31, 2021 at 8:02 PM Nitin Jadhav\n>>>>> >> > <nitinjadhavpostgres@gmail.com> wrote:\n>>>>> >> > > The attached patch also fixes the above comments.\n>>>>> >> >\n>>>>> >> > I noticed that multi-column list partitions containing NULLs don't\n>>>>> >> > work correctly with partition pruning yet.\n>>>>> >> >\n>>>>> >> > create table p0 (a int, b text, c bool) partition by list (a, b,\n>>>>> c);\n>>>>> >> > create table p01 partition of p0 for values in ((1, 1, true),\n>>>>> (NULL, 1, false));\n>>>>> >> > create table p02 partition of p0 for values in ((1, NULL, false));\n>>>>> >> > explain select * from p0 where a is null;\n>>>>> >> > QUERY PLAN\n>>>>> >> > --------------------------------------------------------\n>>>>> >> > Seq Scan on p01 
p0 (cost=0.00..22.50 rows=6 width=37)\n>>>>> >> > Filter: (a IS NULL)\n>>>>> >> > (2 rows)\n>>>>> >> >\n>>>>> >> > I guess that may be due to the following newly added code being\n>>>>> incomplete:\n>>>>> >> >\n>>>>> >> > +/*\n>>>>> >> > + * get_partition_bound_null_index\n>>>>> >> > + *\n>>>>> >> > + * Returns the partition index of the partition bound which\n>>>>> accepts NULL.\n>>>>> >> > + */\n>>>>> >> > +int\n>>>>> >> > +get_partition_bound_null_index(PartitionBoundInfo boundinfo)\n>>>>> >> > +{\n>>>>> >> > + int i = 0;\n>>>>> >> > + int j = 0;\n>>>>> >> > +\n>>>>> >> > + if (!boundinfo->isnulls)\n>>>>> >> > + return -1;\n>>>>> >> >\n>>>>> >> > - if (!val->constisnull)\n>>>>> >> > - count++;\n>>>>> >> > + for (i = 0; i < boundinfo->ndatums; i++)\n>>>>> >> > + {\n>>>>> >> > + //TODO: Handle for multi-column cases\n>>>>> >> > + for (j = 0; j < 1; j++)\n>>>>> >> > + {\n>>>>> >> > + if (boundinfo->isnulls[i][j])\n>>>>> >> > + return boundinfo->indexes[i];\n>>>>> >> > }\n>>>>> >> > }\n>>>>> >> >\n>>>>> >> > + return -1;\n>>>>> >> > +}\n>>>>> >> >\n>>>>> >> > Maybe this function needs to return a \"bitmapset\" of indexes,\n>>>>> because\n>>>>> >> > multiple partitions can now contain NULL values.\n>>>>> >> >\n>>>>> >> > Some other issues I noticed and suggestions for improvement:\n>>>>> >> >\n>>>>> >> > +/*\n>>>>> >> > + * checkForDuplicates\n>>>>> >> > + *\n>>>>> >> > + * Returns TRUE if the list bound element is already present in\n>>>>> the list of\n>>>>> >> > + * list bounds, FALSE otherwise.\n>>>>> >> > + */\n>>>>> >> > +static bool\n>>>>> >> > +checkForDuplicates(List *source, List *searchElem)\n>>>>> >> >\n>>>>> >> > This function name may be too generic. 
Given that it is specific\n>>>>> to\n>>>>> >> > implementing list bound de-duplication, maybe the following\n>>>>> signature\n>>>>> >> > is more appropriate:\n>>>>> >> >\n>>>>> >> > static bool\n>>>>> >> > checkListBoundDuplicated(List *list_bounds, List *new_bound)\n>>>>> >> >\n>>>>> >> > Also, better if the function comment mentions those parameter\n>>>>> names, like:\n>>>>> >> >\n>>>>> >> > \"Returns TRUE if the list bound element 'new_bound' is already\n>>>>> present\n>>>>> >> > in the target list 'list_bounds', FALSE otherwise.\"\n>>>>> >> >\n>>>>> >> > +/*\n>>>>> >> > + * transformPartitionListBounds\n>>>>> >> > + *\n>>>>> >> > + * Converts the expressions of list partition bounds from the\n>>>>> raw grammar\n>>>>> >> > + * representation.\n>>>>> >> >\n>>>>> >> > A sentence about the result format would be helpful, like:\n>>>>> >> >\n>>>>> >> > The result is a List of Lists of Const nodes to account for the\n>>>>> >> > partition key possibly containing more than one column.\n>>>>> >> >\n>>>>> >> > + int i = 0;\n>>>>> >> > + int j = 0;\n>>>>> >> >\n>>>>> >> > Better to initialize such loop counters closer to the loop.\n>>>>> >> >\n>>>>> >> > + colname[i] = (char *) palloc0(NAMEDATALEN *\n>>>>> sizeof(char));\n>>>>> >> > + colname[i] = get_attname(RelationGetRelid(parent),\n>>>>> >> > + key->partattrs[i], false);\n>>>>> >> >\n>>>>> >> > The palloc in the 1st statement is wasteful, because the 2nd\n>>>>> statement\n>>>>> >> > overwrites its pointer by the pointer to the string palloc'd by\n>>>>> >> > get_attname().\n>>>>> >> >\n>>>>> >> > + ListCell *cell2 = NULL;\n>>>>> >> >\n>>>>> >> > No need to explicitly initialize the loop variable.\n>>>>> >> >\n>>>>> >> > + RowExpr *rowexpr = NULL;\n>>>>> >> > +\n>>>>> >> > + if (!IsA(expr, RowExpr))\n>>>>> >> > + ereport(ERROR,\n>>>>> >> > +\n>>>>> (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>>>>> >> > + errmsg(\"Invalid list bound\n>>>>> specification\"),\n>>>>> >> > + parser_errposition(pstate,\n>>>>> 
exprLocation((Node\n>>>>> >> > *) spec))));\n>>>>> >> > +\n>>>>> >> > + rowexpr = (RowExpr *) expr;\n>>>>> >> >\n>>>>> >> > It's okay to assign rowexpr at the top here instead of the dummy\n>>>>> >> > NULL-initialization and write the condition as:\n>>>>> >> >\n>>>>> >> > if (!IsA(rowexpr, RowExpr))\n>>>>> >> >\n>>>>> >> > + if (isDuplicate)\n>>>>> >> > + continue;\n>>>>> >> > +\n>>>>> >> > + result = lappend(result, values);\n>>>>> >> >\n>>>>> >> > I can see you copied this style from the existing code, but how\n>>>>> about\n>>>>> >> > writing this simply as:\n>>>>> >> >\n>>>>> >> > if (!isDuplicate)\n>>>>> >> > result = lappend(result, values);\n>>>>> >> >\n>>>>> >> > -/* One value coming from some (index'th) list partition */\n>>>>> >> > +/* One bound of a list partition */\n>>>>> >> > typedef struct PartitionListValue\n>>>>> >> > {\n>>>>> >> > int index;\n>>>>> >> > - Datum value;\n>>>>> >> > + Datum *values;\n>>>>> >> > + bool *isnulls;\n>>>>> >> > } PartitionListValue;\n>>>>> >> >\n>>>>> >> > Given that this is a locally-defined struct, I wonder if it makes\n>>>>> >> > sense to rename the struct while we're at it. 
Call it, say,\n>>>>> >> > PartitionListBound?\n>>>>> >> >\n>>>>> >> > Also, please keep part of the existing comment that says that the\n>>>>> >> > bound belongs to index'th partition.\n>>>>> >> >\n>>>>> >> > Will send more comments in a bit...\n>>>>> >>\n>>>>> >> + * partition_bound_accepts_nulls\n>>>>> >> + *\n>>>>> >> + * Returns TRUE if partition bound has NULL value, FALSE otherwise.\n>>>>> >> */\n>>>>> >>\n>>>>> >> I suggest slight rewording, as follows:\n>>>>> >>\n>>>>> >> \"Returns TRUE if any of the partition bounds contains a NULL value,\n>>>>> >> FALSE otherwise.\"\n>>>>> >>\n>>>>> >> - PartitionListValue *all_values;\n>>>>> >> + PartitionListValue **all_values;\n>>>>> >> ...\n>>>>> >> - all_values = (PartitionListValue *)\n>>>>> >> - palloc(ndatums * sizeof(PartitionListValue));\n>>>>> >> + ndatums = get_list_datum_count(boundspecs, nparts);\n>>>>> >> + all_values = (PartitionListValue **)\n>>>>> >> + palloc(ndatums * sizeof(PartitionListValue *));\n>>>>> >>\n>>>>> >> I don't see the need to redefine all_values's pointer type. 
No need\n>>>>> >> to palloc PartitionListValue repeatedly for every datum as done\n>>>>> >> further down as follows:\n>>>>> >>\n>>>>> >> + all_values[j] = (PartitionListValue *)\n>>>>> >> palloc(sizeof(PartitionListValue));\n>>>>> >>\n>>>>> >> You do need the following two though:\n>>>>> >>\n>>>>> >> + all_values[j]->values = (Datum *)\n>>>>> palloc0(key->partnatts *\n>>>>> >> sizeof(Datum));\n>>>>> >> + all_values[j]->isnulls = (bool *)\n>>>>> palloc0(key->partnatts *\n>>>>> >> sizeof(bool));\n>>>>> >>\n>>>>> >> If you change the above the way I suggest, you'd also need to revert\n>>>>> >> the following change:\n>>>>> >>\n>>>>> >> - qsort_arg(all_values, ndatums, sizeof(PartitionListValue),\n>>>>> >> + qsort_arg(all_values, ndatums, sizeof(PartitionListValue *),\n>>>>> >> qsort_partition_list_value_cmp, (void *) key);\n>>>>> >>\n>>>>> >> + int orig_index = all_values[i]->index;\n>>>>> >> + boundinfo->datums[i] = (Datum *) palloc(key->partnatts *\n>>>>> sizeof(Datum));\n>>>>> >>\n>>>>> >> Missing a newline between these two statements.\n>>>>> >>\n>>>>> >> BTW, I noticed that the boundDatums variable is no longer used in\n>>>>> >> create_list_bounds. I traced back its origin and found that a\n>>>>> recent\n>>>>> >> commit 53d86957e98 introduced it to implement an idea to reduce the\n>>>>> >> finer-grained pallocs that were being done in\n>>>>> create_list_bounds(). I\n>>>>> >> don't think that this patch needs to throw away that work. 
You can\n>>>>> >> make it work as the attached delta patch that applies on top of v3.\n>>>>> >> Please check.\n>>>>> >>\n>>>>> >> @@ -915,7 +949,7 @@ partition_bounds_equal(int partnatts, int16\n>>>>> >> *parttyplen, bool *parttypbyval,\n>>>>> >> if (b1->nindexes != b2->nindexes)\n>>>>> >> return false;\n>>>>> >>\n>>>>> >> - if (b1->null_index != b2->null_index)\n>>>>> >> + if (get_partition_bound_null_index(b1) !=\n>>>>> >> get_partition_bound_null_index(b2))\n>>>>> >>\n>>>>> >> As mentioned in the last message, this bit in\n>>>>> partition_bounds_equal()\n>>>>> >> needs to be comparing \"bitmapsets\" of null bound indexes, that is\n>>>>> >> after fixing get_partition_bound_null_index() as previously\n>>>>> mentioned.\n>>>>> >>\n>>>>> >> But...\n>>>>> >>\n>>>>> >> @@ -988,7 +1022,22 @@ partition_bounds_equal(int partnatts, int16\n>>>>> >> *parttyplen, bool *parttypbyval,\n>>>>> >> * context. datumIsEqual() should be simple enough\n>>>>> to be\n>>>>> >> * safe.\n>>>>> >> */\n>>>>> >> - if (!datumIsEqual(b1->datums[i][j],\n>>>>> b2->datums[i][j],\n>>>>> >> + if (b1->isnulls)\n>>>>> >> + b1_isnull = b1->isnulls[i][j];\n>>>>> >> + if (b2->isnulls)\n>>>>> >> + b2_isnull = b2->isnulls[i][j];\n>>>>> >> +\n>>>>> >> + /*\n>>>>> >> + * If any of the partition bound has NULL value,\n>>>>> then check\n>>>>> >> + * equality for the NULL value instead of comparing\n>>>>> the datums\n>>>>> >> + * as it does not contain valid value in case of\n>>>>> NULL.\n>>>>> >> + */\n>>>>> >> + if (b1_isnull || b2_isnull)\n>>>>> >> + {\n>>>>> >> + if (b1_isnull != b2_isnull)\n>>>>> >> + return false;\n>>>>> >> + }\n>>>>> >>\n>>>>> >> ...if you have this in the main loop, I don't think we need the\n>>>>> above\n>>>>> >> code stanza which appears to implement a short-cut for this\n>>>>> long-form\n>>>>> >> logic.\n>>>>> >>\n>>>>> >> + (key->strategy != PARTITION_STRATEGY_LIST ||\n>>>>> >> + !src->isnulls[i][j]))\n>>>>> >>\n>>>>> >> I think it's better to write this condition as follows just 
like the\n>>>>> >> accompanying condition involving src->kind:\n>>>>> >>\n>>>>> >> (src->nulls == NULL || !src->isnulls[i][j])\n>>>>> >>\n>>>>> >> (Skipped looking at merge_list_bounds() and related changes for now\n>>>>> as\n>>>>> >> I see a lot of TODOs remain to be done.)\n>>>>> >>\n>>>>> >> In check_new_partition_bound():\n>>>>> >>\n>>>>> >> + Datum *values = (Datum *)\n>>>>> >> palloc0(key->partnatts * sizeof(Datum));\n>>>>> >> + bool *isnulls = (bool *)\n>>>>> >> palloc0(key->partnatts * sizeof(bool));\n>>>>> >>\n>>>>> >> Doesn't seem like a bad idea to declare these as:\n>>>>> >>\n>>>>> >> Datum values[PARTITION_MAX_KEYS];\n>>>>> >> bool isnulls[PARTITION_MAX_KEYS];\n>>>>> >>\n>>>>> >>\n>>>>> >> I looked at get_qual_for_list_multi_column() and immediately thought\n>>>>> >> that it may be a bad idea. I think it's better to integrate the\n>>>>> logic\n>>>>> >> for multi-column case into the existing function even if that makes\n>>>>> >> the function appear more complex. Having two functions with the\n>>>>> same\n>>>>> >> goal and mostly the same code is not a good idea mainly because it\n>>>>> >> becomes a maintenance burden.\n>>>>> >>\n>>>>> >> I have attempted a rewrite such that get_qual_for_list() now handles\n>>>>> >> both the single-column and multi-column cases. Changes included in\n>>>>> >> the delta patch. The patch updates some outputs of the newly added\n>>>>> >> tests for multi-column list partitions, because the new code emits\n>>>>> the\n>>>>> >> IS NOT NULL tests a bit differently than\n>>>>> >> get_qual_for_list_mutli_column() would. Notably, the old approach\n>>>>> >> would emit IS NOT NULL for every non-NULL datum matched to a given\n>>>>> >> column, not just once for the column. 
However, the patch makes a\n>>>>> few\n>>>>> >> other tests fail, mainly because I had to fix\n>>>>> >> partition_bound_accepts_nulls() to handle the multi-column case,\n>>>>> >> though didn't bother to update all callers of it to also handle the\n>>>>> >> multi-column case correctly. I guess that's a TODO you're going to\n>>>>> >> deal with at some point anyway. :)\n>>>>> >>\n>>>>> >> I still have more than half of v3 left to look at, so will continue\n>>>>> >> looking. In the meantime, please check the changes I suggested,\n>>>>> >> including the delta patch, and let me know your thoughts.\n>>>>> >>\n>>>>> >> --\n>>>>> >> Amit Langote\n>>>>> >> EDB: http://www.enterprisedb.com\n>>>>>\n>>>>", "msg_date": "Fri, 22 Oct 2021 15:19:39 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Fri, Oct 22, 2021 at 2:48 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > While testing further I got a crash with partition wise join enabled for\n> multi-col list partitions. please find test case & stack-trace below.\n>\n> Thanks for sharing. 
I have fixed the issue in the attached patch.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n>\n>>>>>>\n>>>>>> Hi,\n\n+isListBoundDuplicated(List *list_bounds, List *new_bound)\n\n+ Const *value1 = castNode(Const, list_nth(elem, i));\n+ Const *value2 = castNode(Const, list_nth(new_bound, i));\n\nShould the upper bound for index i take into account the length of\nnew_bound ?\nIf the length of new_bound is always the same as that for elem, please add\nan assertion.\n\nFor transformPartitionListBounds():\n+ deparse_expression((Node *) list_nth(partexprs, j),\n+\n deparse_context_for(RelationGetRelationName(parent),\n+\n RelationGetRelid(parent)),\n\nPlease consider calling RelationGetRelationName(parent)\nand RelationGetRelid(parent) (and assigning to local variables) outside the\nloop.\n\n+get_list_datum_count(PartitionBoundSpec **boundspecs, int nparts)\n\nget_list_datum_count -> get_list_datums_count\n\nFor partition_bounds_equal():\n\n+ if (b1->isnulls)\n+ b1_isnull = b1->isnulls[i][j];\n+ if (b2->isnulls)\n+ b2_isnull = b2->isnulls[i][j];\n\nShould the initialization of b1_isnull and b2_isnull be done inside the\nloop (so that they don't inherit value from previous iteration) ?\n\nCheers", "msg_date": "Fri, 22 Oct 2021 03:50:33 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" },
{ "msg_contents": "On Fri, Oct 22, 2021 at 3:50 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Oct 22, 2021 at 2:48 AM Nitin Jadhav <\n> nitinjadhavpostgres@gmail.com> wrote:\n>\n>> > While testing further I got a crash with partition wise join enabled\n>> for multi-col list partitions. please find test case & stack-trace below.\n>>\n>> Thanks for sharing. 
I have fixed the issue in the attached patch.\n>>\n>> Thanks & Regards,\n>> Nitin Jadhav\n>>\n>>\n>>>>>>>\n>>>>>>> Hi,\n>\n> +isListBoundDuplicated(List *list_bounds, List *new_bound)\n>\n> + Const *value1 = castNode(Const, list_nth(elem, i));\n> + Const *value2 = castNode(Const, list_nth(new_bound, i));\n>\n> Should the upper bound for index i take into account the length of\n> new_bound ?\n> If the length of new_bound is always the same as that for elem, please add\n> an assertion.\n>\n> For transformPartitionListBounds():\n> + deparse_expression((Node *) list_nth(partexprs, j),\n> +\n> deparse_context_for(RelationGetRelationName(parent),\n> +\n> RelationGetRelid(parent)),\n>\n> Please consider calling RelationGetRelationName(parent)\n> and RelationGetRelid(parent) (and assigning to local variables) outside the\n> loop.\n>\n> +get_list_datum_count(PartitionBoundSpec **boundspecs, int nparts)\n>\n> get_list_datum_count -> get_list_datums_count\n>\n> For partition_bounds_equal():\n>\n> + if (b1->isnulls)\n> + b1_isnull = b1->isnulls[i][j];\n> + if (b2->isnulls)\n> + b2_isnull = b2->isnulls[i][j];\n>\n> Should the initialization of b1_isnull and b2_isnull be done inside the\n> loop (so that they don't inherit value from previous iteration) ?\n>\n> Cheers\n>\n\nHi,\nContinuing review.\n\n+ * For the multi-column case, we must make an BoolExpr that\n\nan BoolExpr -> a BoolExpr\n\nIn get_qual_for_list(), it would be better if repetitive code can be\nextracted into a helper method:\n\n+ if (val->constisnull)\n+ {\n+ NullTest *nulltest = makeNode(NullTest);\n+\n+ key_is_null[j] = true;\n+\n+ nulltest->arg = keyCol[j];\n+ nulltest->nulltesttype = IS_NULL;\n+ nulltest->argisrow = false;\n+ nulltest->location = -1;\n+\n+ if (key->partnatts > 1)\n+ and_args = lappend(and_args, nulltest);\n+ else\n+ is_null_test = (Expr *) nulltest;\n+ }\n+ else\n+ {\n+ if (key->partnatts > 1)\n+ {\n+ Expr *opexpr =\n+ make_partition_op_expr(key, j,\n+ BTEqualStrategyNumber,\n+ keyCol[j],\n+ 
(Expr *) val);\n+ and_args = lappend(and_args, opexpr);\n+ }\n+ else\n+ datum_elem = (Expr *) val;\n+ }\n\nFor match_clause_to_partition_key():\n\n+ if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n+ {\n+ *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n+ return PARTCLAUSE_MATCH_NULLNESS;\n+ }\n+ else\n\nSince the if block ends with return, the 'else' is not needed - else block\ncan be indented to the left.\n\nget_min_and_max_off(): I think get_min_and_max_offset as method name would\nbe more informative.\n\n+ Assert(0 == partition_lbound_datum_cmp(partsupfunc, partcollation,\n+ boundinfo->datums[off],\n+ boundinfo->isnulls[off],\n+ values, isnulls, nvalues));\n\nIf the 'while (off >= 1)' loop exits without modifying off, is the above\nassertion always true (can boundinfo->datums[off] be accessed without\nchecking bound) ?\n\nCheers", "msg_date": "Fri, 22 Oct 2021 09:01:35 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" },
{ "msg_contents": "Hi Nitin,\n\nOn Fri, Oct 22, 2021 at 6:48 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks for sharing. I have fixed the issue in the attached patch.\n\nI noticed that there's no commitfest entry for this. 
Will you please\nadd this to the next one?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Oct 2021 17:10:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks for reviewing.\n\n> +isListBoundDuplicated(List *list_bounds, List *new_bound)\n>\n> + Const *value1 = castNode(Const, list_nth(elem, i));\n> + Const *value2 = castNode(Const, list_nth(new_bound, i));\n>\n> Should the upper bound for index i take into account the length of new_bound ?\n> If the length of new_bound is always the same as that for elem, please add an assertion.\n\nThe length of 'elem' should be same as length of 'new_bound'. Added\nassert statement for the same.\n\n\n> For transformPartitionListBounds():\n> + deparse_expression((Node *) list_nth(partexprs, j),\n> + deparse_context_for(RelationGetRelationName(parent),\n> + RelationGetRelid(parent)),\n>\n> Please consider calling RelationGetRelationName(parent) and RelationGetRelid(parent) (and assigning to local variables) outside the loop.\n\nI don't think this is an issue as 'RelationGetRelationName' and\n'RelationGetRelid' are macros. Please let me know if your opinion is\ndifferent.\n\n\n> +get_list_datum_count(PartitionBoundSpec **boundspecs, int nparts)\n>\n> get_list_datum_count -> get_list_datums_count\n\nThere was a function earlier with the name\n'get_non_null_list_datum_count()'. So now this has changed to\n'get_list_datum_count()' as we are not separating the non null datums\nfrom the list. The new name is inline with the old function name which\nwas already accepted by the community. 
So I feel it is better to not\nchange.\n\n\n> For partition_bounds_equal():\n>\n> + if (b1->isnulls)\n> + b1_isnull = b1->isnulls[i][j];\n> + if (b2->isnulls)\n> + b2_isnull = b2->isnulls[i][j];\n>\n> Should the initialization of b1_isnull and b2_isnull be done inside the loop (so that they don't inherit value from previous iteration) ?\n\nNice catch. Fixed.\n\n\n> In get_qual_for_list(), it would be better if repetitive code can be extracted into a helper method:\n\nI have removed the repetitive code and made a common function named\n'get_qual_for_list_datums()'.\n\n\n> For match_clause_to_partition_key():\n>\n> + if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n> + {\n> + *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n> + return PARTCLAUSE_MATCH_NULLNESS;\n> + }\n> + else\n>\n> Since the if block ends with return, the 'else' is not needed - else block can be indented to the left.\n\nFixed.\n\n\n> get_min_and_max_off(): I think get_min_and_max_offset as method name would be more informative.\n\nFixed.\n\n\n> + Assert(0 == partition_lbound_datum_cmp(partsupfunc, partcollation,\n> + boundinfo->datums[off],\n> + boundinfo->isnulls[off],\n> + values, isnulls, nvalues));\n>\n> If the 'while (off >= 1)' loop exits without modifying off, is the above assertion always true (can boundinfo->datums[off] be accessed without checking bound) ?\n\nYes. The assertion holds good even though the control doesn't enter\nthe loop. In that case the 'off' can be directly considered as minoff\nor maxoff. Since we are considering it as valid, the assertion is\nneeded.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Oct 22, 2021 at 9:30 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Fri, Oct 22, 2021 at 3:50 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>>\n>>\n>> On Fri, Oct 22, 2021 at 2:48 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:\n>>>\n>>> > While testing further I got a crash with partition wise join enabled for multi-col list partitions. 
please find test case & stack-trace below.\n>>>\n>>> Thanks for sharing. I have fixed the issue in the attached patch.\n>>>\n>>> Thanks & Regards,\n>>> Nitin Jadhav\n>>>\n>>>>>>>>\n>>>>>>>>\n>> Hi,\n>>\n>> +isListBoundDuplicated(List *list_bounds, List *new_bound)\n>>\n>> + Const *value1 = castNode(Const, list_nth(elem, i));\n>> + Const *value2 = castNode(Const, list_nth(new_bound, i));\n>>\n>> Should the upper bound for index i take into account the length of new_bound ?\n>> If the length of new_bound is always the same as that for elem, please add an assertion.\n>>\n>> For transformPartitionListBounds():\n>> + deparse_expression((Node *) list_nth(partexprs, j),\n>> + deparse_context_for(RelationGetRelationName(parent),\n>> + RelationGetRelid(parent)),\n>>\n>> Please consider calling RelationGetRelationName(parent) and RelationGetRelid(parent) (and assigning to local variables) outside the loop.\n>>\n>> +get_list_datum_count(PartitionBoundSpec **boundspecs, int nparts)\n>>\n>> get_list_datum_count -> get_list_datums_count\n>>\n>> For partition_bounds_equal():\n>>\n>> + if (b1->isnulls)\n>> + b1_isnull = b1->isnulls[i][j];\n>> + if (b2->isnulls)\n>> + b2_isnull = b2->isnulls[i][j];\n>>\n>> Should the initialization of b1_isnull and b2_isnull be done inside the loop (so that they don't inherit value from previous iteration) ?\n>>\n>> Cheers\n>\n>\n> Hi,\n> Continuing review.\n>\n> + * For the multi-column case, we must make an BoolExpr that\n>\n> an BoolExpr -> a BoolExpr\n>\n> In get_qual_for_list(), it would be better if repetitive code can be extracted into a helper method:\n>\n> + if (val->constisnull)\n> + {\n> + NullTest *nulltest = makeNode(NullTest);\n> +\n> + key_is_null[j] = true;\n> +\n> + nulltest->arg = keyCol[j];\n> + nulltest->nulltesttype = IS_NULL;\n> + nulltest->argisrow = false;\n> + nulltest->location = -1;\n> +\n> + if (key->partnatts > 1)\n> + and_args = lappend(and_args, nulltest);\n> + else\n> + is_null_test = (Expr *) nulltest;\n> + }\n> + 
else\n> + {\n> + if (key->partnatts > 1)\n> + {\n> + Expr *opexpr =\n> + make_partition_op_expr(key, j,\n> + BTEqualStrategyNumber,\n> + keyCol[j],\n> + (Expr *) val);\n> + and_args = lappend(and_args, opexpr);\n> + }\n> + else\n> + datum_elem = (Expr *) val;\n> + }\n>\n> For match_clause_to_partition_key():\n>\n> + if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n> + {\n> + *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n> + return PARTCLAUSE_MATCH_NULLNESS;\n> + }\n> + else\n>\n> Since the if block ends with return, the 'else' is not needed - else block can be indented to the left.\n>\n> get_min_and_max_off(): I think get_min_and_max_offset as method name would be more informative.\n>\n> + Assert(0 == partition_lbound_datum_cmp(partsupfunc, partcollation,\n> + boundinfo->datums[off],\n> + boundinfo->isnulls[off],\n> + values, isnulls, nvalues));\n>\n> If the 'while (off >= 1)' loop exits without modifying off, is the above assertion always true (can boundinfo->datums[off] be accessed without checking bound) ?\n>\n> Cheers", "msg_date": "Tue, 2 Nov 2021 19:03:37 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "> I noticed that there's no commitfest entry for this. Will you please\n> add this to the next one?\n\nI have added it to Nov commitfest.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Oct 29, 2021 at 1:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Nitin,\n>\n> On Fri, Oct 22, 2021 at 6:48 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Thanks for sharing. I have fixed the issue in the attached patch.\n>\n> I noticed that there's no commitfest entry for this. 
Will you please\n> add this to the next one?\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Nov 2021 19:05:48 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi,\n\nFew comments for v7 patch, note that I haven't been through the\nprevious discussion, if any of the review comments that has been\nalready discussed & overridden, then please ignore here too:\n\n\npartbounds.c: In function ‘get_qual_for_list.isra.18’:\npartbounds.c:4284:29: warning: ‘boundinfo’ may be used uninitialized\nin this function [-Wmaybe-uninitialized]\n datumCopy(bound_info->datums[i][j],\n ~~~~~~~~~~^~~~~~~~\npartbounds.c:4335:21: note: ‘boundinfo’ was declared here\n PartitionBoundInfo boundinfo;\n ^~~~~~~~~\npartbounds.c: In function ‘partition_bounds_merge’:\npartbounds.c:1305:12: warning: ‘inner_isnull’ may be used\nuninitialized in this function [-Wmaybe-uninitialized]\n bool *inner_isnull;\n ^~~~~~~~~~~~\npartbounds.c:1304:12: warning: ‘outer_isnull’ may be used\nuninitialized in this function [-Wmaybe-uninitialized]\n bool *outer_isnull;\n ^~~~~~~~~~~~\n\nGot these warnings with gcc -O2 compilation.\n----\n\n /*\n+ * isListBoundDuplicated\n+ *\n+ * Returns TRUE if the list bound element 'new_bound' is already present\n+ * in the target list 'list_bounds', FALSE otherwise.\n+ */\n+static bool\n+isListBoundDuplicated(List *list_bounds, List *new_bound)\n+{\n+ ListCell *cell = NULL;\n+\n+ foreach(cell, list_bounds)\n+ {\n+ int i;\n+ List *elem = lfirst(cell);\n+ bool isDuplicate = true;\n+\n+ Assert(list_length(elem) == list_length(new_bound));\n+\n+ for (i = 0; i < list_length(elem); i++)\n+ {\n+ Const *value1 = castNode(Const, list_nth(elem, i));\n+ Const *value2 = castNode(Const, list_nth(new_bound, i));\n+\n+ if (!equal(value1, value2))\n+ {\n+ isDuplicate = false;\n+ break;\n+ }\n+ }\n+\n+ if (isDuplicate)\n+ return true;\n+ 
}\n+\n+ return false;\n+}\n\nThis function is unnecessarily complicated, I think you can avoid\ninner for loops; simply replace for-loop-block with \"if\n(equal(lfirst(cell), new_bound)) return true\".\n----\n\n+ char **colname = (char **) palloc0(partnatts * sizeof(char *));\n+ Oid *coltype = palloc0(partnatts * sizeof(Oid));\n+ int32 *coltypmod = palloc0(partnatts * sizeof(int));\n+ Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n+\nThis allocation seems to be worthless, read ahead.\n----\n\n+ for (i = 0; i < partnatts; i++)\n+ {\n+ if (key->partattrs[i] != 0)\n+ colname[i] = get_attname(RelationGetRelid(parent),\n+ key->partattrs[i], false);\n+ else\n+ {\n+ colname[i] =\n+ deparse_expression((Node *) list_nth(partexprs, j),\n+ deparse_context_for(RelationGetRelationName(parent),\n+ RelationGetRelid(parent)),\n+ false, false);\n+ ++j;\n+ }\n+\n+ coltype[i] = get_partition_col_typid(key, i);\n+ coltypmod[i] = get_partition_col_typmod(key, i);\n+ partcollation[i] = get_partition_col_collation(key, i);\n+ }\n\nI think there is no need for this separate loop inside\ntransformPartitionListBounds, you can do that same in the next loop as\nwell. 
And instead of get_partition_col_* calling and storing, simply\nuse that directly as an argument to transformPartitionBoundValue().\n----\n\n+\n+ if (IsA(expr, RowExpr) &&\n+ partnatts != list_length(((RowExpr *) expr)->args))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n+ errmsg(\"Must specify exactly one value per partitioning column\"),\n+ parser_errposition(pstate, exprLocation((Node *) spec))));\n+\n\nI think this should be inside the \"else\" block after \"!IsA(rowexpr,\nRowExpr)\" error and you can avoid IsA() check too.\n----\n\n- if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n+ if (b1->isnulls)\n+ b1_isnull = b1->isnulls[i][j];\n+ if (b2->isnulls)\n+ b2_isnull = b2->isnulls[i][j];\n+\n+ /*\n+ * If any of the partition bound has NULL value, then check\n+ * equality for the NULL value instead of comparing the datums\n+ * as it does not contain valid value in case of NULL.\n+ */\n+ if (b1_isnull || b2_isnull)\n+ {\n+ if (b1_isnull != b2_isnull)\n+ return false;\n+ }\n+ else if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n\nLooks difficult to understand at first glance, how about the following:\n\nif (b1->isnulls != b2->isnulls)\n return false;\n\nif (b1->isnulls)\n{\n if (b1->isnulls[i][j] != b2->isnulls[i][j])\n return false;\n if (b1->isnulls[i][j])\n continue;\n}\n\nSee how range partitioning infinite values are handled. 
Also, place\nthis before the comment block that was added for the \"!datumIsEqual()\"\ncase.\n----\n\n+ if (src->isnulls)\n+ dest->isnulls[i] = (bool *) palloc(sizeof(bool) * natts);\n...\n+ if (src->isnulls)\n+ dest->isnulls[i][j] = src->isnulls[i][j];\n+\nNothing wrong with this but if we could have checked \"dest->isnulls\"\ninstead of \"src->isnulls\" would be much better.\n----\n\n- if (dest->kind == NULL ||\n- dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE)\n+ if ((dest->kind == NULL ||\n+ dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE) &&\n+ (key->strategy != PARTITION_STRATEGY_LIST ||\n+ (src->isnulls == NULL || !src->isnulls[i][j])))\n dest->datums[i][j] = datumCopy(src->datums[i][j],\n byval, typlen);\nCondition \"key->strategy != PARTITION_STRATEGY_LIST\" seems to be unnecessary.\n----\n\n+ for (i = 0; i < partnatts; i++)\n+ {\n+ if (outer_isnull[i])\n+ {\n+ outer_has_null = true;\n+ if (outer_map.merged_indexes[outer_index] == -1)\n+ consider_outer_null = true;\n+ }\n+ }\n+\n+ for (i = 0; i < partnatts; i++)\n+ {\n+ if (inner_isnull[i])\n+ {\n+ inner_has_null = true;\n+ if (inner_map.merged_indexes[inner_index] == -1)\n+ consider_inner_null = true;\n+ }\n+ }\n\nCan't be a single loop?\n----\n\nIt would be helpful if you could run pgindent on your patch if not done already.\n----\n\nThat's all for now, I am yet to finish the complete patch reading and\nunderstand the code flow, but I am out of time now.\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:25:31 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thank you for reviewing the patch.\n\n> partbounds.c: In function ‘get_qual_for_list.isra.18’:\n> partbounds.c:4284:29: warning: ‘boundinfo’ may be used uninitialized\n> in this function [-Wmaybe-uninitialized]\n> datumCopy(bound_info->datums[i][j],\n> ~~~~~~~~~~^~~~~~~~\n> partbounds.c:4335:21: note: ‘boundinfo’ was declared here\n> 
PartitionBoundInfo boundinfo;\n> ^~~~~~~~~\n> partbounds.c: In function ‘partition_bounds_merge’:\n> partbounds.c:1305:12: warning: ‘inner_isnull’ may be used\n> uninitialized in this function [-Wmaybe-uninitialized]\n> bool *inner_isnull;\n> ^~~~~~~~~~~~\n> partbounds.c:1304:12: warning: ‘outer_isnull’ may be used\n> uninitialized in this function [-Wmaybe-uninitialized]\n> bool *outer_isnull;\n> ^~~~~~~~~~~~\n\nFixed.\n\n> This function is unnecessarily complicated, I think you can avoid\n> inner for loops; simply replace for-loop-block with \"if\n> (equal(lfirst(cell), new_bound)) return true\".\n\nThank you for the suggestion. Fixed.\n\n> + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n> + Oid *coltype = palloc0(partnatts * sizeof(Oid));\n> + int32 *coltypmod = palloc0(partnatts * sizeof(int));\n> + Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n> +\n> This allocation seems to be worthless, read ahead.\n>\n> I think there is no need for this separate loop inside\n> transformPartitionListBounds, you can do that same in the next loop as\n> well. And instead of get_partition_col_* calling and storing, simply\n> use that directly as an argument to transformPartitionBoundValue().\n\nYes. The loop can be avoided and content of the above loop can be\nincluded in the next loop but the next loop iterates over a list of\nmulti column datums. For each iteration, we need the information of\nall the columns. The above data (colname, coltype, coltypmod and\npartcollation) remains same for each iteration of the loop, If we\nmodify as suggested, then the function to fetch these information has\nto be called every-time. To avoid this situation I have made a\nseparate loop outside which only runs as many number of columns and\nstores in a variable which can be reused later. 
Please let me correct\nif I am wrong.\n\n> I think this should be inside the \"else\" block after \"!IsA(rowexpr,\n> RowExpr)\" error and you can avoid IsA() check too.\n\nThis is required to handle the situation when one partition key is\nmentioned and multiple values are provided in the partition bound\nspecification.\n\n> Looks difficult to understand at first glance, how about the following:\n>\n> if (b1->isnulls != b2->isnulls)\n> return false;\n>\n> if (b1->isnulls)\n> {\n> if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> return false;\n> if (b1->isnulls[i][j])\n> continue;\n> }\n>\n> See how range partitioning infinite values are handled. Also, place\n> this before the comment block that was added for the \"!datumIsEqual()\"\n> case.\n\nFixed. I feel the 'continue' block is not required and hence removed it.\n\n> Nothing wrong with this but if we could have checked \"dest->isnulls\"\n> instead of \"src->isnulls\" would be much better.\n\nHere we are copying the data from 'src' to 'dest'. If there is no data\nin 'src', it is unnecessary to copy. Hence checking 'src'.\n\n> Condition \"key->strategy != PARTITION_STRATEGY_LIST\" seems to be unnecessary.\n\nFixed.\n\n> Can't be a single loop?\n\nYes. 
Fixed.\n\n\n\nOn Fri, Dec 3, 2021 at 7:26 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Few comments for v7 patch, note that I haven't been through the\n> previous discussion, if any of the review comments that has been\n> already discussed & overridden, then please ignore here too:\n>\n>\n> partbounds.c: In function ‘get_qual_for_list.isra.18’:\n> partbounds.c:4284:29: warning: ‘boundinfo’ may be used uninitialized\n> in this function [-Wmaybe-uninitialized]\n> datumCopy(bound_info->datums[i][j],\n> ~~~~~~~~~~^~~~~~~~\n> partbounds.c:4335:21: note: ‘boundinfo’ was declared here\n> PartitionBoundInfo boundinfo;\n> ^~~~~~~~~\n> partbounds.c: In function ‘partition_bounds_merge’:\n> partbounds.c:1305:12: warning: ‘inner_isnull’ may be used\n> uninitialized in this function [-Wmaybe-uninitialized]\n> bool *inner_isnull;\n> ^~~~~~~~~~~~\n> partbounds.c:1304:12: warning: ‘outer_isnull’ may be used\n> uninitialized in this function [-Wmaybe-uninitialized]\n> bool *outer_isnull;\n> ^~~~~~~~~~~~\n>\n> Got these warnings with gcc -O2 compilation.\n> ----\n>\n> /*\n> + * isListBoundDuplicated\n> + *\n> + * Returns TRUE if the list bound element 'new_bound' is already present\n> + * in the target list 'list_bounds', FALSE otherwise.\n> + */\n> +static bool\n> +isListBoundDuplicated(List *list_bounds, List *new_bound)\n> +{\n> + ListCell *cell = NULL;\n> +\n> + foreach(cell, list_bounds)\n> + {\n> + int i;\n> + List *elem = lfirst(cell);\n> + bool isDuplicate = true;\n> +\n> + Assert(list_length(elem) == list_length(new_bound));\n> +\n> + for (i = 0; i < list_length(elem); i++)\n> + {\n> + Const *value1 = castNode(Const, list_nth(elem, i));\n> + Const *value2 = castNode(Const, list_nth(new_bound, i));\n> +\n> + if (!equal(value1, value2))\n> + {\n> + isDuplicate = false;\n> + break;\n> + }\n> + }\n> +\n> + if (isDuplicate)\n> + return true;\n> + }\n> +\n> + return false;\n> +}\n>\n> This function is unnecessarily complicated, I think you can avoid\n> inner for loops; 
simply replace for-loop-block with \"if\n> (equal(lfirst(cell), new_bound)) return true\".\n> ----\n>\n> + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n> + Oid *coltype = palloc0(partnatts * sizeof(Oid));\n> + int32 *coltypmod = palloc0(partnatts * sizeof(int));\n> + Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n> +\n> This allocation seems to be worthless, read ahead.\n> ----\n>\n> + for (i = 0; i < partnatts; i++)\n> + {\n> + if (key->partattrs[i] != 0)\n> + colname[i] = get_attname(RelationGetRelid(parent),\n> + key->partattrs[i], false);\n> + else\n> + {\n> + colname[i] =\n> + deparse_expression((Node *) list_nth(partexprs, j),\n> + deparse_context_for(RelationGetRelationName(parent),\n> + RelationGetRelid(parent)),\n> + false, false);\n> + ++j;\n> + }\n> +\n> + coltype[i] = get_partition_col_typid(key, i);\n> + coltypmod[i] = get_partition_col_typmod(key, i);\n> + partcollation[i] = get_partition_col_collation(key, i);\n> + }\n>\n> I think there is no need for this separate loop inside\n> transformPartitionListBounds, you can do that same in the next loop as\n> well. 
And instead of get_partition_col_* calling and storing, simply\n> use that directly as an argument to transformPartitionBoundValue().\n> ----\n>\n> +\n> + if (IsA(expr, RowExpr) &&\n> + partnatts != list_length(((RowExpr *) expr)->args))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> + errmsg(\"Must specify exactly one value per partitioning column\"),\n> + parser_errposition(pstate, exprLocation((Node *) spec))));\n> +\n>\n> I think this should be inside the \"else\" block after \"!IsA(rowexpr,\n> RowExpr)\" error and you can avoid IsA() check too.\n> ----\n>\n> - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> + if (b1->isnulls)\n> + b1_isnull = b1->isnulls[i][j];\n> + if (b2->isnulls)\n> + b2_isnull = b2->isnulls[i][j];\n> +\n> + /*\n> + * If any of the partition bound has NULL value, then check\n> + * equality for the NULL value instead of comparing the datums\n> + * as it does not contain valid value in case of NULL.\n> + */\n> + if (b1_isnull || b2_isnull)\n> + {\n> + if (b1_isnull != b2_isnull)\n> + return false;\n> + }\n> + else if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n>\n> Looks difficult to understand at first glance, how about the following:\n>\n> if (b1->isnulls != b2->isnulls)\n> return false;\n>\n> if (b1->isnulls)\n> {\n> if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> return false;\n> if (b1->isnulls[i][j])\n> continue;\n> }\n>\n> See how range partitioning infinite values are handled. 
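As an aside, the simplification Amul suggested earlier for isListBoundDuplicated() — compare each stored bound row against the new one as a whole, instead of element by element — can be modeled in a few lines of standalone C. Plain int arrays and memcmp stand in here for the List-of-Const rows and the node-level equal(); none of this is the actual PostgreSQL code:

```c
#include <stdbool.h>
#include <string.h>

/* rows is a row-major table of nrows bound rows, natts ints each.
 * A single whole-row comparison replaces the inner per-column loop,
 * just as equal() compares whole node trees in one call. */
static bool
is_list_bound_duplicated(const int *rows, int nrows, int natts,
                         const int *new_row)
{
    for (int i = 0; i < nrows; i++)
    {
        if (memcmp(rows + i * natts, new_row,
                   natts * sizeof(int)) == 0)
            return true;        /* duplicate multi-column bound */
    }
    return false;
}
```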
Also, place\n> this before the comment block that was added for the \"!datumIsEqual()\"\n> case.\n> ----\n>\n> + if (src->isnulls)\n> + dest->isnulls[i] = (bool *) palloc(sizeof(bool) * natts);\n> ...\n> + if (src->isnulls)\n> + dest->isnulls[i][j] = src->isnulls[i][j];\n> +\n> Nothing wrong with this but if we could have checked \"dest->isnulls\"\n> instead of \"src->isnulls\" would be much better.\n> ----\n>\n> - if (dest->kind == NULL ||\n> - dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE)\n> + if ((dest->kind == NULL ||\n> + dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE) &&\n> + (key->strategy != PARTITION_STRATEGY_LIST ||\n> + (src->isnulls == NULL || !src->isnulls[i][j])))\n> dest->datums[i][j] = datumCopy(src->datums[i][j],\n> byval, typlen);\n> Condition \"key->strategy != PARTITION_STRATEGY_LIST\" seems to be unnecessary.\n> ----\n>\n> + for (i = 0; i < partnatts; i++)\n> + {\n> + if (outer_isnull[i])\n> + {\n> + outer_has_null = true;\n> + if (outer_map.merged_indexes[outer_index] == -1)\n> + consider_outer_null = true;\n> + }\n> + }\n> +\n> + for (i = 0; i < partnatts; i++)\n> + {\n> + if (inner_isnull[i])\n> + {\n> + inner_has_null = true;\n> + if (inner_map.merged_indexes[inner_index] == -1)\n> + consider_inner_null = true;\n> + }\n> + }\n>\n> Can't be a single loop?\n> ----\n>\n> It would be helpful if you could run pgindent on your patch if not done already.\n> ----\n>\n> That's all for now, I am yet to finish the complete patch reading and\n> understand the code flow, but I am out of time now.\n>\n> Regards,\n> Amul", "msg_date": "Mon, 6 Dec 2021 19:28:26 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Mon, Dec 6, 2021 at 7:27 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Thank you for reviewing the patch.\n>\n> > partbounds.c: In function ‘get_qual_for_list.isra.18’:\n> > partbounds.c:4284:29: warning: ‘boundinfo’ 
may be used uninitialized\n> > in this function [-Wmaybe-uninitialized]\n> > datumCopy(bound_info->datums[i][j],\n> > ~~~~~~~~~~^~~~~~~~\n> > partbounds.c:4335:21: note: ‘boundinfo’ was declared here\n> > PartitionBoundInfo boundinfo;\n> > ^~~~~~~~~\n> > partbounds.c: In function ‘partition_bounds_merge’:\n> > partbounds.c:1305:12: warning: ‘inner_isnull’ may be used\n> > uninitialized in this function [-Wmaybe-uninitialized]\n> > bool *inner_isnull;\n> > ^~~~~~~~~~~~\n> > partbounds.c:1304:12: warning: ‘outer_isnull’ may be used\n> > uninitialized in this function [-Wmaybe-uninitialized]\n> > bool *outer_isnull;\n> > ^~~~~~~~~~~~\n>\n> Fixed.\n>\n> > This function is unnecessarily complicated, I think you can avoid\n> > inner for loops; simply replace for-loop-block with \"if\n> > (equal(lfirst(cell), new_bound)) return true\".\n>\n> Thank you for the suggestion. Fixed.\n>\n> > + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n> > + Oid *coltype = palloc0(partnatts * sizeof(Oid));\n> > + int32 *coltypmod = palloc0(partnatts * sizeof(int));\n> > + Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n> > +\n> > This allocation seems to be worthless, read ahead.\n> >\n> > I think there is no need for this separate loop inside\n> > transformPartitionListBounds, you can do that same in the next loop as\n> > well. And instead of get_partition_col_* calling and storing, simply\n> > use that directly as an argument to transformPartitionBoundValue().\n>\n> Yes. The loop can be avoided and content of the above loop can be\n> included in the next loop but the next loop iterates over a list of\n> multi column datums. For each iteration, we need the information of\n> all the columns. The above data (colname, coltype, coltypmod and\n> partcollation) remains same for each iteration of the loop, If we\n> modify as suggested, then the function to fetch these information has\n> to be called every-time. 
To avoid this situation I have made a\n> separate loop outside which only runs as many number of columns and\n> stores in a variable which can be reused later. Please let me correct\n> if I am wrong.\n>\n\nOk, colname can be fetched in advance, but I don't think it's worth it\nto fetch coltype, coltypmod & partcollation and store them in\nexplicitly allocated memory; instead, you can directly call the\nget_partition_col_* inline functions.\n\n> > I think this should be inside the \"else\" block after \"!IsA(rowexpr,\n> > RowExpr)\" error and you can avoid IsA() check too.\n>\n> This is required to handle the situation when one partition key is\n> mentioned and multiple values are provided in the partition bound\n> specification.\n>\n> > Looks difficult to understand at first glance, how about the following:\n> >\n> > if (b1->isnulls != b2->isnulls)\n> > return false;\n> >\n> > if (b1->isnulls)\n> > {\n> > if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> > return false;\n> > if (b1->isnulls[i][j])\n> > continue;\n> > }\n> >\n> > See how range partitioning infinite values are handled. Also, place\n> > this before the comment block that was added for the \"!datumIsEqual()\"\n> > case.\n>\n> Fixed. I feel the 'continue' block is not required and hence removed it.\n>\n> > Nothing wrong with this but if we could have checked \"dest->isnulls\"\n> > instead of \"src->isnulls\" would be much better.\n>\n> Here we are copying the data from 'src' to 'dest'. If there is no data\n> in 'src', it is unnecessary to copy. Hence checking 'src'.\n>\n\nI am not sure how that makes a difference since you do allocate 'dest'\nbased on 'src'; anyway, I leave that choice to you.\n\n> > Condition \"key->strategy != PARTITION_STRATEGY_LIST\" seems to be unnecessary.\n>\n> Fixed.\n>\n> > Can't be a single loop?\n>\n> Yes.
Fixed.\n>\n\nThanks, will have a look.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 6 Dec 2021 20:22:04 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi Nitin,\n\nWas looking at warnings generated by v8:\n\npartbounds.c:971:17: warning: unused variable 'b1_isnull' [-Wunused-variable]\n bool b1_isnull = false;\n ^\npartbounds.c:972:17: warning: unused variable 'b2_isnull' [-Wunused-variable]\n bool b2_isnull = false;\n\nAnd it seems they've resulted from the above change:\n\nOn Mon, Dec 6, 2021 at 10:57 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > Looks difficult to understand at first glance, how about the following:\n> >\n> > if (b1->isnulls != b2->isnulls)\n> > return false;\n\nI don't think having this block is correct, because this says that two\nPartitionBoundInfos can't be \"logically\" equal unless their isnulls\npointers are the same, which is not the case unless they are\nphysically the same PartitionBoundInfo. What this means for its only\ncaller compute_partition_bounds() is that it now always needs to\nperform partition_bounds_merge() for a pair of list-partitioned\nrelations, even if they have exactly the same bounds.\n\nSo, I'd suggest removing the block.\n\n> > if (b1->isnulls)\n> > {\n> > if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> > return false;\n> > if (b1->isnulls[i][j])\n> > continue;\n> > }\n> >\n> > See how range partitioning infinite values are handled. Also, place\n> > this before the comment block that was added for the \"!datumIsEqual()\"\n> > case.\n>\n> Fixed. 
I feel the 'continue' block is not required and hence removed it.\n\nActually, you should've kept the continue block as Amul suggested and\nremove the \"else\" from the following:\n\n /* < the long comment snipped >*/\n else if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n parttypbyval[j], parttyplen[j]))\n return false;\n\nbecause with this, list bounds will never be passed to datumIsEqual()\nfor comparison, even if both are non-NULL.\n\nIOW, the block of code should look as follows, including the comments:\n\n /*\n * If the bound datums can be NULL, check that the datums on\n * both sides are either both NULL or not NULL.\n */\n if (b1->isnulls)\n {\n if (b1->isnulls[i][j] != b2->isnulls[i][j])\n return false;\n\n /* Must not pass NULL datums to datumIsEqual(). */\n if (b1->isnulls[i][j])\n continue;\n }\n\n /* < the long comment snipped >*/\n if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n parttypbyval[j], parttyplen[j]))\n return false;\n\nAlso, please remove the declarations of b1_isnull and b2_isnull to get\nrid of the warnings.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 14:54:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Thu, Dec 9, 2021 at 2:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Nitin,\n>\n> Was looking at warnings generated by v8:\n>\n> partbounds.c:971:17: warning: unused variable 'b1_isnull' [-Wunused-variable]\n> bool b1_isnull = false;\n> ^\n> partbounds.c:972:17: warning: unused variable 'b2_isnull' [-Wunused-variable]\n> bool b2_isnull = false;\n>\n> And it seems they've resulted from the above change:\n\nI meant: \"below change:\".\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 14:56:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column 
List Partitioning" }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n[....]\n> On Mon, Dec 6, 2021 at 10:57 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > > Looks difficult to understand at first glance, how about the following:\n> > >\n> > > if (b1->isnulls != b2->isnulls)\n> > > return false;\n>\n> I don't think having this block is correct, because this says that two\n> PartitionBoundInfos can't be \"logically\" equal unless their isnulls\n> pointers are the same, which is not the case unless they are\n> physically the same PartitionBoundInfo. What this means for its only\n> caller compute_partition_bounds() is that it now always needs to\n> perform partition_bounds_merge() for a pair of list-partitioned\n> relations, even if they have exactly the same bounds.\n>\n> So, I'd suggest removing the block.\n>\n\nAgreed, I too realized the same; the check is incorrect and have noted\nit for the next post. But note that, we need a kind of check here otherwise,\nhow could two bounds be equal if one has nulls and the other doesn't.\nAlso, we would have a segmentation fault in the next block while\naccessing b2->isnulls if that is null.\n\nI would suggest check like this:\n\nif ((b1->isnulls == NULL) != (b2->isnulls == NULL))\nreturn false;\n\nOR\n\nif ((b1->isnulls) ^ (b2->isnulls))\nreturn false;\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:41:19 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Thu, Dec 9, 2021 at 3:12 PM Amul Sul <sulamul@gmail.com> wrote:\n> On Thu, Dec 9, 2021 at 11:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> [....]\n> > On Mon, Dec 6, 2021 at 10:57 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > > > Looks difficult to understand at first glance, how about the following:\n> > > >\n> > > > if (b1->isnulls != b2->isnulls)\n> > > > return 
false;\n> >\n> > I don't think having this block is correct, because this says that two\n> > PartitionBoundInfos can't be \"logically\" equal unless their isnulls\n> > pointers are the same, which is not the case unless they are\n> > physically the same PartitionBoundInfo. What this means for its only\n> > caller compute_partition_bounds() is that it now always needs to\n> > perform partition_bounds_merge() for a pair of list-partitioned\n> > relations, even if they have exactly the same bounds.\n> >\n> > So, I'd suggest removing the block.\n> >\n>\n> Agreed, I too realized the same; the check is incorrect and have noted\n> it for the next post. But note that, we need a kind of check here otherwise,\n> how could two bounds be equal if one has nulls and the other doesn't.\n\nWe check partition strategy at the top and that ensures that isnulls\nfields should either be both NULL or not, same as the block above that\nchecks 'kind'. Maybe adding an Assert inside the block makes sense,\nlike this:\n\n /*\n * If the bound datums can be NULL, check that the datums on\n * both sides are either both NULL or not NULL.\n */\n if (b1->isnulls != NULL)\n {\n /*\n * Both bound collections have the same partition strategy,\n * so the other side must allow NULL datums as well.\n */\n Assert(b2->isnulls != NULL);\n\n if (b1->isnulls[i][j] != b2->isnulls[i][j])\n return false;\n\n /* Must not pass NULL datums to datumIsEqual(). 
*/\n if (b1->isnulls[i][j])\n continue;\n }\n\n /* < the long comment snipped >*/\n if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n parttypbyval[j], parttyplen[j]))\n return false;\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 15:32:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Thu, Dec 9, 2021 at 12:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 3:12 PM Amul Sul <sulamul@gmail.com> wrote:\n> > On Thu, Dec 9, 2021 at 11:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >\n> > [....]\n> > > On Mon, Dec 6, 2021 at 10:57 PM Nitin Jadhav\n> > > <nitinjadhavpostgres@gmail.com> wrote:\n> > > > > Looks difficult to understand at first glance, how about the following:\n> > > > >\n> > > > > if (b1->isnulls != b2->isnulls)\n> > > > > return false;\n> > >\n> > > I don't think having this block is correct, because this says that two\n> > > PartitionBoundInfos can't be \"logically\" equal unless their isnulls\n> > > pointers are the same, which is not the case unless they are\n> > > physically the same PartitionBoundInfo. What this means for its only\n> > > caller compute_partition_bounds() is that it now always needs to\n> > > perform partition_bounds_merge() for a pair of list-partitioned\n> > > relations, even if they have exactly the same bounds.\n> > >\n> > > So, I'd suggest removing the block.\n> > >\n> >\n> > Agreed, I too realized the same; the check is incorrect and have noted\n> > it for the next post. But note that, we need a kind of check here otherwise,\n> > how could two bounds be equal if one has nulls and the other doesn't.\n>\n> We check partition strategy at the top and that ensures that isnulls\n> fields should either be both NULL or not, same as the block above that\n> checks 'kind'. 
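A standalone model of the null-aware equality being converged on here may help; the types below are purely illustrative (ListBoundRow is not the real PartitionBoundInfo, and long datums stand in for Datum), but the control flow mirrors the snippet above — require the null flags to match, skip the datum comparison for a NULL slot, and only then compare datums:

```c
#include <stdbool.h>
#include <stddef.h>

/* One list-bound "row": natts datum slots plus a parallel null-flag
 * array, mirroring the proposed datums[][] / isnulls[][] pair. */
typedef struct ListBoundRow
{
    int         natts;
    long       *datums;         /* valid only where isnulls[] is false */
    bool       *isnulls;        /* may be NULL when no datum is NULL */
} ListBoundRow;

static bool
list_bound_rows_equal(const ListBoundRow *b1, const ListBoundRow *b2)
{
    if (b1->natts != b2->natts)
        return false;

    /* In a standalone model there is no shared-strategy guarantee, so
     * check explicitly that both sides carry null flags or neither does. */
    if ((b1->isnulls == NULL) != (b2->isnulls == NULL))
        return false;

    for (int j = 0; j < b1->natts; j++)
    {
        if (b1->isnulls != NULL)
        {
            if (b1->isnulls[j] != b2->isnulls[j])
                return false;
            if (b1->isnulls[j])
                continue;       /* never compare NULL datums */
        }
        if (b1->datums[j] != b2->datums[j])
            return false;
    }
    return true;
}
```

The `continue` is what keeps a garbage datum slot under a NULL flag from ever reaching the datum comparison.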
Maybe adding an Assert inside the block makes sense,\n> like this:\n>\n> /*\n> * If the bound datums can be NULL, check that the datums on\n> * both sides are either both NULL or not NULL.\n> */\n> if (b1->isnulls != NULL)\n> {\n> /*\n> * Both bound collections have the same partition strategy,\n> * so the other side must allow NULL datums as well.\n> */\n> Assert(b2->isnulls != NULL);\n>\n\nMake sense, thanks!\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 9 Dec 2021 12:43:19 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Thu, Dec 9, 2021 at 12:43 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 12:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Thu, Dec 9, 2021 at 3:12 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > On Thu, Dec 9, 2021 at 11:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > >\n> > > [....]\n> > > > On Mon, Dec 6, 2021 at 10:57 PM Nitin Jadhav\n> > > > <nitinjadhavpostgres@gmail.com> wrote:\n> > > > > > Looks difficult to understand at first glance, how about the following:\n> > > > > >\n> > > > > > if (b1->isnulls != b2->isnulls)\n> > > > > > return false;\n> > > >\n> > > > I don't think having this block is correct, because this says that two\n> > > > PartitionBoundInfos can't be \"logically\" equal unless their isnulls\n> > > > pointers are the same, which is not the case unless they are\n> > > > physically the same PartitionBoundInfo. What this means for its only\n> > > > caller compute_partition_bounds() is that it now always needs to\n> > > > perform partition_bounds_merge() for a pair of list-partitioned\n> > > > relations, even if they have exactly the same bounds.\n> > > >\n> > > > So, I'd suggest removing the block.\n> > > >\n> > >\n> > > Agreed, I too realized the same; the check is incorrect and have noted\n> > > it for the next post. 
But note that, we need a kind of check here otherwise,\n> > > how could two bounds be equal if one has nulls and the other doesn't.\n> >\n> > We check partition strategy at the top and that ensures that isnulls\n> > fields should either be both NULL or not, same as the block above that\n> > checks 'kind'. Maybe adding an Assert inside the block makes sense,\n> > like this:\n> >\n> > /*\n> > * If the bound datums can be NULL, check that the datums on\n> > * both sides are either both NULL or not NULL.\n> > */\n> > if (b1->isnulls != NULL)\n> > {\n> > /*\n> > * Both bound collections have the same partition strategy,\n> > * so the other side must allow NULL datums as well.\n> > */\n> > Assert(b2->isnulls != NULL);\n> >\n>\n> Make sense, thanks!\n>\n\nIn addition to Amit's suggestions, here are a few more:\n\n+ char **colname = (char **) palloc0(partnatts * sizeof(char *));\n+ Oid *coltype = palloc0(partnatts * sizeof(Oid));\n+ int32 *coltypmod = palloc0(partnatts * sizeof(int));\n+ Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n+\n\nNone of them really needed to be palloc0; also, as described\npreviously you can avoid the last three by using get_partition_col_*\ndirectly.\n---\n\n+ i = 0;\n+ foreach(cell2, rowexpr->args)\n+ {\n\nIt's up to you, rather than using a separate index variable and\nincrementing that at the end, I think we can use\nforeach_current_index(cell2) which would look much nicer.\n---\n\n+ all_values[j].values = (Datum *) palloc0(key->partnatts *\nsizeof(Datum));\n+ all_values[j].isnulls = (bool *) palloc0(key->partnatts *\nsizeof(bool));\n+ all_values[j].index = i;\n\npalloc0 is unnecessary for the \"values\".\n---\n\n dest->datums[i] = &boundDatums[i * natts];\n+ if (src->isnulls)\n+ dest->isnulls[i] = (bool *) palloc(sizeof(bool) * natts);\n\nI think you can allocate memory for isnulls the same way you do\nallocate boundDatums and just do the memcpy.\n---\n\n+ for (i = 0; i < partnatts; i++)\n+ {\n+ if (outer_isnull && 
outer_isnull[i])\n+ {\n+ outer_has_null = true;\n+ if (outer_map.merged_indexes[outer_index] == -1)\n+ consider_outer_null = true;\n+ }\n\nI am wondering why you are not breaking the loop once you set\nconsider_outer_null?\nNote that if you do that then you need a separate loop for the\ninner_isnull part.\n---\n\n@@ -1351,14 +1431,30 @@ merge_list_bounds(FmgrInfo *partsupfunc, Oid\n*partcollation,\n /* A list value missing from the inner side. */\n Assert(outer_pos < outer_bi->ndatums);\n\n- /*\n- * If the inner side has the default partition, or this is an\n- * outer join, try to assign a merged partition to the outer\n- * partition (see process_outer_partition()). Otherwise, the\n- * outer partition will not contribute to the result.\n- */\n- if (inner_has_default || IS_OUTER_JOIN(jointype))\n+ if (outer_has_null || inner_has_null)\n {\n+ if (consider_outer_null || consider_inner_null)\n+ {\n+ /* Merge the NULL partitions. */\n+ merged_index = merge_null_partitions(&outer_map, &inner_map,\n+ consider_outer_null,\n+ consider_inner_null,\n+ outer_index,\ninner_index,\n+ jointype, &next_index);\n+\n\nI have doubts about the condition that allows reaching\nmerge_null_partitions() but I am not sure I am correct. I think if the\nlist values missing from the __inner side__ then we might need to\ncheck only \"inner_has_null\" & \"consider_inner_null\" and merge the\nsame, but why is this code also checking \"outer_has_null\" &\n\"consider_outer_null\". 
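In isolation, the single combined loop discussed above — with an early exit once both sides are known to have a NULL — might look like the sketch below. The *_map_unmerged flags are stand-ins for the merged_indexes[...] == -1 tests, and this is only a model of the control flow, not the patch's code:

```c
#include <stdbool.h>
#include <stddef.h>

static void
scan_null_flags(const bool *outer_isnull, const bool *inner_isnull,
                int partnatts,
                bool outer_map_unmerged, bool inner_map_unmerged,
                bool *outer_has_null, bool *inner_has_null,
                bool *consider_outer_null, bool *consider_inner_null)
{
    *outer_has_null = *inner_has_null = false;
    *consider_outer_null = *consider_inner_null = false;

    for (int i = 0; i < partnatts; i++)
    {
        if (outer_isnull && outer_isnull[i])
        {
            *outer_has_null = true;
            if (outer_map_unmerged)
                *consider_outer_null = true;
        }
        if (inner_isnull && inner_isnull[i])
        {
            *inner_has_null = true;
            if (inner_map_unmerged)
                *consider_inner_null = true;
        }

        /* The consider_* flags follow deterministically from has_null
         * and the unmerged state, so once both sides are known to have
         * a NULL nothing further can change: stop scanning. */
        if (*outer_has_null && *inner_has_null)
            break;
    }
}
```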
Correct me if I am missing something.\n---\n\n+ if (isnulls && isnulls[i])\n+ cmpval = 0; /* NULL \"=\" NULL */\n+ else\n+ cmpval = 1; /* NULL \">\" not-NULL */\n+ }\n+ else if (isnulls && isnulls[i])\n+ cmpval = -1; /* not-NULL \"<\" NULL */\n\nI really doubt this assumption is correct; aren't those strict operators?\n---\n\n+get_list_partbound_value_string(List *bound_value)\n+{\n+ StringInfo buf = makeStringInfo();\n+ StringInfo boundconstraint = makeStringInfo();\n\nboundconstraint should be declared inside \"if (ncols > 1)\" block.\n---\n\n+ foreach(cell, bound_value)\n+ {\n+ Const *val = castNode(Const, lfirst(cell));\n+\n+ appendStringInfoString(buf, sep);\n+ get_const_expr(val, &context, -1);\n+ sep = \", \";\n+ ncols++;\n+ }\n\nI think no need to increment ncols every time, you have a list and you\ncan get that. Also, I think since you have ncols already, you can\nprepend and append parenthesis before and after so that you can avoid\nextra StringInfo.\n---\n\n typedef struct PartitionBoundInfoData\n {\n char strategy; /* hash, list or range? 
*/\n+ int partnatts; /* number of partition key columns */\n int ndatums; /* Length of the datums[] array */\n Datum **datums;\n+ bool **isnulls;\n\nAdding \"partnatts\" to this struct seems to be unnecessary, AFAIUC,\nadded that for partition_bound_accepts_nulls(), but we can easily get\nthat value from the partitioning key & pass an additional argument.\nAlso, no information about the length of the \"isnulls\" array.\n---\n\nI think it would be helpful if you could split the patch: one for\nmulti-value list partitioning and another for the partition wise join, thanks.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 9 Dec 2021 18:33:38 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi,\n\nIs this okay?\n\npostgres=# CREATE TABLE t1 (a int, b int) PARTITION BY LIST ( a, a, a );\nCREATE TABLE\n\npostgres=# CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN ((1, 2, 3), (4,\n5, 6));\nCREATE TABLE\n\npostgres=# \\d t1\n Partitioned table \"public.t1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nPartition key: LIST (a, a, a)\nNumber of partitions: 1 (Use \\d+ to list them.)\n\n--\n\nAlso, getting some compiler warnings when building the source. 
please check.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Mon, Dec 6, 2021 at 7:27 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> Thank you for reviewing the patch.\n>\n> > partbounds.c: In function ‘get_qual_for_list.isra.18’:\n> > partbounds.c:4284:29: warning: ‘boundinfo’ may be used uninitialized\n> > in this function [-Wmaybe-uninitialized]\n> > datumCopy(bound_info->datums[i][j],\n> > ~~~~~~~~~~^~~~~~~~\n> > partbounds.c:4335:21: note: ‘boundinfo’ was declared here\n> > PartitionBoundInfo boundinfo;\n> > ^~~~~~~~~\n> > partbounds.c: In function ‘partition_bounds_merge’:\n> > partbounds.c:1305:12: warning: ‘inner_isnull’ may be used\n> > uninitialized in this function [-Wmaybe-uninitialized]\n> > bool *inner_isnull;\n> > ^~~~~~~~~~~~\n> > partbounds.c:1304:12: warning: ‘outer_isnull’ may be used\n> > uninitialized in this function [-Wmaybe-uninitialized]\n> > bool *outer_isnull;\n> > ^~~~~~~~~~~~\n>\n> Fixed.\n>\n> > This function is unnecessarily complicated, I think you can avoid\n> > inner for loops; simply replace for-loop-block with \"if\n> > (equal(lfirst(cell), new_bound)) return true\".\n>\n> Thank you for the suggestion. Fixed.\n>\n> > + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n> > + Oid *coltype = palloc0(partnatts * sizeof(Oid));\n> > + int32 *coltypmod = palloc0(partnatts * sizeof(int));\n> > + Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n> > +\n> > This allocation seems to be worthless, read ahead.\n> >\n> > I think there is no need for this separate loop inside\n> > transformPartitionListBounds, you can do that same in the next loop as\n> > well. And instead of get_partition_col_* calling and storing, simply\n> > use that directly as an argument to transformPartitionBoundValue().\n>\n> Yes. The loop can be avoided and content of the above loop can be\n> included in the next loop but the next loop iterates over a list of\n> multi column datums. 
For each iteration, we need the information of\n> all the columns. The above data (colname, coltype, coltypmod and\n> partcollation) remains same for each iteration of the loop, If we\n> modify as suggested, then the function to fetch these information has\n> to be called every-time. To avoid this situation I have made a\n> separate loop outside which only runs as many number of columns and\n> stores in a variable which can be reused later. Please let me correct\n> if I am wrong.\n>\n> > I think this should be inside the \"else\" block after \"!IsA(rowexpr,\n> > RowExpr)\" error and you can avoid IsA() check too.\n>\n> This is required to handle the situation when one partition key is\n> mentioned and multiple values are provided in the partition bound\n> specification.\n>\n> > Looks difficult to understand at first glance, how about the following:\n> >\n> > if (b1->isnulls != b2->isnulls)\n> > return false;\n> >\n> > if (b1->isnulls)\n> > {\n> > if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> > return false;\n> > if (b1->isnulls[i][j])\n> > continue;\n> > }\n> >\n> > See how range partitioning infinite values are handled. Also, place\n> > this before the comment block that was added for the \"!datumIsEqual()\"\n> > case.\n>\n> Fixed. I feel the 'continue' block is not required and hence removed it.\n>\n> > Nothing wrong with this but if we could have checked \"dest->isnulls\"\n> > instead of \"src->isnulls\" would be much better.\n>\n> Here we are copying the data from 'src' to 'dest'. If there is no data\n> in 'src', it is unnecessary to copy. Hence checking 'src'.\n>\n> > Condition \"key->strategy != PARTITION_STRATEGY_LIST\" seems to be\n> unnecessary.\n>\n> Fixed.\n>\n> > Can't be a single loop?\n>\n> Yes. 
Fixed.\n>\n>\n>\n> On Fri, Dec 3, 2021 at 7:26 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Few comments for v7 patch, note that I haven't been through the\n> > previous discussion, if any of the review comments that has been\n> > already discussed & overridden, then please ignore here too:\n> >\n> >\n> > partbounds.c: In function ‘get_qual_for_list.isra.18’:\n> > partbounds.c:4284:29: warning: ‘boundinfo’ may be used uninitialized\n> > in this function [-Wmaybe-uninitialized]\n> > datumCopy(bound_info->datums[i][j],\n> > ~~~~~~~~~~^~~~~~~~\n> > partbounds.c:4335:21: note: ‘boundinfo’ was declared here\n> > PartitionBoundInfo boundinfo;\n> > ^~~~~~~~~\n> > partbounds.c: In function ‘partition_bounds_merge’:\n> > partbounds.c:1305:12: warning: ‘inner_isnull’ may be used\n> > uninitialized in this function [-Wmaybe-uninitialized]\n> > bool *inner_isnull;\n> > ^~~~~~~~~~~~\n> > partbounds.c:1304:12: warning: ‘outer_isnull’ may be used\n> > uninitialized in this function [-Wmaybe-uninitialized]\n> > bool *outer_isnull;\n> > ^~~~~~~~~~~~\n> >\n> > Got these warnings with gcc -O2 compilation.\n> > ----\n> >\n> > /*\n> > + * isListBoundDuplicated\n> > + *\n> > + * Returns TRUE if the list bound element 'new_bound' is already present\n> > + * in the target list 'list_bounds', FALSE otherwise.\n> > + */\n> > +static bool\n> > +isListBoundDuplicated(List *list_bounds, List *new_bound)\n> > +{\n> > + ListCell *cell = NULL;\n> > +\n> > + foreach(cell, list_bounds)\n> > + {\n> > + int i;\n> > + List *elem = lfirst(cell);\n> > + bool isDuplicate = true;\n> > +\n> > + Assert(list_length(elem) == list_length(new_bound));\n> > +\n> > + for (i = 0; i < list_length(elem); i++)\n> > + {\n> > + Const *value1 = castNode(Const, list_nth(elem, i));\n> > + Const *value2 = castNode(Const, list_nth(new_bound, i));\n> > +\n> > + if (!equal(value1, value2))\n> > + {\n> > + isDuplicate = false;\n> > + break;\n> > + }\n> > + }\n> > +\n> > + if (isDuplicate)\n> > + return 
true;\n> > + }\n> > +\n> > + return false;\n> > +}\n> >\n> > This function is unnecessarily complicated, I think you can avoid\n> > inner for loops; simply replace for-loop-block with \"if\n> > (equal(lfirst(cell), new_bound)) return true\".\n> > ----\n> >\n> > + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n> > + Oid *coltype = palloc0(partnatts * sizeof(Oid));\n> > + int32 *coltypmod = palloc0(partnatts * sizeof(int));\n> > + Oid *partcollation = palloc0(partnatts * sizeof(Oid));\n> > +\n> > This allocation seems to be worthless, read ahead.\n> > ----\n> >\n> > + for (i = 0; i < partnatts; i++)\n> > + {\n> > + if (key->partattrs[i] != 0)\n> > + colname[i] = get_attname(RelationGetRelid(parent),\n> > + key->partattrs[i], false);\n> > + else\n> > + {\n> > + colname[i] =\n> > + deparse_expression((Node *) list_nth(partexprs, j),\n> > + deparse_context_for(RelationGetRelationName(parent),\n> > + RelationGetRelid(parent)),\n> > + false, false);\n> > + ++j;\n> > + }\n> > +\n> > + coltype[i] = get_partition_col_typid(key, i);\n> > + coltypmod[i] = get_partition_col_typmod(key, i);\n> > + partcollation[i] = get_partition_col_collation(key, i);\n> > + }\n> >\n> > I think there is no need for this separate loop inside\n> > transformPartitionListBounds, you can do that same in the next loop as\n> > well. 
And instead of get_partition_col_* calling and storing, simply\n> > use that directly as an argument to transformPartitionBoundValue().\n> > ----\n> >\n> > +\n> > + if (IsA(expr, RowExpr) &&\n> > + partnatts != list_length(((RowExpr *) expr)->args))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> > + errmsg(\"Must specify exactly one value per partitioning column\"),\n> > + parser_errposition(pstate, exprLocation((Node *) spec))));\n> > +\n> >\n> > I think this should be inside the \"else\" block after \"!IsA(rowexpr,\n> > RowExpr)\" error and you can avoid IsA() check too.\n> > ----\n> >\n> > - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> > + if (b1->isnulls)\n> > + b1_isnull = b1->isnulls[i][j];\n> > + if (b2->isnulls)\n> > + b2_isnull = b2->isnulls[i][j];\n> > +\n> > + /*\n> > + * If any of the partition bound has NULL value, then\n> check\n> > + * equality for the NULL value instead of comparing the\n> datums\n> > + * as it does not contain valid value in case of NULL.\n> > + */\n> > + if (b1_isnull || b2_isnull)\n> > + {\n> > + if (b1_isnull != b2_isnull)\n> > + return false;\n> > + }\n> > + else if (!datumIsEqual(b1->datums[i][j],\n> b2->datums[i][j],\n> >\n> > Looks difficult to understand at first glance, how about the following:\n> >\n> > if (b1->isnulls != b2->isnulls)\n> > return false;\n> >\n> > if (b1->isnulls)\n> > {\n> > if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> > return false;\n> > if (b1->isnulls[i][j])\n> > continue;\n> > }\n> >\n> > See how range partitioning infinite values are handled. 
Also, place\n> > this before the comment block that was added for the \"!datumIsEqual()\"\n> > case.\n> > ----\n> >\n> > + if (src->isnulls)\n> > + dest->isnulls[i] = (bool *) palloc(sizeof(bool) * natts);\n> > ...\n> > + if (src->isnulls)\n> > + dest->isnulls[i][j] = src->isnulls[i][j];\n> > +\n> > Nothing wrong with this but if we could have checked \"dest->isnulls\"\n> > instead of \"src->isnulls\" would be much better.\n> > ----\n> >\n> > - if (dest->kind == NULL ||\n> > - dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE)\n> > + if ((dest->kind == NULL ||\n> > + dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE) &&\n> > + (key->strategy != PARTITION_STRATEGY_LIST ||\n> > + (src->isnulls == NULL || !src->isnulls[i][j])))\n> > dest->datums[i][j] = datumCopy(src->datums[i][j],\n> > byval, typlen);\n> > Condition \"key->strategy != PARTITION_STRATEGY_LIST\" seems to be\n> unnecessary.\n> > ----\n> >\n> > + for (i = 0; i < partnatts; i++)\n> > + {\n> > + if (outer_isnull[i])\n> > + {\n> > + outer_has_null = true;\n> > + if (outer_map.merged_indexes[outer_index] == -1)\n> > + consider_outer_null = true;\n> > + }\n> > + }\n> > +\n> > + for (i = 0; i < partnatts; i++)\n> > + {\n> > + if (inner_isnull[i])\n> > + {\n> > + inner_has_null = true;\n> > + if (inner_map.merged_indexes[inner_index] == -1)\n> > + consider_inner_null = true;\n> > + }\n> > + }\n> >\n> > Can't be a single loop?\n> > ----\n> >\n> > It would be helpful if you could run pgindent on your patch if not done\n> already.\n> > ----\n> >\n> > That's all for now, I am yet to finish the complete patch reading and\n> > understand the code flow, but I am out of time now.\n> >\n> > Regards,\n> > Amul\n>
\nHi,\n\nIs this okay?\n\npostgres=# CREATE TABLE t1 (a int, b int) PARTITION BY LIST ( a, a, a );\nCREATE TABLE\n\npostgres=# CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN ((1, 2, 3), (4, 5, 6));\nCREATE TABLE\n\npostgres=# \\d t1\n           Partitioned table \"public.t1\"\n Column |  Type   | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a      | integer |           |          |\n b      | integer |           |          |\nPartition key: LIST (a, a, a)\nNumber of partitions: 1 (Use \\d+ to list them.)\n\nAlso, getting some compiler warnings when building the source. Please check.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Mon, 13 Dec 2021 20:06:52 +0530", "msg_from": 
"Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi,\n\nOn Mon, Dec 13, 2021 at 11:37 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi,\n>\n> Is this okay?\n>\n> postgres=# CREATE TABLE t1 (a int, b int) PARTITION BY LIST ( a, a, a );\n> CREATE TABLE\n>\n> postgres=# CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN ((1, 2, 3), (4, 5, 6));\n> CREATE TABLE\n>\n> postgres=# \\d t1\n> Partitioned table \"public.t1\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> a | integer | | |\n> b | integer | | |\n> Partition key: LIST (a, a, a)\n> Number of partitions: 1 (Use \\d+ to list them.)\n\nI'd say it's not okay for a user to expect this to work sensibly, and\nI don't think it would be worthwhile to write code to point that out\nto the user if that is what you were implying.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 22:34:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Mon, Dec 20, 2021 at 7:04 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Mon, Dec 13, 2021 at 11:37 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Hi,\n> >\n> > Is this okay?\n> >\n> > postgres=# CREATE TABLE t1 (a int, b int) PARTITION BY LIST ( a, a, a );\n> > CREATE TABLE\n> >\n> > postgres=# CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN ((1, 2, 3),\n> (4, 5, 6));\n> > CREATE TABLE\n> >\n> > postgres=# \\d t1\n> > Partitioned table \"public.t1\"\n> > Column | Type | Collation | Nullable | Default\n> > --------+---------+-----------+----------+---------\n> > a | integer | | |\n> > b | integer | | |\n> > Partition key: LIST (a, a, a)\n> > Number of partitions: 1 (Use \\d+ to list them.)\n>\n> I'd say it's not okay for a user to expect this to work 
sensibly, and\n> I don't think it would be worthwhile to write code to point that out\n> to the user if that is what you were implying.\n>\n\nOK. As you wish.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Tue, 21 Dec 2021 11:16:51 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Tue, Dec 21, 2021 at 2:47 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> On Mon, Dec 20, 2021 at 7:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Mon, Dec 13, 2021 at 11:37 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > Is this okay?\n>> >\n>> > postgres=# CREATE TABLE t1 (a int, b int) PARTITION BY LIST ( a, a, a );\n>> > CREATE TABLE\n>> >\n>> > postgres=# CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN ((1, 2, 3), (4, 5, 6));\n>> > CREATE TABLE\n>> >\n>> > postgres=# \\d t1\n>> >            Partitioned table \"public.t1\"\n>> >  Column |  Type   | Collation | Nullable | Default\n>> > --------+---------+-----------+----------+---------\n>> >  a      | integer |           |          |\n>> >  b      | integer |           |          |\n>> > Partition key: LIST (a, a, a)\n>> > Number of partitions: 1 (Use \\d+ to list them.)\n>>\n>> I'd say it's not okay for a user to expect this to work sensibly, and\n>> I don't think it would be worthwhile to write code to point that out\n>> to the user if that is what you were implying.\n>\n> OK. As you wish.\n\nActually, we *do* have some code in check_new_partition_bound() to\npoint it out if an empty range is specified for a partition, something\nthat one (or a DDL script) may accidentally do:\n\n /*\n * First check if the resulting range would be empty with\n * specified lower and upper bounds...\n */\n cmpval = partition_rbound_cmp(key->partnatts,\n key->partsupfunc,\n key->partcollation,\n lower->datums, lower->kind,\n true, upper);\n Assert(cmpval != 0);\n if (cmpval > 0)\n {\n /* Point to problematic key in the lower datums list. 
*/\n PartitionRangeDatum *datum = list_nth(spec->lowerdatums,\n cmpval - 1);\n\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n errmsg(\"empty range bound specified for\npartition \\\"%s\\\"\",\n relname),\n errdetail(\"Specified lower bound %s is\ngreater than or equal to upper bound %s.\",\n\nget_range_partbound_string(spec->lowerdatums),\n\nget_range_partbound_string(spec->upperdatums)),\n parser_errposition(pstate, datum->location)));\n }\n\nSo one may wonder why we don't catch and point out more such user\nmistakes, like the one in your example. It may not be hard to\nimplement a proof that the partition bound definition a user entered\nresults in a self-contradictory partition constraint using the\nfacilities given in predtest.c. (The empty-range proof seemed simple\nenough to implement as the above block of code.) I don't however see\nwhy we should do that for partition constraints if we don't do the\nsame for CHECK constraints; for example, the following definition,\nwhile allowed, is not very useful:\n\ncreate table foo (a int check (a = 1 and a = 2));\n\\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nCheck constraints:\n \"foo_a_check\" CHECK (a = 1 AND a = 2)\n\nMaybe partitioning should be looked at differently than the free-form\nCHECK constraints, but I'm not so sure. 
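
To make the empty-range check described above concrete, here is a minimal standalone sketch of the idea (plain C, with int keys standing in for Datums and for the partition_rbound_cmp() machinery; a simplified illustration under those assumptions, not the PostgreSQL code itself):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical stand-in for partition_rbound_cmp(): compare two
 * multi-column bounds lexicographically; the first differing column
 * decides.  Returns <0, 0 or >0.
 */
static int
bound_cmp(const int *b1, const int *b2, int natts)
{
	int			i;

	for (i = 0; i < natts; i++)
	{
		if (b1[i] != b2[i])
			return (b1[i] > b2[i]) ? 1 : -1;
	}
	return 0;
}

/*
 * A range is empty when its lower bound sorts at or above its upper
 * bound; in this simplified model equal bounds also give an empty range,
 * since the lower bound is inclusive and the upper bound exclusive.
 */
static bool
range_is_empty(const int *lower, const int *upper, int natts)
{
	return bound_cmp(lower, upper, natts) >= 0;
}
```

In the real code the comparison additionally has to account for MINVALUE/MAXVALUE bound kinds, which is why partition_rbound_cmp() takes the kind arrays as well.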
Or if others insist that it\nmay be worthwhile to improve the user experience in such cases, we\ncould do that as a separate patch than the patch to implement\nmulti-column list partitioning.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Dec 2021 21:59:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks for reviewing.\n\n> Ok, colname can be fetched in advance but I don't think it worth it to\n> fetch coltype, coltypmod & partcollation; and, store in the\n> explicitly allocated memory, instead, you can directly call\n> get_partition_col_* inline functions.\n\nFixed.\n---\n\n> /*\n> * If the bound datums can be NULL, check that the datums on\n> * both sides are either both NULL or not NULL.\n> */\n> if (b1->isnulls)\n> {\n> if (b1->isnulls[i][j] != b2->isnulls[i][j])\n> return false;\n>\n> /* Must not pass NULL datums to datumIsEqual(). */\n> if (b1->isnulls[i][j])\n> continue;\n> }\n>\n> /* < the long comment snipped >*/\n> if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j],\n> parttypbyval[j], parttyplen[j]))\n> return false;\n\nMake sense. Fixed as per the suggestion.\n---\n\n> + i = 0;\n> + foreach(cell2, rowexpr->args)\n> + {\n>\n> It's up to you, rather than using a separate index variable and\n> incrementing that at the end, I think we can use\n> foreach_current_index(cell2) which would look much nicer.\n\nThanks for the suggestion. I have removed the increment operation and\nretained the index variable with a call to foreach_current_index()\nsince the index variable is required in 3 places. 
It looks better than\nbefore.\n---\n\n> + all_values[j].values = (Datum *) palloc0(key->partnatts *\n> sizeof(Datum));\n> + all_values[j].isnulls = (bool *) palloc0(key->partnatts *\n> sizeof(bool));\n> + all_values[j].index = i;\n>\n> palloc0 is unnecessary for the \"values\".\n\nFixed.\n---\n\n> dest->datums[i] = &boundDatums[i * natts];\n> + if (src->isnulls)\n> + dest->isnulls[i] = (bool *) palloc(sizeof(bool) * natts);\n>\n> I think you can allocate memory for isnulls the same way you do\n> allocate boundDatums and just do the memcpy.\n\nFixed.\n---\n\n> + for (i = 0; i < partnatts; i++)\n> + {\n> + if (outer_isnull && outer_isnull[i])\n> + {\n> + outer_has_null = true;\n> + if (outer_map.merged_indexes[outer_index] == -1)\n> + consider_outer_null = true;\n> + }\n>\n> I am wondering why you are not breaking the loop once you set\n> consider_outer_null?\n> Note that if you do that then you need a separate loop for the\n> inner_isnull part.\n\nRight. Fixed.\n---\n\n> I have doubts about the condition that allows reaching\n> merge_null_partitions() but I am not sure I am correct. I think if the\n> list values missing from the __inner side__ then we might need to\n> check only \"inner_has_null\" & \"consider_inner_null\" and merge the\n> same, but why is this code also checking \"outer_has_null\" &\n> \"consider_outer_null\". Correct me if I am missing something.\n\nYou are correct. These conditions are not required. Fixed.\n---\n\n> + if (isnulls && isnulls[i])\n> + cmpval = 0; /* NULL \"=\" NULL */\n> + else\n> + cmpval = 1; /* NULL \">\" not-NULL */\n> + }\n> + else if (isnulls && isnulls[i])\n> + cmpval = -1; /* not-NULL \"<\" NULL */\n>\n> I really doubt this assumption is correct; aren't those strict operators?\n\nNow there are possibilities of multiple NULL values. We should have a\nmechanism to sort it when the bound values contain Non NULL and NULL\nvalues. 
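
For illustration, a tiny self-contained sketch of that ordering convention (hypothetical value/isnull pairs in place of Datum arrays; it only illustrates the convention being discussed, it is not the patch itself):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical single-column comparator following the convention quoted
 * above: NULL compares equal to NULL and sorts after any not-NULL value,
 * so NULL values end up at the end of the sorted bounds.
 */
static int
list_value_cmp(int v1, bool isnull1, int v2, bool isnull2)
{
	if (isnull1)
		return isnull2 ? 0 : 1;	/* NULL equals NULL, NULL after not-NULL */
	if (isnull2)
		return -1;				/* not-NULL before NULL */
	return (v1 > v2) - (v1 < v2);
}
```

With such a comparator, sorting the bound values naturally groups the NULL slots at the end, which is the behavior described above once a column can hold multiple NULL bound values.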
As per the above logic we put the NULL values at the end.\nPlease let me know if I am wrong.\n---\n\n> +get_list_partbound_value_string(List *bound_value)\n> +{\n> + StringInfo buf = makeStringInfo();\n> + StringInfo boundconstraint = makeStringInfo();\n>\n> boundconstraint should be declared inside \"if (ncols > 1)\" block.\n\nFixed.\n---\n\n> + foreach(cell, bound_value)\n> + {\n> + Const *val = castNode(Const, lfirst(cell));\n> +\n> + appendStringInfoString(buf, sep);\n> + get_const_expr(val, &context, -1);\n> + sep = \", \";\n> + ncols++;\n> + }\n>\n> I think no need to increment ncols every time, you have a list and you\n> can get that. Also, I think since you have ncols already, you can\n> prepend and append parenthesis before and after so that you can avoid\n> extra StringInfo.\n\nFixed.\n---\n\n> typedef struct PartitionBoundInfoData\n> {\n> char strategy; /* hash, list or range? */\n> + int partnatts; /* number of partition key columns */\n> int ndatums; /* Length of the datums[] array */\n> Datum **datums;\n> + bool **isnulls;\n>\n> Adding \"partnatts\" to this struct seems to be unnecessary, AFAIUC,\n> added that for partition_bound_accepts_nulls(), but we can easily get\n> that value from the partitioning key & pass an additional argument.\n> Also, no information about the length of the \"isnulls\" array.\n\nThis is required during merge_list_bounds(). AFAIK partition key\ninformation is not available here.\n\n> I think it would be helpful if you could split the patch: one for\n> multi-value list partitioning and another for the partition wise join, thanks.\n\nI have split the patch into 2 patches. One is for the multi column\nlist partitioning core changes and the other is for partition-wise\njoin support. 
Each patch has its respective test cases in the\nregression suite and regression tests run successfully on each patch.\nKindly let me know if any other changes are required here.\n\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Dec 21, 2021 at 6:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Dec 21, 2021 at 2:47 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > On Mon, Dec 20, 2021 at 7:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> On Mon, Dec 13, 2021 at 11:37 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >> >\n> >> > Hi,\n> >> >\n> >> > Is this okay?\n> >> >\n> >> > postgres=# CREATE TABLE t1 (a int, b int) PARTITION BY LIST ( a, a, a );\n> >> > CREATE TABLE\n> >> >\n> >> > postgres=# CREATE TABLE t1_1 PARTITION OF t1 FOR VALUES IN ((1, 2, 3), (4, 5, 6));\n> >> > CREATE TABLE\n> >> >\n> >> > postgres=# \\d t1\n> >> > Partitioned table \"public.t1\"\n> >> > Column | Type | Collation | Nullable | Default\n> >> > --------+---------+-----------+----------+---------\n> >> > a | integer | | |\n> >> > b | integer | | |\n> >> > Partition key: LIST (a, a, a)\n> >> > Number of partitions: 1 (Use \\d+ to list them.)\n> >>\n> >> I'd say it's not okay for a user to expect this to work sensibly, and\n> >> I don't think it would be worthwhile to write code to point that out\n> >> to the user if that is what you were implying.\n> >\n> > OK. As you wish.\n>\n> Actually, we *do* have some code in check_new_partition_bound() to\n> point it out if an empty range is specified for a partition, something\n> that one (or a DDL script) may accidentally do:\n>\n> /*\n> * First check if the resulting range would be empty with\n> * specified lower and upper bounds...\n> */\n> cmpval = partition_rbound_cmp(key->partnatts,\n> key->partsupfunc,\n> key->partcollation,\n> lower->datums, lower->kind,\n> true, upper);\n> Assert(cmpval != 0);\n> if (cmpval > 0)\n> {\n> /* Point to problematic key in the lower datums list. 
*/\n> PartitionRangeDatum *datum = list_nth(spec->lowerdatums,\n> cmpval - 1);\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> errmsg(\"empty range bound specified for\n> partition \\\"%s\\\"\",\n> relname),\n> errdetail(\"Specified lower bound %s is\n> greater than or equal to upper bound %s.\",\n>\n> get_range_partbound_string(spec->lowerdatums),\n>\n> get_range_partbound_string(spec->upperdatums)),\n> parser_errposition(pstate, datum->location)));\n> }\n>\n> So one may wonder why we don't catch and point out more such user\n> mistakes, like the one in your example. It may not be hard to\n> implement a proof that the partition bound definition a user entered\n> results in a self-contradictory partition constraint using the\n> facilities given in predtest.c. (The empty-range proof seemed simple\n> enough to implement as the above block of code.) I don't however see\n> why we should do that for partition constraints if we don't do the\n> same for CHECK constraints; for example, the following definition,\n> while allowed, is not very useful:\n>\n> create table foo (a int check (a = 1 and a = 2));\n> \\d foo\n> Table \"public.foo\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> a | integer | | |\n> Check constraints:\n> \"foo_a_check\" CHECK (a = 1 AND a = 2)\n>\n> Maybe partitioning should be looked at differently than the free-form\n> CHECK constraints, but I'm not so sure. 
Or if others insist that it\n> may be worthwhile to improve the user experience in such cases, we\n> could do that as a separate patch than the patch to implement\n> multi-column list partitioning.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com", "msg_date": "Tue, 21 Dec 2021 18:36:33 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Tue, Dec 21, 2021 at 6:34 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> ---\n>\n> > + if (isnulls && isnulls[i])\n> > + cmpval = 0; /* NULL \"=\" NULL */\n> > + else\n> > + cmpval = 1; /* NULL \">\" not-NULL */\n> > + }\n> > + else if (isnulls && isnulls[i])\n> > + cmpval = -1; /* not-NULL \"<\" NULL */\n> >\n> > I really doubt this assumption is correct; aren't those strict operators?\n>\n> Now there are possibilities of multiple NULL values. We should have a\n> mechanism to sort it when the bound values contain Non NULL and NULL\n> values. As per the above logic we put the NULL values at the end.\n> Please let me know if I am wrong.\n\nOk, but I am not sure about the comparison approach, let's see what\nothers think.\n\n> ---\n[...]\n>\n> > typedef struct PartitionBoundInfoData\n> > {\n> > char strategy; /* hash, list or range? */\n> > + int partnatts; /* number of partition key columns */\n> > int ndatums; /* Length of the datums[] array */\n> > Datum **datums;\n> > + bool **isnulls;\n> >\n> > Adding \"partnatts\" to this struct seems to be unnecessary, AFAIUC,\n> > added that for partition_bound_accepts_nulls(), but we can easily get\n> > that value from the partitioning key & pass an additional argument.\n> > Also, no information about the length of the \"isnulls\" array.\n>\n> This is required during merge_list_bounds(). 
AFAIK partition key\n> information is not available here.\n>\n\nYou can get that as an argument, see merge_range_bounds().\n\n> > I think it would be helpful if you could split the patch: one for\n> > multi-value list partitioning and another for the partition wise join, thanks.\n>\n> I have split the patch into 2 patches. One is for the multi column\n> list partitioning core changes and the other is for partition-wise\n> join support. Each patch has its respective test cases in the\n> regression suit and regression tests run successfully on each patch.\n> Kindly let me know if any other changes are required here.\n>\n\nThanks for the split, that is much helpful, I have a few comments for\nthe 0001 patch as follows:\n\n+ char **colname = (char **) palloc0(partnatts * sizeof(char *));\n\npalloc0 is unnecessary.\n---\n\n+ foreach(cell2, rowexpr->args)\n+ {\n+ int idx = foreach_current_index(cell2);\n+ Node *expr = lfirst(cell2);\n+ Const *val =\n+ transformPartitionBoundValue(pstate, expr, colname[i],\n+ get_partition_col_typid(key, idx),\n+ get_partition_col_typmod(key, idx),\n+ get_partition_col_collation(key, idx));\n+\n+ values = lappend(values, val);\n+ }\n\nArray index for colname should be \"idx\".\n---\n\n result->scan_default = partition_bound_has_default(boundinfo);\n+\n return result;\n...\n\n /* Always include the default partition if any. */\n result->scan_default = partition_bound_has_default(boundinfo);\n-\n return result;\n\n...\n else\n result->scan_default = partition_bound_has_default(boundinfo);\n+\n return result;\n...\n\n- /* Add columns specified to SET NULL or SET DEFAULT if\nprovided. 
*/\n+ /*\n+ * Add columns specified to SET NULL or SET DEFAULT if\n+ * provided.\n+ */\n\nspurious change -- look like something not related to your patch.\n--\n\n- * For range partitioning, we must only perform pruning with values\n- * for either all partition keys or a prefix thereof.\n+ * For range partitioning and list partitioning, we must only perform\n+ * pruning with values for either all partition keys or a prefix\n+ * thereof.\n */\n- if (keyno > nvalues && context->strategy == PARTITION_STRATEGY_RANGE)\n+ if (keyno > nvalues && (context->strategy == PARTITION_STRATEGY_RANGE ||\n+ context->strategy == PARTITION_STRATEGY_LIST))\n break;\n\nI think this is not true for multi-value list partitions, we might\nstill want prune partitions for e.g. (100, IS NULL, 20). Correct me\nif I am missing something here.\n---\n\n /*\n- * For range partitioning, if we have no clauses for the current key,\n- * we can't consider any later keys either, so we can stop here.\n+ * For range partitioning and list partitioning, if we have no clauses\n+ * for the current key, we can't consider any later keys either, so we\n+ * can stop here.\n */\n- if (part_scheme->strategy == PARTITION_STRATEGY_RANGE &&\n+ if ((part_scheme->strategy == PARTITION_STRATEGY_RANGE ||\n+ part_scheme->strategy == PARTITION_STRATEGY_LIST) &&\n clauselist == NIL)\n break\n\nSimilarly, why would this be true for list partitioning? How can we\nprune partitions if values is for e.g. 
(100, <not given>, 20).\n--\n\n- if (bms_is_member(keyno, opstep->nullkeys))\n+ if (bms_is_member(keyno, opstep->nullkeys) &&\n+ context->strategy != PARTITION_STRATEGY_LIST)\n continue;\nWill that prune for all NULL partitioning key values?\n---\n\n+ appendStringInfoString\n+ (buf,\nget_list_partbound_value_string(lfirst(cell)));\n\nFormatting is not quite right.\n--\n\n+/*\n+ * get_min_and_max_offset\n+ *\n+ * Fetches the minimum and maximum offset of the matching partitions.\n+ */\n\n...\n\n+/*\n+ * get_min_or_max_off\n+ *\n+ * Fetches either minimum or maximum offset of the matching partitions\n+ * depending on the value of is_min parameter.\n+ */\n\nI am not sure we really have to have separate functions but if needed\nthen I would prefer to have a separate function for each min and max\nrather than combining.\n---\n\n+ if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n+ {\n+ *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n+ return PARTCLAUSE_MATCH_NULLNESS;\n+ }\n+\n+ expr = makeConst(UNKNOWNOID, -1, InvalidOid, -2, (Datum) 0,\ntrue, false);\n+ partclause = (PartClauseInfo *) palloc(sizeof(PartClauseInfo));\n+\n+ partclause->keyno = partkeyidx;\n+ partclause->expr = (Expr *) expr;\n+ partclause->is_null = true;\n+\n+ if (nulltest->nulltesttype == IS_NOT_NULL)\n+ {\n+ partclause->op_is_ne = true;\n+ partclause->op_strategy = InvalidStrategy;\n+ }\n+ else\n+ {\n+ partclause->op_is_ne = false;\n+ partclause->op_strategy = BTEqualStrategyNumber;\n+ }\n\n- return PARTCLAUSE_MATCH_NULLNESS;\n+ *pc = partclause;\n+ return PARTCLAUSE_MATCH_CLAUSE;\n\nI still believe considering NULL value for match clause is not a\nfundamentally correct thing. 
And that is only for List partitioning\nwhich isn't aligned with the other partitioning.\n---\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 23 Dec 2021 18:32:44 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Thanks for reviewing.\n\n> > > Adding \"partnatts\" to this struct seems to be unnecessary, AFAIUC,\n> > > added that for partition_bound_accepts_nulls(), but we can easily get\n> > > that value from the partitioning key & pass an additional argument.\n> > > Also, no information about the length of the \"isnulls\" array.\n> >\n> > This is required during merge_list_bounds(). AFAIK partition key\n> > information is not available here.\n> >\n>\n> You can get that as an argument, see merge_range_bounds().\n\nFixed.\n---\n\n> + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n>\n> palloc0 is unnecessary.\n\nFixed.\n---\n\n> + foreach(cell2, rowexpr->args)\n> + {\n> + int idx = foreach_current_index(cell2);\n> + Node *expr = lfirst(cell2);\n> + Const *val =\n> + transformPartitionBoundValue(pstate, expr, colname[i],\n> + get_partition_col_typid(key, idx),\n> + get_partition_col_typmod(key, idx),\n> + get_partition_col_collation(key, idx));\n> +\n> + values = lappend(values, val);\n> + }\n>\n> Array index for colname should be \"idx\".\n\nFixed.\n---\n\n> result->scan_default = partition_bound_has_default(boundinfo);\n> +\n> return result;\n> ...\n>\n> /* Always include the default partition if any. */\n> result->scan_default = partition_bound_has_default(boundinfo);\n> -\n> return result;\n>\n> ...\n> else\n> result->scan_default = partition_bound_has_default(boundinfo);\n> +\n> return result;\n> ...\n>\n> - /* Add columns specified to SET NULL or SET DEFAULT if\n> provided. 
*/\n> + /*\n> + * Add columns specified to SET NULL or SET DEFAULT if\n> + * provided.\n> + */\n>\n> spurious change -- look like something not related to your patch.\n\nFixed.\n---\n\n> - * For range partitioning, we must only perform pruning with values\n> - * for either all partition keys or a prefix thereof.\n> + * For range partitioning and list partitioning, we must only perform\n> + * pruning with values for either all partition keys or a prefix\n> + * thereof.\n> */\n> - if (keyno > nvalues && context->strategy == PARTITION_STRATEGY_RANGE)\n> + if (keyno > nvalues && (context->strategy == PARTITION_STRATEGY_RANGE ||\n> + context->strategy == PARTITION_STRATEGY_LIST))\n> break;\n>\n> I think this is not true for multi-value list partitions, we might\n> still want prune partitions for e.g. (100, IS NULL, 20). Correct me\n> if I am missing something here.\n\nAFAIK, the above condition/comments says that, either we should\ninclude all keys or prefixes of the partition keys to get the\npartition pruning results. For example if we have a table with 2\ncolumns and both are present in the partition key. 
Let the column\nnames be 'a' and 'b'.\n\nSELECT * FROM table WHERE a=1 AND b=1; - This query works for pruning\nand it refers to a comment which says all partition keys are included.\nSELECT * FROM table WHERE b=1; - Here partition pruning does not work\nas it does not contain prefix of the partition keys.\nSELECT * FROM table WHERE a=1; - This query works fine as column 'a'\nis prefix of partition keys.\n\nPlease let me know if you need more information.\n---\n\n> - * For range partitioning, if we have no clauses for the current key,\n> - * we can't consider any later keys either, so we can stop here.\n> + * For range partitioning and list partitioning, if we have no clauses\n> + * for the current key, we can't consider any later keys either, so we\n> + * can stop here.\n> */\n> - if (part_scheme->strategy == PARTITION_STRATEGY_RANGE &&\n> + if ((part_scheme->strategy == PARTITION_STRATEGY_RANGE ||\n> + part_scheme->strategy == PARTITION_STRATEGY_LIST) &&\n> clauselist == NIL)\n> break\n>\n> Similarly, why would this be true for list partitioning? How can we\n> prune partitions if values is for e.g. (100, <not given>, 20).\n\nThe above description holds good for this also. As per the current\ndesign, partition pruning is not applicable for the above example.\nKindly confirm whether we should support such scenarios.\n---\n\n> - if (bms_is_member(keyno, opstep->nullkeys))\n> + if (bms_is_member(keyno, opstep->nullkeys) &&\n> + context->strategy != PARTITION_STRATEGY_LIST)\n> continue;\n> Will that prune for all NULL partitioning key values?\n\nYes. 
This allows pruning with NULL values for list partitioning.\n---\n\n> + appendStringInfoString\n> + (buf,\n> get_list_partbound_value_string(lfirst(cell)));\n>\n> Formatting is not quite right.\n\nFixed.\n---\n\n> +/*\n> + * get_min_and_max_offset\n> + *\n> + * Fetches the minimum and maximum offset of the matching partitions.\n> + */\n>\n> ...\n>\n> +/*\n> + * get_min_or_max_off\n> + *\n> + * Fetches either minimum or maximum offset of the matching partitions\n> + * depending on the value of is_min parameter.\n> + */\n>\n> I am not sure we really have to have separate functions but if needed\n> then I would prefer to have a separate function for each min and max\n> rather than combining.\n\nIf we don't make a separate function, then we have to include this\ncode in get_matching_list_bounds() which is already a big function. I\njust made a separate function to not increase the complexity of\nget_matching_list_bounds() and most of the code present in\nget_min_or_max_off() is common for min and max calculation. If we make\nit separate then there might be a lot of duplications. 
Please let me\nknow if you still feel if any action is required.\n---\n\n> + if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n> + {\n> + *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n> + return PARTCLAUSE_MATCH_NULLNESS;\n> + }\n> +\n> + expr = makeConst(UNKNOWNOID, -1, InvalidOid, -2, (Datum) 0,\n> true, false);\n> + partclause = (PartClauseInfo *) palloc(sizeof(PartClauseInfo));\n> +\n> + partclause->keyno = partkeyidx;\n> + partclause->expr = (Expr *) expr;\n> + partclause->is_null = true;\n> +\n> + if (nulltest->nulltesttype == IS_NOT_NULL)\n> + {\n> + partclause->op_is_ne = true;\n> + partclause->op_strategy = InvalidStrategy;\n> + }\n> + else\n> + {\n> + partclause->op_is_ne = false;\n> + partclause->op_strategy = BTEqualStrategyNumber;\n> + }\n>\n> - return PARTCLAUSE_MATCH_NULLNESS;\n> + *pc = partclause;\n> + return PARTCLAUSE_MATCH_CLAUSE;\n>\n> I still believe considering NULL value for match clause is not a\n> fundamentally correct thing. And that is only for List partitioning\n> which isn't aligned with the other partitioning.\n\nAs other partitions which support multiple partition keys (Range\npartitioning) do not support NULL values. This feature supports\nmultiple partition keys with list partitioning and it also supports\nNULL values. With the existing design, I have tried to support this\nfeature with minimal changes as possible. If this is not the right\napproach to support NULL values, I would like to know how we can\nsupport multiple NULL values. 
Kindly provide more information.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\n\nOn Thu, Dec 23, 2021 at 6:33 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, Dec 21, 2021 at 6:34 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > ---\n> >\n> > > + if (isnulls && isnulls[i])\n> > > + cmpval = 0; /* NULL \"=\" NULL */\n> > > + else\n> > > + cmpval = 1; /* NULL \">\" not-NULL */\n> > > + }\n> > > + else if (isnulls && isnulls[i])\n> > > + cmpval = -1; /* not-NULL \"<\" NULL */\n> > >\n> > > I really doubt this assumption is correct; aren't those strict operators?\n> >\n> > Now there are possibilities of multiple NULL values. We should have a\n> > mechanism to sort it when the bound values contain Non NULL and NULL\n> > values. As per the above logic we put the NULL values at the end.\n> > Please let me know if I am wrong.\n>\n> Ok, but I am not sure about the comparison approach, let's see what\n> others think.\n>\n> > ---\n> [...]\n> >\n> > > typedef struct PartitionBoundInfoData\n> > > {\n> > > char strategy; /* hash, list or range? */\n> > > + int partnatts; /* number of partition key columns */\n> > > int ndatums; /* Length of the datums[] array */\n> > > Datum **datums;\n> > > + bool **isnulls;\n> > >\n> > > Adding \"partnatts\" to this struct seems to be unnecessary, AFAIUC,\n> > > added that for partition_bound_accepts_nulls(), but we can easily get\n> > > that value from the partitioning key & pass an additional argument.\n> > > Also, no information about the length of the \"isnulls\" array.\n> >\n> > This is required during merge_list_bounds(). AFAIK partition key\n> > information is not available here.\n> >\n>\n> You can get that as an argument, see merge_range_bounds().\n>\n> > > I think it would be helpful if you could split the patch: one for\n> > > multi-value list partitioning and another for the partition wise join, thanks.\n> >\n> > I have split the patch into 2 patches. 
One is for the multi column\n> > list partitioning core changes and the other is for partition-wise\n> > join support. Each patch has its respective test cases in the\n> > regression suit and regression tests run successfully on each patch.\n> > Kindly let me know if any other changes are required here.\n> >\n>\n> Thanks, for the slit that is much helpful, I have a few comments for\n> the 0001 patch as follow:\n>\n> + char **colname = (char **) palloc0(partnatts * sizeof(char *));\n>\n> palloc0 is unnecessary.\n> ---\n>\n> + foreach(cell2, rowexpr->args)\n> + {\n> + int idx = foreach_current_index(cell2);\n> + Node *expr = lfirst(cell2);\n> + Const *val =\n> + transformPartitionBoundValue(pstate, expr, colname[i],\n> + get_partition_col_typid(key, idx),\n> + get_partition_col_typmod(key, idx),\n> + get_partition_col_collation(key, idx));\n> +\n> + values = lappend(values, val);\n> + }\n>\n> Array index for colname should be \"idx\".\n> ---\n>\n> result->scan_default = partition_bound_has_default(boundinfo);\n> +\n> return result;\n> ...\n>\n> /* Always include the default partition if any. */\n> result->scan_default = partition_bound_has_default(boundinfo);\n> -\n> return result;\n>\n> ...\n> else\n> result->scan_default = partition_bound_has_default(boundinfo);\n> +\n> return result;\n> ...\n>\n> - /* Add columns specified to SET NULL or SET DEFAULT if\n> provided. 
*/\n> + /*\n> + * Add columns specified to SET NULL or SET DEFAULT if\n> + * provided.\n> + */\n>\n> spurious change -- look like something not related to your patch.\n> --\n>\n> - * For range partitioning, we must only perform pruning with values\n> - * for either all partition keys or a prefix thereof.\n> + * For range partitioning and list partitioning, we must only perform\n> + * pruning with values for either all partition keys or a prefix\n> + * thereof.\n> */\n> - if (keyno > nvalues && context->strategy == PARTITION_STRATEGY_RANGE)\n> + if (keyno > nvalues && (context->strategy == PARTITION_STRATEGY_RANGE ||\n> + context->strategy == PARTITION_STRATEGY_LIST))\n> break;\n>\n> I think this is not true for multi-value list partitions, we might\n> still want prune partitions for e.g. (100, IS NULL, 20). Correct me\n> if I am missing something here.\n> ---\n>\n> /*\n> - * For range partitioning, if we have no clauses for the current key,\n> - * we can't consider any later keys either, so we can stop here.\n> + * For range partitioning and list partitioning, if we have no clauses\n> + * for the current key, we can't consider any later keys either, so we\n> + * can stop here.\n> */\n> - if (part_scheme->strategy == PARTITION_STRATEGY_RANGE &&\n> + if ((part_scheme->strategy == PARTITION_STRATEGY_RANGE ||\n> + part_scheme->strategy == PARTITION_STRATEGY_LIST) &&\n> clauselist == NIL)\n> break\n>\n> Similarly, why would this be true for list partitioning? How can we\n> prune partitions if values is for e.g. 
(100, <not given>, 20).\n> --\n>\n> - if (bms_is_member(keyno, opstep->nullkeys))\n> + if (bms_is_member(keyno, opstep->nullkeys) &&\n> + context->strategy != PARTITION_STRATEGY_LIST)\n> continue;\n> Will that prune for all NULL partitioning key values?\n> ---\n>\n> + appendStringInfoString\n> + (buf,\n> get_list_partbound_value_string(lfirst(cell)));\n>\n> Formatting is not quite right.\n> --\n>\n> +/*\n> + * get_min_and_max_offset\n> + *\n> + * Fetches the minimum and maximum offset of the matching partitions.\n> + */\n>\n> ...\n>\n> +/*\n> + * get_min_or_max_off\n> + *\n> + * Fetches either minimum or maximum offset of the matching partitions\n> + * depending on the value of is_min parameter.\n> + */\n>\n> I am not sure we really have to have separate functions but if needed\n> then I would prefer to have a separate function for each min and max\n> rather than combining.\n> ---\n>\n> + if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n> + {\n> + *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n> + return PARTCLAUSE_MATCH_NULLNESS;\n> + }\n> +\n> + expr = makeConst(UNKNOWNOID, -1, InvalidOid, -2, (Datum) 0,\n> true, false);\n> + partclause = (PartClauseInfo *) palloc(sizeof(PartClauseInfo));\n> +\n> + partclause->keyno = partkeyidx;\n> + partclause->expr = (Expr *) expr;\n> + partclause->is_null = true;\n> +\n> + if (nulltest->nulltesttype == IS_NOT_NULL)\n> + {\n> + partclause->op_is_ne = true;\n> + partclause->op_strategy = InvalidStrategy;\n> + }\n> + else\n> + {\n> + partclause->op_is_ne = false;\n> + partclause->op_strategy = BTEqualStrategyNumber;\n> + }\n>\n> - return PARTCLAUSE_MATCH_NULLNESS;\n> + *pc = partclause;\n> + return PARTCLAUSE_MATCH_CLAUSE;\n>\n> I still believe considering NULL value for match clause is not a\n> fundamentally correct thing. 
And that is only for List partitioning\n> which isn't aligned with the other partitioning.\n> ---\n>\n> Regards,\n> Amul", "msg_date": "Wed, 29 Dec 2021 19:28:19 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "On Wed, Dec 29, 2021 at 7:26 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n>\n> > - * For range partitioning, we must only perform pruning with values\n> > - * for either all partition keys or a prefix thereof.\n> > + * For range partitioning and list partitioning, we must only perform\n> > + * pruning with values for either all partition keys or a prefix\n> > + * thereof.\n> > */\n> > - if (keyno > nvalues && context->strategy == PARTITION_STRATEGY_RANGE)\n> > + if (keyno > nvalues && (context->strategy == PARTITION_STRATEGY_RANGE ||\n> > + context->strategy == PARTITION_STRATEGY_LIST))\n> > break;\n> >\n> > I think this is not true for multi-value list partitions, we might\n> > still want prune partitions for e.g. (100, IS NULL, 20). Correct me\n> > if I am missing something here.\n>\n> AFAIK, the above condition/comments says that, either we should\n> include all keys or prefixes of the partition keys to get the\n> partition pruning results. For example if we have a table with 2\n> columns and both are present in the partition key. Let the column\n> names be 'a' and 'b'.\n>\n> SELECT * FROM table WHERE a=1 AND b=1; - This query works for pruning\n> and it refers to a comment which says all partition keys are included.\n> SELECT * FROM table WHERE b=1; - Here partition pruning does not work\n> as it does not contain prefix of the partition keys.\n> SELECT * FROM table WHERE a=1; - This query works fine as column 'a'\n> is prefix of partition keys.\n>\n> Please let me know if you need more information.\n\nThat what I was assuming is not correct. 
The dependency of the prefix\nis true for the range partitioning but why should that be in the case\nof list partitioning? I think all partitioning keys in the list will\nnot be dependent on each other, AFAICU. If you prune list partitions\nbased on the b=1 value that still is correct & gives the correct\nresult, correct me If I am wrong.\n\n> ---\n>\n> > +/*\n> > + * get_min_and_max_offset\n> > + *\n> > + * Fetches the minimum and maximum offset of the matching partitions.\n> > + */\n> >\n> > ...\n> >\n> > +/*\n> > + * get_min_or_max_off\n> > + *\n> > + * Fetches either minimum or maximum offset of the matching partitions\n> > + * depending on the value of is_min parameter.\n> > + */\n> >\n> > I am not sure we really have to have separate functions but if needed\n> > then I would prefer to have a separate function for each min and max\n> > rather than combining.\n>\n> If we don't make a separate function, then we have to include this\n> code in get_matching_list_bounds() which is already a big function. I\n> just made a separate function to not increase the complexity of\n> get_matching_list_bounds() and most of the code present in\n> get_min_or_max_off() is common for min and max calculation. If we make\n> it separate then there might be a lot of duplications. Please let me\n> know if you still feel if any action is required.\n\nHmm, ok, I personally didn't like to have two functions one gives max\nand min and the other gives only max or min, the other could have\ndifferent opinions.\n\nHow about keeping only one function say, get_min_max_off() and based\non the argument e.g. minoff & maxoff fetch the value, I mean e.g. 
if\nminoff is not null then fetch the value otherwise skip that, same for\nmaxoff too.\n\n> ---\n>\n> > + if (part_scheme->strategy != PARTITION_STRATEGY_LIST)\n> > + {\n> > + *clause_is_not_null = (nulltest->nulltesttype == IS_NOT_NULL);\n> > + return PARTCLAUSE_MATCH_NULLNESS;\n> > + }\n> > +\n> > + expr = makeConst(UNKNOWNOID, -1, InvalidOid, -2, (Datum) 0,\n> > true, false);\n> > + partclause = (PartClauseInfo *) palloc(sizeof(PartClauseInfo));\n> > +\n> > + partclause->keyno = partkeyidx;\n> > + partclause->expr = (Expr *) expr;\n> > + partclause->is_null = true;\n> > +\n> > + if (nulltest->nulltesttype == IS_NOT_NULL)\n> > + {\n> > + partclause->op_is_ne = true;\n> > + partclause->op_strategy = InvalidStrategy;\n> > + }\n> > + else\n> > + {\n> > + partclause->op_is_ne = false;\n> > + partclause->op_strategy = BTEqualStrategyNumber;\n> > + }\n> >\n> > - return PARTCLAUSE_MATCH_NULLNESS;\n> > + *pc = partclause;\n> > + return PARTCLAUSE_MATCH_CLAUSE;\n> >\n> > I still believe considering NULL value for match clause is not a\n> > fundamentally correct thing. And that is only for List partitioning\n> > which isn't aligned with the other partitioning.\n>\n> As other partitions which support multiple partition keys (Range\n> partitioning) do not support NULL values. This feature supports\n> multiple partition keys with list partitioning and it also supports\n> NULL values. With the existing design, I have tried to support this\n> feature with minimal changes as possible. If this is not the right\n> approach to support NULL values, I would like to know how we can\n> support multiple NULL values. 
Kindly provide more information.\n\nI haven't studied the whole partition pruning code and don't know the\ncomplete code flow, but AFAICU, this is not the correct way to handle null\nvalue.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 3 Jan 2022 09:33:37 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" }, { "msg_contents": "Hi,\n\nThe cfbot reports some clang warning on the last version of the patchset:\n\nhttps://cirrus-ci.com/task/6721617647632384\n\n[16:35:24.444] partprune.c:2775:8: error: explicitly assigning value of variable of type 'int' to itself [-Werror,-Wself-assign]\n[16:35:24.444] off = off;\n[16:35:24.444]\n\nA quick look at the patch seems to indicate that it's harmless dead code that\nshould be removed, and there are no other failure reported by the cfbot, but I\ndefinitely didn't read the patch carefully so a confirmation (and an updated\npatch) would be welcome.\n\nHowever I see that Amul raised some concerns, so I will change the patch status\nto Waiting on Author. Feel free to switch it back to Needs Review if you think\nit's more appropriate.\n\n\n", "msg_date": "Fri, 14 Jan 2022 17:49:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-Column List Partitioning" } ]
[ { "msg_contents": "Hi,\r\n\r\nSince we have introduced `pgxactoff` in [941697c3c1ae5d6ee153065adb96e1e63ee11224](https://github.com/postgres/postgres/commit/941697c3c1ae5d6ee153065adb96e1e63ee11224), and `pgxactoff` is always the index of `proc->pgprocno` in `procArray->pgprocnos`. So it seems that we could directly use `proc->pgxactoff` as the value of `index` in `ProcArrayRemove()`? My thought is to replace \r\n\r\n```c\r\nfor (index = 0; index < arrayP->numProcs; index++)\r\n{\r\n if (arrayP->pgprocnos[index] == proc->pgprocno)\r\n {\r\n /* ... */\r\n }\r\n}\r\n```\r\n\r\nwith \r\n\r\n```c\r\nindex = proc->pgxactoff;\r\n/* ... */\r\n```\r\n\r\nI would appreciate your help.", "msg_date": "Fri, 7 May 2021 00:30:13 +0800", "msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "use `proc->pgxactoff` as the value of `index` in `ProcArrayRemove()`" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 00:30:13 +0800, 盏一 wrote:\n> Since we have introduced `pgxactoff` in [941697c3c1ae5d6ee153065adb96e1e63ee11224](https://github.com/postgres/postgres/commit/941697c3c1ae5d6ee153065adb96e1e63ee11224), and `pgxactoff` is always the index of `proc->pgprocno` in `procArray->pgprocnos`. So it seems that we could directly use `proc->pgxactoff` as the value of `index` in `ProcArrayRemove()`? My thought is to replace\n>\n> ```c\n> for (index = 0; index < arrayP->numProcs; index++)\n> {\n> if (arrayP->pgprocnos[index] == proc->pgprocno)\n> {\n> /* ... */\n> }\n> }\n> ```\n>\n> with\n>\n> ```c\n> index = proc->pgxactoff;\n> /* ... */\n> ```\n\nSounds like a plan! Do you want to write a patch?\n\nIf you do, I think it might be worthwhile to add an only-with-assertions\nloop checking that there's no other entry with the same pgprocno in the\ndense arrays.\n\nGiven that the code is new in 14, I wonder if we should cram this\nsimplification in before beta? 
I don't think this is likely to matter\nperformance wise, but it seems like it'll make maintenance easier to not\nhave it look different in 14 than it does both in 13 and 15.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 12:08:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: use `proc->pgxactoff` as the value of `index` in\n `ProcArrayRemove()`" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-07 00:30:13 +0800, 盏一 wrote:\n>> Since we have introduced `pgxactoff` in [941697c3c1ae5d6ee153065adb96e1e63ee11224](https://github.com/postgres/postgres/commit/941697c3c1ae5d6ee153065adb96e1e63ee11224), and `pgxactoff` is always the index of `proc->pgprocno` in `procArray->pgprocnos`. So it seems that we could directly use `proc->pgxactoff` as the value of `index` in `ProcArrayRemove()`? My thought is to replace\n\n> Sounds like a plan! Do you want to write a patch?\n\n> If you do, I think it might be worthwhile to add an only-with-assertions\n> loop checking that there's no other entry with the same pgprocno in the\n> dense arrays.\n\nHmm, I can definitely see keeping a check that the selected entry\nhas the right PID and/or pgprocno, but making it search for duplicates\nseems a bit over the top. 
The existing code isn't guarding against\nthat, and I don't really see a reason why there's a meaningful risk\nof it.\n\n> Given that the code is new in 14, I wonder if we should cram this\n> simplification in before beta?\n\n+1, seems like a pretty clear missed opportunity in 941697c3c.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 May 2021 15:27:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: use `proc->pgxactoff` as the value of `index` in\n `ProcArrayRemove()`" }, { "msg_contents": "Hi,\n\nOn 2021-05-06 15:27:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > If you do, I think it might be worthwhile to add an only-with-assertions\n> > loop checking that there's no other entry with the same pgprocno in the\n> > dense arrays.\n>\n> Hmm, I can definitely see keeping a check that the selected entry\n> has the right PID and/or pgprocno, but making it search for duplicates\n> seems a bit over the top. The existing code isn't guarding against\n> that, and I don't really see a reason why there's a meaningful risk\n> of it.\n\nThe current code makes it at least more likely for things to fall over\nbadly if there's such an issue, because there's a 50/50 chance that the\nwrong entry would be moved. 
I do dimly remember hitting a nasty bug or\ntwo during the development of 941697c3c where such a thing happened, but\nI don't remember the details.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 12:45:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: use `proc->pgxactoff` as the value of `index` in\n `ProcArrayRemove()`" }, { "msg_contents": "\nOn 5/6/21 3:27 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2021-05-07 00:30:13 +0800, 盏一 wrote:\n>>> Since we have introduced `pgxactoff` in [941697c3c1ae5d6ee153065adb96e1e63ee11224](https://github.com/postgres/postgres/commit/941697c3c1ae5d6ee153065adb96e1e63ee11224), and `pgxactoff` is always the index of `proc->pgprocno` in `procArray->pgprocnos`. So it seems that we could directly use `proc->pgxactoff` as the value of `index` in `ProcArrayRemove()`? My thought is to replace\n>> Sounds like a plan! Do you want to write a patch?\n>> If you do, I think it might be worthwhile to add an only-with-assertions\n>> loop checking that there's no other entry with the same pgprocno in the\n>> dense arrays.\n> Hmm, I can definitely see keeping a check that the selected entry\n> has the right PID and/or pgprocno, but making it search for duplicates\n> seems a bit over the top. 
The existing code isn't guarding against\n> that, and I don't really see a reason why there's a meaningful risk\n> of it.\n>\n>> Given that the code is new in 14, I wonder if we should cram this\n>> simplification in before beta?\n> +1, seems like a pretty clear missed opportunity in 941697c3c.\n>\n> \t\t\t\n\n\nopen item then?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 6 May 2021 16:16:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: use `proc->pgxactoff` as the value of `index` in\n `ProcArrayRemove()`" }, { "msg_contents": "> Sounds like a plan! Do you want to write a patch?\r\nAdd the patch.", "msg_date": "Fri, 7 May 2021 04:36:25 +0800", "msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "Re: use `proc->pgxactoff` as the value of `index` in\n `ProcArrayRemove()`" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 04:36:25 +0800, 盏一 wrote:\n> > Sounds like a plan! Do you want to write a patch?\n> Add the patch.\n\nThanks for the patch. I finally pushed an edited version of it. There\nwere other loops over ->pgprocnos, so I put assertions in those - that\ngains us a good bit more checking than we had before...\n\nI also couldn't resist to do some small formatting cleanups - I found\nthe memmove calls just too hard to read.\n\nI took the authorship information as you had it in the diff you attached\n- I hope that's OK?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 21:42:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: use `proc->pgxactoff` as the value of `index` in\n `ProcArrayRemove()`" } ]
[ { "msg_contents": "Hi Hackers,\n\nLast email, format error, missing some information, so I resend this email. \n\n With PG 13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), I tested subtransaction with parallel scan, I got a subtransaction coredump as below:\n\n \n\n```\n\n(gdb) bt\n\n#0 0x00001517ce61f7ff in raise () from /lib64/libc.so.6\n\n#1 0x00001517ce609c35 in abort () from /lib64/libc.so.6\n\n#2 0x0000000000aaf93f in ExceptionalCondition (conditionName=0xb4c920 \"TransactionIdFollowsOrEquals(xid, TransactionXmin)\", errorType=0xb4c796 \"FailedAssertion\", \n\n fileName=0xb4c738 \"/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/subtrans.c\", lineNumber=156)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/error/assert.c:67\n\n#3 0x0000000000563111 in SubTransGetTopmostTransaction (xid=196963)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/subtrans.c:156\n\n#4 0x0000000000b05206 in XidInMVCCSnapshot (xid=196963, snapshot=0x2f8ec58)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/time/snapmgr.c:2293\n\n#5 0x00000000004ff2bc in HeapTupleSatisfiesMVCC (htup=0x7ffc21807120, snapshot=0x2f8ec58, buffer=1946)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam_visibility.c:1073\n\n#6 0x0000000000500363 in HeapTupleSatisfiesVisibility (tup=0x7ffc21807120, snapshot=0x2f8ec58, buffer=1946)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam_visibility.c:1695\n\n#7 0x00000000004e423b in heapgetpage (sscan=0x2f8e840, page=1685) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam.c:447\n\n#8 0x00000000004e68bf in heapgettup_pagemode (scan=0x2f8e840, dir=ForwardScanDirection, nkeys=0, key=0x0)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam.c:1077\n\n#9 
0x00000000004e6eb6 in heap_getnextslot (sscan=0x2f8e840, direction=ForwardScanDirection, slot=0x2fd8a38)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam.c:1333\n\n#10 0x000000000075350e in table_scan_getnextslot (sscan=0x2f8e840, direction=ForwardScanDirection, slot=0x2fd8a38)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/include/access/tableam.h:906\n\n#11 0x00000000007535d6 in SeqNext (node=0x2fd86a8) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeSeqscan.c:80\n\n#12 0x000000000071af3c in ExecScanFetch (node=0x2fd86a8, accessMtd=0x753542 <SeqNext>, recheckMtd=0x7535e7 <SeqRecheck>)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execScan.c:133\n\n#13 0x000000000071afdd in ExecScan (node=0x2fd86a8, accessMtd=0x753542 <SeqNext>, recheckMtd=0x7535e7 <SeqRecheck>)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execScan.c:199\n\n#14 0x0000000000753631 in ExecSeqScan (pstate=0x2fd86a8) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeSeqscan.c:112\n\n#15 0x00000000007173af in ExecProcNodeFirst (node=0x2fd86a8) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execProcnode.c:450\n\n#16 0x000000000070b7f7 in ExecProcNode (node=0x2fd86a8) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/include/executor/executor.h:248\n\n#17 0x000000000070e302 in ExecutePlan (estate=0x2fd7ca0, planstate=0x2fd86a8, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, \n\n direction=ForwardScanDirection, dest=0x2f8ddd8, execute_once=true) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execMain.c:1632\n\n#18 0x000000000070be22 in standard_ExecutorRun (queryDesc=0x2f8e7a8, direction=ForwardScanDirection, count=0, execute_once=true)\n\n at 
/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execMain.c:350\n\n#19 0x000000000070bc50 in ExecutorRun (queryDesc=0x2f8e7a8, direction=ForwardScanDirection, count=0, execute_once=true)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execMain.c:294\n\n#20 0x00000000007131d5 in ParallelQueryMain (seg=0x2ef30b8, toc=0x1517cf85c000)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execParallel.c:1448\n\n#21 0x000000000055f70c in ParallelWorkerMain (main_arg=897455922) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c:1470\n\n#22 0x000000000086e255 in StartBackgroundWorker () at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/bgworker.c:879\n\n#23 0x000000000088192c in do_start_bgworker (rw=0x2f221c0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:5870\n\n#24 0x0000000000881cd8 in maybe_start_bgworkers () at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:6095\n\n#25 0x0000000000880d14 in sigusr1_handler (postgres_signal_arg=10) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:5255\n\n#26 <signal handler called>\n\n#27 0x00001517ce6dc4bb in select () from /lib64/libc.so.6\n\n#28 0x000000000087c867 in ServerLoop () at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:1703\n\n#29 0x000000000087c232 in PostmasterMain (argc=3, argv=0x2ef1070) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:1412\n\n#30 0x0000000000783418 in main (argc=3, argv=0x2ef1070) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/main/main.c:210\n\n(gdb) f 3\n\n#3 0x0000000000563111 in SubTransGetTopmostTransaction (xid=196963)\n\n at 
/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/subtrans.c:325\n\n325 Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));\n\n(gdb) p xid\n\n$1 = 196963\n\n(gdb) f 4\n\n#4 0x0000000000b05206 in XidInMVCCSnapshot (xid=196963, snapshot=0x2f8ec58)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/time/snapmgr.c:2293\n\n2293 xid = SubTransGetTopmostTransaction(xid);\n\n(gdb) p *snapshot\n\n$2 = {snapshot_type = SNAPSHOT_MVCC, xmin = 196962, xmax = 210314, xip = 0x2f8ecc0, xcnt = 82, subxip = 0x0, subxcnt = 0, suboverflowed = true, \n\n takenDuringRecovery = false, copied = true, curcid = 14, speculativeToken = 2139062143, active_count = 0, regd_count = 1, ph_node = {first_child = 0x0, \n\n next_sibling = 0xf65ca0 <CatalogSnapshotData+64>, prev_or_parent = 0x2f8dbc8}, whenTaken = 0, lsn = 0}\n\n(gdb) p TransactionXmin\n\n$3 = 196992\n\n(gdb) f 3\n\n#3 0x0000000000563111 in SubTransGetTopmostTransaction (xid=196963)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/subtrans.c:325\n\n325 Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));\n\n(gdb) p xid\n\n$4 = 196963\n\n(gdb) p TransactionXmin\n\n$5 = 196992\n\n```\n\n \n\nAfter simple analyze, I think it should be a bug.\n\n \n\n1, Main process first get active snapshot(xmin 196962, xmax 210314), and push active snapshot. 
\n\n2, Main process call InitializeParallelDSM, get transaction snapshot(xmin 196992, xmax 210320), and send this snapshot with parameters PARALLEL_KEY_TRANSACTION_SNAPSHOT.\n\n3, Main process call ExecParallelInitializeDSM->ExecSeqScanInitializeDSM->table_parallelscan_initialize SerializeSnapshot active snapshot(xmin 196962, xmax 210314)\n\n4, Main process Lanch Parallel Workers process.\n\n \n\nParallel Workers process work flow:\n\n1, ParallelWorkerMain->BackgroundWorkerInitializeConnectionByOid->GetTransactionSnapshot(xmin 196992, xmax 210320) and set TransactionXmin to 196992.\n\n2, ParallelWorkerMain->RestoreTransactionSnapshot->SetTransactionSnapshot->ProcArrayInstallRestoredXmin and set TransactionXmin to 196992.\n\n3, ParallelWorkerMain->ExecParallelInitializeWorker->ExecSeqScanInitializeWorker->table_beginscan_parallel restore active snapshot(xmin 196962, xmax 210314) from main process.\n\n4, Got coredump ParallelWorkerMain->ParallelQueryMain->ExecutorRun...ExecScan->heapgetpage->HeapTupleSatisfiesMVCC->XidInMVCCSnapshot->SubTransGetTopmostTransaction \n\n \n\nSo the root cause is the Parallel Workers process set the TransactionXmin with later transcation snapshot. When parallel scan, Parallel Workers process use the older active snapshot.\n\nIt leads to subtrans assert coredump. I don't know how to fix it. Is there any ideas?\n\n \n\nThanks\n\nPengcheng", "msg_date": "Fri, 7 May 2021 11:32:57 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi,\n\nOn 2021-05-07 11:32:57 +0800, Pengchengliu wrote:\n> Hi Hackers,\n> \n> Last email, format error, missing some information, so I resend this email. \n> \n> With PG 13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), I tested subtransaction with parallel scan, I got a subtransaction coredump as below:\n\n\n> So the root cause is the Parallel Workers process set the TransactionXmin with later transcation snapshot. When parallel scan, Parallel Workers process use the older active snapshot.\n> \n> It leads to subtrans assert coredump. I don't know how to fix it. 
Is there any ideas?\n\nDo you have steps to reliably reproduce this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 May 2021 20:55:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi Andres,\n Reproduce steps.\n\n1, Modify and adjust NUM_SUBTRANS_BUFFERS to 128 from 32 in the file \"src/include/access/subtrans.h\" line number 15.\n2, configure with enable assert and build it.\n3, init a new database cluster.\n4, modify postgres.conf and add some parameters as below. As the coredump from parallel scan, so we adjust parallel setting, make it easy to reproduce. \n\n max_connections = 2000\n\n parallel_setup_cost=0\n parallel_tuple_cost=0\n min_parallel_table_scan_size=0\n max_parallel_workers_per_gather=8\n max_parallel_workers = 32\n\n5, start the database cluster.\n6, use the script init_test.sql in attachment to create tables. \n7, use pgbench with script sub_120.sql in attachment to test it. Try it sometimes, you should get the coredump file.\n pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 120\n\nThanks\nPengcheng\n\n\n-----Original Message-----\nFrom: Andres Freund <andres@anarazel.de> \nSent: May 7, 2021 11:55\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: pgsql-hackers@postgresql.org\nSubject: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\nHi,\n\nOn 2021-05-07 11:32:57 +0800, Pengchengliu wrote:\n> Hi Hackers,\n> \n> Last email, format error, missing some information, so I resend this email. \n> \n> With PG 13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), I tested subtransaction with parallel scan, I got a subtransaction coredump as below:\n\n\n> So the root cause is the Parallel Workers process set the TransactionXmin with later transcation snapshot. 
When parallel scan, Parallel Workers process use the older active snapshot.\n> \n> It leads to subtrans assert coredump. I don't know how to fix it. Is there any ideas?\n\nDo you have steps to reliably reproduce this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 09:26:23 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Tue, May 11, 2021 at 11:28 AM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n>\n> Hi Andres,\n> Reproduce steps.\n>\n> 1, Modify and adjust NUM_SUBTRANS_BUFFERS to 128 from 32 in the file \"src/include/access/subtrans.h\" line number 15.\n> 2, configure with enable assert and build it.\n> 3, init a new database cluster.\n> 4, modify postgres.conf and add some parameters as below. As the coredump from parallel scan, so we adjust parallel setting, make it easy to reproduce.\n>\n> max_connections = 2000\n>\n> parallel_setup_cost=0\n> parallel_tuple_cost=0\n> min_parallel_table_scan_size=0\n> max_parallel_workers_per_gather=8\n> max_parallel_workers = 32\n>\n> 5, start the database cluster.\n> 6, use the script init_test.sql in attachment to create tables.\n> 7, use pgbench with script sub_120.sql in attachment to test it. 
Try it sometimes, you should get the coredump file.\n> pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 120\n>\n\nHi,\n\nI had a go at reproducing your reported issue, making sure to follow\nall your steps.\nUnfortunately, your script seemed to run OK with pgbench and no\ncrash/coredump occurred for me (and yes, I definitely had asserts\nenabled).\nI tried with both the 13.2 source code\n(3fb4c75e857adee3da4386e947ba58a75f3e74b7), running through the script\nwith pgbench twice to completion, and also did the same using the\nlatest Postgres source code.\n\nWill be interesting to see if anyone is able to reproduce your issue.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 11 May 2021 21:08:23 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi Andres,\n\n Thanks for your reply. \n\n And if you still cannot reproduce it in 2 minutes, could you run pgbench for a longer time, such as 30 or 60 minutes.\n\n \n\n This coredump, it should be from parallel scan only. 
\n\n \n\n(gdb) bt\n\n#0 0x0000149fb3d254bb in select () from /lib64/libc.so.6\n\n#1 0x0000000000b1d3b3 in pg_usleep (microsec=1000000) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/port/pgsleep.c:56\n\n#2 0x0000000000562a3b in SubTransGetTopmostTransaction (xid=799225) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/subtrans.c:164\n\n#3 0x0000000000b04acb in XidInMVCCSnapshot (xid=799225, snapshot=0x2af2d00) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/time/snapmgr.c:2293\n\n#4 0x00000000004ff24c in HeapTupleSatisfiesMVCC (htup=0x7fffc1465f60, snapshot=0x2af2d00, buffer=109832)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam_visibility.c:1073\n\n#5 0x00000000005002f3 in HeapTupleSatisfiesVisibility (tup=0x7fffc1465f60, snapshot=0x2af2d00, buffer=109832)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam_visibility.c:1695\n\n#6 0x00000000004e41cb in heapgetpage (sscan=0x2af3118, page=10846) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam.c:447\n\n#7 0x00000000004e684f in heapgettup_pagemode (scan=0x2af3118, dir=ForwardScanDirection, nkeys=0, key=0x0)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam.c:1077\n\n#8 0x00000000004e6e46 in heap_getnextslot (sscan=0x2af3118, direction=ForwardScanDirection, slot=0x2affab0)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam.c:1333\n\n#9 0x0000000000752e1a in table_scan_getnextslot (sscan=0x2af3118, direction=ForwardScanDirection, slot=0x2affab0)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/include/access/tableam.h:906\n\n#10 0x0000000000752ee2 in SeqNext (node=0x2aff538) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeSeqscan.c:80\n\n#11 0x000000000071a848 in 
ExecScanFetch (node=0x2aff538, accessMtd=0x752e4e <SeqNext>, recheckMtd=0x752ef3 <SeqRecheck>)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execScan.c:133\n\n#12 0x000000000071a8e9 in ExecScan (node=0x2aff538, accessMtd=0x752e4e <SeqNext>, recheckMtd=0x752ef3 <SeqRecheck>)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execScan.c:199\n\n#13 0x0000000000752f3d in ExecSeqScan (pstate=0x2aff538) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeSeqscan.c:112\n\n#14 0x0000000000725794 in ExecProcNode (node=0x2aff538) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/include/executor/executor.h:248\n\n#15 0x0000000000725c7f in fetch_input_tuple (aggstate=0x2afeff0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeAgg.c:589\n\n#16 0x0000000000728f98 in agg_retrieve_direct (aggstate=0x2afeff0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeAgg.c:2463\n\n#17 0x00000000007289f2 in ExecAgg (pstate=0x2afeff0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/nodeAgg.c:2183\n\n#18 0x0000000000716cbb in ExecProcNodeFirst (node=0x2afeff0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execProcnode.c:450\n\n#19 0x000000000070b103 in ExecProcNode (node=0x2afeff0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/include/executor/executor.h:248\n\n#20 0x000000000070dc0e in ExecutePlan (estate=0x2afeb30, planstate=0x2afeff0, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, \n\n direction=ForwardScanDirection, dest=0x2ab0578, execute_once=true) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execMain.c:1632\n\n#21 0x000000000070b72e in standard_ExecutorRun (queryDesc=0x2af2c68, direction=ForwardScanDirection, count=0, execute_once=true)\n\n at 
/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execMain.c:350\n\n#22 0x000000000070b55c in ExecutorRun (queryDesc=0x2af2c68, direction=ForwardScanDirection, count=0, execute_once=true)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execMain.c:294\n\n#23 0x0000000000712ae1 in ParallelQueryMain (seg=0x2a0a0c8, toc=0x149fb4dab000) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/executor/execParallel.c:1448\n\n#24 0x000000000055f69c in ParallelWorkerMain (main_arg=1403863538) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c:1470\n\n#25 0x000000000086db61 in StartBackgroundWorker () at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/bgworker.c:879\n\n#26 0x0000000000881238 in do_start_bgworker (rw=0x2a351b0) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:5870\n\n#27 0x00000000008815e4 in maybe_start_bgworkers () at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:6095\n\n#28 0x0000000000880620 in sigusr1_handler (postgres_signal_arg=10) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:5255\n\n#29 <signal handler called>\n\n#30 0x0000149fb3d254bb in select () from /lib64/libc.so.6\n\n#31 0x000000000087c173 in ServerLoop () at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:1703\n\n#32 0x000000000087bb3e in PostmasterMain (argc=3, argv=0x2a08080) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/postmaster/postmaster.c:1412\n\n#33 0x0000000000782d24 in main (argc=3, argv=0x2a08080) at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/main/main.c:210\n\n(gdb) f 24\n\n#24 0x000000000055f69c in ParallelWorkerMain (main_arg=1403863538) at 
/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c:1470\n\n1470 entrypt(seg, toc);\n\n(gdb) p *ActiveSnapshot->as_snap //active snapshot from main process\n\n$18 = {snapshot_type = SNAPSHOT_MVCC, xmin = 799162, xmax = 822061, xip = 0x2ab0190, xcnt = 169, subxip = 0x0, subxcnt = 0, suboverflowed = true, takenDuringRecovery = false, \n\n copied = true, curcid = 119, speculativeToken = 2139062143, active_count = 1, regd_count = 2, ph_node = {first_child = 0x2af2d40, next_sibling = 0x0, prev_or_parent = 0x0}, \n\n whenTaken = 0, lsn = 0}\n\n(gdb) p *CurrentSnapshot //transaction snapshot from main process \n\n$19 = {snapshot_type = SNAPSHOT_MVCC, xmin = 799425, xmax = 822293, xip = 0x2ab1c00, xcnt = 172, subxip = 0x149f29302010, subxcnt = 0, suboverflowed = true, \n\n takenDuringRecovery = false, copied = false, curcid = 119, speculativeToken = 0, active_count = 0, regd_count = 0, ph_node = {first_child = 0x0, next_sibling = 0x0, \n\n prev_or_parent = 0x0}, whenTaken = 0, lsn = 0}\n\n(gdb) f 4\n\n#4 0x00000000004ff24c in HeapTupleSatisfiesMVCC (htup=0x7fffc1465f60, snapshot=0x2af2d00, buffer=109832)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/heap/heapam_visibility.c:1073\n\n1073 XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot))\n\n(gdb) p *snapshot //ative snap shot from main process for scan\n\n$20 = {snapshot_type = SNAPSHOT_MVCC, xmin = 799162, xmax = 822061, xip = 0x2af2d68, xcnt = 169, subxip = 0x0, subxcnt = 0, suboverflowed = true, takenDuringRecovery = false, \n\n copied = true, curcid = 119, speculativeToken = 2139062143, active_count = 0, regd_count = 1, ph_node = {first_child = 0x0, next_sibling = 0xf65ca0 <CatalogSnapshotData+64>, \n\n prev_or_parent = 0x2ab0168}, whenTaken = 0, lsn = 0}\n\n(gdb) p TransactionXmin\n\n$21 = 799425\n\n(gdb) f 3\n\n#3 0x0000000000b04acb in XidInMVCCSnapshot (xid=799225, snapshot=0x2af2d00) at 
/home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/utils/time/snapmgr.c:2293\n\n2293 xid = SubTransGetTopmostTransaction(xid);\n\n(gdb) p xid\n\n$22 = 799225\n\n \n\nThe main process:\n\n 1, Main process get the transaction snapshot(xmin 799162, xmax 822061) and push the active snapshot first. And set this active snapshot to QueryDesc in CreateQueryDesc.\n\n 2, Main process collect active snapshot(xmin 799162, xmax 82206) and get the newer transaction snapshot(xmin 799425, xmax 822293). And use the parameter PARALLEL_KEY_TRANSACTION_SNAPSHOT, PARALLEL_KEY_ACTIVE_SNAPSHOT\n\n store transaction snapshot and active snapshot.\n\n 3, Main process ExecGather->ExecInitParallelPlan->ExecParallelInitializeDSM->ExecSeqScanInitializeDSM->table_parallelscan_initialize, send active snapshot(xmin 799162, xmax 82206) with plan id to parallel work process.\n\n 4, lauch parallel work process.\n\n \n\n The parallel work process:\n\n 1, Get Snapshot and set TransactionXmin itself, in ParallelWorkerMain->BackgroundWorkerInitializeConnectionByOid->GetTransactionSnapshot->GetSnapshotData.\n\n 2, Acooding PARALLEL_KEY_TRANSACTION_SNAPSHOT(xmin 799425, xmax 82229) from main process, and set TransactionXmin 799425 in ParallelWorkerMain->RestoreTransactionSnapshot->SetTransactionSnapshot->ProcArrayInstallRestoredXmin.\n\n 3, ExecParallelInitializeWorker->ExecSeqScanInitializeWorker->table_beginscan_parallel get the active snapshot(xmin 799162, xmax 82206) from main process, and set this snapshot to scan->rs_base.rs_snapshot.\n\n 4, parallel scan begin, with active snapshot(xmin 799162, xmax 82206) and TransactionXmin(799425),when scan tuple(xmin 799225) SubTransGetTopmostTransaction assert got. \n\n In HeapTupleSatisfiesMVCC->XidInMVCCSnapshot->SubTransGetTopmostTransaction.\n\n \n\n As main process gets the active snapshot (xmin 799162, xmax 822061) which is earlier than transaction snapshot(xmin 799425, xmax 822293). 
Parallel work process set TransactionXmin with transaction snapshot(xmin 799425, xmax 822293).\n\n But scan tuple with active snapshot (xmin 799162, xmax 822061).\n\nThanks\n\nPengcheng\n\n \n\n \n\n \n\n \n\n \n\n-----Original Message-----\nFrom: Greg Nancarrow <gregn4422@gmail.com> \nSent: 2021年5月11日 19:08\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\n \n\nOn Tue, May 11, 2021 at 11:28 AM Pengchengliu < <mailto:pengchengliu@tju.edu.cn> pengchengliu@tju.edu.cn> wrote:\n\n> \n\n> Hi Andres,\n\n> Reproduce steps.\n\n> \n\n> 1, Modify and adjust NUM_SUBTRANS_BUFFERS to 128 from 32 in the file \"src/include/access/subtrans.h\" line number 15.\n\n> 2, configure with enable assert and build it.\n\n> 3, init a new database cluster.\n\n> 4, modify postgres.conf and add some parameters as below. As the coredump from parallel scan, so we adjust parallel setting, make it easy to reproduce.\n\n> \n\n> max_connections = 2000\n\n> \n\n> parallel_setup_cost=0\n\n> parallel_tuple_cost=0\n\n> min_parallel_table_scan_size=0\n\n> max_parallel_workers_per_gather=8\n\n> max_parallel_workers = 32\n\n> \n\n> 5, start the database cluster.\n\n> 6, use the script init_test.sql in attachment to create tables.\n\n> 7, use pgbench with script sub_120.sql in attachment to test it. 
Try it sometimes, you should get the coredump file.\n\n> pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 120\n\n> \n\n \n\nHi,\n\n \n\nI had a go at reproducing your reported issue, making sure to follow all your steps.\n\nUnfortunately, your script seemed to run OK with pgbench and no crash/coredump occurred for me (and yes, I definitely had asserts enabled).\n\nI tried with both the 13.2 source code\n\n(3fb4c75e857adee3da4386e947ba58a75f3e74b7), running through the script with pgbench twice to completion, and also did the same using the latest Postgres source code.\n\n \n\nWill be interesting to see if anyone is able to reproduce your issue.\n\n \n\nRegards,\n\nGreg Nancarrow\n\nFujitsu Australia
Regards,Greg NancarrowFujitsu Australia", "msg_date": "Thu, 13 May 2021 09:25:05 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Thu, May 13, 2021 at 11:25 AM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n>\n\n> Hi Andres,\n> Thanks for you replay.\n\nEr, it's Greg who has replied so far (not Andres).\n\n>\n> And If you still cannot reproduce it in 2 minitus. Could you run pgbench longer time, such as 30 or 60 minutes.\n>\n\nActually, I did run it, multiple times, for more than 60 minutes, but\nno assert/crash/coredump occurred in my environment.\n\n>\n> The parallel work process:\n>\n> 1, Get Snapshot and set TransactionXmin itself, in ParallelWorkerMain->BackgroundWorkerInitializeConnectionByOid->GetTransactionSnapshot->GetSnapshotData.\n>\n> 2, Acooding PARALLEL_KEY_TRANSACTION_SNAPSHOT(xmin 799425, xmax 82229) from main process, and set TransactionXmin 799425 in ParallelWorkerMain->RestoreTransactionSnapshot->SetTransactionSnapshot->ProcArrayInstallRestoredXmin.\n>\n> 3, ExecParallelInitializeWorker->ExecSeqScanInitializeWorker->table_beginscan_parallel get the active snapshot(xmin 799162, xmax 82206) from main process, and set this snapshot to scan->rs_base.rs_snapshot.\n>\n> 4, parallel scan begin, with active snapshot(xmin 799162, xmax 82206) and TransactionXmin(799425),when scan tuple(xmin 799225) SubTransGetTopmostTransaction assert got.\n>\n> In HeapTupleSatisfiesMVCC->XidInMVCCSnapshot->SubTransGetTopmostTransaction.\n>\n\nI added some logging at a couple of points in the code:\n1) In the Worker process code - ParallelWorkerMain() - where it\nrestores the serialized transaction and active snapshots (i.e. 
passed\nto the Worker from the main process).\n2) In the HeapTupleSatisfiesMVCC() function, immediately before it\ncalls XidInMVCCSnapshot()\n\nAfter running it for an hour, examination of the log showed that in\nALL cases, the transaction snapshot xmin,xmax was always THE SAME as\nthe active snapshot xmin,xmax.\n(Can you verify that this occurs on your system when things are\nworking, prior to the coredump?)\n\nThis is different to what you are getting in your environment (at\nleast, different to what you described when the problem occurs).\nIn your case, you say that the main process gets \"the newer\ntransaction snapshot\" - where exactly is this happening in your case?\n(or this is what you don't yet know?)\nPerhaps very occasionally this somehow happens on your system and\ntriggers the Assert (and coredump)? I have not been able to reproduce\nthat on my system.\n\nHave you reproduced this issue on any other system, using the same\nsteps as you provided?\nI'm wondering if there might be something else in your environment\nthat may be influencing this problem.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 14 May 2021 00:15:07 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi Greg,\n\n Thanks for you replay and test.\n\n \n\n When main process gets the transaction snapshot in InitializeParallelDSM->GetTransactionSnapshot, the transaction snapshot xmin is very likely follows active snapshot xmin.\n\n \n\n Use gdb it is easy to verify it.\n\n Create env as blow:\n\n \n\n 1, Use PG13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), init a cluster database.\n\n 2, Append the postgres.conf as below:\n\n \n\n max_connections = 2000\n\n parallel_setup_cost=0\n\n parallel_tuple_cost=0\n\n min_parallel_table_scan_size=0\n\n max_parallel_workers_per_gather=8\n\n max_parallel_workers = 128 \n\n \n\n 3, Start 
the cluster database. Use the init_test.sql script in attachment to create some test tables.\n\n 4, Use the sub_120.sql script in attachment with pgbench to test it.\n\n \n\n pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 1800\n\n \n\n \n\n \n\n Then you can login the database, and use GDB to verify it. \n\n 1, First use explain, make sure force Parallel is OK.\n\n \n\n postgres=# explain (verbose,analyze) select count(*) from contend1;\n\n QUERY PLAN \n\n \n\n-------------------------------------------------------------------------------------------------------------------------------------\n\n----------------\n\nFinalize Aggregate (cost=12006.11..12006.12 rows=1 width=8) (actual time=1075.214..1075.449 rows=1 loops=1)\n\n Output: count(*)\n\n -> Gather (cost=12006.08..12006.09 rows=8 width=8) (actual time=1075.198..1075.433 rows=1 loops=1)\n\n Output: (PARTIAL count(*))\n\n Workers Planned: 8\n\n Workers Launched: 0\n\n -> Partial Aggregate (cost=12006.08..12006.09 rows=1 width=8) (actual time=1074.674..1074.676 rows=1 loops=1)\n\n Output: PARTIAL count(*)\n\n -> Parallel Seq Scan on public.contend1 (cost=0.00..11690.06 rows=126406 width=0) (actual time=0.008..587.454 rows=1\n\n010200 loops=1)\n\n Output: id, val, c2, c3, c4, c5, c6, c7, c8, c9, c10, crt_time\n\nPlanning Time: 0.123 ms\n\nExecution Time: 1075.588 ms\n\n postgres=# select pg_backend_pid();\n\npg_backend_pid \n\n----------------\n\n 2182678\n\n \n\n \n\n 2, use gdb to debug our backend process. Add the breakpoint in parallel.c:219 and continue.\n\n \n\n gdb -q -p 2182678 \n\n ...\n\n (gdb) b parallel.c:219\n\nBreakpoint 1 at 0x55d085: file /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c, line 219.\n\n (gdb) c\n\nContinuing.\n\n \n\n 3, In the psql clinet, we can execute the explain command in step1 again. \n\n After we get the breakpoint in gdb, we wait a moment. 
Then we execute next.\n\n Use gdb check active_snapshot and transaction_snapshot, active_snapshot->xmin is 158987 and transaction_snapshot->xmin is 162160.\n\nWhen I use gdb test it, sometimes active_snapshot is the same as transaction_snapshot. Then you can try it multiple times, and before execute next, try wait longer time.\n\n \n\n Breakpoint 1, InitializeParallelDSM (pcxt=0x2d53670)\n\n at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c:219\n\n219 Snapshot transaction_snapshot = GetTransactionSnapshot();\n\n(gdb) n \n\n220 Snapshot active_snapshot = GetActiveSnapshot();\n\n(gdb) \n\n223 oldcontext = MemoryContextSwitchTo(TopTransactionContext);\n\n(gdb) p *transaction_snapshot\n\n$1 = {snapshot_type = SNAPSHOT_MVCC, xmin = 162160, xmax = 183011, xip = 0x2d50d10, xcnt = 179, subxip = 0x148a9cddf010, \n\n subxcnt = 0, suboverflowed = true, takenDuringRecovery = false, copied = false, curcid = 0, speculativeToken = 0, \n\n active_count = 0, regd_count = 0, ph_node = {first_child = 0x0, next_sibling = 0x0, prev_or_parent = 0x0}, whenTaken = 0, lsn = 0}\n\n(gdb) p *active_snapshot\n\n$2 = {snapshot_type = SNAPSHOT_MVCC, xmin = 158987, xmax = 173138, xip = 0x2d53288, xcnt = 178, subxip = 0x0, subxcnt = 0, \n\n suboverflowed = true, takenDuringRecovery = false, copied = true, curcid = 0, speculativeToken = 0, active_count = 1, \n\n regd_count = 2, ph_node = {first_child = 0x0, next_sibling = 0x0, prev_or_parent = 0x2d52e48}, whenTaken = 0, lsn = 0}\n\n(gdb) \n\n \n\nThanks\n\nPengcheng\n\n \n\n \n\n \n\n-----Original Message-----\nFrom: Greg Nancarrow <gregn4422@gmail.com> \nSent: 2021年5月13日 22:15\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\n \n\nOn Thu, May 13, 2021 at 11:25 AM Pengchengliu < <mailto:pengchengliu@tju.edu.cn> 
pengchengliu@tju.edu.cn> wrote:\n\n> \n\n \n\n> Hi Andres,\n\n> Thanks for you replay.\n\n \n\nEr, it's Greg who has replied so far (not Andres).\n\n \n\n> \n\n> And If you still cannot reproduce it in 2 minitus. Could you run pgbench longer time, such as 30 or 60 minutes.\n\n> \n\n \n\nActually, I did run it, multiple times, for more than 60 minutes, but no assert/crash/coredump occurred in my environment.\n\n \n\n> \n\n> The parallel work process:\n\n> \n\n> 1, Get Snapshot and set TransactionXmin itself, in ParallelWorkerMain->BackgroundWorkerInitializeConnectionByOid->GetTransactionSnapshot->GetSnapshotData.\n\n> \n\n> 2, Acooding PARALLEL_KEY_TRANSACTION_SNAPSHOT(xmin 799425, xmax 82229) from main process, and set TransactionXmin 799425 in ParallelWorkerMain->RestoreTransactionSnapshot->SetTransactionSnapshot->ProcArrayInstallRestoredXmin.\n\n> \n\n> 3, ExecParallelInitializeWorker->ExecSeqScanInitializeWorker->table_beginscan_parallel get the active snapshot(xmin 799162, xmax 82206) from main process, and set this snapshot to scan->rs_base.rs_snapshot.\n\n> \n\n> 4, parallel scan begin, with active snapshot(xmin 799162, xmax 82206) and TransactionXmin(799425),when scan tuple(xmin 799225) SubTransGetTopmostTransaction assert got.\n\n> \n\n> In HeapTupleSatisfiesMVCC->XidInMVCCSnapshot->SubTransGetTopmostTransaction.\n\n> \n\n \n\nI added some logging at a couple of points in the code:\n\n1) In the Worker process code - ParallelWorkerMain() - where it restores the serialized transaction and active snapshots (i.e. 
passed to the Worker from the main process).\n\n2) In the HeapTupleSatisfiesMVCC() function, immediately before it calls XidInMVCCSnapshot()\n\n \n\nAfter running it for an hour, examination of the log showed that in ALL cases, the transaction snapshot xmin,xmax was always THE SAME as the active snapshot xmin,xmax.\n\n(Can you verify that this occurs on your system when things are working, prior to the coredump?)\n\n \n\nThis is different to what you are getting in your environment (at least, different to what you described when the problem occurs).\n\nIn your case, you say that the main process gets \"the newer transaction snapshot\" - where exactly is this happening in your case?\n\n(or this is what you don't yet know?)\n\nPerhaps very occasionally this somehow happens on your system and triggers the Assert (and coredump)? I have not been able to reproduce that on my system.\n\n \n\nHave you reproduced this issue on any other system, using the same steps as you provided?\n\nI'm wondering if there might be something else in your environment that may be influencing this problem.\n\n \n\n \n\nRegards,\n\nGreg Nancarrow\n\nFujitsu Australia", "msg_date": "Fri, 14 May 2021 10:24:42 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Fri, May 14, 2021 at 12:25 PM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n>\n> Hi Greg,\n>\n> Thanks for you replay and test.\n>\n>\n>\n> When main process gets the transaction snapshot in InitializeParallelDSM->GetTransactionSnapshot, the transaction snapshot xmin is very likely follows active snapshot xmin.\n>\n>\n>\n> Use gdb it is easy to verify it.\n>\n> Create env as blow:\n>\n>\n>\n> 1, Use PG13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), init a cluster database.\n>\n> 2, Append the postgres.conf as below:\n>\n>\n>\n> max_connections = 2000\n>\n> parallel_setup_cost=0\n>\n> 
parallel_tuple_cost=0\n>\n> min_parallel_table_scan_size=0\n>\n> max_parallel_workers_per_gather=8\n>\n> max_parallel_workers = 128\n>\n>\n>\n> 3, Start the cluster database. Use the init_test.sql script in attachment to create some test tables.\n>\n> 4, Use the sub_120.sql script in attachment with pgbench to test it.\n>\n>\n>\n> pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 1800\n>\n>\n>\n>\n>\n>\n>\n> Then you can login the database, and use GDB to verify it.\n>\n> 1, First use explain, make sure force Parallel is OK.\n>\n>\n>\n> postgres=# explain (verbose,analyze) select count(*) from contend1;\n>\n> QUERY PLAN\n>\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------\n>\n> ----------------\n>\n> Finalize Aggregate (cost=12006.11..12006.12 rows=1 width=8) (actual time=1075.214..1075.449 rows=1 loops=1)\n>\n> Output: count(*)\n>\n> -> Gather (cost=12006.08..12006.09 rows=8 width=8) (actual time=1075.198..1075.433 rows=1 loops=1)\n>\n> Output: (PARTIAL count(*))\n>\n> Workers Planned: 8\n>\n> Workers Launched: 0\n>\n> -> Partial Aggregate (cost=12006.08..12006.09 rows=1 width=8) (actual time=1074.674..1074.676 rows=1 loops=1)\n>\n> Output: PARTIAL count(*)\n>\n> -> Parallel Seq Scan on public.contend1 (cost=0.00..11690.06 rows=126406 width=0) (actual time=0.008..587.454 rows=1\n>\n> 010200 loops=1)\n>\n> Output: id, val, c2, c3, c4, c5, c6, c7, c8, c9, c10, crt_time\n>\n> Planning Time: 0.123 ms\n>\n> Execution Time: 1075.588 ms\n>\n> postgres=# select pg_backend_pid();\n>\n> pg_backend_pid\n>\n> ----------------\n>\n> 2182678\n>\n>\n>\n>\n>\n> 2, use gdb to debug our backend process. 
Add the breakpoint in parallel.c:219 and continue.\n>\n>\n>\n> gdb -q -p 2182678\n>\n> ...\n>\n> (gdb) b parallel.c:219\n>\n> Breakpoint 1 at 0x55d085: file /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c, line 219.\n>\n> (gdb) c\n>\n> Continuing.\n>\n>\n>\n> 3, In the psql clinet, we can execute the explain command in step1 again.\n>\n> After we get the breakpoint in gdb, we wait a moment. Then we execute next.\n>\n> Use gdb check active_snapshot and transaction_snapshot, active_snapshot->xmin is 158987 and transaction_snapshot->xmin is 162160.\n>\n> When I use gdb test it, sometimes active_snapshot is the same as transaction_snapshot. Then you can try it multiple times, and before execute next, try wait longer time.\n>\n>\n>\n> Breakpoint 1, InitializeParallelDSM (pcxt=0x2d53670)\n>\n> at /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c:219\n>\n> 219 Snapshot transaction_snapshot = GetTransactionSnapshot();\n>\n> (gdb) n\n>\n> 220 Snapshot active_snapshot = GetActiveSnapshot();\n>\n> (gdb)\n>\n> 223 oldcontext = MemoryContextSwitchTo(TopTransactionContext);\n>\n> (gdb) p *transaction_snapshot\n>\n> $1 = {snapshot_type = SNAPSHOT_MVCC, xmin = 162160, xmax = 183011, xip = 0x2d50d10, xcnt = 179, subxip = 0x148a9cddf010,\n>\n> subxcnt = 0, suboverflowed = true, takenDuringRecovery = false, copied = false, curcid = 0, speculativeToken = 0,\n>\n> active_count = 0, regd_count = 0, ph_node = {first_child = 0x0, next_sibling = 0x0, prev_or_parent = 0x0}, whenTaken = 0, lsn = 0}\n>\n> (gdb) p *active_snapshot\n>\n> $2 = {snapshot_type = SNAPSHOT_MVCC, xmin = 158987, xmax = 173138, xip = 0x2d53288, xcnt = 178, subxip = 0x0, subxcnt = 0,\n>\n> suboverflowed = true, takenDuringRecovery = false, copied = true, curcid = 0, speculativeToken = 0, active_count = 1,\n>\n> regd_count = 2, ph_node = {first_child = 0x0, next_sibling = 0x0, prev_or_parent = 0x2d52e48}, whenTaken = 0, lsn = 
0}\n>\n> (gdb)\n>\n>\n\nHi Pengcheng,\n\nI followed all your steps.\nHowever, I perhaps get different behavior in my environment.\n99% of the time, the xmin and xmax of the active_snapshot and\ntransaction_snapshot are the same (regardless of how long I wait at\ndifferent points after the breakpoint is hit). I've had one or two\ninstances where the xmax values differ. I managed to catch just one\ncase where there were different xmin and xmax values in the snapshots,\nbut this occurred just prior to the pgbench client completing and\nterminating, and when I continued in the debugger, there was no\ncrash/coredump.\n\nHowever, I think I've spotted something potentially important to this issue:\nFor me, almost always \"suboverflowed = false\" in the snapshots (except\nin that one case, right at the end of the pgbench run), yet in your\ngdb example \"suboverflowed = true\" in both of the snapshots (i.e. the\nsnapshot subxip array has overflowed). I'm guessing that this may be\nrelated to the coredump issue, but I'm not exactly sure how it has\nhappened, and why it seemingly isn't being handled correctly and\ncauses that Assert to fire in your case.\nCan you try and find out how the snapshot suboverflow is being set in\nyour case? (since you are getting this readily in your examined\nsnapshots???) I think there's only several places where it can be set\nto \"true\" (e.g. procarray.c:1641).\nAlso, does increasing PGPROC_MAX_CACHED_SUBXIDS avoid, or delay, the\nproblem for you? 
It's currently defined as 64.\nI notice that there's been some changes related to snapshot data\nhandling and subxid overflow since 13.2, so I'm wondering whether your\ncoredump issue can be reproduced with the latest code?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 14 May 2021 18:47:26 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi Greg,\n\n \n\n When you get different xmin values between the active snapshot and the transaction snapshot, maybe there is no coredump. \n\n That may be because there is no tuple (xmin) between ActiveSnapshot->xmin and TransactionSnapshot->xmin which needs to be scanned in the parallel process.\n\n \n\n There is no doubt, it is very likely that ActiveSnapshot->xmin precedes TransactionSnapshot->xmin.\n\n \n\n For this coredump, we must make sure there is both a parallel scan and snapshot overflow. If the snapshot is not overflowed, you cannot get the coredump. \n\n That is because the coredump comes from a parallel scan doing MVCC checks when the snapshot is overflowed.\n\n \n\n Did you use pgbench with the script sub_120.sql which I provided in the attachment? \n\n The default PGPROC_MAX_CACHED_SUBXIDS is 64. In the script sub_120.sql, one transaction uses 120 subtransactions, which is much larger than 64.\n\n While getting the snapshot, it must overflow. I really don't know why your snapshot is not overflowed.\n\n Did you increase the number PGPROC_MAX_CACHED_SUBXIDS? Please don't change any code; for now we just use the original code in PG13.2.\n\n I have checked the code in the master branch, and there is no change to this mechanism. This issue should still exist. 
\n\n \n\n \n\n Thanks\n\n Pengcheng\n\n \n\n-----Original Message-----\nFrom: Greg Nancarrow <gregn4422@gmail.com> \nSent: 2021年5月14日 16:47\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\n \n\nOn Fri, May 14, 2021 at 12:25 PM Pengchengliu < <mailto:pengchengliu@tju.edu.cn> pengchengliu@tju.edu.cn> wrote:\n\n> \n\n> Hi Greg,\n\n> \n\n> Thanks for you replay and test.\n\n> \n\n> \n\n> \n\n> When main process gets the transaction snapshot in InitializeParallelDSM->GetTransactionSnapshot, the transaction snapshot xmin is very likely follows active snapshot xmin.\n\n> \n\n> \n\n> \n\n> Use gdb it is easy to verify it.\n\n> \n\n> Create env as blow:\n\n> \n\n> \n\n> \n\n> 1, Use PG13.2(3fb4c75e857adee3da4386e947ba58a75f3e74b7), init a cluster database.\n\n> \n\n> 2, Append the postgres.conf as below:\n\n> \n\n> \n\n> \n\n> max_connections = 2000\n\n> \n\n> parallel_setup_cost=0\n\n> \n\n> parallel_tuple_cost=0\n\n> \n\n> min_parallel_table_scan_size=0\n\n> \n\n> max_parallel_workers_per_gather=8\n\n> \n\n> max_parallel_workers = 128\n\n> \n\n> \n\n> \n\n> 3, Start the cluster database. 
Use the init_test.sql script in attachment to create some test tables.\n\n> \n\n> 4, Use the sub_120.sql script in attachment with pgbench to test it.\n\n> \n\n> \n\n> \n\n> pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 1800\n\n> \n\n> \n\n> \n\n> \n\n> \n\n> \n\n> \n\n> Then you can login the database, and use GDB to verify it.\n\n> \n\n> 1, First use explain, make sure force Parallel is OK.\n\n> \n\n> \n\n> \n\n> postgres=# explain (verbose,analyze) select count(*) from contend1;\n\n> \n\n> \n\n> QUERY PLAN\n\n> \n\n> \n\n> \n\n> ----------------------------------------------------------------------\n\n> ---------------------------------------------------------------\n\n> \n\n> ----------------\n\n> \n\n> Finalize Aggregate (cost=12006.11..12006.12 rows=1 width=8) (actual \n\n> time=1075.214..1075.449 rows=1 loops=1)\n\n> \n\n> Output: count(*)\n\n> \n\n> -> Gather (cost=12006.08..12006.09 rows=8 width=8) (actual \n\n> time=1075.198..1075.433 rows=1 loops=1)\n\n> \n\n> Output: (PARTIAL count(*))\n\n> \n\n> Workers Planned: 8\n\n> \n\n> Workers Launched: 0\n\n> \n\n> -> Partial Aggregate (cost=12006.08..12006.09 rows=1 \n\n> width=8) (actual time=1074.674..1074.676 rows=1 loops=1)\n\n> \n\n> Output: PARTIAL count(*)\n\n> \n\n> -> Parallel Seq Scan on public.contend1 \n\n> (cost=0.00..11690.06 rows=126406 width=0) (actual time=0.008..587.454 \n\n> rows=1\n\n> \n\n> 010200 loops=1)\n\n> \n\n> Output: id, val, c2, c3, c4, c5, c6, c7, c8, c9, \n\n> c10, crt_time\n\n> \n\n> Planning Time: 0.123 ms\n\n> \n\n> Execution Time: 1075.588 ms\n\n> \n\n> postgres=# select pg_backend_pid();\n\n> \n\n> pg_backend_pid\n\n> \n\n> ----------------\n\n> \n\n> 2182678\n\n> \n\n> \n\n> \n\n> \n\n> \n\n> 2, use gdb to debug our backend process. 
Add the breakpoint in parallel.c:219 and continue.\n\n> \n\n> \n\n> \n\n> gdb -q -p 2182678\n\n> \n\n> ...\n\n> \n\n> (gdb) b parallel.c:219\n\n> \n\n> Breakpoint 1 at 0x55d085: file /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/access/transam/parallel.c, line 219.\n\n> \n\n> (gdb) c\n\n> \n\n> Continuing.\n\n> \n\n> \n\n> \n\n> 3, In the psql clinet, we can execute the explain command in step1 again.\n\n> \n\n> After we get the breakpoint in gdb, we wait a moment. Then we execute next.\n\n> \n\n> Use gdb check active_snapshot and transaction_snapshot, active_snapshot->xmin is 158987 and transaction_snapshot->xmin is 162160.\n\n> \n\n> When I use gdb test it, sometimes active_snapshot is the same as transaction_snapshot. Then you can try it multiple times, and before execute next, try wait longer time.\n\n> \n\n> \n\n> \n\n> Breakpoint 1, InitializeParallelDSM (pcxt=0x2d53670)\n\n> \n\n> at \n\n> /home/liupc/build/build_postgres2/../../devel/postgres2/src/backend/ac\n\n> cess/transam/parallel.c:219\n\n> \n\n> 219 Snapshot transaction_snapshot = GetTransactionSnapshot();\n\n> \n\n> (gdb) n\n\n> \n\n> 220 Snapshot active_snapshot = GetActiveSnapshot();\n\n> \n\n> (gdb)\n\n> \n\n> 223 oldcontext = MemoryContextSwitchTo(TopTransactionContext);\n\n> \n\n> (gdb) p *transaction_snapshot\n\n> \n\n> $1 = {snapshot_type = SNAPSHOT_MVCC, xmin = 162160, xmax = 183011, xip \n\n> = 0x2d50d10, xcnt = 179, subxip = 0x148a9cddf010,\n\n> \n\n> subxcnt = 0, suboverflowed = true, takenDuringRecovery = false, \n\n> copied = false, curcid = 0, speculativeToken = 0,\n\n> \n\n> active_count = 0, regd_count = 0, ph_node = {first_child = 0x0, \n\n> next_sibling = 0x0, prev_or_parent = 0x0}, whenTaken = 0, lsn = 0}\n\n> \n\n> (gdb) p *active_snapshot\n\n> \n\n> $2 = {snapshot_type = SNAPSHOT_MVCC, xmin = 158987, xmax = 173138, xip \n\n> = 0x2d53288, xcnt = 178, subxip = 0x0, subxcnt = 0,\n\n> \n\n> suboverflowed = true, takenDuringRecovery = false, copied = true, \n\n> 
curcid = 0, speculativeToken = 0, active_count = 1,\n\n> \n\n> regd_count = 2, ph_node = {first_child = 0x0, next_sibling = 0x0, \n\n> prev_or_parent = 0x2d52e48}, whenTaken = 0, lsn = 0}\n\n> \n\n> (gdb)\n\n> \n\n> \n\n \n\nHi Pengcheng,\n\n \n\nI followed all your steps.\n\nHowever, I perhaps get different behavior in my environment.\n\n99% of the time, the xmin and xmax of the active_snapshot and transaction_snapshot are the same (regardless of how long I wait at different points after the breakpoint is hit). I've had one or two instances where the xmax values differ. I managed to catch just one case where there were different xmin and xmax values in the snapshots, but this occurred just prior to the pgbench client completing and terminating, and when I continued in the debugger, there was no crash/coredump.\n\n \n\nHowever, I think I've spotted something potentially important to this issue:\n\nFor me, almost always \"suboverflowed = false\" in the snapshots (except in that one case, right at the end of the pgbench run), yet in your gdb example \"suboverflowed = true\" in both of the snapshots (i.e. the snapshot subxip array has overflowed). I'm guessing that this may be related to the coredump issue, but I'm not exactly sure how it has happened, and why it seemingly isn't being handled correctly and causes that Assert to fire in your case.\n\nCan you try and find out how the snapshot suboverflow is being set in your case? (since you are getting this readily in your examined\n\nsnapshots???) I think there's only several places where it can be set to \"true\" (e.g. procarray.c:1641).\n\nAlso, does increasing PGPROC_MAX_CACHED_SUBXIDS avoid, or delay, the problem for you? 
It's currently defined as 64.\n\nI notice that there's been some changes related to snapshot data handling and subxid overflow since 13.2, so I'm wondering whether your coredump issue can be reproduced with the latest code?\n\n \n\nRegards,\n\nGreg Nancarrow\n\nFujitsu Australia", "msg_date": "Fri, 14 May 2021 18:35:53 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Fri, May 14, 2021 at 8:36 PM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n> Did you use pgbench with the script sub_120.sql which I provide in attachment?\n\nyes\n\n>\n> Did you increase the number PGPROC_MAX_CACHED_SUBXIDS? Please don't change any codes, now we just use the origin codes in PG13.2.\n>\n\nNo, I have made no source code changes at all.\nThat was my suggestion, for you to try - because if the problem is\navoided by increasing PGPROC_MAX_CACHED_SUBXIDS (to say 128) then it\nprobably indicates the overflow condition is affecting the xmin.xmax\nof the two snapshots such that it invalidates the condition that is\nasserted.\n\n\nI think one problem is that in your settings, you haven't set\n\"max_worker_processes\", yet have set \"max_parallel_workers = 128\".\nI'm finding no more than 8 parallel workers are actually active at any one time.\nOn top of this, you've got pgbench running with 200 concurrent clients.\nSo many queries are actually executing parallel plans without using\nparallel workers, as the workers can't actually be launched (and this\nis probably why I'm finding it hard to reproduce the issue, if the\nproblem involves snapshot suboverflow and parallel workers).\nI find that the following settings improve the parallelism per query\nand the whole test runs very much faster:\n\nmax_connections = 2000\nparallel_setup_cost=0\nparallel_tuple_cost=0\nmin_parallel_table_scan_size=0\nmax_parallel_workers_per_gather=4\nmax_parallel_workers 
= 100\nmax_worker_processes = 128\n\nand adjust the pgbench command-line: pgbench -d postgres -p 33550\n-n -r -f sub_120.sql -c 25 -j 25 -T 1800\n\nProblem is, I still get no coredump when using this.\nCan you try these settings and let me know if the crash still happens\nif you use these settings?\n\nI also tried:\n\nmax_connections = 2000\nparallel_setup_cost=0\nparallel_tuple_cost=0\nmin_parallel_table_scan_size=0\nmax_parallel_workers_per_gather=2\nmax_parallel_workers = 280\nmax_worker_processes = 300\n\nand the pgbench command-line: pgbench -d postgres -p 33550 -n -r\n-f sub_120.sql -c 140 -j 140 -T 1800\n\n- but I still get no coredump.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Sat, 15 May 2021 02:44:18 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi Greg,\n It is really weird. Could you make sure is the SnapShot overflow in you ENV? It is very impoint. \n Abount SnapShot overflow and Subtrans, you can refer this https://www.cybertec-postgresql.com/en/subtransactions-and-performance-in-postgresql/. \n \n In the script sub_120.sql, for one transaction, we use 120 transcations. So this pgxact->overflowed will be set to true. \n Then snapshot must be overflow. When MVCC, it will call SubTransGetTopmostTransaction. \n So the snapshot overflow is requirement.\n \n Even though there is no coredump in you ENV, from the codes, we can find some clue.\n \n First, in main process , ActiveSnapshot xmin is very likely preceds TransactionSnapShot xmin.\n Second, in parallel work process, it sets TransactionXmin with TransactionSnapShot from main process. But table Scan with ative Snapshot from main process.\n So in parallel work process SubTransGetTopmostTransaction, the Assert TransactionIdFollowsOrEquals(xid, TransactionXmin) is not correct. 
\n At least this assert is unsuitable for parallel work process. \n For my analyze, if there is any incorrect, please corret me.\n \n BTW, I test it in a high performance server. It is verly easily be reproduced. My colleague and me use different environment both can reproduce it. \n \n \n Thanks\n Pengcheng\n\n\n\r\n\r\n\nHi Greg,  It is really weird. Could you make sure is the SnapShot overflow in you ENV? It is very impoint.   Abount SnapShot overflow and Subtrans, you can refer this https://www.cybertec-postgresql.com/en/subtransactions-and-performance-in-postgresql/.    In the script sub_120.sql, for one transaction, we use 120 transcations. So this pgxact->overflowed will be set to true.   Then snapshot must be overflow. When MVCC, it will call SubTransGetTopmostTransaction.   So the snapshot overflow is requirement.    Even though there is no coredump in you ENV, from the codes, we can find some clue.   First, in main process , ActiveSnapshot xmin is very likely preceds TransactionSnapShot xmin.  Second, in parallel work process, it sets TransactionXmin with TransactionSnapShot from main process. But table Scan with ative Snapshot from main process.  So in parallel work process SubTransGetTopmostTransaction, the Assert TransactionIdFollowsOrEquals(xid, TransactionXmin) is not correct.   At least this assert is unsuitable for parallel work process.   For my analyze, if there is any incorrect, please corret me.    BTW, I test it in a high performance server. It is verly easily be reproduced. My colleague and me use different environment both can reproduce it.       
Thanks  Pengcheng", "msg_date": "Sat, 15 May 2021 10:37:29 +0800 (GMT+08:00)", "msg_from": "=?UTF-8?B?5YiY6bmP56iL?= <pengchengliu@tju.edu.cn>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6UmU6IFBhcmFsbGVsIHNjYW4gd2l0aCBTdWJUcmFuc0dldFRvcG1vc3RUcmFuc2FjdGlvbiBhc3NlcnQgY29yZWR1bXA=?=" }, { "msg_contents": "Hi Tom & Robert,\n Could you review this Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin)) in SubTransGetTopmostTransaction. \n I think this assert is unsuitable for parallel work process.\n\n Before we discuss it in \n https://www.postgresql-archive.org/Parallel-scan-with-SubTransGetTopmostTransaction-assert-coredump-td6197408.html\n \nThanks\nPengcheng\n\n-----Original Message-----\nFrom: Greg Nancarrow <gregn4422@gmail.com> \nSent: 2021年5月15日 0:44\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\nOn Fri, May 14, 2021 at 8:36 PM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n> Did you use pgbench with the script sub_120.sql which I provide in attachment?\n\nyes\n\n>\n> Did you increase the number PGPROC_MAX_CACHED_SUBXIDS? 
Please don't change any codes, now we just use the origin codes in PG13.2.\n>\n\nNo, I have made no source code changes at all.\nThat was my suggestion, for you to try - because if the problem is avoided by increasing PGPROC_MAX_CACHED_SUBXIDS (to say 128) then it probably indicates the overflow condition is affecting the xmin.xmax of the two snapshots such that it invalidates the condition that is asserted.\n\n\nI think one problem is that in your settings, you haven't set \"max_worker_processes\", yet have set \"max_parallel_workers = 128\".\nI'm finding no more than 8 parallel workers are actually active at any one time.\nOn top of this, you've got pgbench running with 200 concurrent clients.\nSo many queries are actually executing parallel plans without using parallel workers, as the workers can't actually be launched (and this is probably why I'm finding it hard to reproduce the issue, if the problem involves snapshot suboverflow and parallel workers).\nI find that the following settings improve the parallelism per query and the whole test runs very much faster:\n\nmax_connections = 2000\nparallel_setup_cost=0\nparallel_tuple_cost=0\nmin_parallel_table_scan_size=0\nmax_parallel_workers_per_gather=4\nmax_parallel_workers = 100\nmax_worker_processes = 128\n\nand adjust the pgbench command-line: pgbench -d postgres -p 33550\n-n -r -f sub_120.sql -c 25 -j 25 -T 1800\n\nProblem is, I still get no coredump when using this.\nCan you try these settings and let me know if the crash still happens if you use these settings?\n\nI also tried:\n\nmax_connections = 2000\nparallel_setup_cost=0\nparallel_tuple_cost=0\nmin_parallel_table_scan_size=0\nmax_parallel_workers_per_gather=2\nmax_parallel_workers = 280\nmax_worker_processes = 300\n\nand the pgbench command-line: pgbench -d postgres -p 33550 -n -r\n-f sub_120.sql -c 140 -j 140 -T 1800\n\n- but I still get no coredump.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 17 May 2021 19:18:05 +0800", 
"msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Sat, May 15, 2021 at 12:37 PM 刘鹏程 <pengchengliu@tju.edu.cn> wrote:\n>\n>\n> BTW, I test it in a high performance server. It is verly easily be reproduced. My colleague and me use different environment both can reproduce it.\n>\n\nHi Pengcheng,\n\nAlthough the issue won't reproduce easily in my system, I can\ncertainly see how, for the snapshots used in the parallel worker case,\nthe Active snapshot used is potentially an earlier snapshot that the\nTransaction snapshot. I don't know why it is getting a newer\nTransaction snapshot in InitializeParallelDSM(), when it has\npreviously pushed the return value of GetTransactionSnapshot() as the\nActive snapshot.\n\nSo I too hope Tom or Robert can explain what is going on here and how\nto resolve it (as you requested them to, in your other post).\n\nI actually think that the Assert in SubTransGetTopmostTransaction() is\ncorrect, but in the parallel-worker case, the snapshots are not being\nsetup correctly.\n\nCan you try the trivial change below and see if it prevents the coredump?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\ndiff --git a/src/backend/access/transam/parallel.c\nb/src/backend/access/transam/parallel.c\nindex 14a8690019..870889053f 100644\n--- a/src/backend/access/transam/parallel.c\n+++ b/src/backend/access/transam/parallel.c\n@@ -216,7 +216,7 @@ InitializeParallelDSM(ParallelContext *pcxt)\n int i;\n FixedParallelState *fps;\n dsm_handle session_dsm_handle = DSM_HANDLE_INVALID;\n- Snapshot transaction_snapshot = GetTransactionSnapshot();\n+ Snapshot transaction_snapshot = GetActiveSnapshot();\n Snapshot active_snapshot = GetActiveSnapshot();\n\n /* We might be running in a very short-lived memory context. 
*/\n\n\n", "msg_date": "Mon, 17 May 2021 22:58:41 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "Hi Greg,\n > I actually think that the Assert in SubTransGetTopmostTransaction() is correct, but in the parallel-worker case, the snapshots are not being setup correctly.\n \n I agree with you that Assert in SubTransGetTopmostTransaction() is correct. The root cause is Transaction Xmin are not being setup correctly in the parallel-worker.\n \n Actually I am very confused about ActiveSnapshot and TransactionSnapshot. I don't know why main process send ActiveSnapshot and TransactionSnapshot separately. And what is exact difference between them?\n If you know that, could you explain that for me? It will be very appreciated. \n Before we know them exactly, I think we should not modify the TransactionSnapshot to ActiveSnapshot in main process. If it is, the main process should send ActiveSnapshot only. \n\nThanks\nPengcheng\n \n \n\n-----Original Message-----\nFrom: Greg Nancarrow <gregn4422@gmail.com> \nSent: 2021年5月17日 20:59\nTo: 刘鹏程 <pengchengliu@tju.edu.cn>\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\nOn Sat, May 15, 2021 at 12:37 PM 刘鹏程 <pengchengliu@tju.edu.cn> wrote:\n>\n>\n> BTW, I test it in a high performance server. It is verly easily be reproduced. My colleague and me use different environment both can reproduce it.\n>\n\nHi Pengcheng,\n\nAlthough the issue won't reproduce easily in my system, I can certainly see how, for the snapshots used in the parallel worker case, the Active snapshot used is potentially an earlier snapshot that the Transaction snapshot. 
I don't know why it is getting a newer Transaction snapshot in InitializeParallelDSM(), when it has previously pushed the return value of GetTransactionSnapshot() as the Active snapshot.\n\nSo I too hope Tom or Robert can explain what is going on here and how to resolve it (as you requested them to, in your other post).\n\nI actually think that the Assert in SubTransGetTopmostTransaction() is correct, but in the parallel-worker case, the snapshots are not being setup correctly.\n\nCan you try the trivial change below and see if it prevents the coredump?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\ndiff --git a/src/backend/access/transam/parallel.c\nb/src/backend/access/transam/parallel.c\nindex 14a8690019..870889053f 100644\n--- a/src/backend/access/transam/parallel.c\n+++ b/src/backend/access/transam/parallel.c\n@@ -216,7 +216,7 @@ InitializeParallelDSM(ParallelContext *pcxt)\n int i;\n FixedParallelState *fps;\n dsm_handle session_dsm_handle = DSM_HANDLE_INVALID;\n- Snapshot transaction_snapshot = GetTransactionSnapshot();\n+ Snapshot transaction_snapshot = GetActiveSnapshot();\n Snapshot active_snapshot = GetActiveSnapshot();\n\n /* We might be running in a very short-lived memory context. */\n\n\n", "msg_date": "Tue, 18 May 2021 09:27:33 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "I've also seen the reports of the same Assert(TransactionIdFollowsOrEquals(xid,\nTransactionXmin)) with a subsequent crash in a parallel worker in\nPostgreSQL v11-based build, Though I was unable to investigate deeper and\nreproduce the issue. 
The details above in the thread make me think it is a\nreal and long-time-persistent error that is surely worth to be fixed.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nI've also seen the reports of the same Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin)) with a subsequent crash in a parallel worker in PostgreSQL v11-based build, Though I was unable to investigate deeper and reproduce the issue. The details above in the thread make me think it is a real and long-time-persistent error that is surely worth to be fixed.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Tue, 18 May 2021 12:18:15 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "On Tue, May 18, 2021 at 11:27 AM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n>\n> Hi Greg,\n>\n> Actually I am very confused about ActiveSnapshot and TransactionSnapshot. I don't know why main process send ActiveSnapshot and TransactionSnapshot separately. And what is exact difference between them?\n> If you know that, could you explain that for me? 
It will be very appreciated.\n\nIn the context of a parallel-worker, I am a little confused too, so I\ncan't explain it either.\nIt is not really explained in the file\n\"src\\backend\\access\\transam\\README.parallel\", it only mentions the\nfollowing as part of the state that needs to be copied to each worker:\n\n - The transaction snapshot.\n - The active snapshot, which might be different from the transaction snapshot.\n\nSo they might be different, but exactly when and why?\n\nWhen I debugged a typical parallel-SELECT case, I found that prior to\nplan execution, GetTransactionSnapshot() was called and its return\nvalue was stored in both the QueryDesc and the estate (es_snapshot),\nwhich was then pushed on the ActiveSnapshot stack. So by the time\nInitializeParallelDSM() was called, the (top) ActiveSnapshot was the\nlast snapshot returned from GetTransactionSnapshot().\nSo why InitializeParallelDSM() calls GetTransactionSnapshot() again is\nnot clear to me (because isn't then the ActiveSnapshot a potentially\nearlier snapshot? - which it shouldn't be, AFAIK. And also, it's then\ndifferent to the non-parallel case).\n\n> Before we know them exactly, I think we should not modify the TransactionSnapshot to ActiveSnapshot in main process. If it is, the main process should send ActiveSnapshot only.\n\nI think it would be worth you trying my suggested change (if you have\na development environment, which I assume you have). 
Sure, IF it was\ndeemed a proper solution, you'd only send the one snapshot, and adjust\naccordingly in ParallelWorkerMain(), but we need not worry about that\nin order to test it.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 18 May 2021 19:14:37 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "Hi Greg,\n Thanks a lot for you explanation and your fix.\n \n I think your fix can resolve the core dump issue. As with your fix, parallel process reset Transaction Xmin from ActiveSnapshot. \n But it will change Transaction snapshot for all parallel scenarios. I don't know whether it bring in other issue. \n For test only, I think it is enough. \n\n So is there anybody can explain what's exactly difference between ActiveSnapshot and TransactionSnapshot in parallel work process. \n Then maybe we can find a better solution and try to fix it really.\n\nThanks\nPengcheng \n\n-----Original Message-----\nFrom: Greg Nancarrow <gregn4422@gmail.com> \nSent: 2021年5月18日 17:15\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n\nOn Tue, May 18, 2021 at 11:27 AM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n>\n> Hi Greg,\n>\n> Actually I am very confused about ActiveSnapshot and TransactionSnapshot. I don't know why main process send ActiveSnapshot and TransactionSnapshot separately. And what is exact difference between them?\n> If you know that, could you explain that for me? 
It will be very appreciated.\n\nIn the context of a parallel-worker, I am a little confused too, so I can't explain it either.\nIt is not really explained in the file\n\"src\\backend\\access\\transam\\README.parallel\", it only mentions the following as part of the state that needs to be copied to each worker:\n\n - The transaction snapshot.\n - The active snapshot, which might be different from the transaction snapshot.\n\nSo they might be different, but exactly when and why?\n\nWhen I debugged a typical parallel-SELECT case, I found that prior to plan execution, GetTransactionSnapshot() was called and its return value was stored in both the QueryDesc and the estate (es_snapshot), which was then pushed on the ActiveSnapshot stack. So by the time\nInitializeParallelDSM() was called, the (top) ActiveSnapshot was the last snapshot returned from GetTransactionSnapshot().\nSo why InitializeParallelDSM() calls GetTransactionSnapshot() again is not clear to me (because isn't then the ActiveSnapshot a potentially earlier snapshot? - which it shouldn't be, AFAIK. And also, it's then different to the non-parallel case).\n\n> Before we know them exactly, I think we should not modify the TransactionSnapshot to ActiveSnapshot in main process. If it is, the main process should send ActiveSnapshot only.\n\nI think it would be worth you trying my suggested change (if you have a development environment, which I assume you have). 
Sure, IF it was deemed a proper solution, you'd only send the one snapshot, and adjust accordingly in ParallelWorkerMain(), but we need not worry about that in order to test it.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 20 May 2021 09:18:46 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "On Thu, May 20, 2021 at 11:18 AM Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n>\n> Hi Greg,\n> Thanks a lot for you explanation and your fix.\n>\n> I think your fix can resolve the core dump issue. As with your fix, parallel process reset Transaction Xmin from ActiveSnapshot.\n> But it will change Transaction snapshot for all parallel scenarios. I don't know whether it bring in other issue.\n> For test only, I think it is enough.\n>\n> So is there anybody can explain what's exactly difference between ActiveSnapshot and TransactionSnapshot in parallel work process.\n> Then maybe we can find a better solution and try to fix it really.\n>\n\nI am proposing the attached patch to fix this issue (patches for both\nPG13.2 and latest PG14 source are provided).\n\nPerhaps this will trigger others who better know the intricacies of\nsnapshot handling, and know the reasons and history behind why the\ntransaction_snapshot and active_snapshot are currently passed\nseparately to parallel workers, to speak up here and discuss the issue\nfurther, and check my fix (and note, I haven't attempted to modify\nREADME.parallel in the patch until I get further clarification on this\nissue).\nPerhaps someone can explain the purpose of calling\nGetTransactionSnapshot() AGAIN in InitializeParallelDSM() and how is\nthis consistent with the current ActiveSnapshot?\nAFAICS, that doesn't seem correct, and that's why the patch removes it.\n\nI've rebuilt Postgres with the patch applied and run the regression\ntests, with and 
without \"force_parallel_mode=regress\", and all tests\npass.\nSo if the fix is wrong, no tests currently exist that detect issues with it.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Thu, 20 May 2021 16:08:36 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "On Thu, May 20, 2021 at 4:08 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n\nKeep cfbot happy, use the PG14 patch as latest.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Mon, 24 May 2021 12:04:37 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "On Mon, May 24, 2021 at 12:04:37PM +1000, Greg Nancarrow wrote:\n> Keep cfbot happy, use the PG14 patch as latest.\n\nThis stuff is usually very tricky. Do we have a way to reliably\nreproduce the report discussed here?\n--\nMichael", "msg_date": "Mon, 24 May 2021 13:50:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "On Mon, May 24, 2021 at 2:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 24, 2021 at 12:04:37PM +1000, Greg Nancarrow wrote:\n> > Keep cfbot happy, use the PG14 patch as latest.\n>\n> This stuff is usually very tricky.\n\nAgreed. 
That's why I was looking for experts in this snapshot-handling\ncode, to look closer at this issue, check my proposed fix, come up\nwith a better solution etc.\n\n>Do we have a way to reliably\n> reproduce the report discussed here?\n\nI couldn't reproduce it in my environment (though I could understand\nwhat was going wrong, based on the description provided).\nhouzj (houzj.fnst@fujitsu.com) was able to reproduce it in his\nenvironment and kindly provided to me the following information:\n(He said that he followed most of the steps described by the original\nproblem reporter, Pengcheng, but perhaps steps 2 and 7 are a little\ndifferent from his steps. See the emails higher in the thread for the\ntwo scripts \"init_test.sql\" and \"sub_120.sql\")\n\n===\n\n1, Modify and adjust NUM_SUBTRANS_BUFFERS to 128 from 32 in the file\n\"src/include/access/subtrans.h\" line number 15.\n2, configure with enable assert and build it.( ./configure\n--enable-cassert --prefix=/home/pgsql)\n3, init a new database cluster.\n4, modify postgres.conf and add some parameters as below. As the\ncoredump from parallel scan, so we adjust parallel setting, make it\neasy to reproduce.\n\n max_connections = 2000\n\n parallel_setup_cost=0\n parallel_tuple_cost=0\n min_parallel_table_scan_size=0\n max_parallel_workers_per_gather=8\n max_parallel_workers = 32\n\n5, start the database cluster.\n6, use the script init_test.sql in attachment to create tables.\n7, use pgbench with script sub_120.sql in attachment to test it. 
Try\nit sometimes, you should get the coredump file.\n pgbench -d postgres -p 33550 -n -r -f sub_120.sql -c 200 -j 200 -T 12000\n (If cannot reproduce it, maybe you can try run two parallel pgbench\nxx at the same time)\n\nIn my environment(CentOS 8.2, 128G RAM, 40 processors, disk SAS\nIntel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz),\nsometimes I can reproduce in about 5 minutes , but sometimes it needs\nabout half an hour.\n\nBest regards,\nhouzj\n\n===\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 24 May 2021 15:21:44 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "пн, 24 мая 2021 г. в 09:22, Greg Nancarrow <gregn4422@gmail.com>:\n\n> On Mon, May 24, 2021 at 2:50 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > On Mon, May 24, 2021 at 12:04:37PM +1000, Greg Nancarrow wrote:\n> > > Keep cfbot happy, use the PG14 patch as latest.\n> >\n> > This stuff is usually very tricky.\n>\n> Agreed. That's why I was looking for experts in this snapshot-handling\n> code, to look closer at this issue, check my proposed fix, come up\n> with a better solution etc.\n>\n> >Do we have a way to reliably\n> > reproduce the report discussed here?\n>\n\nUsing a recipe similar to what has been described above in the thread, I\nreliably reproduced the bug in many Postgres versions. (v.11, v.13 etc.).\n1. Make & make install\n2. Make check\n3. run SubTransGetTopmostTransaction-rep.sh in the Postgres source code\ndir.\n\nThe test fails with coredumps in around 10 minutes. 
With applied fix has\nnever failed yet.\n(Though transaction snapshots kitchen is indeed tricky and I am not 100%\nsure the fix does right thing which is safe in all circumstances)", "msg_date": "Mon, 24 May 2021 17:55:51 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "On Mon, May 24, 2021 at 11:56 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Using a recipe similar to what has been described above in the thread, I reliably reproduced the bug in many Postgres versions. (v.11, v.13 etc.).\n> 1. Make & make install\n> 2. Make check\n> 3. run SubTransGetTopmostTransaction-rep.sh in the Postgres source code dir.\n>\n\nThanks, using the latest PG14 code\n(713a431c781fbfe1a22fae4991836077f0f4c513), with no modifications, and\nbuilding with \"--enable-cassert\", your script reproduced the coredump\non my system in about 10 minutes, on the 1st run.\n\n#0 0x00007f1542d1137f in raise () from /lib64/libc.so.6\n#1 0x00007f1542cfbdb5 in abort () from /lib64/libc.so.6\n#2 0x00000000009380cb in ExceptionalCondition ()\n#3 0x00000000005aadbc in SubTransGetTopmostTransaction ()\n#4 0x00000000009741ef in XidInMVCCSnapshot ()\n#5 0x0000000000563326 in HeapTupleSatisfiesVisibility ()\n#6 0x0000000000559d50 in heapgetpage ()\n#7 0x000000000055a0e9 in heapgettup_pagemode ()\n#8 0x000000000055b98e in heap_getnextslot ()\n#9 0x00000000006df713 in SeqNext ()\n#10 0x00000000006b725f in ExecScan ()\n#11 0x00000000006bef41 in fetch_input_tuple ()\n#12 0x00000000006c1f6f in ExecAgg ()\n#13 0x00000000006ade83 in standard_ExecutorRun ()\n#14 0x00000000006b21ed in ParallelQueryMain ()\n#15 0x00000000005a8c11 in ParallelWorkerMain ()\n#16 0x0000000000795f3c in StartBackgroundWorker ()\n#17 0x00000000007a1bc1 in maybe_start_bgworkers ()\n#18 0x00000000007a25c5 in sigusr1_handler ()\n#19 <signal handler called>\n#20 0x00007f1542dce25b in 
select () from /lib64/libc.so.6\n#21 0x00000000007a2cf6 in ServerLoop ()\n#22 0x00000000007a4659 in PostmasterMain ()\n#23 0x00000000005042ed in main ()\n\n(Using a debug-build, it seems much harder to reproduce on my system)\n\nWith my patch applied, I've had no crashes, from many runs.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 25 May 2021 14:12:30 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Parallel scan with SubTransGetTopmostTransaction assert\n coredump" }, { "msg_contents": "Just a note here. After examining the core dump I did notice something.\n\nWhile in XidInMVCCSnapshot call the snapshot->suboverflowed is set true \nalthough subxip == NULL and subxcnt == 0. As far as I understand, \nsnapshot->suboverflowed is set true in the GetRunningTransactionData \ncall.\n\nAnd then I decided to put elog around CurrentRunningXacts->subxcnt's \nassigment.\ndiff --git a/src/backend/storage/ipc/procarray.c \nb/src/backend/storage/ipc/procarray.c\nindex 42a89fc5dc9..3d2db02f580 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -2781,6 +2781,9 @@ GetRunningTransactionData(void)\n         * increases if slots do.\n         */\n\n+       if (suboverflowed)\n+               elog(WARNING, \" >>> CurrentRunningXacts->subxid_overflow \nis true\");\n+\n        CurrentRunningXacts->xcnt = count - subcount;\n        CurrentRunningXacts->subxcnt = subcount;\n        CurrentRunningXacts->subxid_overflow = suboverflowed;\n\n... and did get a bunch of messages. I.e. subxid_overflow is set true \nvery often.\n\nI've increased the value of PGPROC_MAX_CACHED_SUBXIDS. 
Once it becomes \nmore than 120 there are no messages and no failed assertions are \ntriggered any more.\n\n---\nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 04 Jun 2021 15:30:15 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "The analysis in the beginning of the discussion seems to be right, but \nthe v2 fix looks too invasive to me.\n\nPersonally, I'd like not to remove the snapshot even if the transaction is \nread-only. I propose to consider \"xid < TransactionXmin\" as a legit case \nand just promote xid to TransactionXmin.\n\nIt's annoying that this old bug is still not fixed. What do you think?\n---\nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 22 Jun 2021 18:06:49 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Jun 23, 2021 at 1:06 AM Maxim Orlov <m.orlov@postgrespro.ru> wrote:\n>\n> The analysis in the beginning of the discussion seems to be right, but\n> the v2 fix looks too invasive to me.\n>\n> Personally, I'd like not to remove the snapshot even if the transaction is\n> read-only. I propose to consider \"xid < TransactionXmin\" as a legit case\n> and just promote xid to TransactionXmin.\n>\n> It's annoying that this old bug is still not fixed. 
What do you think?\n\n\nThis v3 patch doesn't look right to me at all.\nIt's not addressing the fundamental problem, it just seems to be\nworking around it, by fiddling a xid value to avoid an assertion\nfailure.\n\nI really can't see how the v2 patch \"removes a snapshot\" in the\nread-only transaction case, and is \"invasive\".\nHave you found a case that the v2 patch breaks?\n\nThe v2 patch does follow the analysis in the beginning of the\ndiscussion, which identified that in setting up parallel workers, a\n\"later transaction snapshot\" was taken than the one actually used in\nthe statement execution, and that's what eventually leads to the\nobserved Assertion failure.\nThe original problem reporter stated: \"So the root cause is the\nParallel Workers process set the TransactionXmin with later\ntranscation snapshot\".\nAlso, as far as I can see, it makes no sense to pass parallel workers\nboth an active snapshot and a (later) transaction snapshot. In the\nleader, prior to execution and running parallel workers, a transaction\nsnapshot is obtained and pushed as the active snapshot (note: only ONE\nsnapshot at this point). It makes little sense to me to then obtain\nANOTHER transaction snapshot when setting up parallel workers, and\npass that (only) to the workers along with the original (earlier)\nactive snapshot. (If there is a reason for passing the TWO snapshots\nto parallel workers, original code authors and/or snapshot experts\nshould speak up now, and code comments should be added accordingly to\nexplain the purpose and how it is MEANT to work)\nThis is why the v2 patch updates the code to just pass one snapshot\n(the active snapshot) to the parallel workers, which gets restored in\neach worker as the transaction snapshot and set as the active snapshot\n(so is then consistent with the snapshot setup in the parallel\nleader).\nThe v3 patch doesn't address any of this at all. 
It is just checking\nif the xid is < the TransactionXMin in the snapshot, and if so,\nsetting the xid to be TransactionXMin, and thus avoiding the Assertion\nfailure. But that TransactionXMin was from the \"later transaction\nsnapshot\", which was only obtained for the workers and doesn't\nnecessarily match the earlier active snapshot taken, that is\nassociated with the actual statement execution. As far as I am\nconcerned, that \"later transaction snapshot\", taken for use by the\nworkers, doesn't seem right, so that's why the v2 patch removes the\nuse of it and only uses the active snapshot, for consistency with the\nrest of the code. I think that if there was something fundamentally\nwrong with the v2 patch's removal of that \"later transaction snapshot\"\nused by the parallel workers, then surely there would be a test\nfailure somewhere - but that is not the case.\nI think the v2 patch should be restored as the proposed solution here.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 23 Jun 2021 12:11:06 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": ">\n> Have you found a case that the v2 patch breaks?\n>\n> The v2 patch does follow the analysis in the beginning of the\n> discussion, which identified that in setting up parallel workers, a\n> \"later transaction snapshot\" was taken than the one actually used in\n> the statement execution, and that's what eventually leads to the\n> observed Assertion failure.\n> The original problem reporter stated: \"So the root cause is the\n> Parallel Workers process set the TransactionXmin with later\n> transcation snapshot\".\n> Also, as far as I can see, it makes no sense to pass parallel workers\n> both an active snapshot and a (later) transaction snapshot. 
In the\n> leader, prior to execution and running parallel workers, a transaction\n> snapshot is obtained and pushed as the active snapshot (note: only ONE\n> snapshot at this point). It makes little sense to me to then obtain\n> ANOTHER transaction snapshot when setting up parallel workers, and\n> pass that (only) to the workers along with the original (earlier)\n> active snapshot. (If there is a reason for passing the TWO snapshots\n> to parallel workers, original code authors and/or snapshot experts\n> should speak up now, and code comments should be added accordingly to\n> explain the purpose and how it is MEANT to work)\n> This is why the v2 patch updates the code to just pass one snapshot\n> (the active snapshot) to the parallel workers, which gets restored in\n> each worker as the transaction snapshot and set as the active snapshot\n> (so is then consistent with the snapshot setup in the parallel\n> leader).\n>\n\nI fully agree that _probably_ two snapshots are not needed for read-only\ntransactions. But I think we need to be very careful with everything\nrelated to transaction integrity as there are different isolation levels\netc. 
The only fact I can not break something, seems like not an argument\nsolid enough. \n\nRobert could you give us some help and insight into why there is a need for\nthe separate active and transaction snapshots in the parallel\ninfrastructure patch as per your commit 924bcf4f16d ?\n\nUntil then just a workaround like v3 patch seems safer for me (still I am\nnot a very big specialist in this).\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 24 Jun 2021 15:57:28 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "By the way in the initial discussion on parallel infrastructure\nhttps://www.postgresql.org/message-id/20150208002027.GH9201%40alap3.anarazel.de\nI've seen a proposal to set the worker's PGXACT->xmin\nto be the minimum of the top transaction id and the snapshots.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 24 Jun 2021 17:09:09 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Hi,\n\nI took a quick look on this - I'm no expert in the details of snapshots,\nso take my comments with a grain of salt.\n\nAFAICS both Greg Nancarrow and Pavel Borisov are kinda right. I think\nGreg is right the v3 patch does not seem like the right (or correct)\nsolution, for a couple reasons:\n\n\n1) It fixes the symptoms, not the cause. If we set TransactionXmin to a\nbogus value, this only fixes it locally in SubTransGetTopmostTransaction\nbut I'd be surprised if other places did not have the same issue.\n\n\n2) The xid/TransactionXmin comparison in the v2 fix:\n\n xid = xid > TransactionXmin ? 
xid : TransactionXmin;\n\nseems broken, because it compares the XIDs directly, but that's not\ngoing to work after generating enough transactions.\n\n\n3) But even if this uses TransactionIdFollowsOrEquals, it seems very\nstrange because the \"xid\" comes from\n\n XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot))\n\ni.e. it's the raw xmin from the tuple, so why should we be setting it to\nTransactionXmin? That seems pretty strange, TBH.\n\n\nSo yeah, I think this is due to confusion with two snapshots and failing\nto consider both of them when calculating TransactionXmin.\n\nBut I think removing one of the snapshots (as the v2 does it) is rather\nstrange too. I very much doubt having both the transaction and active\nsnapshots in the parallel worker is not intentional, and Pavel may very\nwell be right this breaks isolation levels that use the xact snapshot\n(i.e. REPEATABLE READ and SERIALIZABLE). I haven't checked, though.\n\nSo I think we need to keep both snapshots, but fix TransactionXmin to\nconsider both of them - I suppose ParallelWorkerMain could simply look\nat the two snapshots, and use the minimum. Which is what [1] (already\nlinked by Pavel) talks about, although that's related to concerns about\none of the processes dying, which is not what's happening here.\n\n\nI'm wondering what consequences this may have on production systems,\nthough. We've only seen this failing because of the assert, so what\nhappens when the build has asserts disabled?\n\nLooking at SubTransGetTopmostTransaction, it seems the while loop ends\nimmediately (because it's pretty much the opposite of the assert), so we\njust return the \"xid\" as topmost XID. 
But will that produce incorrect\nquery results, or what?\n\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/20150208002027.GH9201%40alap3.anarazel.de\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 9 Jul 2021 19:36:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On 2021-07-09 20:36, Tomas Vondra wrote:\n> Hi,\n> \n> I took a quick look on this - I'm no expert in the details of \n> snapshots,\n> so take my comments with a grain of salt.\n> \n> AFAICS both Greg Nancarrow and Pavel Borisov are kinda right. I think\n> Greg is right the v3 patch does not seem like the right (or correct)\n> solution, for a couple reasons:\n> \n> \n> 1) It fixes the symptoms, not the cause. If we set TransactionXmin to a\n> bogus value, this only fixes it locally in \n> SubTransGetTopmostTransaction\n> but I'd be surprised if other places did not have the same issue.\n> \n> \n> 2) The xid/TransactionXmin comparison in the v2 fix:\n> \n> xid = xid > TransactionXmin ? xid : TransactionXmin;\n> \n> seems broken, because it compares the XIDs directly, but that's not\n> going to work after generating enough transactions.\n> \n> \n> 3) But even if this uses TransactionIdFollowsOrEquals, it seems very\n> strange because the \"xid\" comes from\n> \n> XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot))\n> \n> i.e. it's the raw xmin from the tuple, so why should we be setting it \n> to\n> TransactionXmin? That seems pretty strange, TBH.\n> \n> \n> So yeah, I think this is due to confusion with two snapshots and \n> failing\n> to consider both of them when calculating TransactionXmin.\n> \n> But I think removing one of the snapshots (as the v2 does it) is rather\n> strange too. 
I very much doubt having both the transaction and active\n> snapshots in the parallel worker is not intentional, and Pavel may very\n> well be right this breaks isolation levels that use the xact snapshot\n> (i.e. REPEATABLE READ and SERIALIZABLE). I haven't checked, though.\n> \n> So I think we need to keep both snapshots, but fix TransactionXmin to\n> consider both of them - I suppose ParallelWorkerMain could simply look\n> at the two snapshots, and use the minimum. Which is what [1] (already\n> linked by Pavel) talks about, although that's related to concerns about\n> one of the processes dying, which is not what's happening here.\n> \n> \n> I'm wondering what consequences this may have on production systems,\n> though. We've only seen this failing because of the assert, so what\n> happens when the build has asserts disabled?\n> \n> Looking at SubTransGetTopmostTransaction, it seems the while loop ends\n> immediately (because it's pretty much the opposite of the assert), so \n> we\n> just return the \"xid\" as topmost XID. But will that produce incorrect\n> query results, or what?\n> \n> \n> \n> regards\n> \n> \n> [1]\n> https://www.postgresql.org/message-id/20150208002027.GH9201%40alap3.anarazel.de\n\nPFA v4 patch based on the ideas above.\n\nIn principle I see little difference with v3. But I agree it is more \ncorrect.\n\nI did test this patch on Linux and MacOS using testing algo above and \ngot no error. 
On master branch before the patch I still see this error.\n\n---\nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 13 Jul 2021 13:10:34 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": ">\n> > So yeah, I think this is due to confusion with two snapshots and\n> > failing\n> > to consider both of them when calculating TransactionXmin.\n> >\n> > But I think removing one of the snapshots (as the v2 does it) is rather\n> > strange too. I very much doubt having both the transaction and active\n> > snapshots in the parallel worker is not intentional, and Pavel may very\n> > well be right this breaks isolation levels that use the xact snapshot\n> > (i.e. REPEATABLE READ and SERIALIZABLE). I haven't checked, though.\n> >\n> > So I think we need to keep both snapshots, but fix TransactionXmin to\n> > consider both of them - I suppose ParallelWorkerMain could simply look\n> > at the two snapshots, and use the minimum. Which is what [1] (already\n> > linked by Pavel) talks about, although that's related to concerns about\n> > one of the processes dying, which is not what's happening here.\n> >\n> >\n> > I'm wondering what consequences this may have on production systems,\n> > though. We've only seen this failing because of the assert, so what\n> > happens when the build has asserts disabled?\n\n> Looking at SubTransGetTopmostTransaction, it seems the while loop ends\n> > immediately (because it's pretty much the opposite of the assert), so\n> > we\n> > just return the \"xid\" as topmost XID. But will that produce incorrect\n> > query results, or what?\n>\nI haven't seen anything incorrect on production systems with asserts turned\noff, by I should note this assertion is not immediately reproduced so it is\nnot that easy to catch the possible logical inconsequences of parallel scan\nresults. 
As I've noticed that subxids cache is used in most cases and even\nthis assertion is also rare in parallel scans. Maybe that is why we lived\nwith this bug unnoticed so long.\n\n/www.postgresql.org/message-id/20150208002027.GH9201%40alap3.anarazel.de\n> <https://www.postgresql.org/message-id/20150208002027.GH9201%40alap3.anarazel.de>\n>\n> PFA v4 patch based on the ideas above.\n>\n> In principle I see little difference with v3. But I agree it is more\n> correct.\n>\n> I did test this patch on Linux and MacOS using testing algo above and\n> got no error. On master branch before the patch I still see this error.\n>\n\nI've tested the patch and the error doesn't appear anymore.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 13 Jul 2021 14:23:48 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Sat, Jul 10, 2021 at 3:36 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> But I think removing one of the snapshots (as the v2 does it) is rather\n> strange too. I very much doubt having both the transaction and active\n> snapshots in the parallel worker is not intentional, and Pavel may very\n> well be right this breaks isolation levels that use the xact snapshot\n> (i.e. REPEATABLE READ and SERIALIZABLE). 
I haven't checked, though.\n>\n\nUnfortunately there is currently no test, code-comment, README or\ndeveloper-discussion that definitively determines which approach (v2\nvs v3/v4) is a valid fix for this issue.\nWe don't know if having both the transaction and active snapshots in a\nparallel worker is intentional or not, and if so, why so?\n(certainly in the non-parallel case of the same statement execution,\nthere is only one snapshot in question here - the obtained transaction\nsnapshot is pushed as the active snapshot, as it is done in 95% of\ncases in the code)\nIt seems that only the original code authors know how the snapshot\nhandling in parallel-workers is MEANT to work, and they have yet to\nspeak up about it here.\nAt this point, we can only all agree that there is a problem to be fixed here.\n\nMy concern with the v3/v4 patch approach is that because the\nparallel-workers use a later snapshot to what is actually used in the\nexecution context for the statement in the parallel leader, then it is\npossible for the parallel leader and parallel workers to have\ndifferent transaction visibility, and surely this cannot be correct.\nFor example, suppose a transaction that deletes a row, completes in\nthe window between these two snapshots.\nCouldn't the row be visible to the parallel workers but not to the\nparallel leader?\nMy guess is that currently there are not enough\nconcurrent-transactions tests to expose such a problem, and the window\nhere is fairly small.\n\nSo we can fiddle xmin values to avoid the immediate Assert issue here,\nbut it's not addressing potential xmax-related issues.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 15 Jul 2021 12:34:47 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Jul 14, 2021 at 10:34 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> 
Unfortunately there is currently no test, code-comment, README or\n> developer-discussion that definitively determines which approach (v2\n> vs v3/v4) is a valid fix for this issue.\n> We don't know if having both the transaction and active snapshots in a\n> parallel worker is intentional or not, and if so, why so?\n> (certainly in the non-parallel case of the same statement execution,\n> there is only one snapshot in question here - the obtained transaction\n> snapshot is pushed as the active snapshot, as it is done in 95% of\n> cases in the code)\n> It seems that only the original code authors know how the snapshot\n> handling in parallel-workers is MEANT to work, and they have yet to\n> speak up about it here.\n> At this point, we can only all agree that there is a problem to be fixed here.\n\nHi.\n\nThanks to Thomas Munro for drawing my attention to this thread. I\nwasn't intentionally ignoring it, but there's a lot of email in the\nworld and only so much time.\n\nWhen I wrote this code originally, the idea that I had in mind was\nsimply this: whatever state we have in the leader ought to be\nreproduced in each worker. So if there's an active snapshot in the\nleader, we ought to make that active in all the workers, and if\nthere's a transaction snapshot in the leader, we ought to make that\nthe transaction snapshot in all of the workers.\n\nBut I see now that my thinking was fuzzy, and I'm going to blame that\non the name GetTransactionSnapshot() being slightly misleading. If\nIsolationUsesXactSnapshot() is true, then there's really such a thing\nas a transaction snapshot and reproducing that in the worker is a\nsensible thing to do. But when !IsolationUsesXactSnapshot(),\nGetTransactionSnapshot() doesn't just \"get the transaction snapshot\",\nbecause there isn't any such thing. It takes a whole new snapshot, on\nthe theory that you wouldn't be calling this function unless you had\nfinished up with the snapshot you got the last time you called this\nfunction. 
And in the case of initiating parallel query, that is the\nwrong thing.\n\nI think that, at least in the case where IsolationUsesXactSnapshot()\nis true, we need to make sure that calling GetTransactionSnapshot() in\na worker produces the same result that it would have produced in the\nleader. Say one of the workers calls an sql or plpgsql function and\nthat function runs a bunch of SQL statements. It seems to me that\nthere's probably a way for this to result in calls inside the worker\nto GetTransactionSnapshot(), and if that doesn't return the same\nsnapshot as in the leader, then we've broken MVCC.\n\nWhat about when IsolationUsesXactSnapshot() is false? Perhaps it's OK\nto just skip this altogether in that case. Certainly what we're doing\ncan't be right, because copying a snapshot that wouldn't have been\ntaken without parallel query can't ever be the right thing to do.\nPerhaps we need to copy something else instead. I'm not really sure.\n\nSo I think v2 is probably on the right track, but wrong when the\ntransaction isolation level is REPEATABLE READ or SERIALIZABLE, and v3\nand v4 just seem like unprincipled hacks that try to avoid the\nassertion failure by lying about whether there's a problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Jul 2021 12:15:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Thu, Jul 22, 2021 at 2:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Thanks to Thomas Munro for drawing my attention to this thread. I\n> wasn't intentionally ignoring it, but there's a lot of email in the\n> world and only so much time.\n>\n> When I wrote this code originally, the idea that I had in mind was\n> simply this: whatever state we have in the leader ought to be\n> reproduced in each worker. 
So if there's an active snapshot in the\n> leader, we ought to make that active in all the workers, and if\n> there's a transaction snapshot in the leader, we ought to make that\n> the transaction snapshot in all of the workers.\n>\n> But I see now that my thinking was fuzzy, and I'm going to blame that\n> on the name GetTransactionSnapshot() being slightly misleading. If\n> IsolationUsesXactSnapshot() is true, then there's really such a thing\n> as a transaction snapshot and reproducing that in the worker is a\n> sensible thing to do. But when !IsolationUsesXactSnapshot(),\n> GetTransactionSnapshot() doesn't just \"get the transaction snapshot\",\n> because there isn't any such thing. It takes a whole new snapshot, on\n> the theory that you wouldn't be calling this function unless you had\n> finished up with the snapshot you got the last time you called this\n> function. And in the case of initiating parallel query, that is the\n> wrong thing.\n>\n> I think that, at least in the case where IsolationUsesXactSnapshot()\n> is true, we need to make sure that calling GetTransactionSnapshot() in\n> a worker produces the same result that it would have produced in the\n> leader. Say one of the workers calls an sql or plpgsql function and\n> that function runs a bunch of SQL statements. It seems to me that\n> there's probably a way for this to result in calls inside the worker\n> to GetTransactionSnapshot(), and if that doesn't return the same\n> snapshot as in the leader, then we've broken MVCC.\n>\n> What about when IsolationUsesXactSnapshot() is false? Perhaps it's OK\n> to just skip this altogether in that case. Certainly what we're doing\n> can't be right, because copying a snapshot that wouldn't have been\n> taken without parallel query can't ever be the right thing to do.\n> Perhaps we need to copy something else instead. 
I'm not really sure.\n>\n> So I think v2 is probably on the right track, but wrong when the\n> transaction isolation level is REPEATABLE READ or SERIALIZABLE, and v3\n> and v4 just seem like unprincipled hacks that try to avoid the\n> assertion failure by lying about whether there's a problem.\n>\n\nMany thanks for taking time to respond to this (and thanks to Thomas Munro too).\nIt's much appreciated, as this is a complex area.\nFor the time being, I'll reinstate the v2 patch (say as v5) as a\npartial solution, and then work on addressing the REPEATABLE READ and\nSERIALIZABLE transaction isolation levels, which you point out are not\nhandled correctly by the patch.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Fri, 23 Jul 2021 16:00:14 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Fri, 23 Jul 2021 at 10:00, Greg Nancarrow <gregn4422@gmail.com>:\n\n> On Thu, Jul 22, 2021 at 2:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > Thanks to Thomas Munro for drawing my attention to this thread. I\n> > wasn't intentionally ignoring it, but there's a lot of email in the\n> > world and only so much time.\n> >\n> > When I wrote this code originally, the idea that I had in mind was\n> > simply this: whatever state we have in the leader ought to be\n> > reproduced in each worker. So if there's an active snapshot in the\n> > leader, we ought to make that active in all the workers, and if\n> > there's a transaction snapshot in the leader, we ought to make that\n> > the transaction snapshot in all of the workers.\n> >\n> > But I see now that my thinking was fuzzy, and I'm going to blame that\n> > on the name GetTransactionSnapshot() being slightly misleading. 
If\n> > IsolationUsesXactSnapshot() is true, then there's really such a thing\n> > as a transaction snapshot and reproducing that in the worker is a\n> > sensible thing to do. But when !IsolationUsesXactSnapshot(),\n> > GetTransactionSnapshot() doesn't just \"get the transaction snapshot\",\n> > because there isn't any such thing. It takes a whole new snapshot, on\n> > the theory that you wouldn't be calling this function unless you had\n> > finished up with the snapshot you got the last time you called this\n> > function. And in the case of initiating parallel query, that is the\n> > wrong thing.\n> >\n> > I think that, at least in the case where IsolationUsesXactSnapshot()\n> > is true, we need to make sure that calling GetTransactionSnapshot() in\n> > a worker produces the same result that it would have produced in the\n> > leader. Say one of the workers calls an sql or plpgsql function and\n> > that function runs a bunch of SQL statements. It seems to me that\n> > there's probably a way for this to result in calls inside the worker\n> > to GetTransactionSnapshot(), and if that doesn't return the same\n> > snapshot as in the leader, then we've broken MVCC.\n> >\n> > What about when IsolationUsesXactSnapshot() is false? Perhaps it's OK\n> > to just skip this altogether in that case. Certainly what we're doing\n> > can't be right, because copying a snapshot that wouldn't have been\n> > taken without parallel query can't ever be the right thing to do.\n> > Perhaps we need to copy something else instead. 
I'm not really sure.\n> >\n> > So I think v2 is probably on the right track, but wrong when the\n> > transaction isolation level is REPEATABLE READ or SERIALIZABLE, and v3\n> > and v4 just seem like unprincipled hacks that try to avoid the\n> > assertion failure by lying about whether there's a problem.\n> >\n>\n> Many thanks for taking time to respond to this (and thanks to Thomas Munro\n> too).\n> It's much appreciated, as this is a complex area.\n> For the time being, I'll reinstate the v2 patch (say as v5) as a\n> partial solution, and then work on addressing the REPEATABLE READ and\n> SERIALIZABLE transaction isolation levels, which you point out are not\n> handled correctly by the patch.\n>\nI've looked at the v5 patch. It is completely similar to the v2 patch, which I've\nalready tested using the workflow I've described in the comments above.\nBefore the patch I get the errors quite soon, the patch corrects them.\nFurthermore I've tested the same patch under REPEATABLE READ and\nSERIALIZABLE and detected no flaws. So, now, when we've got Robert's\nexplanation, it seems to me that the v2 (aka v5) patch can be committed\n(leaving possible REPEATABLE READ and SERIALIZABLE improvements for the\nfuture). I'm not really sure it is still possible in the 07/21 CF, but I'd\nchange the status of the patch to ready-for-committer. Also I'd like the\nbugfix to be backported to the previous PG versions.\n\nFurther consideration on the patch and on the backporting is very much\nwelcome!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 3 Aug 2021 18:00:05 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Tue, Aug 3, 2021 at 9:59 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:
> I've looked at the v5 patch. It is completely similar to the v2 patch, which I've already tested using the workflow I've described in the comments above. Before the patch I get the errors quite soon, the patch corrects them. Furthermore I've tested the same patch under REPEATABLE READ and SERIALIZABLE and detected no flaws. So, now, when we've got Robert's explanation, it seems to me that the v2 (aka v5) patch can be committed (leaving possible REPEATABLE READ and SERIALIZABLE improvements for the future). I'm not really sure it is still possible in the 07/21 CF, but I'd change the status of the patch to ready-for-committer. Also I'd like the bugfix to be backported to the previous PG versions.\n\nI agree that the fix should be back-ported, but I'm not keen to commit\nanything unless it works for all isolation levels.\n\nThe idea I sort of had floating around in my mind is a little\ndifferent than what Greg has implemented. I was thinking that we could\njust skip SerializeSnapshot and the corresponding shm_toc_allocate()\nif !IsolationUsesXactSnapshot(). Then on the restore side we could\njust call shm_toc_lookup() with noError = true and skip\nRestoreTransactionSnapshot/RestoreSnapshot if it returns NULL.\n\nI don't know why Greg's patch is changing anything related to the\nactive snapshot (as opposed to the transaction snapshot). 
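[Editor's note: the shape of the fix Robert proposes above — the leader storing a transaction snapshot only when IsolationUsesXactSnapshot(), and the worker probing the shared table of contents with noError = true — can be sketched with a toy model. The toy_* names and the in-memory "toc" below are illustrative inventions, not the real shm_toc API or PostgreSQL source.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdbool.h>

/*
 * Toy sketch (not PostgreSQL source) of the proposed shape: the leader
 * skips storing a serialized transaction snapshot unless the isolation
 * level actually uses one; the worker looks the entry up with a
 * noError-style probe and simply skips restoration when it is absent.
 */
#define TOY_KEY_XACT_SNAPSHOT 1

typedef struct ToyTocEntry { int key; const char *data; } ToyTocEntry;
typedef struct ToyToc { ToyTocEntry entries[8]; int nentries; } ToyToc;

static void
toy_toc_insert(ToyToc *toc, int key, const char *data)
{
    toc->entries[toc->nentries].key = key;
    toc->entries[toc->nentries].data = data;
    toc->nentries++;
}

static const char *
toy_toc_lookup(const ToyToc *toc, int key, bool noError)
{
    for (int i = 0; i < toc->nentries; i++)
        if (toc->entries[i].key == key)
            return toc->entries[i].data;
    assert(noError);            /* models raising ERROR on a missing entry */
    return NULL;
}

/* leader side: store a transaction snapshot only when one really exists */
static void
toy_leader_setup(ToyToc *toc, bool isolation_uses_xact_snapshot)
{
    if (isolation_uses_xact_snapshot)
        toy_toc_insert(toc, TOY_KEY_XACT_SNAPSHOT, "serialized-xact-snapshot");
    /* under READ COMMITTED nothing is stored, so no extra snapshot is taken */
}

/* worker side: returns whether a transaction snapshot was restored */
static bool
toy_worker_attach(const ToyToc *toc)
{
    const char *s = toy_toc_lookup(toc, TOY_KEY_XACT_SNAPSHOT, true);
    return s != NULL;
}
```

The point of the sketch is only the control flow: absence of the entry is a legitimate state, not an error, so the worker never manufactures a snapshot that the leader never had.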
Maybe\nthere's a reason why we need that change, but I don't know what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Aug 2021 13:20:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 4, 2021 at 3:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>The idea I sort of had floating around in my mind is a little\n>different than what Greg has implemented. I was thinking that we could\n>just skip SerializeSnapshot and the corresponding shm_toc_allocate()\n>if !IsolationUsesXactSnapshot(). Then on the restore side we could\n>just call shm_toc_lookup() with noError = true and skip\n>RestoreTransactionSnapshot/RestoreSnapshot if it returns NULL.\n\nI've tried to follow your description and have attached a patch to\nhopefully match it, but it doesn't pass \"make check-world\".\nPerhaps I messed something up (apologies if so), or additional changes\nare needed to match what you had in mind or correct additional issues\nyou didn't foresee?\n\nt/001_pgbench_with_server.pl .. 10/?\n# Failed test 'pgbench scale 1 initialization status (got 1 vs expected 0)'\n# at t/001_pgbench_with_server.pl line 108.\n...\n# creating primary keys...\n# pgbench: fatal: query failed: ERROR: cannot take query snapshot\nduring a parallel operation\n# CONTEXT: parallel worker\n# pgbench: query was: alter table pgbench_accounts add primary key (aid)\n\n\n>I don't know why Greg's patch is changing anything related to the\n>active snapshot (as opposed to the transaction snapshot). 
Maybe\n>there's a reason why we need that change, but I don't know what it is.\n\nI don't think my v2/v5 patch is changing anything related to the\nactive snapshot (is it?).\nIt's restoring the serialized active snapshot, installing it as the\ntransaction snapshot and setting it as the active snapshot.\nThe patch removes the additionally-acquired transaction snapshot in\nInitializeParallelDSM (which seems like a later snapshot to that which\nis actually being used in the execution state for the statement, back\nin the leader, if I recall correctly). Basically, in the parallel\nworkers, I've tried to match the snapshot setup to that used in the\nleader.\nIf the v2/v5 patch doesn't work correctly for some isolation level,\nI've yet to find it (but can't absolutely rule out there's some case\nnot accounted for).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 4 Aug 2021 13:41:11 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "Wed, 4 Aug 2021 at 07:41, Greg Nancarrow <gregn4422@gmail.com>:\n\n> On Wed, Aug 4, 2021 at 3:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> >The idea I sort of had floating around in my mind is a little\n> >different than what Greg has implemented. I was thinking that we could\n> >just skip SerializeSnapshot and the corresponding shm_toc_allocate()\n> >if !IsolationUsesXactSnapshot(). 
Then on the restore side we could\n> >just call shm_toc_lookup() with noError = true and skip\n> >RestoreTransactionSnapshot/RestoreSnapshot if it returns NULL.\n>\n> I've tried to follow your description and have attached a patch to\n> hopefully match it, but it doesn't pass \"make check-world\".\n> Perhaps I messed something up (apologies if so), or additional changes\n> are needed to match what you had in mind or correct additional issues\n> you didn't foresee?\n>\n> t/001_pgbench_with_server.pl .. 10/?\n> # Failed test 'pgbench scale 1 initialization status (got 1 vs expected\n> 0)'\n> # at t/001_pgbench_with_server.pl line 108.\n> ...\n> # creating primary keys...\n> # pgbench: fatal: query failed: ERROR: cannot take query snapshot\n> during a parallel operation\n> # CONTEXT: parallel worker\n> # pgbench: query was: alter table pgbench_accounts add primary key (aid)\n>\n> Greg, thanks for the fast response! I suppose that a check\nfor IsolationUsesXactSnapshot() is also useful in a GetTransactionSnapshot\nfor the correct processing of a case with NULL transaction snapshot.\nThis corrects mentioned check-world test.\nPFA v7 patch.", "msg_date": "Wed, 4 Aug 2021 13:55:30 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 4, 2021 at 7:55 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n>>\n> Greg, thanks for the fast response! I suppose that a check for IsolationUsesXactSnapshot() is also useful in a GetTransactionSnapshot for the correct processing of a case with NULL transaction snapshot.\n> This corrects mentioned check-world test.\n> PFA v7 patch.\n\nAh, thanks for that (I didn't debug that failure).\nBut is the coredump issue reproducible now? 
(using v7 and your test script)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 4 Aug 2021 20:17:56 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 4, 2021 at 8:17 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Ah, thanks for that (I didn't debug that failure).\n> But is the coredump issue reproducible now? (using v7 and your test script)\n>\n\nEr, with the v7 patch, the problem still occurs (that Assert still\nfires during a run of the SubTransGetTopmostTransaction-rep.sh\nscript).\nStack trace from the coredump is below:\n\n#0 0x00007f06e5e7d37f in raise () from /lib64/libc.so.6\n#1 0x00007f06e5e67db5 in abort () from /lib64/libc.so.6\n#2 0x0000000000b06547 in ExceptionalCondition (\n conditionName=0xba8d78 \"TransactionIdFollowsOrEquals(xid,\nTransactionXmin)\", errorType=0xba8d0b \"FailedAssertion\",\n fileName=0xba8d00 \"subtrans.c\", lineNumber=156) at assert.c:69\n#3 0x0000000000576f0f in SubTransGetTopmostTransaction (xid=3676) at\nsubtrans.c:156\n#4 0x0000000000b5f55a in XidInMVCCSnapshot (xid=3676,\nsnapshot=0x2e44560) at snapmgr.c:2293\n#5 0x000000000050e014 in HeapTupleSatisfiesMVCC (htup=0x7ffdc6dee6f0,\nsnapshot=0x2e44560, buffer=15559)\n at heapam_visibility.c:1070\n#6 0x000000000050f148 in HeapTupleSatisfiesVisibility\n(tup=0x7ffdc6dee6f0, snapshot=0x2e44560, buffer=15559)\n at heapam_visibility.c:1771\n#7 0x00000000004f1d26 in heapgetpage (sscan=0x2e9a9a0, page=10310) at\nheapam.c:466\n#8 0x00000000004f45cf in heapgettup_pagemode (scan=0x2e9a9a0,\ndir=ForwardScanDirection, nkeys=0, key=0x0)\n at heapam.c:1118\n#9 0x00000000004f4c91 in heap_getnextslot (sscan=0x2e9a9a0,\ndirection=ForwardScanDirection, slot=0x2e8f9a0)\n at heapam.c:1396\n#10 0x000000000077abb1 in table_scan_getnextslot (sscan=0x2e9a9a0,\ndirection=ForwardScanDirection, slot=0x2e8f9a0)\n at 
../../../src/include/access/tableam.h:1044\n#11 0x000000000077ac79 in SeqNext (node=0x2e8f428) at nodeSeqscan.c:80\n#12 0x000000000073d997 in ExecScanFetch (node=0x2e8f428,\naccessMtd=0x77abe5 <SeqNext>, recheckMtd=0x77ac8a <SeqRecheck>)\n at execScan.c:133\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 4 Aug 2021 21:54:20 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "ср, 4 авг. 2021 г. в 14:18, Greg Nancarrow <gregn4422@gmail.com>:\n\n> On Wed, Aug 4, 2021 at 7:55 PM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> >\n> >>\n> > Greg, thanks for the fast response! I suppose that a check for\n> IsolationUsesXactSnapshot() is also useful in a GetTransactionSnapshot for\n> the correct processing of a case with NULL transaction snapshot.\n> > This corrects mentioned check-world test.\n> > PFA v7 patch.\n>\n> Ah, thanks for that (I didn't debug that failure).\n> But is the coredump issue reproducible now? (using v7 and your test script)\n>\n> Now I've run my test script attached above in the thread on v6 and v7 and\nquite soon got crashes with the Assert and a backtrace identical to the\noriginal one. So it may be useful for further development, but now it is\nnot enough to fix the original crash.\n\nAnd the same script run on v2/v5 patch was completed without crash at every\nisolation level, I've tested i.e. READ COMMITTED, REPEATABLE READ and\nSERIALIZABLE. If I remember correctly none of us could demonstrate any\nerrors with REPEATABLE READ and SERIALIZABLE on v2/v5. That fact was the\nbase of my proposal to commit v2/v5 i.e. 
to fix the obvious bug and let the\nfurther improvements (if any) be potentially done later.\n\nAt SERIALIZABLE level with v2/v5 I get an error which I don't have before\nthe patch (but no crash):\npgbench: error: client 6 script 0 aborted in command 594 query 0: ERROR:\n could not serialize access due to read/write dependencies among\ntransactions\nDETAIL: Reason code: Canceled on identification as a pivot, during\nconflict out checking.\n\nср, 4 авг. 2021 г. в 14:18, Greg Nancarrow <gregn4422@gmail.com>:On Wed, Aug 4, 2021 at 7:55 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n>>\n> Greg, thanks for the fast response! I suppose that a check for IsolationUsesXactSnapshot() is also useful in a GetTransactionSnapshot for the correct processing of a case with NULL transaction snapshot.\n> This corrects mentioned check-world test.\n> PFA v7 patch.\n\nAh, thanks for that (I didn't debug that failure).\nBut is the coredump issue reproducible now? (using v7 and your test script)Now I've run my test script attached above in the thread on v6 and v7 and quite soon got crashes with the Assert and a backtrace identical to the original one. So it may be useful for further development, but now it is not enough to fix the original crash.And the same script run on v2/v5 patch was completed without crash at every isolation level, I've tested i.e. READ COMMITTED, REPEATABLE READ and SERIALIZABLE. If I remember correctly none of us could demonstrate any errors with REPEATABLE READ and SERIALIZABLE on v2/v5. That fact was the base of my proposal to commit v2/v5 i.e. 
to fix the obvious bug and let the\nfurther improvements (if any) be potentially done later.\n\nAt SERIALIZABLE level with v2/v5 I get an error which I don't have before\nthe patch (but no crash):\npgbench: error: client 6 script 0 aborted in command 594 query 0: ERROR:\n could not serialize access due to read/write dependencies among\ntransactions\nDETAIL: Reason code: Canceled on identification as a pivot, during\nconflict out checking.", "msg_date": "Wed, 4 Aug 2021 16:06:24 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": ">\n> At SERIALIZABLE level with v2/v5 I get an error which I don't have before\n> the patch (but no crash):\n> pgbench: error: client 6 script 0 aborted in command 594 query 0: ERROR:\n> could not serialize access due to read/write dependencies among\n> transactions\n> DETAIL: Reason code: Canceled on identification as a pivot, during\n> conflict out checking.\n>\nI should correct myself: the mentioned error under SERIALIZABLE is also\npresent before the patch.", "msg_date": "Wed, 4 Aug 2021 16:07:39 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Tue, Aug 3, 2021 at 11:41 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> I've tried to follow your description and have attached a patch to\n> hopefully match it, but it doesn't pass \"make 
check-world\".\n> Perhaps I messed something up (apologies if so), or additional changes\n> are needed to match what you had in mind or correct additional issues\n> you didn't foresee?\n\nThis is the sort of thing I was thinking about but I don't understand\nwhy it doesn't fix the reported problem. Apparently I haven't\ncorrectly understood what the issue is.\n\n> >I don't know why Greg's patch is changing anything related to the\n> >active snapshot (as opposed to the transaction snapshot). Maybe\n> >there's a reason why we need that change, but I don't know what it is.\n>\n> I don't think my v2/v5 patch is changing anything related to the\n> active snapshot (is it?).\n> It's restoring the serialized active snapshot, installing it as the\n> transaction snapshot and setting it as the active snapshot.\n\nWhy do you think it's right to install the serialized *active*\nsnapshot as the *transaction* snapshot? I've been operating on the\npresumption that we wanted the worker to install the leader's\ntransaction snapshot as its transaction snapshot and the leader's\nactive snapshot as its active snapshot, because in my mind the active\nsnapshot and the transaction snapshot are two separate things.\nHowever, there's clearly something wrong with that line of reasoning,\nbecause apparently your proposal fixes the problem and mine doesn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Aug 2021 09:43:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 4, 2021 at 11:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Why do you think it's right to install the serialized *active*\n> snapshot as the *transaction* snapshot? 
I've been operating on the\n> presumption that we wanted the worker to install the leader's\n> transaction snapshot as its transaction snapshot and the leader's\n> active snapshot as its active snapshot, because in my mind the active\n> snapshot and the transaction snapshot are two separate things.\n> However, there's clearly something wrong with that line of reasoning,\n> because apparently your proposal fixes the problem and mine doesn't.\n>\n\nIn setting up the snapshot for the execution state used in command\nexecution, GetTransactionSnapshot() is called and (possibly a copy of)\nthe returned snapshot is pushed as the ActiveSnapshot.\nThe way I see it is that there is essentially only one snapshot here,\nthe last-acquired TransactionSnapshot, which the ActiveSnapshot points\nto (or a copy of it). Rightly or wrongly, my v2/v5 patch is basically\nchanging the code to do the same snapshot setup in each of the\nparallel workers.\n\nSo why (current Postgres code, no patches applied) in setting up for\nparallel-worker execution (in InitializeParallelDSM) does the Postgres\ncode then acquire ANOTHER TransactionSnapshot (by calling\nGetTransactionSnapshot(), which could return CurrentSnapshot or a new\nsnapshot) and serialize that, as well as serializing what the\nActiveSnapshot points to, and then restore those in the workers as two\nseparate snapshots? Is it a mistake? 
Or if intentional and correct,\nhow so?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 5 Aug 2021 12:03:04 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Thu, Aug 5, 2021 at 12:03 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n\nAs the current v7 patch doesn't fix the coredump issue and also the\ncfbot is now failing (as one of the regression tests fails) I'm\nreinstating my v2/v5 patch (as v8) as the current best solution to\nthis issue.\nSo far I haven't found a test case (e.g. isolation level) that breaks\nthe patch. Pavel did encounter an error (but no crash) when testing\nthe patch with SERIALIZABLE, but he found that this error occurred\nanyway without the patch.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Mon, 9 Aug 2021 14:18:05 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 4, 2021 at 10:03 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> In setting up the snapshot for the execution state used in command\n> execution, GetTransactionSnapshot() is called and (possibly a copy of)\n> the returned snapshot is pushed as the ActiveSnapshot.\n\nI mean, there are LOTS of PushActiveSnapshot() calls in the code. A\nlot of those specifically say\nPushActiveSnapshot(GetTransactionSnapshot()) but others are pushing\nsnapshots obtained from various other places. 
I really don't think it\ncan possibly be correct in general to assume that the snapshot on top\nof the active snapshot stack is the same as the transaction snapshot.\n\n> So why (current Postgres code, no patches applied) in setting up for\n> parallel-worker execution (in InitializeParallelDSM) does the Postgres\n> code then acquire ANOTHER TransactionSnapshot (by calling\n> GetTransactionSnashot(), which could return CurrentSnapshot or a new\n> snapshot) and serialize that, as well as serializing what the\n> ActiveSnapshot points to, and then restore those in the workers as two\n> separate snapshots? Is it a mistake? Or if intentional and correct,\n> how so?\n\nWell, I already agreed that in cases where GetTransactionSnapshot()\nwill acquire an altogether new snapshot, we shouldn't call it, but\nbeyond that I don't see why you think this is wrong. I mean, suppose\nwe only call GetTransactionSnapshot() at parallel worker when\nIsolationUsesXactSnapshot(). In that case, CurrentSnapshot is a\ndurable, transaction-lifetime piece of backend-local state that can\naffect the results of future calls to GetTransactionSnapshot(), and\ntherefore seems to need to be replicated to workers. Separately,\nregardless of IsolationUsesXactSnapshot(), the active snapshot is\naccessible via calls to GetActiveSnapshot() and therefore should also\nbe replicated to workers. Now, I don't know of any necessary reason\nwhy those two things need to be the same, because again, there are\nPushActiveSnapshot() calls all over the place, and they're not all\npushing the transaction snapshot. 
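[Editor's note: Robert's point that the active snapshot is a stack pushed onto by many call sites, so its top need not be the transaction snapshot, can be sketched with a toy model. The toy_* names below are illustrative inventions, not PostgreSQL source.]

```c
#include <assert.h>

/*
 * Toy model (not PostgreSQL source) of the active snapshot stack being
 * discussed: many call sites push their own snapshot, so the snapshot on
 * top of the stack is not necessarily the transaction snapshot.
 */
#define TOY_MAX_DEPTH 16

static int toy_active_stack[TOY_MAX_DEPTH]; /* snapshot ids, top at [depth-1] */
static int toy_depth = 0;

static void
toy_push_active_snapshot(int snap_id)
{
    toy_active_stack[toy_depth++] = snap_id;
}

static void
toy_pop_active_snapshot(void)
{
    toy_depth--;
}

static int
toy_get_active_snapshot(void)
{
    return toy_active_stack[toy_depth - 1];
}
```

The usage below mirrors the argument: once some other call site pushes its own snapshot, "the active snapshot" and "the transaction snapshot" diverge until the extra entry is popped again.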
So therefore it makes sense to me\nthat we need to replicate those two things separately - the active\nsnapshot in the leader becomes the active snapshot in the workers and\nthe transaction snapshot in the leader becomes the transaction\nsnapshot in the worker.\n\nNow there is evidently something wrong with this line of reasoning\nbecause the code is buggy and my proposed fix doesn't work, but I\ndon't know what is wrong with it. You seem to think that it's crazy\nthat we try to replicate the active snapshot to the active snapshot\nand the transaction snapshot to the transaction snapshot, but that\ndid, and still does, seem really sane to me. The only part that now\nseems clearly wrong to me is that !IsolationUsesXactSnapshot() case\ntakes an *extra* snapshot, but since fixing that didn't fix the bug,\nthere's evidently more to the problem than that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 Aug 2021 10:35:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Tue, Aug 10, 2021 at 12:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Now there is evidently something wrong with this line of reasoning\n> because the code is buggy and my proposed fix doesn't work, but I\n> don't know what is wrong with it. You seem to think that it's crazy\n> that we try to replicate the active snapshot to the active snapshot\n> and the transaction snapshot to the transaction snapshot, but that\n> did, and still does, seem really sane to me. 
The only part that now\n> seems clearly wrong to me is that !IsolationUsesXactSnapshot() case\n> takes an *extra* snapshot, but since fixing that didn't fix the bug,\n> there's evidently more to the problem than that.\n>\n\nI traced through snapshot processing during a parallel SELECT, up to\nthe point of the existing GetTransactionSnapshot() and\nGetCurrentSnapshot() calls in InitializeParallelDSM().\nI'm seeing the following sequence of calls, numbered below:\n\n\nPortalStart():\n\n case PORTAL_ONE_SELECT:\n\n(1) PushActiveSnapshot(GetTransactionSnapshot());\n\n...\n\n queryDesc = CreateQueryDesc(linitial_node(PlannedStmt, portal->stmts),\n portal->sourceText,\n(2) GetActiveSnapshot(),\n InvalidSnapshot,\n None_Receiver,\n params,\n portal->queryEnv,\n 0);\n\n...\n\n(3) PopActiveSnapshot();\n\n\nPortalRunSelect():\n\n(4) PushActiveSnapshot(queryDesc->snapshot);\n ExecutorRun(queryDesc, direction, (uint64) count,\n portal->run_once);\n\n InitializeParallelDSM():\n\n(5) Snapshot transaction_snapshot = GetTransactionSnapshot();\n(6) Snapshot active_snapshot = GetActiveSnapshot();\n\n\n nprocessed = queryDesc->estate->es_processed;\n(7) PopActiveSnapshot();\n\n\n\nThe snapshot used in execution of the query is clearly the\nActiveSnapshot at the time of creating the QueryDesc [at (2)] which is\na copy of the TransactionSnapshot originally acquired [at (1)].\n\nIn InitializeParallelDSM() it acquires both the TransactionSnapshot\n[at (5)] and the ActiveSnapshot [at (6)], to be serialized in the DSM\nfor the workers (each of which will deserialize and restore these).\nBut the problem I see is that the GetTransactionSnapshot() call [at\n(5)] may acquire a new snapshot (i.e. 
a later snapshot than the\nActiveSnapshot used in the execution of the query), for example, if a\nconcurrent transaction has completed since GetTransactionSnapshot()\nwas last called [in (1)].\nIn this case, GetTransactionSnapshot() calls GetSnapshotDataReuse()\nand it returns false, causing a new snapshot to be built by\nGetTransactionSnapshot().\n\n curXactCompletionCount = ShmemVariableCache->xactCompletionCount;\n if (curXactCompletionCount != snapshot->snapXactCompletionCount)\n return false;\n\nWhen this TransactionSnapshot is restored in a worker process, it\naccordingly sets TransactionXmin, and if we look back at the coredump\nstacktrace and the Assert condition that failed in the worker, we see\nthat the xid was expected to be >= TransactionXmin, but the Assert\nfired because the xid was < TransactionXmin.\n\n Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));\n\nThis is explained by the TransactionSnapshot being a later snapshot in\nthis case.\nSo this is why it seems to be wrong to call GetTransactionSnapshot()\nin InitializeParallelDSM() and use a separate, potentially later,\nsnapshot than that used in the execution state for the query.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 11 Aug 2021 22:32:15 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 11, 2021 at 8:32 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> This is explained by the TransactionSnapshot being a later snapshot in\n> this case.\n> So this is why it seems to be wrong to call GetTransactionSnapshot()\n> in InitializeParallelDSM() and use a separate, potentially later,\n> snapshot than that used in the execution state for the query.\n\nThanks for the research. I agree with your logic here, but:\n\n1. Then why doesn't the approach I proposed fix it?\n\n2. 
Consider the case where the toplevel query is something like SELECT\ncomplexfunc() FROM generate_series(1,10) g -- in a case like this, I\nthink complexfunc() can cause snapshots to be taken internally. For\nexample suppose we end up inside exec_eval_simple_expr, or\nSPI_cursor_open_internal, in either case with read_only = false. Here\nwe're going to again call GetTransactionSnapshot() and then execute a\nquery which may use parallelism.\n\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Aug 2021 15:37:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Thu, Aug 12, 2021 at 5:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> 1. Then why doesn't the approach I proposed fix it?\n>\n\nI think that with your approach, it is not doing the expected\ninitialization done by SetTransactionSnapshot() (which is called by\nRestoreTransactionSnapshot(), which your approach skips in the case of\nthe SQL script that reproduces the problem, because\nIsolationUsesXactSnapshot() returns false for XACT_READ_COMMITTED).\nThere's some comments in SetTransactionSnapshot() explaining the\ntricky parts of this initialization, testing that the source\ntransaction is still running, dealing with a race condition, and\nsetting up TransactionXmin.\nAlso, there's an \"if (IsolationUsesXactSnapshot()) ...\" block within\nthat function, doing some required setup for transaction-snapshot\nmode, so it doesn't seem like a good idea to not call\nRestoreTransactionSnapshot() if !IsolationUsesXactSnapshot(), as the\nfunction is obviously catering for both cases, when the isolation\nlevel does and doesn't use a transaction snapshot. 
So I think\nSetTransactionSnapshot() always needs to be called.\n\nWith your proposed approach, what I'm seeing is that the worker calls\nGetTransactionSnapshot() at some point, which then builds a new\nsnapshot, and results in increasing TransactionXmin (probably because\nanother concurrent transaction has since completed). This snapshot is\nthus later than the snapshot used in the execution state of the query\nbeing executed. This causes the Assert in\nSubTransGetTopmostTransaction() to fire because the xid doesn't follow\nor equal the TransactionXmin value.\n\n> 2. Consider the case where the toplevel query is something like SELECT\n> complexfunc() FROM generate_series(1,10) g -- in a case like this, I\n> think complexfunc() can cause snapshots to be taken internally. For\n> example suppose we end up inside exec_eval_simple_expr, or\n> SPI_cursor_open_internal, in either case with read_only = false. Here\n> we're going to again call GetTransactionSnapshot() and then execute a\n> query which may use parallelism.\n>\n>\n\nA query always uses the ActiveSnapshot at the time the QueryDesc is\ncreated - so as long as you don't (as the current code does) obtain a\npotentially later snapshot and try to restore that in the worker as\nthe TransactionSnapshot (or let the worker create a new snapshot,\nbecause no TransactionSnapshot was restored, which may have a greater\nxmin than the ActiveSnapshot) then I think it should be OK.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 13 Aug 2021 16:52:31 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Fri, Aug 13, 2021 at 2:52 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> With your proposed approach, what I'm seeing is that the worker calls\n> GetTransactionSnapshot() at some point, which then builds a new\n> snapshot, and results in increasing 
TransactionXmin (probably because\n> another concurrent transaction has since completed). This snapshot is\n> thus later than the snapshot used in the execution state of the query\n> being executed. This causes the Assert in\n> SubTransGetTopmostTransaction() to fire because the xid doesn't follow\n> or equal the TransactionXmin value.\n\nAh ha! Thank you. So I think what I was missing here is that even\nthough the transaction snapshot is not a well-defined concept when\n!IsolationUsesXactSnapshot(), we still need TransactionXmin to be set\nto a value that's earlier than any XID we might inquire about. So the\nproposal to install the leader's active snapshot as the worker's\ntransaction snapshot is really just a way of making that happen. Now\nthat I understand better, that seems OK to me when\n!IsolationUsesXactSnapshot(), but otherwise I think we need to\nserialize and restore the actual transaction snapshot. Do you agree?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Aug 2021 15:00:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 18, 2021 at 5:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Ah ha! Thank you. So I think what I was missing here is that even\n> though the transaction snapshot is not a well-defined concept when\n> !IsolationUsesXactSnapshot(), we still need TransactionXmin to be set\n> to a value that's earlier than any XID we might inquire about. So the\n> proposal to install the leader's active snapshot as the worker's\n> transaction snapshot is really just a way of making that happen. Now\n> that I understand better, that seems OK to me when\n> !IsolationUsesXactSnapshot(), but otherwise I think we need to\n> serialize and restore the actual transaction snapshot. 
Do you agree?\n>\n\nYes, I think I agree on that.\nI've updated the patch to restore the actual transaction snapshot in\nthe IsolationUsesXactSnapshot() case, otherwise the active snapshot is\ninstalled as the transaction snapshot.\nI've tested the patch for the different transaction isolation levels,\nand the reported coredump (from assertion failure) is not occurring.\n(In the \"serializable\" case there are \"could not serialize access due\nto read/write dependencies among transactions\" errors, as Pavel has\npreviously reported, but these occur without the patch and it appears\nto be an unrelated issue)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 18 Aug 2021 23:28:20 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 18, 2021 at 9:28 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> Yes, I think I agree on that.\n> I've updated the patch to restore the actual transaction snapshot in\n> the IsolationUsesXactSnapshot() case, otherwise the active snapshot is\n> installed as the transaction snapshot.\n> I've tested the patch for the different transaction isolation levels,\n> and the reported coredump (from assertion failure) is not occurring.\n> (In the \"serializable\" case there are \"could not serialize access due\n> to read/write dependencies among transactions\" errors, as Pavel has\n> previously reported, but these occur without the patch and it appears\n> to be an unrelated issue)\n\nI think this looks pretty good. I am not sure I see any reason to\nintroduce a new function RestoreTxnSnapshotAndSetAsActive. Couldn't we\njust use RestoreTransactionSnapshot() and then call\nPushActiveSnapshot() from parallel.c? 
That seems preferable to me from\nthe standpoint of avoiding multiplication of APIs.\n\nI also think that the comments should explain why we are doing this,\nrather than just that we are doing this. So instead of this:\n\n+ /*\n+ * If the transaction snapshot was serialized, restore it and restore the\n+ * active snapshot too. Otherwise, the active snapshot is also installed as\n+ * the transaction snapshot.\n+ */\n\n...perhaps something like:\n\nIf the transaction isolation level is READ COMMITTED or SERIALIZABLE,\nthe leader has serialized the transaction snapshot and we must restore\nit. At lower isolation levels, there is no transaction-lifetime\nsnapshot, but we need TransactionXmin to get set to a value which is\nless than or equal to the xmin of every snapshot that will be used by\nthis worker. The easiest way to accomplish that is to install the\nactive snapshot as the transaction snapshot. Code running in this\nparallel worker might take new snapshots via GetTransactionSnapshot()\nor GetLatestSnapshot(), but it shouldn't have any way of acquiring a\nsnapshot older than the active snapshot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 15:00:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Tue, Aug 24, 2021 at 5:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>\n> I think this looks pretty good. I am not sure I see any reason to\n> introduce a new function RestoreTxnSnapshotAndSetAsActive. Couldn't we\n> just use RestoreTransactionSnapshot() and then call\n> PushActiveSnapshot() from parallel.c? 
That seems preferable to me from\n> the standpoint of avoiding multiplication of APIs.\n>\n\nI initially thought this too, but RestoreTransactionSnapshot() sets up\nthe resultant transaction snapshot in \"CurrentSnapshot\", which is\nstatic to snapmgr.c (like the other pointers to valid snapshots) and I\ndidn't really want to mess with the visibility of that, to allow a\ncall to PushActiveSnapshot(CurrentSnapshot) in parallel.c. Also, I\nwasn't sure if it was safe to call GetTransactionSnapshot() here\nwithout the risk of unwanted side-effects - but, looking at it again,\nI think it is probably OK, so I did use it in my revised patch\n(attached) and removed\nthat new function RestoreTxnSnapshotAndSetAsActive(). Do you agree\nthat it is OK to call GetTransactionSnapshot() here?\n\n> I also think that the comments should explain why we are doing this,\n> rather than just that we are doing this. So instead of this:\n>\n> + /*\n> + * If the transaction snapshot was serialized, restore it and restore the\n> + * active snapshot too. Otherwise, the active snapshot is also installed as\n> + * the transaction snapshot.\n> + */\n>\n> ...perhaps something like:\n>\n> If the transaction isolation level is READ COMMITTED or SERIALIZABLE,\n> the leader has serialized the transaction snapshot and we must restore\n> it. At lower isolation levels, there is no transaction-lifetime\n> snapshot, but we need TransactionXmin to get set to a value which is\n> less than or equal to the xmin of every snapshot that will be used by\n> this worker. The easiest way to accomplish that is to install the\n> active snapshot as the transaction snapshot. 
Code running in this\n> parallel worker might take new snapshots via GetTransactionSnapshot()\n> or GetLatestSnapshot(), but it shouldn't have any way of acquiring a\n> snapshot older than the active snapshot.\n>\n\nI agree, that is a better comment and detailed description, but didn't\nyou mean \"If the transaction isolation level is REPEATABLE READ or\nSERIALIZABLE ...\"?\n\nsince we have:\n\n#define XACT_READ_UNCOMMITTED 0\n#define XACT_READ_COMMITTED 1\n#define XACT_REPEATABLE_READ 2\n#define XACT_SERIALIZABLE 3\n\n#define IsolationUsesXactSnapshot() (XactIsoLevel >= XACT_REPEATABLE_READ)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Tue, 24 Aug 2021 14:20:19 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Tue, Aug 24, 2021 at 12:20 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> I initially thought this too, but RestoreTransactionSnapshot() sets up\n> the resultant transaction snapshot in \"CurrentSnapshot\", which is\n> static to snapmgr.c (like the other pointers to valid snapshots) and I\n> didn't really want to mess with the visibility of that, to allow a\n> call to PushActiveSnapshot(CurrentSnapshot) in parallel.c. Also, I\n> wasn't sure if it was safe to call GetTransactionSnapshot() here\n> without the risk of unwanted side-effects - but, looking at it again,\n> I think it is probably OK, so I did use it in my revised patch\n> (attached) and removed\n> that new function RestoreTxnSnapshotAndSetAsActive(). Do you agree\n> that it is OK to call GetTransactionSnapshot() here?\n\nI guess I was thinking more of rejiggering things so that we save the\nresults of each RestoreSnapshot() call in a local variable, e.g.\nasnapshot and tsnapshot. And then I think we could just\nRestoreTransactionSnapshot() on whichever one we want, and then\nPushActiveSnapshot(asnapshot) either way. 
I think it would be worth\ntrying to move the PushActiveSnapshot() call out of the if statement\ninstead it in two places, written differently but doing the same\nthing.\n\n> I agree, that is a better comment and detailed description, but didn't\n> you mean \"If the transaction isolation level is REPEATABLE READ or\n> SERIALIZABLE ...\"?\n\nI sure did!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Aug 2021 11:37:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 25, 2021 at 1:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I guess I was thinking more of rejiggering things so that we save the\n> results of each RestoreSnapshot() call in a local variable, e.g.\n> asnapshot and tsnapshot. And then I think we could just\n> RestoreTransactionSnapshot() on whichever one we want, and then\n> PushActiveSnapshot(asnapshot) either way. I think it would be worth\n> trying to move the PushActiveSnapshot() call out of the if statement\n> instead it in two places, written differently but doing the same\n> thing.\n>\n\nI've attached an updated patch, hopefully more along the lines that\nyou were thinking of.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 25 Aug 2021 19:36:54 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" }, { "msg_contents": "On Wed, Aug 25, 2021 at 5:36 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> I've attached an updated patch, hopefully more along the lines that\n> you were thinking of.\n\nLGTM. Committed and back-patched to v10 and up. In theory the same bug\nexists in 9.6, but you'd have to have third-party code using the\nparallel context infrastructure in order to hit it. 
If the patch\nback-patched cleanly I would have done so just in case, but\nshm_toc_lookup lacks a bool noError option in that version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Aug 2021 09:39:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump" } ]
[ { "msg_contents": "Hi\n\nI searched operators for multirange type, and the current doc is little bit\nmessy, because chapter \"Range Functions and Operators\" contains operators\nand functions for multirange type too.\n\nI think so this chapter should be renamed to Range, Multirange Functions\nand Operators\"\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 0b5571460d..f6ed1568af 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -18164,7 +18164,7 @@ SELECT NULLIF(value, '(none)') ...\n </sect1>\n\n <sect1 id=\"functions-range\">\n- <title>Range Functions and Operators</title>\n+ <title>Range, Multirange Functions and Operators</title>\n\n <para>\n See <xref linkend=\"rangetypes\"/> for an overview of range types.\n\nRegards\n\nPavel\n\n
", "msg_date": "Fri, 7 May 2021 07:17:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "Hi!\n\nSorry for the late reply.\n\nOn Fri, May 7, 2021 at 8:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I searched operators for multirange type, and the current doc is little bit messy, because chapter \"Range Functions and Operators\" contains operators and functions for multirange type too.\n>\n> I think so this chapter should be renamed to Range, Multirange Functions and Operators\"\n>\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 0b5571460d..f6ed1568af 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -18164,7 +18164,7 @@ SELECT NULLIF(value, '(none)') ...\n> </sect1>\n>\n> <sect1 id=\"functions-range\">\n> - <title>Range Functions and Operators</title>\n> + <title>Range, Multirange Functions and Operators</title>\n>\n> <para>\n> See <xref linkend=\"rangetypes\"/> for an overview of range types.\n\nThank you for catching this. 
What do you think about titling this\nsection \"Range and Multirange Functions and Operators\"?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 10 Jun 2021 23:46:46 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "On Thu, Jun 10, 2021 at 11:46:46PM +0300, Alexander Korotkov wrote:\n> On Fri, May 7, 2021 at 8:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > I searched operators for multirange type, and the current doc is little bit messy, because chapter \"Range Functions and Operators\" contains operators and functions for multirange type too.\n> >\n> > I think so this chapter should be renamed to Range, Multirange Functions and Operators\"\n> >\n> > diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> > index 0b5571460d..f6ed1568af 100644\n> > --- a/doc/src/sgml/func.sgml\n> > +++ b/doc/src/sgml/func.sgml\n> > @@ -18164,7 +18164,7 @@ SELECT NULLIF(value, '(none)') ...\n> > </sect1>\n> >\n> > <sect1 id=\"functions-range\">\n> > - <title>Range Functions and Operators</title>\n> > + <title>Range, Multirange Functions and Operators</title>\n> >\n> > <para>\n> > See <xref linkend=\"rangetypes\"/> for an overview of range types.\n> \n> Thank you for catching this. 
What do you think about titling this\n> section \"Range and Multirange Functions and Operators\"?\n\nYes, it should say \"and\".\n\nIf it's confusing to say \"and\" twice, then perhaps you'd say:\n\nFunctions and Operators for Range and Multirange Types\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:28:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> If it's confusing to say \"and\" twice, then perhaps you'd say:\n> Functions and Operators for Range and Multirange Types\n\nUh, that's still two \"and\"s. In any case, I think it's better\nto keep this section heading aligned with all of its siblings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 20:39:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "On Fri, Jun 11, 2021 at 3:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > If it's confusing to say \"and\" twice, then perhaps you'd say:\n> > Functions and Operators for Range and Multirange Types\n>\n> Uh, that's still two \"and\"s. In any case, I think it's better\n> to keep this section heading aligned with all of its siblings.\n\nWhat about \"Range/Multirange Functions and Operators\"?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 11 Jun 2021 23:07:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> What about \"Range/Multirange Functions and Operators\"?\n\nBetter than a comma, I guess. 
Personally I didn't have a\nproblem with the form with two \"ands\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 16:16:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > What about \"Range/Multirange Functions and Operators\"?\n>\n> Better than a comma, I guess. Personally I didn't have a\n> problem with the form with two \"ands\".\n\nThank you. I propose to push the slash option because it both evades\ndouble \"and\" and it's aligned with sibling section headers (we have\n\"Date/Time Functions and Operators\").\n\nAny objections?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 13 Jun 2021 14:48:10 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "On Sun, Jun 13, 2021 at 2:48 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, Jun 11, 2021 at 11:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > What about \"Range/Multirange Functions and Operators\"?\n> >\n> > Better than a comma, I guess. Personally I didn't have a\n> > problem with the form with two \"ands\".\n>\n> Thank you. I propose to push the slash option because it both evades\n> double \"and\" and it's aligned with sibling section headers (we have\n> \"Date/Time Functions and Operators\").\n>\n> Any objections?\n\nI heard no objection. 
So, pushed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 15 Jun 2021 16:10:52 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" }, { "msg_contents": "út 15. 6. 2021 v 15:11 odesílatel Alexander Korotkov <aekorotkov@gmail.com>\nnapsal:\n\n> On Sun, Jun 13, 2021 at 2:48 PM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > On Fri, Jun 11, 2021 at 11:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > > What about \"Range/Multirange Functions and Operators\"?\n> > >\n> > > Better than a comma, I guess. Personally I didn't have a\n> > > problem with the form with two \"ands\".\n> >\n> > Thank you. I propose to push the slash option because it both evades\n> > double \"and\" and it's aligned with sibling section headers (we have\n> > \"Date/Time Functions and Operators\").\n> >\n> > Any objections?\n>\n> I heard no objection. So, pushed.\n>\n\nThank you\n\nPavel\n\n\n> ------\n> Regards,\n> Alexander Korotkov\n>\n
", "msg_date": "Tue, 15 Jun 2021 15:13:21 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: doc issue missing type name \"multirange\" in chapter title" } ]
[ { "msg_contents": "Hi\n\nI am testing new features in Postgres 14, and I found bug\n\nEXPLAIN ANALYZE VERBOSE  for insert to FDW table with batch_size 1000\nreturns\n\n-------------------------------------------------------------------------------------------------------------------------------\n Insert on public.vzdalena_tabulka2 (cost=0.00..175.00 rows=0 width=0)\n(actual time=30.269..30.270 rows=0 loops=1)\n Remote SQL:\n\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\n Batch Size: 1000\n -> Function Scan on pg_catalog.generate_series g (cost=0.00..175.00\nrows=10000 width=36) (actual time=0.453..1.988 rows=10\n Output: g.i, ('AHOJ'::text || (g.i)::text)\n Function Call: generate_series(1, 10000)\n Planning Time: 0.075 ms\n Execution Time: 31.032 ms\n(8 rows)\n\nRegards\n\nPavel\n\n", "msg_date": "Fri, 7 May 2021 11:48:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "batch fdw insert bug (Postgres 14)" }, { "msg_contents": "pá 7. 5. 
2021 v 11:48 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I am testing new features in Postgres 14, and I found bug\n>\n> EXPLAIN ANALYZE VERBOSE  for insert to FDW table with batch_size 1000\n> returns\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------\n> Insert on public.vzdalena_tabulka2  (cost=0.00..175.00 rows=0 width=0)\n> (actual time=30.269..30.270 rows=0 loops=1)\n>   Remote SQL:\n> \\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\n>   Batch Size: 1000\n>   ->  Function Scan on pg_catalog.generate_series g  (cost=0.00..175.00\n> rows=10000 width=36) (actual time=0.453..1.988 rows=10\n>         Output: g.i, ('AHOJ'::text || (g.i)::text)\n>         Function Call: generate_series(1, 10000)\n> Planning Time: 0.075 ms\n> Execution Time: 31.032 ms\n> (8 rows)\n>\n\nreproducer\n\nCREATE DATABASE omega;\n\n\\c omega\n\nCREATE TABLE tabulka(a int, b varchar);\n\n\\c postgres\n\nCREATE EXTENSION postgres_fdw;\n\nCREATE SERVER omega_db\n  FOREIGN DATA WRAPPER postgres_fdw\n  OPTIONS (dbname 'omega');\n\nCREATE USER MAPPING FOR \"pavel.stehule\"\n  SERVER omega_db OPTIONS (user 'pavel.stehule');\n\nCREATE FOREIGN TABLE vzdalena_tabulka(a int, b varchar)\n  SERVER omega_db\n  OPTIONS (table_name 'tabulka');\n\nCREATE FOREIGN TABLE vzdalena_tabulka2(a int, b varchar)\n  SERVER omega_db\n  OPTIONS (table_name 'tabulka', batch_size '1000');\n\nEXPLAIN ANALYZE VERBOSE INSERT INTO vzdalena_tabulka SELECT i, 'AHOJ' || i\nFROM generate_series(1,10000) g(i);\nEXPLAIN ANALYZE VERBOSE INSERT INTO vzdalena_tabulka2 SELECT i, 'AHOJ' || i\nFROM generate_series(1,10000) g(i);\n\nPavel\n\n>\n> Regards\n>\n> Pavel\n>\n", "msg_date": "Fri, 7 May 2021 12:41:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: batch fdw insert bug (Postgres 14)" }, { "msg_contents": "> I am testing new features in Postgres 14, and I found bug \r\n> EXPLAIN ANALYZE VERBOSE  for insert to FDW table with 
batch_size 1000 returns\r\n> -------------------------------------------------------------------------------------------------------------------------------\r\n> Insert on public.vzdalena_tabulka2  (cost=0.00..175.00 rows=0 width=0) (actual time=30.269..30.270 rows=0 loops=1)\r\n>   Remote SQL: \\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\r\n>   Batch Size: 1000\r\n>   ->  Function Scan on pg_catalog.generate_series g  (cost=0.00..175.00 rows=10000 width=36) (actual time=0.453..1.988 rows=10\r\n>         Output: g.i, ('AHOJ'::text || (g.i)::text)\r\n>         Function Call: generate_series(1, 10000)\r\n> Planning Time: 0.075 ms\r\n> Execution Time: 31.032 ms\r\n> (8 rows)\r\n> reproducer\r\n\r\nI can reproduce the issue and did some basic analysis on it.\r\n\r\nThe \"Remote SQL\" is built from the following code:\r\n\r\n----------------\r\n\t\tchar\t *sql = strVal(list_nth(fdw_private,\r\n\t\t\t\t\t\t\t\t\t\t FdwModifyPrivateUpdateSql));\r\n\r\n\t\tExplainPropertyText(\"Remote SQL\", sql, es);\r\n---------------\r\n\r\nIt use the query string stored in list fdw_private.\r\nHowever, the \"fmstate->query\" will also point to the string in fdw_private,\r\nby postgresBeginForeignModify --> create_foreign_modify --> \"fmstate->query = query;\"\r\n\r\nAnd in execute_foreign_modify(), \" fmstate->query \" will be freed when rebuild the query\r\nstring to do the batch insert like the following:\r\n\r\n----------------\r\nif (operation == CMD_INSERT && fmstate->num_slots != *numSlots) \r\n{ \r\n... \r\n /* Build INSERT string with numSlots records in its VALUES clause. 
*/\r\n initStringInfo(&sql); \r\n rebuildInsertSql(&sql, fmstate->orig_query, fmstate->values_end,\r\n fmstate->p_nums, *numSlots - 1)\r\n** pfree(fmstate->query); \r\n fmstate->query = sql.data;\r\n----------------\r\n\r\nSo, it output the freed pointer as \"Remote SQL\".\r\n\r\nFor the fix.\r\nThe query string could be rebuilt depending on the numSlots,\r\nwhich query string should be output ?\r\nshould we just output the original query string like the attached patch ?\r\nOr should we output the last one?\r\n\r\nBest regards,\r\nhouzj", "msg_date": "Fri, 7 May 2021 12:46:44 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: batch fdw insert bug (Postgres 14)" }, { "msg_contents": "\nOn 5/7/21 2:46 PM, houzj.fnst@fujitsu.com wrote:\n> \n>> I am testing new features in Postgres 14, and I found bug\n>> EXPLAIN ANALYZE VERBOSE  for insert to FDW table with batch_size 1000 returns\n>> -------------------------------------------------------------------------------------------------------------------------------\n>>  Insert on public.vzdalena_tabulka2  (cost=0.00..175.00 rows=0 width=0) (actual time=30.269..30.270 rows=0 loops=1)\n>>    Remote SQL: \\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\n>>    Batch Size: 1000\n>>    ->  Function Scan on pg_catalog.generate_series g  (cost=0.00..175.00 rows=10000 width=36) (actual time=0.453..1.988 rows=10\n>>          Output: g.i, ('AHOJ'::text || (g.i)::text)\n>>          Function Call: generate_series(1, 10000)\n>>  Planning Time: 0.075 ms\n>>  Execution Time: 31.032 ms\n>> (8 rows)\n>> reproducer\n> \n> I can reproduce the issue and did some basic analysis on it.\n> \n> The \"Remote SQL\" is built from the following code:\n> \n> ----------------\n> \t\tchar\t *sql = strVal(list_nth(fdw_private,\n> \t\t\t\t\t\t\t\t\t\t FdwModifyPrivateUpdateSql));\n> \n> 
\t\tExplainPropertyText(\"Remote SQL\", sql, es);\n> ---------------\n> \n> It use the query string stored in list fdw_private.\n> However, the \"fmstate->query\" will also point to the string in fdw_private,\n> by postgresBeginForeignModify --> create_foreign_modify --> \"fmstate->query = query;\"\n> \n> And in execute_foreign_modify(), \" fmstate->query \" will be freed when rebuild the query\n> string to do the batch insert like the following:\n> \n> ----------------\n> if (operation == CMD_INSERT && fmstate->num_slots != *numSlots)\n> {\n> ...\n> /* Build INSERT string with numSlots records in its VALUES clause. */\n> initStringInfo(&sql);\n> rebuildInsertSql(&sql, fmstate->orig_query, fmstate->values_end,\n> fmstate->p_nums, *numSlots - 1)\n> ** pfree(fmstate->query);\n> fmstate->query = sql.data;\n> ----------------\n> \n> So, it output the freed pointer as \"Remote SQL\".\n> \n> For the fix.\n> The query string could be rebuilt depending on the numSlots,\n> which query string should be output ?\n> should we just output the original query string like the attached patch ?\n> Or should we output the last one?\n> \n\nYeah. The problem is we build fdw_private list once (which references \nthe SQL string), and during execution we may pfree() it. But then \nEXPLAIN ANALYZE gets the same fdw_private list and tries to use the SQL \nstring which we pfreed() already.\n\nI think the simplest fix is simply to pstrdup() the query when building \nfmstate in create_foreign_modify. We've already been doing that for \norig_query anyway. That seems cleaner than printing the last query we \nbuild (which would be confusing I think).\n\nI've pushed a fix doing that. 
We only need that for INSERT queries, and \nwe might even restrict that to cases with batching if needed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 May 2021 22:43:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: batch fdw insert bug (Postgres 14)" }, { "msg_contents": "pá 7. 5. 2021 v 22:43 odesílatel Tomas Vondra <tomas.vondra@enterprisedb.com>\nnapsal:\n\n>\n> On 5/7/21 2:46 PM, houzj.fnst@fujitsu.com wrote:\n> >\n> >> I am testing new features in Postgres 14, and I found bug\n> >> EXPLAIN ANALYZE VERBOSE for insert to FDW table with batch_size 1000\n> returns\n> >>\n> -------------------------------------------------------------------------------------------------------------------------------\n> >> Insert on public.vzdalena_tabulka2 (cost=0.00..175.00 rows=0\n> width=0) (actual time=30.269..30.270 rows=0 loops=1)\n> >> Remote SQL:\n> \\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\n> >> Batch Size: 1000\n> >> -> Function Scan on pg_catalog.generate_series g\n> (cost=0.00..175.00 rows=10000 width=36) (actual time=0.453..1.988 rows=10\n> >> Output: g.i, ('AHOJ'::text || (g.i)::text)\n> >> Function Call: generate_series(1, 10000)\n> >> Planning Time: 0.075 ms\n> >> Execution Time: 31.032 ms\n> >> (8 rows)\n> >> reproducer\n> >\n> > I can reproduce the issue and did some basic analysis on it.\n> >\n> > The \"Remote SQL\" is built from the following code:\n> >\n> > ----------------\n> > char *sql = strVal(list_nth(fdw_private,\n> >\n> FdwModifyPrivateUpdateSql));\n> >\n> > ExplainPropertyText(\"Remote SQL\", sql, es);\n> > ---------------\n> >\n> > It use the query string stored in list fdw_private.\n> > However, the \"fmstate->query\" will also point to the string in\n> fdw_private,\n> > by 
postgresBeginForeignModify --> create_foreign_modify -->\n> \"fmstate->query = query;\"\n> >\n> > And in execute_foreign_modify(), \" fmstate->query \" will be freed when\n> rebuild the query\n> > string to do the batch insert like the following:\n> >\n> > ----------------\n> > if (operation == CMD_INSERT && fmstate->num_slots != *numSlots)\n> > {\n> > ...\n> >          /* Build INSERT string with numSlots records in its VALUES\n> clause. */\n> >          initStringInfo(&sql);\n> >          rebuildInsertSql(&sql, fmstate->orig_query, fmstate->values_end,\n> >                                           fmstate->p_nums, *numSlots - 1)\n> > **      pfree(fmstate->query);\n> >          fmstate->query = sql.data;\n> > ----------------\n> >\n> > So, it output the freed pointer as \"Remote SQL\".\n> >\n> > For the fix.\n> > The query string could be rebuilt depending on the numSlots,\n> > which query string should be output ?\n> > should we just output the original query string like the attached patch ?\n> > Or should we output the last one?\n> >\n>\n> Yeah. The problem is we build fdw_private list once (which references\n> the SQL string), and during execution we may pfree() it. But then\n> EXPLAIN ANALYZE gets the same fdw_private list and tries to use the SQL\n> string which we pfreed() already.\n>\n> I think the simplest fix is simply to pstrdup() the query when building\n> fmstate in create_foreign_modify. We've already been doing that for\n> orig_query anyway. That seems cleaner than printing the last query we\n> build (which would be confusing I think).\n>\n> I've pushed a fix doing that. We only need that for INSERT queries, and\n> we might even restrict that to cases with batching if needed.\n>\n\nGreat\n\nThank you\n\nPavel\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Fri, 7 May 2021 22:46:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: batch fdw insert bug (Postgres 14)" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> I think the simplest fix is simply to pstrdup() the query when building\r\n> fmstate in create_foreign_modify. We've already been doing that for\r\n> orig_query anyway. That seems cleaner than printing the last query we\r\n> build (which would be confusing I think).\r\n> \r\n> I've pushed a fix doing that. 
We only need that for INSERT queries, and\r\n> we might even restrict that to cases with batching if needed.\r\n\r\nThank you for investigating and committing the fix. (I'm relieved that the feature was not reverted.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 10 May 2021 01:19:33 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: batch fdw insert bug (Postgres 14)" } ]
[ { "msg_contents": "Hi,\n\nI am not convinced with the following behaviour of RANGE Partitioning.\nKindly let me know if this is expected behaviour or it should be changed.\n\n*Case-1*:\npostgres@68941=#create table r(a int, b int) partition by range(a,b);\nCREATE TABLE\npostgres@68941=#create table r1 partition of r for values from (100,0) to\n(200,100);\nCREATE TABLE\npostgres@68941=#create table r2 partition of r for values from (400,200) to\n(500,300);\nCREATE TABLE\npostgres@68941=#create table r3 partition of r for values from (0,100) to\n(100,200);\nERROR: partition \"r3\" would overlap partition \"r1\"\nLINE 1: ...able r3 partition of r for values from (0,100) to (100,200);\n\nAs we can see here, I am trying to create a partition table with ranges\nfrom (0,100) to (100,200)\nwhich is actually not overlapped with any of the existing partitions. But I\nam getting error saying,\nit overlaps with partition 'r1'.\n\n*Case-2:*\npostgres@68941=#\\d+ r\n Partitioned table \"public.r\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | | | plain |\n| |\n b | integer | | | | plain |\n| |\nPartition key: RANGE (a, b)\nPartitions: r1 FOR VALUES FROM (100, 0) TO (200, 100),\n r2 FOR VALUES FROM (400, 200) TO (500, 300),\n r3 FOR VALUES FROM (200, 100) TO (300, 200)\n\npostgres@68941=#insert into r values(300, 50);\nINSERT 0 1\npostgres@68941=#select * from r3;\n a | b\n-----+-----\n 300 | 50\n(2 rows)\n\nAs per my understanding, in the range partitioned table, lower bound is\nincluded and upper bound is excluded.\nand in case of multi-column partition keys, the row comparison operator is\nused for tuple routing which means\nthe columns are compared left to right. If the partition key value is equal\nto the upper bound of that column then\nthe next column will be considered.\n\nSo, In case of insertion of row (300, 50). Based on the understanding,\npartition 'r3' should have rejected it.\n\nKindly confirm whether the above is expected or not. If expected, kindly\nexplain.\n\nThanks and Regards,\nNitin Jadhav", "msg_date": "Fri, 7 May 2021 16:20:39 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Query regarding RANGE Partitioning" }, { "msg_contents": "On Fri, May 7, 2021 at 4:21 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I am not convinced with the following behaviour of RANGE Partitioning.\n> Kindly let me know if this is expected behaviour or it should be changed.\n>\n> Case-1:\n> postgres@68941=#create table r(a int, b int) partition by range(a,b);\n> CREATE TABLE\n> postgres@68941=#create table r1 partition of r for values from (100,0) to (200,100);\n> CREATE TABLE\n> postgres@68941=#create table r2 partition of r for values from (400,200) to (500,300);\n> CREATE TABLE\n> postgres@68941=#create table r3 partition of r for values from (0,100) to (100,200);\n> ERROR: partition \"r3\" would overlap partition \"r1\"\n> LINE 1: ...able r3 partition of r for values from (0,100) to (100,200);\n>\n> As we can see here, I am trying to create a partition table with ranges from (0,100) to (100,200)\n> which is actually not overlapped with any of the existing partitions. But I am getting error saying,\n> it overlaps with partition 'r1'.
Case-2:postgres@68941=#\\d+ r                                      Partitioned table \"public.r\" Column |  Type   | Collation | Nullable | Default | Storage | Compression | Stats target | Description --------+---------+-----------+----------+---------+---------+-------------+--------------+------------- a      | integer |           |          |         | plain   |             |              |  b      | integer |           |          |         | plain   |             |              | Partition key: RANGE (a, b)Partitions: r1 FOR VALUES FROM (100, 0) TO (200, 100),            r2 FOR VALUES FROM (400, 200) TO (500, 300),            r3 FOR VALUES FROM (200, 100) TO (300, 200)postgres@68941=#insert into r values(300, 50);INSERT 0 1postgres@68941=#select * from r3;  a  |  b  -----+----- 300 |  50(2 rows)As per my understanding, in the range partitioned table, lower bound is included and upper bound is excluded.and in case of multi-column partition keys, the row comparison operator is used for tuple routing which meansthe columns are compared left to right. If the partition key value is equal to the upper bound of that column then the next column will be considered.So, In case of insertion of row (300, 50). Based on the understanding, partition 'r3' should have rejected it.Kindly confirm whether the above is expected or not. 
If expected, kindly explain.Thanks and Regards,Nitin Jadhav", "msg_date": "Fri, 7 May 2021 16:20:39 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Query regarding RANGE Partitioning" }, { "msg_contents": "On Fri, May 7, 2021 at 4:21 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I am not convinced with the following behaviour of RANGE Partitioning.\n> Kindly let me know if this is expected behaviour or it should be changed.\n>\n> Case-1:\n> postgres@68941=#create table r(a int, b int) partition by range(a,b);\n> CREATE TABLE\n> postgres@68941=#create table r1 partition of r for values from (100,0) to (200,100);\n> CREATE TABLE\n> postgres@68941=#create table r2 partition of r for values from (400,200) to (500,300);\n> CREATE TABLE\n> postgres@68941=#create table r3 partition of r for values from (0,100) to (100,200);\n> ERROR: partition \"r3\" would overlap partition \"r1\"\n> LINE 1: ...able r3 partition of r for values from (0,100) to (100,200);\n>\n> As we can see here, I am trying to create a partition table with ranges from (0,100) to (100,200)\n> which is actually not overlapped with any of the existing partitions. 
But I am getting error saying,\n> it overlaps with partition 'r1'.\n\noverlapping range is (100, 0), (100, 200)\n\n>\n> Case-2:\n> postgres@68941=#\\d+ r\n> Partitioned table \"public.r\"\n> Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | integer | | | | plain | | |\n> b | integer | | | | plain | | |\n> Partition key: RANGE (a, b)\n> Partitions: r1 FOR VALUES FROM (100, 0) TO (200, 100),\n> r2 FOR VALUES FROM (400, 200) TO (500, 300),\n> r3 FOR VALUES FROM (200, 100) TO (300, 200)\n>\n> postgres@68941=#insert into r values(300, 50);\n> INSERT 0 1\n> postgres@68941=#select * from r3;\n> a | b\n> -----+-----\n> 300 | 50\n> (2 rows)\n>\n> As per my understanding, in the range partitioned table, lower bound is included and upper bound is excluded.\n> and in case of multi-column partition keys, the row comparison operator is used for tuple routing which means\n> the columns are compared left to right. If the partition key value is equal to the upper bound of that column then\n> the next column will be considered.\n>\n> So, In case of insertion of row (300, 50). Based on the understanding, partition 'r3' should have rejected it.\n\nr3 contains (300, 0) to (300, 200) which contains (300, 50). First key\n300 is equal to upper bound 300, so it compares 50, which is less than\nthe upper bound of the second column. 
Am I missing something?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 7 May 2021 16:44:01 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query regarding RANGE Partitioning" }, { "msg_contents": "Hi Nitin,\n\nOn Fri, May 7, 2021 at 4:21 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> Hi,\n>\n> I am not convinced with the following behaviour of RANGE Partitioning.\n> Kindly let me know if this is expected behaviour or it should be changed.\n>\n> *Case-1*:\n> postgres@68941=#create table r(a int, b int) partition by range(a,b);\n> CREATE TABLE\n> postgres@68941=#create table r1 partition of r for values from (100,0) to\n> (200,100);\n> CREATE TABLE\n> postgres@68941=#create table r2 partition of r for values from (400,200)\n> to (500,300);\n> CREATE TABLE\n> postgres@68941=#create table r3 partition of r for values from (0,100) to\n> (100,200);\n> ERROR: partition \"r3\" would overlap partition \"r1\"\n> LINE 1: ...able r3 partition of r for values from (0,100) to (100,200);\n>\n> As we can see here, I am trying to create a partition table with ranges\n> from (0,100) to (100,200)\n> which is actually not overlapped with any of the existing partitions. 
But\n> I am getting error saying,\n> it overlaps with partition 'r1'.\n>\n>\n*Case-2:*\n> postgres@68941=#\\d+ r\n> Partitioned table \"public.r\"\n> Column | Type | Collation | Nullable | Default | Storage | Compression\n> | Stats target | Description\n>\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | integer | | | | plain |\n> | |\n> b | integer | | | | plain |\n> | |\n> Partition key: RANGE (a, b)\n> Partitions: r1 FOR VALUES FROM (100, 0) TO (200, 100),\n> r2 FOR VALUES FROM (400, 200) TO (500, 300),\n> r3 FOR VALUES FROM (200, 100) TO (300, 200)\n>\n> postgres@68941=#insert into r values(300, 50);\n> INSERT 0 1\n> postgres@68941=#select * from r3;\n> a | b\n> -----+-----\n> 300 | 50\n> (2 rows)\n>\n> As per my understanding, in the range partitioned table, lower bound is\n> included and upper bound is excluded.\n> and in case of multi-column partition keys, the row comparison operator is\n> used for tuple routing which means\n> the columns are compared left to right. If the partition key value is\n> equal to the upper bound of that column then\n> the next column will be considered.\n>\n> So, In case of insertion of row (300, 50). Based on the understanding,\n> partition 'r3' should have rejected it.\n>\n> Kindly confirm whether the above is expected or not. 
If expected, kindly\n> explain.\n>\n\nIf you describe the partition r3, you can see the way partition\nconstraints are formed:\n\npostgres=# \\d+ r3\n Table \"public.r3\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | | | plain |\n| |\n b | integer | | | | plain |\n| |\nPartition of: r FOR VALUES FROM (200, 100) TO (300, 200)\nPartition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 200)\nOR ((a = 200) AND (b >= 100))) AND ((a < 300) OR ((a = 300) AND (b < 200))))\nAccess method: heap\n\nThe above constraint very well fits the tuple you are trying to insert\nthat is: (a, b) = (300, 50) (where (a = 300) AND (b < 200))\n\nAlso, the table partition syntax documentation[1]\n<https://www.postgresql.org/docs/current/sql-createtable.html>clarifies\nthis (look\nfor \"partition_bound_expr\"):\n\n\"When creating a range partition, the lower bound specified with\nFROM is an inclusive bound, whereas the upper bound specified with\nTO is an exclusive bound. That is, the values specified in the FROM\nlist are valid values of the corresponding partition key columns\nfor this partition, whereas those in the TO list are not. Note that\nthis statement must be understood according to the rules of row-wise\ncomparison (Section 9.24.5). 
For example, given PARTITION BY RANGE\n(x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,\nx=2 with any non-null y, and x=3 with any y<4.\"\n\nSo, in your case the partition (a, b) for bound (200, 100) TO (300, 200)\nwould transform to allowing:\na = 200 with any b >= 100 OR\na > 200 and a < 300 with any non-null b\nOR a=300 with any b<200\n\nYour particular tuple (300, 50) fits in the last part of the OR i.e\n(a=300 with any b<200).\n\nSo, IMHO, the range partitioning is behaving as expected.\n\nSimilarly, for the case-1 you mention above:\ncreate table r1 partition of r for values from (100,0) to (200,100);\ncreate table r3 partition of r for values from (0,100) to (100,200);\nhere, (100, 0) or r1 would overlap with (100, 200) of r3.\n\n\n[1] https://www.postgresql.org/docs/current/sql-createtable.html\n\nRegards,\nJeevan Ladhe", "msg_date": "Fri, 7 May 2021 18:23:06 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Query regarding RANGE Partitioning" }, { "msg_contents": "Thanks Ashutosh and Jeevan for replying.\n\n> \"When creating a range partition, the lower bound specified with\n> FROM is an inclusive bound, whereas the upper bound specified with\n> TO is an exclusive bound. That is, the values specified in the FROM\n> list are valid values of the corresponding partition key columns\n> for this partition, whereas those in the TO list are not. Note that\n> this statement must be understood according to the rules of row-wise\n> comparison (Section 9.24.5). For example, given PARTITION BY RANGE\n> (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,\n> x=2 with any non-null y, and x=3 with any y<4.\"\n\nThanks for the detailed explanation. 
I understood more about how the\npartition constraints are prepared based on the RANGE bound values and\nhow the tuple routing takes place based on that.\n\n> overlapping range is (100, 0), (100, 200)\n\n> Similarly, for the case-1 you mention above:\n> create table r1 partition of r for values from (100,0) to (200,100);\n> create table r3 partition of r for values from (0,100) to (100,200);\n> here, (100, 0) or r1 would overlap with (100, 200) of r3.\n\npostgres@68941=#\\d+ r1\n Table \"public.r1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | | | plain |\n| |\n b | integer | | | | plain |\n| |\nPartition of: r FOR VALUES FROM (100, 0) TO (200, 100)\nPartition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 100)\nOR ((a = 100) AND (b >= 0))) AND ((a < 200) OR ((a = 200) AND (b < 100))))\nAccess method: heap\n\nI understand that partition 'r1' says, when column 'a' value is '100',\ncolumn 'b'\nvalues should be greater than '0'. Because of this constraint, creation of\npartition 'r3' for values from (0,100) to (100,200) failed since the\ncondition\nwhen value of column 'a' is 100, column 'b' should be less than '200' which\noverlaps with the constraints of 'r1'. So, based on the documentation, the\nbehaviour is correct.\n\nSo in the above scenarios, users cannot create a partition for column 'a'\nvalues\nfrom (0) to (100). If user tries insert any values for column 'a' between\n'0' to '100',\neither it should go to default partition if exists. Otherwise it should\nfail saying, no partition\nfound. 
I feel there should be some way to create partitions in these\nscenarios.\n\nPlease correct if I am wrong and please share your thoughts on this.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\nOn Fri, May 7, 2021 at 6:23 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> Hi Nitin,\n>\n> On Fri, May 7, 2021 at 4:21 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\n> wrote:\n>\n>> Hi,\n>>\n>> I am not convinced with the following behaviour of RANGE Partitioning.\n>> Kindly let me know if this is expected behaviour or it should be changed.\n>>\n>> *Case-1*:\n>> postgres@68941=#create table r(a int, b int) partition by range(a,b);\n>> CREATE TABLE\n>> postgres@68941=#create table r1 partition of r for values from (100,0)\n>> to (200,100);\n>> CREATE TABLE\n>> postgres@68941=#create table r2 partition of r for values from (400,200)\n>> to (500,300);\n>> CREATE TABLE\n>> postgres@68941=#create table r3 partition of r for values from (0,100)\n>> to (100,200);\n>> ERROR: partition \"r3\" would overlap partition \"r1\"\n>> LINE 1: ...able r3 partition of r for values from (0,100) to (100,200);\n>>\n>> As we can see here, I am trying to create a partition table with ranges\n>> from (0,100) to (100,200)\n>> which is actually not overlapped with any of the existing partitions. 
But\n>> I am getting error saying,\n>> it overlaps with partition 'r1'.\n>>\n>>\n> *Case-2:*\n>> postgres@68941=#\\d+ r\n>> Partitioned table \"public.r\"\n>> Column | Type | Collation | Nullable | Default | Storage |\n>> Compression | Stats target | Description\n>>\n>> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n>> a | integer | | | | plain |\n>> | |\n>> b | integer | | | | plain |\n>> | |\n>> Partition key: RANGE (a, b)\n>> Partitions: r1 FOR VALUES FROM (100, 0) TO (200, 100),\n>> r2 FOR VALUES FROM (400, 200) TO (500, 300),\n>> r3 FOR VALUES FROM (200, 100) TO (300, 200)\n>>\n>> postgres@68941=#insert into r values(300, 50);\n>> INSERT 0 1\n>> postgres@68941=#select * from r3;\n>> a | b\n>> -----+-----\n>> 300 | 50\n>> (2 rows)\n>>\n>> As per my understanding, in the range partitioned table, lower bound is\n>> included and upper bound is excluded.\n>> and in case of multi-column partition keys, the row comparison operator\n>> is used for tuple routing which means\n>> the columns are compared left to right. If the partition key value is\n>> equal to the upper bound of that column then\n>> the next column will be considered.\n>>\n>> So, In case of insertion of row (300, 50). Based on the understanding,\n>> partition 'r3' should have rejected it.\n>>\n>> Kindly confirm whether the above is expected or not. 
If expected, kindly\n>> explain.\n>>\n>\n> If you describe the partition r3, you can see the way partition\n> constraints are formed:\n>\n> postgres=# \\d+ r3\n> Table \"public.r3\"\n> Column | Type | Collation | Nullable | Default | Storage | Compression\n> | Stats target | Description\n>\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | integer | | | | plain |\n> | |\n> b | integer | | | | plain |\n> | |\n> Partition of: r FOR VALUES FROM (200, 100) TO (300, 200)\n> Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 200)\n> OR ((a = 200) AND (b >= 100))) AND ((a < 300) OR ((a = 300) AND (b < 200))))\n> Access method: heap\n>\n> The above constraint very well fits the tuple you are trying to insert\n> that is: (a, b) = (300, 50) (where (a = 300) AND (b < 200))\n>\n> Also, the table partition syntax documentation[1]\n> <https://www.postgresql.org/docs/current/sql-createtable.html>clarifies\n> this (look\n> for \"partition_bound_expr\"):\n>\n> \"When creating a range partition, the lower bound specified with\n> FROM is an inclusive bound, whereas the upper bound specified with\n> TO is an exclusive bound. That is, the values specified in the FROM\n> list are valid values of the corresponding partition key columns\n> for this partition, whereas those in the TO list are not. Note that\n> this statement must be understood according to the rules of row-wise\n> comparison (Section 9.24.5). 
For example, given PARTITION BY RANGE\n> (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,\n> x=2 with any non-null y, and x=3 with any y<4.\"\n>\n> So, in your case the partition (a, b) for bound (200, 100) TO (300, 200)\n> would transform to allowing:\n> a = 200 with any b >= 100 OR\n> a > 200 and a < 300 with any non-null b\n> OR a=300 with any b<200\n>\n> Your particular tuple (300, 50) fits in the last part of the OR i.e\n> (a=300 with any b<200).\n>\n> So, IMHO, the range partitioning is behaving as expected.\n>\n> Similarly, for the case-1 you mention above:\n> create table r1 partition of r for values from (100,0) to (200,100);\n> create table r3 partition of r for values from (0,100) to (100,200);\n> here, (100, 0) or r1 would overlap with (100, 200) of r3.\n>\n>\n> [1] https://www.postgresql.org/docs/current/sql-createtable.html\n>\n> Regards,\n> Jeevan Ladhe\n>\n\nThanks Ashutosh and Jeevan for replying.>  \"When creating a range partition, the lower bound specified with> FROM is an inclusive bound, whereas the upper bound specified with> TO is an exclusive bound. That is, the values specified in the FROM> list are valid values of the corresponding partition key columns> for this partition, whereas those in the TO list are not. Note that> this statement must be understood according to the rules of row-wise> comparison (Section 9.24.5). For example, given PARTITION BY RANGE> (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,> x=2 with any non-null y, and x=3 with any y<4.\"Thanks for the detailed explanation. 
For example, given PARTITION BY RANGE(x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,x=2 with any non-null y, and x=3 with any y<4.\"So, in your case the partition (a, b) for bound (200, 100) TO (300, 200)would transform to allowing:a = 200 with any b >= 100 ORa > 200 and a < 300 with any non-null bOR a=300 with any b<200Your particular tuple (300, 50) fits in the last part of the OR i.e(a=300 with any b<200).So, IMHO, the range partitioning is behaving as expected.Similarly, for the case-1 you mention above:create table r1 partition of r for values from (100,0) to (200,100);create table r3 partition of r for values from (0,100) to (100,200);here, (100, 0) or r1 would overlap with (100, 200) of r3.[1] https://www.postgresql.org/docs/current/sql-createtable.htmlRegards,Jeevan Ladhe", "msg_date": "Sat, 8 May 2021 13:49:58 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Query regarding RANGE Partitioning" }, { "msg_contents": "Hi Nitin,\n\nOn Sat, May 8, 2021 at 5:20 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks Ashutosh and Jeevan for replying.\n>\n> > \"When creating a range partition, the lower bound specified with\n> > FROM is an inclusive bound, whereas the upper bound specified with\n> > TO is an exclusive bound. That is, the values specified in the FROM\n> > list are valid values of the corresponding partition key columns\n> > for this partition, whereas those in the TO list are not. Note that\n> > this statement must be understood according to the rules of row-wise\n> > comparison (Section 9.24.5). For example, given PARTITION BY RANGE\n> > (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2,\n> > x=2 with any non-null y, and x=3 with any y<4.\"\n>\n> Thanks for the detailed explanation. 
I understood more about how the\n> partition constraints are prepared based on the RANGE bound values and\n> how the tuple routing takes place based on that.\n>\n> > overlapping range is (100, 0), (100, 200)\n>\n> > Similarly, for the case-1 you mention above:\n> > create table r1 partition of r for values from (100,0) to (200,100);\n> > create table r3 partition of r for values from (0,100) to (100,200);\n> > here, (100, 0) or r1 would overlap with (100, 200) of r3.\n\nThanks Jeevan, that's right.\n\nAnother way to look at this: the partition key (100, 0) would be\ninsertable into r3, because the key satisfies its proposed exclusive\nupper bound (< (100, 200)). The same key is also insertable into r1,\nbecause it satisfies the latter's inclusive upper bound (>= (100, 0)).\nThat is, the key (100, 0) is insertable into both r1 and r3, so the\nerror that the proposed range of r3 would overlap r1's.\n\n> postgres@68941=#\\d+ r1\n> Table \"public.r1\"\n> Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | integer | | | | plain | | |\n> b | integer | | | | plain | | |\n> Partition of: r FOR VALUES FROM (100, 0) TO (200, 100)\n> Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 100) OR ((a = 100) AND (b >= 0))) AND ((a < 200) OR ((a = 200) AND (b < 100))))\n> Access method: heap\n>\n> I understand that partition 'r1' says, when column 'a' value is '100', column 'b'\n> values should be greater than '0'. Because of this constraint, creation of\n> partition 'r3' for values from (0,100) to (100,200) failed since the condition\n> when value of column 'a' is 100, column 'b' should be less than '200' which\n> overlaps with the constraints of 'r1'. 
So, based on the documentation, the\n> behaviour is correct.\n>\n> So in the above scenarios, users cannot create a partition for column 'a' values\n> from (0) to (100). If user tries insert any values for column 'a' between '0' to '100',\n> either it should go to default partition if exists. Otherwise it should fail saying, no partition\n> found. I feel there should be some way to create partitions in these scenarios.\n\nWell, you simply need to come up with bound values for r3 that don't\noverlap with existing partitions' ranges; the following will work for\nexample:\n\ncreate table r3 partition of r for values from (0,100) to (100,0);\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 21:11:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query regarding RANGE Partitioning" }, { "msg_contents": "Hi Amit,\n\n> Thanks Jeevan, that's right.\n\n> Another way to look at this: the partition key (100, 0) would be\n> insertable into r3, because the key satisfies its proposed exclusive\n> upper bound (< (100, 200)). 
The same key is also insertable into r1,\n> because it satisfies the latter's inclusive upper bound (>= (100, 0)).\n> That is, the key (100, 0) is insertable into both r1 and r3, so the\n> error that the proposed range of r3 would overlap r1's.\n\nThanks for the explanation.\n\n> Well, you simply need to come up with bound values for r3 that don't\n> overlap with existing partitions' ranges; the following will work for\n> example:\n>\n> create table r3 partition of r for values from (0,100) to (100,0);\n\nThanks for the clarification.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, May 12, 2021 at 5:41 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi Nitin,\n>\n> On Sat, May 8, 2021 at 5:20 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Thanks Ashutosh and Jeevan for replying.\n> >\n> > > \"When creating a range partition, the lower bound specified with\n> > > FROM is an inclusive bound, whereas the upper bound specified with\n> > > TO is an exclusive bound. That is, the values specified in the FROM\n> > > list are valid values of the corresponding partition key columns\n> > > for this partition, whereas those in the TO list are not. Note that\n> > > this statement must be understood according to the rules of row-wise\n> > > comparison (Section 9.24.5). For example, given PARTITION BY RANGE\n> > > (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any\n> y>=2,\n> > > x=2 with any non-null y, and x=3 with any y<4.\"\n> >\n> > Thanks for the detailed explanation. 
I understood more about how the\n> > partition constraints are prepared based on the RANGE bound values and\n> > how the tuple routing takes place based on that.\n> >\n> > > overlapping range is (100, 0), (100, 200)\n> >\n> > > Similarly, for the case-1 you mention above:\n> > > create table r1 partition of r for values from (100,0) to (200,100);\n> > > create table r3 partition of r for values from (0,100) to (100,200);\n> > > here, (100, 0) or r1 would overlap with (100, 200) of r3.\n>\n> Thanks Jeevan, that's right.\n>\n> Another way to look at this: the partition key (100, 0) would be\n> insertable into r3, because the key satisfies its proposed exclusive\n> upper bound (< (100, 200)). The same key is also insertable into r1,\n> because it satisfies the latter's inclusive upper bound (>= (100, 0)).\n> That is, the key (100, 0) is insertable into both r1 and r3, so the\n> error that the proposed range of r3 would overlap r1's.\n>\n> > postgres@68941=#\\d+ r1\n> > Table \"public.r1\"\n> > Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> >\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> > a | integer | | | | plain |\n> | |\n> > b | integer | | | | plain |\n> | |\n> > Partition of: r FOR VALUES FROM (100, 0) TO (200, 100)\n> > Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a >\n> 100) OR ((a = 100) AND (b >= 0))) AND ((a < 200) OR ((a = 200) AND (b <\n> 100))))\n> > Access method: heap\n> >\n> > I understand that partition 'r1' says, when column 'a' value is '100',\n> column 'b'\n> > values should be greater than '0'. Because of this constraint, creation\n> of\n> > partition 'r3' for values from (0,100) to (100,200) failed since the\n> condition\n> > when value of column 'a' is 100, column 'b' should be less than '200'\n> which\n> > overlaps with the constraints of 'r1'. 
So, based on the documentation,\n> the\n> > behaviour is correct.\n> >\n> > So in the above scenarios, users cannot create a partition for column\n> 'a' values\n> > from (0) to (100). If user tries insert any values for column 'a'\n> between '0' to '100',\n> > either it should go to default partition if exists. Otherwise it should\n> fail saying, no partition\n> > found. I feel there should be some way to create partitions in these\n> scenarios.\n>\n> Well, you simply need to come up with bound values for r3 that don't\n> overlap with existing partitions' ranges; the following will work for\n> example:\n>\n> create table r3 partition of r for values from (0,100) to (100,0);\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n\nHi Amit,> Thanks Jeevan, that's right.> Another way to look at this: the partition key (100, 0) would be> insertable into r3, because the key satisfies its proposed exclusive> upper bound (< (100, 200)).  The same key is also insertable into r1,> because it satisfies the latter's inclusive upper bound (>= (100, 0)).> That is, the key (100, 0) is insertable into both r1 and r3, so the> error that the proposed range of r3 would overlap r1's.Thanks for the explanation.> Well, you simply need to come up with bound values for r3 that don't> overlap with existing partitions' ranges; the following will work for> example:>> create table r3 partition of r for values from (0,100) to (100,0);Thanks for the clarification. Thanks & Regards,Nitin JadhavOn Wed, May 12, 2021 at 5:41 PM Amit Langote <amitlangote09@gmail.com> wrote:Hi Nitin,\n\nOn Sat, May 8, 2021 at 5:20 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks Ashutosh and Jeevan for replying.\n>\n> >  \"When creating a range partition, the lower bound specified with\n> > FROM is an inclusive bound, whereas the upper bound specified with\n> > TO is an exclusive bound. 
", "msg_date": "Wed, 12 May 2021 20:48:12 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Query regarding RANGE Partitioning" } ]
[ { "msg_contents": "I noticed this while working on the\nEXPLAIN-ANALYZE-for-async-capable-nodes issue:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nDELETE FROM async_pt;\n QUERY PLAN\n----------------------------------------------------------------\n Delete on public.async_pt\n Foreign Delete on public.async_p1 async_pt_1\n Foreign Delete on public.async_p2 async_pt_2\n Delete on public.async_p3 async_pt_3\n -> Append\n -> Async Foreign Delete on public.async_p1 async_pt_1\n Remote SQL: DELETE FROM public.base_tbl1\n -> Async Foreign Delete on public.async_p2 async_pt_2\n Remote SQL: DELETE FROM public.base_tbl2\n -> Seq Scan on public.async_p3 async_pt_3\n Output: async_pt_3.tableoid, async_pt_3.ctid\n(11 rows)\n\nDELETE FROM async_pt;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nThe cause for this would be that direct-update plans are mistakenly\ntreated as async-capable ones, as shown in the EXPLAIN output. To\nfix, I think we should modify postgresPlanDirectModify() so that it\nclears the async-capable flag if it is set. Attached is a patch for\nthat. 
Maybe I am missing something, though.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Sat, 8 May 2021 01:20:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "On Sat, May 08, 2021 at 01:20:51AM +0900, Etsuro Fujita wrote:\n> I noticed this while working on the\n> EXPLAIN-ANALYZE-for-async-capable-nodes issue:\n> \n> DELETE FROM async_pt;\n> server closed the connection unexpectedly\n\nConfirmed, +Tomas, and added at\nhttps://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 9 May 2021 19:20:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "On Mon, May 10, 2021 at 9:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sat, May 08, 2021 at 01:20:51AM +0900, Etsuro Fujita wrote:\n> > I noticed this while working on the\n> > EXPLAIN-ANALYZE-for-async-capable-nodes issue:\n> >\n> > DELETE FROM async_pt;\n> > server closed the connection unexpectedly\n>\n> Confirmed, +Tomas, and added at\n> https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\nThanks for that!\n\nMaybe my explanation was not good, but actually, this is a follow-up\nfor commits 27e1f1456 and 86dc90056, which were independently\ndiscussed and committed, and IIUC, the batch-insert work by Tomas\nwould not be related to this. 
So I’ll work on it unless Tom (or\nanyone else) wants to.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 10 May 2021 11:20:09 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "Fujita-san,\n\nOn Sat, May 8, 2021 at 1:21 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I noticed this while working on the\n> EXPLAIN-ANALYZE-for-async-capable-nodes issue:\n>\n> EXPLAIN (VERBOSE, COSTS OFF)\n> DELETE FROM async_pt;\n> QUERY PLAN\n> ----------------------------------------------------------------\n> Delete on public.async_pt\n> Foreign Delete on public.async_p1 async_pt_1\n> Foreign Delete on public.async_p2 async_pt_2\n> Delete on public.async_p3 async_pt_3\n> -> Append\n> -> Async Foreign Delete on public.async_p1 async_pt_1\n> Remote SQL: DELETE FROM public.base_tbl1\n> -> Async Foreign Delete on public.async_p2 async_pt_2\n> Remote SQL: DELETE FROM public.base_tbl2\n> -> Seq Scan on public.async_p3 async_pt_3\n> Output: async_pt_3.tableoid, async_pt_3.ctid\n> (11 rows)\n>\n> DELETE FROM async_pt;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\n> The cause for this would be that direct-update plans are mistakenly\n> treated as async-capable ones, as shown in the EXPLAIN output.\n\nI guess that happens because the ForeignScan nodes responsible for\nscanning or direct-updating/deleting from child foreign tables appear\nunder an Append as of 86dc90056, whereas before they would appear as\nchild plans of a ModifyTable node. IIUC, it's the Append that causes\nthe async_capable flag to be set in those ForeignScan nodes.\n\n> To\n> fix, I think we should modify postgresPlanDirectModify() so that it\n> clears the async-capable flag if it is set. Attached is a patch for\n> that. 
Maybe I am missing something, though.\n\nI see that your patch is to disable asynchronous execution in\nForeignScan nodes responsible for direct update/delete, but why not do\nthe same for other ForeignScan nodes too? Or the other way around --\nis it because fixing the crash that occurs in the former's case would\nbe a significant undertaking for little gain?\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 21:20:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "Amit-san,\n\nOn Mon, May 10, 2021 at 9:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, May 8, 2021 at 1:21 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > I noticed this while working on the\n> > EXPLAIN-ANALYZE-for-async-capable-nodes issue:\n> >\n> > EXPLAIN (VERBOSE, COSTS OFF)\n> > DELETE FROM async_pt;\n> > QUERY PLAN\n> > ----------------------------------------------------------------\n> > Delete on public.async_pt\n> > Foreign Delete on public.async_p1 async_pt_1\n> > Foreign Delete on public.async_p2 async_pt_2\n> > Delete on public.async_p3 async_pt_3\n> > -> Append\n> > -> Async Foreign Delete on public.async_p1 async_pt_1\n> > Remote SQL: DELETE FROM public.base_tbl1\n> > -> Async Foreign Delete on public.async_p2 async_pt_2\n> > Remote SQL: DELETE FROM public.base_tbl2\n> > -> Seq Scan on public.async_p3 async_pt_3\n> > Output: async_pt_3.tableoid, async_pt_3.ctid\n> > (11 rows)\n> >\n> > DELETE FROM async_pt;\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > connection to server was lost\n> >\n> > The cause for this would be that direct-update plans are mistakenly\n> > treated as async-capable ones, as shown in the EXPLAIN output.\n>\n> I guess that happens because the ForeignScan nodes responsible for\n> 
scanning or direct-updating/deleting from child foreign tables appear\n> under an Append as of 86dc90056, whereas before they would appear as\n> child plans of a ModifyTable node. IIUC, it's the Append that causes\n> the async_capable flag to be set in those ForeignScan nodes.\n\nThat's right.\n\nThe inherited update/delete work is great! Thanks for that!\n\n> > To\n> > fix, I think we should modify postgresPlanDirectModify() so that it\n> > clears the async-capable flag if it is set. Attached is a patch for\n> > that. Maybe I am missing something, though.\n>\n> I see that your patch is to disable asynchronous execution in\n> ForeignScan nodes responsible for direct update/delete, but why not do\n> the same for other ForeignScan nodes too?\n\nI just thought it would be better to execute other ForeignScan nodes\nasynchronously for performance, if they are async-capable.\n\n> Or the other way around --\n> is it because fixing the crash that occurs in the former's case would\n> be a significant undertaking for little gain?\n\nYeah, I think it would be a good idea to support \"Async Foreign\nDelete\" in the former's case. And actually, I tried to do so, but I\ndidn't, because it seemed to take time. 
I might be missing something,\nthough.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 11 May 2021 17:56:46 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "Fujita-san,\n\nOn Tue, May 11, 2021 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, May 10, 2021 at 9:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, May 8, 2021 at 1:21 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > I noticed this while working on the\n> > > EXPLAIN-ANALYZE-for-async-capable-nodes issue:\n> > >\n> > > EXPLAIN (VERBOSE, COSTS OFF)\n> > > DELETE FROM async_pt;\n> > > QUERY PLAN\n> > > ----------------------------------------------------------------\n> > > Delete on public.async_pt\n> > > Foreign Delete on public.async_p1 async_pt_1\n> > > Foreign Delete on public.async_p2 async_pt_2\n> > > Delete on public.async_p3 async_pt_3\n> > > -> Append\n> > > -> Async Foreign Delete on public.async_p1 async_pt_1\n> > > Remote SQL: DELETE FROM public.base_tbl1\n> > > -> Async Foreign Delete on public.async_p2 async_pt_2\n> > > Remote SQL: DELETE FROM public.base_tbl2\n> > > -> Seq Scan on public.async_p3 async_pt_3\n> > > Output: async_pt_3.tableoid, async_pt_3.ctid\n> > > (11 rows)\n> > >\n> > > DELETE FROM async_pt;\n> > > server closed the connection unexpectedly\n> > > This probably means the server terminated abnormally\n> > > before or while processing the request.\n> > > connection to server was lost\n> > >\n> > > The cause for this would be that direct-update plans are mistakenly\n> > > treated as async-capable ones, as shown in the EXPLAIN output.\n> >\n> > I guess that happens because the ForeignScan nodes responsible for\n> > scanning or direct-updating/deleting from child foreign tables appear\n> > under an Append as of 86dc90056, whereas before they would appear as\n> > child plans of a ModifyTable node. 
IIUC, it's the Append that causes\n> > the async_capable flag to be set in those ForeignScan nodes.\n>\n> That's right.\n>\n> The inherited update/delete work is great! Thanks for that!\n\nThanks.\n\n> > > To\n> > > fix, I think we should modify postgresPlanDirectModify() so that it\n> > > clears the async-capable flag if it is set. Attached is a patch for\n> > > that. Maybe I am missing something, though.\n> >\n> > I see that your patch is to disable asynchronous execution in\n> > ForeignScan nodes responsible for direct update/delete, but why not do\n> > the same for other ForeignScan nodes too?\n>\n> I just thought it would be better to execute other ForeignScan nodes\n> asynchronously for performance, if they are async-capable.\n\nOkay, so I take it that making these ForeignScan nodes (that only\nfetch the data) asynchronous doesn't interfere with update/delete\nsubsequently being performed over presumably the same connection to\nthe remote server.\n\n> > Or the other way around --\n> > is it because fixing the crash that occurs in the former's case would\n> > be a significant undertaking for little gain?\n>\n> Yeah, I think it would be a good idea to support \"Async Foreign\n> Delete\" in the former's case. And actually, I tried to do so, but I\n> didn't, because it seemed to take time.\n\nAh I see. 
I guess it makes sense to prevent such cases in v14 as your\npatch does, and revisit this in the future.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 21:53:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "Amit-san,\n\nOn Tue, May 11, 2021 at 9:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, May 11, 2021 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Mon, May 10, 2021 at 9:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Sat, May 8, 2021 at 1:21 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n\n> > > > To\n> > > > fix, I think we should modify postgresPlanDirectModify() so that it\n> > > > clears the async-capable flag if it is set. Attached is a patch for\n> > > > that. Maybe I am missing something, though.\n> > >\n> > > I see that your patch is to disable asynchronous execution in\n> > > ForeignScan nodes responsible for direct update/delete, but why not do\n> > > the same for other ForeignScan nodes too?\n> >\n> > I just thought it would be better to execute other ForeignScan nodes\n> > asynchronously for performance, if they are async-capable.\n>\n> Okay, so I take it that making these ForeignScan nodes (that only\n> fetch the data) asynchronous doesn't interfere with update/delete\n> subsequently being performed over presumably the same connection to\n> the remote server.\n\nGood point! I don't think it would interfere with the update/delete,\nbecause in that case postgres_fdw would actually perform the\nupdate/delete and the asynchronous foreign scans serially rather than\nconcurrently. 
(They wouldn't be perfomed in parallel unless they use\ndifferent connections, in other words.)\n\n> > > Or the other way around --\n> > > is it because fixing the crash that occurs in the former's case would\n> > > be a significant undertaking for little gain?\n> >\n> > Yeah, I think it would be a good idea to support \"Async Foreign\n> > Delete\" in the former's case. And actually, I tried to do so, but I\n> > didn't, because it seemed to take time.\n>\n> Ah I see. I guess it makes sense to prevent such cases in v14 as your\n> patch does, and revisit this in the future.\n\n+1\n\nHere is a rebased version of the patch. I'm planning to apply this tommorow.\n\nThanks for the comment!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 12 May 2021 18:44:56 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "On Wed, May 12, 2021 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 11, 2021 at 9:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Okay, so I take it that making these ForeignScan nodes (that only\n> > fetch the data) asynchronous doesn't interfere with update/delete\n> > subsequently being performed over presumably the same connection to\n> > the remote server.\n>\n> Good point! I don't think it would interfere with the update/delete,\n> because in that case postgres_fdw would actually perform the\n> update/delete and the asynchronous foreign scans serially rather than\n> concurrently. (They wouldn't be perfomed in parallel unless they use\n> different connections, in other words.)\n\nI see, that makes sense.\n\n> > > > Or the other way around --\n> > > > is it because fixing the crash that occurs in the former's case would\n> > > > be a significant undertaking for little gain?\n> > >\n> > > Yeah, I think it would be a good idea to support \"Async Foreign\n> > > Delete\" in the former's case. 
And actually, I tried to do so, but I\n> > > didn't, because it seemed to take time.\n> >\n> > Ah I see. I guess it makes sense to prevent such cases in v14 as your\n> > patch does, and revisit this in the future.\n>\n> +1\n>\n> Here is a rebased version of the patch. I'm planning to apply this tommorow.\n\n+ /*\n+ * Finally, unset the async-capable flag if it is set.\n+ */\n\nWould it make sense to expand here even just a bit on why we must do this?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 15:32:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "On Thu, May 13, 2021 at 3:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, May 12, 2021 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Here is a rebased version of the patch. I'm planning to apply this tommorow.\n>\n> + /*\n> + * Finally, unset the async-capable flag if it is set.\n> + */\n>\n> Would it make sense to expand here even just a bit on why we must do this?\n\n+1 How about something like this?\n\n\"Finally, unset the async-capable flag if it is set, as we currently\ndon't support asynchronous execution of direct modifications.\"\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 13 May 2021 17:00:22 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "On Thu, May 13, 2021 at 5:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 3:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, May 12, 2021 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > Here is a rebased version of the patch. 
I'm planning to apply this tommorow.\n> >\n> > + /*\n> > + * Finally, unset the async-capable flag if it is set.\n> > + */\n> >\n> > Would it make sense to expand here even just a bit on why we must do this?\n>\n> +1 How about something like this?\n>\n> \"Finally, unset the async-capable flag if it is set, as we currently\n> don't support asynchronous execution of direct modifications.\"\n\nPushed after modifying the comment as such. I think we could improve\nit later. :-)\n\nThanks for the comment!\n\nWill close this in the open items list.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 13 May 2021 20:10:52 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" }, { "msg_contents": "On Thu, May 13, 2021 at 8:10 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, May 13, 2021 at 5:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Thu, May 13, 2021 at 3:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Wed, May 12, 2021 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > Here is a rebased version of the patch. I'm planning to apply this tommorow.\n> > >\n> > > + /*\n> > > + * Finally, unset the async-capable flag if it is set.\n> > > + */\n> > >\n> > > Would it make sense to expand here even just a bit on why we must do this?\n> >\n> > +1 How about something like this?\n> >\n> > \"Finally, unset the async-capable flag if it is set, as we currently\n> > don't support asynchronous execution of direct modifications.\"\n>\n> Pushed after modifying the comment as such. I think we could improve\n> it later. :-)\n\nLooks good as pushed, thank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 21:09:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inherited UPDATE/DELETE vs async execution" } ]
[ { "msg_contents": "See\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7f4bab7f4a0e42ee9fa14707f726017b7869386b\n\nAs usual, please send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 12:22:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Draft back-branch release notes are up" }, { "msg_contents": "On Fri, 7 May 2021 at 18:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> See\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7f4bab7f4a0e42ee9fa14707f726017b7869386b\n>\n> As usual, please send comments/corrections by Sunday.\n\nI noticed only one potential issue.\n\nI see similar (if not duplicate) entries for a \"COMMIT AND CHAIN\"\nissue, committed at nearly the same time, and both by Fujii Masao. Are\nthese the same / should they be contained in one entry?\n\n> +Author: Fujii Masao <fujii@postgresql.org>\n> +Branch: master [8a55cb5ba] 2021-02-19 21:57:52 +0900\n\n> +Author: Fujii Masao <fujii@postgresql.org>\n> +Branch: master [fe06819f1] 2021-02-19 22:01:25 +0900\n\nThanks in advance,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 7 May 2021 18:43:54 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Draft back-branch release notes are up" }, { "msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> I see similar (if not duplicate) entries for a \"COMMIT AND CHAIN\"\n> issue, committed at nearly the same time, and both by Fujii Masao. 
Are\n> these the same / should they be contained in one entry?\n\n>> +Author: Fujii Masao <fujii@postgresql.org>\n>> +Branch: master [8a55cb5ba] 2021-02-19 21:57:52 +0900\n\n>> +Author: Fujii Masao <fujii@postgresql.org>\n>> +Branch: master [fe06819f1] 2021-02-19 22:01:25 +0900\n\nNo, the first is a server bug, the second is a psql bug.\n\nThanks for looking though!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 13:00:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Draft back-branch release notes are up" }, { "msg_contents": "On 5/7/21 12:22 PM, Tom Lane wrote:\n> See\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7f4bab7f4a0e42ee9fa14707f726017b7869386b\n\nThanks!\n\n> As usual, please send comments/corrections by Sunday.\n\n==snip=\nA previous bug fix caused environment variables (such as PGPORT) to\noverride entries in the service file in this context. Previously, and in\nother contexts, the priority is the other way around; so restore that\nbehavior.\n==snip==\n\ns/;/,/ per grammar check.\n\nOtherwise on a quick read, looks good. I'll be reading it more\nthoroughly as the day progresses.\n\nAre there going to be any tzdata changes?\n\nThanks,\n\nJonathan", "msg_date": "Fri, 7 May 2021 13:49:06 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Draft back-branch release notes are up" }, { "msg_contents": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n> Are there going to be any tzdata changes?\n\nNope, they're still on 2021a:\nhttps://www.iana.org/time-zones\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 13:56:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Draft back-branch release notes are up" }, { "msg_contents": "On 2021-May-07, Tom Lane wrote:\n\n> See\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7f4bab7f4a0e42ee9fa14707f726017b7869386b\n\nI suppose you're aware of this, so I just want to get it on record that\nthis entry\n\n+<!--\n+Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n+Branch: master [25936fd46] 2021-02-27 18:09:15 -0300\n+Branch: REL_13_STABLE [2688852a4] 2021-02-27 18:09:15 -0300\n+Branch: REL_12_STABLE [262eb990c] 2021-02-27 18:09:15 -0300\n+Branch: REL_11_STABLE [d1c6edd31] 2021-02-27 18:09:15 -0300\n+-->\n+ <para>\n+ Fix use-after-free bug in saving tuples for <literal>AFTER</literal>\n+ triggers (Amit Langote)\n+ </para>\n\nonly goes back to 12; the commit to 11 was just to add the test case.\nThis is obvious if you look at the commit, but if you just look at the\nrelease note entry, that detail might be missed.\n\nThe notes look good.\n\nThanks,\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"No renuncies a nada. 
No te aferres a nada.\"\n\n\n", "msg_date": "Fri, 7 May 2021 15:23:34 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Draft back-branch release notes are up" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I suppose you're aware of this, so I just want to get it on record that\n> this entry\n\n> +<!--\n> +Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> +Branch: master [25936fd46] 2021-02-27 18:09:15 -0300\n> +Branch: REL_13_STABLE [2688852a4] 2021-02-27 18:09:15 -0300\n> +Branch: REL_12_STABLE [262eb990c] 2021-02-27 18:09:15 -0300\n> +Branch: REL_11_STABLE [d1c6edd31] 2021-02-27 18:09:15 -0300\n> +-->\n> + <para>\n> + Fix use-after-free bug in saving tuples for <literal>AFTER</literal>\n> + triggers (Amit Langote)\n> + </para>\n\n> only goes back to 12; the commit to 11 was just to add the test case.\n> This is obvious if you look at the commit, but if you just look at the\n> release note entry, that detail might be missed.\n\nGood point. I'll make sure this doesn't get into the v11 notes\n(which I probably would have done if you didn't point it out,\nso thanks).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 May 2021 16:25:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Draft back-branch release notes are up" } ]
[ { "msg_contents": "In a bug report back in November [1] a subthread explored why parallel\nquery is excluded any time we have \"Plan nodes which reference a\ncorrelated SubPlan\". Amit's understanding was that the reasoning had\nto do with inability to easily pass (potentially variable length)\nParam values between workers.\n\nHowever a decent-sized subset of this kind of query doesn't actually\nrequire that we communicate between workers. If the Subplan executes\nper-tuple within the worker then there's no reason I can see why it\nneeds to be marked parallel unsafe. Amit concurred but noted that\nidentifying that subset of plans is the difficult part (as is usually\nthe case!)\n\nAt the time I'd started work on an approach to handle this case and\nhoped to \"post about it in a new thread later this week.\" That didn't\nhappen, but here we are now, and I finally have this patch cleaned up\nenough to share.\n\nThe basic idea is that we need to track (both on nodes and relations)\nnot only whether that node or rel is parallel safe but also whether\nit's parallel safe assuming params are rechecked in the using context.\nThat allows us to delay making a final decision until we have\nsufficient context to conclude that a given usage of a Param is\nactually parallel safe or unsafe.\n\nThe first patch in this series was previously posted in the thread\n\"Consider parallel for lateral subqueries with limit\" [2] and is\nrequired as a precursor for various test cases to work here.\n\nThe second patch implements the core of the series. It results in\nparallel query being possible for subplans that execute entirely\nwithin the context of a parallel worker for cases where that subplan\nis in the target, a LATERAL JOIN, or the WHERE and ORDER BY clauses.\n\nThe final patch notes several places where we set e.g.\nrel->consider_parallel but setting the corresponding new value\nrel->consider_parallel_recheckng_params wasn't yet necessary. 
It shows\nopportunity either for further improvement or concluding certain cases\ncan't benefit and should be left unchanged.\n\nJames\n\n1: https://www.postgresql.org/message-id/CAAaqYe_vihKjc%2B8LuQa49EHW4%2BKfefb3wHqPYFnCuUqozo%2BLFg%40mail.gmail.com\n2: https://www.postgresql.org/message-id/flat/CAAaqYe_HEkmLwf_1iEHxXwQOWiRyiFd%3DuOu6kwj3sWYdVd1-zA%40mail.gmail.com", "msg_date": "Fri, 7 May 2021 12:30:02 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "> On 7 May 2021, at 18:30, James Coleman <jtc331@gmail.com> wrote:\n\n> ..here we are now, and I finally have this patch cleaned up\n> enough to share.\n\nThis patch no longer applies to HEAD, can you please submit a rebased version?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 1 Sep 2021 13:06:04 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Wed, Sep 1, 2021 at 7:06 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 7 May 2021, at 18:30, James Coleman <jtc331@gmail.com> wrote:\n>\n> > ..here we are now, and I finally have this patch cleaned up\n> > enough to share.\n>\n> This patch no longer applies to HEAD, can you please submit a rebased version?\n\nSee attached.\n\nThanks,\nJames", "msg_date": "Tue, 7 Sep 2021 09:16:49 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Sep 7, 2021 at 6:17 AM James Coleman <jtc331@gmail.com> wrote:\n\n> On Wed, Sep 1, 2021 at 7:06 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 7 May 2021, at 18:30, James Coleman <jtc331@gmail.com> wrote:\n> >\n> > > ..here we are now, and I finally have this patch cleaned up\n> 
> > enough to share.\n> >\n> > This patch no longer applies to HEAD, can you please submit a rebased\n> version?\n>\n> See attached.\n>\n> Thanks,\n> James\n>\nHi,\nFor v2-0002-Parallel-query-support-for-basic-correlated-subqu.patch :\n\n+ * is when we're going to execute multiple partial parths in parallel\n\nparths -> paths\n\n if (index->amcanparallel &&\n- rel->consider_parallel && outer_relids == NULL &&\n- scantype != ST_BITMAPSCAN)\n+ rel->consider_parallel && outer_relids == NULL &&\n+ scantype != ST_BITMAPSCAN)\n\nthe change above seems unnecessary since the first line of if condition\ndoesn't change.\nSimilar comment for the next hunk.\n\n+ * It's not a partial path; it'a a full path that is executed\nas a subquery.\n\nit'a a -> it's a\n\n+ /* rel->consider_parallel_rechecking_params = false; */\n+ /* rel->partial_pathlist = NIL; */\n\nThe commented code can be taken out.\n\nCheers
", "msg_date": "Tue, 7 Sep 2021 08:06:00 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Sep 7, 2021 at 11:06 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Tue, Sep 7, 2021 at 6:17 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Wed, Sep 1, 2021 at 7:06 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> >\n>> > > On 7 May 2021, at 18:30, James Coleman <jtc331@gmail.com> wrote:\n>> >\n>> > > ..here we are now, and I finally have this patch cleaned up\n>> > > enough to share.\n>> >\n>> > This patch no longer applies to HEAD, can you please submit a rebased version?\n>>\n>> See attached.\n>>\n>> Thanks,\n>> James\n>\n> Hi,\n> For v2-0002-Parallel-query-support-for-basic-correlated-subqu.patch :\n>\n> + * is when we're going to execute multiple partial parths in parallel\n>\n> parths -> paths\n>\n> if (index->amcanparallel &&\n> - rel->consider_parallel && outer_relids == NULL &&\n> - scantype != ST_BITMAPSCAN)\n> + rel->consider_parallel && outer_relids == NULL &&\n> + scantype != ST_BITMAPSCAN)\n>\n> the change above seems unnecessary since the first line of if condition doesn't change.\n> Similar comment for the next hunk.\n>\n> + * It's not a partial path; it'a a full path that is executed as a subquery.\n>\n> it'a a -> it's a\n>\n> + /* rel->consider_parallel_rechecking_params = false; */\n> + /* rel->partial_pathlist = NIL; */\n>\n> The commented code can be taken out.\n\nThanks for taking a look at this.\n\nSee updated patch series attached.\n\nJames Coleman", "msg_date": "Wed, 8 Sep 2021 08:47:35 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { 
"msg_contents": "On Wed, Sep 8, 2021 at 8:47 AM James Coleman <jtc331@gmail.com> wrote:\n\n> See updated patch series attached.\n\nJaime,\n\nI noticed on 3-October you moved this into \"waiting on author\"; I\ndon't see anything waiting in this thread, however. Am I missing\nsomething?\n\nI'm planning to change it back to \"needs review\".\n\nThanks,\nJames\n\n\n", "msg_date": "Wed, 3 Nov 2021 09:46:33 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "As a preliminary comment, it would be quite useful to get Tom Lane's\nopinion on this, since it's not an area I understand especially well,\nand I think he understands it better than anyone.\n\nOn Fri, May 7, 2021 at 12:30 PM James Coleman <jtc331@gmail.com> wrote:\n> The basic idea is that we need to track (both on nodes and relations)\n> not only whether that node or rel is parallel safe but also whether\n> it's parallel safe assuming params are rechecked in the using context.\n> That allows us to delay making a final decision until we have\n> sufficient context to conclude that a given usage of a Param is\n> actually parallel safe or unsafe.\n\nI don't really understand what you mean by \"assuming params are\nrechecked in the using context.\" However, I think that a possibly\nbetter approach to this whole area would be to try to solve the\nproblem by putting limits on where you can insert a Gather node.\nConsider:\n\nNested Loop\n-> Seq Scan on x\n-> Index Scan on y\n Index Cond: y.q = x.q\n\nIf you insert a Gather node atop the Index Scan, necessarily changing\nit to a Parallel Index Scan, then you need to pass values around. For\nevery value we get for x.q, we would need to start workers, sending\nthem the value of x.q, and they do a parallel index scan working\ntogether to find all rows where y.q = x.q, and then exit. We repeat\nthis for every tuple from x.q. 
In the absence of infrastructure to\npass those parameters, we can't put the Gather there. We also don't\nwant to, because it would be really slow.\n\nIf you insert the Gather node atop the Seq Scan or the Nested Loop, in\neither case necessarily changing the Seq Scan to a Parallel Seq Scan,\nyou have no problem. If you put it on top of the Nested Loop, the\nparameter will be set in the workers and used in the workers and\neverything is fine. If you put it on top of the Seq Scan, the\nparameter will be set in the leader -- by the Nested Loop -- and used\nin the leader, and again you have no problem.\n\nSo in my view of the world, the parameter just acts as an additional\nconstraint on where Gather nodes can be placed. I don't see that there\nare any parameters that are unsafe categorically -- they're just\nunsafe if the place where they are set is on a different side of the\nGather from the place where they are used. So I don't understand --\npossibly just because I'm dumb -- the idea behind\nconsider_parallel_rechecking_params, because that seems to be making a\nsort of overall judgement about the safety or unsafety of the\nparameter on its own merits, rather than thinking about the Gather\nplacement.\n\nWhen I last worked on this, I had hoped that extParam or allParam\nwould be the thing that would answer the question: are there any\nparameters used under this node that are not also set under this node?\nBut I seem to recall that neither seemed to be answering precisely\nthat question, and the lousy naming of those fields and limited\ndocumentation of their intended purpose did not help.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Nov 2021 10:48:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> When I last worked on this, I had hoped that 
extParam or allParam\n> would be the thing that would answer the question: are there any\n> parameters used under this node that are not also set under this node?\n> But I seem to recall that neither seemed to be answering precisely\n> that question, and the lousy naming of those fields and limited\n> documentation of their intended purpose did not help.\n\nFWIW, I've never been very happy with those fields either. IIRC the\ndesign in that area was all Vadim's, but to the extent that there's\nany usable documentation of extParam/allParam, it was filled in by me\nwhile trying to understand what Vadim did. If somebody wants to step\nup and do a rewrite to make the planner's Param management more useful\nor at least easier to understand, I think that'd be great.\n\nBut anyway: yeah, those fields as currently constituted don't help\nmuch. They tell you which Params are consumed by this node or its\nsubnodes, but not where those Params came from. The planner's\nplan_params and outer_params fields might be more nearly the right\nthing, but I'm not sure they're spot-on either, nor that they're\nup-to-date at the point where you'd want to make decisions about\nGather safety.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Nov 2021 11:14:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Wed, Nov 3, 2021 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I've never been very happy with those fields either. IIRC the\n> design in that area was all Vadim's, but to the extent that there's\n> any usable documentation of extParam/allParam, it was filled in by me\n> while trying to understand what Vadim did. 
If somebody wants to step\n> up and do a rewrite to make the planner's Param management more useful\n> or at least easier to understand, I think that'd be great.\n\nGood to know, thanks.\n\n> But anyway: yeah, those fields as currently constituted don't help\n> much. They tell you which Params are consumed by this node or its\n> subnodes, but not where those Params came from. The planner's\n> plan_params and outer_params fields might be more nearly the right\n> thing, but I'm not sure they're spot-on either, nor that they're\n> up-to-date at the point where you'd want to make decisions about\n> Gather safety.\n\nOne thing I discovered when I was looking at this a few years ago is\nthat there was only one query in the regression tests where extParam\nand allParam were not the same. The offending query was select 1 =\nall(select (select 1)), and the resulting plan has a Materialize node\nwith an attached InitPlan. For that Materialize node, extParam = {}\nand allParam = {$0}, with $0 also being the output parameter of the\nInitPlan attached that that Materialize node. In every other node in\nthat plan and in every node of every other plan generated by the\nregression tests, the values were identical. So it's extremely niche\nthat these fields are even different from each other, and it's unclear\nto me that we really need both of them.\n\nWhat's also interesting is that extParam is computed (by\nfinalize_plan) as plan->extParam = bms_del_members(plan->extParam,\ninitSetParam). 
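In caricature (Python sets standing in for the planner's Bitmapsets, and collapsing everything else finalize_plan does -- a toy model rather than the actual code), that bookkeeping looks something like:

```python
# Toy model of the allParam/extParam bookkeeping discussed above.
# allParam: every param consumed by the node or anything under it.
# extParam: the same set minus the output params of initplans attached
# to the node (initSetParam).  Not PostgreSQL code; names illustrative.

class Node:
    def __init__(self, name, used=(), init_set=(), children=()):
        self.name = name
        self.used = set(used)          # params the node references itself
        self.init_set = set(init_set)  # params set by attached initplans
        self.children = list(children)

def finalize(node):
    all_param = set(node.used)
    for child in node.children:
        all_param |= finalize(child)
    node.allParam = all_param
    node.extParam = all_param - node.init_set
    return node.allParam

# Shape of the plan for: select 1 = all(select (select 1))
# -- a Materialize node whose attached InitPlan sets $0.
mat = Node("Materialize", used={"$0"}, init_set={"$0"},
           children=[Node("Result")])
finalize(mat)
print(mat.allParam)  # {'$0'}
print(mat.extParam)  # set()
```

At least for the Materialize case, the toy model reproduces the extParam = {} / allParam = {$0} split.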
So I think it mostly ends up that extParam for a node\nis not exactly all the parameters that anything under that node cares\nabout, but rather - approximately - all the things that anything under\nthat node cares about that aren't also set someplace under that node.\nIf it were exactly that, I think it would be perfect for our needs\nhere: if the set of things used but not set below the current level is\nempty, it's OK to insert a Gather node; otherwise, it's not, at least,\nnot unless we find a way to pipe parameters from the leader into the\nworkers. But I think there's some reason, which I no longer remember, why\nit's not exactly that, and therefore the idea doesn't work.\n\nOne problem I do remember is that attaching initplans at the top of\neach subquery level as we presently do is really not good for this\nkind of thing. Suppose you have several levels of Nested Loop and\nsomeplace down in the plan you reference an InitPlan. The planner sees\nno harm in attaching the InitPlan at the top level, which makes it\nunsafe to put the Gather any place but at the top level. If you\nattached the InitPlan to the lowest node in the plan tree that is high\nenough to be above all the places that use the value from that\nparameter, you could potentially shift the Gather down the plan tree,\nwhich would be great if, for example, there's exactly one\nparallel-restricted join and the rest are parallel-safe. 
The best plan\nmight be to do all the other joins under a Gather and then perform the\nparallel-restricted join above it.\n\nBut I found it very hard to figure out how to rejigger the logic that\nplaces InitPlans to be more intelligent, and eventually gave up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Nov 2021 12:27:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> One thing I discovered when I was looking at this a few years ago is\n> that there was only one query in the regression tests where extParam\n> and allParam were not the same. The offending query was select 1 =\n> all(select (select 1)), and the resulting plan has a Materialize node\n> with an attached InitPlan. For that Materialize node, extParam = {}\n> and allParam = {$0}, with $0 also being the output parameter of the\n> InitPlan attached that that Materialize node. In every other node in\n> that plan and in every node of every other plan generated by the\n> regression tests, the values were identical. So it's extremely niche\n> that these fields are even different from each other, and it's unclear\n> to me that we really need both of them.\n\nYeah, I've had that nagging feeling about them too. But ISTR trying to\nreduce them to one value years ago, and finding that it didn't quite work,\nor at least would result in more subquery-re-evaluation than we do today.\nYou have to dig into what the executor uses these values for to really\ngrok them. 
I'm afraid that that detail is all swapped out right now, so\nI can't say much more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Nov 2021 12:52:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Hi Robert, thanks for the detailed reply.\n\nOn Wed, Nov 3, 2021 at 10:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> As a preliminary comment, it would be quite useful to get Tom Lane's\n> opinion on this, since it's not an area I understand especially well,\n> and I think he understands it better than anyone.\n>\n> On Fri, May 7, 2021 at 12:30 PM James Coleman <jtc331@gmail.com> wrote:\n> > The basic idea is that we need to track (both on nodes and relations)\n> > not only whether that node or rel is parallel safe but also whether\n> > it's parallel safe assuming params are rechecked in the using context.\n> > That allows us to delay making a final decision until we have\n> > sufficient context to conclude that a given usage of a Param is\n> > actually parallel safe or unsafe.\n>\n> I don't really understand what you mean by \"assuming params are\n> rechecked in the using context.\" However, I think that a possibly\n> better approach to this whole area would be to try to solve the\n> problem by putting limits on where you can insert a Gather node.\n> Consider:\n>\n> Nested Loop\n> -> Seq Scan on x\n> -> Index Scan on y\n> Index Cond: y.q = x.q\n>\n> If you insert a Gather node atop the Index Scan, necessarily changing\n> it to a Parallel Index Scan, then you need to pass values around. For\n> every value we get for x.q, we would need to start workers, sending\n> them the value of x.q, and they do a parallel index scan working\n> together to find all rows where y.q = x.q, and then exit. We repeat\n> this for every tuple from x.q. In the absence of infrastructure to\n> pass those parameters, we can't put the Gather there. 
We also don't\n> want to, because it would be really slow.\n>\n> If you insert the Gather node atop the Seq Scan or the Nested Loop, in\n> either case necessarily changing the Seq Scan to a Parallel Seq Scan,\n> you have no problem. If you put it on top of the Nested Loop, the\n> parameter will be set in the workers and used in the workers and\n> everything is fine. If you put it on top of the Seq Scan, the\n> parameter will be set in the leader -- by the Nested Loop -- and used\n> in the leader, and again you have no problem.\n>\n> So in my view of the world, the parameter just acts as an additional\n> constraint on where Gather nodes can be placed. I don't see that there\n> are any parameters that are unsafe categorically -- they're just\n> unsafe if the place where they are set is on a different side of the\n> Gather from the place where they are used. So I don't understand --\n> possibly just because I'm dumb -- the idea behind\n> consider_parallel_rechecking_params, because that seems to be making a\n> sort of overall judgement about the safety or unsafety of the\n> parameter on its own merits, rather than thinking about the Gather\n> placement.\n\nI had to read through this several times before I understood the point\n(not your fault, this is, as you note, a complicated area). I *think*\nif I grok it properly you're effectively describing what this patch\nresults in conceptually (but possibly solving it from a different\ndirection).\n\nAs I understand the current code, parallel plans are largely chosen\nbased not on where it's safe to insert a Gather node but rather by\ndetermining if a given path is parallel safe. 
Through that lens params\nare a bit of an odd man out -- they aren't inherently unsafe in the\nway a parallel-unsafe function is, but they can only be used in\nparallel plans under certain conditions (whether because of project\npolicy, performance, or missing infrastructure).\n\nUnder that paradigm the existing consider_parallel and parallel_safe\nboolean values imply everything is about whether a plan is inherently\nparallel safe. Thus the current code doesn't have the context to handle the\nnuance of params (as they are not inherently parallel-safe or unsafe).\n\nIntroducing consider_parallel_rechecking_params and\nparallel_safe_ignoring_params allows us to keep more context on params\nand make a more nuanced decision at the proper level of the plan. This\nis what I mean by \"rechecked in the using context\", though I realize\nnow that both \"recheck\" and \"context\" are overloaded terms in the\nproject, so they don't describe the concept particularly clearly. When a\npath relies on params we can only make a final determination about its\nparallel safety if we know whether or not the current parallel node\ncan provide the param's value. We don't necessarily know that\ninformation until we attempt to generate a full parallel node in the\nplan (I think what you're describing as \"inserting a Gather node\")\nsince the param may come from another node in the plan. 
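In caricature (plain Python standing in for the real planner structures, with illustrative field names -- not what the patch literally adds), the deferral looks something like:

```python
# Toy sketch of deferring the parallel-safety decision for a path that
# uses params: the path carries only a tentative answer, and the final
# answer depends on which params are supplied at the level where a
# Gather could be placed.  Illustrative names, not planner code.

from dataclasses import dataclass, field

@dataclass
class ToyPath:
    safe_ignoring_params: bool              # tentative, computed bottom-up
    required_params: set = field(default_factory=set)

def parallel_safe_here(path, params_available):
    # Final determination, made where a Gather could go: the path must
    # be tentatively safe AND every param it needs must be supplied at
    # or below this level of the plan.
    return (path.safe_ignoring_params
            and path.required_params <= params_available)

inner = ToyPath(True, {"x.q"})
print(parallel_safe_here(inner, set()))    # False: param comes from outside
print(parallel_safe_here(inner, {"x.q"}))  # True: param supplied below here
```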
These new\nvalues allow us to do that by tracking tentatively parallel-safe\nsubplans (given proper Gather node placement) and delaying the\nparallel-safety determination until the point at which a param is\navailable (or not).\n\nIs that a more helpful framing of what my goal is here?\n\n> When I last worked on this, I had hoped that extParam or allParam\n> would be the thing that would answer the question: are there any\n> parameters used under this node that are not also set under this node?\n> But I seem to recall that neither seemed to be answering precisely\n> that question, and the lousy naming of those fields and limited\n> documentation of their intended purpose did not help.\n\nI don't really know anything about extParam or allParam, so I can't\noffer any insight here.\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Wed, 3 Nov 2021 13:34:39 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Wed, Nov 3, 2021 at 1:34 PM James Coleman <jtc331@gmail.com> wrote:\n> As I understand the current code, parallel plans are largely chosen\n> based not on where it's safe to insert a Gather node but rather by\n> determining if a given path is parallel safe. Through that lens params\n> are a bit of an odd man out -- they aren't inherently unsafe in the\n> way a parallel-unsafe function is, but they can only be used in\n> parallel plans under certain conditions (whether because of project\n> policy, performance, or missing infrastructure).\n\nRight.\n\n> Introducing consider_parallel_rechecking_params and\n> parallel_safe_ignoring_params allows us to keep more context on params\n> and make a more nuanced decision at the proper level of the plan. 
This\n> is what I mean by \"rechecked in the using context\", though I realize\n> now that both \"recheck\" and \"context\" are overloaded terms in the\n> project, so don't describe the concept particularly clearly. When a\n> path relies on params we can only make a final determination about its\n> parallel safety if we know whether or not the current parallel node\n> can provide the param's value. We don't necessarily know that\n> information until we attempt to generate a full parallel node in the\n> plan (I think what you're describing as \"inserting a Gather node\")\n> since the param may come from another node in the plan. These new\n> values allow us to do that by tracking tentatively parallel-safe\n> subplans (given proper Gather node placement) and delaying the\n> parallel-safety determination until the point at which a param is\n> available (or not).\n\nSo I think I agree with you here. But I don't like all of this\n\"ignoring_params\" stuff and I don't see why it's necessary. Say we\ndon't have both parallel_safe and parallel_safe_ignoring_params. Say\nwe just have parallel_safe. If the plan will be parallel safe if the\nparams are available, we label it parallel safe. If the plan will not\nbe parallel safe even if the params are available, we say it's not\nparallel safe. Then, when we get to generate_gather_paths(), we don't\ngenerate any paths if there are required parameters that are not\navailable. What's wrong with that approach?\n\nMaybe it's clearer to say this: I feel like one extra Boolean is\neither too much or too little. I think maybe it's not even needed. But\nif it is needed, then why just a bool instead of, say, a Bitmapset of\nparams that are needed, or something?\n\nI'm sort of speaking from intuition here rather than sure knowledge. 
I\nmight be totally wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Nov 2021 10:01:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Mon, Nov 15, 2021 at 10:01:37AM -0500, Robert Haas wrote:\n> On Wed, Nov 3, 2021 at 1:34 PM James Coleman <jtc331@gmail.com> wrote:\n>> As I understand the current code, parallel plans are largely chosen\n>> based not on where it's safe to insert a Gather node but rather by\n>> determining if a given path is parallel safe. Through that lens params\n>> are a bit of an odd man out -- they aren't inherently unsafe in the\n>> way a parallel-unsafe function is, but they can only be used in\n>> parallel plans under certain conditions (whether because of project\n>> policy, performance, or missing infrastructure).\n> \n> Right.\n\nPlease note that the CF bot is complaining here, so I have moved this\npatch to the next CF, but changed the status as waiting on author.\n--\nMichael", "msg_date": "Fri, 3 Dec 2021 16:35:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Fri, Dec 3, 2021 at 2:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 15, 2021 at 10:01:37AM -0500, Robert Haas wrote:\n> > On Wed, Nov 3, 2021 at 1:34 PM James Coleman <jtc331@gmail.com> wrote:\n> >> As I understand the current code, parallel plans are largely chosen\n> >> based not on where it's safe to insert a Gather node but rather by\n> >> determining if a given path is parallel safe. 
Through that lens params\n> >> are a bit of an odd man out -- they aren't inherently unsafe in the\n> >> way a parallel-unsafe function is, but they can only be used in\n> >> parallel plans under certain conditions (whether because of project\n> >> policy, performance, or missing infrastructure).\n> >\n> > Right.\n>\n> Please note that the CF bot is complaining here, so I have moved this\n> patch to the next CF, but changed the status as waiting on author.\n\nI rebased this back in December, but somehow forgot to reply with the\nupdated patch, so, here it is finally.\n\nThanks,\nJames Coleman", "msg_date": "Fri, 14 Jan 2022 14:15:40 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Mon, Nov 15, 2021 at 10:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Nov 3, 2021 at 1:34 PM James Coleman <jtc331@gmail.com> wrote:\n> > As I understand the current code, parallel plans are largely chosen\n> > based not on where it's safe to insert a Gather node but rather by\n> > determining if a given path is parallel safe. Through that lens params\n> > are a bit of an odd man out -- they aren't inherently unsafe in the\n> > way a parallel-unsafe function is, but they can only be used in\n> > parallel plans under certain conditions (whether because of project\n> > policy, performance, or missing infrastructure).\n>\n> Right.\n>\n> > Introducing consider_parallel_rechecking_params and\n> > parallel_safe_ignoring_params allows us to keep more context on params\n> > and make a more nuanced decision at the proper level of the plan. This\n> > is what I mean by \"rechecked in the using context\", though I realize\n> > now that both \"recheck\" and \"context\" are overloaded terms in the\n> > project, so don't describe the concept particularly clearly. 
When a\n> > path relies on params we can only make a final determination about its\n> > parallel safety if we know whether or not the current parallel node\n> > can provide the param's value. We don't necessarily know that\n> > information until we attempt to generate a full parallel node in the\n> > plan (I think what you're describing as \"inserting a Gather node\")\n> > since the param may come from another node in the plan. These new\n> > values allow us to do that by tracking tentatively parallel-safe\n> > subplans (given proper Gather node placement) and delaying the\n> > parallel-safety determination until the point at which a param is\n> > available (or not).\n>\n> So I think I agree with you here. But I don't like all of this\n> \"ignoring_params\" stuff and I don't see why it's necessary. Say we\n> don't have both parallel_safe and parallel_safe_ignoring_params. Say\n> we just have parallel_safe. If the plan will be parallel safe if the\n> params are available, we label it parallel safe. If the plan will not\n> be parallel safe even if the params are available, we say it's not\n> parallel safe. Then, when we get to generate_gather_paths(), we don't\n> generate any paths if there are required parameters that are not\n> available. What's wrong with that approach?\n>\n> Maybe it's clearer to say this: I feel like one extra Boolean is\n> either too much or too little. I think maybe it's not even needed. But\n> if it is needed, then why just a bool instead of, say, a Bitmapset of\n> params that are needed, or something?\n>\n> I'm sort of speaking from intuition here rather than sure knowledge. I\n> might be totally wrong.\n\nApologies for quite the delay responding to this.\n\nI've been chewing on this a bit, and I was about to go re-read the\ncode and see how easy it'd be to do exactly what you're suggesting in\ngenerate_gather_paths() (and verifying it doesn't need to happen in\nother places). 
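For concreteness, I read the suggested guard as amounting to something like this (toy Python with hypothetical names, not the actual generate_gather_paths() code):

```python
# Caricature of the suggested check: only generate a Gather when the
# path needs no params that come from outside, i.e. from above where
# the Gather would be placed.  Hypothetical names throughout.

def maybe_generate_gather(required_params, params_available_below):
    if required_params - params_available_below:
        return None      # some needed param is set above this point
    return "Gather"

print(maybe_generate_gather({"$1"}, set()))    # None
print(maybe_generate_gather({"$1"}, {"$1"}))   # Gather
print(maybe_generate_gather(set(), set()))     # Gather
```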
However there's one (I think large) gotcha with that\napproach (assuming it otherwise makes sense): it means we do\nunnecessary work. In the current patch series we only need to recheck\nparallel safety if we're in a situation where we might actually\nbenefit from doing that work (namely when we have a correlated\nsubquery we might otherwise be able to execute in a parallel plan). If\nwe don't track that status we'd have to recheck the full parallel\nsafety of the path for all paths -- even without correlated\nsubqueries.\n\nAlternatively we could merge these fields into a single enum field\nthat tracked these states. Even better, we could use a bitmap to\nsignify what items are/aren't parallel safe. I'm not sure if that'd\ncreate even larger churn in the patch, but maybe it's worth it either\nway. In theory it'd open up further expansions to this concept later\n(though I'm not aware of any such ideas).\n\nIf you think such an approach would be an improvement I'd be happy to\ntake a pass at a revised patch.\n\nThoughts?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Fri, 14 Jan 2022 14:24:50 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Fri, Jan 14, 2022 at 2:25 PM James Coleman <jtc331@gmail.com> wrote:\n> I've been chewing on this a bit, and I was about to go re-read the\n> code and see how easy it'd be to do exactly what you're suggesting in\n> generate_gather_paths() (and verifying it doesn't need to happen in\n> other places). However there's one (I think large) gotcha with that\n> approach (assuming it otherwise makes sense): it means we do\n> unnecessary work. In the current patch series we only need to recheck\n> parallel safety if we're in a situation where we might actually\n> benefit from doing that work (namely when we have a correlated\n> subquery we might otherwise be able to execute in a parallel plan). 
If\n> we don't track that status we'd have to recheck the full parallel\n> safety of the path for all paths -- even without correlated\n> subqueries.\n\nI don't think there's an intrinsic problem with the idea of making a\ntentative determination about parallel safety and then refining it\nlater, but I'm not sure why you think it would be a lot of work to\nfigure this out at the point where we generate gather paths. I think\nit's just a matter of testing whether the set of parameters that the\npath needs as input is the empty set. It may be that neither extParam\nnor allParam are precisely that thing, but I think both are very\nclose, and it seems to me that there's no theoretical reason why we\ncan't know for every path the set of inputs that it requires \"from the\noutside.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Jan 2022 15:20:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't think there's an intrinsic problem with the idea of making a\n> tentative determination about parallel safety and then refining it\n> later, but I'm not sure why you think it would be a lot of work to\n> figure this out at the point where we generate gather paths. I think\n> it's just a matter of testing whether the set of parameters that the\n> path needs as input is the empty set. It may be that neither extParam\n> nor allParam are precisely that thing, but I think both are very\n> close, and it seems to me that there's no theoretical reason why we\n> can't know for every path the set of inputs that it requires \"from the\n> outside.\"\n\nI'd be very happy if someone redesigned the extParam/allParam mechanism,\nor at least documented it better. 
It's confusing and I've never been\nable to escape the feeling that it's somewhat redundant.\n\nThe real problem with it though is that we don't compute those values\nuntil much too late to be useful in path construction; see comments\nfor SS_identify_outer_params. To be helpful to the planner, we'd have\nto rejigger things at least enough to calculate them earlier -- or\nmaybe better, calculate what the planner wants earlier, and then transform\nto what the executor wants later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jan 2022 15:54:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Fri, Jan 21, 2022 at 3:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 2:25 PM James Coleman <jtc331@gmail.com> wrote:\n> > I've been chewing on this a bit, and I was about to go re-read the\n> > code and see how easy it'd be to do exactly what you're suggesting in\n> > generate_gather_paths() (and verifying it doesn't need to happen in\n> > other places). However there's one (I think large) gotcha with that\n> > approach (assuming it otherwise makes sense): it means we do\n> > unnecessary work. In the current patch series we only need to recheck\n> > parallel safety if we're in a situation where we might actually\n> > benefit from doing that work (namely when we have a correlated\n> > subquery we might otherwise be able to execute in a parallel plan). If\n> > we don't track that status we'd have to recheck the full parallel\n> > safety of the path for all paths -- even without correlated\n> > subqueries.\n>\n> I don't think there's an intrinsic problem with the idea of making a\n> tentative determination about parallel safety and then refining it\n> later, but I'm not sure why you think it would be a lot of work to\n> figure this out at the point where we generate gather paths. 
I think\n> it's just a matter of testing whether the set of parameters that the\n> path needs as input is the empty set. It may be that neither extParam\n> nor allParam are precisely that thing, but I think both are very\n> close, and it seems to me that there's no theoretical reason why we\n> can't know for every path the set of inputs that it requires \"from the\n> outside.\"\n\nAs I understand it now (not sure I realized this before) you're\nsuggesting marking it as parallel safe *even when there are required\nparams*, and then checking the params for parallel safety later.\nFrom a purely theoretical perspective that seemed reasonable, so I\ntook a pass at that approach.\n\nThe first, and likely most interesting, thing I discovered was that\nthe patch accomplishes the vast majority of what it does not via\nthe delayed params safety checking but rather via the required outer\nrelids checks I'd added to generate_useful_gather_paths.\n\nFor that to happen I did have to mark PARAM_EXEC params as presumed\nparallel safe. That means that parallel_safe now doesn't strictly mean\n\"parallel safe in the current context\" but \"parallel safe as long as\nany params are provided\". That's a real change, but probably\nacceptable as long as a project policy decision is made in that\ndirection.\n\nThere are a few concerns I have (and I'm not sure what level they rise to):\n\n1. From what I can tell we don't have access on a path to the set of\nparams required by that path (I believe this is what Tom was\nreferencing in his sister reply at this point in the thread). That\nmeans we have to rely on checking that the required outer relids are\nprovided by the current query level. I'm not quite sure yet whether or\nnot that guarantees (or if the rest of the path construction logic\nguarantees for us) that the params provided by the outer rel are used\nin a correlated way that isn't shared across workers. 
And because we\ndon't have the param information available we can't add additional\nchecks (that I can tell) to verify that.\n2. Are we excluding any paths (by having one that will always be\ninvalid win the cost comparisons in add_partial_path)? I suppose this\ndanger actually exists in the previous version of the patch as well,\nand I don't actually have any examples of this being a problem. Also\nmaybe this can only be a problem if (1) reveals a bug.\n3. The new patch series actually ends up allowing parallelization of\ncorrelated params in a few more places than the original patch series.\nFrom what I can tell all of the cases are in fact safe to execute in\nparallel, which, if true, means this is a feature not a concern. The\nchanged query plans fall into two categories: a.) putting a gather\ninside a subplan and b.) correlated param usages in a subquery scan\npath on the inner side of a join. I've separated out those specific\nchanges in a separate patch to make it easier to tell which ones I'm\nreferencing.\n\nOn the other hand this is a dramatically simpler patch series.\nAssuming the approach is sound, it should be much easier to maintain than\nthe previous version.\n\nThe final patch in the series is a set of additional checks I could\nimagine to try to be more explicit, but at least in the current test\nsuite there isn't anything at all they affect.\n\nDoes this look at least somewhat more like what you'd envisioned\n(granting the need to squint hard given the relids checks instead of\ndirectly checking params)?\n\nThanks,\nJames Coleman", "msg_date": "Sat, 22 Jan 2022 20:25:19 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Hi,\n\nOn 2022-01-22 20:25:19 -0500, James Coleman wrote:\n> On the other hand this is a dramatically simpler patch series.\n> Assuming the approach is sound, it should be much easier to maintain 
than\n> the previous version.\n> \n> The final patch in the series is a set of additional checks I could\n> imagine to try to be more explicit, but at least in the current test\n> suite there isn't anything at all they affect.\n> \n> Does this look at least somewhat more like what you'd envisioned\n> (granting the need to squint hard given the relids checks instead of\n> directly checking params)?\n\nThis fails on freebsd (so likely a timing issue): https://cirrus-ci.com/task/4758411492458496?logs=test_world#L2225\n\nMarked as waiting on author.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:48:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3246/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". 
(Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 13:57:42 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Mon, Mar 21, 2022 at 8:48 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-22 20:25:19 -0500, James Coleman wrote:\n> > On the other hand this is a dramatically simpler patch series.\n> > Assuming the approach is sound, it should be much easier to maintain than\n> > the previous version.\n> >\n> > The final patch in the series is a set of additional checks I could\n> > imagine to try to be more explicit, but at least in the current test\n> > suite there isn't anything at all they affect.\n> >\n> > Does this look at least somewhat more like what you'd envisioned\n> > (granting the need to squint hard given the relids checks instead of\n> > directly checking params)?\n>\n> This fails on freebsd (so likely a timing issue): https://cirrus-ci.com/task/4758411492458496?logs=test_world#L2225\n>\n> Marked as waiting on author.\n\nI've finally gotten around to checking this out, and the issue was an\n\"explain analyze\" test that had actual loops different on FreeBSD.\nThere doesn't seem to be a way to disable loop output, but instead of\nprocessing the explain output with e.g. a function (as we do some\nother places) to remove the offending and unnecessary output I've just\nremoved the \"analyze\" (as I don't believe it was actually necessary).\n\nAttached is an updated patch series. In this version I've removed the\n\"parallelize some subqueries with limit\" patch since discussion is\nproceeding in the spun off thread. The first patch adds additional\ntests so that you can see how those new tests change with the code\nchanges in the 2nd patch in the series. As before the final patch in
As before the final patch in\nthe series includes changes where we may also want to verify\ncorrectness but don't have a test demonstrating the need.\n\nThanks,\nJames Coleman", "msg_date": "Mon, 26 Sep 2022 22:56:11 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, 27 Sept 2022 at 08:26, James Coleman <jtc331@gmail.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 8:48 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-01-22 20:25:19 -0500, James Coleman wrote:\n> > > On the other hand this is a dramatically simpler patch series.\n> > > Assuming the approach is sound, it should much easier to maintain than\n> > > the previous version.\n> > >\n> > > The final patch in the series is a set of additional checks I could\n> > > imagine to try to be more explicit, but at least in the current test\n> > > suite there isn't anything at all they affect.\n> > >\n> > > Does this look at least somewhat more like what you'd envisionsed\n> > > (granting the need to squint hard given the relids checks instead of\n> > > directly checking params)?\n> >\n> > This fails on freebsd (so likely a timing issue): https://cirrus-ci.com/task/4758411492458496?logs=test_world#L2225\n> >\n> > Marked as waiting on author.\n>\n> I've finally gotten around to checking this out, and the issue was an\n> \"explain analyze\" test that had actual loops different on FreeBSD.\n> There doesn't seem to be a way to disable loop output, but instead of\n> processing the explain output with e.g. a function (as we do some\n> other places) to remove the offending and unnecessary output I've just\n> removed the \"analyze\" (as I don't believe it was actually necessary).\n>\n> Attached is an updated patch series. In this version I've removed the\n> \"parallelize some subqueries with limit\" patch since discussion is\n> proceeding in the spun off thread. 
The first patch adds additional\n> tests so that you can see how those new tests change with the code\n> changes in the 2nd patch in the series. As before the final patch in\n> the series includes changes where we may also want to verify\n> correctness but don't have a test demonstrating the need.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\nbf03cfd162176d543da79f9398131abc251ddbb9 ===\n=== applying patch ./v5-0002-Parallelize-correlated-subqueries.patch\npatching file src/backend/optimizer/path/allpaths.c\n...\nHunk #5 FAILED at 3225.\nHunk #6 FAILED at 3259.\nHunk #7 succeeded at 3432 (offset -6 lines).\n2 out of 7 hunks FAILED -- saving rejects to file\nsrc/backend/optimizer/path/allpaths.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3246.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 4 Jan 2023 15:19:24 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Hi,\n\nThis patch hasn't been updated since September, and it got broken by\n4a29eabd1d91c5484426bc5836e0a7143b064f5a which the incremental sort\nstuff a little bit. But the breakage was rather limited, so I took a\nstab at fixing it - attached is the result, hopefully correct.\n\nI also added a couple minor comments about stuff I noticed while\nrebasing and skimming the patch, I kept those in separate commits.\nThere's also a couple pre-existing TODOs.\n\nJames, what's your plan with this patch. 
Do you intend to work on it for\nPG16, or are there some issues I missed in the thread?\n\n\nOne of the queries in in incremental_sort changed plans a little bit:\n\nexplain (costs off) select distinct\n unique1,\n (select t.unique1 from tenk1 where tenk1.unique1 = t.unique1)\nfrom tenk1 t, generate_series(1, 1000);\n\nswitched from\n\n Unique (cost=18582710.41..18747375.21 rows=10000 width=8)\n -> Gather Merge (cost=18582710.41..18697375.21 rows=10000000 ...)\n Workers Planned: 2\n -> Sort (cost=18582710.39..18593127.06 rows=4166667 ...)\n Sort Key: t.unique1, ((SubPlan 1))\n ...\n\nto\n\n Unique (cost=18582710.41..18614268.91 rows=10000 ...)\n -> Gather Merge (cost=18582710.41..18614168.91 rows=20000 ...)\n Workers Planned: 2\n -> Unique (cost=18582710.39..18613960.39 rows=10000 ...)\n -> Sort (cost=18582710.39..18593127.06 ...)\n Sort Key: t.unique1, ((SubPlan 1))\n ...\n\nwhich probably makes sense, as the cost estimate decreases a bit.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 18 Jan 2023 20:09:21 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Wed, Jan 18, 2023 at 2:09 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> This patch hasn't been updated since September, and it got broken by\n> 4a29eabd1d91c5484426bc5836e0a7143b064f5a which the incremental sort\n> stuff a little bit. 
But the breakage was rather limited, so I took a\n> stab at fixing it - attached is the result, hopefully correct.\n\nThanks for fixing this up; the changes look correct to me.\n\n> I also added a couple minor comments about stuff I noticed while\n> rebasing and skimming the patch, I kept those in separate commits.\n> There's also a couple pre-existing TODOs.\n\nI started work on some of these, but wasn't able to finish this\nevening, so I don't have an updated series yet.\n\n> James, what's your plan with this patch. Do you intend to work on it for\n> PG16, or are there some issues I missed in the thread?\n\nI'd love to see it get into PG16. I don't have any known issues, but\nreviewing activity has been light. Originally Robert had had some\nconcerns about my original approach; I think my updated approach\nresolves those issues, but it'd be good to have that sign-off.\n\nBeyond that I'm mostly looking for review and evaluation of the\napproach I've taken; of note is my description of that in [1].\n\n> One of the queries in in incremental_sort changed plans a little bit:\n>\n> explain (costs off) select distinct\n> unique1,\n> (select t.unique1 from tenk1 where tenk1.unique1 = t.unique1)\n> from tenk1 t, generate_series(1, 1000);\n>\n> switched from\n>\n> Unique (cost=18582710.41..18747375.21 rows=10000 width=8)\n> -> Gather Merge (cost=18582710.41..18697375.21 rows=10000000 ...)\n> Workers Planned: 2\n> -> Sort (cost=18582710.39..18593127.06 rows=4166667 ...)\n> Sort Key: t.unique1, ((SubPlan 1))\n> ...\n>\n> to\n>\n> Unique (cost=18582710.41..18614268.91 rows=10000 ...)\n> -> Gather Merge (cost=18582710.41..18614168.91 rows=20000 ...)\n> Workers Planned: 2\n> -> Unique (cost=18582710.39..18613960.39 rows=10000 ...)\n> -> Sort (cost=18582710.39..18593127.06 ...)\n> Sort Key: t.unique1, ((SubPlan 1))\n> ...\n>\n> which probably makes sense, as the cost estimate decreases a bit.\n\nOff the cuff that seems fine. 
I'll read it over again when I send the\nupdated series.\n\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CAAaqYe8m0DHUWk7gLKb_C4abTD4nMkU26ErE%3Dahow4zNMZbzPQ%40mail.gmail.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 21:34:21 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Wed, Jan 18, 2023 at 9:34 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 2:09 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > This patch hasn't been updated since September, and it got broken by\n> > 4a29eabd1d91c5484426bc5836e0a7143b064f5a which the incremental sort\n> > stuff a little bit. But the breakage was rather limited, so I took a\n> > stab at fixing it - attached is the result, hopefully correct.\n>\n> Thanks for fixing this up; the changes look correct to me.\n>\n> > I also added a couple minor comments about stuff I noticed while\n> > rebasing and skimming the patch, I kept those in separate commits.\n> > There's also a couple pre-existing TODOs.\n>\n> I started work on some of these, but wasn't able to finish this\n> evening, so I don't have an updated series yet.\n>\n> > James, what's your plan with this patch. Do you intend to work on it for\n> > PG16, or are there some issues I missed in the thread?\n>\n> I'd love to see it get into PG16. I don't have any known issues, but\n> reviewing activity has been light. 
Originally Robert had had some\n> concerns about my original approach; I think my updated approach\n> resolves those issues, but it'd be good to have that sign-off.\n>\n> Beyond that I'm mostly looking for review and evaluation of the\n> approach I've taken; of note is my description of that in [1].\n\nHere's an updated patch version incorporating feedback from Tomas as\nwell as some additional comments and tweaks.\n\nWhile working through Tomas's comment about a conditional in the\nmax_parallel_hazard_waker being guaranteed true I realized that in the\ncurrent version of the patch the safe_param_ids tracking in\nis_parallel_safe isn't actually needed any longer. That seemed\nsuspicious, and so I started digging, and I found out that in the\ncurrent approach all of the tests pass with only the changes in\nclauses.c. I don't believe that the other changes aren't needed;\nrather I believe there isn't yet a test case exercising them, but I\nrealize that means I can't prove they're needed. I spent some time\npoking at this, but at least with my current level of imagination I\nhaven't stumbled across a query that would exercise these checks. One\nof the reasons I'm fairly confident that this is true is that the\noriginal approach (which was significantly more invasive) definitely\nrequired rechecking parallel safety at each level until we reached the\npoint where the subquery was known to be generated within the current\nworker through the safe_param_ids tracking mechanism. Of course it is\npossible that that complexity is actually required and this simplified\napproach isn't feasible (but I don't have a good reason to suspect\nthat currently). 
It's also possible that the restrictions on\nsubqueries just aren't necessary...but that isn't compelling because\nit would require proving that you can never have a query level with\nas-yet unsatisfied lateral rels.\n\nNote: All of the existing tests for \"you can't parallelize a\ncorrelated subquery\" are all simple versions which are not actually\nparallel unsafe in theory. I assume they were added to show that the\ncode excluded that broad case, and there wasn't any finer grain of\ndetail required since the code simply didn't support making the\ndecision with that granularity anyway. But that means there weren't\nany existing test cases to exercise the granularity I'm now trying to\nachieve.\n\nIf someone is willing to help out what I'd like help with currently is\nfinding such a test case (where a gather or gather merge path would\notherwise be created but at the current plan level not all of the\nrequired lateral rels are yet available -- meaning that we can't\nperform all of the subqueries within the current worker). In support\nof that patch 0004 converts several of the new parallel safety checks\ninto WARNING messages instead to make it easy to see if a query\nhappens to encounter any of those checks.\n\nJames Coleman", "msg_date": "Sat, 21 Jan 2023 22:07:58 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Sat, Jan 21, 2023 at 10:07 PM James Coleman <jtc331@gmail.com> wrote:\n> ...\n> While working through Tomas's comment about a conditional in the\n> max_parallel_hazard_waker being guaranteed true I realized that in the\n> current version of the patch the safe_param_ids tracking in\n> is_parallel_safe isn't actually needed any longer. That seemed\n> suspicious, and so I started digging, and I found out that in the\n> current approach all of the tests pass with only the changes in\n> clauses.c. 
I don't believe that the other changes aren't needed;\n> rather I believe there isn't yet a test case exercising them, but I\n> realize that means I can't prove they're needed. I spent some time\n> poking at this, but at least with my current level of imagination I\n> haven't stumbled across a query that would exercise these checks.\n\nI played with this a good bit more yesterday, I'm now a good bit more\nconfident this is correct. I've cleaned up the patch; see attached for\nv7.\n\nHere's some of my thought process:\nThe comments in src/include/nodes/pathnodes.h:2953 tell us that\nPARAM_EXEC params are used to pass values around from one plan node to\nanother in the following ways:\n1. Values down into subqueries (for outer references in subqueries)\n2. Up out of subqueries (for the results of a subplan)\n3. From a NestLoop plan node into its inner relation (when the inner\nscan is parameterized with values from the outer relation)\n\nCase (2) is already known to be safe (we currently add these params to\nsafe_param_ids in max_parallel_hazard_walker when we encounter a\nSubPlan node).\n\nI also believe case (3) is already handled. We don't build partial\npaths for joins when joinrel->lateral_relids is non-empty, and join\norder calculations already require that parameterization here go the\ncorrect way (i.e., inner depends on outer rather than the other way\naround).\n\nThat leaves us with only case (1) to consider in this patch. Another\nway of saying this is that this is really the only thing the\nsafe_param_ids tracking is guarding against. For params passed down\ninto subqueries we can further distinguish between init plans and\n\"regular\" subplans. We already know that params from init plans are\nsafe (at the right level). So we're concerned here with a way to know\nif the params passed to subplans are safe. 
We already track required\nrels in ParamPathInfo, so it's fairly simple to do this test.\n\nWhich this patch we do in fact now see (as expected) rels with\nnon-empty lateral_relids showing up in generate_[useful_]gather_paths.\nAnd the partial paths can now have non-empty required outer rels. I'm\nnot able to come up with a plan that would actually be caught by those\nchecks; I theorize that because of the few places we actually call\ngenerate_[useful_]gather_paths we are in practice already excluding\nthose, but for now I've left these as a conditional rather than an\nassertion because it seems like the kind of guard we'd want to ensure\nthose methods are safe.\n\nThe other other place that we actually create gather[_merge] paths is\ngather_grouping_paths(), and there I've chosen to use assertions,\nbecause the point at which grouping happens in planning suggests to me\nthat we shouldn't have lateral dependencies at that point. If someone\nis concerned about that, I'd be happy to change those to conditionals\nalso.\n\nJames Coleman", "msg_date": "Mon, 23 Jan 2023 09:00:31 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "James Coleman <jtc331@gmail.com> wrote:\n\n> On Sat, Jan 21, 2023 at 10:07 PM James Coleman <jtc331@gmail.com> wrote:\n> > ...\n> > While working through Tomas's comment about a conditional in the\n> > max_parallel_hazard_waker being guaranteed true I realized that in the\n> > current version of the patch the safe_param_ids tracking in\n> > is_parallel_safe isn't actually needed any longer. That seemed\n> > suspicious, and so I started digging, and I found out that in the\n> > current approach all of the tests pass with only the changes in\n> > clauses.c. 
I don't believe that the other changes aren't needed;\n> > rather I believe there isn't yet a test case exercising them, but I\n> > realize that means I can't prove they're needed. I spent some time\n> > poking at this, but at least with my current level of imagination I\n> > haven't stumbled across a query that would exercise these checks.\n> \n> I played with this a good bit more yesterday, I'm now a good bit more\n> confident this is correct. I've cleaned up the patch; see attached for\n> v7.\n> \n> Here's some of my thought process:\n> The comments in src/include/nodes/pathnodes.h:2953 tell us that\n> PARAM_EXEC params are used to pass values around from one plan node to\n> another in the following ways:\n> 1. Values down into subqueries (for outer references in subqueries)\n> 2. Up out of subqueries (for the results of a subplan)\n> 3. From a NestLoop plan node into its inner relation (when the inner\n> scan is parameterized with values from the outer relation)\n> \n> Case (2) is already known to be safe (we currently add these params to\n> safe_param_ids in max_parallel_hazard_walker when we encounter a\n> SubPlan node).\n> \n> I also believe case (3) is already handled. We don't build partial\n> paths for joins when joinrel->lateral_relids is non-empty, and join\n> order calculations already require that parameterization here go the\n> correct way (i.e., inner depends on outer rather than the other way\n> around).\n> \n> That leaves us with only case (1) to consider in this patch. Another\n> way of saying this is that this is really the only thing the\n> safe_param_ids tracking is guarding against. For params passed down\n> into subqueries we can further distinguish between init plans and\n> \"regular\" subplans. We already know that params from init plans are\n> safe (at the right level). So we're concerned here with a way to know\n> if the params passed to subplans are safe. 
We already track required\n> rels in ParamPathInfo, so it's fairly simple to do this test.\n> \n> Which this patch we do in fact now see (as expected) rels with\n> non-empty lateral_relids showing up in generate_[useful_]gather_paths.\n> And the partial paths can now have non-empty required outer rels. I'm\n> not able to come up with a plan that would actually be caught by those\n> checks; I theorize that because of the few places we actually call\n> generate_[useful_]gather_paths we are in practice already excluding\n> those, but for now I've left these as a conditional rather than an\n> assertion because it seems like the kind of guard we'd want to ensure\n> those methods are safe.\n\nMaybe we can later (in separate patches) relax the restrictions imposed on\npartial path creation a little bit, so that more parameterized partial paths\nare created.\n\nOne particular case that should be rejected by your checks is a partial index\npath, which can be parameterized, but I couldn't construct a query that makes\nyour checks fire. Maybe the reason is that a parameterized index path is\nmostly used on the inner side of a NL join, however no partial path can be\nused there. (The problem is that each worker evaluating the NL join would only\nsee a subset of the inner relation, which whould lead to incorrect results.)\n\nSo I'd also choose conditions rather than assert statements.\n\n\nFollowing are my (minor) findings:\n\nIn generate_gather_paths() you added this test\n\n /*\n * Delay gather path creation until the level in the join tree where all\n * params used in a worker are generated within that worker.\n */\n if (!bms_is_subset(required_outer, rel->relids))\n return;\n\nbut I'm not sure if required_outer can contain anything of rel->relids. How\nabout using bms_is_empty(required) outer, or even this?\n\n\tif (required_outer)\n\t\treturn;\n\nSimilarly,\n\n /* We can't pass params to workers. 
*/\n if (!bms_is_subset(PATH_REQ_OUTER(cheapest_partial_path), rel->relids))\n\nmight look like\n\n if (!bms_is_empty(PATH_REQ_OUTER(cheapest_partial_path)))\n\nor\n\n if (PATH_REQ_OUTER(cheapest_partial_path))\n\nIn particular, build_index_paths() does the following when setting\nouter_relids (which eventually becomes (path->param_info->ppi_req_outer):\n\n /* Enforce convention that outer_relids is exactly NULL if empty */\n if (bms_is_empty(outer_relids))\n outer_relids = NULL;\n\n\nAnother question is whether in this call\n\n simple_gather_path = (Path *)\n create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,\n required_outer, rowsp);\n\nrequired_outer should be passed to create_gather_path(). Shouldn't it rather\nbe PATH_REQ_OUTER(cheapest_partial_path) that you test just above? Again,\nbuild_index_paths() initializes outer_relids this way\n\n outer_relids = bms_copy(rel->lateral_relids);\n\nbut then it may add some more relations:\n\n /* OK to include this clause */\n index_clauses = lappend(index_clauses, iclause);\n outer_relids = bms_add_members(outer_relids,\n rinfo->clause_relids);\n\nSo I think that PATH_REQ_OUTER(cheapest_partial_path) in\ngenerate_gather_paths() can eventually contain more relations than\nrequired_outer, and therefore it's safer to check the first.\n\n\nSimilar comments might apply to generate_useful_gather_paths(). Here I also\nsuggest to move this test\n\n /* We can't pass params to workers. 
*/\n if (!bms_is_subset(PATH_REQ_OUTER(subpath), rel->relids))\n continue;\n\nto the top of the loop because it's relatively cheap.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 06 Feb 2023 17:40:55 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Mon, Feb 6, 2023 at 11:39 AM Antonin Houska <ah@cybertec.at> wrote:\n>\n> James Coleman <jtc331@gmail.com> wrote:\n> > Which this patch we do in fact now see (as expected) rels with\n> > non-empty lateral_relids showing up in generate_[useful_]gather_paths.\n> > And the partial paths can now have non-empty required outer rels. I'm\n> > not able to come up with a plan that would actually be caught by those\n> > checks; I theorize that because of the few places we actually call\n> > generate_[useful_]gather_paths we are in practice already excluding\n> > those, but for now I've left these as a conditional rather than an\n> > assertion because it seems like the kind of guard we'd want to ensure\n> > those methods are safe.\n>\n> Maybe we can later (in separate patches) relax the restrictions imposed on\n> partial path creation a little bit, so that more parameterized partial paths\n> are created.\n>\n> One particular case that should be rejected by your checks is a partial index\n> path, which can be parameterized, but I couldn't construct a query that makes\n> your checks fire. Maybe the reason is that a parameterized index path is\n> mostly used on the inner side of a NL join, however no partial path can be\n> used there. 
(The problem is that each worker evaluating the NL join would only\n> see a subset of the inner relation, which whould lead to incorrect results.)\n>\n> So I'd also choose conditions rather than assert statements.\n\nThanks for confirming.\n\n>\n> Following are my (minor) findings:\n>\n> In generate_gather_paths() you added this test\n>\n> /*\n> * Delay gather path creation until the level in the join tree where all\n> * params used in a worker are generated within that worker.\n> */\n> if (!bms_is_subset(required_outer, rel->relids))\n> return;\n>\n> but I'm not sure if required_outer can contain anything of rel->relids. How\n> about using bms_is_empty(required) outer, or even this?\n>\n> if (required_outer)\n> return;\n>\n> Similarly,\n>\n> /* We can't pass params to workers. */\n> if (!bms_is_subset(PATH_REQ_OUTER(cheapest_partial_path), rel->relids))\n>\n> might look like\n>\n> if (!bms_is_empty(PATH_REQ_OUTER(cheapest_partial_path)))\n>\n> or\n>\n> if (PATH_REQ_OUTER(cheapest_partial_path))\n\nI'm not sure about this change. Deciding is difficult given the fact\nthat we don't seem to currently generate these paths, but I don't see\na reason why lateral_relids can't be present on the rel, and if so,\nthen we need to check to see if they're a subset of relids we can\nsatisfy rather than checking that they don't exist.\n\n> In particular, build_index_paths() does the following when setting\n> outer_relids (which eventually becomes (path->param_info->ppi_req_outer):\n>\n> /* Enforce convention that outer_relids is exactly NULL if empty */\n> if (bms_is_empty(outer_relids))\n> outer_relids = NULL;\n>\n>\n> Another question is whether in this call\n>\n> simple_gather_path = (Path *)\n> create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,\n> required_outer, rowsp);\n>\n> required_outer should be passed to create_gather_path(). Shouldn't it rather\n> be PATH_REQ_OUTER(cheapest_partial_path) that you test just above? 
Again,\n> build_index_paths() initializes outer_relids this way\n>\n> outer_relids = bms_copy(rel->lateral_relids);\n>\n> but then it may add some more relations:\n>\n> /* OK to include this clause */\n> index_clauses = lappend(index_clauses, iclause);\n> outer_relids = bms_add_members(outer_relids,\n> rinfo->clause_relids);\n>\n> So I think that PATH_REQ_OUTER(cheapest_partial_path) in\n> generate_gather_paths() can eventually contain more relations than\n> required_outer, and therefore it's safer to check the first.\n\nYes, this is a good catch. Originally I didn't know about\nPATH_REQ_OUTER, and I'd missed using it in these places.\n\n>\n> Similar comments might apply to generate_useful_gather_paths(). Here I also\n> suggest to move this test\n>\n> /* We can't pass params to workers. */\n> if (!bms_is_subset(PATH_REQ_OUTER(subpath), rel->relids))\n> continue;\n>\n> to the top of the loop because it's relatively cheap.\n\nMoved.\n\nAttached is v9.\n\nJames Coleman", "msg_date": "Wed, 8 Feb 2023 08:38:27 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "James Coleman <jtc331@gmail.com> wrote:\n\n> On Mon, Feb 6, 2023 at 11:39 AM Antonin Houska <ah@cybertec.at> wrote:\n\n> Attached is v9.\n\nok, I've changed the status to RfC\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 08 Mar 2023 19:07:57 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Mon, Jan 23, 2023 at 10:00 PM James Coleman <jtc331@gmail.com> wrote:\n\n> Which this patch we do in fact now see (as expected) rels with\n> non-empty lateral_relids showing up in generate_[useful_]gather_paths.\n> And the partial paths can now have non-empty required outer rels. 
I'm\n> not able to come up with a plan that would actually be caught by those\n> checks; I theorize that because of the few places we actually call\n> generate_[useful_]gather_paths we are in practice already excluding\n> those, but for now I've left these as a conditional rather than an\n> assertion because it seems like the kind of guard we'd want to ensure\n> those methods are safe.\n\n\nI'm trying to understand this part. AFAICS we will not create partial\npaths for a rel, base or join, if it has lateral references. So it\nseems to me that in generate_[useful_]gather_paths after we've checked\nthat there are partial paths, the checks for lateral_relids are not\nnecessary because lateral_relids should always be empty in this case.\nMaybe I'm missing something.\n\nAnd while trying the v9 patch I came across a crash with the query\nbelow.\n\nset min_parallel_table_scan_size to 0;\nset parallel_setup_cost to 0;\nset parallel_tuple_cost to 0;\n\nexplain (costs off)\nselect * from pg_description t1 where objoid in\n (select objoid from pg_description t2 where t2.description =\nt1.description);\n QUERY PLAN\n--------------------------------------------------------\n Seq Scan on pg_description t1\n Filter: (SubPlan 1)\n SubPlan 1\n -> Gather\n Workers Planned: 2\n -> Parallel Seq Scan on pg_description t2\n Filter: (description = t1.description)\n(7 rows)\n\nselect * from pg_description t1 where objoid in\n (select objoid from pg_description t2 where t2.description =\nt1.description);\nWARNING: terminating connection because of crash of another server process\n\nSeems something is wrong when extracting the argument from the Param in\nparallel worker.\n\nBTW another rebase is needed as it no longer applies to HEAD.\n\nThanks\nRichard", "msg_date": "Tue, 6 Jun 2023 17:35:51 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, {
"msg_contents": "On Tue, Jun 6, 2023 at 4:36 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Mon, Jan 23, 2023 at 10:00 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> Which this patch we do in fact now see (as expected) rels with\n>> non-empty lateral_relids showing up in generate_[useful_]gather_paths.\n>> And the partial paths can now have non-empty required outer rels. I'm\n>> not able to come up with a plan that would actually be caught by those\n>> checks; I theorize that because of the few places we actually call\n>> generate_[useful_]gather_paths we are in practice already excluding\n>> those, but for now I've left these as a conditional rather than an\n>> assertion because it seems like the kind of guard we'd want to ensure\n>> those methods are safe.\n>\n>\n> I'm trying to understand this part. AFAICS we will not create partial\n> paths for a rel, base or join, if it has lateral references. So it\n> seems to me that in generate_[useful_]gather_paths after we've checked\n> that there are partial paths, the checks for lateral_relids are not\n> necessary because lateral_relids should always be empty in this case.\n> Maybe I'm missing something.\n\nAt first I was thinking \"isn't the point of the patch to generate\npartial paths for rels with lateral references\" given what I'd written\nback in January, but I added \"Assert(bms_is_empty(required_outer));\"\nto both of those functions and the assertion never fails running the\ntests (including my newly parallelizable queries). I'm almost positive\nI'd checked this back in January (not only had I'd explicitly written\nthat I'd confirmed we had non-empty lateral_relids there, but also it\nwas the entire based of the alternate approach to the patch), but...I\ncan't go back to 5 months ago and remember what I'd done.\n\nAh! Your comment about \"after we've checked that there are partial\npaths\" triggered a thought. 
I think originally I'd had the\n\"bms_is_subset(required_outer, rel->relids)\" check first in these\nfunctions. And indeed if I run the tests with that the assertion moved\nto above the partial paths check, I get failures in\ngenerate_useful_gather_paths specifically. Mystery solved!\n\n> And while trying the v9 patch I came across a crash with the query\n> below.\n>\n> set min_parallel_table_scan_size to 0;\n> set parallel_setup_cost to 0;\n> set parallel_tuple_cost to 0;\n>\n> explain (costs off)\n> select * from pg_description t1 where objoid in\n> (select objoid from pg_description t2 where t2.description = t1.description);\n> QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on pg_description t1\n> Filter: (SubPlan 1)\n> SubPlan 1\n> -> Gather\n> Workers Planned: 2\n> -> Parallel Seq Scan on pg_description t2\n> Filter: (description = t1.description)\n> (7 rows)\n>\n> select * from pg_description t1 where objoid in\n> (select objoid from pg_description t2 where t2.description = t1.description);\n> WARNING: terminating connection because of crash of another server process\n>\n> Seems something is wrong when extracting the argument from the Param in\n> parallel worker.\n\nWith what I'm trying to change I don't think this plan should ever be\ngenerated since it means we'd have to pass a param from the outer seq\nscan into the parallel subplan, which we can't do (currently).\n\nI've attached the full backtrace to the email, but as you hinted at\nthe parallel worker is trying to get the param (in this case\ndetoasting it), but the param doesn't exist on the worker, so it seg\nfaults. 
Looking at this further I think there's an existing test case\nthat exposes the misplanning here (the one right under the comment\n\"Parallel Append is not to be used when the subpath depends on the\nouter param\" in select_parallel.sql), but it doesn't seg fault because\nthe param is an integer, doesn't need to be detoasted, and therefore\n(I think) we skate by (but probably with wrong results in depending on\nthe dataset).\n\nInterestingly this is one of the existing test queries my original\npatch approach didn't change, so this gives me something specific to\nwork with improving the path. Thanks for testing this and bringing\nthis to my attention!\n\nBTW are you by any chance testing on ARM macOS? I reproduced the issue\nthere, but for some reason I did not reproduce the error (and the plan\nwasn't parallelized) when I tested this on linux. Perhaps I missed\nsetting something up; it seems odd.\n\n> BTW another rebase is needed as it no longer applies to HEAD.\n\nApologies; I'd rebased, but hadn't updated the thread. See attached\nfor an updated series (albeit still broken on your test query).\n\nThanks,\nJames", "msg_date": "Sun, 11 Jun 2023 21:23:45 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Mon, Jun 12, 2023 at 10:23 AM James Coleman <jtc331@gmail.com> wrote:\n\n> BTW are you by any chance testing on ARM macOS? I reproduced the issue\n> there, but for some reason I did not reproduce the error (and the plan\n> wasn't parallelized) when I tested this on linux. Perhaps I missed\n> setting something up; it seems odd.\n\n\nHmm, that's weird. I was also testing that query on linux. 
I was also testing that query on linux.  But please\nnote that several GUC settings are needed to generate parallel plan for\nthat query.\n\nset min_parallel_table_scan_size to 0;\nset parallel_setup_cost to 0;\nset parallel_tuple_cost to 0;\n\nThanks\nRichard", "msg_date": "Mon, 26 Jun 2023 14:47:57 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Sun, Jun 11, 2023 at 10:23 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> ...\n> > And while trying the v9 patch I came across a crash with the query\n> > below.\n> >\n> > set min_parallel_table_scan_size to 0;\n> > set parallel_setup_cost to 0;\n> > set parallel_tuple_cost to 0;\n> >\n> > explain (costs off)\n> > select * from pg_description t1 where objoid in\n> > (select objoid from pg_description t2 where t2.description = t1.description);\n> > QUERY PLAN\n> > --------------------------------------------------------\n> > Seq Scan on pg_description t1\n> > Filter: (SubPlan 1)\n> > SubPlan 1\n> > -> Gather\n> > Workers Planned: 2\n> > -> Parallel Seq Scan on pg_description t2\n> > Filter: (description = t1.description)\n> > (7 rows)\n> >\n> > select * from pg_description t1 where objoid in\n> > (select objoid from pg_description t2 where t2.description = t1.description);\n> > WARNING: terminating connection because of crash of another 
server process\n> >\n> > Seems something is wrong when extracting the argument from the Param in\n> > parallel worker.\n>\n> With what I'm trying to change I don't think this plan should ever be\n> generated since it means we'd have to pass a param from the outer seq\n> scan into the parallel subplan, which we can't do (currently).\n>\n> I've attached the full backtrace to the email, but as you hinted at\n> the parallel worker is trying to get the param (in this case\n> detoasting it), but the param doesn't exist on the worker, so it seg\n> faults. Looking at this further I think there's an existing test case\n> that exposes the misplanning here (the one right under the comment\n> \"Parallel Append is not to be used when the subpath depends on the\n> outer param\" in select_parallel.sql), but it doesn't seg fault because\n> the param is an integer, doesn't need to be detoasted, and therefore\n> (I think) we skate by (but probably with wrong results in depending on\n> the dataset).\n>\n> Interestingly this is one of the existing test queries my original\n> patch approach didn't change, so this gives me something specific to\n> work with improving the path. Thanks for testing this and bringing\n> this to my attention!\n\nHere's what I've found debugging this:\n\nThere's only a single gather path ever created when planning this\nquery, making it easy to know which one is the problem. 
That gather\npath is created with this stacktrace:\n\n frame #0: 0x0000000105291590\npostgres`create_gather_path(root=0x000000013081ae78,\nrel=0x000000013080c8e8, subpath=0x000000013081c080,\ntarget=0x000000013081c8c0, required_outer=0x0000000000000000,\nrows=0x0000000000000000) at pathnode.c:1971:2\n frame #1: 0x0000000105208e54\npostgres`generate_gather_paths(root=0x000000013081ae78,\nrel=0x000000013080c8e8, override_rows=false) at allpaths.c:3097:4\n frame #2: 0x00000001052090ec\npostgres`generate_useful_gather_paths(root=0x000000013081ae78,\nrel=0x000000013080c8e8, override_rows=false) at allpaths.c:3241:2\n frame #3: 0x0000000105258754\npostgres`apply_scanjoin_target_to_paths(root=0x000000013081ae78,\nrel=0x000000013080c8e8, scanjoin_targets=0x000000013081c978,\nscanjoin_targets_contain_srfs=0x0000000000000000,\nscanjoin_target_parallel_safe=true, tlist_same_exprs=true) at\nplanner.c:7696:3\n frame #4: 0x00000001052533cc\npostgres`grouping_planner(root=0x000000013081ae78, tuple_fraction=0.5)\nat planner.c:1611:3\n frame #5: 0x0000000105251e9c\npostgres`subquery_planner(glob=0x00000001308188d8,\nparse=0x000000013080caf8, parent_root=0x000000013080cc38,\nhasRecursion=false, tuple_fraction=0.5) at planner.c:1062:2\n frame #6: 0x000000010526b134\npostgres`make_subplan(root=0x000000013080cc38,\norig_subquery=0x000000013080ff58, subLinkType=ANY_SUBLINK,\nsubLinkId=0, testexpr=0x000000013080d848, isTopQual=true) at\nsubselect.c:221:12\n frame #7: 0x0000000105268b8c\npostgres`process_sublinks_mutator(node=0x000000013080d6d8,\ncontext=0x000000016b0998f8) at subselect.c:1950:10\n frame #8: 0x0000000105268ad8\npostgres`SS_process_sublinks(root=0x000000013080cc38,\nexpr=0x000000013080d6d8, isQual=true) at subselect.c:1923:9\n frame #9: 0x00000001052527b8\npostgres`preprocess_expression(root=0x000000013080cc38,\nexpr=0x000000013080d6d8, kind=0) at planner.c:1169:10\n frame #10: 
0x0000000105252954\npostgres`preprocess_qual_conditions(root=0x000000013080cc38,\njtnode=0x000000013080d108) at planner.c:1214:14\n frame #11: 0x0000000105251580\npostgres`subquery_planner(glob=0x00000001308188d8,\nparse=0x0000000137010d68, parent_root=0x0000000000000000,\nhasRecursion=false, tuple_fraction=0) at planner.c:832:2\n frame #12: 0x000000010525042c\npostgres`standard_planner(parse=0x0000000137010d68,\nquery_string=\"explain (costs off)\\nselect * from pg_description t1\nwhere objoid in\\n (select objoid from pg_description t2 where\nt2.description = t1.description);\", cursorOptions=2048,\nboundParams=0x0000000000000000) at planner.c:411:9\n\nThere aren't any lateral markings on the rels. Additionally the\npartial path has param_info=null (I found out from Tom in a separate\nthread [1] that this is only set for outer relations from the same\nquery level).\n\nThe only param that I could easily find at first was a single param of\ntype PARAM_EXTERN in root->plan_params in make_subplan().\n\nI spent a lot of time trying to figure out where we could find the\nPARAM_EXEC param that's being fed into the subplan, but it doesn't\nseem like we have access to any of these things at the point in the\npath creation process that it's interesting to us when inserting the\ngather nodes.\n\nGiven all of that I settled on this approach:\n1. Modify is_parallel_safe() to by default ignore PARAM_EXEC params.\n2. Add is_parallel_safe_with_params() that checks for the existence of\nsuch params.\n3. Store the required params in a bitmapset on each base rel.\n4. Union the bitmapset on join rels.\n5. Only insert a gather node if that bitmapset is empty.\n\nI have an intuition that there's some spot (e.g. 
joins) that we should\nbe removing params from this set (e.g., when we've satisfied them),\nbut I haven't been able to come up with such a scenario as yet.\n\nThe attached v11 fixes the issue you reported.\n\nThanks,\nJames Coleman", "msg_date": "Mon, 3 Jul 2023 21:26:44 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, 4 Jul 2023 at 06:56, James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sun, Jun 11, 2023 at 10:23 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > ...\n> > > And while trying the v9 patch I came across a crash with the query\n> > > below.\n> > >\n> > > set min_parallel_table_scan_size to 0;\n> > > set parallel_setup_cost to 0;\n> > > set parallel_tuple_cost to 0;\n> > >\n> > > explain (costs off)\n> > > select * from pg_description t1 where objoid in\n> > > (select objoid from pg_description t2 where t2.description = t1.description);\n> > > QUERY PLAN\n> > > --------------------------------------------------------\n> > > Seq Scan on pg_description t1\n> > > Filter: (SubPlan 1)\n> > > SubPlan 1\n> > > -> Gather\n> > > Workers Planned: 2\n> > > -> Parallel Seq Scan on pg_description t2\n> > > Filter: (description = t1.description)\n> > > (7 rows)\n> > >\n> > > select * from pg_description t1 where objoid in\n> > > (select objoid from pg_description t2 where t2.description = t1.description);\n> > > WARNING: terminating connection because of crash of another server process\n> > >\n> > > Seems something is wrong when extracting the argument from the Param in\n> > > parallel worker.\n> >\n> > With what I'm trying to change I don't think this plan should ever be\n> > generated since it means we'd have to pass a param from the outer seq\n> > scan into the parallel subplan, which we can't do (currently).\n> >\n> > I've attached the full backtrace to the email, but as you hinted at\n> > the parallel worker is 
trying to get the param (in this case\n> > detoasting it), but the param doesn't exist on the worker, so it seg\n> > faults. Looking at this further I think there's an existing test case\n> > that exposes the misplanning here (the one right under the comment\n> > \"Parallel Append is not to be used when the subpath depends on the\n> > outer param\" in select_parallel.sql), but it doesn't seg fault because\n> > the param is an integer, doesn't need to be detoasted, and therefore\n> > (I think) we skate by (but probably with wrong results in depending on\n> > the dataset).\n> >\n> > Interestingly this is one of the existing test queries my original\n> > patch approach didn't change, so this gives me something specific to\n> > work with improving the path. Thanks for testing this and bringing\n> > this to my attention!\n>\n> Here's what I've found debugging this:\n>\n> There's only a single gather path ever created when planning this\n> query, making it easy to know which one is the problem. 
That gather\n> path is created with this stacktrace:\n>\n> frame #0: 0x0000000105291590\n> postgres`create_gather_path(root=0x000000013081ae78,\n> rel=0x000000013080c8e8, subpath=0x000000013081c080,\n> target=0x000000013081c8c0, required_outer=0x0000000000000000,\n> rows=0x0000000000000000) at pathnode.c:1971:2\n> frame #1: 0x0000000105208e54\n> postgres`generate_gather_paths(root=0x000000013081ae78,\n> rel=0x000000013080c8e8, override_rows=false) at allpaths.c:3097:4\n> frame #2: 0x00000001052090ec\n> postgres`generate_useful_gather_paths(root=0x000000013081ae78,\n> rel=0x000000013080c8e8, override_rows=false) at allpaths.c:3241:2\n> frame #3: 0x0000000105258754\n> postgres`apply_scanjoin_target_to_paths(root=0x000000013081ae78,\n> rel=0x000000013080c8e8, scanjoin_targets=0x000000013081c978,\n> scanjoin_targets_contain_srfs=0x0000000000000000,\n> scanjoin_target_parallel_safe=true, tlist_same_exprs=true) at\n> planner.c:7696:3\n> frame #4: 0x00000001052533cc\n> postgres`grouping_planner(root=0x000000013081ae78, tuple_fraction=0.5)\n> at planner.c:1611:3\n> frame #5: 0x0000000105251e9c\n> postgres`subquery_planner(glob=0x00000001308188d8,\n> parse=0x000000013080caf8, parent_root=0x000000013080cc38,\n> hasRecursion=false, tuple_fraction=0.5) at planner.c:1062:2\n> frame #6: 0x000000010526b134\n> postgres`make_subplan(root=0x000000013080cc38,\n> orig_subquery=0x000000013080ff58, subLinkType=ANY_SUBLINK,\n> subLinkId=0, testexpr=0x000000013080d848, isTopQual=true) at\n> subselect.c:221:12\n> frame #7: 0x0000000105268b8c\n> postgres`process_sublinks_mutator(node=0x000000013080d6d8,\n> context=0x000000016b0998f8) at subselect.c:1950:10\n> frame #8: 0x0000000105268ad8\n> postgres`SS_process_sublinks(root=0x000000013080cc38,\n> expr=0x000000013080d6d8, isQual=true) at subselect.c:1923:9\n> frame #9: 0x00000001052527b8\n> postgres`preprocess_expression(root=0x000000013080cc38,\n> expr=0x000000013080d6d8, kind=0) at planner.c:1169:10\n> frame #10: 0x0000000105252954\n> 
postgres`preprocess_qual_conditions(root=0x000000013080cc38,\n> jtnode=0x000000013080d108) at planner.c:1214:14\n> frame #11: 0x0000000105251580\n> postgres`subquery_planner(glob=0x00000001308188d8,\n> parse=0x0000000137010d68, parent_root=0x0000000000000000,\n> hasRecursion=false, tuple_fraction=0) at planner.c:832:2\n> frame #12: 0x000000010525042c\n> postgres`standard_planner(parse=0x0000000137010d68,\n> query_string=\"explain (costs off)\\nselect * from pg_description t1\n> where objoid in\\n (select objoid from pg_description t2 where\n> t2.description = t1.description);\", cursorOptions=2048,\n> boundParams=0x0000000000000000) at planner.c:411:9\n>\n> There aren't any lateral markings on the rels. Additionally the\n> partial path has param_info=null (I found out from Tom in a separate\n> thread [1] that this is only set for outer relations from the same\n> query level).\n>\n> The only param that I could easily find at first was a single param of\n> type PARAM_EXTERN in root->plan_params in make_subplan().\n>\n> I spent a lot of time trying to figure out where we could find the\n> PARAM_EXEC param that's being fed into the subplan, but it doesn't\n> seem like we have access to any of these things at the point in the\n> path creation process that it's interesting to us when inserting the\n> gather nodes.\n>\n> Given all of that I settled on this approach:\n> 1. Modify is_parallel_safe() to by default ignore PARAM_EXEC params.\n> 2. Add is_parallel_safe_with_params() that checks for the existence of\n> such params.\n> 3. Store the required params in a bitmapset on each base rel.\n> 4. Union the bitmapset on join rels.\n> 5. Only insert a gather node if that bitmapset is empty.\n>\n> I have an intuition that there's some spot (e.g. 
joins) that we should\n> be removing params from this set (e.g., when we've satisfied them),\n> but I haven't been able to come up with such a scenario as yet.\n>\n> The attached v11 fixes the issue you reported.\n\nOne of the tests has failed in CFBot at [1] with:\n+++ /tmp/cirrus-ci-build/build/testrun/pg_upgrade/002_pg_upgrade/data/results/select_parallel.out\n2023-12-20 20:08:42.480004000 +0000\n@@ -137,23 +137,24 @@\n explain (costs off)\n select (select max((select pa1.b from part_pa_test pa1 where pa1.a = pa2.a)))\n from part_pa_test pa2;\n- QUERY PLAN\n---------------------------------------------------------------\n- Aggregate\n+ QUERY PLAN\n+--------------------------------------------------------------------\n+ Finalize Aggregate\n -> Gather\n Workers Planned: 3\n- -> Parallel Append\n- -> Parallel Seq Scan on part_pa_test_p1 pa2_1\n- -> Parallel Seq Scan on part_pa_test_p2 pa2_2\n+ -> Partial Aggregate\n+ -> Parallel Append\n+ -> Parallel Seq Scan on part_pa_test_p1 pa2_1\n+ -> Parallel Seq Scan on part_pa_test_p2 pa2_2\n+ SubPlan 1\n+ -> Append\n+ -> Seq Scan on part_pa_test_p1 pa1_1\n+ Filter: (a = pa2.a)\n+ -> Seq Scan on part_pa_test_p2 pa1_2\n+ Filter: (a = pa2.a)\n SubPlan 2\n -> Result\n- SubPlan 1\n- -> Append\n- -> Seq Scan on part_pa_test_p1 pa1_1\n- Filter: (a = pa2.a)\n- -> Seq Scan on part_pa_test_p2 pa1_2\n- Filter: (a = pa2.a)\n-(14 rows)\n+(15 rows)\n\nMore details of the failure is available at [2].\n\n[1] - https://cirrus-ci.com/task/5685696451575808\n[2] - https://api.cirrus-ci.com/v1/artifact/task/5685696451575808/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 9 Jan 2024 12:38:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Hello! 
\nI was going through the previous conversations for this particular patch and it seems that this patch failed some tests previously? \nImo we should move it to the next CF so that the remaining issues can be resolved accordingly.", "msg_date": "Tue, 23 Jan 2024 14:33:22 +0000", "msg_from": "Akshat Jaimini <destrex271@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Jan 23, 2024 at 9:34 AM Akshat Jaimini <destrex271@gmail.com> wrote:\n> Hello!\n> I was going through the previous conversations for this particular patch and it seems that this patch failed some tests previously?\n> Imo we should move it to the next CF so that the remaining issues can be resolved accordingly.\n\nSo I guess the question here is whether this is thought to be ready\nfor serious review or whether it is still thought to need work. If the\nlatter, it should be marked RwF until that happens -- if the former,\nthen we should try to review it rather than letting it languish\nforever.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jan 2024 15:20:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "I think we should move this patch to the next CF as I believe that work is still going on resolving the last reported bug.", "msg_date": "Tue, 30 Jan 2024 16:16:20 +0000", "msg_from": "Akshat Jaimini <destrex271@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Jan 30, 2024 at 11:17 AM Akshat Jaimini <destrex271@gmail.com> wrote:\n> I think we should move this patch to the next CF as I believe that work is still going on resolving the last reported bug.\n\nWe shouldn't just keep pushing this forward to the next CF. 
It's been\nidle since July. If it needs more work, mark it RwF and it can be\nreopened when there's something for a reviewer to do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jan 2024 11:54:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Jan 9, 2024 at 2:09 AM vignesh C <vignesh21@gmail.com> wrote:\n> ...\n> > Given all of that I settled on this approach:\n> > 1. Modify is_parallel_safe() to by default ignore PARAM_EXEC params.\n> > 2. Add is_parallel_safe_with_params() that checks for the existence of\n> > such params.\n> > 3. Store the required params in a bitmapset on each base rel.\n> > 4. Union the bitmapset on join rels.\n> > 5. Only insert a gather node if that bitmapset is empty.\n> >\n> > I have an intuition that there's some spot (e.g. joins) that we should\n> > be removing params from this set (e.g., when we've satisfied them),\n> > but I haven't been able to come up with such a scenario as yet.\n> >\n> > The attached v11 fixes the issue you reported.\n>\n> One of the tests has failed in CFBot at [1] with:\n> +++ /tmp/cirrus-ci-build/build/testrun/pg_upgrade/002_pg_upgrade/data/results/select_parallel.out\n> 2023-12-20 20:08:42.480004000 +0000\n> @@ -137,23 +137,24 @@\n> explain (costs off)\n> select (select max((select pa1.b from part_pa_test pa1 where pa1.a = pa2.a)))\n> from part_pa_test pa2;\n> - QUERY PLAN\n> ---------------------------------------------------------------\n> - Aggregate\n> + QUERY PLAN\n> +--------------------------------------------------------------------\n> + Finalize Aggregate\n> -> Gather\n> Workers Planned: 3\n> - -> Parallel Append\n> - -> Parallel Seq Scan on part_pa_test_p1 pa2_1\n> - -> Parallel Seq Scan on part_pa_test_p2 pa2_2\n> + -> Partial Aggregate\n> + -> Parallel Append\n> + -> Parallel Seq Scan on part_pa_test_p1 pa2_1\n> + 
-> Parallel Seq Scan on part_pa_test_p2 pa2_2\n> + SubPlan 1\n> + -> Append\n> + -> Seq Scan on part_pa_test_p1 pa1_1\n> + Filter: (a = pa2.a)\n> + -> Seq Scan on part_pa_test_p2 pa1_2\n> + Filter: (a = pa2.a)\n> SubPlan 2\n> -> Result\n> - SubPlan 1\n> - -> Append\n> - -> Seq Scan on part_pa_test_p1 pa1_1\n> - Filter: (a = pa2.a)\n> - -> Seq Scan on part_pa_test_p2 pa1_2\n> - Filter: (a = pa2.a)\n> -(14 rows)\n> +(15 rows)\n>\n> More details of the failure is available at [2].\n>\n> [1] - https://cirrus-ci.com/task/5685696451575808\n> [2] - https://api.cirrus-ci.com/v1/artifact/task/5685696451575808/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\nThanks for noting this here.\n\nI've finally had a chance to look at this, and I don't believe there's\nany real failure here, merely drift of how the planner works on master\nresulting in this query now being eligible for a different plan shape.\n\nI was a bit wary at first because the changing test query is one I'd\npreviously referenced in [1] as likely exposing the bug I'd fixed\nwhere params where being used across worker boundaries. 
However\nlooking at the diff in the patch at that point (v10) that particular\ntest query formed a different plan shape (there were two gather nodes\nbeing created, and params crossing between them).\n\nBut in the current revision of master with the current patch applied\nthat's no longer true: we have a Gather node, and the Subplan using\nthe param is properly under that Gather node, and the param should be\nbeing both generated and consumed within the same worker process.\n\nSo I've updated the patch to show that plan change as part of the diff.\n\nSee attached v12\n\nRegards,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CAAaqYe-_TObm5KwmZLYXBJ3BJGh4cUZWM0v1mY1gWTMkRNQXDQ%40mail.gmail.com", "msg_date": "Tue, 30 Jan 2024 21:51:09 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Jan 30, 2024 at 11:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jan 30, 2024 at 11:17 AM Akshat Jaimini <destrex271@gmail.com> wrote:\n> > I think we should move this patch to the next CF as I believe that work is still going on resolving the last reported bug.\n>\n> We shouldn't just keep pushing this forward to the next CF. It's been\n> idle since July. 
If it needs more work, mark it RwF and it can be\n> reopened when there's something for a reviewer to do.\n\nI don't follow the \"Idle since July\" since it just hasn't received\nreview since then, so there's been nothing to reply to.\n\nThat being said, Vignesh's note in January about a now-failing test is\nrelevant activity, and I've just today responded to that, so I'm\nchanging the status back from Waiting on Author to Needs Review.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Tue, 30 Jan 2024 21:56:20 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> I've finally had a chance to look at this, and I don't believe there's\n> any real failure here, merely drift of how the planner works on master\n> resulting in this query now being eligible for a different plan shape.\n\n> I was a bit wary at first because the changing test query is one I'd\n> previously referenced in [1] as likely exposing the bug I'd fixed\n> where params where being used across worker boundaries. However\n> looking at the diff in the patch at that point (v10) that particular\n> test query formed a different plan shape (there were two gather nodes\n> being created, and params crossing between them).\n\n> But in the current revision of master with the current patch applied\n> that's no longer true: we have a Gather node, and the Subplan using\n> the param is properly under that Gather node, and the param should be\n> being both generated and consumed within the same worker process.\n\nHmm ... so the question this raises for me is: was that test intended\nto verify behavior of params being passed across workers? If so,\nhaven't you broken the point of the test? 
This doesn't mean that\nyour code change is wrong; but I think maybe you need to find a way\nto modify that test case so that it still tests what it's meant to.\nThis is a common hazard when changing the planner's behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jan 2024 22:34:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Jan 30, 2024 at 10:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Coleman <jtc331@gmail.com> writes:\n> > I've finally had a chance to look at this, and I don't believe there's\n> > any real failure here, merely drift of how the planner works on master\n> > resulting in this query now being eligible for a different plan shape.\n>\n> > I was a bit wary at first because the changing test query is one I'd\n> > previously referenced in [1] as likely exposing the bug I'd fixed\n> > where params where being used across worker boundaries. However\n> > looking at the diff in the patch at that point (v10) that particular\n> > test query formed a different plan shape (there were two gather nodes\n> > being created, and params crossing between them).\n>\n> > But in the current revision of master with the current patch applied\n> > that's no longer true: we have a Gather node, and the Subplan using\n> > the param is properly under that Gather node, and the param should be\n> > being both generated and consumed within the same worker process.\n>\n> Hmm ... so the question this raises for me is: was that test intended\n> to verify behavior of params being passed across workers? If so,\n> haven't you broken the point of the test? 
This doesn't mean that\n> your code change is wrong; but I think maybe you need to find a way\n> to modify that test case so that it still tests what it's meant to.\n> This is a common hazard when changing the planner's behavior.\n\nI'd been thinking it was covered by another test I'd added in 0001,\nbut looking at it again that test doesn't exercise parallel append\n(though it does exercise a different case of cross-worker param\nusage), so I'll add another test for the parallel append behavior.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Wed, 31 Jan 2024 08:53:39 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Tue, Jan 30, 2024 at 9:56 PM James Coleman <jtc331@gmail.com> wrote:\n> I don't follow the \"Idle since July\" since it just hasn't received\n> review since then, so there's been nothing to reply to.\n\nIt wasn't clear to me if you thought that the patch was ready for\nreview since July, or if it was waiting on you since July. 
Those are\nquite different, IMV.\n\n> That being said, Vignesh's note in January about a now-failing test is\n> relevant activity, and I've just today responded to that, so I'm\n> changing the status back from Waiting on Author to Needs Review.\n\nSounds good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jan 2024 15:18:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "On Wed, Jan 31, 2024 at 3:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jan 30, 2024 at 9:56 PM James Coleman <jtc331@gmail.com> wrote:\n> > I don't follow the \"Idle since July\" since it just hasn't received\n> > review since then, so there's been nothing to reply to.\n>\n> It wasn't clear to me if you thought that the patch was ready for\n> review since July, or if it was waiting on you since July. Those are\n> quite different, IMV.\n\nAgreed they're very different. I'd thought it was actually in \"Needs\nreview\" and with no outstanding questions on the thread since July,\nbut maybe I'm missing something -- I've definitely misunderstood CF\napp status before, but usually that's been in the other direction\n(forgetting to mark it back to Needs Review after responding to a\nWaiting on Author).\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Wed, 31 Jan 2024 21:11:54 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" }, { "msg_contents": "Hi,\n\nI was going through the \"needs review\" patches with no recent messages,\ntrying to figure out what is needed to move them forward, and this one\ncaught my eye because I commented on it before. 
And it's also a bit sad\nexample, because it started in 2021 and is moving at glacier speed :-(\n\nI read through the thread, to understand how the design changed over\ntime, and I like the current approach (proposed by Robert) much more\nthan the initial idea of adding new flag next to parallel_safe etc.\n\nAnd in general the patch looks reasonably simple and clean, but my\nknowledge of PARAM intricacies is pretty much nil, so I'm hesitant to\nclaim the patch is correct. And I'm not sure what exactly needs to\nhappen to validate the approach :-(\n\n\nThe regression tests currently fail, due to a different plan for one of\nthe new queries in select_parallel. I guess it's due to some committed\npatch, and it looks like a sensible change, but I haven't looked closely.\n\nAlso, I do get this warning when building with GCC 12.2.0 on Debian:\n\nclauses.c: In function ‘max_parallel_hazard_walker’:\nclauses.c:961:49: warning: ‘save_safe_param_ids’ may be used\nuninitialized [-Wmaybe-uninitialized]\n 961 | context->safe_param_ids =\nsave_safe_param_ids;\n |\n~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~\nclauses.c:943:29: note: ‘save_safe_param_ids’ was declared here\n 943 | List *save_safe_param_ids;\n | ^~~~~~~~~~~~~~~~~~~\n\nIt's harmless, the compiler simply does not realize the two blocks\n(where save_safe_param_ids is set and used) have exactly the same if\nconditions, but it's a bit confusing for people too.\n\n\nI was wondering if this could affect some queries in TPC-H, but the only\nquery affected seems to be Q2 - where it helps, cutting the time in\nhalf, but Q2 is generally pretty fast / the expensive part was already\nparallelized quite well (i.e. the correlated subquery is fairly cheap).\n\nHowever, it's not difficult to construct a query where this helps a lot.\nIf the correlated subquery does something expensive (e.g. aggregation of\nnon-trivial amounts of data), this would help. 
So I wonder if e.g.\nTPC-DS would benefit from this more ...\n\n\nA couple review comments about the code:\n\n1) new fields in max_parallel_hazard_context should have comments:\n\n+\tbool\t\tcheck_params;\n+\tBitmapset **required_params;\n\n\n2) Do we need both is_parallel_safe and is_parallel_safe_with_params?\nISTM the main difference is the for() loop, so why not add an extra\nparameter to is_parallel_safe() and skip that loop if it's null? Or, if\nwe worry about external code, keep is_parallel_safe_with_params() and\ndefine is_parallel_safe() as is_parallel_safe_with_params(...,NULL)?\n\n3) Isn't it a bit weird that is_parallel_safe_with_params() actually\nsets check_params=false, which seems like it doesn't actually check\nparameters? I'm a bit confused / unsure if this is a bug or how it\nactually checks parameters. If it's correct, it may need a comment.\n\n4) The only place setting check_params is max_parallel_hazard, which is\ncalled only for the whole Query from standard_planner(). But it does not\nset required_params - can't this be an issue if the pointer happens to\nbe random garbage?\n\n5) It probably needs a pgindent run, there's a bunch of rather long\nlines and those are likely to get wrapped and look weird.\n\n6) Is the comment in max_parallel_hazard_walker() still accurate? It\ntalks about PARAM_EXTERN and PARAM_EXEC, but the patch removes the\nPARAM_EXTERN check entirely. So maybe the comment needs updating?\n\n7) I don't like the very complex if condition very much, it's hard to\nunderstand. I'd split that into two separate conditions, and add a short\ncomment for each of them. I.e. instead of:\n\n if (param->paramkind != PARAM_EXEC || !(context->check_params ||\ncontext->required_params != NULL))\n return false;\n\nI'd do\n\n /* ... comment ... */\n if (param->paramkind != PARAM_EXEC)\n return false;\n\n /* ... comment ... 
*/\n if (!(context->check_params || context->required_params != NULL))\n return false;\n\nor something like that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 21 Jul 2024 01:06:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parallelize correlated subqueries that execute within each worker" } ]
[ { "msg_contents": "Hi,\n\nEnabling optional ./configure features/packages using --enable-* / --with-*\nwill and should cause fatal errors aborting the configure process\nif some dependencies are missing, letting the user install such packages,\nbefore proceeding and resuming ./configure.\n\nHowever, ./configure currently only saves config.cache at the very end upon success.\n\nThis means the manual loop below can take quite a few iterations,\nuntil the user have encountered all errors for their distribution,\nand installed all required packages:\n\nloop\n ./configure --with-... --enable-...\n if error occured then\n figure out which distro package should be installed using apt-file or google\n install the package\n else\n break\n end\nend loop\n\nTo speed-up the resuming of ./configure, I propose adding AC_CACHE_SAVE entires\nat a few positions *before* possibly executing some fatal checks (AC_MSG_ERROR).\n\nIt's important not to run AC_CACHE_SAVE in between the actual check and AC_MSG_ERROR,\nas that would cache the \"no\" value, causing the package later installed by the user to go undetected.\n\nAttached is a patch adding AC_CACHE_SAVE at places where I think it makes most sense.\nSince it's a macro that expanded causes configure to grow quite a bit,\nI only added it at the 10 places where we get most bang for the bucks, in terms of speed-up.\n\nJust for fun, I also created a little helper-tool, magicmake [1], to automate the pseudo-code loop above,\nto verify all packages could be found, after installation, so that no undesired \"no\" values were cached.\nThe gif animation [2] shows the building of PostgreSQL with these configure flags:\n\nmagicmake ./configure --config-cache --prefix=\"$HOME/pg-head\" --enable-nls --with-perl --with-python --with-tcl --with-icu --with-llvm --with-ssl=openssl --with-gssapi --with-ldap --with-pam --with-systemd --with-libxml --with-libxslt --with-lz4 --with-pgport=54321 --enable-debug --enable-cassert --enable-tap-tests 
--enable-depend --enable-coverage --enable-profiling --enable-dtrace\n\nThe following packages were detected and installed by magicmake to make it possible to build PostgreSQL with all those options:\n\nliblog-agent-perl lcov systemtap-sdt-dev llvm clang pkg-config libicu-dev libxml2-dev liblz4-dev libreadline-dev bison flex python-is-python3 zlib1g-dev libkrb5-dev libssl-dev libgss-dev libpam0g-dev libxslt1-dev libldap2-dev libsystemd-dev gettext tcl tcl-dev libperl-dev libpython3.8-dev libipc-run-perl dbtoepub fop libxml2-utils xsltproc libterm-table-perl libterm-readkey-perl libterm-size-any-perl\n\n(I only attached the patch for configure.ac, you have to run autoconf to also update configure.)\n\nThoughts?\n\n/Joel\n\n[1] https://github.com/truthly/magicmake\n[2] https://github.com/truthly/magicmake/raw/master/magicmake.gif", "msg_date": "Fri, 07 May 2021 18:43:19 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "[PATCH] Add configure cache checkpoints before fatal checks" } ]
[ { "msg_contents": "Hi\n\nI am testing new features of Postgres 14, and now I am trying to check the\nresult cache. Unfortunately on my test data, the result is not too good.\nthe behaviour is very non linear. Is it expected?\n\ncreate table t1(a int, t2_id int);\ninsert into t1 select random() * 100000, random() * 100000 from\ngenerate_series(1,1000000);\ncreate table t2(b int, id int);\ninsert into t2 select random() * 100000, random() * 100000 from\ngenerate_series(1,1000000);\ncreate index on t2(id);\n\nvacuum analyze t1, t2;\n\nwhen work_mem is 40MB\n\n QUERY PLAN\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Nested Loop (cost=4.65..472639.79 rows=1000000 width=16) (actual\ntime=0.041..1078.882 rows=1000000 loops=1)\n\n\n -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=8) (actual\ntime=0.010..60.212 rows=1000000 loops=1)\n\n -> Result Cache (cost=4.65..4.67 rows=1 width=8) (actual\ntime=0.001..0.001 rows=1 loops=1000000)\n\n\n Cache Key: t1.t2_id\n\n\n Hits: 900006 Misses: 99994 Evictions: 0 Overflows: 0 Memory\nUsage: 10547kB\n\n -> Aggregate (cost=4.64..4.65 rows=1 width=8) (actual\ntime=0.003..0.003 rows=1 loops=99994)\n\n\n -> Index Only Scan using t2_id_idx on t2 (cost=0.42..4.62\nrows=11 width=0) (actual time=0.002..0.003 rows=10 loops=99994)\n\n Index Cond: (id = t1.t2_id)\n\n\n Heap Fetches: 0\n\n\n Planning Time: 0.091 ms\n\n\n Execution Time: 1120.177 ms\n\nwhen work_mem is 10MB\n\n\npostgres=# set work_mem to '10MB'; -- 11MB is ok\nSET\npostgres=# explain analyze select * from t1, lateral(select count(*) from\nt2 where t1.t2_id = t2.id) s ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=4.65..472639.79 rows=1000000 width=16) (actual\ntime=0.040..56576.187 rows=1000000 loops=1)\n -> Seq Scan on t1 
(cost=0.00..14425.00 rows=1000000 width=8) (actual\ntime=0.010..76.753 rows=1000000 loops=1)\n -> Result Cache (cost=4.65..4.67 rows=1 width=8) (actual\ntime=0.056..0.056 rows=1 loops=1000000)\n Cache Key: t1.t2_id\n Hits: 884158 Misses: 115842 Evictions: 18752 Overflows: 0\n Memory Usage: 10241kB\n -> Aggregate (cost=4.64..4.65 rows=1 width=8) (actual\ntime=0.005..0.005 rows=1 loops=115842)\n -> Index Only Scan using t2_id_idx on t2 (cost=0.42..4.62\nrows=11 width=0) (actual time=0.003..0.004 rows=10 loops=115842)\n Index Cond: (id = t1.t2_id)\n Heap Fetches: 0\n Planning Time: 0.087 ms\n Execution Time: 56621.421 ms\n(11 rows)\n\nThe query without result cache\n\npostgres=# explain analyze select * from t1, lateral(select count(*) from\nt2 where t1.t2_id = t2.id) s ;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=4.64..4689425.00 rows=1000000 width=16) (actual\ntime=0.031..3260.858 rows=1000000 loops=1)\n -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=8) (actual\ntime=0.008..71.792 rows=1000000 loops=1)\n -> Aggregate (cost=4.64..4.65 rows=1 width=8) (actual\ntime=0.003..0.003 rows=1 loops=1000000)\n -> Index Only Scan using t2_id_idx on t2 (cost=0.42..4.62\nrows=11 width=0) (actual time=0.002..0.002 rows=10 loops=1000000)\n Index Cond: (id = t1.t2_id)\n Heap Fetches: 0\n Planning Time: 0.081 ms\n Execution Time: 3293.543 ms\n(8 rows)\n\n\n\nSamples: 119K of event 'cycles', 4000 Hz, Event count (approx.):\nOverhead Shared Object Symbol\n 79.20% postgres [.] 
cache_reduce_memory\n 1.94% [kernel] [k] native_write_msr_safe\n 1.63% [kernel] [k] update_cfs_shares\n 1.00% [kernel] [k] trigger_load_balance\n 0.97% [kernel] [k] timerqueue_add\n 0.51% [kernel] [k] task_tick_fair\n 0.51% [kernel] [k] task_cputime\n 0.50% [kernel] [k] perf_event_task_tick\n 0.50% [kernel] [k] update_curr\n 0.49% [kernel] [k] hrtimer_active\n\nRegards\n\nPavel\n\n", "msg_date": "Fri, 7 May 2021 20:24:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "plan with result cache is very slow when work_mem is not enough" }, {
"msg_contents": "pá 7. 5. 2021 v 20:24 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I am testing new features of Postgres 14, and now I am trying to check the\n> result cache. Unfortunately on my test data, the result is not too good.\n> the behaviour is very non linear. 
Is it expected?\n>\n> create table t1(a int, t2_id int);\n> insert into t1 select random() * 100000, random() * 100000 from\n> generate_series(1,1000000);\n> create table t2(b int, id int);\n> insert into t2 select random() * 100000, random() * 100000 from\n> generate_series(1,1000000);\n> create index on t2(id);\n>\n> vacuum analyze t1, t2;\n>\n> when work_mem is 40MB\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------\n>\n>\n> Nested Loop (cost=4.65..472639.79 rows=1000000 width=16) (actual\n> time=0.041..1078.882 rows=1000000 loops=1)\n>\n>\n> -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=8) (actual\n> time=0.010..60.212 rows=1000000 loops=1)\n>\n> -> Result Cache (cost=4.65..4.67 rows=1 width=8) (actual\n> time=0.001..0.001 rows=1 loops=1000000)\n>\n>\n> Cache Key: t1.t2_id\n>\n>\n> Hits: 900006 Misses: 99994 Evictions: 0 Overflows: 0 Memory\n> Usage: 10547kB\n>\n> -> Aggregate (cost=4.64..4.65 rows=1 width=8) (actual\n> time=0.003..0.003 rows=1 loops=99994)\n>\n>\n> -> Index Only Scan using t2_id_idx on t2 (cost=0.42..4.62\n> rows=11 width=0) (actual time=0.002..0.003 rows=10 loops=99994)\n>\n> Index Cond: (id = t1.t2_id)\n>\n>\n> Heap Fetches: 0\n>\n>\n> Planning Time: 0.091 ms\n>\n>\n> Execution Time: 1120.177 ms\n>\n> when work_mem is 10MB\n>\n>\n> postgres=# set work_mem to '10MB'; -- 11MB is ok\n> SET\n> postgres=# explain analyze select * from t1, lateral(select count(*) from\n> t2 where t1.t2_id = t2.id) s ;\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=4.65..472639.79 rows=1000000 width=16) (actual\n> time=0.040..56576.187 rows=1000000 loops=1)\n> -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=8) (actual\n> time=0.010..76.753 rows=1000000 loops=1)\n> -> Result Cache 
(cost=4.65..4.67 rows=1 width=8) (actual\n> time=0.056..0.056 rows=1 loops=1000000)\n> Cache Key: t1.t2_id\n> Hits: 884158 Misses: 115842 Evictions: 18752 Overflows: 0\n> Memory Usage: 10241kB\n> -> Aggregate (cost=4.64..4.65 rows=1 width=8) (actual\n> time=0.005..0.005 rows=1 loops=115842)\n> -> Index Only Scan using t2_id_idx on t2 (cost=0.42..4.62\n> rows=11 width=0) (actual time=0.003..0.004 rows=10 loops=115842)\n> Index Cond: (id = t1.t2_id)\n> Heap Fetches: 0\n> Planning Time: 0.087 ms\n> Execution Time: 56621.421 ms\n> (11 rows)\n>\n>\ncan be possible to disable caching when the number of evictions across some\nlimit ?\n\nCan be calculated some average cache hit ratio against evictions, and when\nthis ratio will be too big, then the cache can be bypassed.\n\n\n\n\n> The query without result cache\n>\n> postgres=# explain analyze select * from t1, lateral(select count(*) from\n> t2 where t1.t2_id = t2.id) s ;\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=4.64..4689425.00 rows=1000000 width=16) (actual\n> time=0.031..3260.858 rows=1000000 loops=1)\n> -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=8) (actual\n> time=0.008..71.792 rows=1000000 loops=1)\n> -> Aggregate (cost=4.64..4.65 rows=1 width=8) (actual\n> time=0.003..0.003 rows=1 loops=1000000)\n> -> Index Only Scan using t2_id_idx on t2 (cost=0.42..4.62\n> rows=11 width=0) (actual time=0.002..0.002 rows=10 loops=1000000)\n> Index Cond: (id = t1.t2_id)\n> Heap Fetches: 0\n> Planning Time: 0.081 ms\n> Execution Time: 3293.543 ms\n> (8 rows)\n>\n>\n>\n> Samples: 119K of event 'cycles', 4000 Hz, Event count (approx.):\n> Overhead Shared Object Symbol\n> 79.20% postgres [.] 
cache_reduce_memory\n> 1.94% [kernel] [k] native_write_msr_safe\n> 1.63% [kernel] [k] update_cfs_shares\n> 1.00% [kernel] [k] trigger_load_balance\n> 0.97% [kernel] [k] timerqueue_add\n> 0.51% [kernel] [k] task_tick_fair\n> 0.51% [kernel] [k] task_cputime\n> 0.50% [kernel] [k] perf_event_task_tick\n> 0.50% [kernel] [k] update_curr\n> 0.49% [kernel] [k] hrtimer_active\n>\n> Regards\n>\n> Pavel\n>\n", "msg_date": "Fri, 7 May 2021 20:45:36 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, {
"msg_contents": "Pavel Stehule писал 2021-05-07 21:45:\n>> \n>> Samples: 119K of event 'cycles', 4000 Hz, Event count (approx.):\n>> Overhead  Shared Object             Symbol\n>> 79.20%  postgres                  [.] 
And finally it calls remove_cache_entry.\nI suppose remove_cache_entry should consume most of CPU time since\nit does deallocations.\nAnd if you compile with --enable-cassert, then remove_cache_entry\niterates through whole cache hashtable, therefore it reaches\nquadratic complexity easily (or more correct M*N, where M - size\nof a table, N - eviction count).\n\nregards,\nYura Sokolov\n\n\n", "msg_date": "Fri, 07 May 2021 22:06:40 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "\n\nOn 5/7/21 9:06 PM, Yura Sokolov wrote:\n> Pavel Stehule писал 2021-05-07 21:45:\n>>>\n>>> Samples: 119K of event 'cycles', 4000 Hz, Event count (approx.):\n>>> Overhead  Shared Object             Symbol\n>>> 79.20%  postgres                  [.] cache_reduce_memory\n>>> 1.94%  [kernel]                  [k] native_write_msr_safe\n>>> 1.63%  [kernel]                  [k] update_cfs_shares\n>>> 1.00%  [kernel]                  [k] trigger_load_balance\n>>> 0.97%  [kernel]                  [k] timerqueue_add\n>>> 0.51%  [kernel]                  [k] task_tick_fair\n>>> 0.51%  [kernel]                  [k] task_cputime\n>>> 0.50%  [kernel]                  [k] perf_event_task_tick\n>>> 0.50%  [kernel]                  [k] update_curr\n>>> 0.49%  [kernel]                  [k] hrtimer_active\n>>>\n>>> Regards\n>>>\n>>> Pavel\n> \n> It is strange to see cache_reduce_memory itself consumes a lot of CPU.\n> It doesn't contain CPU hungry code.\n> It calls prepare_probe_slot, that calls some tuple forming. Then\n> it calls resultcache_lookup that may call to ResultCacheHash_hash\n> and ResultCacheHash_equal. 
And finally it calls remove_cache_entry.\n> I suppose remove_cache_entry should consume most of CPU time since\n> it does deallocations.\n> And if you compile with --enable-cassert, then remove_cache_entry\n> iterates through whole cache hashtable, therefore it reaches\n> quadratic complexity easily (or more correct M*N, where M - size\n> of a table, N - eviction count).\n> \n\nYeah. I tried reproducing the issue, but without success ...\n\nNot sure what's wrong, but --enable-cassert is one option. Or maybe \nthere's some funny behavior due to collecting timing info?\n\nFWIW the timings on my laptop look like this:\n\nwork_mem=40MB 5065ms\nwork_mem=10MB 5104ms\nresultcache=off 13453ms\n\nSo a very different behavior from what Pavel reported. But if I rebuild \nwith casserts, I get the same massive slowdown, so I guess that's it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 May 2021 21:17:30 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "pá 7. 5. 2021 v 21:06 odesílatel Yura Sokolov <y.sokolov@postgrespro.ru>\nnapsal:\n\n> Pavel Stehule писал 2021-05-07 21:45:\n> >>\n> >> Samples: 119K of event 'cycles', 4000 Hz, Event count (approx.):\n> >> Overhead Shared Object Symbol\n> >> 79.20% postgres [.] 
cache_reduce_memory\n> >> 1.94% [kernel] [k] native_write_msr_safe\n> >> 1.63% [kernel] [k] update_cfs_shares\n> >> 1.00% [kernel] [k] trigger_load_balance\n> >> 0.97% [kernel] [k] timerqueue_add\n> >> 0.51% [kernel] [k] task_tick_fair\n> >> 0.51% [kernel] [k] task_cputime\n> >> 0.50% [kernel] [k] perf_event_task_tick\n> >> 0.50% [kernel] [k] update_curr\n> >> 0.49% [kernel] [k] hrtimer_active\n> >>\n> >> Regards\n> >>\n> >> Pavel\n>\n> It is strange to see cache_reduce_memory itself consumes a lot of CPU.\n> It doesn't contain CPU hungry code.\n> It calls prepare_probe_slot, that calls some tuple forming. Then\n> it calls resultcache_lookup that may call to ResultCacheHash_hash\n> and ResultCacheHash_equal. And finally it calls remove_cache_entry.\n> I suppose remove_cache_entry should consume most of CPU time since\n> it does deallocations.\n> And if you compile with --enable-cassert, then remove_cache_entry\n> iterates through whole cache hashtable, therefore it reaches\n> quadratic complexity easily (or more correct M*N, where M - size\n> of a table, N - eviction count).\n>\n\nyes, the slowdown is related to debug assertions\n\nPavel\n\n\n> regards,\n> Yura Sokolov\n>\n", "msg_date": "Fri, 7 May 2021 21:18:11 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, {
"msg_contents": "On Sat, 8 May 2021 at 07:18, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> yes, the slowdown is related to debug assertions\n\nWith USE_ASSERT_CHECKING builds, I did add some code that verifies the\nmemory tracking is set correctly when evicting from the cache. 
This\ncode is pretty expensive as it loops over the entire cache to check\nthe memory accounting every time we evict something from the cache.\nOriginally, I had this code only run when some other constant was\ndefined, but I ended up changing it to compile that code in for all\nassert enabled builds.\n\nI considered that it might be too expensive as you can see from the\ncomment in [1]. I just wanted to get a few machines other than my own\nto verify that the memory accounting code was working as expected.\nThere have been no complaints of any Assert failures yet, so maybe\nit's safe to consider either removing the code entirely or just having\nit run when some other more specific to purpose constant is defined.\nIf we did the latter, then I'd have concerns that nothing would ever\nrun the code to check the memory accounting, that's why I ended up\nchanging it to run with USE_ASSERT_CHECKING builds.\n\nDavid\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/executor/nodeResultCache.c#L305\n\n\n", "msg_date": "Sat, 8 May 2021 07:56:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "pá 7. 5. 2021 v 21:56 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Sat, 8 May 2021 at 07:18, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > yes, the slowdown is related to debug assertions\n>\n> With USE_ASSERT_CHECKING builds, I did add some code that verifies the\n> memory tracking is set correctly when evicting from the cache. 
This\n> code is pretty expensive as it loops over the entire cache to check\n> the memory accounting every time we evict something from the cache.\n> Originally, I had this code only run when some other constant was\n> defined, but I ended up changing it to compile that code in for all\n> assert enabled builds.\n>\n> I considered that it might be too expensive as you can see from the\n> comment in [1]. I just wanted to get a few machines other than my own\n> to verify that the memory accounting code was working as expected.\n> There have been no complaints of any Assert failures yet, so maybe\n> it's safe to consider either removing the code entirely or just having\n> it run when some other more specific to purpose constant is defined.\n> If we did the latter, then I'd have concerns that nothing would ever\n> run the code to check the memory accounting, that's why I ended up\n> changing it to run with USE_ASSERT_CHECKING builds.\n>\n\nI understand. I think this is too slow for generic assertions, because the\noverhead is about 50x.\n\nBut I understand, so it may be necessary to have this code active some time.\n\nRegards\n\nPavel\n\n\n> David\n>\n> [1]\n> https://github.com/postgres/postgres/blob/master/src/backend/executor/nodeResultCache.c#L305\n>\n\npá 7. 5. 2021 v 21:56 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:On Sat, 8 May 2021 at 07:18, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> yes, the slowdown is related to debug assertions\n\nWith USE_ASSERT_CHECKING builds, I did add some code that verifies the\nmemory tracking is set correctly when evicting from the cache. 
This\ncode is pretty expensive as it loops over the entire cache to check\nthe memory accounting every time we evict something from the cache.\nOriginally, I had this code only run when some other constant was\ndefined, but I ended up changing it to compile that code in for all\nassert enabled builds.\n\nI considered that it might be too expensive as you can see from the\ncomment in [1].  I just wanted to get a few machines other than my own\nto verify that the memory accounting code was working as expected.\nThere have been no complaints of any Assert failures yet, so maybe\nit's safe to consider either removing the code entirely or just having\nit run when some other more specific to purpose constant is defined.\nIf we did the latter, then I'd have concerns that nothing would ever\nrun the code to check the memory accounting, that's why I ended up\nchanging it to run with USE_ASSERT_CHECKING builds.I understand. I think this is too slow for generic assertions, because the overhead is about 50x.  But I understand, so it may be necessary to have this code active some time.RegardsPavel\n\nDavid\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/executor/nodeResultCache.c#L305", "msg_date": "Fri, 7 May 2021 22:18:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sat, 8 May 2021 at 08:18, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> pá 7. 5. 2021 v 21:56 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n>> With USE_ASSERT_CHECKING builds, I did add some code that verifies the\n>> memory tracking is set correctly when evicting from the cache. 
This\n>> code is pretty expensive as it loops over the entire cache to check\n>> the memory accounting every time we evict something from the cache.\n>> Originally, I had this code only run when some other constant was\n>> defined, but I ended up changing it to compile that code in for all\n>> assert enabled builds.\n>>\n>> I considered that it might be too expensive as you can see from the\n>> comment in [1]. I just wanted to get a few machines other than my own\n>> to verify that the memory accounting code was working as expected.\n>> There have been no complaints of any Assert failures yet, so maybe\n>> it's safe to consider either removing the code entirely or just having\n>> it run when some other more specific to purpose constant is defined.\n>> If we did the latter, then I'd have concerns that nothing would ever\n>> run the code to check the memory accounting, that's why I ended up\n>> changing it to run with USE_ASSERT_CHECKING builds.\n>\n>\n> I understand. I think this is too slow for generic assertions, because the overhead is about 50x.\n\nI didn't expect it would show up quite that much. If you scaled the\ntest up a bit more and increased work_mem further, then it would be\neven more than 50x.\n\nAt one point when I was developing the patch, I had two high water\nmarks for cache memory. When we reached the upper of the two marks,\nI'd reduce the memory down to the lower of two marks. The lower of\nthe two marks was set to 98% of the higher mark. In the end, I got\nrid of that as I didn't really see what extra overhead there was from\njust running the eviction code every time we require another byte.\nHowever, if we did have that again, then the memory checking could\njust be done when we run the eviction code. We'd then need to consume\nthat 2% more memory before it would run again.\n\nMy current thinking is that I don't really want to add that complexity\njust for some Assert code. 
I'd only want to do it if there was another\nvalid reason to.\n\nAnother thought I have is that maybe it would be ok just to move\nmemory accounting debug code so it only runs once in\nExecEndResultCache. I'm struggling to imagine that if the memory\ntracking did go out of whack, that the problem would have accidentally\nfixed itself by the time we got to ExecEndResultCache(). I guess even\nif the accounting was counting far too much memory and we'd evicted\neverything from the cache to try and get the memory usage down, we'd\nstill find the problem during ExecEndResultCache(), even if the cache\nhad become completely empty as a result.\n\nDavid", "msg_date": "Sat, 8 May 2021 09:04:35 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On 5/7/21 11:04 PM, David Rowley wrote:\n> On Sat, 8 May 2021 at 08:18, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>\n>> On Fri, 7 May 2021 at 21:56, David Rowley <dgrowleyml@gmail.com> wrote:\n>>> With USE_ASSERT_CHECKING builds, I did add some code that verifies the\n>>> memory tracking is set correctly when evicting from the cache. This\n>>> code is pretty expensive as it loops over the entire cache to check\n>>> the memory accounting every time we evict something from the cache.\n>>> Originally, I had this code only run when some other constant was\n>>> defined, but I ended up changing it to compile that code in for all\n>>> assert enabled builds.\n>>>\n>>> I considered that it might be too expensive as you can see from the\n>>> comment in [1].
I just wanted to get a few machines other than my own\n>>> to verify that the memory accounting code was working as expected.\n>>> There have been no complaints of any Assert failures yet, so maybe\n>>> it's safe to consider either removing the code entirely or just having\n>>> it run when some other more specific to purpose constant is defined.\n>>> If we did the latter, then I'd have concerns that nothing would ever\n>>> run the code to check the memory accounting, that's why I ended up\n>>> changing it to run with USE_ASSERT_CHECKING builds.\n>>\n>>\n>> I understand. I think this is too slow for generic assertions, because the overhead is about 50x.\n> \n> I didn't expect it would show up quite that much. If you scaled the\n> test up a bit more and increased work_mem further, then it would be\n> even more than 50x.\n> \n> At one point when I was developing the patch, I had two high water\n> marks for cache memory. When we reached the upper of the two marks,\n> I'd reduce the memory down to the lower of two marks. The lower of\n> the two marks was set to 98% of the higher mark. In the end, I got\n> rid of that as I didn't really see what extra overhead there was from\n> just running the eviction code every time we require another byte.\n> However, if we did have that again, then the memory checking could\n> just be done when we run the eviction code. We'd then need to consume\n> that 2% more memory before it would run again.\n> \n> My current thinking is that I don't really want to add that complexity\n> just for some Assert code. I'd only want to do it if there was another\n> valid reason to.\n> \n\nAgreed. I think this approach to eviction (i.e. evicting more than you \nneed) would be useful if the actual eviction code was expensive, and \ndoing it in a \"batch\" would make it significantly cheaper. 
But I don't \nthink \"asserts are expensive\" is a good reason for it.\n\n> Another thought I have is that maybe it would be ok just to move\n> memory accounting debug code so it only runs once in\n> ExecEndResultCache. I'm struggling to imagine that if the memory\n> tracking did go out of whack, that the problem would have accidentally\n> fixed itself by the time we got to ExecEndResultCache(). I guess even\n> if the accounting was counting far too much memory and we'd evicted\n> everything from the cache to try and get the memory usage down, we'd\n> still find the problem during ExecEndResultCache(), even if the cache\n> had become completely empty as a result.\n> \n\nI don't think postponing the debug code until much later is a great \nidea. When something goes wrong it's good to know ASAP, otherwise it's \nmuch more difficult to identify the issue.\n\nNot sure we need to do something here - for regression tests this is not \nan issue, because those generally work with small data sets.
And if you \nrun with asserts on large amounts of data, I think this is acceptable.\n\nI had the same dilemma with the new BRIN index opclasses, which also \nhave some extensive and expensive assert checks - for the regression \ntests that's fine, and it proved very useful during development.\n\nI have considered enabling those extra checks only on request somehow, \nbut I'd bet no one would do that and I'd forget it exists pretty soon.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 May 2021 23:16:54 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sat, 8 May 2021 at 09:16, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/7/21 11:04 PM, David Rowley wrote:\n> > Another thought I have is that maybe it would be ok just to move\n> > memory accounting debug code so it only runs once in\n> > ExecEndResultCache. I struggling to imagine that if the memory\n> > tracking did go out of whack, that the problem would have accidentally\n> > fixed itself by the time we got to ExecEndResultCache(). I guess even\n> > if the accounting was counting far too much memory and we'd evicted\n> > everything from the cache to try and get the memory usage down, we'd\n> > still find the problem during ExecEndResultCache(), even if the cache\n> > had become completely empty as a result.\n> >\n>\n> I don't think postponing the debug code until much later is a great\n> idea. When something goes wrong it's good to know ASAP, otherwise it's\n> much more difficult to identify the issue.\n\nI thought about this a bit and I was about to agree, but then I changed my mind.\n\nThe biggest concern I have on this topic is that we end up with zero\nvalidation done for the memory accounting. 
If we can find a cheaper\nplace to put the Asserts that will at least bring our attention to the\nfact that some problem exists, then more investigation can ensue. I\ndon't personally expect that every assert failure will show us the\nexact location of the bug.\n\nAdditionally, I don't really think there is a huge amount of room for\nbugs creeping in here as there's not all that many places that the\n'mem_used' field gets updated, so there are not many places to forget\nto do it.\n\nAnother way to look at this is that, where the Asserts are today,\nthere are zero memory accounting checks done in all cases that don't\nevict any tuples. I feel by moving the tests to ExecEndResultCache()\nwe'll do memory validation for all plans using a Result Cache in\nAssert builds. Yes, we might just need to do a bit more work to find\nout exactly where the problem is, but some investigation would need to\nhappen anyway. I think if anyone changes anything which breaks the\nmemory accounting then they'll be aware of it quite quickly and they\ncan just look at what they did wrong.\n\nDavid", "msg_date": "Sat, 8 May 2021 14:26:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sat, May 08, 2021 at 02:26:44PM +1200, David Rowley wrote:\n> On Sat, 8 May 2021 at 09:16, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> > On 5/7/21 11:04 PM, David Rowley wrote:\n> > > Another thought I have is that maybe it would be ok just to move\n> > > memory accounting debug code so it only runs once in\n> > > ExecEndResultCache. I struggling to imagine that if the memory\n> > > tracking did go out of whack, that the problem would have accidentally\n> > > fixed itself by the time we got to ExecEndResultCache(). 
I guess even\n> > > if the accounting was counting far too much memory and we'd evicted\n> > > everything from the cache to try and get the memory usage down, we'd\n> > > still find the problem during ExecEndResultCache(), even if the cache\n> > > had become completely empty as a result.\n> >\n> > I don't think postponing the debug code until much later is a great\n> > idea. When something goes wrong it's good to know ASAP, otherwise it's\n> > much more difficult to identify the issue.\n> \n> I thought about this a bit and I was about to agree, but then I changed my mind.\n\n> Yes, we might just need to do a bit more work to find\n> out exactly where the problem is, but some investigation would need to\n> happen anyway. I think if anyone changes anything which breaks the\n> memory accounting then they'll be aware of it quite quickly and they\n> can just look at what they did wrong.\n\nYou could put this into a separate function called by ExecEndResultCache().\nThen anyone that breaks the memory accounting can also call the function in the\nplaces they changed to help figure out what they broke.\n\n- * Validate the memory accounting code is correct in assert builds. XXX is\n- * this too expensive for USE_ASSERT_CHECKING?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 7 May 2021 21:43:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sat, 8 May 2021 at 14:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> You could put this into a separate function called by ExecEndResultCache().\n> Then anyone that breaks the memory accounting can also call the function in the\n> places they changed to help figure out what they broke.\n\nI almost did it that way and left a call to it in remove_cache_entry()\n#ifdef'd out. 
However, as mentioned, I'm more concerned about the\naccounting being broken and left broken than I am with making it take\na little less time to find the exact place to fix the breakage. If\nthe breakage was to occur when adding a new entry to the cache then it\nmight not narrow it down much if we just give users an easy way to\ncheck the memory accounting during only evictions. The only way to\nhighlight the problem as soon as it occurs would be to validate the\nmemory tracking every time the mem_used field is changed. I think that\nwould be overkill.\n\nI also find it hard to imagine what other reasons we'll have in the\nfuture to adjust 'mem_used'. At the moment there are 4 places. Two\nthat add bytes and two that subtract bytes. They're all hidden inside\nreusable functions that are in charge of adding and removing entries\nfrom the cache.\n\nDavid\n\n\n", "msg_date": "Sat, 8 May 2021 15:26:57 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "\n\n8 мая 2021 г. 00:16:54 GMT+03:00, Tomas Vondra <tomas.vondra@enterprisedb.com> пишет:\n>On 5/7/21 11:04 PM, David Rowley wrote:\n>> On Sat, 8 May 2021 at 08:18, Pavel Stehule <pavel.stehule@gmail.com>\n>wrote:\n>>>\n>>> pá 7. 5. 2021 v 21:56 odesílatel David Rowley <dgrowleyml@gmail.com>\n>napsal:\n>>>> With USE_ASSERT_CHECKING builds, I did add some code that verifies\n>the\n>>>> memory tracking is set correctly when evicting from the cache. This\n>>>> code is pretty expensive as it loops over the entire cache to check\n>>>> the memory accounting every time we evict something from the cache.\n>>>> Originally, I had this code only run when some other constant was\n>>>> defined, but I ended up changing it to compile that code in for all\n>>>> assert enabled builds.\n>>>>\n>>>> I considered that it might be too expensive as you can see from the\n>>>> comment in [1]. 
I just wanted to get a few machines other than my\n>own\n>>>> to verify that the memory accounting code was working as expected.\n>>>> There have been no complaints of any Assert failures yet, so maybe\n>>>> it's safe to consider either removing the code entirely or just\n>having\n>>>> it run when some other more specific to purpose constant is\n>defined.\n>>>> If we did the latter, then I'd have concerns that nothing would\n>ever\n>>>> run the code to check the memory accounting, that's why I ended up\n>>>> changing it to run with USE_ASSERT_CHECKING builds.\n>>>\n>>>\n>>> I understand. I think this is too slow for generic assertions,\n>because the overhead is about 50x.\n>> \n>> I didn't expect it would show up quite that much. If you scaled the\n>> test up a bit more and increased work_mem further, then it would be\n>> even more than 50x.\n>> \n>> At one point when I was developing the patch, I had two high water\n>> marks for cache memory. When we reached the upper of the two marks,\n>> I'd reduce the memory down to the lower of two marks. The lower of\n>> the two marks was set to 98% of the higher mark. In the end, I got\n>> rid of that as I didn't really see what extra overhead there was from\n>> just running the eviction code every time we require another byte.\n>> However, if we did have that again, then the memory checking could\n>> just be done when we run the eviction code. We'd then need to consume\n>> that 2% more memory before it would run again.\n>> \n>> My current thinking is that I don't really want to add that\n>complexity\n>> just for some Assert code. I'd only want to do it if there was\n>another\n>> valid reason to.\n>> \n>\n>Agreed. I think this approach to eviction (i.e. evicting more than you \n>need) would be useful if the actual eviction code was expensive, and \n>doing it in a \"batch\" would make it significantly cheaper. 
But I don't \n>think \"asserts are expensive\" is a good reason for it.\n>\n>> Another thought I have is that maybe it would be ok just to move\n>> memory accounting debug code so it only runs once in\n>> ExecEndResultCache. I struggling to imagine that if the memory\n>> tracking did go out of whack, that the problem would have\n>accidentally\n>> fixed itself by the time we got to ExecEndResultCache(). I guess\n>even\n>> if the accounting was counting far too much memory and we'd evicted\n>> everything from the cache to try and get the memory usage down, we'd\n>> still find the problem during ExecEndResultCache(), even if the cache\n>> had become completely empty as a result.\n>> \n>\n>I don't think postponing the debug code until much later is a great \n>idea. When something goes wrong it's good to know ASAP, otherwise it's \n>much more difficult to identify the issue.\n>\n>Not sure we need to do something here - for regression tests this is\n>not \n>an issue, because those generally work with small data sets. And if you\n>\n>run with asserts on large amounts of data, I think this is acceptable.\n>\n>I had the same dilemma with the new BRIN index opclasses, which also \n>have some extensive and expensive assert checks - for the regression \n>tests that's fine, and it proved very useful during development.\n>\n>I have considered enabling those extra checks only on request somehow, \n>but I'd bet no one would do that and I'd forget it exists pretty soon.\n>\n>\n>regards\n\nPerhaps there is need for flag \"heavy asserts\". Or option \"--enable-cassert=heavy\". 
Then USE_ASSERT_CHECKING could be defined to integer 1 or 2 depending on \"heaviness\" of enabled checks.\n\nregards\nYura Sokolov\n\n\n", "msg_date": "Sat, 08 May 2021 10:03:35 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sat, 8 May 2021 at 19:03, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> Perhaps there is need for flag \"heavy asserts\". Or option \"--enable-cassert=heavy\". Then USE_ASSERT_CHECKING could be defined to integer 1 or 2 depending on \"heaviness\" of enabled checks.\n\nI'd rather explore all other options before we went and did something\nlike that. The reason I feel this way is; a couple of jobs ago, I\nworked on a project that had quite a large test suite. In order to\nrun the tests on a typical dev machine took about 16 hours. When\nwriting new tests, nobody really ever gave much thought to how long\ntests would take to run. The time it took had already grown to\nsomething much longer than was practical to run before a commit.\nInstead, we had a test farm. We'd commit to master (yuck) and the test\nfarm would run the tests. Unfortunately, the farm was not big enough\nto run every test on every commit, so if it encountered a test failure\n(as it commonly did) it tried to bisect and find the first breaking\ncommit for that test then alert the person (probably the next day).\nbisecting works out pretty badly when someone commits a build breaking\ncommit.\n\nNow that I work on PostgreSQL, I very much like the fact that I can\nrun make check-world -j and see very quickly that no tests fail. I'd\nrather not have something like heavy asserts as I believe it would\njust become a dumping ground for badly thought through Asserts that\nmost likely could be done in some better way that didn't require so\nmuch effort. 
(I'll put mine in that category) It might not take long\nbefore most committers opted to run make check-world without the heavy\nasserts because it became just too slow. When that happens, it means\nthe buildfarm is left to catch these errors and most likely the\nfeedback will take much longer to get back to the committer due to the\nanimals running these flags also having to churn through the slow\ntests.\n\nSo, I'm -1 for \"heavy asserts\".\n\nDavid\n\n\n", "msg_date": "Sat, 8 May 2021 19:39:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "\n\nOn 5/8/21 9:39 AM, David Rowley wrote:\n> On Sat, 8 May 2021 at 19:03, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>> Perhaps there is need for flag \"heavy asserts\". Or option \"--enable-cassert=heavy\". Then USE_ASSERT_CHECKING could be defined to integer 1 or 2 depending on \"heaviness\" of enabled checks.\n> \n> I'd rather explore all other options before we went and did something\n> like that. The reason I feel this way is; a couple of jobs ago, I\n> worked on a project that had quite a large test suite. In order to\n> run the tests on a typical dev machine took about 16 hours. When\n> writing new tests, nobody really ever gave much thought to how long\n> tests would take to run. The time it took had already grown to\n> something much longer than was practical to run before a commit.\n> Instead, we had a test farm. We'd commit to master (yuck) and the test\n> farm would run the tests. 
Unfortunately, the farm was not big enough\n> to run every test on every commit, so if it encountered a test failure\n> (as it commonly did) it tried to bisect and find the first breaking\n> commit for that test then alert the person (probably the next day).\n> bisecting works out pretty badly when someone commits a build breaking\n> commit.\n> \n> Now that I work on PostgreSQL, I very much like the fact that I can\n> run make check-world -j and see very quickly that no tests fail. I'd\n> rather not have something like heavy asserts as I believe it would\n> just become a dumping ground for badly thought through Asserts that\n> most likely could be done in some better way that didn't require so\n> much effort. (I'll put mine in that category) It might not take long\n> before most committers opted to run make check-world without the heavy\n> asserts because it became just too slow. When that happens, it means\n> the buildfarm is left to catch these errors and most likely the\n> feedback will take much longer to get back to the committer due to the\n> animals running these flags also having to churn through the slow\n> tests.\n> \n> So, I'm -1 for \"heavy asserts\".\n> \n\nYeah, -1 from me to.\n\nBut I think we should differentiate between asserts that (a) \nsignificantly affect \"our\" regression tests with limited amounts of \ndata, and (b) asserts that are an issue for large data sets. 
The asserts \ndiscussed in this thread are in (b), but not (a).\n\nI don't know if there's a better way to do these tests, but if there's \nnot I'd not worry about it too much for now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 8 May 2021 12:33:41 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sat, 8 May 2021 at 22:33, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> I don't know if there's a better way to do these tests, but if there's\n> not I'd not worry about it too much for now.\n\nSo you're -1 on the proposed patch?\n\nDavid\n\n\n", "msg_date": "Sat, 8 May 2021 23:27:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "\nOn 5/8/21 1:27 PM, David Rowley wrote:\n> On Sat, 8 May 2021 at 22:33, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> I don't know if there's a better way to do these tests, but if there's\n>> not I'd not worry about it too much for now.\n> \n> So you're -1 on the proposed patch?\n>\n\nOh! I hadn't noticed there was a patch. No, I'm not against the patch, \nif it improves the testing (and you're right it's better to do some \ntesting even without evictions).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 8 May 2021 14:07:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "so 8. 5.
2021 v 9:39 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Sat, 8 May 2021 at 19:03, Yura Sokolov <y.sokolov@postgrespro.ru>\n> wrote:\n> > Perhaps there is need for flag \"heavy asserts\". Or option\n> \"--enable-cassert=heavy\". Then USE_ASSERT_CHECKING could be defined to\n> integer 1 or 2 depending on \"heaviness\" of enabled checks.\n>\n> I'd rather explore all other options before we went and did something\n> like that. The reason I feel this way is; a couple of jobs ago, I\n> worked on a project that had quite a large test suite. In order to\n> run the tests on a typical dev machine took about 16 hours. When\n> writing new tests, nobody really ever gave much thought to how long\n> tests would take to run. The time it took had already grown to\n> something much longer than was practical to run before a commit.\n> Instead, we had a test farm. We'd commit to master (yuck) and the test\n> farm would run the tests. Unfortunately, the farm was not big enough\n> to run every test on every commit, so if it encountered a test failure\n> (as it commonly did) it tried to bisect and find the first breaking\n> commit for that test then alert the person (probably the next day).\n> bisecting works out pretty badly when someone commits a build breaking\n> commit.\n>\n> Now that I work on PostgreSQL, I very much like the fact that I can\n> run make check-world -j and see very quickly that no tests fail. I'd\n> rather not have something like heavy asserts as I believe it would\n> just become a dumping ground for badly thought through Asserts that\n> most likely could be done in some better way that didn't require so\n> much effort. (I'll put mine in that category) It might not take long\n> before most committers opted to run make check-world without the heavy\n> asserts because it became just too slow. 
When that happens, it means\n> the buildfarm is left to catch these errors and most likely the\n> feedback will take much longer to get back to the committer due to the\n> animals running these flags also having to churn through the slow\n> tests.\n>\n> So, I'm -1 for \"heavy asserts\".\n>\n\nPersonally, I have not problem with too slow assertions, although it is not\ntoo practical. The main problem is some shock, and feeling so some is\nwrong. I spent 1 hour detecting if it is a bug or not.\n\nCan it be possible to identify this situation?\n\nMaybe use some specific name of this routine - like\n\nassert_only_check_xxxx\n\nThen I can see this warning in perf, and I don't need to do other or deeper\nchecks\n\nRegards\n\nPavel\n\n\n> David\n>\n", "msg_date": "Sat, 8 May 2021 17:29:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> so 8. 5. 
2021 v 9:39 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n>> So, I'm -1 for \"heavy asserts\".\n\n> Personally, I have not problem with too slow assertions, although it is not\n> too practical.\n\nI'm very much on David's side here. I'm currently trying to figure out\nwhy CLOBBER_CACHE_ALWAYS tests have gotten something like 10% slower\nin the past six weeks or so --- if you look at hyrax for example,\nit finished \"make check\" in 12:52:44 in its last fully-green run\nback in mid-March, while the latest report shows 14:08:18. I think\nsomething needs to be done about that. It's bad enough that no\ndevelopers will run cache-clobber tests by hand; but if it takes\na week to get buildfarm feedback, you've got exactly the same\nproblems David describes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 May 2021 13:21:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sun, 9 May 2021 at 00:07, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>\n> On 5/8/21 1:27 PM, David Rowley wrote:\n> > On Sat, 8 May 2021 at 22:33, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >> I don't know if there's a better way to do these tests, but if there's\n> >> not I'd not worry about it too much for now.\n> >\n> > So you're -1 on the proposed patch?\n> >\n>\n> Oh! I have not noticed there was a patch. 
No, I'm not against the patch,\n> if it improves the testing (and you're right it's better to do some\n> testing even without evictions).\n\nThere seems to be plenty of other ideas floating around, but since\nnobody seems to be objecting to reducing the Assert overheads, I've\npushed the proposed patch.\n\nThe more I thought about this, the more I think I should have done it\nthis way in the first place.\n\nDavid\n\n\n", "msg_date": "Sun, 9 May 2021 11:45:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "On Sun, 9 May 2021 at 03:29, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Personally, I have not problem with too slow assertions, although it is not too practical. The main problem is some shock, and feeling so some is wrong. I spent 1 hour detecting if it is a bug or not.\n\nThanks for spending the time figuring out where the slowness came from.\n\n> Can it be possible to identify this situation?\n>\n> Maybe use some specific name of this routine - like\n>\n> assert_only_check_xxxx\n>\n> Then I can see this warning in perf, and I don't need to do other or deeper checks\n\nI don't think we need to go around leaving clues for people who run\nperf on cassert builds. I think anyone doing that should just never\nexpect any meaningful results.\n\nDavid\n\n\n", "msg_date": "Sun, 9 May 2021 13:01:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "ne 9. 5. 2021 v 3:01 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Sun, 9 May 2021 at 03:29, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Personally, I have not problem with too slow assertions, although it is\n> not too practical. The main problem is some shock, and feeling so some is\n> wrong. 
I spent 1 hour detecting if it is a bug or not.\n>\n> Thanks for spending the time figuring out where the slowness came from.\n>\n> > Can it be possible to identify this situation?\n> >\n> > Maybe use some specific name of this routine - like\n> >\n> > assert_only_check_xxxx\n> >\n> > Then I can see this warning in perf, and I don't need to do other or\n> deeper checks\n>\n> I don't think we need to go around leaving clues for people who run\n> perf on cassert builds. I think anyone doing that should just never\n> expect any meaningful results.\n>\n\nok\n\nPavel\n\n>\n> David\n>\n", "msg_date": "Sun, 9 May 2021 06:07:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "David Rowley wrote on 2021-05-09 04:01:\n> On Sun, 9 May 2021 at 03:29, Pavel Stehule <pavel.stehule@gmail.com> \n> wrote:\n>> Personally, I have not problem with too slow assertions, although it \n>> is not too practical. The main problem is some shock, and feeling so \n>> some is wrong. 
I spent 1 hour detecting if it is a bug or not.\n> \n> Thanks for spending the time figuring out where the slowness came from.\n> \n>> Can it be possible to identify this situation?\n>> \n>> Maybe use some specific name of this routine - like\n>> \n>> assert_only_check_xxxx\n>> \n>> Then I can see this warning in perf, and I don't need to do other or \n>> deeper checks\n> \n> I don't think we need to go around leaving clues for people who run\n> perf on cassert builds. I think anyone doing that should just never\n> expect any meaningful results.\n\nOccasionally there is a need to run cassert builds in production to\ncatch an issue. It is usually ok if cassert build O(1) slower than\noptimized biuld (ie it is slower in some constant factor C). But\nif cassert build will be quadratically slower, it will unusable.\n\nregards,\nYura\n\n\n", "msg_date": "Sun, 09 May 2021 15:57:22 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" }, { "msg_contents": "Hi,\n\nOn 2021-05-09 15:57:22 +0300, Yura Sokolov wrote:\n> Occasionally there is a need to run cassert builds in production to\n> catch an issue. It is usually ok if cassert build O(1) slower than\n> optimized biuld (ie it is slower in some constant factor C). But\n> if cassert build will be quadratically slower, it will unusable.\n\nThe memory context assertion overhead is more than O(1) expensive. I\nthink there's plenty other cases like it. We removed some (e.g. it used\nto be that we scanned O(#shared_buffers) entries in the local pin table,\nat the end of the transaction). I don't think we want to limit ourselves\nto O(1) checks. 
That's not to say we should have a O(n^2) or such,\nunless we have confidence n rarely will be big.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 9 May 2021 12:04:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: plan with result cache is very slow when work_mem is not enough" } ]
[ { "msg_contents": "The JSON doc has this example (to show the need for double backslash):\n\n$ ? (@ like_regex \"^\\\\d+$\")\n\n\nThe example is not wrong exactly, and can be cast to jsonpath, but as-is \ncan never match anything.\n\nI think it'd be helpful to provide that example so that it more probably \nmatches when the user does a quick trial.\n\nLlet's change it to something like:\n\n$.* ? (@ like_regex \"^\\\\d+$\")\n\n\nPatch attached.\n\nthanks,\n\nErik Rijkers", "msg_date": "Fri, 7 May 2021 22:18:44 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "JSON doc example (matchiness)" }, { "msg_contents": "On Fri, May 07, 2021 at 10:18:44PM +0200, Erik Rijkers wrote:\n> \n> The JSON doc has this example (to show the need for double backslash):\n> \n> $ ? (@ like_regex \"^\\\\d+$\")\n> \n> \n> The example is not wrong exactly, and can be cast to jsonpath, but as-is can\n> never match anything.\n> \n> I think it'd be helpful to provide that example so that it more probably\n> matches when the user does a quick trial.\n> \n> Llet's change it to something like:\n> \n> $.* ? (@ like_regex \"^\\\\d+$\")\n\nAh, I see. What you are telling here is that we match the regex on\nthe full JSON string, which is pretty useless, and you are suggesting\nto change things so as we'd match with the key names at the first\nlevel. Makes sense.\n\nThis paragraph of the docs say:\n\"For example, to match strings that contain only digits\"\nCould we be more precise here? 
\"strings\" looks to much generic to\nme in this context when actually referring to a set of path of keys in\na JSON blob.\n--\nMichael", "msg_date": "Sat, 8 May 2021 10:48:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: JSON doc example (matchiness)" }, { "msg_contents": "\nOn 5/8/21 3:48 AM, Michael Paquier wrote:\n> On Fri, May 07, 2021 at 10:18:44PM +0200, Erik Rijkers wrote:\n>> The JSON doc has this example (to show the need for double backslash):\n>>\n>> $ ? (@ like_regex \"^\\\\d+$\")\n>>\n>>\n>> The example is not wrong exactly, and can be cast to jsonpath, but as-is can\n>> never match anything.\n>>\n>> I think it'd be helpful to provide that example so that it more probably\n>> matches when the user does a quick trial.\n>>\n>> Llet's change it to something like:\n>>\n>> $.* ? (@ like_regex \"^\\\\d+$\")\n> Ah, I see. What you are telling here is that we match the regex on\n> the full JSON string, which is pretty useless, and you are suggesting\n> to change things so as we'd match with the key names at the first\n> level. Makes sense.\n>\n> This paragraph of the docs say:\n> \"For example, to match strings that contain only digits\"\n> Could we be more precise here? \"strings\" looks to much generic to\n> me in this context when actually referring to a set of path of keys in\n> a JSON blob.\n\nYes, \"string values\" is probably another small improvement.\n\n\n> --\n> Michael", "msg_date": "Sat, 8 May 2021 18:08:51 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: JSON doc example (matchiness)" }, { "msg_contents": "On Sat, May 8, 2021 at 7:09 PM Erik Rijkers <er@xs4all.nl> wrote:\n> On 5/8/21 3:48 AM, Michael Paquier wrote:\n> > On Fri, May 07, 2021 at 10:18:44PM +0200, Erik Rijkers wrote:\n> >> The JSON doc has this example (to show the need for double backslash):\n> >>\n> >> $ ? 
(@ like_regex \"^\\\\d+$\")\n> >>\n> >>\n> >> The example is not wrong exactly, and can be cast to jsonpath, but as-is can\n> >> never match anything.\n> >>\n> >> I think it'd be helpful to provide that example so that it more probably\n> >> matches when the user does a quick trial.\n> >>\n> >> Llet's change it to something like:\n> >>\n> >> $.* ? (@ like_regex \"^\\\\d+$\")\n> > Ah, I see. What you are telling here is that we match the regex on\n> > the full JSON string, which is pretty useless, and you are suggesting\n> > to change things so as we'd match with the key names at the first\n> > level. Makes sense.\n> >\n> > This paragraph of the docs say:\n> > \"For example, to match strings that contain only digits\"\n> > Could we be more precise here? \"strings\" looks to much generic to\n> > me in this context when actually referring to a set of path of keys in\n> > a JSON blob.\n>\n> Yes, \"string values\" is probably another small improvement.\n\nWhat about the attached patch? Wording \"string values of the root\nobject\" seems most precise to me.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 9 May 2021 04:01:47 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON doc example (matchiness)" }, { "msg_contents": "On Sun, May 9, 2021 at 4:01 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sat, May 8, 2021 at 7:09 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > On 5/8/21 3:48 AM, Michael Paquier wrote:\n> > > On Fri, May 07, 2021 at 10:18:44PM +0200, Erik Rijkers wrote:\n> > >> The JSON doc has this example (to show the need for double backslash):\n> > >>\n> > >> $ ? 
(@ like_regex \"^\\\\d+$\")\n> > >>\n> > >>\n> > >> The example is not wrong exactly, and can be cast to jsonpath, but as-is can\n> > >> never match anything.\n> > >>\n> > >> I think it'd be helpful to provide that example so that it more probably\n> > >> matches when the user does a quick trial.\n> > >>\n> > >> Llet's change it to something like:\n> > >>\n> > >> $.* ? (@ like_regex \"^\\\\d+$\")\n> > > Ah, I see. What you are telling here is that we match the regex on\n> > > the full JSON string, which is pretty useless, and you are suggesting\n> > > to change things so as we'd match with the key names at the first\n> > > level. Makes sense.\n> > >\n> > > This paragraph of the docs say:\n> > > \"For example, to match strings that contain only digits\"\n> > > Could we be more precise here? \"strings\" looks to much generic to\n> > > me in this context when actually referring to a set of path of keys in\n> > > a JSON blob.\n> >\n> > Yes, \"string values\" is probably another small improvement.\n>\n> What about the attached patch? Wording \"string values of the root\n> object\" seems most precise to me.\n\nI propose backpatching this to 12 when jsonpath was introduced. It\nseems useful to have this docs improvement every release supporting\njsonpath.\n\nObjections?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 9 May 2021 23:17:47 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON doc example (matchiness)" }, { "msg_contents": "On Sun, May 09, 2021 at 11:17:47PM +0300, Alexander Korotkov wrote:\n> I propose backpatching this to 12 when jsonpath was introduced. It\n> seems useful to have this docs improvement every release supporting\n> jsonpath.\n> \n> Objections?\n\nNone. 
Backpatching that is a good idea, and your suggested rewording\nis an improvement.\n--\nMichael", "msg_date": "Mon, 10 May 2021 09:41:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: JSON doc example (matchiness)" }, { "msg_contents": "On Mon, May 10, 2021 at 3:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, May 09, 2021 at 11:17:47PM +0300, Alexander Korotkov wrote:\n> > I propose backpatching this to 12 when jsonpath was introduced. It\n> > seems useful to have this docs improvement every release supporting\n> > jsonpath.\n> >\n> > Objections?\n>\n> None. Backpatching that is a good idea, and your suggested rewording\n> is an improvement.\n\nPushed, thanks!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 13 May 2021 16:24:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSON doc example (matchiness)" } ]
[ { "msg_contents": "Dear Postgresql community:\nWe are very new users of the Postgresql databases!\nWe have noticed that there are separate Postgres installation files for different versions of the same O/S such as Redhat Linux (one Postgres install file for RHEL 6, one Postgres install file for RHEL 7, one Postgres install file for RHEL 8, etc.)  PostgreSQL: Linux downloads (Red Hat family)\n\nQuestion:  will Postgres databases (such as Postgres12 originally installed on a RHEL 6 server) continue to function properly after the server get O/S upgrade to RHEL 7??\n\n\nThank you very much,\n-Peter", "msg_date": "Fri, 7 May 2021 23:22:22 +0000 (UTC)", "msg_from": "Peter Lee <peterlee3672@yahoo.com>", "msg_from_op": true, "msg_subject": "Will Postgres12 installed on a RHEL 6 server continue to function\n after the server get O/S upgrade to RHEL 7?" }, { "msg_contents": "Peter Lee <peterlee3672@yahoo.com> writes:\n> Question:  will Postgres databases (such as Postgres12 originally installed on a RHEL 6 server) continue to function properly after the server get O/S upgrade to RHEL 7??\n\nPossibly, but why don't you do the OS upgrade first? 
RHEL6 has been\nEOL for some time, so installing new functionality on a server that's\nstill running that doesn't seem like prudent sysadmin-ship.\n\nThe main hazard you'd be looking at if you do it in the other order\nis that RHEL7 might have different locale sorting behavior, resulting\nin effectively-corrupt indexes on text columns. For more info about\nthat see\n\nhttps://wiki.postgresql.org/wiki/Locale_data_changes\n\nThat page claims that only the de_DE.UTF-8 locale changed in RHEL7,\nbut I don't know that I'd risk money on that being true.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 May 2021 15:52:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Will Postgres12 installed on a RHEL 6 server continue to function\n after the server get O/S upgrade to RHEL 7?" }, { "msg_contents": "Hi Tom,\nYes, I totally agree with you that the OS upgrade should take place first before we install Postgres.\nIt is just that Postgres was already installed on a RHEL6 server before my time.  \nWhat preparations do you recommend us to do to minimize the possible hazard after the OS upgrade?\nThank you very much in advance,\n-Peter\n\n\n On Saturday, May 8, 2021 at 12:52:29 PM [PDT], Tom Lane<tgl@sss.pgh.pa.us> wrote: \n \n Peter Lee <peterlee3672@yahoo.com> writes:\n> Question:  will Postgres databases (such as Postgres12 originally installed on a RHEL 6 server) continue to function properly after the server get O/S upgrade to RHEL 7??\n\nPossibly, but why don't you do the OS upgrade first?  RHEL6 has been\nEOL for some time, so installing new functionality on a server that's\nstill running that doesn't seem like prudent sysadmin-ship.\n\nThe main hazard you'd be looking at if you do it in the other order\nis that RHEL7 might have different locale sorting behavior, resulting\nin effectively-corrupt indexes on text columns.  For more info about\nthat see\n\nhttps://wiki.postgresql.org/wiki/Locale_data_changes\n\nThat page claims that only the de_DE.UTF-8 locale changed in RHEL7,\nbut I don't know that I'd risk money on that being true.\n\n            regards, tom lane\n \n", "msg_date": "Sat, 8 May 2021 23:35:33 +0000 (UTC)", "msg_from": "Peter Lee <peterlee3672@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Will Postgres12 installed on a RHEL 6 server continue to\n function after the server get O/S upgrade to RHEL 7?" } ]
[ { "msg_contents": "Hi,\n\nWhen testing the fdw batch insert, I found a possible issue.\n\nIf I set the batch_size to 0 , it will throw an error:\n\n---------------------\nCREATE FOREIGN TABLE test(a int, b varchar)\n SERVER testserver\n OPTIONS (table_name 'testlocal', batch_size '0');\nERROR: fetch_size requires a non-negative integer value\n---------------------\n\nThe error message here seems not accurate, because\nI can see from the code batch_size should be positive ( > 0).\n\nSo, is it better to change the error message to “fetch_size requires a positive integer value” ?\nI also found fetch_size has the similar issue, attaching a patch to fix this.\n\nBest regards,\nhouzj", "msg_date": "Sat, 8 May 2021 03:38:51 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Sat, May 8, 2021 at 9:09 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> The error message here seems not accurate, because\n>\n> I can see from the code batch_size should be positive ( > 0).\n>\n\n> So, is it better to change the error message to “fetch_size requires a positive integer value” ?\n>\n> I also found fetch_size has the similar issue, attaching a patch to fix this.\n\nYes, it should be a positive integer, so your change makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 8 May 2021 11:22:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> \n> So, is it better to change the error message to “fetch_size requires a positive integer value” ?\n> I also found fetch_size has the similar issue, attaching a patch to fix this.\n\nI have a faint memory that I fixed them after receiving the same feedback 
from someone else, strange... Anyway, thanks.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Mon, 10 May 2021 01:26:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "\n\nOn 2021/05/10 10:26, tsunakawa.takay@fujitsu.com wrote:\n> From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\n>> So, is it better to change the error message to “fetch_size requires a positive integer value” ?\n>> I also found fetch_size has the similar issue, attaching a patch to fix this.\n> \n> I have a faint memory that I fixed them after receiving the same feedback from someone else, strange... Anyway, thanks.\n\n+1 for the change of the error messages.\n\nOne question is; should we back-patch the change of the error message about\nfetch_size to back branches? Since this is minor thing, is it enough to apply\nthe change only to the master? 
Even if we should do the back-patch,\n> we would need to wait until upcoming minor release is done before doing that.\n\n+1 for back-patch, but not till after the releases are out. Right now\nis no time for inessential changes ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 02:30:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, May 10, 2021 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > +1 for the change of the error messages.\n>\n> Yeah, this error message seems outright buggy. However, it's a minor\n> matter. Also, some people think \"positive\" is the same thing as\n> \"non-negative\", so maybe we need less ambiguous wording?\n\nSince value 0 can't be considered as either a positive or negative\ninteger, I think we can do as following(roughly):\n\nif (value < 0) \"requires a zero or positive integer value\"\nif (value <= 0) \"requires a positive integer value\"\n\nI'm not sure whether we should consider changing these messages:\nremainder for hash partition must be a non-negative integer\nparallel vacuum degree must be a non-negative integer\nrepeat count size must be a non-negative integer\nnumber of workers must be a non-negative integer\n%s requires a non-negative numeric value\ndistance in phrase operator should be non-negative and less than %d\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 14:08:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Mon, May 10, 2021 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, this error message seems outright 
buggy. However, it's a minor\n>> matter. Also, some people think \"positive\" is the same thing as\n>> \"non-negative\", so maybe we need less ambiguous wording?\n\n> Since value 0 can't be considered as either a positive or negative\n> integer, I think we can do as following(roughly):\n\n> if (value < 0) \"requires a zero or positive integer value\"\n> if (value <= 0) \"requires a positive integer value\"\n\nI was thinking of avoiding the passive voice and writing\n\n\t\"foo must be greater than zero\"\n\nwhich removes all doubt. It's not necessary to keep the \"integer\"\naspect of the existing text, because if someone had supplied a\nnon-integer value, that would not have gotten this far anyway.\n\n> I'm not sure whether we should consider changing these messages:\n> remainder for hash partition must be a non-negative integer\n> parallel vacuum degree must be a non-negative integer\n> repeat count size must be a non-negative integer\n> number of workers must be a non-negative integer\n> %s requires a non-negative numeric value\n> distance in phrase operator should be non-negative and less than %d\n\nI think for consistency it'd be good to change 'em all. 
I'm almost\ntempted to put this matter into our message style guide too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 10:09:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, May 10, 2021 at 10:09:40AM -0400, Tom Lane wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> if (value < 0) \"requires a zero or positive integer value\"\n>> if (value <= 0) \"requires a positive integer value\"\n> \n> I was thinking of avoiding the passive voice and writing\n> \n> \t\"foo must be greater than zero\"\n\nSounds like a good idea to me.\n\n>> I'm not sure whether we should consider changing these messages:\n>> remainder for hash partition must be a non-negative integer\n>> parallel vacuum degree must be a non-negative integer\n>> repeat count size must be a non-negative integer\n>> number of workers must be a non-negative integer\n>> %s requires a non-negative numeric value\n>> distance in phrase operator should be non-negative and less than %d\n> \n> I think for consistency it'd be good to change 'em all. I'm almost\n> tempted to put this matter into our message style guide too.\n\n+1.\n--\nMichael", "msg_date": "Tue, 11 May 2021 14:44:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, May 10, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Mon, May 10, 2021 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Yeah, this error message seems outright buggy. However, it's a minor\n> >> matter. 
Also, some people think \"positive\" is the same thing as\n> >> \"non-negative\", so maybe we need less ambiguous wording?\n>\n> > Since value 0 can't be considered as either a positive or negative\n> > integer, I think we can do as following(roughly):\n>\n> > if (value < 0) \"requires a zero or positive integer value\"\n> > if (value <= 0) \"requires a positive integer value\"\n>\n> I was thinking of avoiding the passive voice and writing\n>\n> \"foo must be greater than zero\"\n\n+1 for \"foo must be greater than zero\" if (foo <= 0) kind of errors.\nBut, we also have some values for which zero is accepted, see below\nerror messages. How about the error message \"foo must be greater than\nor equal to zero\"?\n\nremainder for hash partition must be a non-negative integer\nparallel vacuum degree must be a non-negative integer\nrepeat count size must be a non-negative integer\nnumber of workers must be a non-negative integer\ndistance in phrase operator should be non-negative and less than %d\n\n> which removes all doubt. It's not necessary to keep the \"integer\"\n> aspect of the existing text, because if someone had supplied a\n> non-integer value, that would not have gotten this far anyway.\n\nThis led me to have a look at two postgres_fdw options: fetch_size and\nbatch_size, whether they accept positive non-integers like '123.456',\n'789.123' and some unsound strings such as '100$%$#$#', '9,223,372,'.\nIt looks like yes, the truncated values 123, 789, 100, 9\n(respectively) are accepted. This is because of the way strtol is used to\nfetch the integers from the string. I'm not sure if that's intentional.\nfetch_size = strtol(defGetString(def), NULL, 10);\nbatch_size = strtol(defGetString(def), NULL, 10);\n\nI know that fetch_size and batch_size are \"number of rows\", so no\nsensible users may specify values with fractional part or non-integer\ncharacters, but still we can fix this with the endptr parameter of\nthe strtol. 
Note that for the options fdw_startup_cost and\nfdw_tuple_cost it's already fixed.\n\nI'm thinking of starting a separate thread to discuss this, if this\nthread is not the right place.\n\n> > I'm not sure whether we should consider changing these messages:\n> > remainder for hash partition must be a non-negative integer\n> > parallel vacuum degree must be a non-negative integer\n> > repeat count size must be a non-negative integer\n> > number of workers must be a non-negative integer\n> > %s requires a non-negative numeric value\n> > distance in phrase operator should be non-negative and less than %d\n>\n> I think for consistency it'd be good to change 'em all.\n\n+1.\n\n> I'm almost tempted to put this matter into our message style guide too.\n\n+1.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 11:28:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Tue, May 11, 2021 at 11:28 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 10, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > On Mon, May 10, 2021 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Yeah, this error message seems outright buggy. However, it's a minor\n> > >> matter. 
Also, some people think \"positive\" is the same thing as\n> > >> \"non-negative\", so maybe we need less ambiguous wording?\n> >\n> > > Since value 0 can't be considered as either a positive or negative\n> > > integer, I think we can do as following(roughly):\n> >\n> > > if (value < 0) \"requires a zero or positive integer value\"\n> > > if (value <= 0) \"requires a positive integer value\"\n> >\n> > I was thinking of avoiding the passive voice and writing\n> >\n> > \"foo must be greater than zero\"\n>\n> +1 for \"foo must be greater than zero\" if (foo <= 0) kind of errors.\n> But, we also have some values for which zero is accepted, see below\n> error messages. How about the error message \"foo must be greater than\n> or equal to zero\"?\n>\n\n+1 for your proposed message for the cases where we have a check if\n(foo < 0). Tom, Michael, do you see any problem with the proposed\nmessage? We would like to make a similar change at another place [1]\nso wanted to be consistent.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWGB9oHCR5ygkc8u6_QDqecObf9j2MxtOgsjZMMKsLj%3DQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 May 2021 16:23:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, May 17, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 11:28 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, May 10, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > > On Mon, May 10, 2021 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >> Yeah, this error message seems outright buggy. However, it's a minor\n> > > >> matter. 
Also, some people think \"positive\" is the same thing as\n> > > >> \"non-negative\", so maybe we need less ambiguous wording?\n> > >\n> > > > Since value 0 can't be considered as either a positive or negative\n> > > > integer, I think we can do as following(roughly):\n> > >\n> > > > if (value < 0) \"requires a zero or positive integer value\"\n> > > > if (value <= 0) \"requires a positive integer value\"\n> > >\n> > > I was thinking of avoiding the passive voice and writing\n> > >\n> > > \"foo must be greater than zero\"\n> >\n> > +1 for \"foo must be greater than zero\" if (foo <= 0) kind of errors.\n> > But, we also have some values for which zero is accepted, see below\n> > error messages. How about the error message \"foo must be greater than\n> > or equal to zero\"?\n> >\n>\n> +1 for your proposed message for the cases where we have a check if\n> (foo < 0). Tom, Michael, do you see any problem with the proposed\n> message? We would like to make a similar change at another place [1]\n> so wanted to be consistent.\n>\n> [1] - https://www.postgresql.org/message-id/CALj2ACWGB9oHCR5ygkc8u6_QDqecObf9j2MxtOgsjZMMKsLj%3DQ%40mail.gmail.com\n\nThanks all for your inputs. 
PSA v2 patch that uses the new convention.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 19 May 2021 16:31:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "\n\nOn 2021/05/19 20:01, Bharath Rupireddy wrote:\n> On Mon, May 17, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, May 11, 2021 at 11:28 AM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>\n>>> On Mon, May 10, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>\n>>>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>>>>> On Mon, May 10, 2021 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>>> Yeah, this error message seems outright buggy. However, it's a minor\n>>>>>> matter. Also, some people think \"positive\" is the same thing as\n>>>>>> \"non-negative\", so maybe we need less ambiguous wording?\n>>>>\n>>>>> Since value 0 can't be considered as either a positive or negative\n>>>>> integer, I think we can do as following(roughly):\n>>>>\n>>>>> if (value < 0) \"requires a zero or positive integer value\"\n>>>>> if (value <= 0) \"requires a positive integer value\"\n>>>>\n>>>> I was thinking of avoiding the passive voice and writing\n>>>>\n>>>> \"foo must be greater than zero\"\n>>>\n>>> +1 for \"foo must be greater than zero\" if (foo <= 0) kind of errors.\n>>> But, we also have some values for which zero is accepted, see below\n>>> error messages. How about the error message \"foo must be greater than\n>>> or equal to zero\"?\n>>>\n>>\n>> +1 for your proposed message for the cases where we have a check if\n>> (foo < 0). Tom, Michael, do you see any problem with the proposed\n>> message? 
We would like to make a similar change at another place [1]\n>> so wanted to be consistent.\n>>\n>> [1] - https://www.postgresql.org/message-id/CALj2ACWGB9oHCR5ygkc8u6_QDqecObf9j2MxtOgsjZMMKsLj%3DQ%40mail.gmail.com\n> \n> Thanks all for your inputs. PSA v2 patch that uses the new convention.\n\nThanks for the patch!\n\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n-\t\t\t\t\t\t errmsg(\"%s requires a non-negative numeric value\",\n+\t\t\t\t\t\t errmsg(\"%s must be greater than or equal to zero\",\n \t\t\t\t\t\t\t\tdef->defname)));\n \t\t}\n \t\telse if (strcmp(def->defname, \"extensions\") == 0)\n@@ -142,7 +142,7 @@ postgres_fdw_validator(PG_FUNCTION_ARGS)\n \t\t\tif (fetch_size <= 0)\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n-\t\t\t\t\t\t errmsg(\"%s requires a non-negative integer value\",\n+\t\t\t\t\t\t errmsg(\"%s must be greater than zero\",\n\nI'm fine to convert \"non-negative\" word to \"greater than\" or \"greater than\nor equal to\" in the messages. But this change also seems to get rid of\nthe information about the data type of the option from the message.\nI'm not sure if this is an improvement. 
Probably isn't it better to\nconvert \"requires a non-negative integer value\" to \"must be an integer value\ngreater than zero\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 19 May 2021 20:49:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Wed, May 19, 2021 at 5:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> - errmsg(\"%s requires a non-negative numeric value\",\n> + errmsg(\"%s must be greater than or equal to zero\",\n> def->defname)));\n> }\n> else if (strcmp(def->defname, \"extensions\") == 0)\n> @@ -142,7 +142,7 @@ postgres_fdw_validator(PG_FUNCTION_ARGS)\n> if (fetch_size <= 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> - errmsg(\"%s requires a non-negative integer value\",\n> + errmsg(\"%s must be greater than zero\",\n>\n> I'm fine to convert \"non-negative\" word to \"greater than\" or \"greater than\n> or equal to\" in the messages. But this change also seems to get rid of\n> the information about the data type of the option from the message.\n> I'm not sure if this is an improvement. Probably isn't it better to\n> convert \"requires a non-negative integer value\" to \"must be an integer value\n> greater than zero\"?\n\nThanks for the comments. Done that way. 
PSA v3 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 19 May 2021 21:48:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "At Wed, 19 May 2021 21:48:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, May 19, 2021 at 5:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > I'm fine to convert \"non-negative\" word to \"greater than\" or \"greater than\n> > or equal to\" in the messages. But this change also seems to get rid of\n> > the information about the data type of the option from the message.\n> > I'm not sure if this is an improvement. Probably isn't it better to\n> > convert \"requires a non-negative integer value\" to \"must be an integer value\n> > greater than zero\"?\n> \n> Thanks for the comments. Done that way. PSA v3 patch.\n\n--- a/src/backend/utils/adt/tsquery_op.c\n+++ b/src/backend/utils/adt/tsquery_op.c\n@@ -121,7 +121,7 @@ tsquery_phrase_distance(PG_FUNCTION_ARGS)\n \tif (distance < 0 || distance > MAXENTRYPOS)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n-\t\t\t\t errmsg(\"distance in phrase operator should be non-negative and less than %d\",\n+\t\t\t\t errmsg(\"distance in phrase operator must be an integer value greater than or equal to zero and less than %d\",\n \t\t\t\t\t\tMAXENTRYPOS)));\n\nThough it is not due to this patch, but the message looks wrong. The condition is suggesting:\n\n\"distance in phrase operator must be an integer value greater than or equal to zero and less than or equal to %d\"\n\nI'm not sure readers can read it without biting their tongue. 
How\nabout something like the following instead?\n\n\"distance in phrase operator must be an integer value between zero and\n %d inclusive.\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 May 2021 17:14:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Thu, May 20, 2021 at 1:44 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 19 May 2021 21:48:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Wed, May 19, 2021 at 5:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > I'm fine to convert \"non-negative\" word to \"greater than\" or \"greater than\n> > > or equal to\" in the messages. But this change also seems to get rid of\n> > > the information about the data type of the option from the message.\n> > > I'm not sure if this is an improvement. Probably isn't it better to\n> > > convert \"requires a non-negative integer value\" to \"must be an integer value\n> > > greater than zero\"?\n> >\n> > Thanks for the comments. Done that way. PSA v3 patch.\n>\n> --- a/src/backend/utils/adt/tsquery_op.c\n> +++ b/src/backend/utils/adt/tsquery_op.c\n> @@ -121,7 +121,7 @@ tsquery_phrase_distance(PG_FUNCTION_ARGS)\n> if (distance < 0 || distance > MAXENTRYPOS)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"distance in phrase operator should be non-negative and less than %d\",\n> + errmsg(\"distance in phrase operator must be an integer value greater than or equal to zero and less than %d\",\n> MAXENTRYPOS)));\n>\n> Though it is not due to this patch, but the message looks wrong. 
The condition is suggesting:\n>\n> \"distance in phrase operator must be an integer value greater than or equal to zero and less than or equal to %d\"\n>\n> I'm not sure readers can read it without biting their tongue. How\n> about something like the following instead?\n>\n> \"distance in phrase operator must be an integer value between zero and\n> %d inclusive.\"\n\nThanks. That looks better. PSA v4 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 20 May 2021 14:43:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Thu, May 20, 2021 at 2:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thanks. That looks better. PSA v4 patch.\n\nAttaching v5 patch rebased on latest master.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 May 2021 11:52:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "\n\nOn 2021/05/26 15:22, Bharath Rupireddy wrote:\n> On Thu, May 20, 2021 at 2:43 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> Thanks. That looks better. 
PSA v4 patch.\n> \n> Attaching v5 patch rebased on latest master.\n\nThe patch could not be applied cleanly because of recent commit d854720df6.\nCould you rebase the patch?\n\n- /* these must have a non-negative numeric value */\n+ /* these must have a positive numeric value */\n\nIsn't it better to replace this with \"these must have a floating point value\ngreater than or equal to zero\"?\n\n- errmsg(\"%s requires a non-negative numeric value\",\n+ errmsg(\"\\\"%s\\\" must be a numeric value greater than or equal to zero\",\n\n\"numeric\" should be \"floating point\"?\n\n+ <quote>foo must be a numeric value greater than zero</quote> or\n+ <quote>foo must be a numeric value greater than or equal to zero</quote>\n+ if option <quote>foo</quote> expects a numeric value\n\nMaybe this description about numeric value is redundant\nbecause there is already the description about integer value?\n\n-\t/* Number of workers should be non-negative. */\n\nIsn't it better to replace this with \"Number of workers should be greater than zero\"\nrather than removing the comment?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 9 Jul 2021 09:55:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Fri, Jul 9, 2021 at 6:25 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> The patch could not be applied cleanly because of recent commit d854720df6.\n> Could you rebase the patch?\n\nThanks. 
Done.\n\n> - /* these must have a non-negative numeric value */\n> + /* these must have a positive numeric value */\n>\n> Isn't it better to replace this with \"these must have a floating point value\n> greater than or equal to zero\"?\n\nChanged.\n\n> - errmsg(\"%s requires a non-negative numeric value\",\n> + errmsg(\"\\\"%s\\\" must be a numeric value greater than or equal to zero\",\n>\n> \"numeric\" should be \"floating point\"?\n\nChanged.\n\n> + <quote>foo must be a numeric value greater than zero</quote> or\n> + <quote>foo must be a numeric value greater than or equal to zero</quote>\n> + if option <quote>foo</quote> expects a numeric value\n>\n> Maybe this description about numeric value is redundant\n> because there is already the description about integer value?\n\nRemoved.\n\n> - /* Number of workers should be non-negative. */\n>\n> Isn't it better to replace this with \"Number of workers should be greater than zero\"\n> rather than removing the comment?\n\nChanged.\n\nPSA v6 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 9 Jul 2021 08:11:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "\n\nOn 2021/07/09 11:41, Bharath Rupireddy wrote:\n> PSA v6 patch.\n\nThanks for updating the patch!\n\n+ <simplesect>\n+ <title>Avoid Using <quote>non-negative</quote> Word in Error Messages</title>\n+\n+ <para>\n+ Do not use <quote>non-negative</quote> word in error messages as it looks\n+ ambiguous. Instead, use <quote>foo must be an integer value greater than zero</quote>\n+ or <quote>foo must be an integer value greater than or equal to zero</quote>\n+ if option <quote>foo</quote> expects an integer value.\n+ </para>\n+ </simplesect>\n\nIt seems suitable to put this guide under \"Tricky Words to Avoid\"\nrather than adding it as separate section. 
Thought?\n\n-\tif (nworkers < 1)\n+\tif (nworkers <= 0)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n-\t\t\t\t errmsg(\"number of workers must be a positive integer\")));\n+\t\t\t\t errmsg(\"number of workers must be an integer value greater than zero\")));\n\nYou replaced \"positive\" with \"greater than zero\". So the error message\nstyle guide should mention not only \"non-negative\" but also \"positive\"\n(probably also \"negative\") keyword?\n\nIf this is true, there are still many messages using \"positive\" or \"negative\"\nkeyword as follows. We should also fix them at all? Of course,\nwhich would increase the change too big unnecessarily, I'm afraid, though..\n\nsrc/backend/storage/ipc/signalfuncs.c:\t\t\t\t errmsg(\"\\\"timeout\\\" must not be negative\")));\nsrc/backend/commands/functioncmds.c:\t\t\t\t\t errmsg(\"COST must be positive\")));\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 13 Jul 2021 00:49:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, Jul 12, 2021 at 9:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> + <simplesect>\n> + <title>Avoid Using <quote>non-negative</quote> Word in Error Messages</title>\n> +\n> + <para>\n> + Do not use <quote>non-negative</quote> word in error messages as it looks\n> + ambiguous. Instead, use <quote>foo must be an integer value greater than zero</quote>\n> + or <quote>foo must be an integer value greater than or equal to zero</quote>\n> + if option <quote>foo</quote> expects an integer value.\n> + </para>\n> + </simplesect>\n>\n> It seems suitable to put this guide under \"Tricky Words to Avoid\"\n> rather than adding it as separate section. Thought?\n\n+1. 
I will change.\n\n> - if (nworkers < 1)\n> + if (nworkers <= 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"number of workers must be a positive integer\")));\n> + errmsg(\"number of workers must be an integer value greater than zero\")));\n>\n> You replaced \"positive\" with \"greater than zero\". So the error message\n> style guide should mention not only \"non-negative\" but also \"positive\"\n> (probably also \"negative\") keyword?\n\nThe main focus of the patch is to replace the ambiguous \"non-negative\"\nword in the error message. Let's keep it to that. However, I changed\nbelow two messages too to keep them in sync with nearby messages.\nAlso, there seems to be an ambiguity in treating 0 as a positive or\nnegative integer, I thought it makes sense to replace them. But, if\nothers don't agree, I'm happy to revert.\n\n- errmsg(\"modulus for hash partition must be a positive integer\")));\n+ errmsg(\"modulus for hash partition must be an integer value greater\nthan zero\")));\n- errmsg(\"number of workers must be a positive integer\")));\n+ errmsg(\"number of workers must be an integer value greater than zero\")));\n\n> If this is true, there are still many messages using \"positive\" or \"negative\"\n> keyword as follows. We should also fix them at all? Of course,\n> which would increase the change too big unnecessarily, I'm afraid, though..\n>\n> src/backend/storage/ipc/signalfuncs.c: errmsg(\"\\\"timeout\\\" must not be negative\")));\n> src/backend/commands/functioncmds.c: errmsg(\"COST must be positive\")));\n\nYou are right. The change is going to be an unnecessary one. 
So, let's\nnot do that.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 12 Jul 2021 22:11:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, Jul 12, 2021 at 10:11 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 9:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > + <simplesect>\n> > + <title>Avoid Using <quote>non-negative</quote> Word in Error Messages</title>\n> > +\n> > + <para>\n> > + Do not use <quote>non-negative</quote> word in error messages as it looks\n> > + ambiguous. Instead, use <quote>foo must be an integer value greater than zero</quote>\n> > + or <quote>foo must be an integer value greater than or equal to zero</quote>\n> > + if option <quote>foo</quote> expects an integer value.\n> > + </para>\n> > + </simplesect>\n> >\n> > It seems suitable to put this guide under \"Tricky Words to Avoid\"\n> > rather than adding it as separate section. Thought?\n>\n> +1. I will change.\n\nPSA v7 patch with the above change.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 15 Jul 2021 19:54:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Thu, Jul 15, 2021 at 7:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 10:11 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 9:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > + <simplesect>\n> > > + <title>Avoid Using <quote>non-negative</quote> Word in Error Messages</title>\n> > > +\n> > > + <para>\n> > > + Do not use <quote>non-negative</quote> word in error messages as it looks\n> > > + ambiguous. 
Instead, use <quote>foo must be an integer value greater than zero</quote>\n> > > + or <quote>foo must be an integer value greater than or equal to zero</quote>\n> > > + if option <quote>foo</quote> expects an integer value.\n> > > + </para>\n> > > + </simplesect>\n> > >\n> > > It seems suitable to put this guide under \"Tricky Words to Avoid\"\n> > > rather than adding it as separate section. Thought?\n> >\n> > +1. I will change.\n>\n> PSA v7 patch with the above change.\n\nPSA v8 patch rebased on to latest master.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 26 Jul 2021 10:26:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On 2021/07/26 13:56, Bharath Rupireddy wrote:\n> On Thu, Jul 15, 2021 at 7:54 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Mon, Jul 12, 2021 at 10:11 PM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>\n>>> On Mon, Jul 12, 2021 at 9:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> + <simplesect>\n>>>> + <title>Avoid Using <quote>non-negative</quote> Word in Error Messages</title>\n>>>> +\n>>>> + <para>\n>>>> + Do not use <quote>non-negative</quote> word in error messages as it looks\n>>>> + ambiguous. Instead, use <quote>foo must be an integer value greater than zero</quote>\n>>>> + or <quote>foo must be an integer value greater than or equal to zero</quote>\n>>>> + if option <quote>foo</quote> expects an integer value.\n>>>> + </para>\n>>>> + </simplesect>\n>>>>\n>>>> It seems suitable to put this guide under \"Tricky Words to Avoid\"\n>>>> rather than adding it as separate section. Thought?\n>>>\n>>> +1. 
I will change.\n>>\n>> PSA v7 patch with the above change.\n> \n> PSA v8 patch rebased on to latest master.\n\nThanks for updating the patch!\n\n+ <formalpara>\n+ <title>non-negative</title>\n+ <para>\n+ Do not use <quote>non-negative</quote> word in error messages as it looks\n+ ambiguous. Instead, use <quote>foo must be an integer value greater than\n+ zero</quote> or <quote>foo must be an integer value greater than or equal\n+ to zero</quote> if option <quote>foo</quote> expects an integer value.\n+ </para>\n+ </formalpara>\n\nThis description looks a bit redundant. And IMO it's better to also document how \"non-negative\" is ambiguous. So what about the following description, instead? I replaced this description with the following. Patch attached. I also uppercased the first character \"n\" of \"non-negative\" at the title for the sake of consistency with other items.\n\n+ <formalpara>\n+ <title>Non-negative</title>\n+ <para>\n+ Avoid <quote>non-negative</quote> as it is ambiguous\n+ about whether it accepts zero. It's better to use\n+ <quote>greater than zero</quote> or\n+ <quote>greater than or equal to zero</quote>.\n+ </para>\n+ </formalpara>\n\n\n-\t/* Number of workers should be non-negative. */\n+\t/* Number of parallel workers should be greater than zero. */\n \tAssert(nworkers >= 0);\n\nThis should be \"greater than or equal to zero\", instead? Anyway since this is comment not an error message, and also there are still other comments using \"non-negative\", I don't think we need to change only this comment for now. So I excluded this change from the patch. 
Maybe we can get rid of all \"non-negative\" from comments and documents later *if* necessary.\n\n\n-\t\t\t\t errmsg(\"repeat count size must be a non-negative integer\")));\n+\t\t\t\t errmsg(\"repeat count size must be greater than or equal to zero\")));\n\n-\t\t\t\t errmsg(\"number of workers must be a non-negative integer\")));\n+\t\t\t\t errmsg(\"number of workers must be greater than or equal to zero\")));\n\nIsn't it better to replace \"be greater\" with \"be an integer value greater\"? I applied this to the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 27 Jul 2021 01:07:28 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "On Mon, Jul 26, 2021 at 9:37 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> + <formalpara>\n> + <title>non-negative</title>\n> + <para>\n> + Do not use <quote>non-negative</quote> word in error messages as it looks\n> + ambiguous. Instead, use <quote>foo must be an integer value greater than\n> + zero</quote> or <quote>foo must be an integer value greater than or equal\n> + to zero</quote> if option <quote>foo</quote> expects an integer value.\n> + </para>\n> + </formalpara>\n>\n> This description looks a bit redundant. And IMO it's better to also document how \"non-negative\" is ambiguous. So what about the following description, instead? I replaced this description with the following. Patch attached. I also uppercased the first character \"n\" of \"non-negative\" at the title for the sake of consistency with other items.\n>\n> + <formalpara>\n> + <title>Non-negative</title>\n> + <para>\n> + Avoid <quote>non-negative</quote> as it is ambiguous\n> + about whether it accepts zero. 
It's better to use\n> + <quote>greater than zero</quote> or\n> + <quote>greater than or equal to zero</quote>.\n> + </para>\n> + </formalpara>\n\nLGTM.\n\n> - /* Number of workers should be non-negative. */\n> + /* Number of parallel workers should be greater than zero. */\n> Assert(nworkers >= 0);\n>\n> This should be \"greater than or equal to zero\", instead? Anyway since this is comment not an error message, and also there are still other comments using \"non-negative\", I don't think we need to change only this comment for now. So I excluded this change from the patch. Maybe we can get rid of all \"non-negative\" from comments and documents later *if* necessary.\n\n+1 to not change any code comments.\n\n> - errmsg(\"repeat count size must be a non-negative integer\")));\n> + errmsg(\"repeat count size must be greater than or equal to zero\")));\n>\n> - errmsg(\"number of workers must be a non-negative integer\")));\n> + errmsg(\"number of workers must be greater than or equal to zero\")));\n>\n> Isn't it better to replace \"be greater\" with \"be an integer value greater\"? I applied this to the patch.\n\n+1.\n\nThanks for the v8 patch, it LGTM.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 27 Jul 2021 11:36:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" }, { "msg_contents": "\n\nOn 2021/07/27 15:06, Bharath Rupireddy wrote:\n> Thanks for the v8 patch, it LGTM.\n\nPushed. Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Jul 2021 01:32:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate error message when set fdw batch_size to 0" } ]
[ { "msg_contents": "In a nearby thread I bemoaned the fact that the core regression tests\nseem to have gotten significantly slower in the last couple of months,\nat least with CCA enabled: hyrax reports completing them in 12:52:44\non 18 March, while its most recent run on 1 May took 14:08:18.\n\nTrying to diagnose the cause overall seemed a bit daunting, but\nI thought I'd dig into the opr_sanity test in particular, as it\nis one of the slower tests under CCA to start with and had also\nslowed down noticeably (from 3701581 ms to 4761183 ms, or 28%).\nI was able to complete a bisection using just that test, and\ngot an unexpected result: most of the slowdown appeared at\nab596105b (BRIN minmax-multi indexes). Apparently the additional\ntime is simply from having to check the additional pg_amop and\npg_amproc entries, which that patch added quite a few of.\n\nI noticed that all of the slowest queries in that test were dependent\non the binary_coercible() plpgsql function that it uses. Now, that\nfunction has always been a rather lame attempt to approximate the\nbehavior of the parser's IsBinaryCoercible() function, so I've been\nthinking for some time that we ought to get rid of it in favor of\nactually using IsBinaryCoercible(). I tried that, by adding a\nshim function to regress.c, and got a most gratifying result:\non my machine opr_sanity's runtime with\ndebug_invalidate_system_caches_always = 1 drops from\n29m9s to 3m19s. Without CCA the speedup is far less impressive,\n360ms to 305ms, but that's still useful. 
Especially since this\nmakes the test strictly more accurate.\n\n(I am thinking that this suggests that plpgsql may be hurt more\nby cache clobbers than it really needs to be; but doing anything\nabout that would require some research.)\n\nAnyway, I propose that we ought to sneak this into HEAD, since\nit's only touching test code and not anything production-critical.\n\nThe patch is a bit more invasive than I would have liked, because\nadding the SQL definition of binary_coercible() to create_function_1\n(where the other regress.c functions are declared) didn't work:\nthat runs after opr_sanity, and just moving it up to before\nopr_sanity causes the latter to complain about some of the functions\nin it. So I ended up splitting the create_function_1 test into\ncreate_function_0 and create_function_1. It's annoying from a\nparallelism standpoint that create_function_0 runs by itself, but\nthe two parallel groups ahead of it are already full. Maybe we\nshould rebalance that by moving a few of those tests to run in\nparallel with create_function_0, but I didn't do that here.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 08 May 2021 15:44:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "On Sat, May 08, 2021 at 03:44:57PM -0400, Tom Lane wrote:\n> I tried that, by adding a\n> shim function to regress.c, and got a most gratifying result:\n> on my machine opr_sanity's runtime with\n> debug_invalidate_system_caches_always = 1 drops from\n> 29m9s to 3m19s. Without CCA the speedup is far less impressive,\n> 360ms to 305ms, but that's still useful. 
Especially since this\n> makes the test strictly more accurate.\n\nThe speedup is quite welcome and still impressive in both cases.\n\n> Anyway, I propose that we ought to sneak this into HEAD, since\n> it's only touching test code and not anything production-critical.\n\n+1 for pushing it in HEAD.\n\nLooking at the patch, explicitly_binary_coercible wasn't used since\ne9f42d529f990f94e1b7bdcec4a1111465c85326 (and was renamed there too). Just to\nbe sure, is it ok to remove it, as it was described as\n\n> --- We don't currently use this for any tests in this file, but it is a\n> --- reasonable alternative definition for some scenarios.\n\nIt would still be in the git history in needed, so I'm not objecting.\n\n\n", "msg_date": "Sun, 9 May 2021 21:53:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "\nOn 5/8/21 3:44 PM, Tom Lane wrote:\n> Anyway, I propose that we ought to sneak this into HEAD, since\n> it's only touching test code and not anything production-critical.\n>\n> The patch is a bit more invasive than I would have liked, because\n> adding the SQL definition of binary_coercible() to create_function_1\n> (where the other regress.c functions are declared) didn't work:\n> that runs after opr_sanity, and just moving it up to before\n> opr_sanity causes the latter to complain about some of the functions\n> in it. So I ended up splitting the create_function_1 test into\n> create_function_0 and create_function_1. It's annoying from a\n> parallelism standpoint that create_function_0 runs by itself, but\n> the two parallel groups ahead of it are already full. 
Maybe we\n> should rebalance that by moving a few of those tests to run in\n> parallel with create_function_0, but I didn't do that here.\n>\n> Thoughts?\n\n\n+1 for doing it now.\n\n\nYou could possibly just move \"inet macaddr macaddr8 \" to the following\ngroup and so have room for create_function_0. I just tried that and it\nseemed happy.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 9 May 2021 10:30:06 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> Looking at the patch, explicitly_binary_coercible wasn't used since\n> e9f42d529f990f94e1b7bdcec4a1111465c85326 (and was renamed there too). Just to\n> be sure, is it ok to remove it, as it was described as\n\n>> --- We don't currently use this for any tests in this file, but it is a\n>> --- reasonable alternative definition for some scenarios.\n\n> It would still be in the git history in needed, so I'm not objecting.\n\nIt's my own comment, so it doesn't scare me particularly ;-).\nI think that\n\n(a) it's unlikely we'll ever again need that old physically-coercible\ncheck. That was a hangover from Berkeley-era type cheats, and I think\nour standards are higher now. 
If somebody submits a patch that would\ndepend on such a cheat, I think our response would be \"fix the patch\",\nnot \"it's okay to weaken the type-matching checks\".\n\n(b) if we did need it, we'd probably want an implementation like this\none (ie invoke some C code), both for speed and because it's hard to\nmake a plpgsql function's behavior match the C code's exactly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 May 2021 13:01:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "On Sun, May 09, 2021 at 01:01:38PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > Looking at the patch, explicitly_binary_coercible wasn't used since\n> > e9f42d529f990f94e1b7bdcec4a1111465c85326 (and was renamed there too). Just to\n> > be sure, is it ok to remove it, as it was described as\n> \n> >> --- We don't currently use this for any tests in this file, but it is a\n> >> --- reasonable alternative definition for some scenarios.\n> \n> > It would still be in the git history in needed, so I'm not objecting.\n> \n> It's my own comment, so it doesn't scare me particularly ;-).\n\nYes, I saw that when digging in git history :)\n\n> I think that\n> \n> (a) it's unlikely we'll ever again need that old physically-coercible\n> check. That was a hangover from Berkeley-era type cheats, and I think\n> our standards are higher now. If somebody submits a patch that would\n> depend on such a cheat, I think our response would be \"fix the patch\",\n> not \"it's okay to weaken the type-matching checks\".\n> \n> (b) if we did need it, we'd probably want an implementation like this\n> one (ie invoke some C code), both for speed and because it's hard to\n> make a plpgsql function's behavior match the C code's exactly.\n\nI quite agree with both.
As I said I just wanted to mention it for extra\nsafety.\n\n\n", "msg_date": "Mon, 10 May 2021 13:14:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Hi,\n\nOn 2021-05-08 15:44:57 -0400, Tom Lane wrote:\n> In a nearby thread I bemoaned the fact that the core regression tests\n> seem to have gotten significantly slower in the last couple of months,\n> at least with CCA enabled: hyrax reports completing them in 12:52:44\n> on 18 March, while its most recent run on 1 May took 14:08:18.\n>\n> Trying to diagnose the cause overall seemed a bit daunting, but\n> I thought I'd dig into the opr_sanity test in particular, as it\n> is one of the slower tests under CCA to start with and had also\n> slowed down noticeably (from 3701581 ms to 4761183 ms, or 28%).\n> I was able to complete a bisection using just that test, and\n> got an unexpected result: most of the slowdown appeared at\n> ab596105b (BRIN minmax-multi indexes). Apparently the additional\n> time is simply from having to check the additional pg_amop and\n> pg_amproc entries, which that patch added quite a few of.\n\nI suspect that it might be not just that. From a quick profile it looks\nlike debug_invalidate_system_caches_always spends a good chunk of its\ntime in ResetCatalogCache() and hash_seq_search(). Those cost linearly\nwith the size of the underlying hash tables.\n\nWo what what might be happening is that the additional catalog entries\npushed some of the catcache hash tables into growing\n(RehashCatCache()). Which then makes all subsequent ResetCatalogCache()\nscans slower.\n\nNot that that changes much - your proposed fix still seems reasonable.\n\n\n> I noticed that all of the slowest queries in that test were dependent\n> on the binary_coercible() plpgsql function that it uses. 
Now, that\n> function has always been a rather lame attempt to approximate the\n> behavior of the parser's IsBinaryCoercible() function, so I've been\n> thinking for some time that we ought to get rid of it in favor of\n> actually using IsBinaryCoercible(). I tried that, by adding a\n> shim function to regress.c, and got a most gratifying result:\n> on my machine opr_sanity's runtime with\n> debug_invalidate_system_caches_always = 1 drops from\n> 29m9s to 3m19s. Without CCA the speedup is far less impressive,\n> 360ms to 305ms, but that's still useful. Especially since this\n> makes the test strictly more accurate.\n\nCool!\n\n\n> Anyway, I propose that we ought to sneak this into HEAD, since\n> it's only touching test code and not anything production-critical.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 May 2021 10:44:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-08 15:44:57 -0400, Tom Lane wrote:\n>> I was able to complete a bisection using just that test, and\n>> got an unexpected result: most of the slowdown appeared at\n>> ab596105b (BRIN minmax-multi indexes). Apparently the additional\n>> time is simply from having to check the additional pg_amop and\n>> pg_amproc entries, which that patch added quite a few of.\n\n> I suspect that it might be not just that. From a quick profile it looks\n> like debug_invalidate_system_caches_always spends a good chunk of its\n> time in ResetCatalogCache() and hash_seq_search(). Those cost linearly\n> with the size of the underlying hash tables.\n> Wo what what might be happening is that the additional catalog entries\n> pushed some of the catcache hash tables into growing\n> (RehashCatCache()). Which then makes all subsequent ResetCatalogCache()\n> scans slower.\n\nHm.
But constantly flushing the caches should mean that they're never\npopulated with very many entries at one time, which ought to forestall\nthat, at least to some extent.\n\nI wonder if there's anything we could do to make ResetCatalogCache\nfaster? It wouldn't help much for normal execution of course,\nbut it might do something to bring CCA testing time down out of\nthe stratosphere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 14:06:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Hi,\n\nOn 2021-05-10 14:06:16 -0400, Tom Lane wrote:\n> Hm. But constantly flushing the caches should mean that they're never\n> populated with very many entries at one time, which ought to forestall\n> that, at least to some extent.\n\nThat's probably true...\n\n\n> I wonder if there's anything we could do to make ResetCatalogCache\n> faster?
It wouldn't help much for normal execution of course,\n>> but it might do something to bring CCA testing time down out of\n>> the stratosphere.\n\n> We could make the hashtables shrink, not just grow...\n\nMaybe ...\n\n> There's also the issue that most people, I assume, run CCA tests with -O0. In\n> a quick test that does make a big difference in e.g. ResetCatalogCache(). I\n> just added a function specific annotation to optimize just that function and\n> the overall time in my test shrank 10% or so.\n\nIf they do I think they're nuts ;-). CCA is slow enough already without\nhobbling it.\n\nhyrax appears to use the usual -O2, as does/did avocet. Not sure\nif we have any other CCA buildfarm members right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 14:38:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-10 14:06:16 -0400, Tom Lane wrote:\n>> I wonder if there's anything we could do to make ResetCatalogCache\n>> faster? It wouldn't help much for normal execution of course,\n>> but it might do something to bring CCA testing time down out of\n>> the stratosphere.\n\n> We could make the hashtables shrink, not just grow...\n\nI noticed that we already have counters that can tell whether a\ncatcache or dynahash table is empty, so I experimented with the\nattached patch. 
Testing one of the slow queries from privileges.sql\n(which might not be very representative of the overall issue),\nI see:\n\nHEAD:\nTime: 536429.715 ms (08:56.430)\n\nwith ResetCatalogCache hack:\nTime: 488938.597 ms (08:08.939)\n\nplus hash_seq_search hack:\nTime: 475400.634 ms (07:55.401)\n\nOf course, the issue with these patches is that they change these\ncounters from things that (I think) we only trust for statistical\npurposes into things that had better be right or you're going to\nhave impossible-to-track-down bugs with sometimes failing to\ninvalidate cache entries. My gut feeling is that the risk-to-reward\nratio is worth it for changing ResetCatalogCache, but not for\nhash_seq_search. This is partly because of the greater absolute\npayback and partly because ResetCatalogCache doesn't deal with\nshared data structures, reducing the risk of counting issues.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 10 May 2021 16:17:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Hi,\n\nOn 2021-05-10 16:17:18 -0400, Tom Lane wrote:\n> I noticed that we already have counters that can tell whether a\n> catcache or dynahash table is empty, so I experimented with the\n> attached patch. Testing one of the slow queries from privileges.sql\n> (which might not be very representative of the overall issue),\n> I see:\n> \n> HEAD:\n> Time: 536429.715 ms (08:56.430)\n> \n> with ResetCatalogCache hack:\n> Time: 488938.597 ms (08:08.939)\n> \n> plus hash_seq_search hack:\n> Time: 475400.634 ms (07:55.401)\n\nOh, nice.\n\nPerhaps we generally ought to lower the initial sycache sizes further?\n20cb18db4668 did that, but we still have things like PROCNAMEARGNSP,\nPROCOID, RELNAMENSP, RELOID, STATRELATTINH, ... using 128 as the initial\nsize. 
Not hard to imagine that some of these are larger than what simple\nworkloads or CCA encounter.\n\n\n> Of course, the issue with these patches is that they change these\n> counters from things that (I think) we only trust for statistical\n> purposes into things that had better be right or you're going to\n> have impossible-to-track-down bugs with sometimes failing to\n> invalidate cache entries. My gut feeling is that the risk-to-reward\n> ratio is worth it for changing ResetCatalogCache, but not for\n> hash_seq_search. This is partly because of the greater absolute\n> payback and partly because ResetCatalogCache doesn't deal with\n> shared data structures, reducing the risk of counting issues.\n\nThat sounds reasonable. We could mitigate the risk for dynahash by\ntesting HASH_SHARED_MEM (which we don't store right now), but it's not\nclear it's worth it here. But I wonder if there's other cases where it\ncould help? If we did make the check support shared memory *and*\npartitioned tables, I could easily see it be a win for things like\nLockReleaseAll().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 May 2021 13:52:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "On Tue, May 11, 2021 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:\n> ... 
If we did make the check support shared memory *and*\n> partitioned tables, I could easily see it be a win for things like\n> LockReleaseAll().\n\nFor that case, has the idea of maintaining a dlist of local locks been\nconsidered?\n\n\n", "msg_date": "Tue, 11 May 2021 10:57:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 10:57:03 +1200, Thomas Munro wrote:\n> On Tue, May 11, 2021 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > ... If we did make the check support shared memory *and*\n> > partitioned tables, I could easily see it be a win for things like\n> > LockReleaseAll().\n\nErrr, that's not even a shared hashtable... So it would help even if we\njust excluded shared memory hashtables.\n\n\n> For that case, has the idea of maintaining a dlist of local locks been\n> considered?\n\nYea, there's been a long discussion on that for\nLockReleaseAll(). Combined with alternatives around shrinking the hashtable...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 May 2021 16:17:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> +1 for doing it now.\n\nPushed.\n\n> You could possibly just move \"inet macaddr macaddr8 \" to the following\n> group and so have room for create_function_0. I just tried that and it\n> seemed happy.\n\nI decided that the minimum change would be to push tstypes to the\nfollowing group, so I did it like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 14:30:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reducing opr_sanity test's runtime under CLOBBER_CACHE_ALWAYS" } ]
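The optimization benchmarked in the thread above — skip the full reset scan whenever an already-maintained entry counter says the structure is empty — can be sketched in miniature. This is a toy model (in Python, with invented names, not the actual catcache.c change) of the pattern:

```python
class ToyCache:
    """Bucketed cache that maintains a live-entry counter so that
    reset() can return immediately, without scanning every bucket,
    when nothing is cached -- the common case under constant
    cache-clobbering."""

    def __init__(self, nbuckets=128):
        self.buckets = [[] for _ in range(nbuckets)]
        self.members = 0          # must stay exact, or resets may miss entries
        self.scans_skipped = 0    # instrumentation for the short-circuit

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))
        self.members += 1

    def reset(self):
        if self.members == 0:     # short-circuit: nothing to invalidate
            self.scans_skipped += 1
            return
        for bucket in self.buckets:
            bucket.clear()
        self.members = 0
```

As the thread notes, the trade-off is that the counter stops being merely statistical: if it ever undercounts, reset() will silently leave stale entries behind.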
[ { "msg_contents": "Hi,\n\nI have attached the draft for the 2021-05-13 cumulative update release\nannouncement.\n\nPlease review for inaccuracies, typos, or issues that you think should\nbe highlighted in the announcement that were not included in the list.\n\nPlease provide feedback no later than Wednesday, May 12, 2021 AoE[1].\n\nThanks!\n\nJonathan\n\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Sat, 8 May 2021 17:53:20 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2021-05-13 release announcement draft" }, { "msg_contents": "On Sat, May 08, 2021 at 05:53:20PM -0400, Jonathan S. Katz wrote:\n> I have attached the draft for the 2021-05-13 cumulative update release\n> announcement.\n\nLooks good.\n\n\n", "msg_date": "Sun, 9 May 2021 00:47:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: 2021-05-13 release announcement draft" } ]
[ { "msg_contents": "Hi,\n\nSince seawasp's bleeding-edge clang moved to \"20210226\", it failed\nevery run except 4, and a couple of days ago it moved to \"20210508\"\nand it's still broken. It's always like this:\n\n2021-05-09 03:31:37.602 CEST [1678796:171] pg_regress/_int LOG:\nstatement: RESET enable_seqscan;\ncorrupted double-linked list\n\n... which doesn't appear in our code, but matches this:\n\nhttps://github.com/bminor/glibc/blob/cedbf6d5f3f70ca911176de87d6e453eeab4b7a1/malloc/malloc.c#L1645\n\nNo reason to think it's our fault, but it'd be nice to see a\nbacktrace. Is gdb installed, and are core files being dumped by that\nSIGABRT, and are they using the default name\n(/proc/sys/kernel/core_pattern = core), which the BF can find with the\nvalue it's using, namely 'core_file_glob' => 'core*'?\n\n\n", "msg_date": "Sun, 9 May 2021 21:23:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\nHello Thomas,\n\n> Since seawasp's bleeding-edge clang moved to \"20210226\", it failed\n> every run except 4, and a couple of days ago it moved to \"20210508\"\n> and it's still broken.\n\nIndeed I have noticed that there is indeed an issue, but the investigation \nis not very high on my current too deep pg-unrelated todo list.\n\n> It's always like this:\n>\n> 2021-05-09 03:31:37.602 CEST [1678796:171] pg_regress/_int LOG:\n> statement: RESET enable_seqscan;\n> corrupted double-linked list\n>\n> ... which doesn't appear in our code, but matches this:\n>\n> https://github.com/bminor/glibc/blob/cedbf6d5f3f70ca911176de87d6e453eeab4b7a1/malloc/malloc.c#L1645\n\n> No reason to think it's our fault, but it'd be nice to see a\n> backtrace.\n\nISTM it looks like some kind of memory corruption. 
If I'd have to guess \nbetween glibc, clang and pg, not sure which one I'd chose between the two \nlaters potential bug sources.\n\n> Is gdb installed, and are core files being dumped by that SIGABRT, and \n> are they using the default name (/proc/sys/kernel/core_pattern = core), \n> which the BF can find with the value it's using, namely 'core_file_glob' \n> => 'core*'?\n\nNope:\n\n sh> cat /proc/sys/kernel/core_pattern\n |/usr/share/apport/apport %p %s %c %d %P %E\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 10 May 2021 08:59:21 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Mon, May 10, 2021 at 6:59 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Is gdb installed, and are core files being dumped by that SIGABRT, and\n> > are they using the default name (/proc/sys/kernel/core_pattern = core),\n> > which the BF can find with the value it's using, namely 'core_file_glob'\n> > => 'core*'?\n>\n> Nope:\n>\n> sh> cat /proc/sys/kernel/core_pattern\n> |/usr/share/apport/apport %p %s %c %d %P %E\n\nIf you don't care about Ubuntu \"apport\" on this system (something for\nsending crash/bug reports to developers with a GUI), you could\nuninstall it (otherwise it overwrites the core_pattern every time it\nrestarts, no matter what you write in your sysctl.conf, apparently),\nand then sudo sysctl -w kernel.core_pattern=core to undo the setting\nimmediately (or reboot).
Then hopefully the build farm would succeed\nin dumping a backtrace into the log.\n\n\n", "msg_date": "Mon, 10 May 2021 19:27:29 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\n> If you don't care about Ubuntu \"apport\" on this system (something for\n> sending crash/bug reports to developers with a GUI), you could\n> uninstall it (otherwise it overwrites the core_pattern every time it\n> restarts, no matter what you write in your sysctl.conf, apparently),\n> and then sudo sysctl -w kernel.core_pattern=core to undo the setting\n> immediately (or reboot). Then hopefully the build farm would succeed\n> in dumping a backtrace into the log.\n\nI forced-removed apport (which meant removing xserver-xorg). Let's see \nwhether the reports are better or whether I break something.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 10 May 2021 11:30:31 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Mon, May 10, 2021 at 9:30 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> I forced-removed apport (which meant removing xserver-xorg). Let's see\n> whether the reports are better or whether I break something.\n\nAnd of course this time it succeeded :-)\n\nJust by the way, I noticed it takes ~40 minutes to compile. Is there\na reason you don't install ccache and set eg CC=\"ccache\n/path/to/clang\", CXX=\"ccache /path/to/clang++\", CLANG=\"ccache\n/path/to/clang\"? 
That doesn't seem to conflict with your goal of\ntesting LLVM/Clang's main branch, because ccache checks the compiler's\nmtime as part of its cache invalidation strategy.\n\n\n", "msg_date": "Mon, 10 May 2021 22:52:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\n> And of course this time it succeeded :-)\n\nHmmm. ISTM that failures are on and off every few attempts.\n\n> Just by the way, I noticed it takes ~40 minutes to compile. Is there\n> a reason you don't install ccache and set eg CC=\"ccache\n> /path/to/clang\", CXX=\"ccache /path/to/clang++\", CLANG=\"ccache\n> /path/to/clang\"? That doesn't seem to conflict with your goal of\n> testing LLVM/Clang's main branch, because ccache checks the compiler's\n> mtime as part of its cache invalidation strategy.\n\nYep.\n\nI remember that I disactivated it for some reason once, but I cannot \nremember why. I just reactivated it, will see what happens.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 10 May 2021 13:21:24 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Mon, May 10, 2021 at 11:21 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > And of course this time it succeeded :-)\n>\n> Hmmm. ISTM that failures are on and off every few attempts.\n\nOK we got the SIGABRT this time, but still no backtrace. If the\nkernel's core_pattern is \"core\", gdb is installed, then considering\nthat the buildfarm core_file_glob is \"core*\" and the script version is\nrecent (REL_12), then I'm out of ideas. 
ulimit -c XXX shouldn't be\nneeded because the perl script does that with rlimit.\n\n\n", "msg_date": "Tue, 11 May 2021 12:16:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On 2021-05-11 12:16:44 +1200, Thomas Munro wrote:\n> OK we got the SIGABRT this time, but still no backtrace. If the\n> kernel's core_pattern is \"core\", gdb is installed, then considering\n> that the buildfarm core_file_glob is \"core*\" and the script version is\n> recent (REL_12), then I'm out of ideas. ulimit -c XXX shouldn't be\n> needed because the perl script does that with rlimit.\n\nUnless perhaps the hard rlimit for -C is set? ulimit -c -H should show\nthat.\n\n\n", "msg_date": "Mon, 10 May 2021 18:08:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\n> On 2021-05-11 12:16:44 +1200, Thomas Munro wrote:\n>> OK we got the SIGABRT this time, but still no backtrace. If the\n>> kernel's core_pattern is \"core\", gdb is installed, then considering\n>> that the buildfarm core_file_glob is \"core*\" and the script version is\n>> recent (REL_12), then I'm out of ideas. ulimit -c XXX shouldn't be\n>> needed because the perl script does that with rlimit.\n>\n> Unless perhaps the hard rlimit for -C is set? ulimit -c -H should show\n> that.\n\nPossibly I have just added \"ulimit -c unlimited\" in the script, we should \nsee the effect on next round.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 11 May 2021 10:22:02 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On 2021-05-11 10:22:02 +0200, Fabien COELHO wrote:\n> \n> > On 2021-05-11 12:16:44 +1200, Thomas Munro wrote:\n> > > OK we got the SIGABRT this time, but still no backtrace. 
If the\n> > > kernel's core_pattern is \"core\", gdb is installed, then considering\n> > > that the buildfarm core_file_glob is \"core*\" and the script version is\n> > > recent (REL_12), then I'm out of ideas. ulimit -c XXX shouldn't be\n> > > needed because the perl script does that with rlimit.\n> > \n> > Unless perhaps the hard rlimit for -C is set? ulimit -c -H should show\n> > that.\n> \n> Possibly I have just added \"ulimit -c unlimited\" in the script, we should\n> see the effect on next round.\n\nIf it's the hard limit that won't help, because the hard limit can only\nbe increased by a privileged process.\n\n\n", "msg_date": "Tue, 11 May 2021 10:13:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\nHello Andres,p\n\n>>> Unless perhaps the hard rlimit for -C is set? ulimit -c -H should show\n>>> that.\n>>\n>> Possibly I have just added \"ulimit -c unlimited\" in the script, we should\n>> see the effect on next round.\n>\n> If it's the hard limit that won't help, because the hard limit can only\n> be increased by a privileged process.\n\nIt seems to be the soft one, so ISTM that it may work.\n\nBefore:\n\n sh> ulimit -c -S\n 0\n sh> ulimit -c -H\n unlimited\n\nThen after:\n\n sh> ulimit -c unlimited\n\nI have:\n\n sh> ulimit -c -S\n unlimited\n sh> ulimit -c -H\n unlimited\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 11 May 2021 23:09:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\n> Possibly I have just added \"ulimit -c unlimited\" in the script, we should see \n> the effect on next round.\n\nfor def5b065 it ended on on the contrib ltree test:\n\n 2021-05-12 20:12:52.528 CEST [3042602:410] pg_regress/ltree LOG: disconnection: session time: 0:00:13.426 user=buildfarm database=contrib_regression_ltree 
host=[local]\n\n /home/fabien/pg/build-farm-12/buildroot/HEAD/pgsql.build/contrib/ltree/results/ltree.out\n 2021-05-12 20:12:52.523330311 +0200\n @@ -7931,11 +7931,8 @@\n (1 row)\n\n SELECT count(*) FROM _ltreetest WHERE t ~ '1.1.1.*' ;\n - count\n --------\n - 19\n -(1 row)\n -\n +ERROR: stack depth limit exceeded\n +HINT: Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.\n SELECT count(*) FROM _ltreetest WHERE t ~ '*.1' ;\n count\n -------\n\nNot a core dump, though.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 12 May 2021 20:30:56 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hello Andres,\n\nIt finally failed with a core on 8f72bba, in llvm_shutdown, AFAIKS in a \nfree while doing malloc-related housekeeping.\n\nMy guess is that there is an actual memory corruption somewhere. It is \nunobvious whether it is in bleeding-edge llvm or bleeding-edge postgres \nthough.\n\nThe issue is non-deterministically triggered in contrib checks, either in \nint or ltree, but not elsewhere. This suggests issues specific to these \nmodules, or triggered by these modules. Hmmm…\n\nI've just launched a run with valgrind enabled.\n\n-- \nFabien.", "msg_date": "Sat, 15 May 2021 08:41:27 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Sat, May 15, 2021 at 6:41 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> The issue is non-deterministically triggered in contrib checks, either in\n> int or ltree, but not elsewhere. This suggests issues specific to these\n> modules, or triggered by these modules. Hmmm…\n\nHmm, yeah. 
A couple of different ways that ltreetest fails without crashing:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2021-05-13%2001%3A17%3A15\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2021-05-12%2017%3A17%3A15\n\nOtherwise it's mostly free() blowing up, and one case of an assertion\nfailure in llvm::StringMapImpl::RemoveKey, I guess before free() is\nreached:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2021-05-14%2009%3A17%3A15\n\n\n", "msg_date": "Sun, 16 May 2021 22:58:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": ">> The issue is non-deterministically triggered in contrib checks, either in\n>> int or ltree, but not elsewhere. This suggests issues specific to these\n>> modules, or triggered by these modules. Hmmm…\n>\n> Hmm, yeah. A couple of different ways that ltreetest fails without crashing:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2021-05-13%2001%3A17%3A15\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2021-05-12%2017%3A17%3A15\n>\n> Otherwise it's mostly free() blowing up, and one case of an assertion\n> failure in llvm::StringMapImpl::RemoveKey, I guess before free() is\n> reached:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2021-05-14%2009%3A17%3A15\n\nIt seems that the upload of the valgrind run (many hours…) failed on \"413 \nrequest entity too large\", and everything seems to have been cleaned \ndespite the \"--keepall\" I think I put when I started the run.\n\nNot sure about the best way to proceed.\n\n-- \nFabien.", "msg_date": "Tue, 18 May 2021 19:02:20 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Wed, May 19, 2021 at 5:02 AM Fabien COELHO 
<coelho@cri.ensmp.fr> wrote:\n> It seems that the upload of the valgrind run (many hours…) failed on \"413\n> request entity too large\", and everything seems to have been cleaned\n> despite the \"--keepall\" I think I put when I started the run.\n\nI installed Clang/LLVM version\n\"1:13~++20210520071732+02f2d739e074-1~exp1~20210520052519.57\" from\nhttps://apt.llvm.org/ on a Debian buster box, and I saw that\ncontrib/ltree's test fail about half the time with a range of weird\nand wonderful outputs (wrong answers) similar to seawasp, but it never\ncrashed. I ran it under valgrind and I managed to get:\n\n==529250== Invalid read of size 8\n==529250== at 0x1475B6CA: void\nstd::vector<llvm::orc::SymbolStringPtr,\nstd::allocator<llvm::orc::SymbolStringPtr>\n>::_M_realloc_insert<llvm::orc::SymbolStringPtr\nconst&>(__gnu_cxx::__normal_iterator<llvm::orc::SymbolStringPtr*,\nstd::vector<llvm::orc::SymbolStringPtr,\nstd::allocator<llvm::orc::SymbolStringPtr> > >,\nllvm::orc::SymbolStringPtr const&) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x1474F246:\nllvm::orc::JITDylib::removeTracker(llvm::orc::ResourceTracker&) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x14741E0A:\nllvm::orc::ExecutionSession::removeResourceTracker(llvm::orc::ResourceTracker&)\n(in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x14747421: llvm::orc::JITDylib::clear() (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x14751F5B: llvm::orc::ExecutionSession::endSession()\n(in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x14785207: llvm::orc::LLJIT::~LLJIT() (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x147A4C7D: LLVMOrcDisposeLLJIT (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x124D0094: llvm_shutdown (llvmjit.c:892)\n==529250== by 0x57BC08: proc_exit_prepare (ipc.c:209)\n==529250== by 0x57BC77: proc_exit (ipc.c:107)\n==529250== by 0x5A545B: PostgresMain (postgres.c:4700)\n==529250== by 
0x517569: BackendRun (postmaster.c:4491)\n==529250== by 0x517569: BackendStartup (postmaster.c:4213)\n==529250== by 0x517569: ServerLoop (postmaster.c:1745)\n==529250== Address 0x1a969088 is 1,416 bytes inside a block of size\n3,072 free'd\n==529250== at 0x4839EAB: operator delete(void*) (vg_replace_malloc.c:584)\n==529250== by 0x141DFD8E: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x141DD9C0: llvm::LazyValueInfo::releaseMemory() (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13294832:\nllvm::PMDataManager::freePass(llvm::Pass*, llvm::StringRef,\nllvm::PassDebuggingString) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13294774:\nllvm::PMDataManager::removeDeadPasses(llvm::Pass*, llvm::StringRef,\nllvm::PassDebuggingString) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13290A69:\nllvm::FPPassManager::runOnFunction(llvm::Function&) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x14137632: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13290FDE:\nllvm::legacy::PassManagerImpl::run(llvm::Module&) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x131F8625: LLVMRunPassManager (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x124D0772: llvm_optimize_module (llvmjit.c:620)\n==529250== by 0x124D0772: llvm_compile_module (llvmjit.c:671)\n==529250== by 0x124D0772: llvm_get_function (llvmjit.c:291)\n==529250== by 0x124DA3BC: ExecRunCompiledExpr (llvmjit_expr.c:2402)\n==529250== by 0x41D15C: ExecEvalExprSwitchContext (executor.h:339)\n==529250== by 0x41D15C: ExecQual (executor.h:408)\n==529250== by 0x41D15C: ExecScan (execScan.c:227)\n==529250== Block was alloc'd at\n==529250== at 0x4838DEF: operator new(unsigned long)\n(vg_replace_malloc.c:342)\n==529250== by 0x141E47D6: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x141E43B7: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x141E3202: ??? 
(in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x141E0501: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x141DDD74: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x141DDC4C:\nllvm::LazyValueInfo::getConstant(llvm::Value*, llvm::Instruction*) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13D0EE4B: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13D11D42: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x1329098C:\nllvm::FPPassManager::runOnFunction(llvm::Function&) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x14137632: ??? (in /usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250== by 0x13290FDE:\nllvm::legacy::PassManagerImpl::run(llvm::Module&) (in\n/usr/lib/x86_64-linux-gnu/libLLVM-13.so.1)\n==529250==\n\nMaybe they should rewrite LLVM in Rust.\n\n\n", "msg_date": "Fri, 21 May 2021 21:37:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-05-21 21:37:22 +1200, Thomas Munro wrote:\n> I installed Clang/LLVM version\n> \"1:13~++20210520071732+02f2d739e074-1~exp1~20210520052519.57\" from\n> https://apt.llvm.org/ on a Debian buster box, and I saw that\n> contrib/ltree's test fail about half the time with a range of weird\n> and wonderful outputs (wrong answers) similar to seawasp, but it never\n> crashed. I ran it under valgrind and I managed to get:\n\nInteresting. I tried this with a slightly older LLVM checkout\n(6f4f0afaa8ae), from 2021-04-20, contrib/ltree tests run without an\nissue, even if I force everything to be jitted+inlined+optimized. The\ngit hash in the package version indicates the commit is from\n2021-05-20. Upgrading my local checkout to see whether I can repro the\nproblem. 
If I can we at least have a not too large bisection window...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 May 2021 14:58:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Interesting. I tried this with a slightly older LLVM checkout\n> (6f4f0afaa8ae), from 2021-04-20, contrib/ltree tests run without an\n> issue, even if I force everything to be jitted+inlined+optimized. The\n> git hash in the package version indicates the commit is from\n> 2021-05-20. Upgrading my local checkout to see whether I can repro the\n> problem. If I can we at least have a not too large bisection window...\n\nWe know that seawasp was okay as of\n\nconfigure: using compiler=clang version 13.0.0 (https://github.com/llvm/llvm-project.git f22d3813850f9e87c5204df6844a93b8c5db7730)\n\nand not okay as of\n\nconfigure: using compiler=clang version 13.0.0 (https://github.com/llvm/llvm-project.git 0e8f5e4a6864839d2292ec1ddfe48b6178c01f85)\n\nso that should correspond to a window of about a week, if I gather\nFabien's update strategy correctly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 May 2021 18:18:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-05-21 14:58:38 -0700, Andres Freund wrote:\n> Interesting. I tried this with a slightly older LLVM checkout\n> (6f4f0afaa8ae), from 2021-04-20, contrib/ltree tests run without an\n> issue, even if I force everything to be jitted+inlined+optimized. The\n> git hash in the package version indicates the commit is from\n> 2021-05-20. Upgrading my local checkout to see whether I can repro the\n> problem. 
If I can we at least have a not too large bisection window...\n\nAfter resolving some PEBKAC issues I was able to reproduce the issue\nafter a git pull for fresh llvm sources and rebuilding. Hope I can\nnarrow it down without needing to bisect, there's faster building\nprojects than LLVM...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 May 2021 15:26:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-05-21 18:18:54 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Interesting. I tried this with a slightly older LLVM checkout\n> > (6f4f0afaa8ae), from 2021-04-20, contrib/ltree tests run without an\n> > issue, even if I force everything to be jitted+inlined+optimized. The\n> > git hash in the package version indicates the commit is from\n> > 2021-05-20. Upgrading my local checkout to see whether I can repro the\n> > problem. 
If I can we at least have a not too large bisection window...\n> \n> We know that seawasp was okay as of\n> \n> configure: using compiler=clang version 13.0.0 (https://github.com/llvm/llvm-project.git f22d3813850f9e87c5204df6844a93b8c5db7730)\n> \n> and not okay as of\n> \n> configure: using compiler=clang version 13.0.0 (https://github.com/llvm/llvm-project.git 0e8f5e4a6864839d2292ec1ddfe48b6178c01f85)\n> \n> so that should correspond to a window of about a week, if I gather\n> Fabien's update strategy correctly.\n\nI found the LLVM commit to blame (c8fc5e3ba942057d6c4cdcd1faeae69a28e7b671).\nContacting the author and reading the change to see if I can spot the\nissue myself.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 May 2021 15:57:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-05-21 15:57:01 -0700, Andres Freund wrote:\n> I found the LLVM commit to blame (c8fc5e3ba942057d6c4cdcd1faeae69a28e7b671).\n> Contacting the author and reading the change to see if I can spot the\n> issue myself.\n\nHrmpf. It's a silent API breakage. The author intended to email us about\nit, but apparently forgot. One now needs to increment a string-pool\nrefcount. The reason that didn't trigger a reliable crash is that\nthere's a path where the refcount of string-pool entries aren't asserted\nto be above before decrementing the refcount...
And that there\npractically never are references to the pool entries after use.\n\nContinuing to discuss whether there's a better way to deal with this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 May 2021 17:25:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "> We know that seawasp was okay as of\n>\n> configure: using compiler=clang version 13.0.0 (https://github.com/llvm/llvm-project.git f22d3813850f9e87c5204df6844a93b8c5db7730)\n>\n> and not okay as of\n>\n> configure: using compiler=clang version 13.0.0 (https://github.com/llvm/llvm-project.git 0e8f5e4a6864839d2292ec1ddfe48b6178c01f85)\n>\n> so that should correspond to a window of about a week, if I gather\n> Fabien's update strategy correctly.\n\nIndeed, the full recompilation is triggered once a week, on Saturdays.\n\nIf the build fails for some reason the previous version is kept and I may \nor may not have time to look at it and try to fix it for some time after \nthat, or it may happen that I do not notice the issue for some time…\n\n-- \nFabien.", "msg_date": "Sat, 22 May 2021 10:08:46 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Sat, May 22, 2021 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-05-21 15:57:01 -0700, Andres Freund wrote:\n> > I found the LLVM commit to blame (c8fc5e3ba942057d6c4cdcd1faeae69a28e7b671).\n> > Contacting the author and reading the change to see if I can spot the\n> > issue myself.\n>\n> Hrmpf. It's a silent API breakage. The author intended to email us about\n> it, but apparently forgot. One now needs to increment a string-pool\n> refcount.
The reason that didn't trigger a reliable crash is that\n> there's a path where the refcount of string-pool entries aren't asserted\n> to be above before decrementing the refcount... And that there\n> practically never are references to the pool entries after use.\n>\n> Continuing to discusss whether there's a better way to deal with this.\n\nAny news?\n\nFWIW this change appears to fix the problem for my system (LLVM 13\nbuild from a couple of days ago). No more weird results, valgrind\nerrors gone. I ran the leak checker to see if I now had the opposite\nproblem, and although there are various leaks reported, I didn't see\nobvious intern pool related stacks.\n\ndiff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c\nindex 71029a39a9..7b09e520f5 100644\n--- a/src/backend/jit/llvm/llvmjit.c\n+++ b/src/backend/jit/llvm/llvmjit.c\n@@ -1116,6 +1116,11 @@\nllvm_resolve_symbols(LLVMOrcDefinitionGeneratorRef GeneratorObj, void\n*Ctx,\n if (error != LLVMErrorSuccess)\n LLVMOrcDisposeMaterializationUnit(mu);\n\n+#if LLVM_VERSION_MAJOR > 12\n+ for (int i = 0; i < LookupSetSize; i++)\n+ LLVMOrcRetainSymbolStringPoolEntry(symbols[i].Name);\n+#endif\n+\n pfree(symbols);\n\n return error;\n\n\n", "msg_date": "Sat, 19 Jun 2021 17:07:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Sat, Jun 19, 2021 at 5:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> if (error != LLVMErrorSuccess)\n> LLVMOrcDisposeMaterializationUnit(mu);\n>\n> +#if LLVM_VERSION_MAJOR > 12\n> + for (int i = 0; i < LookupSetSize; i++)\n> + LLVMOrcRetainSymbolStringPoolEntry(symbols[i].Name);\n> +#endif\n\n(Though, erm, that code probably either needs to move a bit further up\nor become conditional, considering the error case immediately above\nit, not sure which...)\n\n\n", "msg_date": "Sat, 19 Jun 2021 21:37:43 +1200", "msg_from": "Thomas Munro 
<thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Jun 19, 2021 at 5:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> if (error != LLVMErrorSuccess)\n>> LLVMOrcDisposeMaterializationUnit(mu);\n>> \n>> +#if LLVM_VERSION_MAJOR > 12\n>> + for (int i = 0; i < LookupSetSize; i++)\n>> + LLVMOrcRetainSymbolStringPoolEntry(symbols[i].Name);\n>> +#endif\n\n> (Though, erm, that code probably either needs to move a bit further up\n> or become conditional, considering the error case immediately above\n> it, not sure which...)\n\nIs a compile-time conditional really going to be reliable? See nearby\narguments about compile-time vs run-time checks for libpq features.\nIt's not clear to me how tightly LLVM binds its headers and running\ncode.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Jun 2021 10:12:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-06-19 17:07:51 +1200, Thomas Munro wrote:\n> On Sat, May 22, 2021 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-05-21 15:57:01 -0700, Andres Freund wrote:\n> > > I found the LLVM commit to blame (c8fc5e3ba942057d6c4cdcd1faeae69a28e7b671).\n> > > Contacting the author and reading the change to see if I can spit the\n> > > issue myself.\n> >\n> > Hrmpf. It's a silent API breakage. The author intended to email us about\n> > it, but apparently forgot. One now needs to increment a string-pool\n> > refcount. The reason that didn't trigger a reliable crash is that\n> > there's a path where the refcount of string-pool entries aren't asserted\n> > to be above before decrementing the refcount... 
And that there\n> > practically never are references to the pool entries after use.\n> >\n> > Continuing to discusss whether there's a better way to deal with this.\n> \n> Any news?\n\nNothing really. I'd discussed it a bit with the relevant LLVM\nmaintainer, but we didn't come to a real resolution. He apologized for\nnot giving a proper heads up - he'd planned to send out an email, but\nsomehow lost track of that.\n\nGiven that any potential solution would be also end up being a versioned\nifdef, I think adding something like what you suggest here is the least\nunreasonable solution.\n\n> FWIW this change appears to fix the problem for my system (LLVM 13\n> build from a couple of days ago). No more weird results, valgrind\n> errors gone. I ran the leak checker to see if I now had the opposite\n> problem, and although there are various leaks reported, I didn't see\n> obvious intern pool related stacks.\n> \n> diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c\n> index 71029a39a9..7b09e520f5 100644\n> --- a/src/backend/jit/llvm/llvmjit.c\n> +++ b/src/backend/jit/llvm/llvmjit.c\n> @@ -1116,6 +1116,11 @@\n> llvm_resolve_symbols(LLVMOrcDefinitionGeneratorRef GeneratorObj, void\n> *Ctx,\n> if (error != LLVMErrorSuccess)\n> LLVMOrcDisposeMaterializationUnit(mu);\n> \n> +#if LLVM_VERSION_MAJOR > 12\n> + for (int i = 0; i < LookupSetSize; i++)\n> + LLVMOrcRetainSymbolStringPoolEntry(symbols[i].Name);\n> +#endif\n> +\n> pfree(symbols);\n\nI think this should be part of the earlier loop? 
Once\nLLVMOrcAbsoluteSymbols() is called that owns the reference, so there\ndoesn't seem to be a reason to increase the refcount only later?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 20 Jun 2021 03:59:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-06-19 10:12:03 -0400, Tom Lane wrote:\n> Is a compile-time conditional really going to be reliable? See nearby\n> arguments about compile-time vs run-time checks for libpq features.\n> It's not clear to me how tightly LLVM binds its headers and running\n> code.\n\nIt should be fine (and if not we have plenty other places it'd be\nproblematic). LLVM embeds the version between user of llvm and the\nlibrary version in some symbol, so if there's a sufficient mismatch\nit'll cause link time issues. Of course that only works for major\nversions, but that shouldn't be an issue here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 20 Jun 2021 04:01:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Sun, Jun 20, 2021 at 10:59 PM Andres Freund <andres@anarazel.de> wrote:\n> I think this should be part of the earlier loop? Once\n> LLVMOrcAbsoluteSymbols() is called that owns the reference, so there\n> doesn't seem to be a reason to increase the refcount only later?\n\nRight, that makes sense. Here's a patch like that.\n\nLooking at their release schedule on https://llvm.org/, I see we have\na gamble to make. They currently plan to cut RC1 at the end of July,\nand to release in late September (every second LLVM major release\ncoincides approximately with a PG major release). 
Option 1: wait\nuntil we branch for 14, and then push this to master so that at least\nseawasp can get back to looking for new problems, and then back-patch\nonly after they release (presumably in time for our November\nreleases). If their API change sticks, PostgreSQL crashes and gives\nweird results with the initial release of LLVM 13 until our fix comes\nout. Option 2: get ahead of their release and get this into 14 +\nAugust back branch releases based on their current/RC behaviour. If\nthey decide to revert the change before the final release, we'll leak\nsymbol names because we hold an extra reference, until we can fix\nthat.\n\nFor the last round of changes[1], there was a similar when-to-act\nquestion, but that was a doesn't-compile-anymore API change, whereas\nthis is a silent demons-might-fly-out-of-your-nose API change.\n\n[1] https://www.postgresql.org/message-id/flat/20201016011244.pmyvr3ee2gbzplq4%40alap3.anarazel.de", "msg_date": "Mon, 21 Jun 2021 11:45:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Looking at their release schedule on https://llvm.org/, I see we have\n> a gamble to make. They currently plan to cut RC1 at the end of July,\n> and to release in late September (every second LLVM major release\n> coincides approximately with a PG major release). Option 1: wait\n> until we branch for 14, and then push this to master so that at least\n> seawasp can get back to looking for new problems, and then back-patch\n> only after they release (presumably in time for our November\n> releases). If their API change sticks, PostgreSQL crashes and gives\n> weird results with the initial release of LLVM 13 until our fix comes\n> out. Option 2: get ahead of their release and get this into 14 +\n> August back branch releases based on their current/RC behaviour. 
If\n> they decide to revert the change before the final release, we'll leak\n> symbol names because we hold an extra reference, until we can fix\n> that.\n\nIf that's an accurate characterization of the tradeoff, I have little\ndifficulty in voting for #2. A crash is strictly worse than a memory\nleak. Besides which, I've heard little indication that they might\nrevert.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Jun 2021 19:56:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Sun, Jun 20, 2021 at 11:01 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-06-19 10:12:03 -0400, Tom Lane wrote:\n> > Is a compile-time conditional really going to be reliable? See nearby\n> > arguments about compile-time vs run-time checks for libpq features.\n> > It's not clear to me how tightly LLVM binds its headers and running\n> > code.\n>\n> It should be fine (and if not we have plenty other places it'd be\n> problematic). LLVM embeds the version between user of llvm and the\n> library version in some symbol, so if there's a sufficient mismatch\n> it'll cause link time issues. Of course that only works for major\n> versions, but that shouldn't be an issue here.\n\nI looked into this a bit. On the usual Unixoid server OSes the first\nline of defence is that the major version is baked into the library\nname to support parallel installation of major versions, so our\nllvmjit.so is linked against eg libLLVM-13.so.1 (all controlled by\nllvm-config), and then there are further defences like versioned\nsymbols, LLVM_13 etc on some platforms.\n\nCuriously, they skip this scheme for Macs (see their AddLLVM.cmake\nfile) and Windows. So of course I wanted to try to see if I could\nmake it break in the way Tom imagined, on a Mac. There, I use\nMacPorts, and it has separate packages for major versions, for example\n\"llvm-12\", much like other distros. 
The package maintainers put\nlibLLVM.dylib (LLVM project's choice for this platform) into different\npaths under .../libexec/llvm-$VERSION/.... (package maintainer's\nchoice), and there is a tool to select the current default (alarm\nbells ringing at this point). The first observation is that the\nMach-O \"compatibility version\" is 1.0.0 on all the .dylibs, so yeah,\nthat mechanism isn't going to save you, but ... it turns out to be a\nmoot question for now because, to my surprise, we're statically\nlinking LLVM into our llvmjit.so on that platform. That turns out to\nbe because llvm-config --libs won't spit out dynamic link options if\nit can't find a library name with the version embedded in it. I see\nnow that Brew's maintainers take it on themselves to create that\nsymlink[1] (unlike MacPorts'), so ... erm, could be trouble there, I\ndunno because I don't want to install that, but if so, maybe they\nasked for it? I guess that none of this stuff really matters for real\nworld non-hacker users, who are probably using an installer that ships\nits own copy of the thing. I expect it'll be the same on Windows when\nwe eventually support LLVM there. /me closes Macintosh\n\n[1] https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/llvm.rb#L175\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:23:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "Hi,\n\nOn 2021-06-20 19:56:56 -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Looking at their release schedule on https://llvm.org/, I see we have\n> > a gamble to make. They currently plan to cut RC1 at the end of July,\n> > and to release in late September (every second LLVM major release\n> > coincides approximately with a PG major release). 
Option 1: wait\n> > until we branch for 14, and then push this to master so that at least\n> > seawasp can get back to looking for new problems, and then back-patch\n> > only after they release (presumably in time for our November\n> > releases). If their API change sticks, PostgreSQL crashes and gives\n> > weird results with the initial release of LLVM 13 until our fix comes\n> > out. Option 2: get ahead of their release and get this into 14 +\n> > August back branch releases based on their current/RC behaviour. If\n> > they decide to revert the change before the final release, we'll leak\n> > symbol names because we hold an extra reference, until we can fix\n> > that.\n\nI think I'd vote for 2 or 2+ (backpatch immediately).\n\n\n> If that's an accurate characterization of the tradeoff, I have little\n> difficulty in voting for #2. A crash is strictly worse than a memory\n> leak. Besides which, I've heard little indication that they might\n> revert.\n\nWe might be able to get them to revert and put in a different API, but I\ndon't think it'd clearly be an improvement at this point.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Jun 2021 03:23:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "On Mon, Jun 21, 2021 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If that's an accurate characterization of the tradeoff, I have little\n> difficulty in voting for #2. A crash is strictly worse than a memory\n> leak. Besides which, I've heard little indication that they might\n> revert.\n\nAgreed.\n\nOn Mon, Jun 21, 2021 at 10:23 PM Andres Freund <andres@anarazel.de> wrote:\n> I think I'd vote for 2 or 2+ (backpatch immediately).\n\nYeah, that makes sense. 
Done.\n\nSeawasp should turn green on its next run.\n\n\n", "msg_date": "Fri, 25 Jun 2021 11:38:01 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\nHello Thomas,\n\n> Seawasp should turn green on its next run.\n\nHopefully.\n\nIt is not scheduled very soon because Tom complained about the induced \nnoise in one buildfarm report, so I put the check to once a week.\n\nI changed it to start a run in a few minutes. I've rescheduled to once a \nday after that (previous schedule was a check every hour).\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 26 Jun 2021 08:09:25 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" }, { "msg_contents": "\n>> Seawasp should turn green on its next run.\n\nIt did!\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 26 Jun 2021 09:15:48 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seawasp failing, maybe in glibc allocator" } ]
[ { "msg_contents": "Hi,\n\nFor some of the logical replication messages the data type documented\nwas not correct, especially for lsn and xid. For lsn actual datatype\nused is uint64 but is documented as int64, similarly for xid, datatype\nused is uint32 but documented as int32.\nAttached is a patch which has the fix for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Sun, 9 May 2021 18:07:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Sun, May 9, 2021 at 10:38 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> For some of the logical replication messages the data type documented\n> was not correct, especially for lsn and xid. For lsn actual datatype\n> used is uint64 but is documented as int64, similarly for xid, datatype\n> used is uint32 but documented as int32.\n> Attached is a patch which has the fix for the same.\n> Thoughts?\n\nIf you want to do this then there are more - e.g. Flags should be\nUint8 instead of Int8.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sun, 9 May 2021 23:13:58 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Sun, May 9, 2021, at 9:37 AM, vignesh C wrote:\n> For some of the logical replication messages the data type documented\n> was not correct, especially for lsn and xid. For lsn actual datatype\n> used is uint64 but is documented as int64, similarly for xid, datatype\n> used is uint32 but documented as int32.\n> Attached is a patch which has the fix for the same.\n> Thoughts?\nThere was a discussion [1] a few months ago about it. Read the Message Data\nTypes definition [2]. 
It is confusing that an internal data type (int64) has a\nname similar to a generic data type in a protocol definition. As I said [1] we\nshould probably inform that that piece of information (LSN) is a XLogRecPtr.\nSince this chapter is intended for developers, I think it is fine to include\nsuch internal detail.\n\n[1] https://www.postgresql.org/message-id/CAH503wBwC8A7DbDYUXRqW1ZAHKpj%2BD9bN7hcgszvP_1FzXbs_Q%40mail.gmail.com\n[2] https://www.postgresql.org/docs/current/protocol-message-types.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n", "msg_date": "Sun, 09 May 2021 10:23:14 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Corrected_documentation_of_data_type_for_the_logical_repli?=\n =?UTF-8?Q?cation_message_formats.?=" }, { "msg_contents": "On Sun, May 9, 2021 at 11:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, May 9, 2021 at 10:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > For some of the logical replication messages the data type documented\n> > was not correct, especially for lsn and xid. For lsn actual datatype\n> > used is uint64 but is documented as int64, similarly for xid, datatype\n> > used is uint32 but documented as int32.\n> > Attached is a patch which has the fix for the same.\n> > Thoughts?\n>\n> If you want to do this then there are more - e.g. Flags should be\n> Uint8 instead of Int8.\n\nIrrespective of signed/unsigned, from the description of types [1] it\ndoes seem like all those unused \"(must be 0)\" replication flags ought\nto have been written as \"Int8(0)\" instead of \"Int8\".\n\n------\n[1] https://www.postgresql.org/docs/devel/protocol-message-types.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 10 May 2021 11:41:27 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." 
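The signed-versus-unsigned point made above can be illustrated with a small sketch (Python, purely illustrative — it is not from the thread's patch; the field layout follows the Begin message as described in the thread: Byte1('B'), then an Int64 final LSN that is really an unsigned XLogRecPtr; the variable names are hypothetical):

```python
import struct

# Hypothetical raw bytes of a logical replication Begin message body:
# Byte1('B'), Int64 final LSN, Int64 commit timestamp, Int32 xid.
# An LSN with the high bit set shows why the field is effectively unsigned.
body = b"B" + b"\xff" * 8 + struct.pack(">q", 0) + struct.pack(">I", 0)

kind = body[0:1]
lsn_signed = struct.unpack(">q", body[1:9])[0]    # the doc's "Int64" read naively
lsn_unsigned = struct.unpack(">Q", body[1:9])[0]  # as an unsigned 64-bit value

assert kind == b"B"
assert lsn_signed == -1            # nonsense if taken as signed
assert lsn_unsigned == 2**64 - 1   # sensible unsigned interpretation
print(f"{lsn_unsigned >> 32:X}/{lsn_unsigned & 0xFFFFFFFF:X}")  # usual LSN display form
```

An LSN with the high bit set decodes to a negative number when read naively as a signed Int64, which is why the thread argues the documentation should point readers at the internal XLogRecPtr (uint64) representation.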
}, { "msg_contents": "On Sun, May 9, 2021 at 6:54 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Sun, May 9, 2021, at 9:37 AM, vignesh C wrote:\n>\n> For some of the logical replication messages the data type documented\n> was not correct, especially for lsn and xid. For lsn actual datatype\n> used is uint64 but is documented as int64, similarly for xid, datatype\n> used is uint32 but documented as int32.\n> Attached is a patch which has the fix for the same.\n> Thoughts?\n>\n> There was a discussion [1] a few months ago about it. Read the Message Data\n> Types definition [2]. It is confusing that an internal data type (int64) has a\n> name similar to a generic data type in a protocol definition. As I said [1] we\n> should probably inform that that piece of information (LSN) is a XLogRecPtr.\n> Since this chapter is intended for developers, I think it is fine to include\n> such internal detail.\n\nI agree to specifying the actual dataypes like XLogRecPtr for lsn,\nTimestampTz for timestamp, TransactionId for xid and Oid for the\nobject id. Attached v2 patch which is changed on similar lines.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 10 May 2021 19:15:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Sun, May 9, 2021 at 6:44 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, May 9, 2021 at 10:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > For some of the logical replication messages the data type documented\n> > was not correct, especially for lsn and xid. For lsn actual datatype\n> > used is uint64 but is documented as int64, similarly for xid, datatype\n> > used is uint32 but documented as int32.\n> > Attached is a patch which has the fix for the same.\n> > Thoughts?\n>\n> If you want to do this then there are more - e.g. 
Flags should be\n> Uint8 instead of Int8.\n\nThanks for the comments.\nI have made this change in v2 patch posted at [1].\nThis also includes the fix to specify uint8(0) at appropriate places.\n\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm2G_BJ9G%3DCxy9A6ht-TXPn4nB8W9_BcawuA1uxsNvoWfQ%40mail.gmail.com\n\nRegards,\nVignesh\n", "msg_date": "Mon, 10 May 2021 19:21:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Mon, May 10, 2021 at 11:46 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, May 9, 2021 at 6:54 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Sun, May 9, 2021, at 9:37 AM, vignesh C wrote:\n> >\n> > For some of the logical replication messages the data type documented\n> > was not correct, especially for lsn and xid. 
For lsn actual datatype\n> > used is uint64 but is documented as int64, similarly for xid, datatype\n> > used is uint32 but documented as int32.\n> > Attached is a patch which has the fix for the same.\n> > Thoughts?\n> >\n> > There was a discussion [1] a few months ago about it. Read the Message Data\n> > Types definition [2]. It is confusing that an internal data type (int64) has a\n> > name similar to a generic data type in a protocol definition. As I said [1] we\n> > should probably inform that that piece of information (LSN) is a XLogRecPtr.\n> > Since this chapter is intended for developers, I think it is fine to include\n> > such internal detail.\n>\n> I agree to specifying the actual dataypes like XLogRecPtr for lsn,\n> TimestampTz for timestamp, TransactionId for xid and Oid for the\n> object id. Attached v2 patch which is changed on similar lines.\n> Thoughts?\n\nAdding new message \"types\" does not seem like a good idea to me. e.g.\nAll the message types must be defined by the page [1] so if you add\nnew ones then they should also be defined on that page. But then how\nmany other places ought to make use of those new types? IMO this\napproach will snowball out of control.\n\nBut I am also doubtful there was ever actually a (signed/unsigned)\nproblem in the first place. AFAIK the message types like \"Int32\" etc\njust happen to have a name that \"looks\" like a C type, but I think\nthat is the extent of it. It is simply saying how data bytes are\ntransferred on the wire. All the low level C functions [2] always deal\nwith unsigned.\n\n~~\n\nMy suggestion would be to restrict your changes to the *description*\nparts of each message. e.g. 
maybe you could say what C type the bytes\nrepresent when they come off the wire at the other end - something\nlike below.\n\nBEFORE\nInt64\nThe final LSN of the transaction.\n\nAFTER\nInt64\nThe final LSN (XLogRecPtr) of the transaction\n\n------\n[1] https://www.postgresql.org/docs/devel/protocol-message-types.html\n[2] https://linux.die.net/man/3/ntohl\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 11 May 2021 12:36:47 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Mon, May 10, 2021, at 10:45 AM, vignesh C wrote:\n> I agree to specifying the actual dataypes like XLogRecPtr for lsn,\n> TimestampTz for timestamp, TransactionId for xid and Oid for the\n> object id. Attached v2 patch which is changed on similar lines.\n> Thoughts?\nPerhaps I didn't make myself clear, I didn't suggest to replace the actual\nmessage data types [1] with the internal representation. We could probably\nexpand the description to include the internal representation. Hence, it is\nless confusing that the actual text. Peter suggested the same in a previous\nemail.\n\nAlthough, \"Message Data Types\" is one section before \"Message Formats\", it is\nprobably intuitive that the data type for each message refer to the previous\nsection. However, it is not so clear three section later. 
A sentence like\n\nThe base data types used are described in the section \"Messages Data Types\".\n\nat the first paragraph could help understand what these data types refer to\n(and also add a link to the data types section).\n\n[1] https://www.postgresql.org/docs/current/protocol-message-types.html\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n", "msg_date": "Tue, 11 May 2021 00:38:39 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Corrected_documentation_of_data_type_for_the_logical_repli?=\n =?UTF-8?Q?cation_message_formats.?=" }, { "msg_contents": "On Tue, May 11, 2021 at 8:06 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, May 10, 2021 at 11:46 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sun, May 9, 2021 at 6:54 PM Euler Taveira <euler@eulerto.com> wrote:\n> > >\n> > > On Sun, May 9, 2021, at 9:37 AM, vignesh C wrote:\n> > >\n> > > For some of the logical replication messages the data type documented\n> > > was not correct, especially for lsn and xid. For lsn actual datatype\n> > > used is uint64 but is documented as int64, similarly for xid, datatype\n> > > used is uint32 but documented as int32.\n> > > Attached is a patch which has the fix for the same.\n> > > Thoughts?\n> > >\n> > > There was a discussion [1] a few months ago about it. Read the Message Data\n> > > Types definition [2]. It is confusing that an internal data type (int64) has a\n> > > name similar to a generic data type in a protocol definition. As I said [1] we\n> > > should probably inform that that piece of information (LSN) is a XLogRecPtr.\n> > > Since this chapter is intended for developers, I think it is fine to include\n> > > such internal detail.\n> >\n> > I agree to specifying the actual dataypes like XLogRecPtr for lsn,\n> > TimestampTz for timestamp, TransactionId for xid and Oid for the\n> > object id. 
Attached v2 patch which is changed on similar lines.\n> > Thoughts?\n>\n> Adding new message \"types\" does not seem like a good idea to me. e.g.\n> All the message types must be defined by the page [1] so if you add\n> new ones then they should also be defined on that page. But then how\n> many other places ought to make use of those new types? IMO this\n> approach will snowball out of control.\n>\n> But I am also doubtful there was ever actually a (signed/unsigned)\n> problem in the first place. AFAIK the message types like \"Int32\" etc\n> just happen to have a name that \"looks\" like a C type, but I think\n> that is the extent of it. It is simply saying how data bytes are\n> transferred on the wire. All the low level C functions [2] always deal\n> with unsigned.\n>\n> ~~\n>\n> My suggestion would be to restrict your changes to the *description*\n> parts of each message. e.g. maybe you could say what C type the bytes\n> represent when they come off the wire at the other end - something\n> like below.\n>\n> BEFORE\n> Int64\n> The final LSN of the transaction.\n>\n> AFTER\n> Int64\n> The final LSN (XLogRecPtr) of the transaction\n\nThanks for the comments, Attached v3 patch has the changes as suggested.\n\nRegards,\nVignesh", "msg_date": "Tue, 11 May 2021 20:32:08 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Tue, May 11, 2021 at 9:09 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, May 10, 2021, at 10:45 AM, vignesh C wrote:\n>\n> I agree to specifying the actual dataypes like XLogRecPtr for lsn,\n> TimestampTz for timestamp, TransactionId for xid and Oid for the\n> object id. Attached v2 patch which is changed on similar lines.\n> Thoughts?\n>\n> Perhaps I didn't make myself clear, I didn't suggest to replace the actual\n> message data types [1] with the internal representation. 
We could probably\n> expand the description to include the internal representation. Hence, it\nis\n> less confusing that the actual text. Peter suggested the same in a\nprevious\n> email.\n>\n> Although, \"Message Data Types\" is one section before \"Message Formats\",\nit is\n> probably intuitive that the data type for each message refer to the\nprevious\n> section. However, it is not so clear three section later. A sentence like\n>\n> The base data types used are described in the section \"Messages Data\nTypes\".\n>\n> at the first paragraph could help understand what these data types refer\nto\n> (and also add a link to the data types section).\n\nI have included this at the beginning, the same is available in the patch\nposted at [1].\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm2QrB-_96ohonQs-YADC9Puk3caXjn%2B2UYZwxAkX%3DREQQ%40mail.gmail.com\n\nRegards,\nVignesh\n\nOn Tue, May 11, 2021 at 9:09 AM Euler Taveira <euler@eulerto.com> wrote:>> On Mon, May 10, 2021, at 10:45 AM, vignesh C wrote:>> I agree to specifying the actual dataypes like XLogRecPtr for lsn,> TimestampTz for timestamp, TransactionId for xid and Oid for the> object id. Attached v2 patch which is changed on similar lines.> Thoughts?>> Perhaps I didn't make myself clear, I didn't suggest to replace the actual> message data types [1] with the internal representation. We could probably> expand the description to include the internal representation. Hence, it is> less confusing that the actual text. Peter suggested the same in a previous> email.>> Although, \"Message Data Types\" is one section before \"Message Formats\", it is> probably intuitive that the data type for each message refer to the previous> section. However, it is not so clear three section later. 
A sentence like>> The base data types used are described in the section \"Messages Data Types\".>> at the first paragraph could help understand what these data types refer to> (and also add a link to the data types section).I have included this at the beginning, the same is available in the patch posted at [1].[1] - https://www.postgresql.org/message-id/CALDaNm2QrB-_96ohonQs-YADC9Puk3caXjn%2B2UYZwxAkX%3DREQQ%40mail.gmail.comRegards,Vignesh", "msg_date": "Tue, 11 May 2021 20:34:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Wed, May 12, 2021 at 1:02 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments, Attached v3 patch has the changes as suggested.\n\nThis v3 mostly looks good to me now except for some minor comments\nabout the flags.\n\n~~~\n\n1. Commit flags\n\n@@ -6534,11 +6536,11 @@ Commit\n </varlistentry>\n <varlistentry>\n <term>\n- Int8\n+ Uint8(0)\n </term>\n <listitem>\n <para>\n- Flags; currently unused (must be 0).\n+ Flags (uint8(0)); currently unused (must be 0).\n </para>\n </listitem>\n </varlistentry>\n\na) There is no data type Uint8. That should be \"Int8(0)\"\n\nb) I think the \"(0)\" does not belong in the description part.\ne.g. change to \"Flags (uint8); currently unused (must be 0).\"\n\n~~~\n\n2. Stream Commit flags\n\n@@ -7276,7 +7284,7 @@ Stream Commit\n </term>\n <listitem>\n <para>\n- Flags; currently unused (must be 0).\n+ Flags (uint8(0)); currently unused (must be 0).\n </para>\n </listitem>\n </varlistentry>\n\na) The data type should say \"Int8(0)\"\n\nb) I think the \"(0)\" does not belong in the description part.\ne.g. change to \"Flags (uint8); currently unused (must be 0).\"\n\n~~~\n\n3. I felt that saying \"(must be 0)\" for those unused flag descriptions\nis unnecessary since that is exactly what \"Int8(0)\" already means.\ne.g. 
consider change to just say \"Flags (uint8); currently unused.\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 12 May 2021 08:05:53 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Wed, May 12, 2021 at 3:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 1:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, Attached v3 patch has the changes as suggested.\n>\n> This v3 mostly looks good to me now except for some minor comments\n> about the flags.\n>\n> ~~~\n>\n> 1. Commit flags\n>\n> @@ -6534,11 +6536,11 @@ Commit\n> </varlistentry>\n> <varlistentry>\n> <term>\n> - Int8\n> + Uint8(0)\n> </term>\n> <listitem>\n> <para>\n> - Flags; currently unused (must be 0).\n> + Flags (uint8(0)); currently unused (must be 0).\n> </para>\n> </listitem>\n> </varlistentry>\n>\n> a) There is no data type Uint8. That should be \"Int8(0)\"\n>\n\nModified.\n\n> b) I think the \"(0)\" does not belong in the description part.\n> e.g. change to \"Flags (uint8); currently unused (must be 0).\"\n>\n\nModified\n\n> ~~~\n>\n> 2. Stream Commit flags\n>\n> @@ -7276,7 +7284,7 @@ Stream Commit\n> </term>\n> <listitem>\n> <para>\n> - Flags; currently unused (must be 0).\n> + Flags (uint8(0)); currently unused (must be 0).\n> </para>\n> </listitem>\n> </varlistentry>\n>\n> a) The data type should say \"Int8(0)\"\n>\n\nModified.\n\n> b) I think the \"(0)\" does not belong in the description part.\n> e.g. change to \"Flags (uint8); currently unused (must be 0).\"\n>\n\nModified.\n\n> ~~~\n>\n> 3. I felt that saying \"(must be 0)\" for those unused flag descriptions\n> is unnecessary since that is exactly what \"Int8(0)\" already means.\n> e.g. consider change to just say \"Flags (uint8); currently unused.\"\n\nModified.\n\nThanks for the comments. 
Attached v4 patch has the fix for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 12 May 2021 18:39:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Wed, May 12, 2021 at 11:09 PM vignesh C <vignesh21@gmail.com> wrote:\n...\n>\n> Thanks for the comments. Attached v4 patch has the fix for the same.\n>\n\nI have not tried this patch so I cannot confirm whether it applies or\nrenders OK, but just going by the v4 content this now LGTM.\n\n--------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 13 May 2021 10:27:45 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Thu, May 13, 2021 at 5:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 11:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> ...\n> >\n> > Thanks for the comments. Attached v4 patch has the fix for the same.\n> >\n>\n> I have not tried this patch so I cannot confirm whether it applies or\n> renders OK, but just going by the v4 content this now LGTM.\n\nThanks for having a look at it.\nI have added a commitfest entry for this:\nhttps://commitfest.postgresql.org/33/3117/\n\nRegards,\nVignesh\n\nOn Thu, May 13, 2021 at 5:57 AM Peter Smith <smithpb2250@gmail.com> wrote:>> On Wed, May 12, 2021 at 11:09 PM vignesh C <vignesh21@gmail.com> wrote:> ...> >> > Thanks for the comments. 
Attached v4 patch has the fix for the same.> >>> I have not tried this patch so I cannot confirm whether it applies or> renders OK, but just going by the v4 content this now LGTM.Thanks for having a look at it.I have added a commitfest entry for this:https://commitfest.postgresql.org/33/3117/Regards,Vignesh", "msg_date": "Mon, 17 May 2021 20:26:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "Hi Vignesh.\n\nFYI - Because the other patch [1] was pushed ahead of this one, I\nthink your patch now needs to be updated for the new message types\nthat were introduced.\n\n------\n[1] https://github.com/postgres/postgres/commit/a8fd13cab0ba815e9925dc9676e6309f699b5f72#diff-331c33fd11c3ed85f9dbfead93f139c20ff3a25176651fc2ed37c486b97630e6\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 16 Jul 2021 13:21:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Fri, Jul 16, 2021 at 8:52 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Vignesh.\n>\n> FYI - Because the other patch [1] was pushed ahead of this one, I\n> think your patch now needs to be updated for the new message types\n> that were introduced.\n\nThanks, I have made the changes for the same in the v5 patch attached.\n\nRegards,\nVignesh", "msg_date": "Fri, 16 Jul 2021 18:31:24 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." 
}, { "msg_contents": "I think the patch maybe is not quite correct for all the flags.\n\nFor example,\n\n@@ -7607,44 +7615,44 @@ are available since protocol version 3.\n <varlistentry>\n <term>Int8</term>\n <listitem><para>\n- Flags; currently unused (must be 0).\n+ Flags (uint8); currently unused.\n </para></listitem>\n </varlistentry>\n\nAFAIK, even though the flags are \"unused\", the code still insists that\nmost (or all? Please check the code) of these flag values MUST be 0,\nso I think that this zero value requirement ought to be indicated in\nthe docs using the \"Int8(0)\" convention [1]. For example,\n\nBEFORE\nInt8\nFlags (uint8); currently unused.\n\nAFTER\nInt8(0)\nFlags (uint8); currently unused.\n\n------\n[1] https://www.postgresql.org/docs/devel/protocol-message-types.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 23 Jul 2021 07:53:18 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Fri, Jul 23, 2021 at 3:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I think the patch maybe is not quite correct for all the flags.\n>\n> For example,\n>\n> @@ -7607,44 +7615,44 @@ are available since protocol version 3.\n> <varlistentry>\n> <term>Int8</term>\n> <listitem><para>\n> - Flags; currently unused (must be 0).\n> + Flags (uint8); currently unused.\n> </para></listitem>\n> </varlistentry>\n>\n> AFAIK, even though the flags are \"unused\", the code still insists that\n> most (or all? Please check the code) of these flag values MUST be 0,\n> so I think that this zero value requirement ought to be indicated in\n> the docs using the \"Int8(0)\" convention [1]. For example,\n>\n> BEFORE\n> Int8\n> Flags (uint8); currently unused.\n>\n> AFTER\n> Int8(0)\n> Flags (uint8); currently unused.\n\nThanks for the comments. 
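The "Int8(0)" convention discussed in the message above — the flags byte is present on the wire but a conforming message must carry the value 0 there — can be sketched as follows (Python, a hypothetical reader function for illustration only, not PostgreSQL source code):

```python
import struct

def read_commit_flags(body: bytes) -> int:
    """Read the leading flags byte of a (hypothetical) Commit message body.

    The protocol documents the field as Int8(0): currently unused, must be 0.
    """
    (flags,) = struct.unpack_from(">B", body, 0)  # one unsigned byte off the wire
    if flags != 0:
        raise ValueError(f"unexpected Commit flags: {flags} (must be 0)")
    return flags

# A zero flags byte is accepted; any nonzero value is rejected.
assert read_commit_flags(b"\x00" + b"\x00" * 24) == 0
try:
    read_commit_flags(b"\x01")
except ValueError:
    pass  # nonzero flags violate the "must be 0" requirement
```

Writing the field as "Int8(0)" in the docs makes the must-be-zero requirement part of the type notation itself, which is the point Peter argues above.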
Attached v6 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Fri, 23 Jul 2021 21:31:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n[ v6-0001-Included-the-actual-datatype-used-in-logical-repl.patch ]\n\nI see what you want to do here, but the way you did it seems quite\ndetrimental to the readability of the field descriptions.\nParenthesized interjections should be used sparingly.\n\nI'm inclined to think that the equivalent data type is part of the\nfield data type specification, and thus that we ought to put it in\nthe data type part of each entry. So we'd have something like\n\n<varlistentry>\n<term>\n Int64 (XLogRecPtr)\n</term>\n<listitem>\n<para>\n The final LSN of the transaction.\n</para>\n</listitem>\n</varlistentry>\n\ninstead of what you did here. Parentheses might not be the best\npunctuation to use, given the existing convention about parenthesized\nspecific values, but we could probably settle on some other markup.\nOr just ignore the ambiguity.\n\nAnother idea is to add the data type info at the ends of items\ninstead of cramming it into the sentences, thus:\n\n The final LSN of the transaction. (XLogRecPtr)\n\nI don't find that better personally, but maybe others will\nthink differently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Jul 2021 17:00:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." 
}, { "msg_contents": "On Sat, Jul 31, 2021 at 7:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> [ v6-0001-Included-the-actual-datatype-used-in-logical-repl.patch ]\n>\n> I see what you want to do here, but the way you did it seems quite\n> detrimental to the readability of the field descriptions.\n> Parenthesized interjections should be used sparingly.\n>\n> I'm inclined to think that the equivalent data type is part of the\n> field data type specification, and thus that we ought to put it in\n> the data type part of each entry. So we'd have something like\n>\n> <varlistentry>\n> <term>\n> Int64 (XLogRecPtr)\n> </term>\n> <listitem>\n> <para>\n> The final LSN of the transaction.\n> </para>\n> </listitem>\n> </varlistentry>\n>\n> instead of what you did here. Parentheses might not be the best\n> punctuation to use, given the existing convention about parenthesized\n> specific values, but we could probably settle on some other markup.\n> Or just ignore the ambiguity.\n\n+1 to change it like suggested above.\n\nThe specific value for the flags might then look like below, but that\ndoes not look too bad to me.\n\n<term>\n Int8 (uint8) (0)\n</term>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sun, 1 Aug 2021 20:40:58 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." 
}, { "msg_contents": "On Sat, Jul 31, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> [ v6-0001-Included-the-actual-datatype-used-in-logical-repl.patch ]\n>\n> I see what you want to do here, but the way you did it seems quite\n> detrimental to the readability of the field descriptions.\n> Parenthesized interjections should be used sparingly.\n>\n> I'm inclined to think that the equivalent data type is part of the\n> field data type specification, and thus that we ought to put it in\n> the data type part of each entry. So we'd have something like\n>\n> <varlistentry>\n> <term>\n> Int64 (XLogRecPtr)\n> </term>\n> <listitem>\n> <para>\n> The final LSN of the transaction.\n> </para>\n> </listitem>\n> </varlistentry>\n>\n\nI made changes based on the feedback, since Peter also was in favour\nof using this approach, I modified based on the first approach.\nAttached v7 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Sun, 1 Aug 2021 20:56:33 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Sun, Aug 1, 2021 at 4:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Jul 31, 2021 at 7:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > vignesh C <vignesh21@gmail.com> writes:\n> > [ v6-0001-Included-the-actual-datatype-used-in-logical-repl.patch ]\n> >\n> > I see what you want to do here, but the way you did it seems quite\n> > detrimental to the readability of the field descriptions.\n> > Parenthesized interjections should be used sparingly.\n> >\n> > I'm inclined to think that the equivalent data type is part of the\n> > field data type specification, and thus that we ought to put it in\n> > the data type part of each entry. 
So we'd have something like\n> >\n> > <varlistentry>\n> > <term>\n> >         Int64 (XLogRecPtr)\n> > </term>\n> > <listitem>\n> > <para>\n> >                 The final LSN of the transaction.\n> > </para>\n> > </listitem>\n> > </varlistentry>\n> >\n> > instead of what you did here. Parentheses might not be the best\n> > punctuation to use, given the existing convention about parenthesized\n> > specific values, but we could probably settle on some other markup.\n> > Or just ignore the ambiguity.\n>\n> +1 to change it like suggested above.\n>\n> The specific value for the flags might then look like below, but that\n> does not look too bad to me.\n>\n> <term>\n>         Int8 (uint8) (0)\n> </term>\n\nI felt we can change it like:\n<term>\n        Int8(0) (uint8)\n</term>\n\nI felt the flag value can be kept first followed by the data type since it\nis used similarly for the other message types like below:\n<term>\n        Byte1('C')\n</term>\n\nI have made changes in similar lines and posted the patch at [1].\nThoughts?\n\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm3sK75Mo%2BVzLmNGe29gYtJoeKHshAK0GDiAzfAj6LQPdw%40mail.gmail.com\n\nRegards,\nVignesh\n", "msg_date": "Sun, 1 Aug 2021 21:02:16 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." 
}, { "msg_contents": "On Mon, Aug 2, 2021 at 1:32 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, Aug 1, 2021 at 4:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sat, Jul 31, 2021 at 7:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > vignesh C <vignesh21@gmail.com> writes:\n> > > [ v6-0001-Included-the-actual-datatype-used-in-logical-repl.patch ]\n> > >\n> > > I see what you want to do here, but the way you did it seems quite\n> > > detrimental to the readability of the field descriptions.\n> > > Parenthesized interjections should be used sparingly.\n> > >\n> > > I'm inclined to think that the equivalent data type is part of the\n> > > field data type specification, and thus that we ought to put it in\n> > > the data type part of each entry. So we'd have something like\n> > >\n> > > <varlistentry>\n> > > <term>\n> > > Int64 (XLogRecPtr)\n> > > </term>\n> > > <listitem>\n> > > <para>\n> > > The final LSN of the transaction.\n> > > </para>\n> > > </listitem>\n> > > </varlistentry>\n> > >\n> > > instead of what you did here. Parentheses might not be the best\n> > > punctuation to use, given the existing convention about parenthesized\n> > > specific values, but we could probably settle on some other markup.\n> > > Or just ignore the ambiguity.\n> >\n> > +1 to change it like suggested above.\n> >\n> > The specific value for the flags might then look like below, but that\n> > does not look too bad to me.\n> >\n> > <term>\n> > Int8 (uint8) (0)\n> > </term>\n>\n> I felt we can change it like:\n> <term>\n> Int8(0) (uint8)\n> </term>\n>\n> I felt the flag value can be kept first followed by the data type since it is used similarly for the other message types like below:\n> <term>\n> Byte1('C')\n> </term>\n>\n> I have made changes in similar lines and posted the patch at [1].\n> Thoughts?\n\nI agree. 
The specified value looks better when it comes first, as you did it.\n\n~~\n\nOption #1:\nInt8(0) (uint8)\nInt64 (XLogRecPtr)\n\nOption #2:\nInt8(0) [uint8]\nInt64 [XLogRecPtr]\n\nOption #3:\nInt8(0) -- uint\nInt64 -- XLogRecPtr\n\netc...\n\nProbably my slight favourite is Option #2 above, but YMMV. Any format\nyou choose which is similar to those above is fine by me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 2 Aug 2021 08:49:24 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> I agree. The specified value looks better when it comes first, as you did it.\n\nActually, it looks to me like we don't have to resolve the question of\nwhich should come first, because I don't see any cases where it's\nuseful to have both. I don't agree with appending \"uint8\" to those\nfield descriptions, because it's adding no information, especially\nwhen the high bit couldn't be set anyway.\n\nAt some point it might be useful to add UInt<n> to the set of base\ndata types, and then go through all the message types and decide\nwhich fields we think are unsigned. But that is not this patch,\nand there would be questions about whether it constituted a protocol\nbreak.\n\nI noticed also that having to add \"(Oid)\" was sort of self-inflicted\ndamage, because the field descriptions were using the very vague\nterm \"ID\", when they could have said \"OID\" and been clear. I left\nthe \"(Oid)\" additions in place but also changed the text.\n\nPushed with those changes. 
I couldn't resist copy-editing the section\nintro, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Aug 2021 11:40:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." }, { "msg_contents": "On Mon, Aug 2, 2021 at 9:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > I agree. The specified value looks better when it comes first, as you did it.\n>\n> Actually, it looks to me like we don't have to resolve the question of\n> which should come first, because I don't see any cases where it's\n> useful to have both. I don't agree with appending \"uint8\" to those\n> field descriptions, because it's adding no information, especially\n> when the high bit couldn't be set anyway.\n>\n> At some point it might be useful to add UInt<n> to the set of base\n> data types, and then go through all the message types and decide\n> which fields we think are unsigned. But that is not this patch,\n> and there would be questions about whether it constituted a protocol\n> break.\n>\n> I noticed also that having to add \"(Oid)\" was sort of self-inflicted\n> damage, because the field descriptions were using the very vague\n> term \"ID\", when they could have said \"OID\" and been clear. I left\n> the \"(Oid)\" additions in place but also changed the text.\n>\n> Pushed with those changes. I couldn't resist copy-editing the section\n> intro, too.\n\nThanks for pushing the patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 2 Aug 2021 21:25:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Corrected documentation of data type for the logical replication\n message formats." } ]
[ { "msg_contents": "I happened to be trying to run the core regression tests under\nvalgrind, and I got the complaints attached below, from the\nprocess that had been running the insert_conflict test script.\n\nI could not reproduce the failure in a second run, which is not\nhugely surprising because it appears to be in cross-process\nsinval processing; so timing sensitivity is to be expected.\nThat doesn't make it any less disturbing.\n\nOne point worth mentioning is that I'd forgotten to build with\n\"#define USE_VALGRIND\" in the first try. AFAIK that should make\nvalgrind strictly less sensitive, so I think it's not material,\nbut still.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 09 May 2021 15:29:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Non-reproducible valgrind failure on HEAD" }, { "msg_contents": "Hi,\n\nOn 2021-05-09 15:29:06 -0400, Tom Lane wrote:\n> I happened to be trying to run the core regression tests under\n> valgrind, and I got the complaints attached below, from the\n> process that had been running the insert_conflict test script.\n>\n> I could not reproduce the failure in a second run, which is not\n> hugely surprising because it appears to be in cross-process\n> sinval processing; so timing sensitivity is to be expected.\n> That doesn't make it any less disturbing.\n>\n> One point worth mentioning is that I'd forgotten to build with\n> \"#define USE_VALGRIND\" in the first try. AFAIK that should make\n> valgrind strictly less sensitive, so I think it's not material,\n> but still.\n\nI think it may be material - see the comments in\nAddCatcacheInvalidationMessage(). Valgrind doesn't cope correctly with\nthe sinval ringbuffer being accessed in multiple processes. 
If process A\nadds an invalidation to position 0 and then later processes another\ninvalidation at the same position 0 that was added by B, valgrind will\nuse the \"is defined\" state from the invalidation it queued itself, not\nthe one that B queued.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 9 May 2021 14:02:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-reproducible valgrind failure on HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-09 15:29:06 -0400, Tom Lane wrote:\n>> One point worth mentioning is that I'd forgotten to build with\n>> \"#define USE_VALGRIND\" in the first try. AFAIK that should make\n>> valgrind strictly less sensitive, so I think it's not material,\n>> but still.\n\n> I think it may be material - see the comments in\n> AddCatcacheInvalidationMessage().\n\nAh-hah. Please excuse the noise.\n\n(I wonder if we shouldn't adjust the comments in pg_config_manual.h,\nthough, as they certainly leave the impression that USE_VALGRIND\nisn't essential.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 May 2021 17:12:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-reproducible valgrind failure on HEAD" }, { "msg_contents": "Hi,\n\nOn 2021-05-09 17:12:14 -0400, Tom Lane wrote:\n> (I wonder if we shouldn't adjust the comments in pg_config_manual.h,\n> though, as they certainly leave the impression that USE_VALGRIND\n> isn't essential.)\n\nThat'd make sense to me. 
If we found a better way to deal with the\nsinval thing it'd be good too - but I am not seeing anything convincing,\nand I looked a couple times over the years...\n\n\n", "msg_date": "Sun, 9 May 2021 14:17:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-reproducible valgrind failure on HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-09 17:12:14 -0400, Tom Lane wrote:\n>> (I wonder if we shouldn't adjust the comments in pg_config_manual.h,\n>> though, as they certainly leave the impression that USE_VALGRIND\n>> isn't essential.)\n\n> That'd make sense to me. If we found a better way to deal with the\n> sinval thing it'd be good too - but I am not seeing anything convincing,\n> and I looked a couple times over the years...\n\nYeah, it's actually somewhat amazing that we get useful results at all\naround shared-memory accesses.\n\nProposed comment patch attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 09 May 2021 18:17:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-reproducible valgrind failure on HEAD" }, { "msg_contents": "On 2021-05-09 18:17:36 -0400, Tom Lane wrote:\n> Proposed comment patch attached.\n\nlgtm.\n\n\n", "msg_date": "Sun, 9 May 2021 15:40:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-reproducible valgrind failure on HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-09 18:17:36 -0400, Tom Lane wrote:\n>> Proposed comment patch attached.\n\n> lgtm.\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 May 2021 19:36:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-reproducible valgrind failure on HEAD" } ]
[ { "msg_contents": "Hello hackers,\n\nI've just noticed this in pg_database.dat:\n\n { oid => '1', oid_symbol => 'TemplateDbOid',\n descr => 'default template for new databases',\n datname => 'template1', ...\n\nbut in the most recent cluster that I pg_upgraded, template1 has oid 16400.\n\nNothing is acting broken, but ... is that supposed to happen?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 9 May 2021 18:35:07 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Is template1 intended to have oid 1 ?" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> I've just noticed this in pg_database.dat:\n\n> { oid => '1', oid_symbol => 'TemplateDbOid',\n> descr => 'default template for new databases',\n> datname => 'template1', ...\n\n> but in the most recent cluster that I pg_upgraded, template1 has oid 16400.\n\n> Nothing is acting broken, but ... is that supposed to happen?\n\ntemplate1 does have OID 1 in a fresh-from-initdb cluster, but it's\njust a database; you can drop it and recreate it if you choose.\nI believe that pg_upgrade does so.\n\n(Technically, the same applies to template0. But our position on\ntemplate0 is that changing the contents of that is unsupported.\nSo pg_upgrade won't touch the target cluster's version of template0.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 May 2021 18:40:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is template1 intended to have oid 1 ?" 
}, { "msg_contents": "On 05/09/21 18:40, Tom Lane wrote:\n> template1 does have OID 1 in a fresh-from-initdb cluster, but it's\n> just a database; you can drop it and recreate it if you choose.\n> I believe that pg_upgrade does so.\n\nI guess it wouldn't even have made me curious, if I hadn't noticed\nthe TemplateDbOid symbol being defined for it, which could end up being\nwrong after a pg_upgrade.\n\nI suppose if it is only ever referred to during bootstrap, no harm's done.\n\nIronically, it might be the most prominently advertised oid_symbol there is,\nas that entry in pg_database.dat is what's used as the example of .dat-file\nformat in bki.sgml.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 9 May 2021 19:01:11 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Is template1 intended to have oid 1 ?" } ]
[ { "msg_contents": "Hi\n\nWhen using psql help with SQL commands, I found an inconsistency tab-completion for command \"DELETE\" as follows.\n\n=# \\h de[TAB]\ndeallocate declare delete from\n\n=# \\help[TAB]\nABORT CLUSTER DELETE FROM\n\n=# \\help[ENTER]\nAvailable help:\n...\nANALYZE CREATE OPERATOR CLASS DELETE\n...\n\n=# \\h delete\nCommand: DELETE\nDescription: delete rows of a table\n...\n\nYou see, the tab-completion for \"DELETE\" is \"DELETE FROM\" which is not same as help-command said(which is \"DELETE\").\nI tried to figure out why \"FROM\" is introduced here, but no good result got. In [1] someone changed \"DELETE\" to \"DELETE FROM\" but no reason added.\n\nIMO, the \"FROM\" is unnecessary just like \"INTO\" for \"INSERT\" command. So I tried to fix the inconsistency by removing \"FROM\" from \"DELETE FROM\" in tab-complete.c.\nPlease see the attached patch. Any comment or different thought is very welcome.\n\n[1]\nhttps://github.com/postgres/postgres/commit/4c1f9a0f0bb41c31b26bb88ba8c5d3fca4521dd7\n\nRegards,\nTang", "msg_date": "Mon, 10 May 2021 05:36:35 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Mon, May 10, 2021 at 05:36:35AM +0000, tanghy.fnst@fujitsu.com wrote:\n> You see, the tab-completion for \"DELETE\" is \"DELETE FROM\" which is not same as help-command said(which is \"DELETE\").\n> I tried to figure out why \"FROM\" is introduced here, but no good result got. In [1] someone changed \"DELETE\" to \"DELETE FROM\" but no reason added.\n> \n> IMO, the \"FROM\" is unnecessary just like \"INTO\" for \"INSERT\" command. So I tried to fix the inconsistency by removing \"FROM\" from \"DELETE FROM\" in tab-complete.c.\n> Please see the attached patch. Any comment or different thought is very welcome.\n\nI think the behavior now is correct. 
The goal of autocompletion is to save\nkeystrokes and time. As the only valid keyword after a DELETE (at least in a\nDeleteStmt) is FROM, it's a good thing that you get back \"DELETE FROM\" directly\nrather than asking that to autocomplete in multiple steps.\n\nNow, the \\help command is for commands, which is a different thing as the\ncommand in that case is DELETE not DELETE FROM, even if you will have to follow\nyour DELETE with a FROM.\n\n\n", "msg_date": "Mon, 10 May 2021 13:48:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Mon, May 10, 2021 at 11:17 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, May 10, 2021 at 05:36:35AM +0000, tanghy.fnst@fujitsu.com wrote:\n> > You see, the tab-completion for \"DELETE\" is \"DELETE FROM\" which is not same as help-command said(which is \"DELETE\").\n> > I tried to figure out why \"FROM\" is introduced here, but no good result got. In [1] someone changed \"DELETE\" to \"DELETE FROM\" but no reason added.\n> >\n> > IMO, the \"FROM\" is unnecessary just like \"INTO\" for \"INSERT\" command. So I tried to fix the inconsistency by removing \"FROM\" from \"DELETE FROM\" in tab-complete.c.\n> > Please see the attached patch. Any comment or different thought is very welcome.\n>\n> I think the behavior now is correct. The goal of autocompletion is to save\n> keystrokes and time. As the only valid keyword after a DELETE (at least in a\n> DeleteStmt) is FROM, it's a good thing that you get back \"DELETE FROM\" directly\n> rather than asking that to autocomplete in multiple steps.\n>\n> Now, the \\help command is for commands, which is a different thing as the\n> command in that case is DELETE not DELETE FROM, even if you will have to follow\n> your DELETE with a FROM.\n\nI agree with Julien. But, I also agree with the consistency point\nfrom Tang. 
So maybe we can fix the insert and add INSERT INTO in the\ntab completion?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 11:21:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Mon, May 10, 2021 at 11:21:11AM +0530, Dilip Kumar wrote:\n> On Mon, May 10, 2021 at 11:17 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, May 10, 2021 at 05:36:35AM +0000, tanghy.fnst@fujitsu.com wrote:\n> > > You see, the tab-completion for \"DELETE\" is \"DELETE FROM\" which is not same as help-command said(which is \"DELETE\").\n> > > I tried to figure out why \"FROM\" is introduced here, but no good result got. In [1] someone changed \"DELETE\" to \"DELETE FROM\" but no reason added.\n> > >\n> > > IMO, the \"FROM\" is unnecessary just like \"INTO\" for \"INSERT\" command. So I tried to fix the inconsistency by removing \"FROM\" from \"DELETE FROM\" in tab-complete.c.\n> > > Please see the attached patch. Any comment or different thought is very welcome.\n> >\n> > I think the behavior now is correct. The goal of autocompletion is to save\n> > keystrokes and time. As the only valid keyword after a DELETE (at least in a\n> > DeleteStmt) is FROM, it's a good thing that you get back \"DELETE FROM\" directly\n> > rather than asking that to autocomplete in multiple steps.\n> >\n> > Now, the \\help command is for commands, which is a different thing as the\n> > command in that case is DELETE not DELETE FROM, even if you will have to follow\n> > your DELETE with a FROM.\n> \n> I agree with Julien. But, I also agree with the consistency point\n> from Tang. 
So maybe we can fix the insert and add INSERT INTO in the\n> tab completion?\n\n+1 for that.\n\n\n", "msg_date": "Mon, 10 May 2021 13:59:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Monday, May 10, 2021 2:48 PM, Julien Rouhaud <rjuju123@gmail.com> worte\n>I think the behavior now is correct. The goal of autocompletion is to save\n>keystrokes and time. As the only valid keyword after a DELETE (at least in a\n>DeleteStmt) is FROM, it's a good thing that you get back \"DELETE FROM\" directly\n>rather than asking that to autocomplete in multiple steps.\n>\n>Now, the \\help command is for commands, which is a different thing as the\n>command in that case is DELETE not DELETE FROM, even if you will have to follow\n>your DELETE with a FROM.\n\nThanks for your reply. I totally agree with you on the convenience of \"DELETE FROM\" autocompletion.\nBut I also noticed some autocompletion for \"DELETE\" in some cases is just \"DELETE\" already. \n\n=# EXPLAIN[TAB]\nANALYZE DECLARE DELETE INSERT SELECT UPDATE VERBOSE\n\n=# COPY ([TAB]\nDELETE INSERT SELECT TABLE UPDATE VALUES WITH\n\nMaybe we should keep the behavior consistent? \nI mean we can change all \"DELETE\" to \"DELETE FROM\" or just remove \"FROM\" for consistency.\n\nOn Monday, May 10, 2021 2:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote\n>I agree with Julien. But, I also agree with the consistency point\n>from Tang. So maybe we can fix the insert and add INSERT INTO in the\n>tab completion?\n\nYeah. 
Change \"INSERT\" to \"INSERT INTO\" can be a good solution, too.\nBut just like I mentioned above, some cases in tab-completion make \"DELETE\" to \"DELETE FROM\", some cases make \"DELETE\" to \"DELETE\".\nI'm not sure which cases could change \"INSERT\" to \"INSERT INTO\".\nPlease share with me your thought on it.\n\nRegards,\nTang \n \n\n\n\n\n\n\n", "msg_date": "Mon, 10 May 2021 06:36:19 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Mon, May 10, 2021 at 06:36:19AM +0000, tanghy.fnst@fujitsu.com wrote:\n> On Monday, May 10, 2021 2:48 PM, Julien Rouhaud <rjuju123@gmail.com> worte\n> >I think the behavior now is correct. The goal of autocompletion is to save\n> >keystrokes and time. As the only valid keyword after a DELETE (at least in a\n> >DeleteStmt) is FROM, it's a good thing that you get back \"DELETE FROM\" directly\n> >rather than asking that to autocomplete in multiple steps.\n> >\n> >Now, the \\help command is for commands, which is a different thing as the\n> >command in that case is DELETE not DELETE FROM, even if you will have to follow\n> >your DELETE with a FROM.\n> \n> Thanks for your reply. I totally agree with you on the convenience of \"DELETE FROM\" autocompletion.\n> But I also noticed some autocompletion for \"DELETE\" in some cases is just \"DELETE\" already. \n> \n> =# EXPLAIN[TAB]\n> ANALYZE DECLARE DELETE INSERT SELECT UPDATE VERBOSE\n> \n> =# COPY ([TAB]\n> DELETE INSERT SELECT TABLE UPDATE VALUES WITH\n> \n> Maybe we should keep the behavior consistent? 
\n\nDefinitely.\n\n> I mean we can change all \"DELETE\" to \"DELETE FROM\" or just remove \"FROM\" for consistency.\n\nWe should change all to DELETE FROM (apart from \\help of course), and same for\nINSERT, change to INSERT INTO everywhere it makes sense.\n\n\n", "msg_date": "Mon, 10 May 2021 15:14:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Monday, May 10, 2021 4:15 PM, Julien Rouhaud <rjuju123@gmail.com> wrote\n>We should change all to DELETE FROM (apart from \\help of course), and same for\n>INSERT, change to INSERT INTO everywhere it makes sense.\n\nThanks for the reply. Your advice sounds reasonable to me.\nSo I tried to change all \"DELETE\" to \"DELETE FROM\" and \"INSERT\" to \"INSERT INTO\" in the attached patch except \nthe follow cases which I think is in accordance with what PG-Doc said.\n CREATE POLICY\n CREATE [ OR REPLACE ] RULE\n CREATE [ OR REPLACE ] TRIGGER\n ALTER DEFAULT PRIVILEGES\n\nAfter applying the patch, the tap-tests for psql is passed.\nPlease be free to tell me anything insufficient you found in my fix. Thanks.\n\nRegards,\nTang", "msg_date": "Mon, 10 May 2021 12:26:55 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Mon, May 10, 2021 at 5:57 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Monday, May 10, 2021 4:15 PM, Julien Rouhaud <rjuju123@gmail.com> wrote\n> >We should change all to DELETE FROM (apart from \\help of course), and same for\n> >INSERT, change to INSERT INTO everywhere it makes sense.\n>\n> Thanks for the reply. 
Your advice sounds reasonable to me.\n> So I tried to change all \"DELETE\" to \"DELETE FROM\" and \"INSERT\" to \"INSERT INTO\" in the attached patch except\n> the follow cases which I think is in accordance with what PG-Doc said.\n> CREATE POLICY\n> CREATE [ OR REPLACE ] RULE\n> CREATE [ OR REPLACE ] TRIGGER\n> ALTER DEFAULT PRIVILEGES\n>\n> After applying the patch, the tap-tests for psql is passed.\n> Please be free to tell me anything insufficient you found in my fix. Thanks.\n\nLGTM.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 19:14:54 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Mon, May 10, 2021 at 07:14:54PM +0530, Dilip Kumar wrote:\n> LGTM.\n\nNo objections from me to what you are doing here.\n\n else if (TailMatches(\"DELETE\", \"FROM\", MatchAny))\n COMPLETE_WITH(\"USING\", \"WHERE\");\n- /* XXX: implement tab completion for DELETE ... USING */\n\nWhy are you removing that? This sentence is still true, no?\n--\nMichael", "msg_date": "Tue, 11 May 2021 14:53:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Tuesday, May 11, 2021 2:53 PM, Michael Paquier <michael@paquier.xyz> wrote\n> else if (TailMatches(\"DELETE\", \"FROM\", MatchAny))\n> COMPLETE_WITH(\"USING\", \"WHERE\");\n>- /* XXX: implement tab completion for DELETE ... USING */\n>\n>Why are you removing that? This sentence is still true, no?\n\nIIRC, XXX in comment is used to flag something that is bogus but works.\nWhen the sentence introduced here in f5ab0a14, the fix for \"DELETE ... USING\" is not as good as it is now.(I guess that's why the comment was added). And for now, IMHO, we can remove the comment directly. 
\n\nIf my understanding here is wrong, please let me know and that would be great to learn more about PG.\n\nRegards,\nTang\n\n\n", "msg_date": "Tue, 11 May 2021 07:30:47 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Tue, May 11, 2021 at 1:00 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, May 11, 2021 2:53 PM, Michael Paquier <michael@paquier.xyz> wrote\n> > else if (TailMatches(\"DELETE\", \"FROM\", MatchAny))\n> > COMPLETE_WITH(\"USING\", \"WHERE\");\n> >- /* XXX: implement tab completion for DELETE ... USING */\n> >\n> >Why are you removing that? This sentence is still true, no?\n>\n> IIRC, XXX in comment is used to flag something that is bogus but works.\n> When the sentence introduced here in f5ab0a14, the fix for \"DELETE ... USING\" is not as good as it is now.(I guess that's why the comment was added). And for now, IMHO, we can remove the comment directly.\n\nBut your patch is doing nothing to add the implementation for DELETE..\nUSING. Basically, the tab completion support for DELETE....USING is\nstill pending right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 14:14:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Tuesday, May 11, 2021 5:44 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n>But your patch is doing nothing to add the implementation for DELETE..\r\n>USING. 
Basically, the tab completion support for DELETE....USING is\r\n>still pending right?\r\n\r\nI see, maybe I have a misunderstanding here, I thought tab completion for \"DELETE....USING\" means the code before it as follows.\r\n> > else if (TailMatches(\"DELETE\", \"FROM\", MatchAny))\r\n> > COMPLETE_WITH(\"USING\", \"WHERE\");\r\n\r\nSo I just thought the tab completion support for DELETE....USING is not pending anymore.\r\nAccording to your feedback, maybe something beyond my knowledge is need to be done for DELETE....USING.\r\n\r\nBesides, you are right, the fix in the patch has nothing to do with the comment here.\r\nPatch updated to V2 with the sentence moved back. Thanks.\r\n\r\nRegards,\r\nTang", "msg_date": "Tue, 11 May 2021 09:33:05 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Tue, May 11, 2021 at 3:03 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, May 11, 2021 5:44 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >But your patch is doing nothing to add the implementation for DELETE..\n> >USING. Basically, the tab completion support for DELETE....USING is\n> >still pending right?\n>\n> I see, maybe I have a misunderstanding here, I thought tab completion for \"DELETE....USING\" means the code before it as follows.\n> > > else if (TailMatches(\"DELETE\", \"FROM\", MatchAny))\n> > > COMPLETE_WITH(\"USING\", \"WHERE\");\n>\n> So I just thought the tab completion support for DELETE....USING is not pending anymore.\n> According to your feedback, maybe something beyond my knowledge is need to be done for DELETE....USING.\n\nBasically, it just complete with USING, now after USING tab-completion\nsupport is not yet there, e.g. 
DELETE FROM t1 USING t1 WHERE cond.\nbut the current code will not suggest anything after USING.\n\n> Besides, you are right, the fix in the patch has nothing to do with the comment here.\n> Patch updated to V2 with the sentence moved back. Thanks.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 15:24:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Tuesday, May 11, 2021 6:55 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n>Basically, it just complete with USING, now after USING tab-completion\r\n>support is not yet there, e.g. DELETE FROM t1 USING t1 WHERE cond.\r\n>but the current code will not suggest anything after USING.\r\n\r\nThanks for your kindly explanation. That's really nice of you.\r\nUnderstand now.\r\n\r\nRegards,\r\nTang\r\n", "msg_date": "Tue, 11 May 2021 10:48:16 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" }, { "msg_contents": "On Tue, May 11, 2021 at 10:48:16AM +0000, tanghy.fnst@fujitsu.com wrote:\n> Thanks for your kindly explanation. That's really nice of you.\n> Understand now.\n\nThanks for the updated patch. Applied as of 1906cc0.\n--\nMichael", "msg_date": "Thu, 13 May 2021 11:00:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove \"FROM\" in \"DELETE FROM\" when using tab-completion" } ]
[ { "msg_contents": "I have committed the first draft of the PG 14 release notes. You can\nsee the most current build of them here:\n\n\thttps://momjian.us/pgsql_docs/release-14.html\n\nI need clarification on many items, and the document still needs its\nitems properly ordered, and markup added. I also expect a lot of\nfeedback.\n\nI plan to work on completing this document this coming week in\npreparation for beta next week.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 02:03:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 2021-05-10 at 02:03 -0400, Bruce Momjian wrote:\n> When using \\e in psql, if the buffer is not modified by the editor, ignore the editor contents and leave the buffer unchanged (Laurenz Albe)\n> The \\ef and \\ev commands also now have this behavior. DOCS SAY BUFFER IS CLEARED.\n\nIt's a bit more complicated: If you edit the current buffer with \\e, the buffer is\nunchanged if you quit the editor.\nHowever, if you edit the previous statement, a file or the definition of a function\nor view, the query buffer is cleared if you quit the editor without saving.\n\nSuggested wording:\n\nWhen editing anything else than the current query buffer with \\e, and you quit\nthe editor, the query buffer is cleared. This makes the behavior less surprising\nand prevents the unintended re-execution of the previous statement.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 10 May 2021 08:16:16 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 02:03:08AM -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes. 
You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n\nThere's a small typo:\n\n+Improve tab completion (Vignesh C,, Michael [...]\n\n(duplicated comma)\n\nAlso\n\n+<para>\n+Avoid retrieval of CHECK constraints and DEFAULT exprs in data-only dump (Julien Rouhaud)\n+</para>\n+\n+<para>\n+IS THIS BACKWARD INCOMPATIBLE?\n+</para>\n+</listitem>\n\nThe new behavior doesn't have any impact on the generated dump, as the\nmodification is to avoid retrieving data that won't be used.\n\nFor users, it only means maybe slight faster pg_dump execution, or slightly\nbetter change to be able to run a pg_dump --data-only if pg_constraint is\ncorrupted but not the rest of the user data, so maybe it's not necessary to\nmention that in the release notes?\n\n\n", "msg_date": "Mon, 10 May 2021 14:51:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n\nThank you!\n\n> Add system view pg_stat_replication_slots to report replication slot activity (Sawada Masahiko, Amit Kapila)\n>\n> Function pg_stat_reset_replication_slot() resets slot statistics. THIS IS LOGICAL ONLY, BUT NO \"LOGICAL\" IN THE NAME?\n\nIIUC pg_stat_replication_slots view supports only logical slot for\nnow. But we might have it show also physical slot in the future. 
I'm\nfine with the current view name and description but someone might want\nto use \"logical replication slot\" instead of just \"replication slot\".\n\n> IS \"ACTIVITY\" THE RIGHT WORD?\n\nThe doc says \"The pg_stat_replication_slots view will contain one row\nper logical replication slot, showing statistics about its usage.\". So\nwe can say \"... to report replication slot statistics about its\nusage\".\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 10 May 2021 17:28:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 10 May 2021 at 18:03, Bruce Momjian <bruce@momjian.us> wrote:\n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n\nThanks for drafting that up.\n\n> Add executor method to cache results from the inner-side of joins (David Rowley)\n\nI think it would be more accurate to say \"inner-side of nested loop joins\".\n\n> Allow efficient retrieval of heap rows via tid (Edmund Horner, David Rowley)\n\nI'd say we already had that feature with TID Scan. Maybe it would be\nbetter to write:\n\n\"Allow efficient heap scanning on ranges of tids (Edmund Horner, David Rowley)\"\n\n> Improve the performance of parallel sequential scans (Thomas Munro, David Rowley)\n\nI think it is worth mentioning \"I/O\" before \"performance\". This\nchange won't really help cases if all the table's pages are already in\nshared buffers.\n\nDavid\n\n\n", "msg_date": "Mon, 10 May 2021 20:52:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 10 May 2021 at 08:03, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 14 release notes. 
You can\n> see the most current build of them here:\n> https://momjian.us/pgsql_docs/release-14.html\n>\n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n\nI noticed that the improvement in bloat control in the HeapAM that I\nknow of (3c3b8a4b, 0ff8bbde) weren't documented here. Although each\ncan be considered minor, they together can decrease the bloating\nbehaviour of certain workloads significantly (and limit the total\ndamage), and in my opinion this should be mentioned.\n\n3c3b8a4b: Returns space claimed for the line pointer array back to the\npage's empty space, so that it can also be used for tuple data.\n\n0ff8bbde: Allows large tuples to be inserted on pages which have only\na small amount of data, regardless of fillfactor.\n\nTogether they should be able to help significantly in both bloat\nprevention and bloat reduction.\n\n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nThanks!\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 10 May 2021 13:44:12 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 5/10/21 2:03 AM, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. 
I also expect a lot of\n> feedback.\n> \n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nWhile only a small change, this commit does affect user visible behavior \nand so should probably be noted:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b12bd4869b5e\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Mon, 10 May 2021 07:50:14 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Hi, Bruce!\n\nOn Mon, May 10, 2021 at 9:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n>\n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n>\n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nThank you very much for your work!\n\nLet me provide a missing description for the items related to me.\n\n * Improve handling of compound words in to_tsquery() and\nwebsearch_to_tsquery() (Alexander Korotkov)\nCompound words are now transformed into parts connected with phrase\nsearch operators. For example, to_tsquery('pg_class') becomes 'pg <->\nclass' instead of 'pg & class'. 
This eliminates bug of handling\ncompound words connected with the phrase operator and makes the search\nof compound words more strict.\n\n * Fix extra distance in phrase operators for quoted text in\nwebsearch_to_tsquery() (Alexander Korotkov)\nFor example, websearch_to_tsquery('english', '\"aaa: bbb\"') becomes\n'aaa <> bbb' instead of 'aaa <2> bbb'.\n\nFeel free to make stylistic and other corrections if needed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 10 May 2021 16:02:27 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "2021年5月10日(月) 15:03 Bruce Momjian <bruce@momjian.us>:\n>\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n>\n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n>\n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nThis misses the change of default value, and is a bit unclear:\n\n> Remove password_encryption's support for boolean values, e.g. true (Peter Eisentraut)\n>\n> Previous boolean values enabled md5. 
Now, only the md5 string does this.\n\nI'd suggest something along these lines:\n\n> The default for password_encryption is now \"scram-sha-256\" (Peter Eisentraut)\n>\n> The pseudo-boolean values \"true\", \"on\", \"yes\" and \"1\" are no longer accepted as an alias for \"md5\".\n\n(It hasn't been a true boolean setting since Pg 9.6).\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 22:53:03 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Thanks for putting it together.\n\nI think these two should be merged:\n| Remove containment operators @ and ~ from contrib modules cube, hstore, intarray, and seg (Justin Pryzby) \n| Remove deprecated containment operators for built-in geometry data types (Justin Pryzby) \n\n| Improve autovacuum's analyze of partitioned tables (Yuzuko Hosoya) \n| DETAILS? \n\nShould say: Autovacuum now analyzes partitioned tables.\n\n| The server variable check_client_connection_interval allows supporting operating systems, e.g., Linux, to automatically cancel queries by disconnected clients. \nThe GUC is actually called client_connection_check_interval - the commit\nmessage used the wrong name.\n\n| This is particularly helpful for reducing index bloat on tables that frequently update indexed columns. 
\nDoes it mean \"..where indexed columns are frequently updated\"?\n\n| Allow multiple foreign table scans to be run in parallel (Robert Haas, Kyotaro Horiguchi, Thomas Munro, Etsuro Fujita) \nI think it means multiple foreight table scan *nodes*\n\n| If server variable compute_query_id is enabled, display the hash in pg_stat_activity, EXPLAIN VERBOSE, csvlog, and optionally in log_line_prefix (Julien Rouhaud) \nI think needs details, like: \"If disabled, then the hash might be computed by\nan extension, instead\".\n\nLater, you say:\n| Extension pg_stat_statements will need to enable hash computation via the compute_query_id server variable to function properly. pg_stat_statements can now use a custom hash computation method. \nMaybe it should say \"will need hash computation to be enabled\".\n\n| Allow more than the common name (CN) to be matched for client certificate authentication (Andrew Dunstan) \nYour description makes it sound like arbitrary attributes can be compared. But\nthe option just allows comparing CN or DN.\n\n| Allow file system sync at the start of crash recovery on Linux (Thomas Munro) \nI think this should describe the existing, default behavior:\nAllow syncfs method to sync data directory during recovery;\nThe default behavior is to open and fsync every data file, and the new setting\nrecovery_init_sync_method=syncfs instead syncs each filesystem in the data\ndirectory.\n\n| Add date_bin function (John Naylor) \nThis truncate timestamps on an arbitrary interval.\nLike date_trunc() but also supports eg. '15 minutes', and also uses an arbitrary \"origin\".\n\n| Support negative indexes in split_part() (Nikhil Benesch) \n| Negative values count from the last field going forward. \nshould say \"start from the last field and count backward\" ?\n\n| Add configure option --with-openssl to behave like --with-ssl={openssl} (Daniel Gustafsson, Michael Paquier) \n| The option --with-openssl is kept for compatibility. \nI think this is backwards. 
The new option is with-ssl=openssl, and (as you\nsaid) with-openssl is kept.\n\nShould these be in the \"compatibility\" section?\n\n| Force custom server variable names to match the pattern used for unquoted SQL identifiers (Tom Lane) \n\n| Change password_encryption's default to scram-sha-256 (Peter Eisentraut) \n\n| Change checkpoint_completion_target default to 0.9 (Stephen Frost) \n\n| Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n\nNitpicks to follow:\n\n| Allow some GiST index to be built by presorting the data (Andrey Borodin) \nindexes\n\n| with --with-lz4 support to enable this feature\nI would change to say \"to support\" rather than \"support to enable\"\n\n| Speed truncation of small tables on large shared buffer servers (Kirk Jamison) \n\"on servers with large settings of shared_buffers\"\n\n| Allow windowing functions to perform incremental sorts (David Rowley) \nJust \"window\" functions\n\n| Improve pg_stat_activity reporting for walsenders processes (Tom Lane) \n walsender\n\n| Previously these functions could only be executed by super-users, and still defaults do that. \n..which is still the default behavior.\n\n| This allows multiple queries to be send and only wait for completion when a specific synchronization message is sent. \nbe sent\n\n| Enhance libpq libpq's target_session_attrs parameter options (Haribabu Kommi, Greg Nancarrow, Vignesh C, Tom Lane) \nremove first \"libpq\"\n\n| With the removal of the ! operator in this release, factorial() is the only built-in way to computer a factorial. \ncompute\n\n| For example, GROUP BY CUBE (a,b), CUBE (b,c) will generated duplicate grouping combinations without DISTINCT. \n\nwill generate\n\n| Allow VACUUM VERBOSE to report page deletion counts for each scan of an index (Peter Geoghegan) \n\nI think \"Allow\" is wrong - should just say that VACUUM VERBOSE reports..\n\n|By default, only the root of partitioned tables are imported. 
\n*is* imported\n\nCan these be merged:\n Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke) \n Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila) \n\n\n", "msg_date": "Mon, 10 May 2021 09:00:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Same as the last couple years, I checked for missing items in the release\nnotes, running something like this.\n\ngit log --cherry-pick --oneline origin/REL_13_STABLE...origin/master\n\nShould any of these be included?\n\nf82de5c46b Do COPY FROM encoding conversion/verification in larger chunks.\n9e596b65f4 Add \"LP_DEAD item?\" column to GiST pageinspect functions\n\n10a5b35a00 Report resource usage at the end of recovery\n7e453634bb Add additional information in the vacuum error context.\n1ea396362b Improve logging of bad parameter values in BIND messages.\n\n86dc90056d Rework planning and execution of UPDATE and DELETE.\na1115fa078 Postpone some more stuff out of ExecInitModifyTable.\nc5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n\n7db0cd2145 Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE\n01e658fa74 Hash support for row types\na929e17e5a Allow run-time pruning on nested Append/MergeAppend nodes\n8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\nc7aba7c14e Support subscripting of arbitrary types, not only arrays.\n\n7b94e99960 Remove catalog function currtid()\n926fa801ac Remove undocumented IS [NOT] OF syntax.\ncd9c1b3e19 Rename PGPROC->vacuumFlags to statusFlags\na04daa97a4 Remove es_result_relation_info from EState.\n\n3d351d916b Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\nfea10a6434 Rename VariableCacheData.nextFullXid to nextXid.\n9de9294b0c Stop archive recovery if WAL generated 
with wal_level=minimal is found. (see also 15251c0a6)\n\nf40c6969d0 Routine usage information schema tables\n\nb4af70cb21 Simplify state managed by VACUUM.\n4753ef37e0 Use a WaitLatch for vacuum/autovacuum sleeping\n9dd963ae25 Recycle nbtree pages deleted during same VACUUM.\n3c3b8a4b26 Truncate line pointer array during VACUUM.\n\nad1c36b070 Fix foreign-key selectivity estimation in the presence of constants.\n\n\n", "msg_date": "Mon, 10 May 2021 09:40:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sun, May 9, 2021 at 11:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I have committed the first draft of the PG 14 release notes.\n\nThis definitely isn't necessary, since the commit in question was a\ntotally mechanical thing that cleaned up a minor inconsistency:\n\nInitialize work_mem and maintenance_work_mem using current guc.c\ndefault (Peter Geoghegan)\n\nOversight in commit 848ae330a49, which increased the previous defaults\nfor work_mem and maintenance_work_mem by 4X. IS THIS A BEHAVIORAL\nCHANGE?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 May 2021 07:53:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 4:44 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I noticed that the improvement in bloat control in the HeapAM that I\n> know of (3c3b8a4b, 0ff8bbde) weren't documented here. 
Although each\n> can be considered minor, they together can decrease the bloating\n> behaviour of certain workloads significantly (and limit the total\n> damage), and in my opinion this should be mentioned.\n>\n> 3c3b8a4b: Returns space claimed for the line pointer array back to the\n> page's empty space, so that it can also be used for tuple data.\n>\n> 0ff8bbde: Allows large tuples to be inserted on pages which have only\n> a small amount of data, regardless of fillfactor.\n\n+1 on mentioning both things.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 May 2021 07:54:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 7:00 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> | Allow VACUUM VERBOSE to report page deletion counts for each scan of an index (Peter Geoghegan)\n>\n> I think \"Allow\" is wrong - should just say that VACUUM VERBOSE reports..\n\nIt's also not accurate, since the count of deleted pages was always\nshown by VACUUM VERBOSE (once per index scan). The new feature has us\nshow pages deleted by the VACUUM that actually ran (not some earlier\nVACUUM) -- these are \"newly deleted pages\".\n\nI don't think that this item is worth mentioning, though -- it's just\na nice to have. If Bruce removes it from the release notes entirely I\nwon't object.\n\nIn addition to the items that I commented on in my response to\nMatthias just now, I should point out the following item as worthy of\ninclusion:\n\n9dd963ae25 Recycle nbtree pages deleted during same VACUUM.\n\nI suggest that this item be phrased more or less as follows:\n\n\"Allow VACUUM to eagerly place newly deleted B-Tree pages in the Free\nSpace Map. 
Previously VACUUM could only place preexisting deleted\npages in the Free Space Map for recycling.\"\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 May 2021 08:08:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 08:16:16AM +0200, Laurenz Albe wrote:\n> On Mon, 2021-05-10 at 02:03 -0400, Bruce Momjian wrote:\n> > When using \\e in psql, if the buffer is not modified by the editor, ignore the editor contents and leave the buffer unchanged (Laurenz Albe)\n> > The \\ef and \\ev commands also now have this behavior. DOCS SAY BUFFER IS CLEARED.\n> \n> It's a bit more complicated: If you edit the current buffer with \\e, the buffer is\n> unchanged if you quit the editor.\n> However, if you edit the previous statement, a file or the definition of a function\n> or view, the query buffer is cleared if you quit the editor without saving.\n> \n> Suggested wording:\n> \n> When editing anything else than the current query buffer with \\e, and you quit\n> the editor, the query buffer is cleared. This makes the behavior less surprising\n> and prevents the unintended re-execution of the previous statement.\n\nOK, I figured it out. I was confused by \\p because \\? says:\n\n\ttest=> \\?\n\tQuery Buffer\n\t \\e [FILE] [LINE] edit the query buffer (or file) with external editor\n\t \\ef [FUNCNAME [LINE]] edit function definition with external editor\n\t \\ev [VIEWNAME [LINE]] edit view definition with external editor\n-->\t \\p show the contents of the query buffer\n\t \\r reset (clear) the query buffer\n\t ...\n\n\nbut the documentaton says:\n\n \\p or \\print\n Print the current query buffer to the standard output. If\n--> the current query buffer is empty, the most recently executed\n--> query is printed instead.\n\nI wasn't aware that \\e loads the previous query if the buffer is empty. 
\nI came up with this release note text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-04-03 [55873a00e] Improve psql's behavior when the editor is exited withou\n\t-->\n\t\n\t<para>\n\tWhen editing the previous query or a file with psql's \\e, ignore the\n\tcontents if the editor exits without saving (Laurenz Albe)\n\t</para>\n\t\n\t<para>\n\tPreviously, editing the previous query or a file and not saving the\n\teditor contents would still execute the editor contents. The \\ef and\n\t\\ev commands also now have this behavior.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 12:38:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 02:51:28PM +0800, Julien Rouhaud wrote:\n> On Mon, May 10, 2021 at 02:03:08AM -0400, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-14.html\n> > \n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. 
I also expect a lot of\n> > feedback.\n> \n> There's a small typo:\n> \n> +Improve tab completion (Vignesh C,, Michael [...]\n> \n> (duplicated comma)\n\nFixed.\n\n> Also\n> \n> +<para>\n> +Avoid retrieval of CHECK constraints and DEFAULT exprs in data-only dump (Julien Rouhaud)\n> +</para>\n> +\n> +<para>\n> +IS THIS BACKWARD INCOMPATIBLE?\n> +</para>\n> +</listitem>\n> \n> The new behavior doesn't have any impact on the generated dump, as the\n> modification is to avoid retrieving data that won't be used.\n> \n> For users, it only means maybe slight faster pg_dump execution, or slightly\n> better change to be able to run a pg_dump --data-only if pg_constraint is\n> corrupted but not the rest of the user data, so maybe it's not necessary to\n> mention that in the release notes?\n\nThanks, removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 12:51:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 05:28:24PM +0900, Masahiko Sawada wrote:\n> On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> >\n> > https://momjian.us/pgsql_docs/release-14.html\n> \n> Thank you!\n> \n> > Add system view pg_stat_replication_slots to report replication slot activity (Sawada Masahiko, Amit Kapila)\n> >\n> > Function pg_stat_reset_replication_slot() resets slot statistics. THIS IS LOGICAL ONLY, BUT NO \"LOGICAL\" IN THE NAME?\n> \n> IIUC pg_stat_replication_slots view supports only logical slot for\n> now. But we might have it show also physical slot in the future. 
I'm\n> fine with the current view name and description but someone might want\n> to use \"logical replication slot\" instead of just \"replication slot\".\n\nOK, I was just confirming we are happy with the name.\n> \n> > IS \"ACTIVITY\" THE RIGHT WORD?\n> \n> The doc says \"The pg_stat_replication_slots view will contain one row\n> per logical replication slot, showing statistics about its usage.\". So\n> we can say \"... to report replication slot statistics about its\n> usage\".\n\nOK, I think I prefer \"activity\" so will just keep that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 12:55:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 08:52:44PM +1200, David Rowley wrote:\n> On Mon, 10 May 2021 at 18:03, Bruce Momjian <bruce@momjian.us> wrote:\n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. I also expect a lot of\n> > feedback.\n> \n> Thanks for drafting that up.\n> \n> > Add executor method to cache results from the inner-side of joins (David Rowley)\n> \n> I think it would be more accurate to say \"inner-side of nested loop joins\".\n\nOK, thanks. I suspected that was true.\n\n> > Allow efficient retrieval of heap rows via tid (Edmund Horner, David Rowley)\n> \n> I'd say we already had that feature with TID Scan. Maybe it would be\n> better to write:\n> \n> \"Allow efficient heap scanning on ranges of tids (Edmund Horner, David Rowley)\"\n\nI went with:\n\n\tAllow efficient heap scanning of a range of tids (Edmund Horner,\n\tDavid Rowley)\n\n> > Improve the performance of parallel sequential scans (Thomas Munro, David Rowley)\n> \n> I think it is worth mentioning \"I/O\" before \"performance\". 
This\n> change won't really help cases if all the table's pages are already in\n> shared buffers.\n\nI went with:\n\n\tImprove the performance of parallel sequential I/O scans (Thomas Munro,\n\tDavid Rowley)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 13:07:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 01:44:12PM +0200, Matthias van de Meent wrote:\n> On Mon, 10 May 2021 at 08:03, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> > https://momjian.us/pgsql_docs/release-14.html\n> >\n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. I also expect a lot of\n> > feedback.\n> \n> I noticed that the improvement in bloat control in the HeapAM that I\n> know of (3c3b8a4b, 0ff8bbde) weren't documented here. Although each\n> can be considered minor, they together can decrease the bloating\n> behaviour of certain workloads significantly (and limit the total\n> damage), and in my opinion this should be mentioned.\n> \n> 3c3b8a4b: Returns space claimed for the line pointer array back to the\n> page's empty space, so that it can also be used for tuple data.\n> \n> 0ff8bbde: Allows large tuples to be inserted on pages which have only\n> a small amount of data, regardless of fillfactor.\n> \n> Together they should be able to help significantly in both bloat\n> prevention and bloat reduction.\n\nI looked at those items. I try to mention performance items that enable\nnew workloads or require some user action to benefit from it. I am not\nsure these two qualify, but can others comments? 
Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 13:34:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 2021-05-10 at 12:38 -0400, Bruce Momjian wrote:\n> I came up with this release note text:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> \t2021-04-03 [55873a00e] Improve psql's behavior when the editor is exited withou\n> \t-->\n> \t\n> \t<para>\n> \tWhen editing the previous query or a file with psql's \\e, ignore the\n> \tcontents if the editor exits without saving (Laurenz Albe)\n> \t</para>\n> \t\n> \t<para>\n> \tPreviously, editing the previous query or a file and not saving the\n> \teditor contents would still execute the editor contents. The \\ef and\n> \t\\ev commands also now have this behavior.\n> \t</para>\n> \t</listitem>\n\nThanks, that looks much better.\n\nThe second paragraph starts describing the previous behavior, but the second\nsentence details on the changes. 
Perhaps it would be better to put that into\nthe first paragraph:\n\n <para>\n When editing the previous query or a file with psql's \\e, or when a\n view or function definition are edited with \\ev or \\ef, ignore the\n contents if the editor exits without saving (Laurenz Albe)\n </para>\n \n <para>\n Previously, editing the previous query or a file and not saving the\n editor contents would still execute the editor contents.\n </para>\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 10 May 2021 19:39:17 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 07:39:17PM +0200, Laurenz Albe wrote:\n> On Mon, 2021-05-10 at 12:38 -0400, Bruce Momjian wrote:\n> > I came up with this release note text:\n> > \n> > \t<listitem>\n> > \t<!--\n> > \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> > \t2021-04-03 [55873a00e] Improve psql's behavior when the editor is exited withou\n> > \t-->\n> > \t\n> > \t<para>\n> > \tWhen editing the previous query or a file with psql's \\e, ignore the\n> > \tcontents if the editor exits without saving (Laurenz Albe)\n> > \t</para>\n> > \t\n> > \t<para>\n> > \tPreviously, editing the previous query or a file and not saving the\n> > \teditor contents would still execute the editor contents. The \\ef and\n> > \t\\ev commands also now have this behavior.\n> > \t</para>\n> > \t</listitem>\n> \n> Thanks, that looks much better.\n> \n> The second paragraph starts describing the previous behavior, but the second\n> sentence details on the changes. 
Perhaps it would be better to put that into\n> the first paragraph:\n> \n> <para>\n> When editing the previous query or a file with psql's \\e, or when a\n> view or function definition are edited with \\ev or \\ef, ignore the\n> contents if the editor exits without saving (Laurenz Albe)\n> </para>\n> \n> <para>\n> Previously, editing the previous query or a file and not saving the\n> editor contents would still execute the editor contents.\n> </para>\n\nUh, I try to keep the first sentence short so people can scan it more\neasily, so I am hesitant to make this change. I went with this change:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-04-03 [55873a00e] Improve psql's behavior when the editor is exited withou\n\t-->\n\t\n\t<para>\n\tWhen editing the previous query or a file with psql's \\e, or using \\ef and \\ev, ignore the contents if the editor exits without saving (Laurenz Albe)\n\t</para>\n\t\n\t<para>\n\tPreviously, such edits would still execute the editor contents.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 13:47:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Hi Bruce,\n\nThanks for doing this work again!\n\n> Add date_bin function (John Naylor)\n>\n> WHAT DOES THIS DO?\n\nHard to describe in a one-liner, but it lines up timestamps into regular\nintervals as specified by the user. It is more clear after seeing examples:\n\nhttps://www.postgresql.org/docs/devel/functions-datetime.html#FUNCTIONS-DATETIME-BIN\n\n> Dramatically improve Unicode normalization (John Naylor)\n>\n> WHAT OPERATIONS USE THIS?\n\nPG13 added the normalize() function to normalize Unicode sequences, as well\nas the IS NORMALIZED syntax to test for that. 
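Both items above are easy to illustrate outside SQL. Purely as an analogy (this uses Python's datetime arithmetic and unicodedata module, not anything from the server; the date_bin() function below is a sketch of the documented bucketing arithmetic, not PostgreSQL's implementation):

```python
import unicodedata
from datetime import datetime, timedelta

def date_bin(stride, source, origin):
    # Sketch of the documented arithmetic: align source down to the
    # start of its stride-wide bucket, with buckets counted from origin.
    return origin + ((source - origin) // stride) * stride

# SQL: SELECT date_bin('15 minutes', '2021-05-10 12:37', '2001-01-01');
print(date_bin(timedelta(minutes=15),
               datetime(2021, 5, 10, 12, 37),
               datetime(2001, 1, 1)))   # 2021-05-10 12:30:00

# SQL's normalize() / IS NORMALIZED map directly onto unicodedata:
decomposed = "e\u0301"   # 'e' plus combining acute accent (NFD form)
print(unicodedata.is_normalized("NFC", decomposed))           # False
print(unicodedata.normalize("NFC", decomposed) == "\u00e9")   # True
```

The unicodedata calls mirror SQL's normalize() and IS NORMALIZED; the commits in question only speed up the server-side equivalents.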
The commits* here do not\nchange behavior and only improve performance. As such, this really belongs\nin the performance section.\n\n*There is one additional commit that belongs to this entry:\n\nAuthor: Michael Paquier <michael@paquier.xyz>\n2020-10-11 [80f8eb79e] Use perfect hash for NFC and NFKC Unicode\nNormalization quick check\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 10 May 2021 14:42:08 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 10 May 2021 at 19:34, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, May 10, 2021 at 01:44:12PM +0200, Matthias van de Meent wrote:\n> > On Mon, 10 May 2021 at 08:03, Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > I have committed the first draft of the PG 14 release notes. 
You can\n> > > see the most current build of them here:\n> > > https://momjian.us/pgsql_docs/release-14.html\n> > >\n> > > I need clarification on many items, and the document still needs its\n> > > items properly ordered, and markup added. I also expect a lot of\n> > > feedback.\n> >\n> > I noticed that the improvement in bloat control in the HeapAM that I\n> > know of (3c3b8a4b, 0ff8bbde) weren't documented here. Although each\n> > can be considered minor, they together can decrease the bloating\n> > behaviour of certain workloads significantly (and limit the total\n> > damage), and in my opinion this should be mentioned.\n> >\n> > 3c3b8a4b: Returns space claimed for the line pointer array back to the\n> > page's empty space, so that it can also be used for tuple data.\n> >\n> > 0ff8bbde: Allows large tuples to be inserted on pages which have only\n> > a small amount of data, regardless of fillfactor.\n> >\n> > Together they should be able to help significantly in both bloat\n> > prevention and bloat reduction.\n>\n> I looked at those items. I try to mention performance items that enable\n> new workloads or require some user action to benefit from it.\n\n0ff8bbde Enables a workload that inserts (and non-locally updates)\nlarge (> FILLFACTOR %) tuples in tables that have a low FILLFACTOR.\nPreviously this would fail dramatically by only inserting on new\npages; this would extend the table indefinitely. 
See the thread [0]\n\n3c3b8a4b improves workloads with high local update-then-delete churn.\nPreviously this would irreversibly claim space on the page for tuple\nidentifiers even when they were later deleted; now we can reclaim this\nspace when a tuple is deleted from the page.\n\nI see these two improvements in a similar light as the bottom-up index\ndeletion in btree: No user action required, works out-of-the-box,\ndecreases bloat / disk usage, but good to note as it fixes (known)\nbloating footguns that a user might have encountered.\n\n> I am not sure these two qualify, but can others comments? Thanks.\n\nI'd like to refer to Peter Geoghegan's reply [1] upthread.\n\nThank you for your effort,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/6e263217180649339720afe2176c50aa%40opammb0562.comp.optiver.com\n[1] https://www.postgresql.org/message-id/CAH2-Wz%3D-A%3DjRxpB2Owj3KQadCue7%2BNLqj56Q566ees7TapMRvA%40mail.gmail.com\n\n\", \"msg_date\": \"Mon, 10 May 2021 22:21:00 +0200\", \"msg_from\": \"Matthias van de Meent <boekewurm+postgres@gmail.com>\", \"msg_from_op\": false, \"msg_subject\": \"Re: PG 14 release notes, first draft\" }, { \"msg_contents\": \"On Mon, May 10, 2021 at 07:54:24AM -0700, Peter Geoghegan wrote:\n> On Mon, May 10, 2021 at 4:44 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I noticed that the improvement in bloat control in the HeapAM that I\n> > know of (3c3b8a4b, 0ff8bbde) weren't documented here. 
Although each\n> > can be considered minor, they together can decrease the bloating\n> > behaviour of certain workloads significantly (and limit the total\n> > damage), and in my opinion this should be mentioned.\n> >\n> > 3c3b8a4b: Returns space claimed for the line pointer array back to the\n> > page's empty space, so that it can also be used for tuple data.\n> >\n> > 0ff8bbde: Allows large tuples to be inserted on pages which have only\n> > a small amount of data, regardless of fillfactor.\n> \n> +1 on mentioning both things.\n\nOK, you are confirming what Matthias suggested. I added these two\nitems, which both seem to apply only to heap pages, not index pages:\n\n---------------------------------------------------------------------------\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-04-07 [3c3b8a4b2] Truncate line pointer array during VACUUM.\n\t-->\n\t\n\t<para>\n\tDeallocate space reserved by trailing unused heap line pointers\n\t(Matthias van de Meent, Peter Geoghegan)\n\t</para>\n\t</listitem>\n\t\n---------------------------------------------------------------------------\n\n\t<listitem>\n\t<!--\n\tAuthor: Noah Misch <noah@leadboat.com>\n\t2021-03-30 [0ff8bbdee] Accept slightly-filled pages for tuples larger\n\tthan fill\n\t-->\n\t\n\t<para>\n\tAllow wide tuples to be always added to almost-empty heap pages (John Naylor,\n\tFloris van Nee)\n\t</para>\n\t\n\t<para>\n\tPreviously tuples whose insertion would have exceeded the page's fill\n\tfactor were instead added to new pages.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 18:58:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 3:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> 
OK, you are confirming what Matthias suggested. I added these two\n> items, which both seem to apply only to heap pages, not index pages:\n\nThat's right -- these two relate to heap pages only.\n\nI think that Matthias compared these two to bottom-up index deletion\nbecause all three patches are concerned about avoiding \"a permanent\nsolution to a temporary problem\". They're conceptually similar despite\nbeing in fairly different areas. Evidently Matthias has a similar\nmental model to my own when it comes to this stuff.\n\nUnfortunately the practical significance of the line pointer patch is\nhard to demonstrate with a benchmark. I believe that it is very useful\non a sufficiently long timeline and with certain workloads because of\nthe behavior it avoids. As I pointed out on that other thread\nrecently, once you have irreversible bloat very small adverse events\nwill eventually add up and cause big problems. When this happens it'll\nbe very hard or impossible to detect, since it just looks like heap\nfragmentation.\n\nThat said, it's clearly an issue with one of the TPC-C tables if you\nrun BenchmarkSQL for days and days (just one table, though). So there\nis hard evidence that line pointer bloat could get really out of hand\nin at least some tables.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 May 2021 16:14:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Thanks for making the updates.\n\nOn Tue, 11 May 2021 at 05:07, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, May 10, 2021 at 08:52:44PM +1200, David Rowley wrote:\n> > > Improve the performance of parallel sequential scans (Thomas Munro, David Rowley)\n> >\n> > I think it is worth mentioning \"I/O\" before \"performance\". 
This\n> > change won't really help cases if all the table's pages are already in\n> > shared buffers.\n>\n> I went with:\n>\n> Improve the performance of parallel sequential I/O scans (Thomas Munro,\n> David Rowley)\n\nI think I'd have gone with:\n\n\"Improve I/O performance of parallel sequential scans (Thomas Munro,\nDavid Rowley)\"\n\nThe operation we're speeding up is called sequential scan. We don't\nhave any operation that's named sequential I/O scan.\n\nDavid\n\n\n", "msg_date": "Tue, 11 May 2021 12:35:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 07:50:14AM -0400, Joe Conway wrote:\n> On 5/10/21 2:03 AM, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-14.html\n> > \n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. I also expect a lot of\n> > feedback.\n> > \n> > I plan to work on completing this document this coming week in\n> > preparation for beta next week.\n> \n> While only a small change, this commit does affect user visible behavior and\n> so should probably be noted:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b12bd4869b5e\n\nI see your point. 
Here is the release entry I added:\n\n\t<listitem>\n\t<!--\n\tAuthor: Joe Conway <mail@joeconway.com>\n\t2021-03-31 [b12bd4869] Fix has_column_privilege function corner case\n\t-->\n\t\n\t<para>\n\tReturn false for has_column_privilege() checks on non-existent or dropped columns (Joe Conway)\n\t</para>\n\t\n\t<para>\n\tPreviously such columns returned an invalid column error.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 21:56:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 12:35:28PM +1200, David Rowley wrote:\n> Thanks for making the updates.\n> \n> On Tue, 11 May 2021 at 05:07, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, May 10, 2021 at 08:52:44PM +1200, David Rowley wrote:\n> > > > Improve the performance of parallel sequential scans (Thomas Munro, David Rowley)\n> > >\n> > > I think it is worth mentioning \"I/O\" before \"performance\". This\n> > > change won't really help cases if all the table's pages are already in\n> > > shared buffers.\n> >\n> > I went with:\n> >\n> > Improve the performance of parallel sequential I/O scans (Thomas Munro,\n> > David Rowley)\n> \n> I think I'd have gone with:\n> \n> \"Improve I/O performance of parallel sequential scans (Thomas Munro,\n> David Rowley)\"\n> \n> The operation we're speeding up is called sequential scan. 
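The block-allocation change being described can be sketched like so (a static toy; the real code has workers grab chunks dynamically and sizes chunks adaptively):

```python
def assign_blocks(nblocks, nworkers, chunk=64):
    # Toy model: hand out consecutive chunks of blocks to workers
    # instead of interleaving single blocks, so each worker issues
    # sequential reads the operating system can detect and read ahead.
    assignment = {w: [] for w in range(nworkers)}
    next_block, w = 0, 0
    while next_block < nblocks:
        hi = min(next_block + chunk, nblocks)
        assignment[w].extend(range(next_block, hi))
        next_block = hi
        w = (w + 1) % nworkers   # round-robin over *chunks*, not blocks
    return assignment

a = assign_blocks(200, 2, chunk=64)
print(a[0][:3], len(a[0]), len(a[1]))   # [0, 1, 2] 128 72
```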
We don't\n> have any operation that's named sequential I/O scan.\n\nOK, new text:\n\n\t<listitem>\n\t<!--\n\tAuthor: David Rowley <drowley@postgresql.org>\n\t2020-07-26 [56788d215] Allocate consecutive blocks during parallel seqscans\n\t-->\n\t\n\t<para>\n\tImprove the I/O performance of parallel sequential scans (Thomas Munro, David Rowley)\n\t</para>\n\t\n\t<para>\n\tThis was done by allocating blocks in groups to parallel workers.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 21:58:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 04:14:56PM -0700, Peter Geoghegan wrote:\n> On Mon, May 10, 2021 at 3:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > OK, you are confirming what Matthias suggested. I added these two\n> > items, which both seem to apply only to heap pages, not index pages:\n> \n> That's right -- these two relate to heap pages only.\n> \n> I think that Matthias compared these two to bottom-up index deletion\n> because all three patches are concerned about avoiding \"a permanent\n> solution to a temporary problem\". They're conceptually similar despite\n> being in fairly different areas. Evidently Matthias has a similar\n> mental model to my own when it comes to this stuff.\n\nAgreed, that is a very interesting distinction.\n\n> Unfortunately the practical significance of the line pointer patch is\n> hard to demonstrate with a benchmark. I believe that it is very useful\n> on a sufficiently long timeline and with certain workloads because of\n> the behavior it avoids. As I pointed out on that other thread\n> recently, once you have irreversible bloat very small adverse events\n> will eventually add up and cause big problems. 
When this happens it'll\n> be very hard or impossible to detect, since it just looks like heap\n> fragmentation.\n> \n> That said, it's clearly an issue with one of the TPC-C tables if you\n> run BenchmarkSQL for days and days (just one table, though). So there\n> is hard evidence that line pointer bloat could get really out of hand\n> in at least some tables.\n\nOK, once I dug into what you two were saying, I see the significance. I\nwas frankly surprised we didn't already have these optimizations, and\nyou are right they can lead to long-term problems.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 22:18:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 04:02:27PM +0300, Alexander Korotkov wrote:\n> Hi, Bruce!\n> \n> On Mon, May 10, 2021 at 9:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> >\n> > https://momjian.us/pgsql_docs/release-14.html\n> >\n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. I also expect a lot of\n> > feedback.\n> >\n> > I plan to work on completing this document this coming week in\n> > preparation for beta next week.\n> \n> Thank you very much for your work!\n> \n> Let me provide a missing description for the items related to me.\n> \n> * Improve handling of compound words in to_tsquery() and\n> websearch_to_tsquery() (Alexander Korotkov)\n> Compound words are now transformed into parts connected with phrase\n> search operators. For example, to_tsquery('pg_class') becomes 'pg <->\n> class' instead of 'pg & class'. 
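The practical difference between 'pg & class' and 'pg <-> class' can be shown with a toy matcher (illustration only, not the tsquery machinery):

```python
def positions(doc):
    # Map lexeme -> list of word positions (a toy tsvector).
    pos = {}
    for i, w in enumerate(doc.split()):
        pos.setdefault(w, []).append(i)
    return pos

def and_match(doc, a, b):
    p = positions(doc)
    return a in p and b in p                          # 'a & b'

def phrase_match(doc, a, b, dist=1):
    p = positions(doc)                                # 'a <dist> b'
    return any(i + dist in p.get(b, []) for i in p.get(a, []))

doc = "class of pg students"
print(and_match(doc, "pg", "class"))      # True: both present somewhere
print(phrase_match(doc, "pg", "class"))   # False: not adjacent in order
print(phrase_match("pg class catalog", "pg", "class"))   # True
```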
This eliminates bug of handling\n> compound words connected with the phrase operator and makes the search\n> of compound words more strict.\n\nOK, what symbols trigger this change? Underscore? What else? You are\nsaying the previous code allowed 'pg' and 'class' anywhere in the\nstring, while the new code requires them to be adjacent, which more\nclosely matches the pattern.\n\n> * Fix extra distance in phrase operators for quoted text in\n> websearch_to_tsquery() (Alexander Korotkov)\n> For example, websearch_to_tsquery('english', '\"aaa: bbb\"') becomes\n> 'aaa <> bbb' instead of 'aaa <2> bbb'.\n\nSo colon and space were considered to be two tokens between 'aaa' and\n'bbb', while is really only one because both tokens are discarded? Is\nthis true of any discarded tokens, e.g. ''\"aaa ?:, bbb\"'?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 22:34:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 10:53:03PM +0900, Ian Lawrence Barwick wrote:\n> 2021年5月10日(月) 15:03 Bruce Momjian <bruce@momjian.us>:\n> >\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> >\n> > https://momjian.us/pgsql_docs/release-14.html\n> >\n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. I also expect a lot of\n> > feedback.\n> >\n> > I plan to work on completing this document this coming week in\n> > preparation for beta next week.\n> \n> This misses the change of default value, and is a bit unclear:\n> \n> > Remove password_encryption's support for boolean values, e.g. true (Peter Eisentraut)\n> >\n> > Previous boolean values enabled md5. 
Now, only the md5 string does this.\n> \n> I'd suggest something along these lines:\n> \n> > The default for password_encryption is now \"scram-sha-256\" (Peter Eisentraut)\n> >\n> > The pseudo-boolean values \"true\", \"on\", \"yes\" and \"1\" are no longer accepted as an alias for \"md5\".\n> \n> (It hasn't been a true boolean setting since Pg 9.6).\n\nI reworeded it to:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n\t-->\n\t\n\t<para>\n\tLimit the ways password_encryption can enable md5 hashing (Peter Eisentraut)\n\t</para>\n\t\n\t<para>\n\tPreviously on/true/yes/1 values enabled md5. Now, only the string md5 does this.\n\t</para>\n\t</listitem>\n\nI also have this entry:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n\t-->\n\t\n\t<para>\n\tChange password_encryption's default to scram-sha-256 (Peter Eisentraut)\n\t</para>\n\t\n\t<para>\n\tPreviously it was md5.\n\t</para>\n\t</listitem>\n\nI am hesitant to merge them since they are different changes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 10 May 2021 22:49:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 11:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Same as the last couple years, I checked for missing items in the release\n> notes, running something like this.\n>\n> git log --cherry-pick --oneline origin/REL_13_STABLE...origin/master\n>\n> Should any of these be included?\n>\n> 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> 
c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n\nI was just about to ask Bruce what he thinks about these.\n\nTo clarify, the first one is a big refactoring commit that allowed us\nto get rid of inheritance_planner(), a fairly inefficient way of\nplanning updates/deletes on partitioned tables, especially when many\npartitions remain after pruning (or when pruning cannot be used). One\nmay see the performance of update/deletes, especially on partitioned\ntables, to be generally improved as a result of this commit, but maybe\nnot as significantly as to be mentioned in E.1.3.1.1. Partitioning or\neven E.1.3.1.4. General Performance. However, one user-visible\nfeature that came out of this work is that updates/deletes can now use\nrun-time pruning whereas they couldn't before. Maybe that ought to be\nmentioned. (This reminds me to send a patch to remove the note from\n5.11.4. Partition Pruning that says that runtime pruning cannot be\nused for update/delete).\n\nThe other two commits can lead to improved performance of\nupdate/deletes when there are many unpruned partitions in the plan,\nbut runtime pruning (a new feature as mentioned above) leads to only\none or few partitions to actually be updated/deleted from. I admit\nthough that the cases for which performance has been improved still\nunder-perform the cases that already performed better starting in v12,\nthat is, the cases where the planner itself is able to trim down the\nplan to contain one or few partitions, so maybe nothing very big to\nsee here just yet. 
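The user-visible part, run-time pruning now applying to UPDATE/DELETE, is easy to picture with a toy (partition names and bounds invented here):

```python
PARTITIONS = {"p2020": (2020, 2021), "p2021": (2021, 2022)}

def runtime_prune(param_year):
    # Toy model of run-time partition pruning: when the qual compares
    # the partition key to a value only known at execution time, e.g.
    # UPDATE ... WHERE year = $1, the surviving partitions are chosen
    # per execution rather than every unpruned one being scanned.
    return sorted(name for name, (lo, hi) in PARTITIONS.items()
                  if lo <= param_year < hi)

print(runtime_prune(2021))   # ['p2021']: one partition touched, not all
```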
You may want to take a look at the benchmark\nresults I had posted here:\nhttps://www.postgresql.org/message-id/CA%2BHiwqEcawatEaUh1uTbZMEZTJeLzbroRTz9_X9Z5CFjTWJkhw%40mail.gmail.com\n\nThank you.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 11:57:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 7:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, May 10, 2021 at 04:14:56PM -0700, Peter Geoghegan wrote:\n> > On Mon, May 10, 2021 at 3:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > OK, you are confirming what Matthias suggested. I added these two\n> > > items, which both seem to apply only to heap pages, not index pages:\n> >\n> > That's right -- these two relate to heap pages only.\n> >\n> > I think that Matthias compared these two to bottom-up index deletion\n> > because all three patches are concerned about avoiding \"a permanent\n> > solution to a temporary problem\". They're conceptually similar despite\n> > being in fairly different areas. Evidently Matthias has a similar\n> > mental model to my own when it comes to this stuff.\n>\n> Agreed, that is a very interesting distinction.\n\nBTW, I think that the wording of the bottom-up index deletion item\nshould be changed to describe the result rather than the mechanism\nitself. The important idea is that non-HOT updaters are made to clean\nup their own mess before it gets out of hand (i.e. before a\nversion-driven page split can go ahead), at least for those indexes\nwhose columns are not logically modified by the UPDATE statement\n(usually most indexes on the table). We're making the updaters live\nwithin their means -- they cannot be allowed to avoid paying small\nincremental costs if that ultimately imposes a much larger, lasting\ncost on the system as a whole. 
This could be thought of as a negative\nexternality.\n\nThe index deletion mechanism itself is not that different to what we\nhad before. The important point is how and when it kicks in, and the\nsystematic effect of that over time. It's a subtractive thing, not an\nadditive thing -- it's helpful because of what *doesn't* happen in\nPostgres 14.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 May 2021 20:53:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 7:30 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Can these be merged:\n> Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\n>\n\nI think we should merge the below two:\nAdd support for streaming to built-in logical replication (Vondra,\nDilip Kumar, Amit Kapila, Ajin Cherian, Nikhil Sontakke, Stas Kelvich)\n\nAllow logical replication to stream long transactions to standbys\n(Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n\nand say something like:\nAllow logical replication to stream long in-progress transactions to standbys\nPreviously transactions that exceeded logical_decoding_work_mem were\nwritten to disk until the transaction completed.\n\nAlso, the commits that lead to the above feature is:\n2020-09-03 [464824323] Add support for streaming to built-in logical replicatio\n2020-08-08 [7259736a6] Implement streaming mode in ReorderBuffer.\n\n* Improve the logical replication API to allow streaming large\nin-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\nThe output functions begin with \"stream\". 
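The memory-driven choice being discussed can be reduced to a toy (sizes in bytes; not the ReorderBuffer code):

```python
def decode(change_sizes, work_mem, streaming):
    # Toy model: once an in-progress transaction's decoded changes
    # exceed logical_decoding_work_mem, either stream them to the
    # subscriber mid-transaction (new in PG 14's built-in logical
    # replication) or spill them to disk until commit (old behavior).
    buffered, actions = 0, []
    for size in change_sizes:
        buffered += size
        if buffered > work_mem:
            actions.append("stream" if streaming else "spill")
            buffered = 0
    return actions

print(decode([400, 400, 400], work_mem=1000, streaming=True))    # ['stream']
print(decode([400, 400, 400], work_mem=1000, streaming=False))   # ['spill']
```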
test_decoding also supports these.\n\nIn the above, I think it is better to say Enhance instead of Improve.\n\n* Add system view pg_stat_replication_slots to report replication slot\nactivity (Sawada Masahiko, Amit Kapila)\n\nI think for this item, we can give Vignesh C as author credit because\nhe helped a lot enhance/stabilize this feature after the initial\ncommit. See\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f5fc2f5b23d1b1dff60f8ca5dc211161df47eda4\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=cca57c1d9bf7eeba5b81115e0b82651cf3d8e4ea\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=592f00f8dec68038301467a904ac514eddabf6cd\n\n* Allow multiple xacts during table sync in logical replication (Peter\nSmith, Amit Kapila, and Takamichi Osumi)\n\nIMPORTANT?\n\nYes, this is a big enough improvement in the initial table sync\nmechanism of logical replication that it is worth mentioning in\nRelease Notes. One of the main user-visible effects this commit can\nhave is that it will reduce the time for which we need to hold WAL on\nthe server during the initial table sync phase. Then it can avoid\nlarge copies in case there is any error during the time when\napply-worker and tablesync-worker synchronizes the apply location.\n\n* Enable logical replication to handle two phase commits (Ajin Cherian)\nThis is controlled via pg_create_logical_replication_slot().\n\nWe still haven't supported two-phase commits for built-in logical\nreplication but we have exposed APIs for decoding plugins to achieve\nthat. Using these APIs, we are planning to support it for built-in\nlogical replication but that will be probably done in PG-15. 
So, I\nsuggest we can modify it to something like:\n\n* Enhance logical decoding APIs to handle two-phase commits\nThis is controlled via pg_create_logical_replication_slot().\n\nMost of the work for this feature is done via commits:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a271a1b50e9bec07e2ef3a05e38e7285113e4ce6\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0aa8a01d04c8fe200b7a106878eebc3d0af9105c\n\nSo, I think the author credit should be given to Ajin Cherian, Amit\nKapila, Nikhil Sontakke, and Stas Kelvich\n\n* I think we should mention about commit:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d7eb52d7181d83cf2363570f7a205b8eb1008dbc\n\nThis allows Logical decoding to work efficiently in presence of a\nlarge amount of DDLs. This was a complaint from many users for long\ntime. Apart from the discussion thread, see some of the earlier\ncomplaints:\nhttps://www.postgresql.org/message-id/flat/20f3de7675f83176253f607b5e199b228406c21c.camel@cybertec.at\nhttps://www.postgresql.org/message-id/flat/CAHoiPjzea6N0zuCi=+f9v_j94nfsy6y8SU7-=bp4=7qw6_i=Rg@mail.gmail.com\n\nand most recently at:\nhttps://www.postgresql.org/message-id/DB8PR10MB36126BC0DC3570637FDFED12EB4F9%40DB8PR10MB3612.EURPRD10.PROD.OUTLOOK.COM\n\n* Speed truncation of small tables on large shared buffer servers (Kirk Jamison)\n\nThis optimization applies during recovery. Shouldn't we mention that\nin some way?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 May 2021 09:26:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 2021-05-10 at 13:47 -0400, Bruce Momjian wrote:\n> Uh, I try to keep the first sentence short so people can scan it more\n> easily, so I am hesitant to make this change. 
I went with this change:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> \t2021-04-03 [55873a00e] Improve psql's behavior when the editor is exited withou\n> \t-->\n> \t\n> \t<para>\n> \tWhen editing the previous query or a file with psql's \\e, or using \\ef and \\ev, ignore the contents if the editor exits without saving (Laurenz Albe)\n> \t</para>\n> \t\n> \t<para>\n> \tPreviously, such edits would still execute the editor contents.\n> \t</para>\n> \t</listitem>\n\nThat is fine with me and probably better than my suggestion.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 11 May 2021 07:50:41 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 5:34 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Mon, May 10, 2021 at 04:02:27PM +0300, Alexander Korotkov wrote:\n> > Hi, Bruce!\n> >\n> > On Mon, May 10, 2021 at 9:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I have committed the first draft of the PG 14 release notes. You can\n> > > see the most current build of them here:\n> > >\n> > > https://momjian.us/pgsql_docs/release-14.html\n> > >\n> > > I need clarification on many items, and the document still needs its\n> > > items properly ordered, and markup added. I also expect a lot of\n> > > feedback.\n> > >\n> > > I plan to work on completing this document this coming week in\n> > > preparation for beta next week.\n> >\n> > Thank you very much for your work!\n> >\n> > Let me provide a missing description for the items related to me.\n> >\n> > * Improve handling of compound words in to_tsquery() and\n> > websearch_to_tsquery() (Alexander Korotkov)\n> > Compound words are now transformed into parts connected with phrase\n> > search operators. For example, to_tsquery('pg_class') becomes 'pg <->\n> > class' instead of 'pg & class'. 
This eliminates bug of handling\n> > compound words connected with the phrase operator and makes the search\n> > of compound words more strict.\n>\n> OK, what symbols trigger this change? Underscore? What else?\n\nAny symbol, which is recognized as a separator by full-text parser,\nbut not tsquery parser. Fulltext search is extensible and allowing\npluggable parsers. In principle, we could dig the exact set of\nsymbols, but I'm not sure this worth the effort.\n\n> You are\n> saying the previous code allowed 'pg' and 'class' anywhere in the\n> string, while the new code requires them to be adjacent, which more\n> closely matches the pattern.\n\nYes, that's it.\n\n> > * Fix extra distance in phrase operators for quoted text in\n> > websearch_to_tsquery() (Alexander Korotkov)\n> > For example, websearch_to_tsquery('english', '\"aaa: bbb\"') becomes\n> > 'aaa <> bbb' instead of 'aaa <2> bbb'.\n>\n> So colon and space were considered to be two tokens between 'aaa' and\n> 'bbb', while is really only one because both tokens are discarded? Is\n> this true of any discarded tokens, e.g. ''\"aaa ?:, bbb\"'?\n\nYes, that's true for any discarded tokens.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 11 May 2021 13:16:38 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 5/10/21 9:56 PM, Bruce Momjian wrote:\n> On Mon, May 10, 2021 at 07:50:14AM -0400, Joe Conway wrote:\n>> On 5/10/21 2:03 AM, Bruce Momjian wrote:\n>> > I have committed the first draft of the PG 14 release notes. You can\n>> > see the most current build of them here:\n>> > \n>> > \thttps://momjian.us/pgsql_docs/release-14.html\n>> > \n>> > I need clarification on many items, and the document still needs its\n>> > items properly ordered, and markup added. 
I also expect a lot of\n>> > feedback.\n>> > \n>> > I plan to work on completing this document this coming week in\n>> > preparation for beta next week.\n>> \n>> While only a small change, this commit does affect user visible behavior and\n>> so should probably be noted:\n>> \n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b12bd4869b5e\n> \n> I see your point. Here is the release entry I added:\n\nNot quite -- maybe something like the below:\n\n> \t<para>\n> \tReturn false for has_column_privilege() checks on non-existent or dropped columns (Joe Conway)\n\nAlways return NULL for has_column_privilege() checks on non-existent or \ndropped column by attnum (Joe Conway)\n\n> \t<para>\n> \tPreviously such columns returned an invalid column error.\n\nPreviously existence of such columns were ignored when caller had table \nlevel privileges.\n\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Tue, 11 May 2021 06:57:19 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "First Bruce, thank you for compiling the release notes draft. 
Comments\ninline:\n\nOn 5/10/21 11:56 PM, Amit Kapila wrote:\n> On Mon, May 10, 2021 at 7:30 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> Can these be merged:\n>> Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n>> Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\n>>\n> \n> I think we should merge the below two:\n> Add support for streaming to built-in logical replication (Vondra,\n> Dilip Kumar, Amit Kapila, Ajin Cherian, Nikhil Sontakke, Stas Kelvich)\n> \n> Allow logical replication to stream long transactions to standbys\n> (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> \n> and say something like:\n> Allow logical replication to stream long in-progress transactions to standbys\n> Previously transactions that exceeded logical_decoding_work_mem were\n> written to disk until the transaction completed.\n\nI was going to post a question about the above bullet point(s), so this\nclarification helps -- thanks!\n\nTwo comments on the above:\n\n1. I do believe there is a missing \"Tomas\" in the first release note.\n2. Would you want to s/standbys/subscribers/ given an instance can be in\nread/write mode but still receive logical changes? Or did this patch\naffect only subscribers that are also standbys?\n\nThanks,\n\nJonathan", "msg_date": "Tue, 11 May 2021 09:13:31 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "I think these should be merged:\n\n| Remove factorial operators ! and !! (Mark Dilger)\n| Add documentation for the factorial() function (Peter Eisentraut)\n\n| This can be disabled by turning client options \"sslsni\" off. 
\n\noption (no ess)\n\n| Allow REINDEX to process all child tables and indexes of a partitioned table (Justin Pryzby, Michael Paquier) \n\nIt should actually say \"all child tables or indexes of a partitioned relation\",\nsince you can REINDEX INDEX partitioned_index.\n\n| Add bit_xor XOR aggregate function (Alexey Bashtanov) \n\nRemove XOR ?\n\n| Remove support for postfix (right-unary) operators (Mark Dilger) \n| pg_dump and pg_upgrade will warn if post-fix operators are being dumped. \n\npostfix not post-fix\n\n| Remove temporary files after backend crashes (Euler Taveira) \n\ncrash (singular)\n\n| Increase warning time and hard limit before transaction id and multi-transaction wraparound (Noah Misch) \n| This should reduce the number of failures without warning. \n\nI'd say: This should reduce the possibility of failures that occur without\nhaving issued warnings about wraparound.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 11 May 2021 08:24:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 09:00:05AM -0500, Justin Pryzby wrote:\n> Thanks for putting it together.\n> \n> I think these two should be merged:\n> | Remove containment operators @ and ~ from contrib modules cube, hstore, intarray, and seg (Justin Pryzby) \n> | Remove deprecated containment operators for built-in geometry data types (Justin Pryzby) \n\nAgreed, merged.\n> \n> | Improve autovacuum's analyze of partitioned tables (Yuzuko Hosoya) \n> | DETAILS? \n> \n> Should say: Autovacuum now analyzes partitioned tables.\n\nAgreed, updated.\n\n> | The server variable check_client_connection_interval allows supporting operating systems, e.g., Linux, to automatically cancel queries by disconnected clients. 
\n> The GUC is actually called client_connection_check_interval - the commit\n> message used the wrong name.\n\nOK, fixed.\n\n> | This is particularly helpful for reducing index bloat on tables that frequently update indexed columns. \n> Does it mean \"..where indexed columns are frequently updated\"?\n\n\nYeah, that needs help. ;-) I updated it to :\n\n\tThis is particularly helpful for reducing index bloat on tables\n\twhose indexed columns are frequently updated.\n\n> | Allow multiple foreign table scans to be run in parallel (Robert Haas, Kyotaro Horiguchi, Thomas Munro, Etsuro Fujita) \n> I think it means multiple foreight table scan *nodes*\n\nHow is this?\n\n\tAllow a query referencing multiple foreign tables to perform foreign\n\ttable scans in parallel (Robert Haas, Kyotaro Horiguchi, Thomas Munro,\n\tEtsuro Fujita)\n\nI am worried \"nodes\" is too vague.\n\n> | If server variable compute_query_id is enabled, display the hash in pg_stat_activity, EXPLAIN VERBOSE, csvlog, and optionally in log_line_prefix (Julien Rouhaud) \n> I think needs details, like: \"If disabled, then the hash might be computed by\n> an extension, instead\".\n\nI ended up with:\n\n\t<para>\n\tIf server variable compute_query_id is enabled, display the hash\n\tin pg_stat_activity, EXPLAIN VERBOSE, csvlog, and optionally in\n\tlog_line_prefix (Julien Rouhaud)\n\t</para>\n\t\n\t<para>\n\tA query id computed by an extension will also be displayed.\n\t</para>\n\n> Later, you say:\n> | Extension pg_stat_statements will need to enable hash computation via the compute_query_id server variable to function properly. pg_stat_statements can now use a custom hash computation method. 
\n> Maybe it should say \"will need hash computation to be enabled\".\n\nHere is the updated entry:\n\n\t<listitem>\n\t<!--\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\t2021-04-07 [5fd9dfa5f] Move pg_stat_statements query jumbling to core.\n\t-->\n\t\n\t<para>\n\tMove query hash computation from pg_stat_statements to the core server (Julien Rouhaud)\n\t</para>\n\t\n\t<para>\n\tExtension pg_stat_statements will now need to enable query hash computation to function properly.\n\tThis can be done by enabling the server variable compute_query_id or by using an extension with a custom hash computation method.\n\t</para>\n\t</listitem>\n\n> | Allow more than the common name (CN) to be matched for client certificate authentication (Andrew Dunstan) \n> Your description makes it sound like arbitrary attributes can be compared. But\n> the option just allows comparing CN or DN.\n\nOK, new text is:\n\n\t<listitem>\n\t<!--\n\tAuthor: Andrew Dunstan <andrew@dunslane.net>\n\t2021-03-29 [6d7a6feac] Allow matching the DN of a client certificate for\n\tauthen\n\t-->\n\t\n\t<para>\n\tAllow the certificate's distinguished name (DN) to be matched for client\n\tcertificate authentication (Andrew Dunstan)\n\t</para>\n\t\n\t<para>\n\tThe new pg_hba.conf keyword \"clientname=DN\" allows comparison with\n\tnon-CN certificate attributes and can be combined with ident maps.\n\t</para>\n\t</listitem>\n\n> | Allow file system sync at the start of crash recovery on Linux (Thomas Munro) \n> I think this should describe the existing, default behavior:\n> Allow syncfs method to sync data directory during recovery;\n> The default behavior is to open and fsync every data file, and the new setting\n> recovery_init_sync_method=syncfs instead syncs each filesystem in the data\n> directory.\n\nI went with this text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Thomas Munro <tmunro@postgresql.org>\n\t2021-03-20 [61752afb2] Provide recovery_init_sync_method=syncfs.\n\t-->\n\t\n\t<para>\n\tAllow file system sync at 
the start of crash recovery on Linux (Thomas\n\tMunro)\n\t</para>\n\t\n\t<para>\n\tBy default, Postgres opens and fsyncs every data file at the start of\n\tcrash recovery.\n\tThis new setting, recovery_init_sync_method=syncfs, instead syncs each\n\tfilesystem used by the database cluster.\n\tThis allows for faster recovery on systems with many database files.\n\t</para>\n\t</listitem>\n\n> | Add date_bin function (John Naylor) \n> This truncate timestamps on an arbitrary interval.\n> Like date_trunc() but also supports eg. '15 minutes', and also uses an arbitrary \"origin\".\n\nOK, so what I think it returns is the greatest datetime that is a\nmultiple of interval values added to origin which is not greater than\nthe target date, right? Am I the only one who finds this unclear? \nDoesn't our documentation of this feature need to explain this?\n\n> | Support negative indexes in split_part() (Nikhil Benesch) \n> | Negative values count from the last field going forward. \n> should say \"start from the last field and count backward\" ?\n\nYes, fixed with your wording.\n\n> | Add configure option --with-openssl to behave like --with-ssl={openssl} (Daniel Gustafsson, Michael Paquier) \n> | The option --with-openssl is kept for compatibility. \n> I think this is backwards. The new option is with-ssl=openssl, and (as you\n> said) with-openssl is kept.\n\nAgreed:\n\n\tAdd configure option --with-ssl={openssl} to behave like\n\t\t--with-openssl (Daniel Gustafsson, Michael Paquier)\n\n> Should these be in the \"compatibility\" section?\n> \n> | Force custom server variable names to match the pattern used for unquoted SQL identifiers (Tom Lane) \n\nYes.\n\n> | Change password_encryption's default to scram-sha-256 (Peter Eisentraut) \n\nYes, I can see this impacting people. 
I moved it and added text to\nhighlight the incompatibility:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n\t-->\n\t\n\t<para>\n\tChange password_encryption's default to scram-sha-256 (Peter Eisentraut)\n\t</para>\n\t\n\t<para>\n\tPreviously it was md5.\tAll new passwords will be stored as\n\tSHA256 unless this server variable is changed or the password is\n\talready md5-hashed.\n\t</para>\n\t</listitem>\n\n> \n> | Change checkpoint_completion_target default to 0.9 (Stephen Frost) \n\nI don't think that is an incompatibility since it only affects\nperformance, and can be easily changed.\n\n> | Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n> \n> Nitpicks to follow:\n> \n> | Allow some GiST index to be built by presorting the data (Andrey Borodin) \n> indexes\n\nFixed.\n\n> | with --with-lz4 support to enable this feature\n> I would change to say \"to support\" rather than \"support to enable\"\n\nYes, better.\n\n> | Speed truncation of small tables on large shared buffer servers (Kirk Jamison) \n> \"on servers with large settings of shared_buffers\"\n\nI went with:\n\n\tSpeed truncation of small tables on clusters with a large number of\n\tshared buffers (Kirk Jamison)\n\n> | Allow windowing functions to perform incremental sorts (David Rowley) \n> Just \"window\" functions\n\nOK, fixed.\n\n> | Improve pg_stat_activity reporting for walsenders processes (Tom Lane) \n> walsender\n\nFixed.\n\n> | Previously these functions could only be executed by super-users, and still defaults do that. 
\n> be sent\n\nFixed.\n\n> | Enhance libpq libpq's target_session_attrs parameter options (Haribabu Kommi, Greg Nancarrow, Vignesh C, Tom Lane) \n> remove first \"libpq\"\n\nFixed.\n\n> | With the removal of the ! operator in this release, factorial() is the only built-in way to computer a factorial. \n> compute\n\nFixed.\n\n> | For example, GROUP BY CUBE (a,b), CUBE (b,c) will generated duplicate grouping combinations without DISTINCT. \n> \n> will generate\n\nFixed.\n\n> | Allow VACUUM VERBOSE to report page deletion counts for each scan of an index (Peter Geoghegan) \n> \n> I think \"Allow\" is wrong - should just say that VACUUM VERBOSE reports..\n\nUpdated to:\n\n\tHave VACUUM VERBOSE report page deletion counts for each scan of\n\tan index (Peter Geoghegan)\n\n> |By default, only the root of partitioned tables are imported. \n> *is* imported\n\nFixed.\n\n> Can these be merged:\n> Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke) \n> Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila) \n\nI am not sure. 
I do think we need to merge some of the logical\nreplication items, but I am not sure which ones yet.\n\nThanks for all the good feedback.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 10:35:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 08:53:54PM -0700, Peter Geoghegan wrote:\n> On Mon, May 10, 2021 at 7:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, May 10, 2021 at 04:14:56PM -0700, Peter Geoghegan wrote:\n> > > On Mon, May 10, 2021 at 3:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > OK, you are confirming what Matthias suggested. I added these two\n> > > > items, which both seem to apply only to heap pages, not index pages:\n> > >\n> > > That's right -- these two relate to heap pages only.\n> > >\n> > > I think that Matthias compared these two to bottom-up index deletion\n> > > because all three patches are concerned about avoiding \"a permanent\n> > > solution to a temporary problem\". They're conceptually similar despite\n> > > being in fairly different areas. Evidently Matthias has a similar\n> > > mental model to my own when it comes to this stuff.\n> >\n> > Agreed, that is a very interesting distinction.\n> \n> BTW, I think that the wording of the bottom-up index deletion item\n> should be changed to describe the result rather than the mechanism\n> itself. The important idea is that non-HOT updaters are made to clean\n> up their own mess before it gets out of hand (i.e. before a\n> version-driven page split can go ahead), at least for those indexes\n> whose columns are not logically modified by the UPDATE statement\n> (usually most indexes on the table). 
We're making the updaters live\n> within their means -- they cannot be allowed to avoid paying small\n> incremental costs if that ultimately imposes a much larger, lasting\n> cost on the system as a whole. This could be thought of as a negative\n> externality.\n> \n> The index deletion mechanism itself is not that different to what we\n> had before. The important point is how and when it kicks in, and the\n> systematic effect of that over time. It's a subtractive thing, not an\n> additive thing -- it's helpful because of what *doesn't* happen in\n> Postgres 14.\n\nI updated this to:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-01-13 [9dc718bdf] Pass down \"logically unchanged index\" hint.\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-01-13 [d168b6668] Enhance nbtree index tuple deletion.\n\t-->\n\t\n\t<para>\n\tAllow index additions to remove expired btree index entries to prevent page\n\tsplits (Peter Geoghegan)\n\t</para>\n\t\n\t<para>\n\tThis is particularly helpful for reducing index bloat on tables whose\n\tindexed columns are frequently updated.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 10:48:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 06:57:19AM -0400, Joe Conway wrote:\n> On 5/10/21 9:56 PM, Bruce Momjian wrote:\n> > On Mon, May 10, 2021 at 07:50:14AM -0400, Joe Conway wrote:\n> > > On 5/10/21 2:03 AM, Bruce Momjian wrote:\n> > > > I have committed the first draft of the PG 14 release notes. 
You can\n> > > > see the most current build of them here:\n> > > > > \thttps://momjian.us/pgsql_docs/release-14.html\n> > > > > I need clarification on many items, and the document still needs\n> > > its\n> > > > items properly ordered, and markup added. I also expect a lot of\n> > > > feedback.\n> > > > > I plan to work on completing this document this coming week in\n> > > > preparation for beta next week.\n> > > \n> > > While only a small change, this commit does affect user visible behavior and\n> > > so should probably be noted:\n> > > \n> > > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b12bd4869b5e\n> > \n> > I see your point. Here is the release entry I added:\n> \n> Not quite -- maybe something like the below:\n> \n> > \t<para>\n> > \tReturn false for has_column_privilege() checks on non-existent or dropped columns (Joe Conway)\n> \n> Always return NULL for has_column_privilege() checks on non-existent or\n> dropped column by attnum (Joe Conway)\n> \n> > \t<para>\n> > \tPreviously such columns returned an invalid column error.\n> \n> Previously existence of such columns were ignored when caller had table\n> level privileges.\n\nI can't reproduce the NULL using column name text:\n\n\ttest=> SELECT CURRENT_USER;\n\t current_user\n\t--------------\n\t postgres\n\t(1 row)\n\n\ttest=> CREATE TABLE test (x int, y int);\n\tCREATE TABLE\n\t\n\ttest=> CREATE USER test;\n\tCREATE ROLE\n\t\n\ttest=> \\c test test\n\tYou are now connected to database \"test\" as user \"test\".\n\t\n\ttest=> SELECT has_column_privilege('test', 'x', 'SELECT');\n\t has_column_privilege\n\t----------------------\n\t f\n\n\ttest=> SELECT has_column_privilege('test', 'z', 'SELECT');\n\tERROR: column \"z\" of relation \"test\" does not exist\n\nbut I was able to show it using attribute numbers, as you explained:\n\n\ttest=> SELECT has_column_privilege('test', 1::smallint, 'SELECT');\n\t has_column_privilege\n\t----------------------\n\t f\n\t(1 row)\n\t\n\ttest=> SELECT 
has_column_privilege('test', 2::smallint, 'SELECT');\n\t has_column_privilege\n\t----------------------\n\t f\n\t(1 row)\n\t\n\ttest=> SELECT has_column_privilege('test', 3::smallint, 'SELECT');\n\t has_column_privilege\n\t----------------------\n\t\n\t(1 row)\n\nWhat is the logic that attribute numbers should behave differently than\nattribute names?\n\nHere is the updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Joe Conway <mail@joeconway.com>\n\t2021-03-31 [b12bd4869] Fix has_column_privilege function corner case\n\t-->\n\t\n\t<para>\n\tReturn false for has_column_privilege() checks on non-existent or\n\tdropped columns when using attribute numbers (Joe Conway)\n\t</para>\n\t\n\t<para>\n\tPreviously such attribute numbers returned an invalid column error.\n\t</para>\n\t</listitem>\n\t\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 11:11:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 02:03:08AM -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n> \n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nJust to clarify, I am trying to process the release note feedback as\nquickly as possible, and committing changes 1-2 times a day. However,\nthe feedback is currently coming in quicker than I can process it ---\neventually I will catch up. 
All the feedback has been very useful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 11:22:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 5/11/21 11:11 AM, Bruce Momjian wrote:\n> On Tue, May 11, 2021 at 06:57:19AM -0400, Joe Conway wrote:\n>> On 5/10/21 9:56 PM, Bruce Momjian wrote:\n>> > On Mon, May 10, 2021 at 07:50:14AM -0400, Joe Conway wrote:\n>> > > On 5/10/21 2:03 AM, Bruce Momjian wrote:\n>> > > > I have committed the first draft of the PG 14 release notes. You can\n>> > > > see the most current build of them here:\n>> > > > > \thttps://momjian.us/pgsql_docs/release-14.html\n>> > > > > I need clarification on many items, and the document still needs\n>> > > its\n>> > > > items properly ordered, and markup added. I also expect a lot of\n>> > > > feedback.\n>> > > > > I plan to work on completing this document this coming week in\n>> > > > preparation for beta next week.\n>> > > \n>> > > While only a small change, this commit does affect user visible behavior and\n>> > > so should probably be noted:\n>> > > \n>> > > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b12bd4869b5e\n>> > \n>> > I see your point. 
Here is the release entry I added:\n>> \n>> Not quite -- maybe something like the below:\n>> \n>> > \t<para>\n>> > \tReturn false for has_column_privilege() checks on non-existent or dropped columns (Joe Conway)\n>> \n>> Always return NULL for has_column_privilege() checks on non-existent or\n>> dropped column by attnum (Joe Conway)\n>> \n>> > \t<para>\n>> > \tPreviously such columns returned an invalid column error.\n>> \n>> Previously existence of such columns were ignored when caller had table\n>> level privileges.\n> \n> I can't reproduce the NULL using column name text:\n\n> \ttest=> SELECT has_column_privilege('test', 'z', 'SELECT');\n> \tERROR: column \"z\" of relation \"test\" does not exist\n\nThat is the way it is supposed to work when the column is specified by \nname. The patch did not change that in any way.\n\n> but I was able to show it using attribute numbers, as you explained:\n\nYes, that was what the patch fixed.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Tue, 11 May 2021 11:26:48 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 11:26:48AM -0400, Joe Conway wrote:\n> On 5/11/21 11:11 AM, Bruce Momjian wrote:\n> > > Previously existence of such columns were ignored when caller had table\n> > > level privileges.\n> > \n> > I can't reproduce the NULL using column name text:\n> \n> > \ttest=> SELECT has_column_privilege('test', 'z', 'SELECT');\n> > \tERROR: column \"z\" of relation \"test\" does not exist\n> \n> That is the way it is supposed to work when the column is specified by name.\n> The patch did not change that in any way.\n\nI am just confused why attribute numbers are handled differently than\nattribute names.\n\n> > but I was able to show it using attribute numbers, as you explained:\n> \n> 
Yes, that was what the patch fixed.\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 11:37:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 5/11/21 11:37 AM, Bruce Momjian wrote:\n> On Tue, May 11, 2021 at 11:26:48AM -0400, Joe Conway wrote:\n>> On 5/11/21 11:11 AM, Bruce Momjian wrote:\n>> > > Previously existence of such columns were ignored when caller had table\n>> > > level privileges.\n>> > \n>> > I can't reproduce the NULL using column name text:\n>> \n>> > \ttest=> SELECT has_column_privilege('test', 'z', 'SELECT');\n>> > \tERROR: column \"z\" of relation \"test\" does not exist\n>> \n>> That is the way it is supposed to work when the column is specified by name.\n>> The patch did not change that in any way.\n> \n> I am just confused why attribute numbers are handled differently than\n> attribute names.\n\nI am not entirely sure, but that boat sailed a long time ago and really \nhas nothing to do with this patch ;-)\n\nThis is the code comment that predates the patch but is the reason \nbehind the change:\n\n------------\n/*\n * has_any_column_privilege variants\n *\t\tThese are all named \"has_any_column_privilege\" at the SQL level.\n *\t\tThey take various combinations of relation name, relation OID,\n *\t\tuser name, user OID, or implicit user = current_user.\n *\n *\t\tThe result is a boolean value: true if user has the indicated\n *\t\tprivilege for any column of the table, false if not. 
The variants\n *\t\tthat take a relation OID return NULL if the OID doesn't exist.\n */\n------------\n\nThe patch made that last sentence true in the corner cases.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Tue, 11 May 2021 12:31:01 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 12:31:01PM -0400, Joe Conway wrote:\n> On 5/11/21 11:37 AM, Bruce Momjian wrote:\n> > On Tue, May 11, 2021 at 11:26:48AM -0400, Joe Conway wrote:\n> > > On 5/11/21 11:11 AM, Bruce Momjian wrote:\n> > > > > Previously existence of such columns were ignored when caller had table\n> > > > > level privileges.\n> > > > > I can't reproduce the NULL using column name text:\n> > > \n> > > > \ttest=> SELECT has_column_privilege('test', 'z', 'SELECT');\n> > > > \tERROR: column \"z\" of relation \"test\" does not exist\n> > > \n> > > That is the way it is supposed to work when the column is specified by name.\n> > > The patch did not change that in any way.\n> > \n> > I am just confused why attribute numbers are handled differently than\n> > attribute names.\n> \n> I am not entirely sure, but that boat sailed a long time ago and really has\n> nothing to do with this patch ;-)\n\nIt just feels like this change makes the function's behavior less\nconsistent.\n \n> This is the code comment that predates the patch but is the reason behind\n> the change:\n> \n> ------------\n> /*\n> * has_any_column_privilege variants\n> *\t\tThese are all named \"has_any_column_privilege\" at the SQL level.\n> *\t\tThey take various combinations of relation name, relation OID,\n> *\t\tuser name, user OID, or implicit user = current_user.\n> *\n> *\t\tThe result is a boolean value: true if user has the indicated\n> *\t\tprivilege for any column of the table, false if not. 
The variants\n> *\t\tthat take a relation OID return NULL if the OID doesn't exist.\n> */\n> ------------\n> \n> The patch made that last sentence true in the corner cases.\n\nWell, the example I showed was for attribute numbers but relation names,\nwhich isn't mentioned in this comment.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 13:30:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 5/11/21 1:30 PM, Bruce Momjian wrote:\n> On Tue, May 11, 2021 at 12:31:01PM -0400, Joe Conway wrote:\n>> On 5/11/21 11:37 AM, Bruce Momjian wrote:\n>> > On Tue, May 11, 2021 at 11:26:48AM -0400, Joe Conway wrote:\n>> > > On 5/11/21 11:11 AM, Bruce Momjian wrote:\n>> > > > > Previously existence of such columns were ignored when caller had table\n>> > > > > level privileges.\n>> > > > > I can't reproduce the NULL using column name text:\n>> > > \n>> > > > \ttest=> SELECT has_column_privilege('test', 'z', 'SELECT');\n>> > > > \tERROR: column \"z\" of relation \"test\" does not exist\n>> > > \n>> > > That is the way it is supposed to work when the column is specified by name.\n>> > > The patch did not change that in any way.\n>> > \n>> > I am just confused why attribute numbers are handled differently than\n>> > attribute names.\n>> \n>> I am not entirely sure, but that boat sailed a long time ago and really has\n>> nothing to do with this patch ;-)\n> \n> It just feels like this change makes the function's behavior less\n> consistent.\n\nSee Tom's commit message here:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3d0f68dd30612\n\nIn particular:\n\n \"The variants of these functions that take\n numeric inputs (OIDs or column numbers) are\n supposed to return NULL rather than failing\n on bad input; this rule 
reduces problems with\n snapshot skew when queries apply the functions\n to all rows of a catalog.\"\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Tue, 11 May 2021 15:18:29 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 5/11/21 1:30 PM, Bruce Momjian wrote:\n>> It just feels like this change makes the function's behavior less\n>> consistent.\n\n> See Tom's commit message here:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3d0f68dd30612\n\n> In particular:\n\n> \"The variants of these functions that take\n> numeric inputs (OIDs or column numbers) are\n> supposed to return NULL rather than failing\n> on bad input; this rule reduces problems with\n> snapshot skew when queries apply the functions\n> to all rows of a catalog.\"\n\nYeah, the null-return-for-bad-numeric-input behavior is important.\nPerhaps a case could be made for returning null for bad text\ninput too, but I don't recall that anybody has asked for that.\n\nA case could also be made that changing the behavior on the text\nside would break applications that expect the current behavior.\nSo I'm disinclined to make a wholesale change there, without more\nevidence that it's a good idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 15:51:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 03:51:39PM -0400, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > On 5/11/21 1:30 PM, Bruce Momjian wrote:\n> >> It just feels like this change makes the function's behavior less\n> >> consistent.\n> \n> > See Tom's commit message here:\n> > 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3d0f68dd30612\n> \n> > In particular:\n> \n> > \"The variants of these functions that take\n> > numeric inputs (OIDs or column numbers) are\n> > supposed to return NULL rather than failing\n> > on bad input; this rule reduces problems with\n> > snapshot skew when queries apply the functions\n> > to all rows of a catalog.\"\n> \n> Yeah, the null-return-for-bad-numeric-input behavior is important.\n> Perhaps a case could be made for returning null for bad text\n> input too, but I don't recall that anybody has asked for that.\n> \n> A case could also be made that changing the behavior on the text\n> side would break applications that expect the current behavior.\n> So I'm disinclined to make a wholesale change there, without more\n> evidence that it's a good idea.\n\nOK, as long as we thought about this, I am fine.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 16:01:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 01:16:38PM +0300, Alexander Korotkov wrote:\n> > OK, what symbols trigger this change? Underscore? What else?\n> \n> Any symbol, which is recognized as a separator by full-text parser,\n> but not tsquery parser. Fulltext search is extensible and allowing\n> pluggable parsers. 
In principle, we could dig the exact set of\n> symbols, but I'm not sure this worth the effort.\n> \n> > You are\n> > saying the previous code allowed 'pg' and 'class' anywhere in the\n> > string, while the new code requires them to be adjacent, which more\n> > closely matches the pattern.\n> \n> Yes, that's it.\n> \n> > > * Fix extra distance in phrase operators for quoted text in\n> > > websearch_to_tsquery() (Alexander Korotkov)\n> > > For example, websearch_to_tsquery('english', '\"aaa: bbb\"') becomes\n> > > 'aaa <> bbb' instead of 'aaa <2> bbb'.\n> >\n> > So colon and space were considered to be two tokens between 'aaa' and\n> > 'bbb', while is really only one because both tokens are discarded? Is\n> > this true of any discarded tokens, e.g. ''\"aaa ?:, bbb\"'?\n> \n> Yes, that's true for any discarded tokens.\n\nI came up with this text for these two items.  I think it still needs to\nbe more specific:\n\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [0c4f355c6] Fix parsing of complex morphs to tsquery\n\t-->\n\t\n\t<para>\n\tFix to_tsquery() and websearch_to_tsquery() to properly parse\n\tcertain discarded tokens in quotes (Alexander Korotkov)\n\t</para>\n\t\n\t<para>\n\tCertain discarded tokens, like underscore, caused the output\n\tof these functions to produce incorrect tsquery output, e.g.,\n\twebsearch_to_tsquery('\"pg_class pg\"') used to output '( pg &\n\tclass ) <-> pg', but now outputs 'pg <-> class <-> pg'.\n\t</para>\n\t</listitem>\n\t\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-05-03 [eb086056f] Make websearch_to_tsquery() parse text in quotes as a si\n\t-->\n\t\n\t<para>\n\tFix websearch_to_tsquery() to properly parse multiple adjacent\n\tdiscarded tokens in quotes (Alexander Korotkov)\n\t</para>\n\t\n\t<para>\n\tPreviously, quoted text that contained multiple adjacent discarded\n\ttokens were treated as multiple tokens, causing incorrect tsquery\n\toutput, 
e.g., websearch_to_tsquery('\"aaa: bbb\"') used to output\n\t'aaa <2> bbb', but now\toutputs 'aaa <-> bbb'.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 16:31:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 08:08:01AM -0700, Peter Geoghegan wrote:\n> On Mon, May 10, 2021 at 7:00 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > | Allow VACUUM VERBOSE to report page deletion counts for each scan of an index (Peter Geoghegan)\n> >\n> > I think \"Allow\" is wrong - should just say that VACUUM VERBOSE reports..\n> \n> It's also not accurate, since the count of deleted pages was always\n> shown by VACUUM VERBOSE (once per index scan). The new feature has us\n> show pages deleted by the VACUUM that actually ran (not some earlier\n> VACUUM) -- these are \"newly deleted pages\".\n> \n> I don't think that this item is worth mentioning, though -- it's just\n> a nice to have. If Bruce removes it from the release notes entirely I\n> won't object.\n\nOK, removed. Additional debug output usually isn't important\nmentioning --- when you need it, you will see it.\n\n> In addition to the items that I commented on in my response to\n> Matthias just now, I should point out the following item as worthy of\n> inclusion:\n> \n> 9dd963ae25 Recycle nbtree pages deleted during same VACUUM.\n> \n> I suggest that this item be phrased more or less as follows:\n> \n> \"Allow VACUUM to eagerly place newly deleted B-Tree pages in the Free\n> Space Map. 
Previously VACUUM could only place preexisting deleted\n> pages in the Free Space Map for recycling.\"\n\nSo, previously it could only place pages that were deleted from the\nprevious vacuum on the free list?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 16:41:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 1:41 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > \"Allow VACUUM to eagerly place newly deleted B-Tree pages in the Free\n> > Space Map. Previously VACUUM could only place preexisting deleted\n> > pages in the Free Space Map for recycling.\"\n>\n> So, previously it could only place pages that were deleted from the\n> previous vacuum on the free list?\n\nYes, exactly -- that's how it worked, even though in general there was\nno way to predict when the next VACUUM would be. Now VACUUM will\nusually manage to do everything in the same VACUUM operation, starting\nfrom deleting the index tuples that make the page empty, then deleting\nthe page, and finally putting the page in the FSM. 
Makes a big\ndifference when the workload has lots of page deletions, say due to\nrange deletions.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 May 2021 13:47:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 09:26:36AM +0530, Amit Kapila wrote:\n> On Mon, May 10, 2021 at 7:30 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Can these be merged:\n> > Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> > Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\n> >\n> \n> I think we should merge the below two:\n> Add support for streaming to built-in logical replication (Vondra,\n> Dilip Kumar, Amit Kapila, Ajin Cherian, Nikhil Sontakke, Stas Kelvich)\n> \n> Allow logical replication to stream long transactions to standbys\n> (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> \n> and say something like:\n> Allow logical replication to stream long in-progress transactions to standbys\n> Previously transactions that exceeded logical_decoding_work_mem were\n> written to disk until the transaction completed.\n\nOK, updated as you suggested.\n\n> Also, the commits that lead to the above feature is:\n> 2020-09-03 [464824323] Add support for streaming to built-in logical replicatio\n> 2020-08-08 [7259736a6] Implement streaming mode in ReorderBuffer.\n\nYes, these were already listed above that item.\n\n> * Improve the logical replication API to allow streaming large\n> in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\n> The output functions begin with \"stream\". 
test_decoding also supports these.\n> \n> In the above, I think it is better to say Enhance instead of Improve.\n\nOK, word changed.\n\n> * Add system view pg_stat_replication_slots to report replication slot\n> activity (Sawada Masahiko, Amit Kapila)\n> \n> I think for this item, we can give Vignesh C as author credit because\n> he helped a lot enhance/stabilize this feature after the initial\n> commit. See\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f5fc2f5b23d1b1dff60f8ca5dc211161df47eda4\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=cca57c1d9bf7eeba5b81115e0b82651cf3d8e4ea\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=592f00f8dec68038301467a904ac514eddabf6cd\n\nOK, I have added him:\n\n\tAdd system view pg_stat_replication_slots to report replication slot\n\tactivity (Sawada Masahiko, Amit Kapila, Vignesh C)\n\n> * Allow multiple xacts during table sync in logical replication (Peter\n> Smith, Amit Kapila, and Takamichi Osumi)\n> \n> IMPORTANT?\n> \n> Yes, this is a big enough improvement in the initial table sync\n> mechanism of logical replication that it is worth mentioning in\n> Release Notes. One of the main user-visible effects this commit can\n> have is that it will reduce the time for which we need to hold WAL on\n> the server during the initial table sync phase. Then it can avoid\n> large copies in case there is any error during the time when\n> apply-worker and tablesync-worker synchronizes the apply location.\n\nOK, thanks.\n\n> * Enable logical replication to handle two phase commits (Ajin Cherian)\n> This is controlled via pg_create_logical_replication_slot().\n> \n> We still haven't supported two-phase commits for built-in logical\n> replication but we have exposed APIs for decoding plugins to achieve\n> that. Using these APIs, we are planning to support it for built-in\n> logical replication but that will be probably done in PG-15. 
So, I\n> suggest we can modify it to something like:\n> \n> * Enhance logical decoding APIs to handle two-phase commits\n> This is controlled via pg_create_logical_replication_slot().\n\nOK, updated as you suggested.\n\n> Most of the work for this feature is done via commits:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a271a1b50e9bec07e2ef3a05e38e7285113e4ce6\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0aa8a01d04c8fe200b7a106878eebc3d0af9105c\n> \n> So, I think the author credit should be given to Ajin Cherian, Amit\n> Kapila, Nikhil Sontakke, and Stas Kelvich\n\nOK, the new item text is:\n\n\t<listitem>\n\t<!--\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\t2020-12-30 [0aa8a01d0] Extend the output plugin API to allow decoding of prepar\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\t2021-01-04 [a271a1b50] Allow decoding at prepare time in ReorderBuffer.\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\t2021-03-03 [19890a064] Add option to enable two_phase commits via pg_create_log\n\t-->\n\t\n\t<para>\n\tEnhance logical decoding APIs to handle two-phase commits (Ajin\n\tCherian, Amit Kapila, Nikhil Sontakke, Stas Kelvich)\n\t</para>\n\t\n\t<para>\n\tThis is controlled via pg_create_logical_replication_slot().\n\t</para>\n\t</listitem>\n\n> * I think we should mention about commit:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d7eb52d7181d83cf2363570f7a205b8eb1008dbc\n> \n> This allows Logical decoding to work efficiently in presence of a\n> large amount of DDLs. This was a complaint from many users for long\n> time. 
Apart from the discussion thread, see some of the earlier\n> complaints:\n> https://www.postgresql.org/message-id/flat/20f3de7675f83176253f607b5e199b228406c21c.camel@cybertec.at\n> https://www.postgresql.org/message-id/flat/CAHoiPjzea6N0zuCi=+f9v_j94nfsy6y8SU7-=bp4=7qw6_i=Rg@mail.gmail.com\n> \n> and most recently at:\n> https://www.postgresql.org/message-id/DB8PR10MB36126BC0DC3570637FDFED12EB4F9%40DB8PR10MB3612.EURPRD10.PROD.OUTLOOK.COM\n\nOK, I added this:\n\n\t<listitem>\n\t<!--\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\t2020-10-15 [d7eb52d71] Execute invalidation messages for each XLOG_XACT_INVALID\n\t-->\n\t\n\t<para>\n\tAllow logical decoding to more efficiently process cache invalidation messages\n\t</para>\n\t\n\t<para>\n\tThis allows logical decoding to work efficiently in the presence of a large amount of DDL.\n\t</para>\n\t</listitem>\n\n> * Speed truncation of small tables on large shared buffer servers (Kirk Jamison)\n> \n> This optimization applies during recovery. Shouldn't we mention that\n> in some way?\n\nYes, new text:\n\n\tSpeed truncation of small tables during recovery on clusters with a\n\tlarge number of shared buffers (Kirk Jamison)\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:06:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 08:08:01AM -0700, Peter Geoghegan wrote:\n> On Mon, May 10, 2021 at 7:00 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > | Allow VACUUM VERBOSE to report page deletion counts for each scan of an index (Peter Geoghegan)\n> >\n> > I think \"Allow\" is wrong - should just say that VACUUM VERBOSE reports..\n> \n> It's also not accurate, since the count of deleted pages was always\n> shown by VACUUM VERBOSE (once per index scan). 
The new feature has us\n> show pages deleted by the VACUUM that actually ran (not some earlier\n> VACUUM) -- these are \"newly deleted pages\".\n> \n> I don't think that this item is worth mentioning, though -- it's just\n> a nice to have. If Bruce removes it from the release notes entirely I\n> won't object.\n> \n> In addition to the items that I commented on in my response to\n> Matthias just now, I should point out the following item as worthy of\n> inclusion:\n> \n> 9dd963ae25 Recycle nbtree pages deleted during same VACUUM.\n> \n> I suggest that this item be phrased more or less as follows:\n> \n> \"Allow VACUUM to eagerly place newly deleted B-Tree pages in the Free\n> Space Map.  Previously VACUUM could only place preexisting deleted\n> pages in the Free Space Map for recycling.\"\n\nOK, got it.  New text is:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-03-21 [9dd963ae2] Recycle nbtree pages deleted during same VACUUM.\n\t-->\n\t\n\t<para>\n\tAllow VACUUM to eagerly add newly deleted btree pages in the free\n\tspace map (Peter Geoghegan)\n\t</para>\n\t\n\t<para>\n\tPreviously VACUUM could only place preexisting deleted pages in\n\tthe free space map.\n\t</para>\n\t</listitem>\n\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:11:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 09:13:31AM -0400, Jonathan Katz wrote:\n> First Bruce, thank you for compiling the release notes draft. 
Comments\n> inline:\n> \n> On 5/10/21 11:56 PM, Amit Kapila wrote:\n> > On Mon, May 10, 2021 at 7:30 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>\n> >> Can these be merged:\n> >> Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> >> Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\n> >>\n> > \n> > I think we should merge the below two:\n> > Add support for streaming to built-in logical replication (Vondra,\n> > Dilip Kumar, Amit Kapila, Ajin Cherian, Nikhil Sontakke, Stas Kelvich)\n> > \n> > Allow logical replication to stream long transactions to standbys\n> > (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> > \n> > and say something like:\n> > Allow logical replication to stream long in-progress transactions to standbys\n> > Previously transactions that exceeded logical_decoding_work_mem were\n> > written to disk until the transaction completed.\n> \n> I was going to post a question about the above bullet point(s), so this\n> clarification helps -- thanks!\n> \n> Two comments on the above:\n> \n> 1. I do believe ther eis a missing \"Tomas\" in the first release note.\n\nYes, I saw that and fixed, thanks.\n\n> 2. Would you want to s/standbys/subscribers/ given an instance can be in\n> read/write mode but still receive logical changes? 
Or did this patch\n> affect only subscribers that are also standbys?\n\nI don't know.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:12:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 08:24:22AM -0500, Justin Pryzby wrote:\n> I think these should be merged:\n> \n> | Remove factorial operators ! and !! (Mark Dilger)\n> | Add documentation for the factorial() function (Peter Eisentraut)\n> \n> | This can be disabled by turning client options \"sslsni\" off. \n> \n> option (no ess)\n\nFixed.\n\n> | Allow REINDEX to process all child tables and indexes of a partitioned table (Justin Pryzby, Michael Paquier) \n> \n> It should actually say \"all child tables or indexes of a partitioned relation\",\n> since you can REINDEX INDEX partitioned_index.\n\nOK, fixed, good point. I had not thought of indexes.\n\n> | Add bit_xor XOR aggregate function (Alexey Bashtanov) \n> \n> Remove XOR ?\n\nUh, I thought XOR was describing the function, since the name might not\nbe clear. No?\n\n> | Remove support for postfix (right-unary) operators (Mark Dilger) \n> | pg_dump and pg_upgrade will warn if post-fix operators are being dumped. \n> \n> postfix not post-fix\n\nOops, fixed. :-)\n\n> | Remove temporary files after backend crashes (Euler Taveira) \n> \n> crash (singular)\n\nUh, I thought I would say \"all crashes\" here.\n\n> | Increase warning time and hard limit before transaction id and multi-transaction wraparound (Noah Misch) \n> | This should reduce the number of failures without warning. 
\n> \n> I'd say: This should reduce the possibility of failures that occur without\n> having issued warnings about wraparound.\n\nOK, added, thanks.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:16:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 11:31 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, May 11, 2021 at 01:16:38PM +0300, Alexander Korotkov wrote:\n> > > OK, what symbols trigger this change?  Underscore?  What else?\n> >\n> > Any symbol, which is recognized as a separator by full-text parser,\n> > but not tsquery parser.  Fulltext search is extensible and allowing\n> > pluggable parsers. 
In principle, we could dig the exact set of\n> > symbols, but I'm not sure this worth the effort.\n> >\n> > > You are\n> > > saying the previous code allowed 'pg' and 'class' anywhere in the\n> > > string, while the new code requires them to be adjacent, which more\n> > > closely matches the pattern.\n> >\n> > Yes, that's it.\n> >\n> > > > * Fix extra distance in phrase operators for quoted text in\n> > > > websearch_to_tsquery() (Alexander Korotkov)\n> > > > For example, websearch_to_tsquery('english', '\"aaa: bbb\"') becomes\n> > > > 'aaa <> bbb' instead of 'aaa <2> bbb'.\n> > >\n> > > So colon and space were considered to be two tokens between 'aaa' and\n> > > 'bbb', while is really only one because both tokens are discarded? Is\n> > > this true of any discarded tokens, e.g. ''\"aaa ?:, bbb\"'?\n> >\n> > Yes, that's true for any discarded tokens.\n>\n> I can up with this text for these two items. I think it still needs ro\n> be more specific:\n>\n> <listitem>\n> <!--\n> Author: Alexander Korotkov <akorotkov@postgresql.org>\n> 2021-01-31 [0c4f355c6] Fix parsing of complex morphs to tsquery\n> -->\n>\n> <para>\n> Fix to_tsquery() and websearch_to_tsquery() to properly parse\n> certain discarded tokens in quotes (Alexander Korotkov)\n> </para>\n\nThis relates not just to quotes. Original problem relates to quotes\nin websearch_to_tsquery() and phrase operator in to_tsquery(). But\nthe solution changes output for all query operands containing\ndiscarded tokens.\n\nCould we try this?\n\nMake to_tsquery() and websearch_to_tsquery() produce more strict\noutput for query parts containing discarded tokens. 
In particular,\nthis makes to_tsquery() and websearch_to_tsquery() properly parse the\ndiscarded tokens in phrase search operands and quotes correspondingly.\n\n> <para>\n> Certain discarded tokens, like underscore, caused the output\n> of these functions to produce incorrect tsquery output, e.g.,\n> websearch_to_tsquery('\"pg_class pg\"') used to output '( pg &\n> class ) <-> pg', but now outputs 'pg <-> class <-> pg'.\n> </para>\n> </listitem>\n\nThis part looks good to me.  I'd just suggest to extend the example to\nto_tsquery() as well.\n\nCertain discarded tokens, like underscore, caused the output of these\nfunctions to produce incorrect tsquery output, e.g., both\nwebsearch_to_tsquery('\"pg_class pg\"') and to_tsquery('pg_class <->\npg') used to output '( pg & class ) <-> pg', but now both output 'pg\n<-> class <-> pg'.\n\n> <listitem>\n> <!--\n> Author: Alexander Korotkov <akorotkov@postgresql.org>\n> 2021-05-03 [eb086056f] Make websearch_to_tsquery() parse text in quotes as a si\n> -->\n>\n> <para>\n> Fix websearch_to_tsquery() to properly parse multiple adjacent\n> discarded tokens in quotes (Alexander Korotkov)\n> </para>\n>\n> <para>\n> Previously, quoted text that contained multiple adjacent discarded\n> tokens were treated as multiple tokens, causing incorrect tsquery\n> output, e.g., websearch_to_tsquery('\"aaa: bbb\"') used to output\n> 'aaa <2> bbb', but now outputs 'aaa <-> bbb'.\n> </para>\n> </listitem>\n\nThis item looks good to me.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 12 May 2021 00:19:37 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "\nThank you for the review. 
I did skip these items based on the criteria\nI normally use, but it is good for people to review this list to see if\nmy analysis or criteria is wrong.\n\n---------------------------------------------------------------------------\n\nOn Mon, May 10, 2021 at 09:40:45AM -0500, Justin Pryzby wrote:\n> Same as the last couple years, I checked for missing items in the release\n> notes, running something like this.\n> \n> git log --cherry-pick --oneline origin/REL_13_STABLE...origin/master\n> \n> Should any of these be included?\n> \n> f82de5c46b Do COPY FROM encoding conversion/verification in larger chunks.\n> 9e596b65f4 Add \"LP_DEAD item?\" column to GiST pageinspect functions\n> \n> 10a5b35a00 Report resource usage at the end of recovery\n> 7e453634bb Add additional information in the vacuum error context.\n> 1ea396362b Improve logging of bad parameter values in BIND messages.\n> \n> 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n> \n> 7db0cd2145 Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE\n> 01e658fa74 Hash support for row types\n> a929e17e5a Allow run-time pruning on nested Append/MergeAppend nodes\n> 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n> c7aba7c14e Support subscripting of arbitrary types, not only arrays.\n> \n> 7b94e99960 Remove catalog function currtid()\n> 926fa801ac Remove undocumented IS [NOT] OF syntax.\n> cd9c1b3e19 Rename PGPROC->vacuumFlags to statusFlags\n> a04daa97a4 Remove es_result_relation_info from EState.\n> \n> 3d351d916b Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n> fea10a6434 Rename VariableCacheData.nextFullXid to nextXid.\n> 9de9294b0c Stop archive recovery if WAL generated with wal_level=minimal is found. 
(see also 15251c0a6)\n> \n> f40c6969d0 Routine usage information schema tables\n> \n> b4af70cb21 Simplify state managed by VACUUM.\n> 4753ef37e0 Use a WaitLatch for vacuum/autovacuum sleeping\n> 9dd963ae25 Recycle nbtree pages deleted during same VACUUM.\n> 3c3b8a4b26 Truncate line pointer array during VACUUM.\n> \n> ad1c36b070 Fix foreign-key selectivity estimation in the presence of constants.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:19:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 11:57:10AM +0900, Amit Langote wrote:\n> On Mon, May 10, 2021 at 11:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Same as the last couple years, I checked for missing items in the release\n> > notes, running something like this.\n> >\n> > git log --cherry-pick --oneline origin/REL_13_STABLE...origin/master\n> >\n> > Should any of these be included?\n> >\n> > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n> \n> I was just about to ask Bruce what he thinks about these.\n> \n> To clarify, the first one is a big refactoring commit that allowed us\n> to get rid of inheritance_planner(), a fairly inefficient way of\n> planning updates/deletes on partitioned tables, especially when many\n> partitions remain after pruning (or when pruning cannot be used). One\n> may see the performance of update/deletes, especially on partitioned\n> tables, to be generally improved as a result of this commit, but maybe\n> not as significantly as to be mentioned in E.1.3.1.1. Partitioning or\n> even E.1.3.1.4. General Performance. 
However, one user-visible\n> feature that came out of this work is that updates/deletes can now use\n> run-time pruning whereas they couldn't before. Maybe that ought to be\n> mentioned. (This reminds me to send a patch to remove the note from\n> 5.11.4. Partition Pruning that says that runtime pruning cannot be\n> used for update/delete).\n> \n> The other two commits can lead to improved performance of\n> update/deletes when there are many unpruned partitions in the plan,\n> but runtime pruning (a new feature as mentioned above) leads to only\n> one or few partitions to actually be updated/deleted from. I admit\n> though that the cases for which performance has been improved still\n> under-perform the cases that already performed better starting in v12,\n> that is, the cases where the planner itself is able to trim down the\n> plan to contain one or few partitions, so maybe nothing very big to\n> see here just yet. You may want to take a look at the benchmark\n> results I had posted here:\n> https://www.postgresql.org/message-id/CA%2BHiwqEcawatEaUh1uTbZMEZTJeLzbroRTz9_X9Z5CFjTWJkhw%40mail.gmail.com\n\nSeems we might want to have a general release note item that mentions\nimproved update/delete performance for partitioned tables, yes? 
I think\nthe run-time pruning and single-partition pullup are significant.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:23:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 07:53:08AM -0700, Peter Geoghegan wrote:\n> On Sun, May 9, 2021 at 11:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I have committed the first draft of the PG 14 release notes.\n> \n> This definitely isn't necessary, since the commit in question was a\n> totally mechanical thing that cleaned up a minor inconsistency:\n> \n> Initialize work_mem and maintenance_work_mem using current guc.c\n> default (Peter Geoghegan)\n> \n> Oversight in commit 848ae330a49, which increased the previous defaults\n> for work_mem and maintenance_work_mem by 4X. 
IS THIS A BEHAVIORAL\n> CHANGE?\n\nOK, that's what I thought --- removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 17:24:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-11, Bruce Momjian wrote:\n\n> > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n\nAre these the set that means UPDATE/DELETE can now do partition pruning?\nI think that's a significant improvement, because table partitioning\nbecomes usable in situations where it previously wasn't, so let's\nmention it.\n\n> > c7aba7c14e Support subscripting of arbitrary types, not only arrays.\n\nThere's an entry that says we allow subscripting of jsonb, but the\nfeature is more general than that: our subscripting is now extensible,\nand jsonb is just the first implementation. I think this should be\nmentioned.\n\n> > f40c6969d0 Routine usage information schema tables\n\nThis is one small step towards compliance, I think, so it should be\nmentioned.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Tue, 11 May 2021 17:47:27 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 10:35:23AM -0400, Bruce Momjian wrote:\n> > | Allow more than the common name (CN) to be matched for client certificate authentication (Andrew Dunstan) \n> > Your description makes it sound like arbitrary attributes can be compared. 
But\n> > the option just allows comparing CN or DN.\n> \n> OK, new text is:\n> \n> \t<para>\n> \tAllow the certificate's distinguished name (DN) to be matched for client\n> \tcertificate authentication (Andrew Dunstan)\n> \t</para>\n> \t\n> \t<para>\n> \tThe new pg_hba.conf keyword \"clientname=DN\" allows comparison with\n> \tnon-CN certificate attributes and can be combined with ident maps.\n> \t</para>\n> \t</listitem>\n\nI think this part is still misleading. The option just allows DN/CN, so it's\nstrange to say \"non-CN attributes\".\n\n> > | Add date_bin function (John Naylor) \n> > This truncate timestamps on an arbitrary interval.\n> > Like date_trunc() but also supports eg. '15 minutes', and also uses an arbitrary \"origin\".\n> \n> OK, so what I think it returns is the greatest datetime that is a\n> multiple of interval values added to origin which is not greater than\n> the target date, right? Am I the only one who finds this unclear? \n> Doesn't our documentation of this feature need to explain this?\n\nI think the documentation is okay, myself:\nhttps://www.postgresql.org/docs/devel/functions-datetime.html#FUNCTIONS-DATETIME-BIN\n| The function date_bin “bins” the input timestamp into the specified interval (the stride)...\n\nAnyway, the release notes have to be at least as succiently as that.\n\n+Allow VACUUM to eagerly add newly deleted btree pages in the free space map (Peter Geoghegan)\n\nsay added \"to\" the FSM\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 11 May 2021 17:13:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 12:19:37AM +0300, Alexander Korotkov wrote:\n> This relates not just to quotes. Original problem relates to quotes\n> in websearch_to_tsquery() and phrase operator in to_tsquery(). 
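For readers following along, the tsquery behavior change under discussion can be sketched with a couple of queries (the before/after outputs are the ones given in this thread; the notation follows the thread's shorthand rather than psql's quoted-lexeme form):

```sql
-- Underscore is a "discarded" token, so pg_class splits into pg and class.
-- Both functions now keep the split lexemes in a single phrase.
SELECT websearch_to_tsquery('"pg_class pg"');
-- previously: ( pg & class ) <-> pg
-- now:        pg <-> class <-> pg

SELECT to_tsquery('pg_class <-> pg');
-- previously: ( pg & class ) <-> pg
-- now:        pg <-> class <-> pg
```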
But\n> the solution changes output for all query operands containing\n> discarded tokens.\n> \n> Could we try this?\n> \n> Make to_tsquery() and websearch_to_tsquery() produce more strict\n> output for query parts containing discarded tokens. In particular,\n> this makes to_tsquery() and websearch_to_tsquery() properly parse the\n> discarded tokens in phrase search operands and quotes correspondingly.\n\n> > <para>\n> > Certain discarded tokens, like underscore, caused the output\n> > of these functions to produce incorrect tsquery output, e.g.,\n> > websearch_to_tsquery('\"pg_class pg\"') used to output '( pg &\n> > class ) <-> pg', but now outputs 'pg <-> class <-> pg'.\n> > </para>\n> > </listitem>\n> \n> This part looks good to me. I'd just suggest to extend the example to\n> to_tsquery() as well.\n> \n> Certain discarded tokens, like underscore, caused the output of these\n> functions to produce incorrect tsquery output, e.g., both\n> websearch_to_tsquery('\"pg_class pg\"') and to_tsquery('pg_class <->\n> pg') used to output '( pg & class ) <-> pg', but now both output 'pg\n> <-> class <-> pg'.\n\nOK, I went with this:\n\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [0c4f355c6] Fix parsing of complex morphs to tsquery\n\t-->\n\t\n\t<para>\n\tFix to_tsquery() and websearch_to_tsquery() to properly parse\n\tquery text containing discarded tokens (Alexander Korotkov)\n\t</para>\n\t\n\t<para>\n\tCertain discarded tokens, like underscore, caused the output of\n\tthese functions to produce incorrect tsquery output, e.g., both\n\twebsearch_to_tsquery('\"pg_class pg\"') and to_tsquery('pg_class\n\t&lt;-&gt; pg') used to output '( pg & class ) &lt;-&gt; pg',\n\tbut now both output 'pg &lt;-&gt; class &lt;-&gt; pg'.\n\t</para>\n\t</listitem>\n\n> > <listitem>\n> > <!--\n> > Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > 2021-05-03 [eb086056f] Make websearch_to_tsquery() parse text in quotes as a si\n> > -->\n> 
>\n> > <para>\n> > Fix websearch_to_tsquery() to properly parse multiple adjacent\n> > discarded tokens in quotes (Alexander Korotkov)\n> > </para>\n> >\n> > <para>\n> > Previously, quoted text that contained multiple adjacent discarded\n> > tokens were treated as multiple tokens, causing incorrect tsquery\n> > output, e.g., websearch_to_tsquery('\"aaa: bbb\"') used to output\n> > 'aaa <2> bbb', but now outputs 'aaa <-> bbb'.\n> > </para>\n> > </listitem>\n> \n> This item looks good to me.\n\nGood, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 22:31:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 05:13:21PM -0500, Justin Pryzby wrote:\n> On Tue, May 11, 2021 at 10:35:23AM -0400, Bruce Momjian wrote:\n> > > | Allow more than the common name (CN) to be matched for client certificate authentication (Andrew Dunstan) \n> > > Your description makes it sound like arbitrary attributes can be compared. But\n> > > the option just allows comparing CN or DN.\n> > \n> > OK, new text is:\n> > \n> > \t<para>\n> > \tAllow the certificate's distinguished name (DN) to be matched for client\n> > \tcertificate authentication (Andrew Dunstan)\n> > \t</para>\n> > \t\n> > \t<para>\n> > \tThe new pg_hba.conf keyword \"clientname=DN\" allows comparison with\n> > \tnon-CN certificate attributes and can be combined with ident maps.\n> > \t</para>\n> > \t</listitem>\n> \n> I think this part is still misleading. The option just allows DN/CN, so it's\n> strange to say \"non-CN attributes\".\n\nOK, so this is where I am confused. I searched for distinguished name\n(DN) and came up with DN being a concatentation of all the fields\nprovided to the certificate signing request (CSR). Is that right? 
\nWouldn't people test _parts_ of the DN, rather than all of it. The test\nin the patch seems to do that:\n\n+ \"# MAPNAME SYSTEM-USERNAME PG-USERNAME\\n\",\n+ \"dn \\\"CN=ssltestuser-dn,OU=Testing,OU=Engineering,O=PGDG\\\" ssltestuser\\n\",\n+ \"dnre \\\"/^.*OU=Testing,.*\\$\\\" ssltestuser\\n\",\n+ \"cn ssltestuser-dn ssltestuser\\n\";\n\nI think someone need to explain to me exactly what the DN is and how it\nis used. Sorry.\n\n> > > | Add date_bin function (John Naylor) \n> > > This truncate timestamps on an arbitrary interval.\n> > > Like date_trunc() but also supports eg. '15 minutes', and also uses an arbitrary \"origin\".\n> > \n> > OK, so what I think it returns is the greatest datetime that is a\n> > multiple of interval values added to origin which is not greater than\n> > the target date, right? Am I the only one who finds this unclear? \n> > Doesn't our documentation of this feature need to explain this?\n> \n> I think the documentation is okay, myself:\n> https://www.postgresql.org/docs/devel/functions-datetime.html#FUNCTIONS-DATETIME-BIN\n> | The function date_bin “bins” the input timestamp into the specified interval (the stride)...\n\nOK, if everyone else is happy with it, I am fine. 
I now have:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2021-03-24 [49ab61f0b] Add date_bin function\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2021-04-10 [496e58bb0] Improve behavior of date_bin with origin in the\n\tfuture\n\t-->\n\t\n\t<para>\n\tAdd date_bin function (John Naylor)\n\t</para>\n\t\n\t<para>\n\tThe function date_bin \"bins\" the input timestamp into a specified\n\tinterval aligned with a specified origin.\n\t</para>\n\t</listitem>\n\n> Anyway, the release notes have to be at least as succiently as that.\n> \n> +Allow VACUUM to eagerly add newly deleted btree pages in the free space map (Peter Geoghegan)\n> \n> say added \"to\" the FSM\n\nYes, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 11 May 2021 22:45:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 6:43 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> First Bruce, thank you for compiling the release notes draft. 
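As a concrete illustration of the date_bin entry above (the timestamps here are chosen for this sketch, not taken from the thread):

```sql
-- date_bin(stride, source, origin): returns the start of the
-- stride-wide bucket, counted from origin, that contains source.
SELECT date_bin('15 minutes',
                timestamp '2021-05-12 17:38:22',
                timestamp '2021-05-12 00:00:00');
-- result: 2021-05-12 17:30:00
```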
Comments\n> inline:\n>\n> On 5/10/21 11:56 PM, Amit Kapila wrote:\n> > On Mon, May 10, 2021 at 7:30 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>\n> >> Can these be merged:\n> >> Allow logical replication to stream long transactions to standbys (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> >> Improve the logical replication API to allow streaming large in-progress transactions (Tomas Vondra, Dilip Kumar, Amit Kapila)\n> >>\n> >\n> > I think we should merge the below two:\n> > Add support for streaming to built-in logical replication (Vondra,\n> > Dilip Kumar, Amit Kapila, Ajin Cherian, Nikhil Sontakke, Stas Kelvich)\n> >\n> > Allow logical replication to stream long transactions to standbys\n> > (Dilip Kumar, Tomas Vondra, Amit Kapila, Nikhil Sontakke)\n> >\n> > and say something like:\n> > Allow logical replication to stream long in-progress transactions to standbys\n> > Previously transactions that exceeded logical_decoding_work_mem were\n> > written to disk until the transaction completed.\n>\n> I was going to post a question about the above bullet point(s), so this\n> clarification helps -- thanks!\n>\n> Two comments on the above:\n>\n> 1. I do believe ther eis a missing \"Tomas\" in the first release note.\n> 2. 
Would you want to s/standbys/subscribers/ given an instance can be in\n> read/write mode but still receive logical changes?\n>\n\nRight, that makes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 May 2021 09:15:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 2:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>\n> OK, I added this:\n>\n> <listitem>\n> <!--\n> Author: Amit Kapila <akapila@postgresql.org>\n> 2020-10-15 [d7eb52d71] Execute invalidation messages for each XLOG_XACT_INVALID\n> -->\n>\n> <para>\n> Allow logical decoding to more efficently process cache invalidation messages\n> </para>\n>\n> <para>\n> This allows Logical decoding to work efficiently in presence of a large amount of DDL.\n> </para>\n> </listitem>\n>\n\nDilip Kumar is the primary author of this work.\n\nThanks for all the updates.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 May 2021 09:18:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 6:23 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, May 11, 2021 at 11:57:10AM +0900, Amit Langote wrote:\n> > On Mon, May 10, 2021 at 11:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Same as the last couple years, I checked for missing items in the release\n> > > notes, running something like this.\n> > >\n> > > git log --cherry-pick --oneline origin/REL_13_STABLE...origin/master\n> > >\n> > > Should any of these be included?\n> > >\n> > > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > > c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n> >\n> > I was just about to ask Bruce what he thinks about these.\n> >\n> > To 
clarify, the first one is a big refactoring commit that allowed us\n> > to get rid of inheritance_planner(), a fairly inefficient way of\n> > planning updates/deletes on partitioned tables, especially when many\n> > partitions remain after pruning (or when pruning cannot be used). One\n> > may see the performance of update/deletes, especially on partitioned\n> > tables, to be generally improved as a result of this commit, but maybe\n> > not as significantly as to be mentioned in E.1.3.1.1. Partitioning or\n> > even E.1.3.1.4. General Performance. However, one user-visible\n> > feature that came out of this work is that updates/deletes can now use\n> > run-time pruning whereas they couldn't before. Maybe that ought to be\n> > mentioned. (This reminds me to send a patch to remove the note from\n> > 5.11.4. Partition Pruning that says that runtime pruning cannot be\n> > used for update/delete).\n> >\n> > The other two commits can lead to improved performance of\n> > update/deletes when there are many unpruned partitions in the plan,\n> > but runtime pruning (a new feature as mentioned above) leads to only\n> > one or few partitions to actually be updated/deleted from. I admit\n> > though that the cases for which performance has been improved still\n> > under-perform the cases that already performed better starting in v12,\n> > that is, the cases where the planner itself is able to trim down the\n> > plan to contain one or few partitions, so maybe nothing very big to\n> > see here just yet. 
You may want to take a look at the benchmark\n> > results I had posted here:\n> > https://www.postgresql.org/message-id/CA%2BHiwqEcawatEaUh1uTbZMEZTJeLzbroRTz9_X9Z5CFjTWJkhw%40mail.gmail.com\n>\n> Seems we might want to have a general release note item that mentions\n> improved update/delete performance for partitioned tables, yes?\n\nYes, that would not be entirely wrong.\n\nThe cases that benefit more from these commits are those where the\nplanner is not able to prune many (or any) partitions, such as, for\nexample, if a generic plan is chosen for prepared update/delete\nqueries. The cases where the planner *is* able to prune most\npartitions should run faster with v14 but only marginally; that case\nwas already extensively improved by the work done in v12 release.\n\n> I think\n> the run-time pruning and single-parition pullup are significant.\n\nAssuming, by \"single-partition pullup\", you mean run-time pruning now\nallows the executor to touch only one affected partition out of the\nmany contained in a update/delete plan, then yes, that's new in v14.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 15:41:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 6:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-May-11, Bruce Momjian wrote:\n>\n> > > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > > c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n>\n> Are these the set that means UPDATE/DELETE can now do partition pruning?\n> I think that's a significant improvement, because table partitioning\n> becomes usable in situations where it previously wasn't, so let's\n> mention it.\n\nThanks. 
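To sketch what the new run-time pruning behavior looks like for update/delete (table names are hypothetical and the plan output is illustrative):

```sql
-- Run-time pruning for UPDATE on a partitioned table (sketch).
CREATE TABLE measurements (logdate date, peaktemp int)
    PARTITION BY RANGE (logdate);
CREATE TABLE measurements_2020 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
CREATE TABLE measurements_2021 PARTITION OF measurements
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

PREPARE upd(date) AS
    UPDATE measurements SET peaktemp = 0 WHERE logdate = $1;

-- Once a generic plan is chosen, EXPLAIN (ANALYZE) EXECUTE upd('2021-05-12')
-- can report "Subplans Removed", i.e. the untouched partitions are pruned
-- at execution time -- something only SELECT plans could do before v14.
```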
To be clear, update/delete can now do *run-time* partition\npruning, while plan-time pruning has already been working since v12\n(commit 428b260f87). How? The plan for update/delete now looks very\nsimilar to that for an equivalent select query -- Append/MergeAppend\nto scan partitions, with ModifyTable on top to do the actual\nupdate/delete -- and because Append/MergeAppend nodes carry run-time\npruning info, update/delete now get that ability.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 15:59:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 10:45:04PM -0400, Bruce Momjian wrote:\n> On Tue, May 11, 2021 at 05:13:21PM -0500, Justin Pryzby wrote:\n> > On Tue, May 11, 2021 at 10:35:23AM -0400, Bruce Momjian wrote:\n> > > > | Allow more than the common name (CN) to be matched for client certificate authentication (Andrew Dunstan) \n> > > > Your description makes it sound like arbitrary attributes can be compared. But\n> > > > the option just allows comparing CN or DN.\n> > > \n> > > OK, new text is:\n> > > \n> > > \t<para>\n> > > \tAllow the certificate's distinguished name (DN) to be matched for client\n> > > \tcertificate authentication (Andrew Dunstan)\n> > > \t</para>\n> > > \t\n> > > \t<para>\n> > > \tThe new pg_hba.conf keyword \"clientname=DN\" allows comparison with\n> > > \tnon-CN certificate attributes and can be combined with ident maps.\n> > > \t</para>\n> > > \t</listitem>\n> > \n> > I think this part is still misleading. The option just allows DN/CN, so it's\n> > strange to say \"non-CN attributes\".\n> \n> OK, so this is where I am confused. I searched for distinguished name\n> (DN) and came up with DN being a concatentation of all the fields\n> provided to the certificate signing request (CSR). Is that right? 
\n> Wouldn't people test _parts_ of the DN, rather than all of it.\n\n+Andrew\n\nThe full DN is probably not the postgres username, so the docs suggest that:\n| This option is probably best used in conjunction with a username map.\n\nYou're right that clientname=DN allows testing *parts* of the DN, but I don't\nknow if there's any reason to believe that's the typical use case.\n\nThe primary utility of clientname=DN seems to be that the CN alone is (or can\nbe) ambiguous - matching on the full DN is intended to resolve that. I think\nthe release notes should focus on this.\n\nMatching parts of the DN (other than the CN) seems like a 2ndary use.\n\nMaybe a variation on your original words is better.\n| Allow the distinguished name (DN) to be matched for client certificate authentication (Andrew Dunstan) \n| Previously, matching was done only on the common name (CN).\n| With a username map, the DN can be matched in full or in part.\n\n> The test in the patch seems to do that:\n> \n> + \"# MAPNAME SYSTEM-USERNAME PG-USERNAME\\n\",\n> + \"dn \\\"CN=ssltestuser-dn,OU=Testing,OU=Engineering,O=PGDG\\\" ssltestuser\\n\",\n> + \"dnre \\\"/^.*OU=Testing,.*\\$\\\" ssltestuser\\n\",\n> + \"cn ssltestuser-dn ssltestuser\\n\";\n> \n> I think someone need to explain to me exactly what the DN is and how it\n> is used. Sorry.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 12 May 2021 10:06:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 09:15:55AM +0530, Amit Kapila wrote:\n> On Tue, May 11, 2021 at 6:43 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > I was going to post a question about the above bullet point(s), so this\n> > clarification helps -- thanks!\n> >\n> > Two comments on the above:\n> >\n> > 1. I do believe ther eis a missing \"Tomas\" in the first release note.\n> > 2. 
Would you want to s/standbys/subscribers/ given an instance can be in\n> > read/write mode but still receive logical changes?\n> >\n> \n> Right, that makes sense to me.\n\nChange made.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 13:13:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 09:18:27AM +0530, Amit Kapila wrote:\n> On Wed, May 12, 2021 at 2:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> >\n> > OK, I added this:\n> >\n> > <listitem>\n> > <!--\n> > Author: Amit Kapila <akapila@postgresql.org>\n> > 2020-10-15 [d7eb52d71] Execute invalidation messages for each XLOG_XACT_INVALID\n> > -->\n> >\n> > <para>\n> > Allow logical decoding to more efficently process cache invalidation messages\n> > </para>\n> >\n> > <para>\n> > This allows Logical decoding to work efficiently in presence of a large amount of DDL.\n> > </para>\n> > </listitem>\n> >\n> \n> Dilip Kumar is the primary author of this work.\n> \n> Thanks for all the updates.\n\nThanks, added.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 13:14:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 10:06:05AM -0500, Justin Pryzby wrote:\n> On Tue, May 11, 2021 at 10:45:04PM -0400, Bruce Momjian wrote:\n> > OK, so this is where I am confused. I searched for distinguished name\n> > (DN) and came up with DN being a concatentation of all the fields\n> > provided to the certificate signing request (CSR). Is that right? 
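To ground the clientname=DN discussion, this is the sort of configuration the option enables, pairing a pg_hba.conf entry with a pg_ident.conf map (a hypothetical sketch loosely modeled on the patch's test snippet quoted elsewhere in this thread; names and the network range are illustrative):

```
# pg_hba.conf: match the certificate's full DN instead of just its CN
hostssl all all 0.0.0.0/0 cert clientname=DN map=dnmap

# pg_ident.conf: map a full DN (or a regex over it) to a database user
# MAPNAME  SYSTEM-USERNAME                                       PG-USERNAME
dnmap      "CN=ssltestuser-dn,OU=Testing,OU=Engineering,O=PGDG"  ssltestuser
dnmap      "/^.*OU=Testing,.*$"                                  ssltestuser
```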
\n> > Wouldn't people test _parts_ of the DN, rather than all of it.\n> \n> +Andrew\n> \n> The full DN is probably not the postgres username, so the docs suggest that:\n> | This option is probably best used in conjunction with a username map.\n> \n> You're right that clientname=DN allows testing *parts*, of the DN, but I don't\n> know if there's any reason to believe that's the typical use case.\n> \n> The primary utility of clientname=DN seems to be that the CN alone is (or can\n> be) ambiguous - matching on the full DN is intended to resolve that. I think\n> the release notes should focus on this.\n\nOK, that matches what I thought. Here is my updated version:\n\n\t<listitem>\n\t<!--\n\tAuthor: Andrew Dunstan <andrew@dunslane.net>\n\t2021-03-29 [6d7a6feac] Allow matching the DN of a client certificate for\n\tauthen\n\t-->\n\t\n\t<para>\n\tAllow the certificate's distinguished name (DN) to be matched for client\n\tcertificate authentication (Andrew Dunstan)\n\t</para>\n\t\n\t<para>\n-->\tThe new pg_hba.conf keyword \"clientname=DN\" allows comparison with\n-->\tcertificate attributes beyond the CN and can be combined with ident\n-->\tmaps.\n\t</para>\n\t</listitem>\n\nTechnically these are attributes of the certificate signing request\n(CSR), but I didn't want to mention that here.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 13:25:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 11:57:10AM +0900, Amit Langote wrote:\n> On Mon, May 10, 2021 at 11:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > c5b7ba4e67 Postpone some stuff out of 
ExecInitModifyTable.\n> \n> I was just about to ask Bruce what he thinks about these.\n> \n> To clarify, the first one is a big refactoring commit that allowed us\n> to get rid of inheritance_planner(), a fairly inefficient way of\n> planning updates/deletes on partitioned tables, especially when many\n> partitions remain after pruning (or when pruning cannot be used). One\n> may see the performance of update/deletes, especially on partitioned\n> tables, to be generally improved as a result of this commit, but maybe\n> not as significantly as to be mentioned in E.1.3.1.1. Partitioning or\n> even E.1.3.1.4. General Performance. However, one user-visible\n> feature that came out of this work is that updates/deletes can now use\n> run-time pruning whereas they couldn't before. Maybe that ought to be\n> mentioned. (This reminds me to send a patch to remove the note from\n> 5.11.4. Partition Pruning that says that runtime pruning cannot be\n> used for update/delete).\n> \n> The other two commits can lead to improved performance of\n> update/deletes when there are many unpruned partitions in the plan,\n> but runtime pruning (a new feature as mentioned above) leads to only\n> one or few partitions to actually be updated/deleted from. I admit\n> though that the cases for which performance has been improved still\n> under-perform the cases that already performed better starting in v12,\n> that is, the cases where the planner itself is able to trim down the\n> plan to contain one or few partitions, so maybe nothing very big to\n> see here just yet. 
You may want to take a look at the benchmark\n> results I had posted here:\n> https://www.postgresql.org/message-id/CA%2BHiwqEcawatEaUh1uTbZMEZTJeLzbroRTz9_X9Z5CFjTWJkhw%40mail.gmail.com\n\nOK, I added this entry:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-03-31 [86dc90056] Rework planning and execution of UPDATE and DELETE.\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-04-06 [a1115fa07] Postpone some more stuff out of ExecInitModifyTable.\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-04-06 [c5b7ba4e6] Postpone some stuff out of ExecInitModifyTable.\n\t-->\n\t\n\t<para>\n\tImprove the performance of updates/deletes on partitioned tables\n\twhen only a few partitions are affected (Amit Langote, Tom Lane)\n\t</para>\n\t\n\t<para>\n\tThis also allows run-time pruning of updates/deletes on partitioned\n\ttables.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 13:39:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 11, 2021 at 05:47:27PM -0400, �lvaro Herrera wrote:\n> On 2021-May-11, Bruce Momjian wrote:\n> \n> > > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > > c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n> \n> Are these the set that means UPDATE/DELETE can now do partition pruning?\n> I think that's a significant improvement, because table partitioning\n> becomes usable in situations where it previously wasn't, so let's\n> mention it.\n> \n> > > c7aba7c14e Support subscripting of arbitrary types, not only arrays.\n> \n> There's an entry that says we allow subscripting of jsonb, but the\n> feature is more general than that: our subscripting 
is now extensible,\n> and jsonb is just the first implementation. I think this should be\n> mentioned.\n\nOK, I added it into the existing entry:\n\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [676887a3b] Implementation of subscripting for jsonb\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [81fcc72e6] Filling array gaps during jsonb subscripting\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [aa6e46daf] Throw error when assigning jsonb scalar instead of a com\n\t-->\n\t\n\t<para>\n\tAllow subscripting of jsonb (Dmitry Dolgov)\n\t</para>\n\t\n\t<para>\n\tSubscripting can be used to extract from and assign to jsonb documents.\n\tSubscripting is now extensible so extensions and built-in data type can make use of this.\n\t</para>\n\t</listitem>\n\n> > > f40c6969d0 Routine usage information schema tables\n> \n> This is one small step towards compliance, I think, so it should be\n> mentioned.\n\nI added this item:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2021-02-17 [f40c6969d] Routine usage information schema tables\n\t-->\n\t\n\t<para>\n\tImplement information schema view routine_column_usage to track columns\n\treferenced by function and procedure default expressions (Peter Eisentraut)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 15:48:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-12, Bruce Momjian wrote:\n\n> OK, I added it into the existing entry:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n> \t2021-01-31 [676887a3b] Implementation of subscripting for jsonb\n> \tAuthor: Alexander Korotkov 
<akorotkov@postgresql.org>\n> \t2021-01-31 [81fcc72e6] Filling array gaps during jsonb subscripting\n> \tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n> \t2021-01-31 [aa6e46daf] Throw error when assigning jsonb scalar instead of a com\n> \t-->\n> \t\n> \t<para>\n> \tAllow subscripting of jsonb (Dmitry Dolgov)\n> \t</para>\n> \t\n> \t<para>\n> \tSubscripting can be used to extract from and assign to jsonb documents.\n> \tSubscripting is now extensible so extensions and built-in data type can make use of this.\n> \t</para>\n> \t</listitem>\n\nI dunno, this seems to approach the feature from the specifics rather\nthan its full generality. I would have said something like \n\n Make subscripting user-extensible for \"container\" types\n <!-- maybe \"array-like types\" -->\n\n Subscripting JSONB is now integrated in core, and adding user-defined\n subscripting for other types is possible.\n\n\n-- \n�lvaro Herrera Valdivia, Chile\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n", "msg_date": "Wed, 12 May 2021 18:06:31 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-10, Bruce Momjian wrote:\n\n> I reworeded it to:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Peter Eisentraut <peter@eisentraut.org>\n> \t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n> \t-->\n> \t\n> \t<para>\n> \tLimit the ways password_encryption can enable md5 hashing (Peter Eisentraut)\n> \t</para>\n> \t\n> \t<para>\n> \tPreviously on/true/yes/1 values enabled md5. 
Now, only the string md5 does this.\n> \t</para>\n> \t</listitem>\n> \n> I also have this entry:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Peter Eisentraut <peter@eisentraut.org>\n> \t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n> \t-->\n> \t\n> \t<para>\n> \tChange password_encryption's default to scram-sha-256 (Peter Eisentraut)\n> \t</para>\n> \t\n> \t<para>\n> \tPreviously it was md5.\n> \t</para>\n> \t</listitem>\n> \n> I am hesitant to merge them since they are different changes.\n\nDifferent?  The commit IDs look pretty similar to me, not to say exactly\nidentical.\n\nReading these, it's pretty unclear what password_encryption *is* in the\nfirst place.  The item should say \"the password_encryption server\nparameter\" -- that's what we have in the pg10 note for precursor commit\neb61136dc.\n\nI would suggest \n\n<para>\n  Change the default of the password_encryption server parameter to\n  scram-sha-256.\n</para>\n\n<para>\n  Also, the legacy (and undocumented) boolean-like values which were\n  previously synonyms of <literal>md5</literal> are no longer accepted.\n</para>\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return" (www.actsofgord.com)\n\n\n", "msg_date": "Wed, 12 May 2021 18:19:28 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 02:42:08PM -0400, John Naylor wrote:\n> Hi Bruce,\n> \n> Thanks for doing this work again!\n> \n> > Add date_bin function (John Naylor)\n> >\n> > WHAT DOES THIS DO?\n> \n> Hard to describe in a one-liner, but it lines up timestamps into regular\n> intervals as specified by the user. 
It is more clear after seeing examples:\n> \n> https://www.postgresql.org/docs/devel/functions-datetime.html#\n> FUNCTIONS-DATETIME-BIN\n\nOK, I just copied the doc sentence into the release notes.\n\n> > Dramatically improve Unicode normalization (John Naylor)\n> >\n> > WHAT OPERATIONS USE THIS?\n> \n> PG13 added the normalize() function to normalize Unicode sequences, as well as\n> the IS NORMALIZED syntax to test for that. The commits* here do not change\n> behavior and only improve performance. As such, this really belongs in the\n> performance section.\n> \n> *There is one additional commit that belongs to this entry:\n> \n> Author: Michael Paquier <michael@paquier.xyz>\n> 2020-10-11 [80f8eb79e] Use perfect hash for NFC and NFKC Unicode Normalization\n> quick check\n\nOK, updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\t2020-10-11 [80f8eb79e] Use perfect hash for NFC and NFKC Unicode Normalization\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\t2020-10-23 [783f0cc64] Improve performance of Unicode {de,re}composition in the\n\t-->\n\t\n\t<para>\n\tDramatically improve Unicode normalization (John Naylor)\n\t</para>\n\t\n\t<para>\n\tThis speeds normalize() and IS NORMALIZED.\n\t</para>\n\t</listitem>\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 23:15:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 06:06:31PM -0400, Álvaro Herrera wrote:\n> On 2021-May-12, Bruce Momjian wrote:\n> \n> > OK, I added it into the existing entry:\n> > \n> > \t<listitem>\n> > \t<!--\n> > \tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n> > \t2021-01-31 [676887a3b] Implementation of subscripting for jsonb\n> > \tAuthor: Alexander Korotkov 
<akorotkov@postgresql.org>\n> > \t2021-01-31 [81fcc72e6] Filling array gaps during jsonb subscripting\n> > \tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n> > \t2021-01-31 [aa6e46daf] Throw error when assigning jsonb scalar instead of a com\n> > \t-->\n> > \t\n> > \t<para>\n> > \tAllow subscripting of jsonb (Dmitry Dolgov)\n> > \t</para>\n> > \t\n> > \t<para>\n> > \tSubscripting can be used to extract from and assign to jsonb documents.\n> > \tSubscripting is now extensible so extensions and built-in data type can make use of this.\n> > \t</para>\n> > \t</listitem>\n> \n> I dunno, this seems to approach the feature from the specifics rather\n> than its full generality. I would have said something like \n> \n> Make subscripting user-extensible for \"container\" types\n> <!-- maybe \"array-like types\" -->\n> \n> Subscripting JSONB is now integrated in core, and adding user-defined\n> subscripting for other types is possible.\n\nI wanted to highlight the more useful aspect of this change, which I\nthink is JSONB subscripting.  Here is an updated version.  Changes?\n\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [676887a3b] Implementation of subscripting for jsonb\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [81fcc72e6] Filling array gaps during jsonb subscripting\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [aa6e46daf] Throw error when assigning jsonb scalar instead of a com\n\t-->\n\t\n\t<para>\n\tAllow subscripting of JSONB and simplify the implementation of subscripting\n\t(Dmitry Dolgov)\n\t</para>\n\t\n\t<para>\n\tJSONB subscripting can be used to extract from and assign to JSONB\n\tdocuments. 
Extensions and built-in data types can now implement\n\tsubscripting more easily.\n\t</para>\n\t</listitem>\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 23:23:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 12, 2021 at 06:19:28PM -0400, Álvaro Herrera wrote:\n> > I am hesitant to merge them since they are different changes.\n> \n> Different?  The commit IDs look pretty similar to me, not to say exactly\n> identical.\n> \n> Reading these, it's pretty unclear what password_encryption *is* in the\n> first place.  The item should say "the password_encryption server\n> parameter" -- that's what we have in the pg10 note for precursor commit\n> eb61136dc.\n> \n> I would suggest \n> \n> <para>\n>   Change the default of the password_encryption server parameter to\n>   scram-sha-256.\n> </para>\n> \n> <para>\n>   Also, the legacy (and undocumented) boolean-like values which were\n>   previously synonyms of <literal>md5</literal> are no longer accepted.\n> </para>\n\nOK, updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n\t-->\n\t\n\t<para>\n\tChange the default of the password_encryption server parameter\n\tto scram-sha-256 (Peter Eisentraut)\n\t</para>\n\t\n\t<para>\n\tPreviously it was md5.\tAll new passwords will be stored as SHA256\n\tunless this server variable is changed or the password is already\n\tmd5-hashed. 
Also, the legacy (and undocumented) boolean-like\n\tvalues which were previously synonyms of <literal>md5</literal>\n\tare no longer accepted.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 23:26:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 13, 2021 at 2:39 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, May 11, 2021 at 11:57:10AM +0900, Amit Langote wrote:\n> > On Mon, May 10, 2021 at 11:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > 86dc90056d Rework planning and execution of UPDATE and DELETE.\n> > > a1115fa078 Postpone some more stuff out of ExecInitModifyTable.\n> > > c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.\n> >\n> > I was just about to ask Bruce what he thinks about these.\n> >\n> > To clarify, the first one is a big refactoring commit that allowed us\n> > to get rid of inheritance_planner(), a fairly inefficient way of\n> > planning updates/deletes on partitioned tables, especially when many\n> > partitions remain after pruning (or when pruning cannot be used). One\n> > may see the performance of update/deletes, especially on partitioned\n> > tables, to be generally improved as a result of this commit, but maybe\n> > not as significantly as to be mentioned in E.1.3.1.1. Partitioning or\n> > even E.1.3.1.4. General Performance. However, one user-visible\n> > feature that came out of this work is that updates/deletes can now use\n> > run-time pruning whereas they couldn't before. Maybe that ought to be\n> > mentioned. (This reminds me to send a patch to remove the note from\n> > 5.11.4. 
Partition Pruning that says that runtime pruning cannot be\n> > used for update/delete).\n> >\n> > The other two commits can lead to improved performance of\n> > update/deletes when there are many unpruned partitions in the plan,\n> > but runtime pruning (a new feature as mentioned above) leads to only\n> > one or few partitions to actually be updated/deleted from. I admit\n> > though that the cases for which performance has been improved still\n> > under-perform the cases that already performed better starting in v12,\n> > that is, the cases where the planner itself is able to trim down the\n> > plan to contain one or few partitions, so maybe nothing very big to\n> > see here just yet. You may want to take a look at the benchmark\n> > results I had posted here:\n> > https://www.postgresql.org/message-id/CA%2BHiwqEcawatEaUh1uTbZMEZTJeLzbroRTz9_X9Z5CFjTWJkhw%40mail.gmail.com\n>\n> OK, I added this entry:\n\nThank you.\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2021-03-31 [86dc90056] Rework planning and execution of UPDATE and DELETE.\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2021-04-06 [a1115fa07] Postpone some more stuff out of ExecInitModifyTable.\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2021-04-06 [c5b7ba4e6] Postpone some stuff out of ExecInitModifyTable.\n> -->\n>\n> <para>\n> Improve the performance of updates/deletes on partitioned tables\n> when only a few partitions are affected (Amit Langote, Tom Lane)\n> </para>\n>\n> <para>\n> This also allows run-time pruning of updates/deletes on partitioned\n> tables.\n> </para>\n> </listitem>\n\nHow about writing the 2nd line instead as:\n\nUpdates/deletes on partitioned tables can now use execution-time\npartition pruning.\n\nWe don't seem to use the term \"run-time pruning\" anywhere else in the\ndocumentation, and \"pruning of updates/deletes\" sounds strange.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 14:46:58 +0900", "msg_from": "Amit 
Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 13, 2021 at 02:46:58PM +0900, Amit Langote wrote:\n> How about writing the 2nd line instead as:\n> \n> Updates/deletes on partitioned tables can now use execution-time\n> partition pruning.\n> \n> We don't seem to use the term \"run-time pruning\" anywhere else in the\n> documentation, and \"pruning of updates/deletes\" sounds strange.\n\nGood point, updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-03-31 [86dc90056] Rework planning and execution of UPDATE and DELETE.\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-04-06 [a1115fa07] Postpone some more stuff out of ExecInitModifyTable.\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2021-04-06 [c5b7ba4e6] Postpone some stuff out of ExecInitModifyTable.\n\t-->\n\t\n\t<para>\n\tImprove the performance of updates/deletes on partitioned tables\n\twhen only a few partitions are affected (Amit Langote, Tom Lane)\n\t</para>\n\t\n\t<para>\n\tThis also allows updates/deletes on partitioned tables to use\n\texecution-time partition pruning.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 10:59:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 02:03:08AM -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. 
I also expect a lot of\n> feedback.\n> \n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nI have ordered the items in each section. My next job is to add markup\nand indenting to the XML.\n\nFYI, the PG 14 release item count is much higher than usual:\n\n\trelease-7.4: 263\n\trelease-8.0: 230\n\trelease-8.1: 174\n\trelease-8.2: 215\n\trelease-8.3: 214\n\trelease-8.4: 314\n\trelease-9.0: 237\n\trelease-9.1: 203\n\trelease-9.2: 238\n\trelease-9.3: 177\n\trelease-9.4: 211\n\trelease-9.5: 193\n\trelease-9.6: 214\n\trelease-10: 189\n\trelease-11: 170\n\trelease-12: 180\n\trelease-13: 178\n\trelease-14: 220\n\nPG 14 is a 23% increase over our previous release. I think the cause is\neither more hackers/sponsors, Covid, or both.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 21:21:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 13, 2021 at 11:59 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, May 13, 2021 at 02:46:58PM +0900, Amit Langote wrote:\n> > How about writing the 2nd line instead as:\n> >\n> > Updates/deletes on partitioned tables can now use execution-time\n> > partition pruning.\n> >\n> > We don't seem to use the term \"run-time pruning\" anywhere else in the\n> > documentation, and \"pruning of updates/deletes\" sounds strange.\n>\n> Good point, updated text:\n>\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2021-03-31 [86dc90056] Rework planning and execution of UPDATE and DELETE.\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2021-04-06 [a1115fa07] Postpone some more stuff out of ExecInitModifyTable.\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2021-04-06 [c5b7ba4e6] Postpone some stuff out of ExecInitModifyTable.\n> 
-->\n>\n> <para>\n> Improve the performance of updates/deletes on partitioned tables\n> when only a few partitions are affected (Amit Langote, Tom Lane)\n> </para>\n>\n> <para>\n> This also allows updates/deletes on partitioned tables to use\n> execution-time partition pruning.\n> </para>\n> </listitem>\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 10:40:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 09:40:45AM -0500, Justin Pryzby wrote:\n> Should any of these be included?\n\nNew SQL-accessible functionality should be included:\n> 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n\nThese should be merged:\n> 756ab29124 Allow pageinspect to inspect GiST indexes (Andrey Borodin, Heikki Linnakangas)\n> 9e596b65f4 Add \"LP_DEAD item?\" column to GiST pageinspect functions\n\nI'm undecided on this one:\n> 7db0cd2145 Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE\n\nPeople who didn't used to use FREEZE (because it didn't work or otherwise)\nmight want to start using it.\n\nI'm withdrawing these, as modifications to existing log messages don't need to\nbe included:\n\n> 10a5b35a00 Report resource usage at the end of recovery\n> 7e453634bb Add additional information in the vacuum error context.\n> 1ea396362b Improve logging of bad parameter values in BIND messages.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 13 May 2021 21:01:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n> These should be merged:\n> > 756ab29124 Allow pageinspect to inspect GiST indexes (Andrey Borodin, Heikki Linnakangas)\n> > 9e596b65f4 Add \"LP_DEAD item?\" column to GiST pageinspect 
functions\n\nSorry, this was my error while reconciling my list with yours.\nYour notes only have one item for these, which is correct.\n\nPossibly you'd want to include the 9e59 commit in the comment (but it wouldn't\nhave avoided my own confusion, though).\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 13 May 2021 21:08:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 13, 2021 at 09:08:37PM -0500, Justin Pryzby wrote:\n> On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n> > These should be merged:\n> > > 756ab29124 Allow pageinspect to inspect GiST indexes (Andrey Borodin, Heikki Linnakangas)\n> > > 9e596b65f4 Add "LP_DEAD item?" column to GiST pageinspect functions\n> \n> Sorry, this was my error while reconciling my list with yours.\n> Your notes only have one item for these, which is correct.\n> \n> Possibly you'd want to include the 9e59 commit in the comment (but it wouldn't\n> have avoided my own confusion, though).\n\nOK, I added that commit:\n\n\t<listitem>\n\t<!--\n\tAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n\t2021-01-13 [756ab2912] Add functions to 'pageinspect' to inspect GiST indexes.\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-02-14 [9e596b65f] Add "LP_DEAD item?" column to GiST pageinspect functions\n\t-->\n\t\n\t<para>\n\tAllow pageinspect to inspect GiST indexes (Andrey Borodin,\n\tHeikki Linnakangas)\n\t</para>\n\t</listitem>\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 10:39:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby 
wrote:\n> > Should any of these be included?\n> \n> New SQL-accessible functionality should be included:\n> > 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n\nOK, text is:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2020-12-11 [8c15a2974] Allow ALTER TYPE to update an existing type's typsubscri\n\t-->\n\t\n\t<para>\n\tAllow ALTER TYPE to specify or remove a SUBSCRIPT handler\n\t(Tom Lane)\n\t</para>\n\t</listitem>\n\n> I'm undecided on this one:\n> > 7db0cd2145 Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE\n> \n> People who didn't used to use FREEZE (because it didn't work or otherwise)\n> might want to start using it.\n\nYes, that seems very important:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\n\t2021-01-17 [7db0cd214] Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZ\n\t-->\n\t\n\t<para>\n\tHave COPY FREEZE appropriately update page visibility bits\n\t(Anastasia Lubennikova, Pavan Deolasee, Jeff Janes)\n\t</para>\n\t</listitem>\n\nThis is one of those, "I must be confused, since we certainly already\ndid this before."\n\n> I'm withdrawing these, as modifications to existing log messages don't need to\n> be included:\n> \n> > 10a5b35a00 Report resource usage at the end of recovery\n> > 7e453634bb Add additional information in the vacuum error context.\n> > 1ea396362b Improve logging of bad parameter values in BIND messages.\n\nOK.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 12:53:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n>> New SQL-accessible functionality should be included:\n>>> 
8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n\n> OK, text is:\n\n> \t<listitem>\n> \t<!--\n> \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> \t2020-12-11 [8c15a2974] Allow ALTER TYPE to update an existing type's typsubscri\n> \t-->\n\t\n> \t<para>\n> \tAllow ALTER TYPE to specify or remove a SUBSCRIPT handler\n> \t(Tom Lane)\n> \t</para>\n> \t</listitem>\n\nI don't understand why we'd bother to list that as a separate bullet item.\nIt's an expected part of the custom-subscript capability, I'd think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 15:39:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, May 14, 2021 at 03:39:39PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n> >> New SQL-accessible functionality should be included:\n> >>> 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n> \n> > OK, text is:\n> \n> > \t<listitem>\n> > \t<!--\n> > \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> > \t2020-12-11 [8c15a2974] Allow ALTER TYPE to update an existing type's typsubscri\n> > \t-->\n> \t\n> > \t<para>\n> > \tAllow ALTER TYPE to specify or remove a SUBSCRIPT handler\n> > \t(Tom Lane)\n> > \t</para>\n> > \t</listitem>\n> \n> I don't understand why we'd bother to list that as a separate bullet item.\n> It's an expected part of the custom-subscript capability, I'd think.\n\nI am not sure either.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 16:01:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, May 14, 2021 at 04:01:32PM -0400, Bruce Momjian 
wrote:\n> On Fri, May 14, 2021 at 03:39:39PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n> > >> New SQL-accessible functionality should be included:\n> > >>> 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n> > \n> > > OK, text is:\n> > \n> > > \t<listitem>\n> > > \t<!--\n> > > \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> > > \t2020-12-11 [8c15a2974] Allow ALTER TYPE to update an existing type's typsubscri\n> > > \t-->\n> > \t\n> > > \t<para>\n> > > \tAllow ALTER TYPE to specify or remove a SUBSCRIPT handler\n> > > \t(Tom Lane)\n> > > \t</para>\n> > > \t</listitem>\n> > \n> > I don't understand why we'd bother to list that as a separate bullet item.\n> > It's an expected part of the custom-subscript capability, I'd think.\n> \n> I am not sure either.\n\nRemoved.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 15 May 2021 14:30:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-12, Bruce Momjian wrote:\n\n> OK, updated text:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Peter Eisentraut <peter@eisentraut.org>\n> \t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n> \t-->\n> \t\n> \t<para>\n> \tChange the default of the password_encryption server parameter\n> \tto scram-sha-256 (Peter Eisentraut)\n> \t</para>\n> \t\n> \t<para>\n> \tPreviously it was md5.\tAll new passwords will be stored as SHA256\n> \tunless this server variable is changed or the password is already\n> \tmd5-hashed. 
Also, the legacy (and undocumented) boolean-like\n> \tvalues which were previously synonyms of <literal>md5</literal>\n> \tare no longer accepted.\n> \t</para>\n> \t</listitem>\n\nThanks, looks ok as far as what the original point was about.\n\nI have to say that this sentence is a bit odd: "All new passwords will\nbe stored as sha256 unless ... the password is already md5-hashed".\nDoes this mean that if you change a password for a user whose password\nwas md5, the new one is stored as md5 too even if the setting is\nscram-sha-256?  Or if "the password" means an old password, then why is\nit a new password?\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\n\n", "msg_date": "Sat, 15 May 2021 19:05:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-15, Bruce Momjian wrote:\n\n> On Fri, May 14, 2021 at 04:01:32PM -0400, Bruce Momjian wrote:\n> > On Fri, May 14, 2021 at 03:39:39PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n> > > >> New SQL-accessible functionality should be included:\n> > > >>> 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n> > > \n> > > > OK, text is:\n> > > \n> > > > \t<listitem>\n> > > > \t<!--\n> > > > \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> > > > \t2020-12-11 [8c15a2974] Allow ALTER TYPE to update an existing type's typsubscri\n> > > > \t-->\n> > > \t\n> > > > \t<para>\n> > > > \tAllow ALTER TYPE to specify or remove a SUBSCRIPT handler\n> > > > \t(Tom Lane)\n> > > > \t</para>\n> > > > \t</listitem>\n> > > \n> > > I don't understand why we'd bother to list that as a separate bullet item.\n> > > It's an expected part of the custom-subscript capability, I'd think.\n> > \n> > I am not sure either.\n> \n> Removed.\n\nSo now we don't have anything that mentions ALTER TYPE / SUBSCRIPT,\nbecause the 
item is about JSONB, not about subscripts.  Moreover it\nseems misplaced in the "functions" subsection -- subscripting is\nreplacing what used to be function calls, but really it's more than\nthat.  I think it should be in "datatypes", and I further think it\nshould mention ALTER TYPE SUBSCRIPT too.\n\nI, um, again think that the item should not be primarily about JSONB,\nbut about our (IMO groundbreaking) extensible subscripting work.\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\n\n", "msg_date": "Sat, 15 May 2021 19:12:28 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-10, Bruce Momjian wrote:\n\n> I have committed the first draft of the PG 14 release notes.  You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n\nThanks for putting this all together.\n\n> <listitem>\n> <!--\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> 2021-04-08 [0827e8af7] autovacuum: handle analyze for partitioned tables\n> -->\n> \n> <para>\n> Autovacuum now analyzes partitioned tables (Yuzuko Hosoya)\n> </para>\n\nI would add something like:\n\n"The insert, update and delete tuple counts from partitions are\npropagated to their parent tables, so autovacuum knows to process them\ntoo."\n\nBecause I spent a considerable amount of time on this patch, I think\nit's fair to list me as co-author, after Hosoya-san's name. 
Same with\nthe PQtrace() item.\n\nThanks\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\n\n", "msg_date": "Sat, 15 May 2021 19:16:18 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, May 15, 2021 at 07:12:28PM -0400, Alvaro Herrera wrote:\n> On 2021-May-15, Bruce Momjian wrote:\n> > On Fri, May 14, 2021 at 04:01:32PM -0400, Bruce Momjian wrote:\n> > > On Fri, May 14, 2021 at 03:39:39PM -0400, Tom Lane wrote:\n> > > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > > On Thu, May 13, 2021 at 09:01:41PM -0500, Justin Pryzby wrote:\n> > > > >> New SQL-accessible functionality should be included:\n> > > > >>> 8c15a29745 Allow ALTER TYPE to update an existing type's typsubscript value.\n> > > > \n> > > > > OK, text is:\n> > > > \n> > > > > \t<listitem>\n> > > > > \t<!--\n> > > > > \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> > > > > \t2020-12-11 [8c15a2974] Allow ALTER TYPE to update an existing type's typsubscri\n> > > > > \t-->\n> > > > \t\n> > > > > \t<para>\n> > > > > \tAllow ALTER TYPE to specify or remove a SUBSCRIPT handler\n> > > > > \t(Tom Lane)\n> > > > > \t</para>\n> > > > > \t</listitem>\n> > > > \n> > > > I don't understand why we'd bother to list that as a separate bullet item.\n> > > > It's an expected part of the custom-subscript capability, I'd think.\n> > > \n> > > I am not sure either.\n> > \n> > Removed.\n> \n> So now we don't have anything that mentions ALTER TYPE / SUBSCRIPT,\n> because the 
I think it should be in \"datatypes\", and I further think it\n> should mention ALTER TYPE SUBSCRIPT too.\n> \n> I, um, again think that the item should not be primarily about JSONB,\n> but about our (IMO groundbreaking) extensible subscripting work.\n\n\nMaybe it should be included, but merged with the existing item, like this.\n\nAllow subscripting of non-array types.\n\nThis is implemented for the JSONB type, allowing to extract from and assign to\nJSONB documents.\nA new ALTER TYPE syntax allows to specify or remove a SUBSCRIPT handler.\n\nMerge with the hstore item?\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 15 May 2021 18:33:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I, um, again think that the item should not be primarily about JSONB,\n> but about our (IMO groundbreaking) extensible subscripting work.\n\n+1. Extensible subscripting is THE feature here, using it for jsonb\nis just an application.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 May 2021 22:27:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, May 15, 2021 at 07:05:35PM -0400, �lvaro Herrera wrote:\n> On 2021-May-12, Bruce Momjian wrote:\n> \n> > OK, updated text:\n> > \n> > \t<listitem>\n> > \t<!--\n> > \tAuthor: Peter Eisentraut <peter@eisentraut.org>\n> > \t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n> > \t-->\n> > \t\n> > \t<para>\n> > \tChange the default of the password_encryption server parameter\n> > \tto scram-sha-256 (Peter Eisentraut)\n> > \t</para>\n> > \t\n> > \t<para>\n> > \tPreviously it was md5.\tAll new passwords will be stored as SHA256\n> > \tunless this server variable is changed or the password is already\n> > \tmd5-hashed. 
Also, the legacy (and undocumented) boolean-like\n> > \tvalues which were previously synonyms of <literal>md5</literal>\n> > \tare no longer accepted.\n> > \t</para>\n> > \t</listitem>\n> \n> Thanks, looks ok as far as what the original point was about.\n> \n> I have to say that this sentence is a bit odd: \"All new passwords will\n> be stored as sha256 unless ... the password is already md5-hashed\".\n> Does this mean that if you change a password for a user whose password\n> was md5, the new one is stored as md5 too even if the setting is\n> scram-sha-256? Or if \"the password\" means an old password, then why is\n> it a new password?\n\nOK, what I was trying to say was that if you dump/restore, and the old\npassword was md5, the newly-restored password will be md5, but it was\nvery unclear. I changed it to this:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n\tAuthor: Peter Eisentraut <peter@eisentraut.org>\n\t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n\t-->\n\t\n\t<para>\n\tChange the default of the password_encryption server parameter to\n\tscram-sha-256 (Peter Eisentraut)\n\t</para>\n\t\n\t<para>\n\tPreviously it was md5.\tAll new passwords will be stored as SHA256\n\tunless this server variable is changed or the password is specified\n\tin md5 format.\tAlso, the legacy (and undocumented) boolean-like\n\tvalues which were previously synonyms for <literal>md5</literal>\n\tare no longer accepted.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 22:51:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, May 15, 2021 at 10:27:08PM -0400, 
Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I, um, again think that the item should not be primarily about JSONB,\n> > but about our (IMO groundbreaking) extensible subscripting work.\n> \n> +1. Extensible subscripting is THE feature here, using it for jsonb\n> is just an application.\n\nOK, I moved the entry into 'data types', and listed two entries instead\nof one:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2020-12-09 [c7aba7c14] Support subscripting of arbitrary types, not only arrays\n\t-->\n\t\n\t<para>\n\tAllow extensions and built-in data types to implement subscripting (Tom Lane)\n\t</para>\n\t\n\t<para>\n\tPreviously subscript handling was hard-coded into the server and\n\twas difficult to add to data types.\n\t</para>\n\t</listitem>\n\t\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [676887a3b] Implementation of subscripting for jsonb\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [81fcc72e6] Filling array gaps during jsonb subscripting\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [aa6e46daf] Throw error when assigning jsonb scalar instead of a com\n\t-->\n\t\n\t<para>\n\tAllow subscripting of JSONB (Dmitry Dolgov)\n\t</para>\n\t\n\t<para>\n\tJSONB subscripting can be used to extract from and assign to\n\tJSONB documents.\n\t</para>\n\t</listitem>\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 23:06:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, May 15, 2021 at 07:16:18PM -0400, Álvaro Herrera wrote:\n> On 2021-May-10, Bruce Momjian wrote:\n> \n> > I have committed the first draft of the PG 14 release notes.
You can\n> > see the most current build of them here:\n> >\n> > \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> Thanks for putting this all together.\n> \n> > <listitem>\n> > <!--\n> > Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > 2021-04-08 [0827e8af7] autovacuum: handle analyze for partitioned tables\n> > -->\n> > \n> > <para>\n> > Autovacuum now analyzes partitioned tables (Yuzuko Hosoya)\n> > </para>\n> \n> I would add something like:\n> \n> \"The insert, update and delete tuple counts from partitions are\n> propagated to their parent tables, so autovacuum knows to process them\n> too.\"\n\nOK, updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n\t2021-04-08 [0827e8af7] autovacuum: handle analyze for partitioned tables\n\t-->\n\t\n\t<para>\n\tAutovacuum now analyzes partitioned tables (Yuzuko Hosoya, Álvaro\n\tHerrera)\n\t</para>\n\t\n\t<para>\n\tInsert, update, and delete tuple counts from partitions are now\n\tpropagated to their parent tables so autovacuum knows when to\n\tprocess them.\n\t</para>\n\t</listitem>\n\n> Because I spent a considerable amount of time on this patch, I think\n> it's fair to list me as co-author, after Hosoya-san's name.
Same with\n> the PQtrace() item.\n\nDone, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 23:10:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> OK, I moved the entry into 'data types', and listed two entries instead\n> of one:\n\t\n> \t<para>\n> \tAllow extensions and built-in data types to implement subscripting (Tom Lane)\n> \t</para>\n\nBetter, but Dmitry Dolgov gets the credit not me.\n\n> \t<para>\n> \tPreviously subscript handling was hard-coded into the server and\n> \twas difficult to add to data types.\n> \t</para>\n> \t</listitem>\n\n\"Difficult\"? I don't think it was possible at all. Perhaps\n\n\tPreviously subscript handling was hard-coded into the server,\n\tso that subscripting could only be applied to array types.\n\tThis change allows subscript notation to be used to extract\n\tor assign portions of a value of any type for which the concept\n\tmakes sense.\n\n> \tJSONB subscripting can be used to extract from and assign to\n> \tJSONB documents.\n\nPerhaps \"extract and assign to portions of JSONB documents\" or\nsomething like that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 May 2021 23:17:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sun, May 16, 2021 at 11:17:46PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > OK, I moved the entry into 'data types', and listed two entries instead\n> > of one:\n> \t\n> > \t<para>\n> > \tAllow extensions and built-in data types to implement subscripting (Tom Lane)\n> > \t</para>\n> \n> Better, but Dmitry Dolgov gets the credit not me.\n\nOK, fixed.\n\n> >
\t<para>\n> > \tPreviously subscript handling was hard-coded into the server and\n> > \twas difficult to add to data types.\n> > \t</para>\n> > \t</listitem>\n> \n> \"Difficult\"? I don't think it was possible at all. Perhaps\n\nI used difficult to mean you had to hack the server code to enable it,\nbut your sentence below is better since it highlights usage beyond\narrays.\n\n\n> \tPreviously subscript handling was hard-coded into the server,\n> \tso that subscripting could only be applied to array types.\n> \tThis change allows subscript notation to be used to extract\n> \tor assign portions of a value of any type for which the concept\n> \tmakes sense.\n> \n> > \tJSONB subscripting can be used to extract from and assign to\n> > \tJSONB documents.\n> \n> Perhaps \"extract and assign to portions of JSONB documents\" or\n> something like that?\n\nOK, updated text for both items:\n\n\t<listitem>\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2020-12-09 [c7aba7c14] Support subscripting of arbitrary types, not only arrays\n\t-->\n\t\n\t<para>\n\tAllow extensions and built-in data types to implement subscripting (Dmitry Dolgov)\n\t</para>\n\t\n\t<para>\n\tPreviously subscript handling was hard-coded into the server,\n\tso that subscripting could only be applied to array types.
This\n\tchange allows subscript notation to be used to extract or assign\n\tportions of a value of any type for which the concept makes sense.\n\t</para>\n\t</listitem>\n\t\n\t<listitem>\n\t<!--\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [676887a3b] Implementation of subscripting for jsonb\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [81fcc72e6] Filling array gaps during jsonb subscripting\n\tAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n\t2021-01-31 [aa6e46daf] Throw error when assigning jsonb scalar instead of a com\n\t-->\n\t\n\t<para>\n\tAllow subscripting of JSONB (Dmitry Dolgov)\n\t</para>\n\t\n\t<para>\n\tJSONB subscripting can be used to extract and assign to portions\n\tof JSONB documents.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 23:32:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "\n\nOn 2021/05/10 15:03, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n\nThanks a lot!!\n\n\n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n\n> <para>\n> Add function pg_backend_memory_contexts() to output the memory contexts of arbitrary backends (Atsushi Torikoshi)\n\nCould you fix the function name from pg_backend_memory_contexts() to\npg_log_backend_memory_contexts()? That is, \"log_\" is necessary just after\n\"pg_\" in the function name.\n\n\n> <para>\n> Add lock wait time to pg_locks (Atsushi Torikoshi)\n\nCould you change \"lock wait time\" to \"lock wait start time\"? 
Because\nthe pg_locks.waitstart column that was added in v14 reports the time\nwhen the server process *started* waiting for the lock.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 17 May 2021 16:18:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 17, 2021 at 04:18:27PM +0900, Fujii Masao wrote:\n> \n> \n> On 2021/05/10 15:03, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> Thanks a lot!!\n> \n> \n> > I need clarification on many items, and the document still needs its\n> > items properly ordered, and markup added. I also expect a lot of\n> > feedback.\n> \n> > <para>\n> > Add function pg_backend_memory_contexts() to output the memory contexts of arbitrary backends (Atsushi Torikoshi)\n> \n> Could you fix the function name from pg_backend_memory_contexts() to\n> pg_log_backend_memory_contexts()? That is, \"log_\" is necessary just after\n> \"pg_\" in the function name.\n\nAh, I see my mistake, fixed.\n\n> > <para>\n> > Add lock wait time to pg_locks (Atsushi Torikoshi)\n> \n> Could you change \"lock wait time\" to \"lock wait start time\"? 
Because\n> the pg_locks.waitstart column that was added in v14 reports the time\n> when the server process *started* waiting for the lock.\n\nYes, good point, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 17 May 2021 09:45:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Hi,\n\nOn 5/10/21 2:03 AM, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. I also expect a lot of\n> feedback.\n> \n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nAgain, thanks for compiling the release notes.\n\nI have published the first draft of the release announcement on\n-advocacy, you can find the original email here:\n\nhttps://www.postgresql.org/message-id/f82ccf97-d02c-0974-27b8-e0f5e4c1de02%40postgresql.org\n\nIf you are interested in reviewing the release announcement, please\nprovide feedback on the -advocacy mailing list.\n\nThanks!\n\nJonathan", "msg_date": "Mon, 17 May 2021 10:18:11 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-16, Bruce Momjian wrote:\n\n> OK, what I was trying to say was that if you dump/restore, and the old\n> password was md5, the newly-restored password will be md5, but it was\n> very unclear. 
I changed it to this:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Peter Eisentraut <peter@eisentraut.org>\n> \t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n> \tAuthor: Peter Eisentraut <peter@eisentraut.org>\n> \t2020-06-10 [c7eab0e97] Change default of password_encryption to scram-sha-256\n> \t-->\n> \t\n> \t<para>\n> \tChange the default of the password_encryption server parameter to\n> \tscram-sha-256 (Peter Eisentraut)\n> \t</para>\n> \t\n> \t<para>\n> \tPreviously it was md5.\tAll new passwords will be stored as SHA256\n> \tunless this server variable is changed or the password is specified\n> \tin md5 format.\tAlso, the legacy (and undocumented) boolean-like\n> \tvalues which were previously synonyms for <literal>md5</literal>\n> \tare no longer accepted.\n> \t</para>\n> \t</listitem>\n\nAh, yeah, now I understand. WFM.\n\nThanks\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n", "msg_date": "Mon, 17 May 2021 15:29:04 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "I propose that the B-Tree index item (bottom-up index deletion, \"Allow\nbtree index additions to remove expired index entries to prevent page\nsplits\") link back to the documentation for those that want to drill\ndown:\n\nhttps://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION\n\nThe same approach was used for the Postgres 13 B-Tree deduplication\nrelease note item (same page from the docs, even).\n\nI understand that it's kind of hard to get the point of the feature\nacross within the confines of the release notes, which have a very\nwide audience. I believe that the practical take-away for more\nexperienced users should be something like \"significantly ameliorates\nwrite-amplification problems in indexes with non-HOT updates\".
But\nthat's quite a mouthful. Having a hyperlink to the docs seems like a\nsensible compromise. I really just want experienced users to be able\nto make the connection between this feature and well-publicized\nproblems with Postgres index bloat/VACUUM.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 17 May 2021 12:45:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 17, 2021 at 12:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> https://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION\n\nThis link currently 404s --\nhttps://www.postgresql.org/docs/devel/btree-implementation.html#BTREE-DELETION\nworks, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 17 May 2021 12:47:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 17, 2021 at 12:45:44PM -0700, Peter Geoghegan wrote:\n> I propose that the B-Tree index item (bottom-up index deletion, \"Allow\n> btree index additions to remove expired index entries to prevent page\n> splits\") link back to the documentation for those that want to drill\n> down:\n> \n> https://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION\n> \n> The same approach was used for the Postgres 13 B-Tree deduplication\n> release note item (same page from the docs, even).\n> \n> I understand that it's kind of hard to get the point of the feature\n> across within the confines of the release notes, which have a very\n> wide audience. I believe that the practical take-away for more\n> experienced users should be something like \"significantly ameliorates\n> write-amplification problems in indexes with non-HOT updates\". But\n> that's quite a mouthful. Having a hyperlink to the docs seems like a\n> sensible compromise.
I really just want experienced users to be able\n> to make the connection between this feature and well-publicized\n> problems with Postgres index bloat/VACUUM.\n\nYes, I am going to add links to everything, but haven't had time yet.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 17 May 2021 16:32:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n>\n\nI think we need to mention in the release note that\nvacuum_cleanup_index_scale_factor GUC parameter has been removed and\nvacuum_cleanup_index_scale_factor storage parameter has been\ndeprecated (please refer to commit 9f3665fb and effdd3f3b63).\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 18 May 2021 18:28:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 06:28:49PM +0900, Masahiko Sawada wrote:\n> On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 14 release notes. 
You can\n> > see the most current build of them here:\n> >\n> > https://momjian.us/pgsql_docs/release-14.html\n> >\n> \n> I think we need to mention in the release note that\n> vacuum_cleanup_index_scale_factor GUC parameter has been removed and\n> vacuum_cleanup_index_scale_factor storage parameter has been\n> deprecated (please refer to commit 9f3665fb and effdd3f3b63).\n\nLooking at the full commit message:\n\n\tcommit 9f3665fbfc\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\tDate: Wed Mar 10 16:27:01 2021 -0800\n\t\n\t Don't consider newly inserted tuples in nbtree VACUUM.\n\t\n\t Remove the entire idea of \"stale stats\" within nbtree VACUUM (stop\n\t caring about stats involving the number of inserted tuples). Also\n\t remove the vacuum_cleanup_index_scale_factor GUC/param on the master\n\t branch (though just disable them on postgres 13).\n\t\n\t The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM\n\t partially responsible for deciding when pg_class.reltuples stats needed\n\t to be updated. This seems contrary to the spirit of the index AM API,\n\t though -- it is not actually necessary for an index AM's bulk delete and\n\t cleanup callbacks to provide accurate stats when it happens to be\n\t inconvenient. The core code owns that. (Index AMs have the authority\n\t to perform or not perform certain kinds of deferred cleanup based on\n\t their own considerations, such as page deletion and recycling, but that\n\t has little to do with pg_class.reltuples/num_index_tuples.)\n\t\n\t This issue was fairly harmless until the introduction of the\n\t autovacuum_vacuum_insert_threshold feature by commit b07642db, which had\n\t an undesirable interaction with the vacuum_cleanup_index_scale_factor\n\t mechanism: it made insert-driven autovacuums perform full index scans,\n\t even though there is no real benefit to doing so. 
This has been tied to\n\t a regression with an append-only insert benchmark [1].\n\t\n\t Also have remaining cases that perform a full scan of an index during a\n\t cleanup-only nbtree VACUUM indicate that the final tuple count is only\n\t an estimate. This prevents vacuumlazy.c from setting the index's\n\t pg_class.reltuples in those cases (it will now only update pg_class when\n\t vacuumlazy.c had TIDs for nbtree to bulk delete). This arguably fixes\n\t an oversight in deduplication-related bugfix commit 48e12913.\n\t\n\t [1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html\n\t\n\t Author: Peter Geoghegan <pg@bowt.ie>\n\t Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>\n\t Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com\n-->\t Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.\n\nThis was backpatched into PG 13.3, which was released last week:\n\n\t <listitem>\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\tBranch: master [9f3665fbf] 2021-03-10 16:27:01 -0800\n\tBranch: REL_13_STABLE [9663d1244] 2021-03-10 16:26:58 -0800\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\tBranch: master [5f8727f5a] 2021-03-10 17:07:57 -0800\n\tBranch: REL_13_STABLE [1fc5a5738] 2021-03-10 17:07:55 -0800\n\t-->\n\t <para>\n\t Disable the <varname>vacuum_cleanup_index_scale_factor</varname>\n\t parameter and storage option (Peter Geoghegan)\n\t </para>\n\t\n\t <para>\n\t The notion of tracking <quote>stale</quote> index statistics proved\n\t to interact badly with\n\t the <varname>autovacuum_vacuum_insert_threshold</varname> parameter,\n\t resulting in unnecessary full-index scans and consequent degradation\n\t of autovacuum performance. The latter mechanism seems superior, so\n\t remove the stale-statistics logic. The control parameter for that,\n\t <varname>vacuum_cleanup_index_scale_factor</varname>, will be\n\t removed entirely in v14. 
In v13, it remains present to avoid\n\t breaking existing configuration files, but it no longer does\n\t anything.\n\t </para>\n\t </listitem>\n\nTherefore, it didn't show up in my src/tools/git_changelog output, and I\ndid not include it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 18 May 2021 10:07:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 11:07 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, May 18, 2021 at 06:28:49PM +0900, Masahiko Sawada wrote:\n> > On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > I have committed the first draft of the PG 14 release notes. You can\n> > > see the most current build of them here:\n> > >\n> > > https://momjian.us/pgsql_docs/release-14.html\n> > >\n> >\n> > I think we need to mention in the release note that\n> > vacuum_cleanup_index_scale_factor GUC parameter has been removed and\n> > vacuum_cleanup_index_scale_factor storage parameter has been\n> > deprecated (please refer to commit 9f3665fb and effdd3f3b63).\n>\n> Looking at the full commit message:\n>\n> commit 9f3665fbfc\n> Author: Peter Geoghegan <pg@bowt.ie>\n> Date: Wed Mar 10 16:27:01 2021 -0800\n>\n> Don't consider newly inserted tuples in nbtree VACUUM.\n>\n> Remove the entire idea of \"stale stats\" within nbtree VACUUM (stop\n> caring about stats involving the number of inserted tuples). Also\n> remove the vacuum_cleanup_index_scale_factor GUC/param on the master\n> branch (though just disable them on postgres 13).\n>\n> The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM\n> partially responsible for deciding when pg_class.reltuples stats needed\n> to be updated. 
This seems contrary to the spirit of the index AM API,\n> though -- it is not actually necessary for an index AM's bulk delete and\n> cleanup callbacks to provide accurate stats when it happens to be\n> inconvenient. The core code owns that. (Index AMs have the authority\n> to perform or not perform certain kinds of deferred cleanup based on\n> their own considerations, such as page deletion and recycling, but that\n> has little to do with pg_class.reltuples/num_index_tuples.)\n>\n> This issue was fairly harmless until the introduction of the\n> autovacuum_vacuum_insert_threshold feature by commit b07642db, which had\n> an undesirable interaction with the vacuum_cleanup_index_scale_factor\n> mechanism: it made insert-driven autovacuums perform full index scans,\n> even though there is no real benefit to doing so. This has been tied to\n> a regression with an append-only insert benchmark [1].\n>\n> Also have remaining cases that perform a full scan of an index during a\n> cleanup-only nbtree VACUUM indicate that the final tuple count is only\n> an estimate. This prevents vacuumlazy.c from setting the index's\n> pg_class.reltuples in those cases (it will now only update pg_class when\n> vacuumlazy.c had TIDs for nbtree to bulk delete). 
This arguably fixes\n> an oversight in deduplication-related bugfix commit 48e12913.\n>\n> [1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html\n>\n> Author: Peter Geoghegan <pg@bowt.ie>\n> Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>\n> Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com\n> --> Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.\n>\n> This was backpatched into PG 13.3, which was released last week:\n>\n> <listitem>\n> <!--\n> Author: Peter Geoghegan <pg@bowt.ie>\n> Branch: master [9f3665fbf] 2021-03-10 16:27:01 -0800\n> Branch: REL_13_STABLE [9663d1244] 2021-03-10 16:26:58 -0800\n> Author: Peter Geoghegan <pg@bowt.ie>\n> Branch: master [5f8727f5a] 2021-03-10 17:07:57 -0800\n> Branch: REL_13_STABLE [1fc5a5738] 2021-03-10 17:07:55 -0800\n> -->\n> <para>\n> Disable the <varname>vacuum_cleanup_index_scale_factor</varname>\n> parameter and storage option (Peter Geoghegan)\n> </para>\n>\n> <para>\n> The notion of tracking <quote>stale</quote> index statistics proved\n> to interact badly with\n> the <varname>autovacuum_vacuum_insert_threshold</varname> parameter,\n> resulting in unnecessary full-index scans and consequent degradation\n> of autovacuum performance. The latter mechanism seems superior, so\n> remove the stale-statistics logic. The control parameter for that,\n> <varname>vacuum_cleanup_index_scale_factor</varname>, will be\n> removed entirely in v14. In v13, it remains present to avoid\n> breaking existing configuration files, but it no longer does\n> anything.\n> </para>\n> </listitem>\n>\n> Therefore, it didn't show up in my src/tools/git_changelog output, and I\n> did not include it.\n>\n\nThanks for your explanation. 
I understood and agreed not to include it\nin PG14 release note.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 18 May 2021 23:34:03 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 10:07:25AM -0400, Bruce Momjian wrote:\n> On Tue, May 18, 2021 at 06:28:49PM +0900, Masahiko Sawada wrote:\n> > On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > I have committed the first draft of the PG 14 release notes. You can\n> > > see the most current build of them here:\n> > >\n> > > https://momjian.us/pgsql_docs/release-14.html\n> > \n> > I think we need to mention in the release note that\n> > vacuum_cleanup_index_scale_factor GUC parameter has been removed and\n> > vacuum_cleanup_index_scale_factor storage parameter has been\n> > deprecated (please refer to commit 9f3665fb and effdd3f3b63).\n> \n> Looking at the full commit message:\n> \n> \tcommit 9f3665fbfc\n> \tAuthor: Peter Geoghegan <pg@bowt.ie>\n> \tDate: Wed Mar 10 16:27:01 2021 -0800\n> \t\n> \t Don't consider newly inserted tuples in nbtree VACUUM.\n> \t\n> \t Remove the entire idea of \"stale stats\" within nbtree VACUUM (stop\n> \t caring about stats involving the number of inserted tuples). Also\n> \t remove the vacuum_cleanup_index_scale_factor GUC/param on the master\n> \t branch (though just disable them on postgres 13).\n\n> This was backpatched into PG 13.3, which was released last week:\n\n> \t remove the stale-statistics logic. The control parameter for that,\n> \t <varname>vacuum_cleanup_index_scale_factor</varname>, will be\n> \t removed entirely in v14. 
In v13, it remains present to avoid\n> \t breaking existing configuration files, but it no longer does\n> \t anything.\n> \t </para>\n> \t </listitem>\n> \n> Therefore, it didn't show up in my src/tools/git_changelog output, and I\n> did not include it.\n\nNormally, stuff that was backpatched isn't included in major release notes,\nsince the change would/could normally happen during a minor -> minor+1 release.\n\nAs things stand, in this case I think it *should* be included, since the\nbackpatched change isn't the same as the change to HEAD (removing the GUC).\nThe git_changelog output might well be wrong in this case (or, arguably, the\n\"remove the GUC entirely\" should've been a separate master-only commit than the\n\"make the GUC do nothing\" commit).\n\nHowever, Peter indicated an intent to add a reloption to disable the vacuum\noptimization, so maybe the removal of the GUC could be documented at that time.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 18 May 2021 09:44:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 7:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> As things stand, in this case I think it *should* be included, since the\n> backpatched change isn't the same as the change to HEAD (removing the GUC).\n> The git_changelog output might well be wrong in this case (or, arguably, the\n> \"remove the GUC entirely\" should've been a separate master-only commit than the\n> \"make the GUC do nothing\" commit).\n\nI suppose that's true -- maybe it should be listed separately, because\nthe GUC is removed in 14 only.\n\n> However, Peter indicated an intent to add a reloption to disable the vacuum\n> optimization, so maybe the removal of the GUC could be documented at that time.\n\nThis is unrelated to the bypass indexes in VACUUM thing.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 May 2021 07:51:00 -0700", 
"msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 09:44:09AM -0500, Justin Pryzby wrote:\n> As things stand, in this case I think it *should* be included, since the\n> backpatched change isn't the same as the change to HEAD (removing the GUC).\n> The git_changelog output might well be wrong in this case (or, arguably, the\n> \"remove the GUC entirely\" should've been a separate master-only commit than the\n> \"make the GUC do nothing\" commit).\n\nI think having the same commit message for different patches to\ndifferent branches is an unwise behavior, particularly if the commit is\nrelease-note worthy. (I think it is fine if the patch is purely\nmechanical and hence not release-note worthy.) The master patch is hash\n9f3665fbfc and the PG 13 patch is hash 9663d12446.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 18 May 2021 11:08:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 07:51:00AM -0700, Peter Geoghegan wrote:\n> On Tue, May 18, 2021 at 7:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > As things stand, in this case I think it *should* be included, since the\n> > backpatched change isn't the same as the change to HEAD (removing the GUC).\n> > The git_changelog output might well be wrong in this case (or, arguably, the\n> > \"remove the GUC entirely\" should've been a separate master-only commit than the\n> > \"make the GUC do nothing\" commit).\n> \n> I suppose that's true -- maybe it should be listed separately, because\n> the GUC is removed in 14 only.\n\nOK, this is a mess then. Would someone please give me the full text for\nthis, including the commit hash(es)? 
Is the PG 13.3 release note text\naccurate?\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 18 May 2021 11:09:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 8:09 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I suppose that's true -- maybe it should be listed separately, because\n> > the GUC is removed in 14 only.\n>\n> OK, this is a mess then.  Would someone please give me the full text for\n> this, including the commit hash(es)?  Is the PG 13.3 release note text\n> accurate?\n\nThe 13.3 release notes say this:\n\n\"\"\"\nDisable the vacuum_cleanup_index_scale_factor parameter and storage\noption (Peter Geoghegan)\n\nThe notion of tracking “stale” index statistics proved to interact\nbadly with the autovacuum_vacuum_insert_threshold parameter, resulting\nin unnecessary full-index scans and consequent degradation of\nautovacuum performance. The latter mechanism seems superior, so remove\nthe stale-statistics logic. The control parameter for that,\nvacuum_cleanup_index_scale_factor, will be removed entirely in v14. In\nv13, it remains present to avoid breaking existing configuration\nfiles, but it no longer does anything.\n\"\"\"\n\nI think that this is slightly inaccurate, though that's probably of\nlittle consequence. The vacuum_cleanup_index_scale_factor GUC was in\nfact removed in v14, but the reloption was ultimately not removed from\nHEAD/v14 for compatibility reasons. This is not apparent just from\ncommit 9f3665fb -- there was a clean-up commit (commit effdd3f3) that\nadded the reloption back following further discussion. So the\nequivalent reloption remains in place though disabled, just for\ncompatibility purposes -- on v13 and v14. 
The GUC is where v13 and v14\ndiffer -- only v13 still has the GUC to avoid breaking user's\npostgresql.conf files (though it's also delisted). Deprecating a\nreloption is much harder than deprecating a GUC.\n\nIn my opinion this should be interpreted as already handled by the\nbackpatch to 13.3, and so not necessary to handle again now -- despite\nthe GUC thing. It's possible that no users set the GUC at all, because\nit wasn't particularly well thought out. This entire situation is\nprobably unprecedented (we just don't deprecate reloptions very\noften), so I defer to your judgement, Bruce.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 May 2021 08:34:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 08:34:55AM -0700, Peter Geoghegan wrote:\n> In my opinion this should be interpreted as already handled by the\n> backpatch to 13.3, and so not necessary to handle again now -- despite\n> the GUC thing. It's possible that no users set the GUC at all, because\n> it wasn't particularly well thought out. This entire situation is\n> probably unprecedented (we just don't deprecate reloptions very\n> often), so I defer to your judgement, Bruce.\n\nI am thinking the vacuum_cleanup_index_scale_factor existence in a PG 14\npostgresql.conf will throw an error, unlike 13.x, so I do think we need\nto mention this so people will know to remove it from their\npostgresql.conf before upgrades, right?  I don't think the PG 13.3\nrelease note mention really makes it clear it has to be removed.  In a\ndump/restore, do we retain the reloption\nvacuum_cleanup_index_scale_factor and just ignore it, or drop it on\nrestore? 
I am hoping it is the latter.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 18 May 2021 11:48:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 8:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I am thinking the vacuum_cleanup_index_scale_factor existence in a PG 14\n> postgresql.conf will throw an error, unlike 13.x, so I do think we need\n> to mention this so people will know to remove it from their\n> postgresql.conf before upgrades, right?  I don't think the PG 13.3\n> release note mention really makes it clear it has to be removed.  In a\n> dump/restore, do we retain the reloption\n> vacuum_cleanup_index_scale_factor and just ignore it, or drop it on\n> restore?  I am hoping it is the latter.\n\nThere is no dump/restore hazard on upgrade to 14, since the\nvacuum_cleanup_index_scale_factor reloption remains in place (it's\njust not in psql tab completion anymore, nor is it documented, etc).\n\nIt is possible (though I would certainly say unlikely) that the\nvacuum_cleanup_index_scale_factor GUC will be in somebody's\npostgresql.conf from an earlier version, and won't be recognized on\nupgrade to v14. So maybe you need to say something about that\nparticular issue -- which could be framed as finishing off the process\nstarted by the 13.3 commit. 
But that's it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 May 2021 08:54:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 08:54:56AM -0700, Peter Geoghegan wrote:\n> On Tue, May 18, 2021 at 8:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I am thiking the vacuum_cleanup_index_scale_factor existance in a PG 14\n> > postgresql.conf will throw an error, unlike 13.x, so I do think we need\n> > to mention this so people will know to remove it from their\n> > postgresql.conf before upgrades, right? I don't think the PG 13.3\n> > release note mention really makes it clear it has to be removed. In a\n> > dump/restore, so we retain the reloption\n> > vacuum_cleanup_index_scale_factor and just ignore it, or drop it on\n> > restore? I am hoping it is the later.\n> \n> There is no dump/restore hazard on upgrade to 14, since the\n> vacuum_cleanup_index_scale_factor reloption remains in place (it's\n> just not in psql tab completion anymore, nor is it documented, etc).\n\nSo it is the former behavior --- \"so we retain the reloption\nvacuum_cleanup_index_scale_factor and just ignore it\"?\n\n> It is possible (though I would certainly say unlikely) that the\n> vacuum_cleanup_index_scale_factor GUC will be in somebody's\n> postgresql.conf from an earlier version, and won't be recognized on\n> upgrade to v14. So maybe you need to say something about that\n> particular issue -- which could be framed as finishing off the process\n> started by the 13.3 commit. 
But that's it.\n\nYes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 18 May 2021 11:57:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "We do retain the reloption, but ignore it. Purely to avoid the dump and\nreload hazard. I think that you could reasonably tell users that it's gone\ncompletely, because it does nothing on either 13 or 14. It's hidden from\nthem to the extent that that's possible.\n\nPeter Geoghegan\n(Sent from my phone)\n\nWe do retain the reloption, but ignore it. Purely to avoid the dump and reload hazard. I think that you could reasonably tell users that it's gone completely, because it does nothing on either 13 or 14. It's hidden from them to the extent that that's possible. Peter Geoghegan(Sent from my phone)", "msg_date": "Tue, 18 May 2021 09:01:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 09:01:44AM -0700, Peter Geoghegan wrote:\n> We do retain the reloption, but ignore it. Purely to avoid the dump and reload\n> hazard. I think that you could reasonably tell users that it's gone completely,\n> because it does nothing on either 13 or 14. 
It's hidden from them to the extent\n> that that's possible.\n\nI went with this release note text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-03-10 [9f3665fbf] Don't consider newly inserted tuples in nbtree VACUUM.\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-03-11 [effdd3f3b] Add back vacuum_cleanup_index_scale_factor parameter.\n\t-->\n\t\n\t<para>\n\tRemove server variable vacuum_cleanup_index_scale_factor (Peter Geoghegan)\n\t</para>\n\t\n\t<para>\n\tThis setting was disabled in PostgreSQL version 13.3.\n\t</para>\n\t</listitem>\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 18 May 2021 15:17:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, May 18, 2021 at 12:17 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I went with this release note text:\n\nThat seems reasonable -- thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 May 2021 16:02:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "These sound weird since markup was added in 6a5bde7d4:\nhttps://www.postgresql.org/docs/devel/release-14.html\n| Remove server and Chapter 34 support for the version 2 wire protocol (Heikki Linnakangas)\n...\n| Pass doubled quote marks in Chapter 36 SQL command strings literally (Tom Lane)\n\n-Remove server and libpq support for the version 2 wire protocol (Heikki Linnakangas)\n+Remove server and <xref linkend=\"libpq\"/> support for the version 2 <link linkend=\"protocol\">wire protocol</link> (Heikki Linnakangas)\n\n> Force custom server variable names to match the pattern used for unquoted SQL identifiers (Tom Lane)\nSay \"Require\" not force?\n\n> This setting was disabled in 
PostgreSQL version 13.3.\n\"disabled\" sounds like it was set to \"off\". Maybe say it was ignored.\n\n> Add long-running queries to be canceled if the client disconnects (Sergey Cherkashin, Thomas Munro)\nShould say: Allow\n\n> The server variable client_connection_check_interval allows supporting operating systems, e.g., Linux, to automatically cancel queries by disconnected clients.\nsay \"some operating systems\" ?\n\n> This can be disabled by turning client option \"sslsni\" off.\n\"turning off\"\n\n| Add %P to log_line_prefix to report the parallel group leader (Justin Pryzby)\n\nMaybe it should say \"Allow %P in log_line_prefix to ...\", otherwise it sounds\nlike the default was changed.\n\n| Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n| This new default better reflects current hardware capabilities. \nAlso say: the previous default was 10.\n\n\n", "msg_date": "Wed, 19 May 2021 09:39:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, May 19, 2021 at 09:39:08AM -0500, Justin Pryzby wrote:\n> These sound weird since markup was added in 6a5bde7d4:\n> https://www.postgresql.org/docs/devel/release-14.html\n> | Remove server and Chapter 34 support for the version 2 wire protocol (Heikki Linnakangas)\n> ...\n> | Pass doubled quote marks in Chapter 36 SQL command strings literally (Tom Lane)\n\n> -Remove server and libpq support for the version 2 wire protocol (Heikki Linnakangas)\n> +Remove server and <xref linkend=\"libpq\"/> support for the version 2 <link linkend=\"protocol\">wire protocol</link> (Heikki Linnakangas)\n\nAgreed, fixed.\n\n> > Force custom server variable names to match the pattern used for unquoted SQL identifiers (Tom Lane)\n> Say \"Require\" not force?\n\nAgreed, fixed.\n\n> > This setting was disabled in PostgreSQL version 13.3.\n> \"disabled\" sounds like it was set to \"off\". 
Maybe say it was ignored.\n\nOK, I went with this:\n\n\tThis setting was ignored starting in\n\t<productname>PostgreSQL</productname> version 13.3.\n\n> > Add long-running queries to be canceled if the client disconnects (Sergey Cherkashin, Thomas Munro)\n> Should say: Allow\n\nYes.\n\n> > The server variable client_connection_check_interval allows supporting operating systems, e.g., Linux, to automatically cancel queries by disconnected clients.\n> say \"some operating systems\" ?\n\nAgreed, done.\n\n> > This can be disabled by turning client option \"sslsni\" off.\n> \"turning off\"\n\nAgreed.\n\n> | Add %P to log_line_prefix to report the parallel group leader (Justin Pryzby)\n> \n> Maybe it should say \"Allow %P in log_line_prefix to ...\", otherwise it sounds\n> like the default was changed.\n\nI am not sure, but I changed it as you suggested:\n\n\tAllow %P in <varname>log_line_prefix</varname> to report the\n\tparallel group leader (Justin Pryzby)\n\n> | Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n> | This new default better reflects current hardware capabilities. \n> Also say: the previous default was 10.\n\nUh, we didn't report the new value, so why report the old one?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 20 May 2021 15:44:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 20, 2021 at 03:44:46PM -0400, Bruce Momjian wrote:\n> > | Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n> > | This new default better reflects current hardware capabilities. 
\n> > Also say: the previous default was 10.\n> \n> Uh, we didn't report the new value, so why report the old one?\n\nGood point.\nFor symmetry with this one, maybe the old and new values should be included?\n\n|Change checkpoint_completion_target default to 0.9 (Stephen Frost)\n|The previous default was 0.5.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 20 May 2021 14:55:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 20, 2021 at 02:55:18PM -0500, Justin Pryzby wrote:\n> On Thu, May 20, 2021 at 03:44:46PM -0400, Bruce Momjian wrote:\n> > > | Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n> > > | This new default better reflects current hardware capabilities. \n> > > Also say: the previous default was 10.\n> > \n> > Uh, we didn't report the new value, so why report the old one?\n> \n> Good point.\n> For symmetry with this one, maybe the old and new values should be included?\n\nNot sure. 
Those values are kind of hard to understand, so I am afraid\nthere would be more confusion by mentioning them.\n\n> |Change checkpoint_completion_target default to 0.9 (Stephen Frost)\n> |The previous default was 0.5.\n\nUh, that one is frequently modified by users, to an extent I didn't\nunderstand why we kept it at 0.5 for so long, which is why I mentioned\nit.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 20 May 2021 15:59:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 2021-May-20, Bruce Momjian wrote:\n\n> On Thu, May 20, 2021 at 02:55:18PM -0500, Justin Pryzby wrote:\n> > On Thu, May 20, 2021 at 03:44:46PM -0400, Bruce Momjian wrote:\n> > > > | Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n> > > > | This new default better reflects current hardware capabilities. \n> > > > Also say: the previous default was 10.\n> > > \n> > > Uh, we didn't report the new value, so why report the old one?\n> > \n> > Good point.\n> > For symmetry with this one, maybe the old and new values should be included?\n> \n> Not sure. Those values are kind of hard to understand, so I am afraid\n> there would be more confusion by mentioning them.\n> \n> > |Change checkpoint_completion_target default to 0.9 (Stephen Frost)\n> > |The previous default was 0.5.\n> \n> Uh, that one is frequently modified by users, to an extent I didn't\n> understand why we kept it at 0.5 for so long, which is why I mentioned\n> it.\n\nYou also mentioned 'md5' in the entry about password_encryption,\nremember? I tend to agree with Justin: if it's not too much extra space\nto mention both values, let's just do that. \"Reduce the value of X to Y\nfrom Z. 
The new default better reflects ...\" seems OK to me.\n\nI prefer \"to Y from Z\" rather than \"from Z to Y\", because then the new\nvalue appears first, which seems a tiny improvement in readability,\nthough the phrase is in the opposite order of traditional.  Also it\nseems better than \"change value of X to Y. The previous default was Z\"\nbecause it then becomes a little more verbose than really needed.  But\nmaybe that's OK too.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n", "msg_date": "Thu, 20 May 2021 16:35:07 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 20, 2021 at 04:35:07PM -0400, Álvaro Herrera wrote:\n> On 2021-May-20, Bruce Momjian wrote:\n> \n> > On Thu, May 20, 2021 at 02:55:18PM -0500, Justin Pryzby wrote:\n> > > On Thu, May 20, 2021 at 03:44:46PM -0400, Bruce Momjian wrote:\n> > > > > | Reduce the default value of vacuum_cost_page_miss (Peter Geoghegan) \n> > > > > | This new default better reflects current hardware capabilities. \n> > > > > Also say: the previous default was 10.\n> > > > \n> > > > Uh, we didn't report the new value, so why report the old one?\n> > > \n> > > Good point.\n> > > For symmetry with this one, maybe the old and new values should be included?\n> > \n> > Not sure.  Those values are kind of hard to understand, so I am afraid\n> > there would be more confusion by mentioning them.\n> > \n> > > |Change checkpoint_completion_target default to 0.9 (Stephen Frost)\n> > > |The previous default was 0.5.\n> > \n> > Uh, that one is frequently modified by users, to an extent I didn't\n> > understand why we kept it at 0.5 for so long, which is why I mentioned\n> > it.\n> \n> You also mentioned 'md5' in the entry about password_encryption,\n> remember? 
I tend to agree with Justin: if it's not too much extra space\n> to mention both values, let's just do that.  \"Reduce the value of X to Y\n> from Z.  The new default better reflects ...\" seems OK to me.\n> \n> I prefer \"to Y from Z\" rather than \"from Z to Y\", because then the new\n> value appears first, which seems a tiny improvement in readability,\n> though the phrase is in the opposite order of traditional.  Also it\n> seems better than \"change value of X to Y. The previous default was Z\"\n> because it then becomes a little more verbose than really needed.  But\n> maybe that's OK too.\n\nI went with this text:\n\n\tReduce the default value of <varname><xref\n\tlinkend=\"guc-vacuum-cost-page-miss\"></varname> from 10 milliseconds to 2\n\t(Peter Geoghegan)\n\nI think with numbers, and the fact we are saying \"decrease\" having them\nin the from/to order is best.  If this was non-numeric, like to scram\nfrom md5, it would make more sense to use to/from.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 20 May 2021 17:37:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 20, 2021 at 2:37 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I went with this text:\n>\n>         Reduce the default value of <varname><xref\n>         linkend=\"guc-vacuum-cost-page-miss\"></varname> from 10 milliseconds to 2\n>         (Peter Geoghegan)\n>\n> I think with numbers, and the fact we are saying \"decrease\" having them\n> in the from/to order is best.  If this was non-numeric, like to scram\n> from md5, it would make more sense to use to/from.\n\nThe point of this change was to make the cost of dirtying pages much\nhigher than everything else, since writes are in effect much more\nexpensive on modern hardware. 
Don't know if you need to say that.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 20 May 2021 16:19:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 20, 2021 at 04:19:50PM -0700, Peter Geoghegan wrote:\n> On Thu, May 20, 2021 at 2:37 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I went with this text:\n> >\n> > Reduce the default value of <varname><xref\n> > linkend=\"guc-vacuum-cost-page-miss\"></varname> from 10 milliseconds to 2\n> > (Peter Geoghegan)\n> >\n> > I think with numbers, and the fact we are saying \"decrease\" havint them\n> > in the from/to order is best. If this was non-numeric, like to scram\n> > from md5, it would make more sense to use to/from.\n> \n> The point of this change was to make the cost of dirtying pages much\n> higher than everything else, since writes are in effect much more\n> expensive on modern hardware. Don't know if you need to say that.\n\nI think our text \"This new default better reflects current hardware\ncapabilities.\" is detailed enough. People can dig into the item to see\nwhat it does and how it adjusts costs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 20 May 2021 19:54:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, May 20, 2021 at 4:54 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I think our text \"This new default better reflects current hardware\n> capabilities.\" is detailed enough. People can dig into the item to see\n> what it does and how it adjusts costs.\n\nFair enough.\n\nI noticed something about the same item that needs to be fixed,\nthough. 
The vacuum_cost_page_miss GUC does not directly represent any\nkind of time-based delay, but the current wording says that it uses\nmillisecond units. In fact the vacuum_cost_page_miss GUC is based on\nabstract cost units, apportioned from vacuum_cost_limit. I suggested\nthat the wording talk about relative cost differences in part because\nthat's just how the GUC works, in general.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 22 May 2021 15:35:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, May 22, 2021 at 03:35:13PM -0700, Peter Geoghegan wrote:\n> On Thu, May 20, 2021 at 4:54 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I think our text \"This new default better reflects current hardware\n> > capabilities.\" is detailed enough. People can dig into the item to see\n> > what it does and how it adjusts costs.\n> \n> Fair enough.\n> \n> I noticed something about the same item that needs to be fixed,\n> though. The vacuum_cost_page_miss GUC does not directly represent any\n> kind of time-based delay, but the current wording says that it uses\n> millisecond units. In fact the vacuum_cost_page_miss GUC is based on\n> abstract cost units, apportioned from vacuum_cost_limit. I suggested\n> that the wording talk about relative cost differences in part because\n> that's just how the GUC works, in general.\n\nOK, thanks. 
Here is the updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2021-01-27 [e19594c5c] Reduce the default value of\n\tvacuum_cost_page_miss.\n\t-->\n\t\n\t<para>\n\tReduce the default value of <xref linkend=\"guc-vacuum-cost-page-miss\"/>\n\tto better reflect current hardware capabilities (Peter Geoghegan)\n\t</para>\n\t</listitem>\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 22 May 2021 19:23:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> I have committed the first draft of the PG 14 release notes.  You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n\nIt occurs to me that the wording around the new default roles could\nprobably be better.  Specifically:\n\nAdd predefined roles pg_read_all_data and pg_write_all_data (Stephen Frost)\n\nThese non-login roles give read-only/write-only access to all objects.\n\nMight be better as:\n\nThese non-login roles give read, or write, access to all tables, views,\nand sequences.\n\n(These roles don't actually allow, for example, a function to be\nredefined, so saying 'all objects' isn't quite right either.)\n\nWhile these roles could be used to create a 'read only' or 'write only'\nrole, they, themselves, do not explicitly convey that on to a role\nbecause they don't do anything to prevent someone from GRANT'ing other\nrights to some role which has been GRANT'd these predefined roles. 
I\ndon't think anyone on this list thought differently from that, but the\nphrasing strikes me as potentially confusing.\n\nMaybe another way would be:\n\nThese non-login roles give (only) read, or write, access to all tables,\nviews, and sequences.\n\nbut I don't think saying 'only' there really adds anything and instead\ninvites confusion.\n\nThanks!\n\nStephen", "msg_date": "Sat, 22 May 2021 19:29:45 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, May 22, 2021 at 07:29:45PM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > I have committed the first draft of the PG 14 release notes. You can\n> > see the most current build of them here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> It occurs to me that the wording around the new default roles could\n> probably be better. Specifically:\n> \n> Add predefined roles pg_read_all_data and pg_write_all_data (Stephen Frost)\n> \n> These non-login roles give read-only/write-only access to all objects.\n> \n> Might be better as:\n> \n> These non-login roles give read, or write, access to all tables, views,\n> and sequences.\n> \n> (These roles don't actually allow, for example, a function to be\n> redefined, so saying 'all objects' isn't quite right either.)\n> \n> While these roles could be used to create a 'read only' or 'write only'\n> role, they, themselves, do not explicitly convey that on to a role\n> because they don't do anything to prevent someone from GRANT'ing other\n> rights to some role which has been GRANT'd these predefined roles. 
I\n> don't think anyone on this list thought differently from that, but the\n> phrasing strikes me as potentially confusing.\n> \n> Maybe another way would be:\n> \n> These non-login roles give (only) read, or write, access to all tables,\n> views, and sequences.\n> \n> but I don't think saying 'only' there really adds anything and instead\n> invites confusion.\n\nOK, I went with this text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Stephen Frost <sfrost@snowman.net>\n\t2021-04-05 [6c3ffd697] Add pg_read_all_data and pg_write_all_data roles\n\t-->\n\t\n\t<para>\n\tAdd predefined roles <link\n\tlinkend=\"predefined-roles\"><structname>pg_read_all_data</structname></link>\n\tand <structname>pg_write_all_data</structname> (Stephen Frost)\n\t</para>\n\t\n\t<para>\n\tThese non-login roles can be used to give read or write permission to\n\tall tables, views, and sequences.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 22 May 2021 20:16:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "doc/src/sgml/release-14.sgml- <listitem>\ndoc/src/sgml/release-14.sgml-<!--\ndoc/src/sgml/release-14.sgml-Author: Fujii Masao <fujii@postgresql.org>\ndoc/src/sgml/release-14.sgml-2020-10-02 [8d9a93596] Add pg_stat_wal statistics view.\ndoc/src/sgml/release-14.sgml-Author: Fujii Masao <fujii@postgresql.org>\ndoc/src/sgml/release-14.sgml-2020-12-02 [01469241b] Track total number of WAL records, FPIs and bytes genera\ndoc/src/sgml/release-14.sgml-Author: Fujii Masao <fujii@postgresql.org>\ndoc/src/sgml/release-14.sgml:2021-03-09 [ff99918c6] Track total amounts of times spent writing and syncing W\ndoc/src/sgml/release-14.sgml--->\ndoc/src/sgml/release-14.sgml-\ndoc/src/sgml/release-14.sgml- <para>\ndoc/src/sgml/release-14.sgml- Add 
system view <link\ndoc/src/sgml/release-14.sgml- linkend=\"monitoring-pg-stat-wal-view\"><structname>pg_stat_wal</structname></link>\ndoc/src/sgml/release-14.sgml- which reports <acronym>WAL</acronym> activity (Masahiro Ikeda)\ndoc/src/sgml/release-14.sgml- </para>\ndoc/src/sgml/release-14.sgml- </listitem>\n\nI think this should also mention the GUC:\n| track_wal_io_timing\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 4 Jun 2021 16:31:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, Jun 4, 2021 at 04:31:06PM -0500, Justin Pryzby wrote:\n> doc/src/sgml/release-14.sgml- <listitem>\n> doc/src/sgml/release-14.sgml-<!--\n> doc/src/sgml/release-14.sgml-Author: Fujii Masao <fujii@postgresql.org>\n> doc/src/sgml/release-14.sgml-2020-10-02 [8d9a93596] Add pg_stat_wal statistics view.\n> doc/src/sgml/release-14.sgml-Author: Fujii Masao <fujii@postgresql.org>\n> doc/src/sgml/release-14.sgml-2020-12-02 [01469241b] Track total number of WAL records, FPIs and bytes genera\n> doc/src/sgml/release-14.sgml-Author: Fujii Masao <fujii@postgresql.org>\n> doc/src/sgml/release-14.sgml:2021-03-09 [ff99918c6] Track total amounts of times spent writing and syncing W\n> doc/src/sgml/release-14.sgml--->\n> doc/src/sgml/release-14.sgml-\n> doc/src/sgml/release-14.sgml- <para>\n> doc/src/sgml/release-14.sgml- Add system view <link\n> doc/src/sgml/release-14.sgml- linkend=\"monitoring-pg-stat-wal-view\"><structname>pg_stat_wal</structname></link>\n> doc/src/sgml/release-14.sgml- which reports <acronym>WAL</acronym> activity (Masahiro Ikeda)\n> doc/src/sgml/release-14.sgml- </para>\n> doc/src/sgml/release-14.sgml- </listitem>\n> \n> I think this should also mention the GUC:\n> | track_wal_io_timing\n\nThis is part of this commit:\n\n\tcommit ff99918c62\n\tAuthor: Fujii Masao <fujii@postgresql.org>\n\tDate: Tue Mar 9 16:52:06 2021 +0900\n\t\n\t Track total amounts of times spent 
writing and syncing WAL data to disk.\n\t\n\t This commit adds new GUC track_wal_io_timing. When this is enabled,\n\t the total amounts of time XLogWrite writes and issue_xlog_fsync syncs\n\t WAL data to disk are counted in pg_stat_wal. This information would be\n\t useful to check how much WAL write and sync affect the performance.\n\nThe feature is clearly the new system view pg_stat_wal. The question is\nwhether we should also document that track_wal_io_timing adds timing\ncontrol to that new system view. Normally we don't mention additional\nfeatures, like pg_dump support, for features we add in a major release. \nThe documentation we link to from the release notes does mention\ntrack_wal_io_timing.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Jun 2021 18:47:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "| Add Set Server Name Indication (SNI) for SSL connection packets (Peter Eisentraut) \nRemove \"Set\"\n\n| Reduce the default value of vacuum_cost_page_miss from 10 milliseconds to 2 (Peter Geoghegan) \nPeter mentioned that this should not say \"milliseconds\" (but maybe the page I'm\nlooking at is old).\n\n| Cause vacuum operations to be aggressive if the table is near xid or multixact wraparound (Masahiko Sawada, Peter Geoghegan) \nSay \"become aggressive\" ?\n\n| Allow the arbitrary collations of partition boundary values (Tom Lane) \nRemove \"the\"\n\n| Generate WAL invalidations message during command completion when using logical replication (Dilip Kumar, Tomas Vondra, Amit Kapila) \ninvalidation messages\n\n| Add support for infinity and -infinity values to the numeric data type (Tom Lane) \n\"-infinity\" has markup but not \"infinity\" ?\n\n| Allow vacuum to deallocate space reserved by trailing unused heap 
line pointers (Matthias van de Meent, Peter Geoghegan) \nsay \"reclaim space\" ?\n\n\n", "msg_date": "Fri, 11 Jun 2021 22:45:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, Jun 11, 2021 at 10:45:51PM -0500, Justin Pryzby wrote:\n> | Add Set Server Name Indication (SNI) for SSL connection packets (Peter Eisentraut) \n> Remove \"Set\"\n> \n> | Reduce the default value of vacuum_cost_page_miss from 10 milliseconds to 2 (Peter Geoghegan) \n> Peter mentioned that this should not say \"milliseconds\" (but maybe the page I'm\n> looking at is old).\n> \n> | Cause vacuum operations to be aggressive if the table is near xid or multixact wraparound (Masahiko Sawada, Peter Geoghegan) \n> Say \"become aggressive\" ?\n> \n> | Allow the arbitrary collations of partition boundary values (Tom Lane) \n> Remove \"the\"\n> \n> | Generate WAL invalidations message during command completion when using logical replication (Dilip Kumar, Tomas Vondra, Amit Kapila) \n> invalidation messages\n> \n> | Add support for infinity and -infinity values to the numeric data type (Tom Lane) \n> \"-infinity\" has markup but not \"infinity\" ?\n> \n> | Allow vacuum to deallocate space reserved by trailing unused heap line pointers (Matthias van de Meent, Peter Geoghegan) \n> say \"reclaim space\" ?\n\nSome more:\n\n| VACUUM now has a PROCESS_TOAST which can be set to false to disable TOAST processing, and vacuumdb has a --no-process-toast option. \nhas a process_toast *option\n\n| Previously, if the object already exists, EXPLAIN would fail. \nalready existed\n\n| Function pg_stat_reset_replication_slot() resets slot statistics. \n*The function. But maybe it should be omitted.\n\n| New options are read-only, primary, standby, and prefer-standby. 
\n*The new options\n\n| Allow reindexdb to change the tablespace of the new index (Michael Paquier)\n| This is done by specifying --tablespace.\nI think this should be merged with the corresponding server feature, like this one:\n|Add ability to skip vacuuming of TOAST tables (Nathan Bossart)\n|VACUUM now has a PROCESS_TOAST which can be set to false to disable TOAST processing, and vacuumdb has a --no-process-toast option. \n\nOr, the client-side option could be omitted.  This is distinguished from\nvacuumdb --no-index-cleanup and --no-truncate, for which the server support was\nadded in v12, and the client support was essentially an omission.\n\n| Add documentation for the factorial() function (Peter Eisentraut)\n| With the removal of the ! operator in this release, factorial() is the only built-in way to compute a factorial. \nCould be omitted or collapsed into the other item.  I know Tom thinks that\nit's unnecessary to document changes to documentation.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Jun 2021 11:37:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On 5/10/21 8:03 AM, Bruce Momjian wrote:\n> I have committed the first draft of the PG 14 release notes.  You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added.  I also expect a lot of\n> feedback.\n> \n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n> \n\nSorry it took me a while to look at the release notes. 
I have one\nsuggestion regarding this item:\n\n\n    Allow logical replication to stream long in-progress transactions to\n    subscribers (Tomas Vondra, Dilip Kumar, Amit Kapila, Ajin Cherian,\n    Nikhil Sontakke, Stas Kelvich)\n\nAFAICS the authors are generally ordered by how much they contributed to\nthe feature. In that case I'd move myself down the list - certainly\nafter Dilip and Amit, perhaps after Ajin. While I posted the original\npatch, but most of the work after that to get it committed was done by\nthose two/three people.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 14 Jun 2021 18:57:41 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Hi Bruce,\n\nFor this item:\n\n<!--\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\n2021-03-26 [77b88cd1b] BRIN bloom indexes\n-->\n\n       <para>\n        Allow <acronym>BRIN</acronym> indexes to use bloom filters\n        (Tomas Vondra)\n       </para>\n\n       <para>\n        This allows bloom indexes to be used effectively with data that\n        is not physically localized in the heap.\n       </para>\n      </listitem>\n\nThe text implies that this affects\ncontrib/bloom. I think it should be\n\"This allows BRIN indexes...\".\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 14 Jun 2021 13:29:42 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, Jun 11, 2021 at 10:45:51PM -0500, Justin Pryzby wrote:\n> | Add Set Server Name Indication (SNI) for SSL connection packets (Peter Eisentraut) \n> Remove \"Set\"\n\nFixed.\n\n> | Reduce the default value of vacuum_cost_page_miss from 10 milliseconds to 2 (Peter Geoghegan) \n> Peter mentioned that this should not say \"milliseconds\" (but maybe the page I'm\n> looking at is old).\n\nIt is old.  It is now:\n\n        Reduce the default value of <xref\n        linkend=\"guc-vacuum-cost-page-miss\"/> to better reflects current\n        hardware capabilities (Peter Geoghegan)\n\n> | Cause vacuum operations to be aggressive if the table is near xid or multixact wraparound (Masahiko Sawada, Peter Geoghegan) \n> Say \"become aggressive\" ?\n\nUpdated text:\n\n        Cause vacuum operations to be more aggressive if the table is near\n        xid or multixact wraparound (Masahiko Sawada, Peter Geoghegan)\n\n> | Allow the arbitrary collations of partition boundary values (Tom Lane) \n> Remove \"the\"\n\nAgreed, removed.\n\n> | Generate WAL invalidations message during command completion when using logical replication (Dilip Kumar, Tomas Vondra, Amit Kapila) \n> invalidation messages\n\nFixed.\n\n> | Add support for infinity and -infinity values to the numeric data type (Tom Lane) \n> \"-infinity\" has markup but not \"infinity\" ?\n\nFixed.\n> \n> | Allow vacuum to deallocate space reserved by trailing unused heap line pointers (Matthias van de Meent, Peter Geoghegan) \n> say \"reclaim space\" ?\n\nOK, new text is:\n\n        Allow vacuum to reclaim space used by unused trailing heap\n        line pointers (Matthias van de Meent, Peter Geoghegan)\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world 
exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 14:33:33 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 14, 2021 at 11:37:58AM -0500, Justin Pryzby wrote:\n> Some more:\n> \n> | VACUUM now has a PROCESS_TOAST which can be set to false to disable TOAST processing, and vacuumdb has a --no-process-toast option. \n> has a process_toast *option\n\nAgreed.\n\n> | Previously, if the object already exists, EXPLAIN would fail. \n> already existed\n\nFixed.\n\n> | Function pg_stat_reset_replication_slot() resets slot statistics. \n> *The function. But maybe it should be omitted.\n\nOK, I went with \"The function\".\n\n> | New options are read-only, primary, standby, and prefer-standby. \n> *The new options\n\nAgreed.\n\n> | Allow reindexdb to change the tablespace of the new index (Michael Paquier)\n> | This is done by specifying --tablespace.\n> I think this should be merged with the corresponding server feature, like this one:\n> |Add ability to skip vacuuming of TOAST tables (Nathan Bossart)\n> |VACUUM now has a PROCESS_TOAST which can be set to false to disable TOAST processing, and vacuumdb has a --no-process-toast option. \n> \n> Or, the client-side option could be omitted. This is distinguished from\n> vacuumdb --no-index-cleanup and --no-truncate, for which the server support was\n> added in v12, and the client support was essentially an omision.\n\nI am inclined to mention reindexdb because we mention the SQL command\noption in the text. 
Here is the updated text:\n\n\t <listitem>\n\t<!--\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\t2021-02-04 [c5b286047] Add TABLESPACE option to REINDEX\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\t2021-03-03 [57e6db706] Add - -tablespace option to reindexdb\n\t-->\n\t\n\t <para>\n\t Allow <link linkend=\"sql-reindex\"><command>REINDEX</command></link>\n\t to change the tablespace of the new index (Alexey Kondratov,\n\t Michael Paquier, Justin Pryzby)\n\t </para>\n\t\n\t <para>\n\t This is done by specifying a <literal>TABLESPACE</literal> clause.\n\t A <option>--tablespace</option> option was also added to <link\n\t linkend=\"app-reindexdb\"><application>reindexdb</application></link>\n\t to control this.\n\t </listitem>\n\n> | Add documentation for the factorial() function (Peter Eisentraut)\n> | With the removal of the ! operator in this release, factorial() is the only built-in way to compute a factorial. \n> Could be ommited or collapsed into the other item. I know Tom thinks that\n> it's unnecesary to document changes to documentation.\n\nUh, I think we need both items. We are removing a feature and asking\npeople to use an existing feature that was previously undocumented. I\nthink having two items makes it clear that the feature existed in\nprevious releases. I just tried merging them into one item and there\nwere just too many changes for it to be clear.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 16:03:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 14, 2021 at 06:57:41PM +0200, Tomas Vondra wrote:\n> Sorry it took me a while to look at the release notes. 
I have one\n> suggestion regarding this item:\n> \n> \n>     Allow logical replication to stream long in-progress transactions to\n>     subscribers (Tomas Vondra, Dilip Kumar, Amit Kapila, Ajin Cherian,\n>     Nikhil Sontakke, Stas Kelvich)\n> \n> AFAICS the authors are generally ordered by how much they contributed to\n> the feature. In that case I'd move myself down the list - certainly\n> after Dilip and Amit, perhaps after Ajin. While I posted the original\n> patch, but most of the work after that to get it committed was done by\n> those two/three people.\n\nOK, I moved you after Ajin.  Sometimes it isn't clear how much of an\noriginal patch was modified by later authors.\n\nFYI, the most recent PG 14 relnote doc build is at:\n\n\thttps://momjian.us/pgsql_docs/release-14.html\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 16:11:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 14, 2021 at 01:29:42PM -0400, John Naylor wrote:\n> Hi Bruce,\n> \n> For this item:\n> \n> <!--\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> 2021-03-26 [77b88cd1b] BRIN bloom indexes\n> -->\n> \n>        <para>\n>         Allow <acronym>BRIN</acronym> indexes to use bloom filters\n>         (Tomas Vondra)\n>        </para>\n> \n>        <para>\n>         This allows bloom indexes to be used effectively with data that\n>         is not physically localized in the heap.\n>        </para>\n>       </listitem>\n> \n> The text implies that this affects contrib/bloom. I think it should be \"This\n> allows BRIN indexes...\".\n\nAh, I see your point. 
Updated text is:\n\n\t <listitem>\n\t<!--\n\tAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\n\t2021-03-26 [77b88cd1b] BRIN bloom indexes\n\t-->\n\t\n\t <para>\n\t Allow <acronym>BRIN</acronym> indexes to use bloom filters\n\t (Tomas Vondra)\n\t </para>\n\t\n\t <para>\n\t This allows <acronym>BRIN</acronym> indexes to be used effectively\n\t with data that is not physically localized in the heap.\n\t </para>\n\t </listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 16:12:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n>\n\nIt might have already been discussed but I think we should mention\ncommit bed90759f in the release note. The commit seems like a bug fix\nbut it not back-patched to the older versions at least for now. There\nis a discussion[1] that we will revisit that a couple of months after\n14 is released so as there is some feedback from the field with this\nchange.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/YCsZIX2A2Ilsvfnl@paquier.xyz\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:06:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:06:49AM +0900, Masahiko Sawada wrote:\n> On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have committed the first draft of the PG 14 release notes. 
You can\n> > see the most current build of them here:\n> >\n> > https://momjian.us/pgsql_docs/release-14.html\n> >\n> \n> It might have already been discussed but I think we should mention\n> commit bed90759f in the release note. The commit seems like a bug fix\n> but it not back-patched to the older versions at least for now. There\n> is a discussion[1] that we will revisit that a couple of months after\n> 14 is released so as there is some feedback from the field with this\n> change.\n\nOK, but I need more information on how users will see a difference based\non this commit:\n\n\tcommit bed90759fc\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Fri Oct 9 16:20:12 2020 -0400\n\t\n\t Fix our Windows stat() emulation to handle file sizes > 4GB.\n\t\n\t Hack things so that our idea of \"struct stat\" is equivalent to Windows'\n\t struct __stat64, allowing it to have a wide enough st_size field.\n\t\n\t Instead of relying on native stat(), use GetFileInformationByHandle().\n\t This avoids a number of issues with Microsoft's multiple and rather\n\t slipshod emulations of stat(). 
We still need to jump through hoops\n\t    to deal with ERROR_DELETE_PENDING, though :-(\n\t\n\t    Pull the relevant support code out of dirmod.c and put it into\n\t    its own file, win32stat.c.\n\t\n\t    Still TODO: do we need to do something different with lstat(),\n\t    rather than treating it identically to stat()?\n\t\n\t    Juan José Santamaría Flecha, reviewed by Emil Iggland;\n\t    based on prior work by Michael Paquier, Sergey Zubkovsky, and others\n\t\n\t    Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905CF5099@g01jpexmbkw24\n\t    Discussion: https://postgr.es/m/15858-9572469fd3b73263@postgresql.org\n\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 21:36:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jun 15, 2021 at 10:06:49AM +0900, Masahiko Sawada wrote:\n> > On Mon, May 10, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > I have committed the first draft of the PG 14 release notes.  You can\n> > > see the most current build of them here:\n> > >\n> > > https://momjian.us/pgsql_docs/release-14.html\n> > >\n> >\n> > It might have already been discussed but I think we should mention\n> > commit bed90759f in the release note. The commit seems like a bug fix\n> > but it not back-patched to the older versions at least for now. 
There\n> > is a discussion[1] that we will revisit that a couple of months after\n> > 14 is released so as there is some feedback from the field with this\n> > change.\n>\n> OK, but I need more information on how users will see a difference based\n> on this commit:\n\nI think that since with this commit the server on Windows can handle a\nfile over 4GB, COPY FROM loading data from an over 4GB file and\npg_dump dumping a large table work now.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 15 Jun 2021 11:49:21 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, Jun 15, 2021 at 11:49:21AM +0900, Masahiko Sawada wrote:\n> On Tue, Jun 15, 2021 at 10:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> OK, but I need more information on how users will see a difference based\n>> on this commit:\n\n+1. That would be good to have in the release notes.\n\n> I think that since with this commit the server on Windows can handle a\n> file over 4GB, COPY FROM loading data from an over 4GB file and\n> pg_dump dumping a large table work now.\n\nSegment files or WAL files larger than 4GB also gain from that.\nAnything for which we may finish to do a stat() on benefits from this\nchange if running on Windows. For pg_dump, a workaround in PG <= 13\nwas to use --no-sync as the stat() failure came from files with a size\nlarger than 4GB. 
That's rather sad as that means sacrificing\ndurability for more usability :(\n--\nMichael", "msg_date": "Tue, 15 Jun 2021 12:01:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 14, 2021 at 1:11 PM Bruce Momjian <bruce@momjian.us> wrote:\n> FYI, the most recent PG 14 relnote doc build is at:\n>\n>         https://momjian.us/pgsql_docs/release-14.html\n\nI just pushed a commit that makes the existing vacuum_index_cleanup\nreloption and INDEX_CLEANUP VACUUM parameter support disabling the\n\"Allow vacuum to skip index vacuuming when the number of removable\nindex entries is insignificant\" behavior. This should be mentioned in\nthe release notes.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 18 Jun 2021 20:47:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, Jun 15, 2021 at 12:01:00PM +0900, Michael Paquier wrote:\n> On Tue, Jun 15, 2021 at 11:49:21AM +0900, Masahiko Sawada wrote:\n> > On Tue, Jun 15, 2021 at 10:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >> OK, but I need more information on how users will see a difference based\n> >> on this commit:\n> \n> +1.  That would be good to have in the release notes.\n> \n> > I think that since with this commit the server on Windows can handle a\n> > file over 4GB, COPY FROM loading data from an over 4GB file and\n> > pg_dump dumping a large table work now.\n> \n> Segment files or WAL files larger than 4GB also gain from that.\n> Anything for which we may finish to do a stat() on benefits from this\n> change if running on Windows.  For pg_dump, a workaround in PG <= 13\n> was to use --no-sync as the stat() failure came from files with a size\n> larger than 4GB. 
That's rather sad as that means sacrifying\n> durability for more usability :(\n\nOK, I went with this text and put it in the Source Code section since it\napplies to several layers of Postgres.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Sun, 20 Jun 2021 23:50:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 21, 2021 at 12:50 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jun 15, 2021 at 12:01:00PM +0900, Michael Paquier wrote:\n> > On Tue, Jun 15, 2021 at 11:49:21AM +0900, Masahiko Sawada wrote:\n> > > On Tue, Jun 15, 2021 at 10:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >> OK, but I need more information on how users will see a difference based\n> > >> on this commit:\n> >\n> > +1. That would be good to have in the release notes.\n> >\n> > > I think that since with this commit the server on Windows can handle a\n> > > file over 4GB, COPY FROM loading data from an over 4GB file and\n> > > pg_dump dumping a large table work now.\n> >\n> > Segment files or WAL files larger than 4GB also gain from that.\n> > Anything for which we may finish to do a stat() on benefits from this\n> > change if running on Windows. For pg_dump, a workaround in PG <= 13\n> > was to use --no-sync as the stat() failure came from files with a size\n> > larger than 4GB. 
That's rather sad as that means sacrifying\n> > durability for more usability :(\n>\n> OK, I went with this text and put it in the Source Code section since it\n> applies to several layers of Postgres.\n\nThanks!\n\nI got the parse error after applying the patch:\n\nrelease-14.sgml:3562: parser error : Input is not proper UTF-8,\nindicate encoding !\nBytes: 0xE9 0x20 0x53 0x61\n (Juan Jos Santamara Flecha)\n ^\n\nIs that a problem with my environment?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 21 Jun 2021 13:57:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 21, 2021 at 01:57:32PM +0900, Masahiko Sawada wrote:\n> > OK, I went with this text and put it in the Source Code section since it\n> > applies to several layers of Postgres.\n> \n> Thanks!\n> \n> I got the parse error after applying the patch:\n> \n> release-14.sgml:3562: parser error : Input is not proper UTF-8,\n> indicate encoding !\n> Bytes: 0xE9 0x20 0x53 0x61\n> (Juan Jos Santamara Flecha)\n> ^\n> \n> Is that a problem with my environment?\n\nI don't know, but it builds here and properly shows here:\n\n\thttps://momjian.us/pgsql_docs/release-14.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 01:07:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, Jun 18, 2021 at 08:47:21PM -0700, Peter Geoghegan wrote:\n> On Mon, Jun 14, 2021 at 1:11 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > FYI, the most recent PG 14 relnote doc build is at:\n> >\n> > https://momjian.us/pgsql_docs/release-14.html\n> \n> I just pushed a commit that makes the existing 
vacuum_index_cleanup\n> reloption and INDEX_CLEANUP VACUUM parameter support disabling the\n> \"Allow vacuum to skip index vacuuming when the number of removable\n> index entries is insignificant\" behavior. This should be mentioned in\n> the release notes.\n\nAgreed. I updated the PG 14 release notes to be current as of today,\nand adjusted your item --- patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Mon, 21 Jun 2021 01:08:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 21, 2021 at 2:07 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Jun 21, 2021 at 01:57:32PM +0900, Masahiko Sawada wrote:\n> > > OK, I went with this text and put it in the Source Code section since it\n> > > applies to several layers of Postgres.\n> >\n> > Thanks!\n> >\n> > I got the parse error after applying the patch:\n> >\n> > release-14.sgml:3562: parser error : Input is not proper UTF-8,\n> > indicate encoding !\n> > Bytes: 0xE9 0x20 0x53 0x61\n> > (Juan Jos Santamara Flecha)\n> > ^\n> >\n> > Is that a problem with my environment?\n>\n> I don't know, but it builds here and properly shows here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n\nMaybe it's my environmental problem. 
Thanks anyway!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 21 Jun 2021 14:09:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "> On Mon, Jun 21, 2021 at 12:50 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>> On Tue, Jun 15, 2021 at 12:01:00PM +0900, Michael Paquier wrote:\n>> > On Tue, Jun 15, 2021 at 11:49:21AM +0900, Masahiko Sawada wrote:\n>> > > On Tue, Jun 15, 2021 at 10:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> > >> OK, but I need more information on how users will see a difference based\n>> > >> on this commit:\n>> >\n>> > +1. That would be good to have in the release notes.\n>> >\n>> > > I think that since with this commit the server on Windows can handle a\n>> > > file over 4GB, COPY FROM loading data from an over 4GB file and\n>> > > pg_dump dumping a large table work now.\n>> >\n>> > Segment files or WAL files larger than 4GB also gain from that.\n>> > Anything for which we may finish to do a stat() on benefits from this\n>> > change if running on Windows. For pg_dump, a workaround in PG <= 13\n>> > was to use --no-sync as the stat() failure came from files with a size\n>> > larger than 4GB. That's rather sad as that means sacrifying\n>> > durability for more usability :(\n>>\n>> OK, I went with this text and put it in the Source Code section since it\n>> applies to several layers of Postgres.\n> \n> Thanks!\n> \n> I got the parse error after applying the patch:\n> \n> release-14.sgml:3562: parser error : Input is not proper UTF-8,\n> indicate encoding !\n> Bytes: 0xE9 0x20 0x53 0x61\n> (Juan Jos Santamara Flecha)\n> ^\n> \n> Is that a problem with my environment?\n\nMe too. I think the problem is, Bruce's patch is encoded in\nISO-8859-1, not UTF-8. As far as I know PostgreSQL never encodes\n*.sgml files in ISO-8859-1. Anyway, attached is the Bruce's patch\nencoded in UTF-8. 
This works for me.\n\nMy guess is, when Bruce attached the file, his MUA automatically\nchanged the file encoding from UTF-8 to ISO-8859-1 (it could happen in\nmany MUA). Also that's the reason why he does not see the problem\nwhile compiling the sgml files. In his environment release-14.sgml is\nencoded in UTF-8, I guess. To prevent the problem next time, it's\nbetter to change the mime type of the attached file to\nApplication/Octet-Stream.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp", "msg_date": "Mon, 21 Jun 2021 14:47:16 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 21, 2021 at 02:47:16PM +0900, Tatsuo Ishii wrote:\n> > I got the parse error after applying the patch:\n> > \n> > release-14.sgml:3562: parser error : Input is not proper UTF-8,\n> > indicate encoding !\n> > Bytes: 0xE9 0x20 0x53 0x61\n> > (Juan Jos Santamara Flecha)\n> > ^\n> > \n> > Is that a problem with my environment?\n> \n> Me too. I think the problem is, Bruce's patch is encoded in\n> ISO-8859-1, not UTF-8. As far as I know PostgreSQL never encodes\n> *.sgml files in ISO-8859-1. Anyway, attached is the Bruce's patch\n> encoded in UTF-8. This works for me.\n> \n> My guess is, when Bruce attached the file, his MUA automatically\n> changed the file encoding from UTF-8 to ISO-8859-1 (it could happen in\n> many MUA). Also that's the reason why he does not see the problem\n> while compiling the sgml files. In his environment release-14.sgml is\n> encoded in UTF-8, I guess. To prevent the problem next time, it's\n> better to change the mime type of the attached file to\n> Application/Octet-Stream.\n\nOh, people were testing by building from the attached patch, not from\nthe git tree. 
Yes, I see now the email was switched to a single-byte\nencoding, and the attachment header confirms it:\n\n\tContent-Type: text/x-diff; charset=iso-8859-1\n\t ----------\n\tContent-Disposition: attachment; filename=\"master.diff\"\n\tContent-Transfer-Encoding: 8bit\n\nI guess my email program, mutt, is trying to be helpful by using a\nsingle-byte encoding when UTF is not necessary, which I guess makes\nsense. I will try to remember this can cause problems with SGML\nattachments.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:37:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, May 10, 2021 at 7:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n>\n> https://momjian.us/pgsql_docs/release-14.html\n>\n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. 
I also expect a lot of\n> feedback.\n>\n> I plan to work on completing this document this coming week in\n> preparation for beta next week.\n\nVery good, thanks.\n\nI just noticed that these commits are missing, yet are very important\nnew features:\nd9d076222f5b94a8\nf9900df5f9\nc98763bf51bf\n\nThese are important enough to be major features of PG14.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 23 Jun 2021 13:50:07 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, Jun 23, 2021 at 5:50 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> I just noticed that these commits are missing, yet are very important\n> new features:\n> d9d076222f5b94a8\n> f9900df5f9\n> c98763bf51bf\n>\n> These are important enough to be major features of PG14.\n\nI certainly think that they're important enough to be mentioned.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Jun 2021 07:45:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, Jun 23, 2021 at 07:45:53AM -0700, Peter Geoghegan wrote:\n> On Wed, Jun 23, 2021 at 5:50 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > I just noticed that these commits are missing, yet are very important\n> > new features:\n> > d9d076222f5b94a8\n> > f9900df5f9\n> > c98763bf51bf\n> >\n> > These are important enough to be major features of PG14.\n> \n> I certainly think that they're important enough to be mentioned.\n\nOK, here is a doc patch to add a mention of this. 
I originally thought\nthis was an optimization that wouldn't be of general interest.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Thu, 24 Jun 2021 21:56:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "> Require custom server variable names to use only character which are valid for unquoted SQL identifiers (Tom Lane)\n\ncharacters plural (since 69a58bfe4)\n\n> This is similar to how Unicode can be specified in literal string.\n\nliteral strings\n\n> Add executor method to cache results from the inner-side of nested loop joins (David Rowley)\n> This is useful if only a small percentage of rows is checked on the inner side.\n\nI think this should mention the GUC, whether we leave it enabled by default (in\nwhich case people may want to disable it) or disable by default (in which case\npeople may want to enable it).\n\n> The postgres_fdw supports these type of scans if async_capable is set.\nthis type\nremove \"The\" ?\n\n> Prevent the containment operators (<@ and @>) for intarray from using GiST indexes (Tom Lane)\n> Remove deprecated containment operators @ and ~ for built-in geometric data types and contrib modules cube, hstore, intarray, and seg (Justin Pryzby)\n> For example, disregard ^ in its expansion in \\1 in (^\\d+).*\\1.\n> Add point operators <<| and |>> to be strictly above/below geometry (Emre Hasegeli)\n> Previously >^ and <^ were marked as performing this test, but non-point geometric operators used these operators for non-strict comparisons, leading to confusion. The old operators still exist but will be eventually removed. 
ACCURATE?\n\nShould these have markup added?\n\n> Certain discarded tokens, like underscore, caused the output of these functions to produce incorrect tsquery output, e.g., both websearch_to_tsquery('\"pg_class pg\"') and to_tsquery('pg_class <-> pg') used to output '( pg & class ) <-> pg', but now both output 'pg <-> class <-> pg'.\n> Previously, quoted text that contained multiple adjacent discarded tokens were treated as multiple tokens, causing incorrect tsquery output, e.g., websearch_to_tsquery('\"aaa: bbb\"') used to output 'aaa <2> bbb', but now outputs 'aaa <-> bbb'.\n\nMissing markup?\n\n> This is controlled by server variable ssl_crl_dir and libpq connection option sslcrldir. Previously only CRL files could be specified.\n> Allow pgstattuple_approx() to report on TOAST tables (Peter Eisentraut)\n> Add pg_stat_statements_info system view to show pg_stat_statements activity (Katsuragi Yuta, Yuki Seino, Naoki Nakamichi)\n> Add postgres_fdw function postgres_fdw_get_connections() to report open foreign server connections (Bharath Rupireddy)\n\nThese should have additional hyperlinks\n\n> Add primary keys, unique constraints, and foreign keys to system catalogs (Peter Eisentraut)\n\nShould mention and link to pg_get_catalog_foreign_keys()\n\n> Pass doubled quote marks in Chapter 36 SQL command strings literally (Tom Lane)\n\n\"Chapter 36\" looks funny?\nSee also: 4f7d1c309\n\n>Previously window frame clauses like 'inf' PRECEDING AND 'inf' FOLLOWING returned incorrect results.\n>Negative values produced undesirable results.\n>Previously such cases returned 1.\n>This previously was allowed but produced incorrect results.\n>This could be accomplished previously using existing syntax.\n\nAll these details could be omitted.\n\n>Only the target table can be referenced.\n\nCould be omitted or folded into the preceding line.\n\n> This was already disabled by default in previous Postgres releases, and most modern OpenSSL and TLS versions no longer support it.\n> This 
was last used as the default in Postgres 7.3 (year 2002).\n> By default, Postgres opens and fsyncs every data file at the start of crash recovery. This new setting, recovery_init_sync_method=syncfs, instead syncs each filesystem used by the database cluster. This allows for faster recovery on systems with many database files.\n> The new syntax is SUBSTRING(text SIMILAR pattern ESCAPE escapechar). The previous standard syntax was SUBSTRING(text FROM pattern FOR escapechar), and is still supported by Postgres.\n\nThese should all say <productname>PostgreSQL</productname>\n\n> Allow psql's \\df and \\do commands to specify function and operator argument types (Greg Sabino Mullane, Tom Lane)\n> Add an access method column to psql's \\d[i|m|t]+ output (Georgios Kokolatos)\n> Allow psql's \\dt and \\di to show TOAST tables and their indexes (Justin Pryzby)\n> Add psql command \\dX to list extended statistics objects (Tatsuro Yamada)\n> Fix psql's \\dT to understand array syntax and backend grammar aliases, like \"int\" for \"integer\" (Greg Sabino Mullane, Tom Lane)\n> When editing the previous query or a file with psql's \\e, or using \\ef and \\ev, ignore the contents if the editor exits without saving (Laurenz Albe)\n\nAll these slash commands should be <literal>\n\n> Stop pg_upgrade from creating analyze_new_cluster script (Michael Paquier)\n\nIt's called analyze_new_cluster.sh (except on window), and it's Magnus' patch.\n\n> EXTRACT(date) now throws an error for units that are not part of the date data type.\n\n\"Date data\" always seems hard to read.\nCould you add markup for \"<type>date</type>\" ?\nOr say: of type \"date\".\n\n> EXTRACT(date) now throws an error for units that are not part of the date data type.\n\nCould say \"if requested to extract units that are not part of the \"date\" type.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 25 Jun 2021 18:04:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 
release notes, first draft" }, { "msg_contents": "On Fri, Jun 25, 2021 at 06:04:56PM -0500, Justin Pryzby wrote:\n> > Require custom server variable names to use only character which are valid for unquoted SQL identifiers (Tom Lane)\n> \n> characters plural (since 69a58bfe4)\n\nFixed.\n\n> > This is similar to how Unicode can be specified in literal string.\n> \n> literal strings\n\nFixed.\n\n> > Add executor method to cache results from the inner-side of nested loop joins (David Rowley)\n> > This is useful if only a small percentage of rows is checked on the inner side.\n> \n> I think this should mention the GUC, whether we leave it enabled by default (in\n> which case people may want to disable it) or disable by default (in which case\n> people may want to enable it).\n\nOK, I changed it to:\n\n This is useful if only a small percentage of rows is checked on\n the inner side and is controlled by <xref\n linkend=\"guc-enable-resultcache\"/>.\n\n> > The postgres_fdw supports these type of scans if async_capable is set.\n> this type\n> remove \"The\" ?\n\nNew text is:\n\n\t<link\n\tlinkend=\"postgres-fdw\"><application>postgres_fdw</application></link>\n\tsupports these type of scans if <literal>async_capable</literal>\n\nI kept \"these types\" because the paragraph above says:\n\n\tAllow a query referencing multiple <link\n\tlinkend=\"sql-createforeigntable\">foreign tables</link> to perform\n\tforeign table scans in parallel (Robert Haas, Kyotaro Horiguchi,\n\tThomas Munro, Etsuro Fujita)\n\nso we are talking about scans in parallel, so I think it is plural. 
Wrong?\n\n> > Prevent the containment operators (<@ and @>) for intarray from using GiST indexes (Tom Lane)\n\nI show this markup as:\n\n Prevent the containment operators (&lt;@ and @&gt;) for <xref\n linkend=\"intarray\"/> from using GiST indexes (Tom Lane)\n\nWhat markup is missing?\n\n> > Remove deprecated containment operators @ and ~ for built-in geometric data types and contrib modules cube, hstore, intarray, and seg (Justin Pryzby)\n\nSame. what is missing?\n\n Remove deprecated containment operators @ and ~ for built-in\n <link linkend=\"functions-geometry\">geometric data types</link> and\n contrib modules <xref linkend=\"cube\"/>, <xref linkend=\"hstore\"/>,\n <xref linkend=\"intarray\"/>, and <xref linkend=\"seg\"/> (Justin Pryzby)\n\nI could link to our \"Geometric Functions and Operators\" section, but my\nunderstanding is those are operators for our built-in types, not for\nextensions, which is what we are talking about here.\n\n> > For example, disregard ^ in its expansion in \\1 in (^\\d+).*\\1.\n\nFixed:\n\n Improve handling of regular expression <link\n linkend=\"posix-escape-sequences\">back-references</link> (Tom Lane)\n\nI didn't know we had a good section for that, but I see it now.\n\n> > Add point operators <<| and |>> to be strictly above/below geometry (Emre Hasegeli)\n> > Previously >^ and <^ were marked as performing this test, but non-point geometric operators used these operators for non-strict comparisons, leading to confusion. The old operators still exist but will be eventually removed. 
ACCURATE?\n\nI see markup on this:\n\n Add <link linkend=\"functions-geometry\">point operators</link>\n &lt;&lt;| and |&gt;&gt; to be strictly above/below geometry\n (Emre Hasegeli)\n\n> Should these have markup added?\n> \n> > Certain discarded tokens, like underscore, caused the output of these functions to produce incorrect tsquery output, e.g., both websearch_to_tsquery('\"pg_class pg\"') and to_tsquery('pg_class <-> pg') used to output '( pg & class ) <-> pg', but now both output 'pg <-> class <-> pg'.\n> > Previously, quoted text that contained multiple adjacent discarded tokens were treated as multiple tokens, causing incorrect tsquery output, e.g., websearch_to_tsquery('\"aaa: bbb\"') used to output 'aaa <2> bbb', but now outputs 'aaa <-> bbb'.\n> \n> Missing markup?\n\nI see markup on the main text of this item:\n\n Fix <link\n linkend=\"functions-textsearch\"><function>to_tsquery()</function></link>\n and <function>websearch_to_tsquery()</function> to properly parse\n query text containing discarded tokens (Alexander Korotkov)\n\n> > This is controlled by server variable ssl_crl_dir and libpq connection option sslcrldir. 
Previously only CRL files could be specified.\n\nOK, added link to sslcrldir.\n\n> > Allow pgstattuple_approx() to report on TOAST tables (Peter Eisentraut)\n\nFixed:\n\n Allow <link\n linkend=\"pgstattuple\"><function>pgstattuple_approx()</function></link>\n to report on <acronym>TOAST</acronym> tables (Peter Eisentraut)\n\n> > Add pg_stat_statements_info system view to show pg_stat_statements activity (Katsuragi Yuta, Yuki Seino, Naoki Nakamichi)\n\nThe section heading already has a link so I don't add any for individual\nitems.\n\n> > Add postgres_fdw function postgres_fdw_get_connections() to report open foreign server connections (Bharath Rupireddy)\n\nSame, this is already in the section with a link in the heading.\n\n> > Add primary keys, unique constraints, and foreign keys to system catalogs (Peter Eisentraut)\n> \n> Should mention and link to pg_get_catalog_foreign_keys()\n\nUh, why? I don't see the release notes as a place to explain how to use\nPostgres features.\n\n> > Pass doubled quote marks in Chapter 36 SQL command strings literally (Tom Lane)\n> \n> \"Chapter 36\" looks funny?\n> See also: 4f7d1c309\n\nYes, fixed.\n\n> >Previously window frame clauses like 'inf' PRECEDING AND 'inf' FOLLOWING returned incorrect results.\n> >Negative values produced undesirable results.\n> >Previously such cases returned 1.\n> >This previously was allowed but produced incorrect results.\n> >This could be accomplished previously using existing syntax.\n> \n> All these details could be omitted.\n\nI think explaining the previous behavior helps people understand the new\nbehavior.\n\n> >Only the target table can be referenced.\n> \n> Could be omitted or folded into the preceding line.\n\nI only use a single sentence for the main text.\n\n> > This was already disabled by default in previous Postgres releases, and most modern OpenSSL and TLS versions no longer support it.\n> > This was last used as the default in Postgres 7.3 (year 2002).\n> > By default, Postgres opens 
and fsyncs every data file at the start of crash recovery. This new setting, recovery_init_sync_method=syncfs, instead syncs each filesystem used by the database cluster. This allows for faster recovery on systems with many database files.\n> > The new syntax is SUBSTRING(text SIMILAR pattern ESCAPE escapechar). The previous standard syntax was SUBSTRING(text FROM pattern FOR escapechar), and is still supported by Postgres.\n> \n> These should all say <productname>PostgreSQL</productname>\n\nOK, fixed.\n\n> > Allow psql's \\df and \\do commands to specify function and operator argument types (Greg Sabino Mullane, Tom Lane)\n> > Add an access method column to psql's \\d[i|m|t]+ output (Georgios Kokolatos)\n\nI don't think this one is actually literal.\n\n> > Allow psql's \\dt and \\di to show TOAST tables and their indexes (Justin Pryzby)\n> > Add psql command \\dX to list extended statistics objects (Tatsuro Yamada)\n> > Fix psql's \\dT to understand array syntax and backend grammar aliases, like \"int\" for \"integer\" (Greg Sabino Mullane, Tom Lane)\n> > When editing the previous query or a file with psql's \\e, or using \\ef and \\ev, ignore the contents if the editor exits without saving (Laurenz Albe)\n> \n> All these slash commands should be <literal>\n\nYes, fixed except for the one mentioned above.\n\n> > Stop pg_upgrade from creating analyze_new_cluster script (Michael Paquier)\n> \n> It's called analyze_new_cluster.sh (except on window), and it's Magnus' patch.\n\nAh, yes, Magnus, fixed. I don't see the value in adding .sh since it is\nnot always accurate, but I could add .sh/.bat, but that seems confusing.\n\n> > EXTRACT(date) now throws an error for units that are not part of the date data type.\n> \n> \"Date data\" always seems hard to read.\n> Could you add markup for \"<type>date</type>\" ?\n\nDone.\n\nThank you for all the help on this. Patch attached. 
Since we just\nbranched for PG 15, our official developer doc build no longer has the\nPG 14 release notes, so I changed my local doc tree to build PG 14 until\nPG14 is released in a few months; my URL is:\n\n\thttps://momjian.us/pgsql_docs/release-14.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Mon, 28 Jun 2021 21:01:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 28, 2021 at 09:01:40PM -0400, Bruce Momjian wrote:\n> On Fri, Jun 25, 2021 at 06:04:56PM -0500, Justin Pryzby wrote:\n> > > The postgres_fdw supports these type of scans if async_capable is set.\n> > this type\n> > remove \"The\" ?\n> \n> New text is:\n> \n> \t<link\n> \tlinkend=\"postgres-fdw\"><application>postgres_fdw</application></link>\n> \tsupports these type of scans if <literal>async_capable</literal>\n> \n> I kept \"these types\" because the paragraph above says:\n> \n> \tAllow a query referencing multiple <link\n> \tlinkend=\"sql-createforeigntable\">foreign tables</link> to perform\n> \tforeign table scans in parallel (Robert Haas, Kyotaro Horiguchi,\n> \tThomas Munro, Etsuro Fujita)\n> \n> so we are talking about scans in parallel, so I think it is plural. 
Wrong?\n\nI think the \"type\" of scan being referenced is a \"parallel\" type, right ?\nSo there's only one type, but multiple scans.\nSo I think it should say \"this type\" of scan, but it seems like it's not only\neasier but generally better to say\n\n| postgres_fdw supports parallel scans if async_capable\n\n>> Prevent the containment operators (<@ and @>) for intarray from using GiST indexes (Tom Lane)\n>> Remove deprecated containment operators @ and ~ for built-in geometric data types and contrib modules cube, hstore, intarray, and seg (Justin Pryzby)\n>> For example, disregard ^ in its expansion in \\1 in (^\\d+).*\\1.\n>> Add point operators <<| and |>> to be strictly above/below geometry (Emre Hasegeli)\n>> Previously >^ and <^ were marked as performing this test, but non-point geometric operators used these operators for non-strict comparisons, leading to confusion. The old operators still exist but will be eventually removed.\n\n> What markup is missing?\n\nI mean markup for the operators, like <literal>&lt;@</literal>\n\n> > > Add primary keys, unique constraints, and foreign keys to system catalogs (Peter Eisentraut)\n>\n> > Should mention and link to pg_get_catalog_foreign_keys()\n> \n> Uh, why? 
I don't see the release notes as a place to explain how to use\n> Postgres features.\n\nBecause the normal way to show foreign keys (\\d) doesn't show them - the\nreferences are shown by the function.\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Mon, 28 Jun 2021 21:25:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, Jun 25, 2021 at 2:56 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Jun 23, 2021 at 07:45:53AM -0700, Peter Geoghegan wrote:\n> > On Wed, Jun 23, 2021 at 5:50 AM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > > I just noticed that these commits are missing, yet are very important\n> > > new features:\n> > > d9d076222f5b94a8\n> > > f9900df5f9\n> > > c98763bf51bf\n> > >\n> > > These are important enough to be major features of PG14.\n> >\n> > I certainly think that they're important enough to be mentioned.\n>\n> OK, here is a doc patch to add a mention of this. 
I originally thought\n> this was an optimization that wouldn't be of general interest.\n\nPerhaps we should also add this text from the commit message to ensure\nthe importance is understood:\n\"This is extremely useful in cases where CIC/RC can run for a very long\n time, because that used to be a significant headache for concurrent\n vacuuming of other tables.\"\n\nProposed edits:\n\n* \"during certain index operations\" -> \"while concurrent index\noperations run on other tables\"\n* spell Alvaro's name correctly\n* \"row expiration\" is a term not currently used in PG docs, so we\nshould probably look for something else.\n\n\nThere are 2 important features here, so the 2nd feature is worth\nmentioning also:\n\nAvoid spurious waits in concurrent indexing\n\nPreviously, multiple concurrent index operations could deadlock or\ncause long waits.\nWaits are avoided except for indexes with expressions, or WHERE predicates.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 29 Jun 2021 19:36:47 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Tue, Jun 29, 2021 at 07:36:47PM +0100, Simon Riggs wrote:\n> Perhaps we should also add this text from the commit message to ensure\n> the importance is understood:\n> \"This is extremely useful in cases where CIC/RC can run for a very long\n> time, because that used to be a significant headache for concurrent\n> vacuuming of other tables.\"\n\nUh, hopefully this idea is captured below.\n\n> Proposed edits:\n> \n> * \"during certain index operations\" -> \"while concurrent index\n> operations run on other tables\"\n\nUh, the problem here is that it isn't clear if we are talking about\nconcurrent index operations, \"concurrently\"-enabled index options, or\n\"concurrently\"-enabled concurrent index operations. 
I went with\n\"minimal-locking index operations\".\n\n> * spell Alvaro's name correctly\n\nUh, this is the second time Mutt's default to using iso-8859-1 for\nnon-ASCII attachments has caused confusion. I found that using 'set\nsend_charset=\"us-ascii:utf-8\"' in Mutt fixed this, so the attachment\nshould now be fine.\n\n> * \"row expiration\" is a term not currently used in PG docs, so we\n> should probably look for something else.\n\nYeah, I changed that to \"removing dead rows\" which seems to be our\nstandard text.\n\n> There are 2 important features here, so the 2nd feature is worth\n> mentioning also:\n> \n> Avoid spurious waits in concurrent indexing\n> \n> Previously, multiple concurrent index operations could deadlock or\n> cause long waits.\n> Waits are avoided except for indexes with expressions, or WHERE predicates.\n\nOK, I added text to the bottom to try and capture that; new patch\nattached, now with UTF8 encoding.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Wed, 30 Jun 2021 18:20:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Wed, Jun 30, 2021 at 11:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jun 29, 2021 at 07:36:47PM +0100, Simon Riggs wrote:\n> > Perhaps we should also add this text from the commit message to ensure\n> > the importance is understood:\n> > \"This is extremely useful in cases where CIC/RC can run for a very long\n> > time, because that used to be a significant headache for concurrent\n> > vacuuming of other tables.\"\n>\n> Uh, hopefully this idea is captured below.\n>\n> > Proposed edits:\n> >\n> > * \"during certain index operations\" -> \"while concurrent index\n> > operations run on other tables\"\n>\n> Uh, the problem here is that it isn't clear if we are talking about\n> 
concurrent index operations, \"concurrently\"-enabled index options, or\n> \"concurrently\"-enabled concurrent index operations. I went with\n> \"minimal-locking index operations\".\n>\n> > * spell Alvaro's name correctly\n>\n> Uh, this is the second time Mutt's default to using iso-8859-1 for\n> non-ASCII attachments has caused confusion. I found that using 'set\n> send_charset=\"us-ascii:utf-8\"' in Mutt fixed this, so the attachment\n> should now be fine.\n>\n> > * \"row expiration\" is a term not currently used in PG docs, so we\n> > should probably look for something else.\n>\n> Yeah, I changed that to \"removing dead rows\" which seems to be our\n> standard text.\n\nWhat you have now looks great for this feature, thanks.\n\n\n> > There are 2 important features here, so the 2nd feature is worth\n> > mentioning also:\n> >\n> > Avoid spurious waits in concurrent indexing\n> >\n> > Previously, multiple concurrent index operations could deadlock or\n> > cause long waits.\n> > Waits are avoided except for indexes with expressions, or WHERE predicates.\n>\n> OK, I added text to the bottom to try and capture that; new patch\n> attached, now with UTF8 encoding.\n\nThe text from \"This also avoids...\" tries to explain this, but they\nare two separate features, each important in its own right.\n\nSo regrettably, this part doesn't capture it, for me.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 1 Jul 2021 15:13:30 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Thu, Jul 1, 2021 at 03:13:30PM +0100, Simon Riggs wrote:\n> On Wed, Jun 30, 2021 at 11:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > * \"row expiration\" is a term not currently used in PG docs, so we\n> > > should probably look for something else.\n> >\n> > Yeah, I changed that to \"removing dead rows\" which seems to be our\n> > standard text.\n> \n> What 
you have now looks great for this feature, thanks.\n\nGood.\n\n> > > There are 2 important features here, so the 2nd feature is worth\n> > > mentioning also:\n> > >\n> > > Avoid spurious waits in concurrent indexing\n> > >\n> > > Previously, multiple concurrent index operations could deadlock or\n> > > cause long waits.\n> > > Waits are avoided except for indexes with expressions, or WHERE predicates.\n> >\n> > OK, I added text to the bottom to try and capture that; new patch\n> > attached, now with UTF8 encoding.\n> \n> The text from \"This also avoids...\" tries to explain this, but they\n> are two separate features, each important in its own right.\n> \n> So regrettably, this part doesn't capture it, for me.\n\nI see what you mean. This is in the VACUUM section, and this feature,\nthough from the same commits, has nothing to do with vacuum. Attached\nis an updated patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Thu, 1 Jul 2021 19:50:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, Jun 28, 2021 at 09:25:47PM -0500, Justin Pryzby wrote:\n> On Mon, Jun 28, 2021 at 09:01:40PM -0400, Bruce Momjian wrote:\n> > so we are talking about scans in parallel, so I think it is plural. 
Wrong?\n> \n> I think the \"type\" of scan being referenced is a \"parallel\" type, right ?\n> So there's only one type, but multiple scans.\n> So I think it should say \"this type\" of scan, but it seems like it's not only\n> easier but generally better to say\n> \n> | postgres_fdw supports parallel scans if async_capable\n> \n> >> Prevent the containment operators (<@ and @>) for intarray from using GiST indexes (Tom Lane)\n> >> Remove deprecated containment operators @ and ~ for built-in geometric data types and contrib modules cube, hstore, intarray, and seg (Justin Pryzby)\n> >> For example, disregard ^ in its expansion in \\1 in (^\\d+).*\\1.\n> >> Add point operators <<| and |>> to be strictly above/below geometry (Emre Hasegeli)\n> >> Previously >^ and <^ were marked as performing this test, but non-point geometric operators used these operators for non-strict comparisons, leading to confusion. The old operators still exist but will be eventually removed.\n> \n> > What markup is missing?\n> \n> I mean markup for the operators, like <literal>&lt;@</literal>\n> \n> > Uh, why? I don't see the release notes as a place to explain how to use\n> > Postgres features.\n> \n> Because the normal way to show foreign keys (\\d) doesn't show them - the\n> references are shown by the function.\n\nOK, agreed. 
Here is an updated applied patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Thu, 1 Jul 2021 20:32:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Fri, Jul 2, 2021 at 12:50 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jul 1, 2021 at 03:13:30PM +0100, Simon Riggs wrote:\n> > On Wed, Jun 30, 2021 at 11:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > * \"row expiration\" is a term not currently used in PG docs, so we\n> > > > should probably look for something else.\n> > >\n> > > Yeah, I changed that to \"removing dead rows\" which seems to be our\n> > > standard text.\n> >\n> > What you have now looks great for this feature, thanks.\n>\n> Good.\n>\n> > > > There are 2 important features here, so the 2nd feature is worth\n> > > > mentioning also:\n> > > >\n> > > > Avoid spurious waits in concurrent indexing\n> > > >\n> > > > Previously, multiple concurrent index operations could deadlock or\n> > > > cause long waits.\n> > > > Waits are avoided except for indexes with expressions, or WHERE predicates.\n> > >\n> > > OK, I added text to the bottom to try and capture that; new patch\n> > > attached, now with UTF8 encoding.\n> >\n> > The text from \"This also avoids...\" tries to explain this, but they\n> > are two separate features, each important in its own right.\n> >\n> > So regrettably, this part doesn't capture it, for me.\n>\n> I see what you mean. This is in the VACUUM section, and this feature,\n> though from the same commits, has nothing to do with vacuum. 
Attached\n> is an updated patch.\n\nPerfect, many thanks.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 13 Jul 2021 16:33:38 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "I think the release notes for the autovacuum item (which was first reverted and\nthen partially un-reverted) should say something like \"Partitioned tables are\nnow included in pg_stat_all_tables\":\n| e1efc5b465 Keep stats up to date for partitioned tables\n\nRemove some internal question/marks:\nACCURATE?\nREMOVE?\nALTER TABLE ... ALTER COLUMN ... TYPE RESETS STASTISTICS? (was never intended to have markup added?)\n\nAlso, I'm not sure if this one should be included.\n9a4c0e36fb Dump ALTER TABLE ... ATTACH PARTITION as a separate ArchiveEntry.\nSee: https://www.postgresql.org/message-id/flat/20210830154249.GX26465@telsasoft.com#9ad6fd4c36e13deea1c5f92f5114320e\n\nWhat about this ?\nce6b662aae psql: Fix name quoting on extended statistics\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 18 Sep 2021 15:46:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I think the release notes for the autovacuum item (which was first reverted and\n> then partially un-reverted) should say something like \"Partitioned tables are\n> now included in pg_stat_all_tables\":\n> | e1efc5b465 Keep stats up to date for partitioned tables\n\nHmm. If I'm reading the commit message properly, the actual change there\nis not that, but that analyze count and last analyze time are now tracked\ncorrectly for partitioned tables. Might be worth mentioning, not sure.\n\n> Remove some internal question/marks:\n> ACCURATE?\n> REMOVE?\n> ALTER TABLE ... ALTER COLUMN ... TYPE RESETS STASTISTICS? 
(was never intended to have markup added?)\n\nDid that in the copy-editing I just pushed.\n\n> Also, I'm not sure if this one should be included.\n> 9a4c0e36fb Dump ALTER TABLE ... ATTACH PARTITION as a separate ArchiveEntry.\n> See: https://www.postgresql.org/message-id/flat/20210830154249.GX26465@telsasoft.com#9ad6fd4c36e13deea1c5f92f5114320e\n\nProbably not worth listing --- the amount of explanation needed seems\nto outweigh the probability of users caring.\n\n> What about this ?\n> ce6b662aae psql: Fix name quoting on extended statistics\n\nSeems way too minor to bother with here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Sep 2021 17:15:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Sat, Sep 18, 2021 at 05:15:39PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I think the release notes for the autovacuum item (which was first reverted and\n> > then partially un-reverted) should say something like \"Partitioned tables are\n> > now included in pg_stat_all_tables\":\n> > | e1efc5b465 Keep stats up to date for partitioned tables\n> \n> Hmm. If I'm reading the commit message properly, the actual change there\n> is not that, but that analyze count and last analyze time are now tracked\n> correctly for partitioned tables. Might be worth mentioning, not sure.\n\nThe reverted patch to autoanalyze included partitioned tables in\npg_stat_all_tables, and the revert specifically avoided changing that (to avoid\na catbump). But last_analyzed and analyze_count were always shown as \"0\". 
So\nthe e1 commit addresses that by tracking that information and showing correct\nvalue instead of always 0.\n\nThe relevant portion starts here:\nhttps://www.postgresql.org/message-id/flat/202108161700.d4eh6a7n2lki%40alvherre.pgsql#b2e426eb19dbbddee0adf9bb1bcbbcf1\n\nI suggest that this *should* be included in the release notes, since I\nspecifically requested that partitioned tables be included in 2018.\n\n> In 20200418050815(dot)GE26953(at)telsasoft(dot)com I wrote:\n> |This patch includes partitioned tables in pg_stat_*_tables, which is great; I\n> |complained awhile ago that they were missing [0]. It might be useful if that\n> |part was split out into a separate 0001 patch (?).\n> | [0] https://www.postgresql.org/message-id/20180601221428.GU5164%40telsasoft.com\n\nAlso, I've patched my analyze script to use that field (same as for\nnonpartitioned tables) rather than needing to do a subquery involving\nmax(last_analyzed) of the partitions. Since it's still needed to manually\nanalyze parent tables.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 18 Sep 2021 16:35:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" }, { "msg_contents": "On Mon, 10 May 2021 02:03:08 -0400\nBruce Momjian <bruce@momjian.us> wrote:\n\n> I have committed the first draft of the PG 14 release notes. You can\n> see the most current build of them here:\n> \n> \thttps://momjian.us/pgsql_docs/release-14.html\n> \n> I need clarification on many items, and the document still needs its\n> items properly ordered, and markup added. 
I also expect a lot of\n> feedback.\n\nI found that there isn't an item referring to the following commit:\n\ncommit 547f04e7348b6ed992bd4a197d39661fe7c25097\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: Wed Mar 10 16:09:50 2021 +1300\n\n pgbench: Improve time logic.\n \n Instead of instr_time (struct timespec) and the INSTR_XXX macros,\n introduce pg_time_usec_t and use integer arithmetic. Don't include the\n connection time in TPS unless using -C mode, but report it separately.\n\nThis commit changed the output results that are visible for users, that is,\ntps with \"including connection establishing\" and \"excluding ...\" are no\nlonger reported, so I wonder if we should let users know about it.\n\nAnother user-visible change on pgbench that is not listed in the release\nnotes is the support for pipeline mode. There is a mention about libpq\nsupporting pipeline, so I think we can also mention that of pgbench.\n\nThere are other minor improvements on pgbench, but pgbench is just a\nbenchmark tool and its changes do not directly affect user applications,\nso maybe we don't have to describe all of them in the release notes.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 27 Sep 2021 10:07:13 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: PG 14 release notes, first draft" } ]
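The partitioned-table stats tracking Justin and Tom discuss in the thread above can be sketched in SQL. This is an editorial illustration added to the archive, using a hypothetical table name; the point, per commit e1efc5b465, is that on v14 the partitioned parent has a real row in pg_stat_all_tables whose analyze counters are maintained, so no max(last_analyzed) subquery over partitions is needed:

```sql
-- Hypothetical setup: a range-partitioned table with one leaf partition.
CREATE TABLE measurements (ts date NOT NULL, v int) PARTITION BY RANGE (ts);
CREATE TABLE measurements_2021 PARTITION OF measurements
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

-- Parents still need to be analyzed (manually or by autovacuum).
ANALYZE measurements;

-- On v14, the parent's row reports real values instead of always 0:
SELECT relname, last_analyze, analyze_count
FROM pg_stat_all_tables
WHERE relname = 'measurements';
```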
[ { "msg_contents": "I noted that, while privileges.sql doesn't stand out in terms of\nruntime normally (it's only the fourth slowest test in its\nparallel group), it looks absolutely horrid in CLOBBER_CACHE_ALWAYS\ntesting. On hyrax's latest run, it takes nearly 9000 seconds longer\nthan the next-slowest member of its group. Remembering that the\ncore regression tests are run thrice in a minimal buildfarm cycle,\nthis test is single-handedly responsible for over seven hours of the\n54 hour total build cycle.\n\nI dug into it and found that the core issue is much like that in\nopr_sanity.sql, namely that we're repeating this plpgsql function\n$bignum times:\n\nCREATE FUNCTION leak(integer,integer) RETURNS boolean\n AS $$begin return $1 < $2; end$$\n LANGUAGE plpgsql immutable;\n\n(I wonder whether the planner needs to invoke this function\nquite so many times during selectivity estimation. But,\nagain, improving that seems like a task for some other day.)\n\nNow, as far as I can see, this function definition isn't doing\nanything we can't do with an alias for the underlying int4lt\nfunction: the fact that the implementation is in plpgsql\nshouldn't matter at all for the purposes of this test.\nSo I replaced it, as per the attached patch.\n\nOn my machine, the time to run privileges.sql under\ndebug_invalidate_system_caches_always = 1\ndrops from\n\nreal 293m31.054s\nto\nreal 1m47.807s\n\nYes, really.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 10 May 2021 02:03:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Another modest proposal for reducing CLOBBER_CACHE_ALWAYS runtime" }, { "msg_contents": "On Mon, 10 May 2021 at 18:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I dug into it and found that the core issue is much like that in\n> opr_sanity.sql, namely that we're repeating this plpgsql function\n> $bignum times:\n>\n> CREATE FUNCTION leak(integer,integer) RETURNS boolean\n> AS $$begin return $1 < $2; end$$\n> 
LANGUAGE plpgsql immutable;\n\n> real 293m31.054s\n> to\n> real 1m47.807s\n>\n> Yes, really.\n\nThat's quite impressive.\n\nI've very much in favour of this change. Making it more realistic to\nrun the regression tests on a CLOBBER_CACHE_ALWAYS builds before a\ncommit is a very worthy goal and this is a big step towards that.\nNice.\n\nDavid\n\n\n", "msg_date": "Mon, 10 May 2021 18:57:48 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for reducing CLOBBER_CACHE_ALWAYS runtime" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 10 May 2021 at 18:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> real 293m31.054s\n>> to\n>> real 1m47.807s\n>> Yes, really.\n\n> That's quite impressive.\n\n> I've very much in favour of this change. Making it more realistic to\n> run the regression tests on a CLOBBER_CACHE_ALWAYS builds before a\n> commit is a very worthy goal and this is a big step towards that.\n> Nice.\n\nIt occurred to me to check hyrax's results on the older branches\n(it also tests v12 and v13), and the slope of the curve is bad:\n\nBranch\t\tLatest \"check\" phase runtime\n\nHEAD\t\t13:56:11\nv13\t\t11:00:33\nv12\t\t6:05:30\n\nSeems like we'd better do something about that.\n\nAbout 2.5 hours worth of the jump from 12 to 13 can be blamed on\nthe privileges test, looks like. The slowdown in that evidently\ncan be blamed on 0c882e52a86, which added this:\n\n+-- results below depend on having quite accurate stats for atest12\n+SET default_statistics_target = 10000;\n VACUUM ANALYZE atest12;\n+RESET default_statistics_target;\n\nThe slow queries in that test all cause the planner to apply the\n\"leak()\" function to every histogram entry for atest12, so this\none change caused a 100X increase in the amount of work there.\nI find it a bit remarkable that we barely noticed that in normal\noperation. 
In CCA mode, though, each leak() call takes circa 100ms\n(at least on my workstation), so kaboom.\n\nAnyway, I'm now feeling that what I should do with this patch\nis wait for the release cycle to finish and then apply it to\nv13 as well as HEAD. The other patch I proposed, to cut\nopr_sanity's runtime, may be too invasive for back-patch.\n\nMore generally, there is an upward creep in the test runtimes\nthat doesn't seem to be entirely accounted for by our constantly\nadding more tests. I am suspicious that plpgsql may be largely\nto blame for this. The smoking gun I found for that is the\nruntimes for the plpgsql_control test, which hasn't changed\n*at all* since it was added in v11; but hyrax shows these\nruntimes:\n\nHEAD:\ntest plpgsql_control ... ok 56105 ms\nv13:\ntest plpgsql_control ... ok 46879 ms\nv12:\ntest plpgsql_control ... ok 30809 ms\n\nIn normal builds that test's time has held pretty steady.\nSo I'm not sure what's underneath this rock, but I plan\nto try to find out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 14:30:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another modest proposal for reducing CLOBBER_CACHE_ALWAYS runtime" }, { "msg_contents": "\nOn 5/10/21 2:03 AM, Tom Lane wrote:\n> I noted that, while privileges.sql doesn't stand out in terms of\n> runtime normally (it's only the fourth slowest test in its\n> parallel group), it looks absolutely horrid in CLOBBER_CACHE_ALWAYS\n> testing. On hyrax's latest run, it takes nearly 9000 seconds longer\n> than the next-slowest member of its group. Remembering that the\n> core regression tests are run thrice in a minimal buildfarm cycle,\n> this test is single-handedly responsible for over seven hours of the\n> 54 hour total build cycle.\n\n\nFWIW, I would normally suggest a CCA enabled animal disable some of the\ntests so that it doesn't run the core tests thrice. 
The easiest way to\ndo that is: --skip-steps='check pg_upgrade-check'\n\n\nEven so your point is well taken.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 11 May 2021 10:07:43 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Another modest proposal for reducing CLOBBER_CACHE_ALWAYS runtime" } ]
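Tom's replacement of the plpgsql leak() function with an alias for the built-in int4lt can be sketched as follows. This is an editorial reconstruction, not the committed patch itself, and the committed form may differ in details; creating a LANGUAGE internal function requires superuser, which the regression tests run as:

```sql
-- Original test helper: one slow plpgsql execution per planner call,
-- catastrophic under debug_invalidate_system_caches_always:
CREATE FUNCTION leak(integer, integer) RETURNS boolean
    AS $$begin return $1 < $2; end$$
    LANGUAGE plpgsql IMMUTABLE;

-- One way to express the same predicate as an alias for the built-in
-- int4lt, avoiding plpgsql entirely:
DROP FUNCTION leak(integer, integer);
CREATE FUNCTION leak(integer, integer) RETURNS boolean
    AS 'int4lt' LANGUAGE internal IMMUTABLE STRICT;
```

The plpgsql implementation detail never mattered to the privileges test, so swapping in the internal function preserves the test's semantics while removing nearly all of its CLOBBER_CACHE_ALWAYS cost.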
[ { "msg_contents": "The function GetSubscriptionRelations was declaring ScanKeyData\nskey[2]; but actually\nonly uses 1 scan key. It seems like the code was cut/paste from other\nnearby functions\nwhich really are using 2 keys.\n\nPSA a trivial patch to declare the correct number of keys for this function.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 10 May 2021 17:06:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Mon, May 10, 2021 at 12:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> The function GetSubscriptionRelations was declaring ScanKeyData\n> skey[2]; but actually\n> only uses 1 scan key. It seems like the code was cut/paste from other\n> nearby functions\n> which really are using 2 keys.\n>\n> PSA a trivial patch to declare the correct number of keys for this function.\n\n+1 for the change. It looks like a cut/paste type introduced by the\ncommit 7c4f52409a.\n\nA comment on the patch: why do we need to declare an array of 1\nelement ScanKeyData skey[1];? Instead, can we just do ScanKeyData\nskey;?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 13:39:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Mon, May 10, 2021 at 01:39:31PM +0530, Bharath Rupireddy wrote:\n> On Mon, May 10, 2021 at 12:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > The function GetSubscriptionRelations was declaring ScanKeyData\n> > skey[2]; but actually\n> > only uses 1 scan key. 
It seems like the code was cut/paste from other\n> > nearby functions\n> > which really are using 2 keys.\n> >\n> > PSA a trivial patch to declare the correct number of keys for this function.\n> \n> +1 for the change. It looks like a cut/paste type introduced by the\n> commit 7c4f52409a.\n> \n> A comment on the patch: why do we need to declare an array of 1\n> element ScanKeyData skey[1];? Instead, can we just do ScanKeyData\n> skey;?\n\n+1, there are already many places where it's done this way if there's only 1\nkey.\n\n\n", "msg_date": "Mon, 10 May 2021 16:56:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Mon, May 10, 2021 at 6:09 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 10, 2021 at 12:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > The function GetSubscriptionRelations was declaring ScanKeyData\n> > skey[2]; but actually\n> > only uses 1 scan key. It seems like the code was cut/paste from other\n> > nearby functions\n> > which really are using 2 keys.\n> >\n> > PSA a trivial patch to declare the correct number of keys for this function.\n>\n> +1 for the change. It looks like a cut/paste type introduced by the\n> commit 7c4f52409a.\n>\n> A comment on the patch: why do we need to declare an array of 1\n> element ScanKeyData skey[1];? 
Instead, can we just do ScanKeyData\n> skey;?\n\nIMO declaring skey[1] is better because then the code can share the\nsame pattern as every other ScanData skey[n] code.\n\nPlease search PG source code for \"ScanData skey[1];\" - there are\ndozens of precedents where other people felt the same as me for\ndeclaring single keys.\n\n--------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 10 May 2021 19:09:29 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Mon, May 10, 2021 at 07:09:29PM +1000, Peter Smith wrote:\n> On Mon, May 10, 2021 at 6:09 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, May 10, 2021 at 12:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > The function GetSubscriptionRelations was declaring ScanKeyData\n> > > skey[2]; but actually\n> > > only uses 1 scan key. It seems like the code was cut/paste from other\n> > > nearby functions\n> > > which really are using 2 keys.\n> > >\n> > > PSA a trivial patch to declare the correct number of keys for this function.\n> >\n> > +1 for the change. It looks like a cut/paste type introduced by the\n> > commit 7c4f52409a.\n> >\n> > A comment on the patch: why do we need to declare an array of 1\n> > element ScanKeyData skey[1];? Instead, can we just do ScanKeyData\n> > skey;?\n> \n> IMO declaring skey[1] is better because then the code can share the\n> same pattern as every other ScanData skey[n] code.\n> \n> Please search PG source code for \"ScanData skey[1];\" - there are\n> dozens of precedents where other people felt the same as me for\n> declaring single keys.\n\nAFAICT there are 73 occurences vs 62 of the \"Scandata skey;\". 
I don't think\nthere's a huge consensus for one over the other.\n\n\n", "msg_date": "Mon, 10 May 2021 17:16:53 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Mon, May 10, 2021 at 2:46 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, May 10, 2021 at 07:09:29PM +1000, Peter Smith wrote:\n> > On Mon, May 10, 2021 at 6:09 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Mon, May 10, 2021 at 12:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > The function GetSubscriptionRelations was declaring ScanKeyData\n> > > > skey[2]; but actually\n> > > > only uses 1 scan key. It seems like the code was cut/paste from other\n> > > > nearby functions\n> > > > which really are using 2 keys.\n> > > >\n> > > > PSA a trivial patch to declare the correct number of keys for this function.\n> > >\n> > > +1 for the change. It looks like a cut/paste type introduced by the\n> > > commit 7c4f52409a.\n> > >\n> > > A comment on the patch: why do we need to declare an array of 1\n> > > element ScanKeyData skey[1];? Instead, can we just do ScanKeyData\n> > > skey;?\n> >\n> > IMO declaring skey[1] is better because then the code can share the\n> > same pattern as every other ScanData skey[n] code.\n> >\n> > Please search PG source code for \"ScanData skey[1];\" - there are\n> > dozens of precedents where other people felt the same as me for\n> > declaring single keys.\n>\n> AFAICT there are 73 occurences vs 62 of the \"Scandata skey;\". I don't think\n> there's a huge consensus for one over the other.\n\nI think both Scandata skey[1]; and Scandata skey; are used. 
But IMHO\nusing Scandata skey; looks better.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 15:08:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, May 10, 2021 at 07:09:29PM +1000, Peter Smith wrote:\n>> Please search PG source code for \"ScanData skey[1];\" - there are\n>> dozens of precedents where other people felt the same as me for\n>> declaring single keys.\n\n> AFAICT there are 73 occurences vs 62 of the \"Scandata skey;\". I don't think\n> there's a huge consensus for one over the other.\n\nYeah, there's no real consensus about that. But in this case there's\na strong reason to use skey[1]: it makes the patch a very safe one-liner.\nTo convert to the other pattern would require touching more code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 10:14:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Mon, May 10, 2021 at 10:14:08AM -0400, Tom Lane wrote:\n> Yeah, there's no real consensus about that. 
But in this case there's\n> a strong reason to use skey[1]: it makes the patch a very safe one-liner.\n> To convert to the other pattern would require touching more code.\n\nFWIW, what Peter S has done looks fine to me, even if it is true that\nCountDBSubscriptions() uses one scan key but does not use an array.\n\nAnd that makes the code slightly easier to follow.\n--\nMichael", "msg_date": "Tue, 11 May 2021 16:50:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Tue, May 11, 2021 at 04:50:46PM +0900, Michael Paquier wrote:\n> And that makes the code slightly easier to follow.\n\nYeah, that's better this way, so applied.\n--\nMichael", "msg_date": "Wed, 12 May 2021 16:52:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" }, { "msg_contents": "On Wed, May 12, 2021 at 5:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 11, 2021 at 04:50:46PM +0900, Michael Paquier wrote:\n> > And that makes the code slightly easier to follow.\n>\n> Yeah, that's better this way, so applied.\n\nThanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 12 May 2021 19:35:19 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GetSubscriptionRelations declares too many scan keys" } ]
[ { "msg_contents": "Hi,\n\nDuring debugging I noticed some code in ExecResult() where a WHILE\nloop is being used with an unconditional RETURN at the end of the\nblock (which is intentional, looking at the history of changes), but\nnow there's no actual use of the loop in any way. The code should\nprobably be changed to just use IF for clarity.\nI've attached a patch.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Mon, 10 May 2021 19:15:58 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Executor code - found an instance of a WHILE that should just be an\n IF" }, { "msg_contents": "On Mon, 10 May 2021 at 21:16, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> During debugging I noticed some code in ExecResult() where a WHILE\n> loop is being used with an unconditional RETURN at the end of the\n> block (which is intentional, looking at the history of changes), but\n> now there's no actual use of the loop in any way. The code should\n> probably be changed to just use IF for clarity.\n> I've attached a patch.\n\nLooks like leftovers from ea15e1867.\n\nI don't think this will affect any code generation but you are right,\nit should be an \"if\".\n\nDavid\n\n\n", "msg_date": "Mon, 10 May 2021 23:49:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Executor code - found an instance of a WHILE that should just be\n an IF" }, { "msg_contents": "On Mon, 10 May 2021 at 23:49, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 10 May 2021 at 21:16, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > During debugging I noticed some code in ExecResult() where a WHILE\n> > loop is being used with an unconditional RETURN at the end of the\n> > block (which is intentional, looking at the history of changes), but\n> > now there's no actual use of the loop in any way. 
The code should\n> > probably be changed to just use IF for clarity.\n> > I've attached a patch.\n>\n> Looks like leftovers from ea15e1867.\n>\n> I don't think this will affect any code generation but you are right,\n> it should be an \"if\".\n\nSince there's no bug fix here, I thought that there's not much point\nin backpatching this.\n\nDoes anyone object to making this small change in master?\n\nDavid\n\n\n", "msg_date": "Thu, 13 May 2021 20:20:36 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Executor code - found an instance of a WHILE that should just be\n an IF" }, { "msg_contents": "On Thu, May 13, 2021 at 08:20:36PM +1200, David Rowley wrote:\n> Since there's no bug fix here, I thought that there's not much point\n> in backpatching this.\n\nIndeed. I would not bother with a back-patch either.\n\n> Does anyone object to making this small change in master?\n\nNo objections from here.\n--\nMichael", "msg_date": "Thu, 13 May 2021 20:06:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Executor code - found an instance of a WHILE that should just be\n an IF" }, { "msg_contents": "On Thu, May 13, 2021 at 08:06:18PM +0900, Michael Paquier wrote:\n> On Thu, May 13, 2021 at 08:20:36PM +1200, David Rowley wrote:\n> > Since there's no bug fix here, I thought that there's not much point\n> > in backpatching this.\n> \n> Indeed. 
I would not bother with a back-patch either.\n> \n> > Does anyone object to making this small change in master?\n> \n> No objections from here.\n\n+1 to both.\n\n\n", "msg_date": "Thu, 13 May 2021 20:28:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Executor code - found an instance of a WHILE that should just be\n an IF" }, { "msg_contents": "On Fri, 14 May 2021 at 00:27, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 08:06:18PM +0900, Michael Paquier wrote:\n> > On Thu, May 13, 2021 at 08:20:36PM +1200, David Rowley wrote:\n> > > Since there's no bug fix here, I thought that there's not much point\n> > > in backpatching this.\n> >\n> > Indeed. I would not bother with a back-patch either.\n> >\n> > > Does anyone object to making this small change in master?\n> >\n> > No objections from here.\n>\n> +1 to both.\n\nThanks for the votes. Pushed.\n\nDavid\n\n\n", "msg_date": "Fri, 14 May 2021 12:26:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Executor code - found an instance of a WHILE that should just be\n an IF" }, { "msg_contents": "On Fri, May 14, 2021 at 10:27 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Thanks for the votes. Pushed.\n\nThanks!\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 14 May 2021 10:37:25 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Executor code - found an instance of a WHILE that should just be\n an IF" } ]
[ { "msg_contents": "Hi,\n\nThe file path mentioned in the file comments of\n'src/backend/utils/activity/backend_status.c'\nwas incorrect. Modified it to the correct path. Please find the patch\nattached.\n\nThanks & Regards,\nNitin Jadhav", "msg_date": "Mon, 10 May 2021 15:15:31 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Fix of file path in the file comments" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], I got to know that there is a new GUC\ndebug_invalidate_system_caches_always that has been introduced in v14.\nIt can be used to switch off cache invalidation in\nCLOBBER_CACHE_ALWAYS builds which makes cache sensitive tests stable.\nUsing this GUC, it is quite possible to make cached connection\nmanagement function tests more meaningful by returning original\nvalues(true/false, all the output columns) instead of SELECT 1. Note\nthat the commit f77717b29 stabilized the tests for those functions -\npostgres_fdw_disconnect, postgres_fdw_disconnect_all and\npostgres_fdw_get_connections by masking actual return value of the\nfunctions.\n\nAttaching a patch to use the new GUC to make the functions return actual output.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVGSQsq68y-LmyXKZzbNVgSgsiVKSzsrVXzVgnsZTN26Q%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 10 May 2021 18:02:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw - make cached connection functions tests meaningful" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> While working on [1], I got to know that there is a new GUC\n> debug_invalidate_system_caches_always that has been introduced in v14.\n> It can be used to switch off cache invalidation in\n> CLOBBER_CACHE_ALWAYS builds which makes cache sensitive tests stable.\n> Using this GUC, it is quite possible to make cached connection\n> management function tests more meaningful by returning original\n> values(true/false, all the output columns) instead of SELECT 1.\n\nNote that this needs an update in the wake of d68a00391.\n\nMore generally, though, I am not sure that I believe the premise of\nthis patch. 
AFAICS it's assuming that forcing debug_discard_caches\noff guarantees zero cache flushes, which it does not. (If it could,\nwe wouldn't need the whole thing; the point of that variable is to\ndeterministically force flushes which would otherwise be\nnondeterministic, not nonexistent.) Even in a contrib test that\nseemingly has nothing else running, background activity such as\nautovacuum could result in surprises. So I fear that what you have\ngot here is a patch that will work 99% of the time; which is not\ngood enough for the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Jul 2021 18:18:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw - make cached connection functions tests meaningful" }, { "msg_contents": "On Mon, May 10, 2021 at 6:03 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on [1], I got to know that there is a new GUC\n> debug_invalidate_system_caches_always that has been introduced in v14.\n> It can be used to switch off cache invalidation in\n> CLOBBER_CACHE_ALWAYS builds which makes cache sensitive tests stable.\n> Using this GUC, it is quite possible to make cached connection\n> management function tests more meaningful by returning original\n> values(true/false, all the output columns) instead of SELECT 1. Note\n> that the commit f77717b29 stabilized the tests for those functions -\n> postgres_fdw_disconnect, postgres_fdw_disconnect_all and\n> postgres_fdw_get_connections by masking actual return value of the\n> functions.\n>\n> Attaching a patch to use the new GUC to make the functions return actual output.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. 
I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 15 Jul 2021 17:46:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw - make cached connection functions tests meaningful" }, { "msg_contents": "On Thu, Jul 15, 2021 at 3:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > While working on [1], I got to know that there is a new GUC\n> > debug_invalidate_system_caches_always that has been introduced in v14.\n> > It can be used to switch off cache invalidation in\n> > CLOBBER_CACHE_ALWAYS builds which makes cache sensitive tests stable.\n> > Using this GUC, it is quite possible to make cached connection\n> > management function tests more meaningful by returning original\n> > values(true/false, all the output columns) instead of SELECT 1.\n>\n> Note that this needs an update in the wake of d68a00391.\n>\n> More generally, though, I am not sure that I believe the premise of\n> this patch. AFAICS it's assuming that forcing debug_discard_caches\n> off guarantees zero cache flushes, which it does not. (If it could,\n> we wouldn't need the whole thing; the point of that variable is to\n> deterministically force flushes which would otherwise be\n> nondeterministic, not nonexistent.)\n\nCan the setting debug_discard_caches = 0 still make extra\nflushes/discards (not the regular cache flushes/discards that happen\nbecause of alters or changes in the cached elements)? My understanding\nwas that debug_discard_caches = 0, disables all the extra flushes with\nclobber cache builds. If my understanding wasn't right, isn't it good\nto mention it somewhere in the documentation or in the source code?\n\n> Even in a contrib test that\n> seemingly has nothing else running, background activity such as\n> autovacuum could result in surprises. 
So I fear that what you have\n> got here is a patch that will work 99% of the time; which is not\n> good enough for the buildfarm.\n\nIf the setting debug_discard_caches = 0 makes at least a few extra\ncache flushes, I don't mind withdrawing this patch.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 17 Jul 2021 20:03:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw - make cached connection functions tests meaningful" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Thu, Jul 15, 2021 at 3:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> More generally, though, I am not sure that I believe the premise of\n>> this patch. AFAICS it's assuming that forcing debug_discard_caches\n>> off guarantees zero cache flushes, which it does not.\n\n> Can the setting debug_discard_caches = 0 still make extra\n> flushes/discards (not the regular cache flushes/discards that happen\n> because of alters or changes in the cached elements)? My understanding\n> was that debug_discard_caches = 0, disables all the extra flushes with\n> clobber cache builds. If my understanding wasn't right, isn't it good\n> to mention it somewhere in the documentation or in the source code?\n\nThe reason for this mechanism is that cache flushes can be triggered\nat any time by sinval messages from other processes (e.g., background\nautovacuums). Setting debug_discard_caches allows us to exercise\nthat possibility exhaustively and make sure that the code is proof\nagainst cache entries disappearing unexpectedly. 
Not setting\ndebug_discard_caches doesn't mean that that can't happen, only that\nyou can't predict when it will happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Jul 2021 12:06:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw - make cached connection functions tests meaningful" }, { "msg_contents": "On Sat, Jul 17, 2021 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Thu, Jul 15, 2021 at 3:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> More generally, though, I am not sure that I believe the premise of\n> >> this patch. AFAICS it's assuming that forcing debug_discard_caches\n> >> off guarantees zero cache flushes, which it does not.\n>\n> > Can the setting debug_discard_caches = 0 still make extra\n> > flushes/discards (not the regular cache flushes/discards that happen\n> > because of alters or changes in the cached elements)? My understanding\n> > was that debug_discard_caches = 0, disables all the extra flushes with\n> > clobber cache builds. If my understanding wasn't right, isn't it good\n> > to mention it somewhere in the documentation or in the source code?\n>\n> The reason for this mechanism is that cache flushes can be triggered\n> at any time by sinval messages from other processes (e.g., background\n> autovacuums). Setting debug_discard_caches allows us to exercise\n> that possibility exhaustively and make sure that the code is proof\n> against cache entries disappearing unexpectedly. Not setting\n> debug_discard_caches doesn't mean that that can't happen, only that\n> you can't predict when it will happen.\n\nThanks. 
I'm fine with dropping this patch, hence I marked the CF entry\nas \"rejected\".\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 21 Jul 2021 15:41:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw - make cached connection functions tests meaningful" } ]
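The output-masking idiom this thread refers to (the "SELECT 1" stabilization from commit f77717b29) can be illustrated like this. This is an editorial sketch assuming postgres_fdw is installed; the committed tests may use a different exact form:

```sql
-- Stable: always one row of 1, regardless of whether an unpredictable
-- cache flush already closed the cached connection:
SELECT 1 FROM postgres_fdw_disconnect_all();

-- The unmasked form the thread wanted to restore is timing-dependent
-- (t or f) unless zero cache flushes could be guaranteed, which, as
-- Tom notes, debug_discard_caches = 0 does not guarantee:
SELECT postgres_fdw_disconnect_all();
```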
[ { "msg_contents": "Hi\n\nmy customer reported an issue related to unsupported TABLESPACE pg_default\nfor partitioned table:\n\npostgres=# CREATE TABLE IF NOT EXISTS foo2\n    (\n        data bytea,\n        guid character varying(255) COLLATE pg_catalog.\"default\" NOT NULL,\n        part date NOT NULL,\n        retention_period integer,\n        CONSTRAINT document_data_bytea_pkey1 PRIMARY KEY (guid, part)\n    ) PARTITION BY RANGE (part)\n    WITH (\n        OIDS = FALSE\n    )\n    TABLESPACE pg_default;\nERROR:  cannot specify default tablespace for partitioned relations\n\nThis check is two years old\nhttps://github.com/postgres/postgres/commit/87259588d0ab0b8e742e30596afa7ae25caadb18#diff-f2c91c95b7f2a84d916138e0af4338859803a03cee0d7e2e710fbcb869c59d0c\n\nAre there some plans to fix this issue?\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 10 May 2021 14:39:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "compatibility issue - problem with migrating from Postgres 11" } ]
[ { "msg_contents": "Hi\n\nI tested features of Postgres 14. The extension pg_stat_statements didn't\nwork to me until I enabled compute_query_id. Is it expected behaviour?\n\nI expected just an empty column query_id and workable extension. This\ndoesn't look well.\n\nMore, it increases the (little bit) complexity of migration to Postgres 14.\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 10 May 2021 16:36:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "pg_stat_statements requires compute_query_id" }, { "msg_contents": "Hi Pavel,\n\nOn Mon, May 10, 2021 at 04:36:16PM +0200, Pavel Stehule wrote:\n> \n> I tested features of Postgres 14. The extension pg_stat_statements didn't\n> work to me until I enabled compute_query_id. Is it expected behaviour?\n\nYes.\n\n> I expected just an empty column query_id and workable extension.
This\n> doesn't look well.\n> \n> More, it increases the (little bit) complexity of migration to Postgres 14.\n\nThis was already raised multiple times, and the latest discussion can be found\nat [1].\n\nMultiple options have been suggested, but AFAICT there isn't a clear consensus\non what we should do exactly, so I've not been able to send a fix yet.\n\n[1]: https://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n\n\n", "msg_date": "Mon, 10 May 2021 22:43:37 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements requires compute_query_id" }, { "msg_contents": "On Mon, May 10, 2021 at 7:43 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Mon, May 10, 2021 at 04:36:16PM +0200, Pavel Stehule wrote:\n> > I expected just an empty column query_id and workable extension. This\n> > doesn't look well.\n> >\n> > More, it increases the (little bit) complexity of migration to Postgres 14.\n>\n> This was already raised multiple times, and the latest discussion can be found\n> at [1].\n>\n> Multiple options have been suggested, but AFAICT there isn't a clear consensus\n> on what we should do exactly, so I've not been able to send a fix yet.\n>\n> [1]: https://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n\nBefore it petered out, the thread seemed to be converging toward\nconsensus that the situation could be improved. I work on pganalyze,\nand our product requires pg_stat_statements to be enabled for a lot of\nits functionality. We guide our users through enabling it, but if\nadditional steps are required in 14, that may be confusing. I don't\nhave any strong feelings on the particular mechanism that would work\nbest here, but it would be nice if enabling pg_stat_statements in 14\ndid not require more work than in 13.
Even if it's just one extra\nsetting, it's another potential thing to get wrong and have to\ntroubleshoot, plus it means all existing pg_stat_statements guides out\nthere would become out of date.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Mon, 10 May 2021 10:03:36 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements requires compute_query_id" }, { "msg_contents": "po 10. 5. 2021 v 19:03 odesílatel Maciek Sakrejda <m.sakrejda@gmail.com>\nnapsal:\n\n> On Mon, May 10, 2021 at 7:43 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Mon, May 10, 2021 at 04:36:16PM +0200, Pavel Stehule wrote:\n> > > I expected just an empty column query_id and workable extension. This\n> > > doesn't look well.\n> > >\n> > > More, it increases the (little bit) complexity of migration to\n> Postgres 14.\n> >\n> > This was already raised multiple times, and the latest discussion can be\n> found\n> > at [1].\n> >\n> > Multiple options have been suggested, but AFAICT there isn't a clear\n> consensus\n> > on what we should do exactly, so I've not been able to send a fix yet.\n> >\n> > [1]:\n> https://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n>\n> Before it petered out, the thread seemed to be converging toward\n> consensus that the situation could be improved. I work on pganalyze,\n> and our product requires pg_stat_statements to be enabled for a lot of\n> its functionality. We guide our users through enabling it, but if\n> additional steps are required in 14, that may be confusing. I don't\n> have any strong feelings on the particular mechanism that would work\n> best here, but it would be nice if enabling pg_stat_statements in 14\n> did not require more work than in 13.
Even if it's just one extra\n> setting, it's another potential thing to get wrong and have to\n> troubleshoot, plus it means all existing pg_stat_statements guides out\n> there would become out of date.\n>\n\n+1\n\nminimally it requires extra notes in some migration guide.\n\nI understand so queryid is one from key values. So it is not possible to\nmerge data with and without a queryid. My idea about the best solution is\nsomething like pg_stat_statements can work without a queryid, and when the\ncompute_query_id is changed, then the values stored in pg_stat_statements\nare resetted. I have no idea if it can be implemented. Current state is not\nuser friendly. The people who know the previous behaviour can be very\nconfused.\n\nRegards\n\nPavel\n\n\n\n> Thanks,\n> Maciek\n>\n", "msg_date": "Mon, 10 May 2021 19:17:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_stat_statements requires compute_query_id" } ]
[ { "msg_contents": "The test program libpq_pipeline produced by the test suite in \nsrc/test/modules/libpq_pipeline/ is installed into tmp_install as part \nof make check. This isn't a real problem by itself, but I think it \ncreates a bit of an asymmetric situation that might be worth cleaning up.\n\nBefore, the contents of tmp_install exactly matched an actual \ninstallation. There were no extra test programs installed.\n\nAlso, the test suite code doesn't actually use that installed version, \nso it's not of any use, and it creates confusion about which copy is in use.\n\nThe reason this is there is that the test suite uses PGXS to build the \ntest program, and so things get installed automatically. I suggest that \nwe should either write out the build system by hand to avoid this, or \nmaybe extend PGXS to support building programs but not installing them. \nThe advantage of the former approach is that it would allow additional \ntest programs to be added later as well. (We should really collect the \nlibpq tests under src/interfaces/libpq/ anyway at some point.)\n\n\n", "msg_date": "Mon, 10 May 2021 20:26:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "libpq_pipeline in tmp_install" }, { "msg_contents": "On 10.05.21 20:26, Peter Eisentraut wrote:\n> The test program libpq_pipeline produced by the test suite in \n> src/test/modules/libpq_pipeline/ is installed into tmp_install as part \n> of make check.  This isn't a real problem by itself, but I think it \n> creates a bit of an asymmetric situation that might be worth cleaning up.\n> \n> Before, the contents of tmp_install exactly matched an actual \n> installation.
There were no extra test programs installed.\n> \n> Also, the test suite code doesn't actually use that installed version, \n> so it's not of any use, and it creates confusion about which copy is in \n> use.\n> \n> The reason this is there is that the test suite uses PGXS to build the \n> test program, and so things get installed automatically.  I suggest that \n> we should either write out the build system by hand to avoid this, or \n> maybe extend PGXS to support building programs but not installing them. \n\nHere is a patch that implements the second solution, which turned out to \nbe very easy.", "msg_date": "Wed, 19 May 2021 12:09:35 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq_pipeline in tmp_install" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 10.05.21 20:26, Peter Eisentraut wrote:\n>> The reason this is there is that the test suite uses PGXS to build the \n>> test program, and so things get installed automatically.  I suggest that \n>> we should either write out the build system by hand to avoid this, or \n>> maybe extend PGXS to support building programs but not installing them.
\n\n> Here is a patch that implements the second solution, which turned out to \n> be very easy.\n\n+1, except that you should add documentation for NO_INSTALL to the\nlist of definable symbols at the head of pgxs.mk, and to the list\nin extend.sgml (compare that for NO_INSTALLCHECK).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 May 2021 11:06:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq_pipeline in tmp_install" }, { "msg_contents": "On 2021-May-19, Tom Lane wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 10.05.21 20:26, Peter Eisentraut wrote:\n> >> The reason this is there is that the test suite uses PGXS to build the \n> >> test program, and so things get installed automatically.  I suggest that \n> >> we should either write out the build system by hand to avoid this, or \n> >> maybe extend PGXS to support building programs but not installing them. \n> \n> > Here is a patch that implements the second solution, which turned out to \n> > be very easy.\n\nGreat, thank you.\n\n> +1, except that you should add documentation for NO_INSTALL to the\n> list of definable symbols at the head of pgxs.mk, and to the list\n> in extend.sgml (compare that for NO_INSTALLCHECK).\n\nI propose this.\n\n-- \nÁlvaro Herrera Valdivia, Chile", "msg_date": "Wed, 19 May 2021 11:48:47 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: libpq_pipeline in tmp_install" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-19, Tom Lane wrote:\n>> +1, except that you should add documentation for NO_INSTALL to the\n>> list of definable symbols at the head of pgxs.mk, and to the list\n>> in extend.sgml (compare that for NO_INSTALLCHECK).\n\n> I propose this.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 May 2021 13:35:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false,
"msg_subject": "Re: libpq_pipeline in tmp_install" }, { "msg_contents": "On 19.05.21 19:35, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On 2021-May-19, Tom Lane wrote:\n>>> +1, except that you should add documentation for NO_INSTALL to the\n>>> list of definable symbols at the head of pgxs.mk, and to the list\n>>> in extend.sgml (compare that for NO_INSTALLCHECK).\n> \n>> I propose this.\n> \n> WFM.\n\nThanks for the feedback. I found that my proposal doesn't quite work, \nbecause \"check\" doesn't depend on \"all\" (anymore; see dbf2ec1a1c0), so \nrunning make check-world doesn't build the test program first. The \neasiest workaround I found was to add an \"install: all\" line even for \nthe NO_INSTALL case. It's all a bit hackish, though.", "msg_date": "Tue, 25 May 2021 11:04:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq_pipeline in tmp_install" } ]
[ { "msg_contents": "Hi,\n\nAfter hearing from a couple of directions about systems spending too\nmuch time scanning the local lock hash table, I wrote the trivial\npatch to put them in a linked list, before learning that people have\nconsidered that before, so I should probably go and read some history\non that and find out why it hasn't been done...\n\nHowever, I noticed in passing that RemoveLocalLock() accesses\n*locallock after removing it from the hash table (in assertion builds\nonly). So one question I have is whether it's actually a programming\nrule that you can't do that (at most you can compare the pointer\nagainst NULL), or whether it's supposed to be\nsafe-if-you-know-what-you're-doing, as the existing comments hints.\nHere also is a patch that does wipe_mem on removed elements, as\nthreatened last time this topic came up[1], which reveals the problem.\nI'm also not exactly sure why it's only a WARNING if your local lock\ntable is out of sync, but perhaps that's in the archives too.\n\n[1] https://www.postgresql.org/message-id/flat/CAHut%2BPs-pL%2B%2Bf6CJwPx2%2BvUqXuew%3DXt-9Bi-6kCyxn%2BFwi2M7w%40mail.gmail.com", "msg_date": "Tue, 11 May 2021 11:37:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Is element access after HASH_REMOVE ever OK?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> However, I noticed in passing that RemoveLocalLock() accesses\n> *locallock after removing it from the hash table (in assertion builds\n> only).
So one question I have is whether it's actually a programming\n> rule that you can't do that (at most you can compare the pointer\n> against NULL), or whether it's supposed to be\n> safe-if-you-know-what-you're-doing, as the existing comments hints.\n\nI'd say it's, at best, unwarranted familiarity with the dynahash\nimplementation ...\n\n> Here also is a patch that does wipe_mem on removed elements, as\n> threatened last time this topic came up[1], which reveals the problem.\n\n... one good reason being that it'll fail under this sort of\nentirely-reasonable debugging aid. Can we get rid of the unsafe\naccess easily?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 20:06:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is element access after HASH_REMOVE ever OK?" }, { "msg_contents": "I wrote:\n> ... Can we get rid of the unsafe\n> access easily?\n\nOh, shoulda read your second patch first.
Looking at that,\n> I fear it might not be quite that simple, because the\n> comment on CheckAndSetLockHeld says very clearly\n>\n> * It is callers responsibility that this function is called after\n> * acquiring/releasing the relation extension/page lock.\n>\n> so your proposed patch violates that specification.\n\nIt wouldn't be too hard to fix this though - we can just copy the\nlocktag into a local variable. Or use one of the existing local copies,\nhigher in the stack.\n\nBut:\n\n> I'm inclined to think that this API spec is very poorly thought out\n> and should be changed --- why is it that the flags should change\n> *after* the lock change in both directions? But we'd have to take\n> a look at the usage of these flags to understand what's going on\n> exactly.\n\nI can't see a need to do it after the HASH_REMOVE at least - as we don't\nreturn early if that fails, there's no danger getting out of sync if we\nreverse the order. I think the comment could just be changed to say\nthat the function has to be called after it is inevitable that the lock\nis acquired/released.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 May 2021 18:27:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is element access after HASH_REMOVE ever OK?" } ]
[ { "msg_contents": "Hi,\n\nIn https://www.postgresql.org/message-id/20210505010835.umylslxgq4a6rbwg%40alap3.anarazel.de\nI commented that we have a number of hacky workarounds to deal with the fact\nthat walreceiver writes partial WAL pages into reycled segments.\n\nThe problem with that practice is that within a page we cannot reliably detect\ninvalid record headers. This is especially true, when the record header spans\nacross a page boundary - currently the only check in that case is if the\nrecord length is smaller than 1GB, and even that is just checked in the\nfrontend. Note that we cannot rely on the CRC checksum here - it can only be\nvalidated once the whole record has been read.\n\nOn a primary we *do* pad partial pages, see AdvanceXLInsertBuffer():\n\t\t/*\n\t\t * Be sure to re-zero the buffer so that bytes beyond what we've\n\t\t * written will look like zeroes and not valid XLOG records...\n\t\t */\n\t\tMemSet((char *) NewPage, 0, XLOG_BLCKSZ);\n\nParticularly the logic in allocate_recordbuf() is scary: In a completely\nworking setup we'll regularly try to allocate large buffers that we'll never\nneed - and the record buffer is not freed until the startup process exits. And\nwe have no corresponding size check in the frontend (which doesn't make any\nsense to me). 
In the case of a record header across a page boundary, this\ncheck will pass in roughly 1/4 of the cases!\n\n\nAs an example of the difference this makes, I ran a primary/standby setup with\ncontinuously running regression tests, and had a psql \watch terminate\nwalsender every 1.5 s.\n\nDecoding failures without zero-padding:\n2021-05-10 16:52:51.448 PDT [2481446][1/0] LOG: record with incorrect prev-link 103FF/73 at 4/C154FD50\n2021-05-10 16:52:53.001 PDT [2481446][1/0] LOG: record with incorrect prev-link 0/FFFF at 4/C3531A88\n2021-05-10 16:52:57.848 PDT [2481446][1/0] LOG: invalid resource manager ID 32 at 4/C3B67AD8\n2021-05-10 16:52:58.773 PDT [2481446][1/0] LOG: record with incorrect prev-link 403FF/12 at 4/C47F35E8\n2021-05-10 16:53:03.771 PDT [2481446][1/0] LOG: invalid page at 4/C562E000\n2021-05-10 16:53:04.945 PDT [2481446][1/0] LOG: invalid record length at 4/C6E1C1E8: wanted 24, got 0\n2021-05-10 16:53:06.176 PDT [2481446][1/0] LOG: invalid page at 4/C7040000\n2021-05-10 16:53:07.624 PDT [2481446][1/0] LOG: record with incorrect prev-link 2FF/64 at 4/C7475078\n...\n\n\nWith zero-padding:\n2021-05-10 16:58:20.186 PDT [2489042][1/0] LOG: invalid record length at 5/7049A40: wanted 24, got 0\n2021-05-10 16:58:22.832 PDT [2489042][1/0] LOG: invalid record length at 5/801AD70: wanted 24, got 0\n2021-05-10 16:58:27.548 PDT [2489042][1/0] LOG: invalid record length at 5/8319908: wanted 24, got 0\n2021-05-10 16:58:30.945 PDT [2489042][1/0] LOG: invalid record length at 5/AFDC770: wanted 24, got 0\n...\n2021-05-10 16:59:24.546 PDT [2489042][1/0] LOG: invalid page at 5/19284000\n\nThe \"invalid page\" cases are a lot rarer - previously we would hit them\nwhenever the record header itself passed [minimal] muster, even though it was\njust padding passing as e.g. a valid record length. Now it's only when the end\nof WAL actually is at the page boundary.\n\n\nOn 13+ we could do a bit better than the current approach, and use\npg_pwritev() to append the zeroed data.
However, I'm not convinced it is a\ngood idea - when pg_pwritev is emulated, we'd always do the zeroing as part of\na separate write, which does seem like it increases the likelihood of\nencountering such partially written pages a bit. But perhaps it's too\ninsignificant to matter.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 10 May 2021 17:55:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Zeroing partial pages in walreceiver" } ]
[ { "msg_contents": "Hi,\n\nThere are no interrupt checks in the WalReceiverMain() sub-loop for\nreceiving WAL. There's one above\n\n\t\t\t\t/* See if we can read data immediately */\n\t\t\t\tlen = walrcv_receive(wrconn, &buf, &wait_fd);\n\nbut none in the loop below:\n\t\t\t\t\t/*\n\t\t\t\t\t * Process the received data, and any subsequent data we\n\t\t\t\t\t * can read without blocking.\n\t\t\t\t\t */\n\t\t\t\t\tfor (;;)\n\nSimilarly, that inner loop doesn't send status updates or fsyncs, while\nthere's network data - but that matters a bit less, because we'll\nsend status updates upon request, and flush WAL at segment boundaries.\n\nThis may explain why a low-ish wal_sender_timeout /\nwal_receiver_status_interval combo still sees plenty timeouts.\n\nI suspect this is a lot easier to hit when the IO system on the standby\nis the bottleneck (with the kernel slowing us down inside the\npg_pwrite()), because that makes it easier to always have incoming\nnetwork data.\n\nIt's probably not a good idea to just remove that two-level loop - we\ndon't want to fsync at a much higher rate. But just putting an\nProcessWalRcvInterrupts() in the inner loop also seems unsatisfying, we\nshould respect wal_receiver_status_interval...\n\n\nI've a couple times gotten into a situation where I was shutting down\nthe primary while the standby was behind, and the system appeared to\njust lock up, with neither primary nor standby reacting to normal\nshutdown attempts.
This seems to happen more often with larger wal\nsegment size...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 May 2021 19:27:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "walreceiver that is behind doesn't quit, send replies" }, { "msg_contents": "Hi,\n\nOn 2021-05-10 19:27:55 -0700, Andres Freund wrote:\n> I've a couple times gotten into a situation where I was shutting down\n> the primary while the standby was behind, and the system appeared to\n> just lock up, with neither primary nor standby reacting to normal\n> shutdown attempts. This seems to happen more often with larger wal\n> segment size...\n\nAh - to reproduce it, you can put a pg_usleep(10000) or so above the\npg_pwrite() in XLogWalRcvMain(). That triggers it fairly reliably for\nme.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 May 2021 19:59:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walreceiver that is behind doesn't quit, send replies" } ]
[ { "msg_contents": "We apparently forgot in 86dc90056df to remove a note in 5.11.4.\nPartition Pruning saying this:\n\n    <note>\n     <para>\n      Execution-time partition pruning currently only occurs for the\n      <literal>Append</literal> and <literal>MergeAppend</literal> node types.\n      It is not yet implemented for the <literal>ModifyTable</literal> node\n      type, but that is likely to be changed in a future release of\n      <productname>PostgreSQL</productname>.\n     </para>\n    </note>\n\nNow because we include the Append/MergeAppend node even if the query\nis update/delete, this note is no longer useful or valid.\n\nAttached patch removes that note.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 11 May 2021 12:35:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "update/delete and execution-time partition pruning" }, { "msg_contents": "On Tue, 11 May 2021 at 15:36, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> We apparently forgot in 86dc90056df to remove a note in 5.11.4.\n> Partition Pruning saying this:\n>\n>     <note>\n>      <para>\n>       Execution-time partition pruning currently only occurs for the\n>       <literal>Append</literal> and <literal>MergeAppend</literal> node types.\n>       It is not yet implemented for the <literal>ModifyTable</literal> node\n>       type, but that is likely to be changed in a future release of\n>       <productname>PostgreSQL</productname>.\n>      </para>\n>     </note>\n>\n> Now because we include the Append/MergeAppend node even if the query\n> is update/delete, this note is no longer useful or valid.\n>\n> Attached patch removes that note.\n\nYeah. Agreed.
That's out of date now.\n\nI'll take care of this.\n\nDavid\n\n\n", "msg_date": "Tue, 11 May 2021 15:46:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update/delete and execution-time partition pruning" }, { "msg_contents": "On Tue, 11 May 2021 at 15:46, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll take care of this.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Tue, 11 May 2021 15:56:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update/delete and execution-time partition pruning" }, { "msg_contents": "On Tue, May 11, 2021 at 12:57 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 11 May 2021 at 15:46, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'll take care of this.\n>\n> Pushed.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 14:11:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: update/delete and execution-time partition pruning" } ]
[ { "msg_contents": "Hi,\n\nI was going through the parallel vacuum docs and code. I found below\nthings, please someone clarify:\n\n1) I see that a term \"parallel degree\" is used in the docs, code\ncomments, error messages \"parallel vacuum degree must be a\nnon-negative integer\", \"parallel vacuum degree must be between 0 and\n%d\". Is there any specific reason to use the term \"parallel degree\"?\nIn the docs and code comments we generally use \"parallel workers\".\n2) The error messages \"parallel vacuum degree must be between 0 and\n%d\" and \"parallel option requires a value between 0 and %d\" look\ninconsistent.\n3) Should the Assert(nindexes > 0); in begin_parallel_vacuum just be\nAssert(nindexes > 1); as this function is entered only when indexes\nare > 1?\n4) IIUC, below comment says that even if PARALLEL 0 is specified with\nVACUUM command, there are chances that the indexes are vacuumed in\nparallel. Isn't it a bit unusual that a user specified 0 workers but\nstill the system is picking up parallelism? I'm sure this would have\nbeen discussed, but I'm curious to know the reason.\n * nrequested is the number of parallel workers that user requested. If\n * nrequested is 0, we compute the parallel degree based on nindexes, that is\n * the number of indexes that support parallel vacuum.\n5) Can the parallel_workers in below condition ever be negative in\nbegin_parallel_vacuum?
I think we can just have if (parallel_workers\n== 0).\n    /* Can't perform vacuum in parallel */\n    if (parallel_workers <= 0)\n6) I think, instead of saying \"using integer background workers\", we\ncan just say \"using specified or lesser number of background workers\".\n From the docs: Perform index vacuum and index cleanup phases of VACUUM\nin parallel using integer background workers\nWe can say \"workers specified will be used during execution\"\n From the docs: workers specified in integer will be used during execution\n7) I think we need a comma after \"if any\" .\n From the docs: which is limited by the number of workers specified\nwith PARALLEL option if any which is further limited by\n8) Is it still true that if parallel workers are specified as 0 the\nparallelism will not be picked up?\n From the docs: This feature is known as parallel vacuum. To disable\nthis feature, one can use PARALLEL option and specify parallel workers\nas zero.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 17:37:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Tue, May 11, 2021 at 05:37:50PM +0530, Bharath Rupireddy wrote:\n> 3) Should the Assert(nindexes > 0); in begin_parallel_vacuum just be\n> Assert(nindexes > 1); as this function is entered only when indexes\n> are > 1?\n\nI think you're right, at least with the current implementation that\nparallelization is done across indexes. Same in parallel_vacuum_main.\n\n> 4) IIUC, below comment says that even if PARALLEL 0 is specified with\n> VACUUM command, there are chances that the indexes are vacuumed in\n> parallel. Isn't it a bit unusual that a user specified 0 workers but\n> still the system is picking up parallelism?
I'm sure this would have\n> been discussed, but I'm curious to know the reason.\n> * nrequested is the number of parallel workers that user requested. If\n> * nrequested is 0, we compute the parallel degree based on nindexes, that is\n> * the number of indexes that support parallel vacuum.\n\nNo - nrequested is not actually the number of workers requested - it seems like\na poor choice of name.\n\nThis is the key part:\n\nsrc/include/commands/vacuum.h\n * The number of parallel vacuum workers. 0 by default which means choose\n * based on the number of indexes. -1 indicates parallel vacuum is\n * disabled.\n */\n int nworkers;\n} VacuumParams;\n\nThe parsing code is in src/backend/commands/vacuum.c.\n\n> 8) Is it still true that if parallel workers are specified as 0 the\n> parallelism will not be picked up?\n> From the docs: This feature is known as parallel vacuum. To disable\n> this feature, one can use PARALLEL option and specify parallel workers\n> as zero.\n\nI think it's the same answer as above.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 11 May 2021 08:01:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Tue, May 11, 2021 at 6:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, May 11, 2021 at 05:37:50PM +0530, Bharath Rupireddy wrote:\n> > 3) Should the Assert(nindexes > 0); in begin_parallel_vacuum just be\n> > Assert(nindexes > 1); as this function is entered only when indexes\n> > are > 1?\n>\n> I think you're right, at least with the current implementation that\n> parallelization is done across indexes. Same in parallel_vacuum_main.\n>\n\nYeah, as code stands both of you are right. 
However, it can be helpful\nto test parallelism even with one index say if we implement something\nlike force_parallel_mode = regress or parallel_leader_participation =\noff.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 May 2021 09:47:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Tue, May 11, 2021 at 5:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I was going through the parallel vacuum docs and code. I found below\n> things, please someone clarify:\n>\n> 1) I see that a term \"parallel degree\" is used in the docs, code\n> comments, error messages \"parallel vacuum degree must be a\n> non-negative integer\", \"parallel vacuum degree must be between 0 and\n> %d\". Is there any specific reason to use the term \"parallel degree\"?\n> In the docs and code comments we generally use \"parallel workers\".\n\nI think using \"parallel workers\" will be more consistent.\n\n> 2) The error messages \"parallel vacuum degree must be between 0 and\n> %d\" and \"parallel option requires a value between 0 and %d\" look\n> inconsistent.\n\n+1\n\n> 3) Should the Assert(nindexes > 0); in begin_parallel_vacuum just be\n> Assert(nindexes > 1); as this function is entered only when indexes\n> are > 1?\n> 4) IIUC, below comment says that even if PARALLEL 0 is specified with\n> VACUUM command, there are chances that the indexes are vacuumed in\n> parallel. Isn't it a bit unusual that a user specified 0 workers but\n> still the system is picking up parallelism? I'm sure this would have\n> been discussed, but I'm curious to know the reason.\n> * nrequested is the number of parallel workers that user requested. 
If\n> * nrequested is 0, we compute the parallel degree based on nindexes, that is\n> * the number of indexes that support parallel vacuum.\n> 5) Can the parallel_workers in below condition ever be negative in\n> begin_parallel_vacuum? I think we can just have if (parallel_workers\n> == 0).\n> /* Can't perform vacuum in parallel */\n> if (parallel_workers <= 0)\n\nYes it should if (parallel_workers == 0)\n\n> 8) Is it still true that if parallel workers are specified as 0 the\n> parallelism will not be picked up?\n> From the docs: This feature is known as parallel vacuum. To disable\n> this feature, one can use PARALLEL option and specify parallel workers\n> as zero.\n\nYes, by default this is enabled so for disabling user need to give\nPARALLEL as 0.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 10:56:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Tue, May 11, 2021 at 5:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I was going through the parallel vacuum docs and code. I found below\n> things, please someone clarify:\n>\n> 1) I see that a term \"parallel degree\" is used in the docs, code\n> comments, error messages \"parallel vacuum degree must be a\n> non-negative integer\", \"parallel vacuum degree must be between 0 and\n> %d\". 
Is there any specific reason to use the term \"parallel degree\"?\n> In the docs and code comments we generally use \"parallel workers\".\n>\n\nThe parallel degree term is used here to indicate that we compute how\nmuch parallelism we can achieve based on the indexes.\n\n> 2) The error messages \"parallel vacuum degree must be between 0 and\n> %d\" and \"parallel option requires a value between 0 and %d\" look\n> inconsistent.\n>\n\nI think we can make them consistent.\n\n> 5) Can the parallel_workers in below condition ever be negative in\n> begin_parallel_vacuum? I think we can just have if (parallel_workers\n> == 0).\n> /* Can't perform vacuum in parallel */\n> if (parallel_workers <= 0)\n\nEven if it can't go negative in the current code, I don't see a\nproblem with the current code. It seems safe like this.\n\n> 6) I think, instead of saying \"using integer background workers\", we\n> can just say \"using specified or lesser number of background workers\".\n> From the docs: Perform index vacuum and index cleanup phases of VACUUM\n> in parallel using integer background workers\n> We can say \"workers specified will be used during execution\"\n> From the docs: workers specified in integer will be used during execution\n>\n\nThe docs here refer to \"PARALLEL integer\" specified in specs, so not\nsure if the proposed text is better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 May 2021 11:10:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Tue, May 11, 2021 at 6:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > 4) IIUC, below comment says that even if PARALLEL 0 is specified with\n> > VACUUM command, there are chances that the indexes are vacuumed in\n> > parallel. Isn't it a bit unusual that a user specified 0 workers but\n> > still the system is picking up parallelism? 
I'm sure this would have\n> > been discussed, but I'm curious to know the reason.\n> > * nrequested is the number of parallel workers that user requested. If\n> > * nrequested is 0, we compute the parallel degree based on nindexes, that is\n> > * the number of indexes that support parallel vacuum.\n>\n> No - nrequested is not actually the number of workers requested - it seems like\n> a poor choice of name.\n>\n> This is the key part:\n>\n> src/include/commands/vacuum.h\n> * The number of parallel vacuum workers. 0 by default which means choose\n> * based on the number of indexes. -1 indicates parallel vacuum is\n> * disabled.\n> */\n> int nworkers;\n> } VacuumParams;\n\nThanks. The name \"nworkers\" looks fine to me after reading the comment\nabove it. And the parallelism will be chosen by default.\n /* By default parallel vacuum is enabled */\n params.nworkers = 0;\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 18:28:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Wed, May 12, 2021 at 9:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 6:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 05:37:50PM +0530, Bharath Rupireddy wrote:\n> > > 3) Should the Assert(nindexes > 0); in begin_parallel_vacuum just be\n> > > Assert(nindexes > 1); as this function is entered only when indexes\n> > > are > 1?\n> >\n> > I think you're right, at least with the current implementation that\n> > parallelization is done across indexes. Same in parallel_vacuum_main.\n> >\n>\n> Yeah, as code stands both of you are right. 
However, it can be helpful\n> to test parallelism even with one index say if we implement something\n> like force_parallel_mode = regress or parallel_leader_participation =\n> off.\n\nI see that currently we don't have it yet. Is it worth implementing\nthem? Something like 1) when force_parallel_mode = regress, spawn one\nparallel worker, send the relation information to it, so that it\nperforms vacuuming both the relation and it's indexes. 2)\nparallel_leader_participation = off, spawn workers as specified, but\ndon't let the leader to vacuum index, so that any worker can pick it\nup. I'm not sure of the complexity though.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 18:30:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Wed, May 12, 2021 at 11:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 5:38 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I was going through the parallel vacuum docs and code. I found below\n> > things, please someone clarify:\n> >\n> > 1) I see that a term \"parallel degree\" is used in the docs, code\n> > comments, error messages \"parallel vacuum degree must be a\n> > non-negative integer\", \"parallel vacuum degree must be between 0 and\n> > %d\". Is there any specific reason to use the term \"parallel degree\"?\n> > In the docs and code comments we generally use \"parallel workers\".\n> >\n>\n> The parallel degree term is used here to indicate that we compute how\n> much parallelism we can achieve based on the indexes.\n\nYeah, I get it. Even if users don't specify a parallel option there\nare chances that parallelism is picked. So the parallel degree is the\nfinal number of workers that are chosen by the server for vacuuming\nindexes. 
And, I think that parallel degree is something internal to\nthe server, and it's better we replace it in the vacuumdb.sgml, change\nPARALLEL_DEGREE to PARALLEL_WORKERS in vacuumdb.c and change the error\nmessage \"parallel vacuum degree must be a non-negative integer\" to\n\"parallel workers for vacuum must be greater than or equal to zero\".\n\nThoughts?\n\n> > 2) The error messages \"parallel vacuum degree must be between 0 and\n> > %d\" and \"parallel option requires a value between 0 and %d\" look\n> > inconsistent.\n> >\n>\n> I think we can make them consistent.\n\nHow about only one message \"parallel option requires a value between 0\nand %d\" for both cases below? IMO they essentially mean the same\nthing.\n\npostgres=# vacuum (parallel ) t1;\nERROR: parallel option requires a value between 0 and 1024\npostgres=# vacuum (parallel -4) t1;\nERROR: parallel vacuum degree must be between 0 and 1024\n\n> > 5) Can the parallel_workers in below condition ever be negative in\n> > begin_parallel_vacuum? I think we can just have if (parallel_workers\n> > == 0).\n> > /* Can't perform vacuum in parallel */\n> > if (parallel_workers <= 0)\n>\n> Even if it can't go negative in the current code, I don't see a\n> problem with the current code. 
It seems safe like this.\n\nOkay.\n\n> > 6) I think, instead of saying \"using integer background workers\", we\n> > can just say \"using specified or lesser number of background workers\".\n> > From the docs: Perform index vacuum and index cleanup phases of VACUUM\n> > in parallel using integer background workers\n> > We can say \"workers specified will be used during execution\"\n> > From the docs: workers specified in integer will be used during execution\n> >\n> The docs here refer to \"PARALLEL integer\" specified in specs, so not\n> sure if the proposed text is better.\n\nIMO, \"using the number of background workers specified with the\noption\" looks better than \"using integer background workers\".\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 18:37:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Wed, May 12, 2021 at 6:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 11:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 5:38 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > I was going through the parallel vacuum docs and code. I found below\n> > > things, please someone clarify:\n> > >\n> > > 1) I see that a term \"parallel degree\" is used in the docs, code\n> > > comments, error messages \"parallel vacuum degree must be a\n> > > non-negative integer\", \"parallel vacuum degree must be between 0 and\n> > > %d\". 
Is there any specific reason to use the term \"parallel degree\"?\n> > > In the docs and code comments we generally use \"parallel workers\".\n> > >\n> >\n> > The parallel degree term is used here to indicate that we compute how\n> > much parallelism we can achieve based on the indexes.\n>\n> Yeah, I get it. Even if users don't specify a parallel option there\n> are chances that parallelism is picked. So the parallel degree is the\n> final number of workers that are chosen by the server for vacuuming\n> indexes. And, I think that parallel degree is something internal to\n> the server, and it's better we replace it in the vacuumdb.sgml, change\n> PARALLEL_DEGREE to PARALLEL_WORKERS in vacuumdb.c and change the error\n> message \"parallel vacuum degree must be a non-negative integer\" to\n> \"parallel workers for vacuum must be greater than or equal to zero\".\n>\n> Thoughts?\n>\n> > > 2) The error messages \"parallel vacuum degree must be between 0 and\n> > > %d\" and \"parallel option requires a value between 0 and %d\" look\n> > > inconsistent.\n> > >\n> >\n> > I think we can make them consistent.\n>\n> How about only one message \"parallel option requires a value between 0\n> and %d\" for both cases below? IMO they essentially mean the same\n> thing.\n>\n\nI am fine with changing what you are proposing in the above two\npoints. 
Sawada-San, any thoughts?\n\n>\n> > > 6) I think, instead of saying \"using integer background workers\", we\n> > > can just say \"using specified or lesser number of background workers\".\n> > > From the docs: Perform index vacuum and index cleanup phases of VACUUM\n> > > in parallel using integer background workers\n> > > We can say \"workers specified will be used during execution\"\n> > > From the docs: workers specified in integer will be used during execution\n> > >\n> > The docs here refer to \"PARALLEL integer\" specified in specs, so not\n> > sure if the proposed text is better.\n>\n> IMO, \"using the number of background workers specified with the\n> option\" looks better than \"using integer background workers\".\n> Thoughts?\n>\n\nI am not too sure about this point. I guess we can leave it for now.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 May 2021 11:55:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Wed, May 12, 2021 at 6:30 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 9:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 11, 2021 at 6:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Tue, May 11, 2021 at 05:37:50PM +0530, Bharath Rupireddy wrote:\n> > > > 3) Should the Assert(nindexes > 0); in begin_parallel_vacuum just be\n> > > > Assert(nindexes > 1); as this function is entered only when indexes\n> > > > are > 1?\n> > >\n> > > I think you're right, at least with the current implementation that\n> > > parallelization is done across indexes. Same in parallel_vacuum_main.\n> > >\n> >\n> > Yeah, as code stands both of you are right. 
However, it can be helpful\n> > to test parallelism even with one index say if we implement something\n> > like force_parallel_mode = regress or parallel_leader_participation =\n> > off.\n>\n> I see that currently we don't have it yet. Is it worth implementing\n> them? Something like 1) when force_parallel_mode = regress, spawn one\n> parallel worker, send the relation information to it, so that it\n> performs vacuuming both the relation and it's indexes. 2)\n> parallel_leader_participation = off, spawn workers as specified, but\n> don't let the leader to vacuum index, so that any worker can pick it\n> up. I'm not sure of the complexity though.\n>\n\nWe had some patch on the above lines in the original development\nthread which has been used for testing during development but not sure\nhow much useful it is now. However, I am fine if others think\nsomething like that is useful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 May 2021 11:58:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Thu, May 13, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 6:37 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, May 12, 2021 at 11:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, May 11, 2021 at 5:38 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > I was going through the parallel vacuum docs and code. I found below\n> > > > things, please someone clarify:\n> > > >\n> > > > 1) I see that a term \"parallel degree\" is used in the docs, code\n> > > > comments, error messages \"parallel vacuum degree must be a\n> > > > non-negative integer\", \"parallel vacuum degree must be between 0 and\n> > > > %d\". 
Is there any specific reason to use the term \"parallel degree\"?\n> > > > In the docs and code comments we generally use \"parallel workers\".\n> > > >\n> > >\n> > > The parallel degree term is used here to indicate that we compute how\n> > > much parallelism we can achieve based on the indexes.\n> >\n> > Yeah, I get it. Even if users don't specify a parallel option there\n> > are chances that parallelism is picked. So the parallel degree is the\n> > final number of workers that are chosen by the server for vacuuming\n> > indexes. And, I think that parallel degree is something internal to\n> > the server, and it's better we replace it in the vacuumdb.sgml, change\n> > PARALLEL_DEGREE to PARALLEL_WORKERS in vacuumdb.c and change the error\n> > message \"parallel vacuum degree must be a non-negative integer\" to\n> > \"parallel workers for vacuum must be greater than or equal to zero\".\n> >\n> > Thoughts?\n\nI'm fine with this change.\n\n> >\n> > > > 2) The error messages \"parallel vacuum degree must be between 0 and\n> > > > %d\" and \"parallel option requires a value between 0 and %d\" look\n> > > > inconsistent.\n> > > >\n> > >\n> > > I think we can make them consistent.\n> >\n> > How about only one message \"parallel option requires a value between 0\n> > and %d\" for both cases below? IMO they essentially mean the same\n> > thing.\n\nThe change looks good to me in terms of consistency but even the\ncurrent messages also make sense and are slightly clearer to me aside\nfrom using the term \"degree\". If the user lacks an integer after\nPARALLEL option, we say \"parallel option requires a value between 0\nand %d\" and if the user specifies an invalid number to the option we\nsay \"parallel vacuum degree must be between 0 and %d”. We use the\nmessage something like “AAA must be between X and Y” also in other\nplaces if users input an invalid value. 
I'm not sure the consistency\nis important here but another idea to improve the error message would\nbe to change \"parallel vacuum degree must be between 0 and %d” to \"the\nnumber of parallel workers must be between 0 and %d” (or using\n“parallel workers for vacuum” instead of “the number of parallel\nworkers”) while leaving another message as it is.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 13 May 2021 22:29:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Thu, May 13, 2021 at 7:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > Yeah, I get it. Even if users don't specify a parallel option there\n> > > are chances that parallelism is picked. So the parallel degree is the\n> > > final number of workers that are chosen by the server for vacuuming\n> > > indexes. And, I think that parallel degree is something internal to\n> > > the server, and it's better we replace it in the vacuumdb.sgml, change\n> > > PARALLEL_DEGREE to PARALLEL_WORKERS in vacuumdb.c and change the error\n> > > message \"parallel vacuum degree must be a non-negative integer\" to\n> > > \"parallel workers for vacuum must be greater than or equal to zero\".\n> > >\n> > > Thoughts?\n>\n> I'm fine with this change.\n\nThanks.\n\n> is important here but another idea to improve the error message would\n> be to change \"parallel vacuum degree must be between 0 and %d” to \"the\n> number of parallel workers must be between 0 and %d” (or using\n> “parallel workers for vacuum” instead of “the number of parallel\n> workers”) while leaving another message as it is.\n\nDone that way.\n\nPSA patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 13 May 2021 21:00:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", 
"msg_from_op": true, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Thu, May 13, 2021 at 9:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Done that way.\n>\n> PSA patch.\n\nYour changes look good. About changing the \"non-negative integer\" to\n\"greater than or equal to zero\", there is another thread [1], I am not\nsure that have we concluded anything there yet.\n\n- pg_log_error(\"parallel vacuum degree must be a non-negative integer\");\n+ pg_log_error(\"parallel workers for vacuum must be greater than or\nequal to zero\");\n exit(1);\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 10:43:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Fri, May 14, 2021 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 9:00 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Done that way.\n> >\n> > PSA patch.\n>\n> Your changes look good. About changing the \"non-negative integer\" to\n> \"greater than or equal to zero\", there is another thread [1], I am not\n> sure that have we concluded anything there yet.\n>\n> - pg_log_error(\"parallel vacuum degree must be a non-negative integer\");\n> + pg_log_error(\"parallel workers for vacuum must be greater than or\n> equal to zero\");\n> exit(1);\n>\n> [1] https://www.postgresql.org/message-id/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nYeah. Tom proposed if (foo <= 0) { error:\"foo must be greater than\nzero\" } at [1]. In the subsequent messages both Michael and I agreed\nwith that. 
But we also have cases like if (foo < 0) for which I think\n{ error:\"foo must be greater than or equal to zero\" } would be better,\nsimilar to what's proposed. Please feel free to provide your thoughts\nthere in that thread.\n\n[1] - https://www.postgresql.org/message-id/621822.1620655780%40sss.pgh.pa.us\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 18:00:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Fri, May 14, 2021 at 6:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 10:43 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> >\n> > Your changes look good. About changing the \"non-negative integer\" to\n> > \"greater than or equal to zero\", there is another thread [1], I am not\n> > sure that have we concluded anything there yet.\n> >\n> > - pg_log_error(\"parallel vacuum degree must be a non-negative integer\");\n> > + pg_log_error(\"parallel workers for vacuum must be greater than or\n> > equal to zero\");\n> > exit(1);\n> >\n> > [1] https://www.postgresql.org/message-id/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com\n>\n> Yeah. Tom proposed if (foo <= 0) { error:\"foo must be greater than\n> zero\" } at [1]. In the subsequent messages both Michael and I agreed\n> with that. But we also have cases like if (foo < 0) for which I think\n> { error:\"foo must be greater than or equal to zero\" } would be better,\n> similar to what's proposed. Please feel free to provide your thoughts\n> there in that thread.\n>\n\nI responded on that thread and it seems there is no object to the new\nmessage. 
I have a minor comment on your patch:\n\n- printf(_(\" -P, --parallel=PARALLEL_DEGREE use this many background\nworkers for vacuum, if available\\n\"));\n+ printf(_(\" -P, --parallel=PARALLEL_WORKERS use this many background\nworkers for vacuum, if available\\n\"));\n\nIf the patch changes the vacuumdb code as above then isn't it better\nto change the vacuumdb docs to reflect the same. See below part of\nvacuumdb docs:\n-P parallel_degree\n--parallel=parallel_degree\n\nAlso, can you please check if your patch works for PG-13 as well\nbecause I think it is better to backpatch it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 May 2021 11:10:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Fri, May 21, 2021 at 11:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I responded on that thread and it seems there is no object to the new\n> message. I have a minor comment on your patch:\n\nThanks Amit!\n\n> - printf(_(\" -P, --parallel=PARALLEL_DEGREE use this many background\n> workers for vacuum, if available\\n\"));\n> + printf(_(\" -P, --parallel=PARALLEL_WORKERS use this many background\n> workers for vacuum, if available\\n\"));\n>\n> If the patch changes the vacuumdb code as above then isn't it better\n> to change the vacuumdb docs to reflect the same. See below part of\n> vacuumdb docs:\n> -P parallel_degree\n> --parallel=parallel_degree\n\nChanged.\n\n> Also, can you please check if your patch works for PG-13 as well\n> because I think it is better to backpatch it?\n\nI'm not sure about backpatching as it is not a critical bug fix. Since\nthe changes are user visible, I think that it's okay to backpatch.\n\nAnyways, attaching patches for both master and v13 branch. 
Please\nreview it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 May 2021 13:30:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Fri, May 21, 2021 at 1:30 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 21, 2021 at 11:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > If the patch changes the vacuumdb code as above then isn't it better\n> > to change the vacuumdb docs to reflect the same. See below part of\n> > vacuumdb docs:\n> > -P parallel_degree\n> > --parallel=parallel_degree\n>\n> Changed.\n>\n> > Also, can you please check if your patch works for PG-13 as well\n> > because I think it is better to backpatch it?\n>\n> I'm not sure about backpatching as it is not a critical bug fix. Since\n> the changes are user visible, I think that it's okay to backpatch.\n>\n\nYes, as it is a user-visible change (though minor) so I thought it\nwould be good to backpatch this. Does anyone else have any opinion on\nthis?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 May 2021 15:33:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" }, { "msg_contents": "On Fri, May 21, 2021 at 3:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 21, 2021 at 1:30 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, May 21, 2021 at 11:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > If the patch changes the vacuumdb code as above then isn't it better\n> > > to change the vacuumdb docs to reflect the same. 
See below part of\n> > > vacuumdb docs:\n> > > -P parallel_degree\n> > > --parallel=parallel_degree\n> >\n> > Changed.\n> >\n> > > Also, can you please check if your patch works for PG-13 as well\n> > > because I think it is better to backpatch it?\n> >\n> > I'm not sure about backpatching as it is not a critical bug fix. Since\n> > the changes are user visible, I think that it's okay to backpatch.\n> >\n>\n> Yes, as it is a user-visible change (though minor) so I thought it\n> would be good to backpatch this. Does anyone else have any opinion on\n> this?\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 May 2021 14:59:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel vacuum - few questions on docs, comments and code" } ]
[ { "msg_contents": "In some recent threads I complained about how CLOBBER_CACHE_ALWAYS\ntest runs have gotten markedly slower over the past couple of release\ncycles [1][2][3]. It'd be impossibly time-consuming to investigate the\ncauses by repeating the whole test corpus, but I've had some success in\nbisecting while measuring the runtime of just a single test script.\nIn this report I'm looking at src/pl/plpgsql/src/sql/plpgsql_control.sql,\nwhich is a useful candidate because it hasn't changed at all since v11.\nDespite that, hyrax's latest runs show these runtimes:\n\nHEAD:\ntest plpgsql_control ... ok 56105 ms\nREL_13_STABLE:\ntest plpgsql_control ... ok 46879 ms\nREL_12_STABLE:\ntest plpgsql_control ... ok 30809 ms\n\nso we have clearly made CCA runs a whole lot worse since v12.\n(Non-CCA buildfarm members show runtimes that are about the same\nacross all three branches.)\n\nI've reproduced (some of) these results on my shiny new M1 mini,\nwhich is a tad faster than hyrax's host: it can do the test on HEAD\n(049e1e2ed) in 15.413s. (Note: this, and the numbers following, are\nmedian-of-3-runs; the run variance is enough that I wouldn't trust\nthem to less than a tenth of a second.) The run time at 615cebc94\n(v12 branchoff point) is 11.861s. Bisecting, I found that there were\nthree commits that accounted for almost all of the slowdown since v12:\n\n0d861bbb7 Add deduplication to nbtree\n11.836s -> 12.339s\n(that's runtime on the preceding commit -> runtime on this commit)\n\n8f59f6b9c(+fbc7a7160) Improve performance of \"simple expressions\" in PL/pgSQL\n12.334s -> 14.158s\n\n911e70207 Implement operator class parameters\n14.263s -> 15.415s\n\nOne thing that confuses me, though, is that all of these are v13-era\ncommits (they all went into the tree during March 2020). I don't\nsee any real difference in the runtime from the v13 branchoff point\nto now, which doesn't square with hyrax's results. 
Could there be\nthat much inter-platform variation in the overhead of CCA? It might\nbe useful for somebody with patience and a fast Intel machine to try\nto replicate these results.\n\nAnyway, it seems like these three features deserve some study as to\nwhy they caused so much slowdown under CCA. It's not so surprising\nthat 8f59f6b9c would have an effect on a test of plpgsql control\nlogic, but I find it surprising and rather disturbing that either\nof the others would.\n\nBTW, I was also tracking initdb runtime under CCA while I did this,\nand while I didn't formally bisect on that basis, I did notice that\n911e70207 had quite a negative impact on that too: 180s -> 195s.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/242172.1620494497%40sss.pgh.pa.us#eab25bb83bdcdd0f58b2d712b4971fcd\n[2] https://www.postgresql.org/message-id/flat/292305.1620503097%40sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/flat/575884.1620626638%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 11 May 2021 12:03:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 12:03:33 -0400, Tom Lane wrote:\n> In some recent threads I complained about how CLOBBER_CACHE_ALWAYS\n> test runs have gotten markedly slower over the past couple of release\n> cycles [1][2][3].\n\nI wonder if the best way to attack this in a more fundamental manner would be\nto handle nested invalidations different than we do today. Not just for\nCCA/CCR performance, but also to make invalidations easier to understand in\ngeneral.\n\nRight now, for CCA, we'll often invalidate all the caches dozens of times for\na single syscache lookup. Often rebuilding a lot of the entries over and over\nagain, even when they are not accessed during the lookup (because relcache\neagerly rebuilds cache entries). Of course that's terribly expensive. 
It's\nsomething like O(lookups * cache accesses during lookup * total cache entries)\nI think?\n\nIMO the problem largely stems from eagerly rebuilding *all* relcache entries\nduring invalidation processing. Something triggers\nInvalidateSystemCaches(). That in turn triggers RelationBuildDesc() for all\nrelations, which triggers a lot of syscache lookups, which trigger a lot of\nrelcache lookups, ... And that's just during the\nInvalidateSystemCaches(). Most subsequent syscache lookup will be a cache miss\ntoo (unless accessed during the relcache rebuilds) - each syscache miss will\ntrigger a few system relations to be locked, triggering separate\nInvalidateSystemCaches().\n\nIf we split cache invalidation into separate invalidation and cache-rebuild\nphases, we'd likely be a lot better off, I think, by being able to avoid the\nrepeated rebuilds of cache entries that are never accessed during\ninvalidation.\n\nI'd prototyped a relcache version of this in\nhttps://postgr.es/m/20180829083730.n645apqhb2gyih3g%40alap3.anarazel.de but it\nseems like it might be possible to generalize?\n\n\n> so we have clearly made CCA runs a whole lot worse since v12.\n> (Non-CCA buildfarm members show runtimes that are about the same\n> across all three branches.)\n>\n> I've reproduced (some of) these results on my shiny new M1 mini,\n> which is a tad faster than hyrax's host: it can do the test on HEAD\n> (049e1e2ed) in 15.413s. (Note: this, and the numbers following, are\n> median-of-3-runs; the run variance is enough that I wouldn't trust\n> them to less than a tenth of a second.) The run time at 615cebc94\n> (v12 branchoff point) is 11.861s. Bisecting, I found that there were\n> three commits that accounted for almost all of the slowdown since v12:\n>\n> 0d861bbb7 Add deduplication to nbtree\n> 11.836s -> 12.339s\n> (that's runtime on the preceding commit -> runtime on this commit)\n\nHm. 
The most likely explanation seems to be that this shows that index\naccesses without using deduplication are slightly more expensive due to the\nchange? System tables never use the duplication stuff (cf\n_bt_allequalimage())...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 16:07:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-11 12:03:33 -0400, Tom Lane wrote:\n>> In some recent threads I complained about how CLOBBER_CACHE_ALWAYS\n>> test runs have gotten markedly slower over the past couple of release\n>> cycles [1][2][3].\n\n> I wonder if the best way to attack this in a more fundamental manner would be\n> to handle nested invalidations different than we do today. Not just for\n> CCA/CCR performance, but also to make invalidations easier to understand in\n> general.\n\nI spent some time thinking along those lines too, but desisted after\nconcluding that that would fundamentally break the point of CCA\ntesting, namely to be sure we survive when a cache flush occurs at\n$any-random-point. Sure, in practice it will not be the case that\na flush occurs at EVERY random point. But I think if you try to\noptimize away a rebuild at point B on the grounds that you just\ndid one at point A, you will fail to cover the scenario where flush\nrequests arrive at exactly points A and B.\n\n> IMO the problem largely stems from eagerly rebuilding *all* relcache entries\n> during invalidation processing.\n\nUh, we don't do that; only for relations that are pinned, which we\nknow are being used.\n\nWhat it looked like to me, in an admittedly cursory bit of perf\ntesting, was that most of the cycles were going into fetching\ncache entries from catalogs over and over. 
But it's hard to avoid\nthat.\n\nI did wonder for a bit about doing something like moving cache\nentries to another physical place rather than dropping them.\nI don't really like that either though, because then the behavior\nthat CCA is testing really has not got that much at all to do\nwith real system behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 19:30:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 19:30:48 -0400, Tom Lane wrote:\n> > IMO the problem largely stems from eagerly rebuilding *all* relcache entries\n> > during invalidation processing.\n>\n> Uh, we don't do that; only for relations that are pinned, which we\n> know are being used.\n\nSorry, all surviving relcache entries - but that's typically quite a\nfew.\n\n\n> I spent some time thinking along those lines too, but desisted after\n> concluding that that would fundamentally break the point of CCA\n> testing, namely to be sure we survive when a cache flush occurs at\n> $any-random-point.\n\nWhy would rebuilding non-accessed relcache entries over and over help\nwith that? 
I am not proposing that we do not mark all cache entries are\ninvalid, or that we do not rebuild tables that aren't accessed.\n\nDuring an extremely trivial query from a user defined table ('blarg'),\nhere's top 10 RelationBuildDesc() calls:\n 344 rebuild pg_attrdef\n 274 rebuild pg_opclass\n 274 rebuild pg_amproc\n 260 rebuild pg_index\n 243 rebuild pg_am\n 236 rebuild pg_attrdef_adrelid_adnum_index\n 236 rebuild blarg\n 74 rebuild pg_namespace\n 52 rebuild pg_statistic\n 37 rebuild pg_tablespace\n134.420 ms\n\nHere's the same when joining two tables:\n 5828 rebuild pg_opclass\n 2897 rebuild pg_amop\n 2250 rebuild pg_cast\n 2086 rebuild pg_amproc\n 1465 rebuild pg_statistic\n 1274 rebuild pg_index\n 936 rebuild pg_attrdef\n 646 rebuild pg_operator\n 619 rebuild pg_am\n 518 rebuild pg_tablespace\n1414.886 ms\n\nthree tables:\n 16614 rebuild pg_opclass\n 7787 rebuild pg_amop\n 6750 rebuild pg_cast\n 5388 rebuild pg_amproc\n 5141 rebuild pg_statistic\n 3058 rebuild pg_index\n 1824 rebuild pg_operator\n 1374 rebuild pg_attrdef\n 1233 rebuild pg_am\n 1110 rebuild pg_tablespace\n3971.506 ms\n\nfour:\n 33328 rebuild pg_opclass\n 16020 rebuild pg_amop\n 14000 rebuild pg_statistic\n 13500 rebuild pg_cast\n 10876 rebuild pg_amproc\n 5792 rebuild pg_index\n 3950 rebuild pg_operator\n 2035 rebuild pg_am\n 1924 rebuild pg_tablespace\n 1746 rebuild pg_attrdef\n7927.172 ms\n\nThis omits all the work done as part of RelationReloadNailed(), but\nshows the problem quite clearly, I think?\n\nBasically, every additional accessed table in a transaction makes things\ndrastically slower.\n\nIn the four join case my four user defined tables were rebuilt a lot of\ntimes:\n 463 rebuild blarg\n 440 rebuild blarg2\n 293 rebuild blarg3\n 233 rebuild blarg4\ndespite obviously not being relevant for the cache invalidation\nprocessing itself.\n\n\nThe list of sytable scans in the four table case:\n 380278 systable_beginscan: pg_class, using index: 1\n 111539 systable_beginscan: pg_attribute, 
using index: 1\n 73544 systable_beginscan: pg_class, using index: 0\n 4134 systable_beginscan: pg_opclass, using index: 1\n 4099 systable_beginscan: pg_amproc, using index: 1\n 2791 systable_beginscan: pg_am, using index: 0\n 2061 systable_beginscan: pg_index, using index: 1\n 1429 systable_beginscan: pg_attrdef, using index: 1\n 345 systable_beginscan: pg_type, using index: 1\n 300 systable_beginscan: pg_cast, using index: 1\n 195 systable_beginscan: pg_statistic, using index: 1\n 191 systable_beginscan: pg_amop, using index: 1\n 103 systable_beginscan: pg_operator, using index: 1\n 52 systable_beginscan: pg_tablespace, using index: 1\n 33 systable_beginscan: pg_proc, using index: 1\n 27 systable_beginscan: pg_authid, using index: 1\n 20 systable_beginscan: pg_namespace, using index: 1\n 4 systable_beginscan: pg_statistic_ext, using index: 1\n\n581145 in total.\n\n\n> Sure, in practice it will not be the case that a flush occurs at EVERY\n> random point. But I think if you try to optimize away a rebuild at\n> point B on the grounds that you just did one at point A, you will fail\n> to cover the scenario where flush requests arrive at exactly points A\n> and B.\n\nI don't think we'd loose a lot of practical coverage if we avoided\nrebuilding non-accessed relcache entries eagerly during cache\nlookups. What coverage do we e.g. gain by having a single\nSearchCatCacheMiss() triggering rebuilding the relcache of a user\ndefined table several times?\n\nThe InvalidateSystemCaches() marks all catcache entries as invalid. The\nnext catcache lookup will thus trigger a cache miss. That cache miss\nwill typically at least open the previously not locked relation + index\nthe cache is over. Each of those relation opens will fire off another\nInvalidateSystemCaches(). 
Which will rebuild all the surviving relcache\nentries at least twice - despite never being accessed in that path.\n\n\n> What it looked like to me, in an admittedly cursory bit of perf\n> testing, was that most of the cycles were going into fetching\n> cache entries from catalogs over and over. But it's hard to avoid\n> that.\n\nSure - but that's only because we rebuild stuff over and over despite\nnot being accessed...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 19:02:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 19:02:00 -0700, Andres Freund wrote:\n> Why would rebuilding non-accessed relcache entries over and over help\n> with that? I am not proposing that we do not mark all cache entries are\n> invalid, or that we do not rebuild tables that aren't accessed.\n\nA slightly more concrete proposal:\n\nWe introduce a new list of pending relcache invalidations. When\nRelationCacheInvalidateEntry() or RelationCacheInvalidateEntry()\ninvalidate an entry, it gets put on that list (pretty much like the\nexisting rebuildList in RelationCacheInvalidate(), except longer lived).\n\nWhen an invalid relcache entry is accessed, it is obviously immediately\nrebuilt.\n\nNormally RelationCacheInvalidate() eagerly processes that list, as well\nas in ProcessInvalidationMessages(), ReceiveSharedInvalidMessages() etc.\n\nBut SearchCatCacheMiss() sets a flag that prevents the eager processing\nin RelationCacheInvalidate() - that avoids needing to rebuild relcache\nentries that aren't actually accessed as part of a cache miss\nrepeatedly.\n\nI think just avoiding the repeated relcache rebuilds in\nSearchCatCacheMiss() would reduce runtime significantly, even if\nSearchCatCacheMiss() at the end would process that list of relcache\ninvalidations.\n\nBut I think it might not even be needed to achieve good coverage? 
It\nmight be fine to defer processing of the pending list until the next\nRelationCacheInvalidate() triggered by a relation_open() outside of a\ncatcache miss (or obviously until it is accessed next)?\n\n\nI think this scheme wouldn't just improve CCI performance, but\nimportantly also normal invalidation processing. Right now we'll often\nre-build the same cache entry multiple times as part of a single\nReceiveSharedInvalidMessages() as it's pretty common that a relation is\nthe target of DDL in very close-by transactions.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 May 2021 19:32:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Hi,\n\nOn 2021-05-11 19:02:00 -0700, Andres Freund wrote:\n> I don't think we'd loose a lot of practical coverage if we avoided\n> rebuilding non-accessed relcache entries eagerly during cache\n> lookups. What coverage do we e.g. gain by having a single\n> SearchCatCacheMiss() triggering rebuilding the relcache of a user\n> defined table several times?\n>\n> The InvalidateSystemCaches() marks all catcache entries as invalid. The\n> next catcache lookup will thus trigger a cache miss. That cache miss\n> will typically at least open the previously not locked relation + index\n> the cache is over. Each of those relation opens will fire off another\n> InvalidateSystemCaches(). Which will rebuild all the surviving relcache\n> entries at least twice - despite never being accessed in that path.\n\nThis is actually worse than I described here, and I think it may point\ntowards a relatively minimal change that'd improve performance of\ndebug_invalidate_system_caches_always=1 substantially.\n\nPutting in some instrumentation I noticed that with\ndebug_invalidate_system_caches_always=1 a single \"top level\"\nSearchCatCacheMiss() triggers up to a 100 RelationCacheInvalidate(). 
There's\ntwo levels to it:\n\nThe table_open/index_open done as part of a SearchCatCacheMiss() will each\ntrigger a invalidation of their own. But what then drives that up much further\nis that the RelationCacheInvalidate() will destroy the relcache entries for\nnearly all indexes and for pg_amop etc and *not* rebuild them as part of\nRelationCacheInvalidate() - there are no references.\n\nWhich means that the index_open() on whatever index the syscache uses builds a\nnew relache entry. Which then needs to do a RelationInitIndexAccessInfo() on\nthat index. Which triggers a lot of syscache lookups. Which in turn need to\nbuild pg_omop etc. Which trigggers RelationCacheInvalidate() over and over.\n\nIn essence, debug_invalidate_system_caches_always=1 in some important aspects\nbehaves like debug_invalidate_system_caches_always=3, due to the syscache\ninvolvement.\n\nI think it's worth testing that we actually deal with everything possible\nbeing invalidated as part of a syscache lookup, but I don't think we learn a\nton doing that for the whole build. 
Particularly when it prevents us from\nactually testing more interesting invalidation scenarios?\n\nWhat about having a mode where each \"nesting\" level of SearchCatCacheMiss\nallows only one interior InvalidateSystemCaches()?\n\n\nHere's an example stacktrace showing three nested syscache lookups:\n\n#0 SearchCatCacheMiss (cache=0x55aed2c34e00, nkeys=2, hashValue=3953514454, hashIndex=86, v1=2656, v2=2, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1329\n#1 0x000055aed0edf194 in SearchCatCacheInternal (cache=0x55aed2c34e00, nkeys=2, v1=2656, v2=2, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1301\n#2 0x000055aed0edeeb8 in SearchCatCache2 (cache=0x55aed2c34e00, v1=2656, v2=2) at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1177\n#3 0x000055aed0efc9b6 in SearchSysCache2 (cacheId=7, key1=2656, key2=2) at /home/andres/src/postgresql/src/backend/utils/cache/syscache.c:1145\n#4 0x000055aed0ee46aa in get_attoptions (relid=2656, attnum=2) at /home/andres/src/postgresql/src/backend/utils/cache/lsyscache.c:1002\n#5 0x000055aed0ef7ebd in RelationGetIndexAttOptions (relation=0x7f873ad21700, copy=false)\n at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:5734\n#6 0x000055aed0eefc95 in RelationInitIndexAccessInfo (relation=0x7f873ad21700) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:1522\n#7 0x000055aed0eee927 in RelationBuildDesc (targetRelId=2656, insertIt=true) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:1194\n#8 0x000055aed0ef09ea in RelationIdGetRelation (relationId=2656) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2064\n#9 0x000055aed083a95d in relation_open (relationId=2656, lockmode=1) at /home/andres/src/postgresql/src/backend/access/common/relation.c:59\n#10 0x000055aed08c77af in index_open (relationId=2656, lockmode=1) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:136\n#11 
0x000055aed08c6be4 in systable_beginscan (heapRelation=0x7f873ad1ec60, indexId=2656, indexOK=true, snapshot=0x0, nkeys=1, key=0x7ffdff557420)\n at /home/andres/src/postgresql/src/backend/access/index/genam.c:395\n#12 0x000055aed0ef436b in AttrDefaultFetch (relation=0x7f873ad1e830, ndef=1) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:4422\n#13 0x000055aed0eed6dc in RelationBuildTupleDesc (relation=0x7f873ad1e830) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:689\n#14 0x000055aed0eee737 in RelationBuildDesc (targetRelId=16385, insertIt=false) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:1147\n#15 0x000055aed0ef16e4 in RelationClearRelation (relation=0x7f873ad1c728, rebuild=true)\n at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2592\n#16 0x000055aed0ef2391 in RelationCacheInvalidate () at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:3047\n#17 0x000055aed0ee2218 in InvalidateSystemCaches () at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:657\n#18 0x000055aed0ee230b in AcceptInvalidationMessages () at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:725\n#19 0x000055aed0d35204 in LockRelationOid (relid=2610, lockmode=1) at /home/andres/src/postgresql/src/backend/storage/lmgr/lmgr.c:137\n#20 0x000055aed083a953 in relation_open (relationId=2610, lockmode=1) at /home/andres/src/postgresql/src/backend/access/common/relation.c:56\n#21 0x000055aed0913b73 in table_open (relationId=2610, lockmode=1) at /home/andres/src/postgresql/src/backend/access/table/table.c:43\n#22 0x000055aed0edf2be in SearchCatCacheMiss (cache=0x55aed2c3dc80, nkeys=1, hashValue=1574576467, hashIndex=19, v1=2696, v2=0, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1365\n#23 0x000055aed0edf194 in SearchCatCacheInternal (cache=0x55aed2c3dc80, nkeys=1, v1=2696, v2=0, v3=0, v4=0)\n at 
/home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1301\n#24 0x000055aed0edee7d in SearchCatCache1 (cache=0x55aed2c3dc80, v1=2696) at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1169\n#25 0x000055aed0efc8dd in SearchSysCache1 (cacheId=32, key1=2696) at /home/andres/src/postgresql/src/backend/utils/cache/syscache.c:1134\n#26 0x000055aed0eeef67 in RelationInitIndexAccessInfo (relation=0x7f873ad1e160) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:1401\n#27 0x000055aed0eee927 in RelationBuildDesc (targetRelId=2696, insertIt=true) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:1194\n#28 0x000055aed0ef09ea in RelationIdGetRelation (relationId=2696) at /home/andres/src/postgresql/src/backend/utils/cache/relcache.c:2064\n#29 0x000055aed083a95d in relation_open (relationId=2696, lockmode=1) at /home/andres/src/postgresql/src/backend/access/common/relation.c:59\n#30 0x000055aed08c77af in index_open (relationId=2696, lockmode=1) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:136\n#31 0x000055aed08c6be4 in systable_beginscan (heapRelation=0x7f873ad21040, indexId=2696, indexOK=true, snapshot=0x0, nkeys=3, key=0x7ffdff557f60)\n at /home/andres/src/postgresql/src/backend/access/index/genam.c:395\n#32 0x000055aed0edf30f in SearchCatCacheMiss (cache=0x55aed2c49380, nkeys=3, hashValue=1153660433, hashIndex=17, v1=16385, v2=1, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1367\n#33 0x000055aed0edf194 in SearchCatCacheInternal (cache=0x55aed2c49380, nkeys=3, v1=16385, v2=1, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1301\n#34 0x000055aed0edeef8 in SearchCatCache3 (cache=0x55aed2c49380, v1=16385, v2=1, v3=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1185\n#35 0x000055aed0efca94 in SearchSysCache3 (cacheId=59, key1=16385, key2=1, key3=0) at 
/home/andres/src/postgresql/src/backend/utils/cache/syscache.c:1156\n#36 0x000055aed0ee75eb in get_attavgwidth (relid=16385, attnum=1) at /home/andres/src/postgresql/src/backend/utils/cache/lsyscache.c:3116\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 May 2021 11:24:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> In essence, debug_invalidate_system_caches_always=1 in some important aspects\n> behaves like debug_invalidate_system_caches_always=3, due to the syscache\n> involvement.\n\nYeah. I think it's important to test those recursive invalidation\nscenarios, but it could likely be done more selectively.\n\n> What about having a mode where each \"nesting\" level of SearchCatCacheMiss\n> allows only one interior InvalidateSystemCaches()?\n\nAn idea I'd been toying with was to make invals probabilistic, that is\nthere would be X% chance of an inval being forced at any particular\nopportunity. Then you could dial X up or down to make a tradeoff\nbetween speed and the extent of coverage you get from a single run.\n(Over time, you could expect pretty complete coverage even with X\nnot very close to 1, I think.)\n\nThis could be extended to what you're thinking about by reducing X\n(according to some rule or other) for each level of cache-flush\nrecursion. 
The argument to justify that is that recursive cache\nflushes are VERY repetitive, so that even a small probability will\nadd up to full coverage of those code paths fairly quickly.\n\nI've not worked out the math to justify any specific proposal\nalong this line, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 16:53:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Hi,\n\nOn 2021-05-14 16:53:16 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > In essence, debug_invalidate_system_caches_always=1 in some important aspects\n> > behaves like debug_invalidate_system_caches_always=3, due to the syscache\n> > involvement.\n> \n> Yeah. I think it's important to test those recursive invalidation\n> scenarios, but it could likely be done more selectively.\n\nAgreed. I wonder if the logic could be something like indicating that we\ndon't invalidate due to pg_class/attribute/am/... (a set of super common\nsystem catalogs) being opened, iff that open is at the \"top level\". So\nwe'd e.g. not trigger invalidation for a syscache miss scanning\npg_class, unless the miss happens during a relcache build. But we would\ncontinue to trigger invalidations without further checks if\ne.g. pg_subscription is opened.\n\n\n> > What about having a mode where each \"nesting\" level of SearchCatCacheMiss\n> > allows only one interior InvalidateSystemCaches()?\n> \n> An idea I'd been toying with was to make invals probabilistic, that is\n> there would be X% chance of an inval being forced at any particular\n> opportunity. 
Then you could dial X up or down to make a tradeoff\n> between speed and the extent of coverage you get from a single run.\n> (Over time, you could expect pretty complete coverage even with X\n> not very close to 1, I think.)\n> \n> This could be extended to what you're thinking about by reducing X\n> (according to some rule or other) for each level of cache-flush\n> recursion. The argument to justify that is that recursive cache\n> flushes are VERY repetitive, so that even a small probability will\n> add up to full coverage of those code paths fairly quickly.\n\nThat'd make sense, I've been wondering about something similar. But I'm\na bit worried about that making it harder to reproduce problems\nreliably?\n\n\n> I've not worked out the math to justify any specific proposal\n> along this line, though.\n\nFWIW, I've prototyped the idea of only invalidating once for each\nsyscache level, and it does reduce runtime of\n\nCREATE TABLE blarg_{0,1,2,3}(id serial primary key);\nSET debug_invalidate_system_caches_always = 1;\nSELECT * FROM blarg_0 join blarg_1 USING (id) join blarg_2 using (id) JOIN blarg_3 USING(id);\nRESET ALL;\n\nfrom 7.5s to 4.7s. The benefits are smaller when fewer tables are\naccessed, and larger if more (surprising, right :)).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 May 2021 14:25:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "On 2021-May-14, Tom Lane wrote:\n\n> An idea I'd been toying with was to make invals probabilistic, that is\n> there would be X% chance of an inval being forced at any particular\n> opportunity. 
Then you could dial X up or down to make a tradeoff\n> between speed and the extent of coverage you get from a single run.\n> (Over time, you could expect pretty complete coverage even with X\n> not very close to 1, I think.)\n\nMaybe we could say that debug_invalidate_system_caches_always=2 means to\nuse the current behavior, and debug_invalidate_system_caches_always=1\nuses some probabilistic rule?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Fri, 14 May 2021 17:36:51 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-14 16:53:16 -0400, Tom Lane wrote:\n>> An idea I'd been toying with was to make invals probabilistic, that is\n>> there would be X% chance of an inval being forced at any particular\n>> opportunity. Then you could dial X up or down to make a tradeoff\n>> between speed and the extent of coverage you get from a single run.\n>> (Over time, you could expect pretty complete coverage even with X\n>> not very close to 1, I think.)\n\n> That'd make sense, I've been wondering about something similar. But I'm\n> a bit worried about that making it harder to reproduce problems\n> reliably?\n\nOnce you know or suspect a problem, you dial X up to 1 and wait.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 17:38:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-14, Tom Lane wrote:\n>> An idea I'd been toying with was to make invals probabilistic, that is\n>> there would be X% chance of an inval being forced at any particular\n>> opportunity. 
Then you could dial X up or down to make a tradeoff\n>> between speed and the extent of coverage you get from a single run.\n>> (Over time, you could expect pretty complete coverage even with X\n>> not very close to 1, I think.)\n\n> Maybe we could say that debug_invalidate_system_caches_always=2 means to\n> use the current behavior, and debug_invalidate_system_caches_always=1\n> uses some probabilistic rule?\n\nWhat I had in mind was to replace the boolean with an actual fraction.\nProbability zero is the non-debug behavior, and probability one gives\nyou the same result as CLOBBER_CACHE_ALWAYS, and values in between\ngive you tradeoffs. But I'm not sure exactly how to extend that to\nthe recursive cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 17:43:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some other CLOBBER_CACHE_ALWAYS culprits" } ]
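Tom's closing proposal in this thread — replacing the boolean with an actual fraction X — works because coverage compounds across repeated opportunities, as his parenthetical "(Over time, you could expect pretty complete coverage ...)" suggests. A minimal sketch of that arithmetic in plain Python (the helper names are made up; this is not PostgreSQL code):

```python
import random

def coverage_probability(x: float, opportunities: int) -> float:
    """Chance that one particular invalidation point fires at least once
    when every opportunity independently clobbers with probability x."""
    return 1.0 - (1.0 - x) ** opportunities

def maybe_invalidate(x: float, rng=random.random) -> bool:
    """The per-opportunity coin flip a fractional setting could use:
    x = 0 is the non-debug behavior, x = 1 matches CLOBBER_CACHE_ALWAYS."""
    return rng() < x
```

Even x = 0.1 gives better than 99% coverage of an invalidation point that is reached 50 times per run, while dialing x up to 1 — as Tom notes — restores deterministic reproduction of a suspected problem.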
[ { "msg_contents": "$SUBJECT was discovered in Fedora 35 prerelease testing of Python 3.10:\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1959080\n\nWe have time to think about what to do about this, but some fix\nwill be needed before long.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 12:13:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Python 3.10 breaks regression tests with traceback changes" }, { "msg_contents": "I missed Tom already started the thread, so just passing the other mail \nwith a patch proposal here:\n\nhttps://www.postgresql.org/message-id/CA%2BHKMWPk0bMUvwV7x4pYxbtNMCVvpBVi%3Do%3DS96MFYTr_WKZ7UA%40mail.gmail.com\n\nHonza\n\nOn 5/11/21 6:13 PM, Tom Lane wrote:\n> $SUBJECT was discovered in Fedora 35 prerelease testing of Python 3.10:\n> \n> https://bugzilla.redhat.com/show_bug.cgi?id=1959080\n> \n> We have time to think about what to do about this, but some fix\n> will be needed before long.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n\n", "msg_date": "Tue, 11 May 2021 23:31:06 +0200", "msg_from": "Honza Horak <hhorak@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Python 3.10 breaks regression tests with traceback changes" }, { "msg_contents": "Anybody had a chance to look at the proposed patch?\n\nHonza\n\nOn Tue, May 11, 2021 at 11:31 PM Honza Horak <hhorak@redhat.com> wrote:\n\n> I missed Tom already started the thread, so just passing the other mail\n> with a patch proposal here:\n>\n>\n> https://www.postgresql.org/message-id/CA%2BHKMWPk0bMUvwV7x4pYxbtNMCVvpBVi%3Do%3DS96MFYTr_WKZ7UA%40mail.gmail.com\n>\n> Honza\n>\n> On 5/11/21 6:13 PM, Tom Lane wrote:\n> > $SUBJECT was discovered in Fedora 35 prerelease testing of Python 3.10:\n> >\n> > https://bugzilla.redhat.com/show_bug.cgi?id=1959080\n> >\n> > We have time to think about what to do about this, but some fix\n> > will be needed before long.\n> >\n> > regards, tom lane\n> >\n> >\n>\n", "msg_date": "Mon, 24 May 2021 17:52:18 +0200", "msg_from": "Honza Horak <hhorak@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Python 3.10 breaks regression tests with traceback changes" }, { "msg_contents": "On 24.05.21 17:52, Honza Horak wrote:\n> Anybody had a chance to look at the proposed patch?\n\nPatch applied to master. I suppose we should backpatch this.\n\n> \n> Honza\n> \n> On Tue, May 11, 2021 at 11:31 PM Honza Horak <hhorak@redhat.com \n> <mailto:hhorak@redhat.com>> wrote:\n> \n> I missed Tom already started the thread, so just passing the other mail\n> with a patch proposal here:\n> \n> https://www.postgresql.org/message-id/CA%2BHKMWPk0bMUvwV7x4pYxbtNMCVvpBVi%3Do%3DS96MFYTr_WKZ7UA%40mail.gmail.com\n> <https://www.postgresql.org/message-id/CA%2BHKMWPk0bMUvwV7x4pYxbtNMCVvpBVi%3Do%3DS96MFYTr_WKZ7UA%40mail.gmail.com>\n> \n> Honza\n> \n> On 5/11/21 6:13 PM, Tom Lane wrote:\n> > $SUBJECT was discovered in Fedora 35 prerelease testing of Python\n> 3.10:\n> >\n> > https://bugzilla.redhat.com/show_bug.cgi?id=1959080\n> <https://bugzilla.redhat.com/show_bug.cgi?id=1959080>\n> >\n> > We have time to think about what to do about this, but some fix\n> > will be needed before long.\n> >\n> >                       regards, tom lane\n> >\n> >\n> \n\n\n\n", "msg_date": 
"Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Python 3.10 breaks regression tests with traceback changes" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Patch applied to master. I suppose we should backpatch this.\n\nPython 3.10 has evidently now percolated into Fedora Rawhide,\nthus caiman is failing in the back branches. Time for that\nback-patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 17:13:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Python 3.10 breaks regression tests with traceback changes" }, { "msg_contents": "On 16.06.21 23:13, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Patch applied to master. I suppose we should backpatch this.\n> \n> Python 3.10 has evidently now percolated into Fedora Rawhide,\n> thus caiman is failing in the back branches. Time for that\n> back-patch.\n\ndone\n\n\n", "msg_date": "Thu, 17 Jun 2021 09:08:05 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Python 3.10 breaks regression tests with traceback changes" } ]
[ { "msg_contents": "We've several times discussed doing $SUBJECT by replacing the\nmakefile's use of serial_schedule with calling parallel_schedule\nwith --max-connections=1. This'd remove the need to maintain\ntwo lists of regression test scripts.\n\nI got annoyed again just now about how people seem unable to\nkeep the two lists in the same order, so here is a patch to\nget rid of serial_schedule in that way.\n\n(The vcregress.pl changes are untested, but they seem straightforward\nenough. I do wonder though why we spell it --max-concurrent-tests\nthere when the makefile uses --max-connections.)\n\nIt'd perhaps be possible to adjust pg_regress so that when\n--max-connections=1 its progress output looks exactly the same\nas it did with serial_schedule. I doubt it's worth the trouble\nthough, unless anyone really wants that.\n\nAny objections?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 11 May 2021 14:58:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Let's get rid of serial_schedule" }, { "msg_contents": "> On 11 May 2021, at 20:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We've several times discussed doing $SUBJECT by replacing the\n> makefile's use of serial_schedule with calling parallel_schedule\n> with --max-connections=1. This'd remove the need to maintain\n> two lists of regression test scripts.\n> \n> I got annoyed again just now about how people seem unable to\n> keep the two lists in the same order, so here is a patch to\n> get rid of serial_schedule in that way.\n\nVery much +1 on this approach.\n\n> +\t# for backwards compatibility, interpret \"serial\" as parallel tests\n\nThis comment may seem odd without reading the commit message. 
Perhaps it can\nbe reworded to \"..as parallel tests running with a single worker\" or something\nalong those lines?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 11 May 2021 21:11:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Let's get rid of serial_schedule" }, { "msg_contents": "On 11/05/2021 21:58, Tom Lane wrote:\n> We've several times discussed doing $SUBJECT by replacing the\n> makefile's use of serial_schedule with calling parallel_schedule\n> with --max-connections=1. This'd remove the need to maintain\n> two lists of regression test scripts.\n> \n> I got annoyed again just now about how people seem unable to\n> keep the two lists in the same order, so here is a patch to\n> get rid of serial_schedule in that way.\n\n+1\n\n> +\t# for backwards compatibility, interpret \"serial\" as parallel tests\n\nThis comment isn't great, IMHO. How about:\n\n# for backwards comopatibility, \"serial\" runs the tests in\n# parallel_schedule one by one.\n\n- Heikki\n\n\n", "msg_date": "Tue, 11 May 2021 22:12:46 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Let's get rid of serial_schedule" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 11/05/2021 21:58, Tom Lane wrote:\n>> +\t# for backwards compatibility, interpret \"serial\" as parallel tests\n\n> This comment isn't great, IMHO. 
How about:\n\n> # for backwards comopatibility, \"serial\" runs the tests in\n> # parallel_schedule one by one.\n\nYeah, and on closer inspection, the code is wrong too :-(.\nI'd confused --max-concurrent-tests with --max-connections,\nbut they're different.\n\nAlso, I did take a look at pg_regress.c, and confirmed my\nfear that getting it to duplicate the serial output style\nwould be kind of messy.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 11 May 2021 15:38:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Let's get rid of serial_schedule" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 11 May 2021, at 20:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +\t# for backwards compatibility, interpret \"serial\" as parallel tests\n\n> This comment may seem odd without reading the commit message. Perhaps it can\n> be reworded to \"..as parallel tests running with a single worker\" or something\n> along those lines?\n\nI liked Heikki's phrasing, so the v2 patch does it his way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 15:44:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Let's get rid of serial_schedule" }, { "msg_contents": "> On 11 May 2021, at 21:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 11 May 2021, at 20:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> +\t# for backwards compatibility, interpret \"serial\" as parallel tests\n> \n>> This comment may seem odd without reading the commit message. 
Perhaps it can\n>> be reworded to \"..as parallel tests running with a single worker\" or something\n>> along those lines?\n> \n> I liked Heikki's phrasing, so the v2 patch does it his way.\n\nAgreed, I like his phrasing too.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 11 May 2021 21:47:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Let's get rid of serial_schedule" }, { "msg_contents": "Tue, 11 May 2021 at 23:47, Daniel Gustafsson <daniel@yesql.se>:\n\n> > On 11 May 2021, at 21:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 11 May 2021, at 20:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> + # for backwards compatibility, interpret \"serial\" as parallel tests\n> >\n> >> This comment may seem odd without reading the commit message. Perhaps\n> it can\n> >> be reworded to \"..as parallel tests running with a single worker\" or\n> something\n> >> along those lines?\n> >\n> > I liked Heikki's phrasing, so the v2 patch does it his way.\n>\n> Agreed, I like his phrasing too.\n\n\n+1 too. I'd also like to get rid of the redundant test schedule. Very much\nappreciate the initiative.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 11 May 2021 23:53:29 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Let's get rid of serial_schedule" } ]
[ { "msg_contents": "While testing something on spgist I found that at certain point while\ninserting in spgist it is going for doPickSplit, but even after split\nis is not able to find a place to insert a tuple and it keeping going\nin that loop infinitely it seems and finally error out with OOM\nbecause in this loop we are continuously allocating memory for the\ntuple in temp context but since we are never coming out of the loop it\nis erroring out with OOM.\n\nTo reproduce load the data from the attached script 'data_load.sql'\nand run below commands\n\n------Load data before running this using 'data_load.sql'\n-------Test case start---\ncreate extension spgist_name_ops;\n\nselect opcname, amvalidate(opc.oid)\nfrom pg_opclass opc join pg_am am on am.oid = opcmethod\nwhere amname = 'spgist' and opcname = 'name_ops';\n\n-- warning expected here\nselect opcname, amvalidate(opc.oid)\nfrom pg_opclass opc join pg_am am on am.oid = opcmethod\nwhere amname = 'spgist' and opcname = 'name_ops_old';\n\ncreate table t(f1 name, f2 integer, f3 text);\ncreate index on t using spgist(f1) include(f2, f3);\n\\d+ t_f1_f2_f3_idx\n\ninsert into t select proname, case when length(proname) % 2 = 0 then\npronargs else null end, prosrc from pg_proc_test;\n\n---- Test case end\n\n--Memory allocation stack----\n#1 0x0000000000bf96c5 in palloc0 (size=9696) at mcxt.c:1133\n#2 0x000000000056b24b in spgFormLeafTuple (state=0x7ffedea15b80,\nheapPtr=0x27df306, datums=0x7ffedea15660, isnulls=0x7ffedea15640) at\nspgutils.c:892\n#3 0x000000000055e15c in doPickSplit (index=0x7fa4b1ddd5c8,\nstate=0x7ffedea15b80, current=0x7ffedea159c0, parent=0x7ffedea159a0,\nnewLeafTuple=0x27df300, level=9,\nisNulls=false, isNew=true) at spgdoinsert.c:848\n#4 0x0000000000561e53 in spgdoinsert (index=0x7fa4b1ddd5c8,\nstate=0x7ffedea15b80, heapPtr=0x27718d8, datums=0x7ffedea15cc0,\nisnulls=0x7ffedea15ca0)\nat spgdoinsert.c:2115\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": 
"Wed, 12 May 2021 12:38:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "OOM in spgist insert" }, { "msg_contents": "ср, 12 мая 2021 г. в 11:09, Dilip Kumar <dilipbalaut@gmail.com>:\n\n> While testing something on spgist I found that at certain point while\n> inserting in spgist it is going for doPickSplit, but even after split\n> is is not able to find a place to insert a tuple and it keeping going\n> in that loop infinitely it seems and finally error out with OOM\n> because in this loop we are continuously allocating memory for the\n> tuple in temp context but since we are never coming out of the loop it\n> is erroring out with OOM.\n>\n> My first idea is that this is the case when index tuple doesn't fit into\none index page. As INCLUDED columns are added as is the tuple can not be\nmade shorter by prefix-stripping. Seems we should check every index tuple\nlength to fit the page before its insertion. Will see the code little bit\nlater.\n\nThanks for the reporting!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nср, 12 мая 2021 г. в 11:09, Dilip Kumar <dilipbalaut@gmail.com>:While testing something on spgist I found that at certain point while\ninserting in spgist it is going for doPickSplit, but even after split\nis is not able to find a place to insert a tuple and it keeping going\nin that loop infinitely it seems and finally error out with OOM\nbecause in this loop we are continuously allocating memory for the\ntuple in temp context but since we are never coming out of the loop it\nis erroring out with OOM.My first idea is that this is the case when index tuple doesn't fit into one index page. As INCLUDED columns are added as is the tuple can not be made shorter by prefix-stripping. Seems we should check every index tuple length to fit the page before its insertion. 
Will see the code little bit later.Thanks for the reporting!-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Wed, 12 May 2021 12:13:25 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On Wed, 12 May 2021 at 1:43 PM, Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> ср, 12 мая 2021 г. в 11:09, Dilip Kumar <dilipbalaut@gmail.com>:\n>\n>> While testing something on spgist I found that at certain point while\n>> inserting in spgist it is going for doPickSplit, but even after split\n>> is is not able to find a place to insert a tuple and it keeping going\n>> in that loop infinitely it seems and finally error out with OOM\n>> because in this loop we are continuously allocating memory for the\n>> tuple in temp context but since we are never coming out of the loop it\n>> is erroring out with OOM.\n>>\n>> My first idea is that this is the case when index tuple doesn't fit into\n> one index page. As INCLUDED columns are added as is the tuple can not be\n> made shorter by prefix-stripping. Seems we should check every index tuple\n> length to fit the page before its insertion.\n>\n\nThanks for looking into this.\n\nWill see the code little bit later.\n>\n\nOk\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, 12 May 2021 at 1:43 PM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:ср, 12 мая 2021 г. 
в 11:09, Dilip Kumar <dilipbalaut@gmail.com>:While testing something on spgist I found that at certain point while\ninserting in spgist it is going for doPickSplit, but even after split\nis is not able to find a place to insert a tuple and it keeping going\nin that loop infinitely it seems and finally error out with OOM\nbecause in this loop we are continuously allocating memory for the\ntuple in temp context but since we are never coming out of the loop it\nis erroring out with OOM.My first idea is that this is the case when index tuple doesn't fit into one index page. As INCLUDED columns are added as is the tuple can not be made shorter by prefix-stripping. Seems we should check every index tuple length to fit the page before its insertion.Thanks for looking into this.Will see the code little bit later.Ok\n-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 12 May 2021 14:06:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "ср, 12 мая 2021 г. в 12:36, Dilip Kumar <dilipbalaut@gmail.com>:\n\n> On Wed, 12 May 2021 at 1:43 PM, Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n>\n>> ср, 12 мая 2021 г. в 11:09, Dilip Kumar <dilipbalaut@gmail.com>:\n>>\n>>> While testing something on spgist I found that at certain point while\n>>> inserting in spgist it is going for doPickSplit, but even after split\n>>> is is not able to find a place to insert a tuple and it keeping going\n>>> in that loop infinitely it seems and finally error out with OOM\n>>> because in this loop we are continuously allocating memory for the\n>>> tuple in temp context but since we are never coming out of the loop it\n>>> is erroring out with OOM.\n>>>\n>>> My first idea is that this is the case when index tuple doesn't fit into\n>> one index page. As INCLUDED columns are added as is the tuple can not be\n>> made shorter by prefix-stripping. 
Seems we should check every index tuple\n>> length to fit the page before its insertion.\n>>\n>\n> Thanks for looking into this.\n>\n> Will see the code little bit later.\n>>\n>\n> Ok\n>\nPFA v1 patch. Does this help?\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 12 May 2021 12:39:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "ср, 12 мая 2021 г. в 12:39, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> ср, 12 мая 2021 г. в 12:36, Dilip Kumar <dilipbalaut@gmail.com>:\n>\n>> On Wed, 12 May 2021 at 1:43 PM, Pavel Borisov <pashkin.elfe@gmail.com>\n>> wrote:\n>>\n>>> ср, 12 мая 2021 г. в 11:09, Dilip Kumar <dilipbalaut@gmail.com>:\n>>>\n>>>> While testing something on spgist I found that at certain point while\n>>>> inserting in spgist it is going for doPickSplit, but even after split\n>>>> is is not able to find a place to insert a tuple and it keeping going\n>>>> in that loop infinitely it seems and finally error out with OOM\n>>>> because in this loop we are continuously allocating memory for the\n>>>> tuple in temp context but since we are never coming out of the loop it\n>>>> is erroring out with OOM.\n>>>>\n>>>> My first idea is that this is the case when index tuple doesn't fit\n>>> into one index page. As INCLUDED columns are added as is the tuple can not\n>>> be made shorter by prefix-stripping. Seems we should check every index\n>>> tuple length to fit the page before its insertion.\n>>>\n>>\n>> Thanks for looking into this.\n>>\n>> Will see the code little bit later.\n>>>\n>>\n>> Ok\n>>\n> PFA v1 patch. Does this help?\n>\nI've made a mistake in attributes count in v1. 
PFA v2\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 12 May 2021 12:51:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On Wed, May 12, 2021 at 2:21 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n>>\n>> PFA v1 patch. Does this help?\n>\n> I've made a mistake in attributes count in v1. PFA v2\n\nV2 works. Thanks for fixing this quickly, I think you can add a\ncomment for the new error condition you added.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 14:36:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": ">\n> V2 works. Thanks for fixing this quickly, I think you can add a\n> comment for the new error condition you added.\n>\nAdded comments. PFA v3\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 12 May 2021 15:41:21 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On Wed, May 12, 2021 at 5:11 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>>\n>> V2 works. Thanks for fixing this quickly, I think you can add a\n>> comment for the new error condition you added.\n>\n> Added comments. 
PFA v3\n\nThanks.\n\n+ *\n+ * For indexes with INCLUDEd columns we do not know whether we can reduce\n+ * index tuple size by suffixing its key part or we will go into the\n+ * endless loop on pick-split (in case included columns data is big enough\n\n INCLUDEd -> why you have used a mixed case here?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 17:26:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": ">\n> INCLUDEd -> why you have used a mixed case here?\n>\nIt is current practice to call INCLUDE columns in capital, you can find\nmany places in the current code. But case mixture can be avoided indeed ))\nPFA v4", "msg_date": "Wed, 12 May 2021 16:05:33 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On Wed, May 12, 2021 at 5:35 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>>\n>> INCLUDEd -> why you have used a mixed case here?\n>\n> It is current practice to call INCLUDE columns in capital, you can find many places in the current code. 
But case mixture can be avoided indeed ))\n> PFA v4\n\nOkay, that makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 20:15:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> [ v4-0001-When-there-are-INCLUDEd-columns-in-SpGist-check-t.patch ]\n\nI don't like this patch one bit --- it's basically disabling a fairly\nimportant SPGiST feature as soon as there are included columns.\nWhat's more, it's not really giving us any better defense against\nthe infinite-picksplit-loop problem than we had before.\n\nI wonder if we should give up on the theory posited circa\nspgdoinsert.c:2213:\n\n * Note: if the opclass sets longValuesOK, we rely on the\n * choose function to eventually shorten the leafDatum\n * enough to fit on a page. We could add a test here to\n * complain if the datum doesn't get visibly shorter each\n * time, but that could get in the way of opclasses that\n * \"simplify\" datums in a way that doesn't necessarily\n * lead to physical shortening on every cycle.\n\nThe argument that there might be a longValuesOK opclass that *doesn't*\nshorten the datum each time seems fairly hypothetical to me.\n\nAn alternative way of looking at things is that this is the opclass's\nfault. It should have realized that it's not making progress, and\nthrown an error. However, I'm not sure if the opclass picksplit\nfunction has enough info to throw a meaningful error. It looks to\nme like the trouble case from spg_text_picksplit's point of view\nis that it's given a single tuple containing a zero-length string,\nso that there is no prefix it can strip. However, it seems possible\nthat that could be a legitimate case. Even if it's not, the opclass\nfunction doesn't have (for instance) the name of the index to cite\nin an error message. 
Nor did we give it a way to return a failure\nindication, which is seeming like a mistake right now.\n\nBTW, another nasty thing I discovered while testing this is that\nthe CHECK_FOR_INTERRUPTS() at line 2146 is useless, because\nwe're holding a buffer lock there so InterruptHoldoffCount > 0.\nSo once you get into this loop you can't even cancel the query.\nSeems like that needs a fix, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 11:25:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On 2021-May-13, Tom Lane wrote:\n\n> BTW, another nasty thing I discovered while testing this is that\n> the CHECK_FOR_INTERRUPTS() at line 2146 is useless, because\n> we're holding a buffer lock there so InterruptHoldoffCount > 0.\n> So once you get into this loop you can't even cancel the query.\n> Seems like that needs a fix, too.\n\nThis comment made me remember a patch I've had for a while, which splits\nthe CHECK_FOR_INTERRUPTS() definition in two -- one of them is\nINTERRUPTS_PENDING_CONDITION() which let us test the condition\nseparately; that allows the lock we hold to be released prior to\nactually processing the interrupts.\n\nThe btree code modified was found to be an actual problem in production\nwhen a btree is corrupted in such a way that vacuum would get an\ninfinite loop. 
I don't remember the exact details but I think we saw\nvacuum running for a couple of weeks, and had to restart the server in\norder to terminate it (since it wouldn't respond to signals).\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)", "msg_date": "Thu, 13 May 2021 11:53:51 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On 2021-May-13, Alvaro Herrera wrote:\n\n> The btree code modified was found to be an actual problem in production\n> when a btree is corrupted in such a way that vacuum would get an\n> infinite loop. I don't remember the exact details but I think we saw\n> vacuum running for a couple of weeks, and had to restart the server in\n> order to terminate it (since it wouldn't respond to signals).\n\n(Looking again, the nbtpage.c hunk might have been made obsolete by\nc34787f91058 and other commits).\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 13 May 2021 12:29:55 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-13, Tom Lane wrote:\n>> BTW, another nasty thing I discovered while testing this is that\n>> the CHECK_FOR_INTERRUPTS() at line 2146 is useless, because\n>> we're holding a buffer lock there so InterruptHoldoffCount > 0.\n>> So once you get into this loop you can't even cancel the query.\n>> Seems like that needs a fix, too.\n\nAttached is a WIP patch for that part. 
Basically, if it looks\nlike there's an interrupt pending, make spgdoinsert() fall out of\nits loop, so it'll release the buffer locks, and then recheck\nCHECK_FOR_INTERRUPTS() at a point where it can really throw the\nerror. Now, the hole in this approach is what if ProcessInterrupts\nstill declines to throw the error? I've made the patch make use of\nthe existing provision to retry spgdoinsert() in case of deadlock,\nso that it just goes around and tries again. But that doesn't seem\nterribly satisfactory, because if InterruptPending remains set then\nwe'll just have an infinite loop at that outer level. IOW, this\npatch can only cope with the situation where there's not any outer\nfunction holding a buffer lock (or other reason to prevent the\nquery cancel from taking effect). Maybe that's good enough, but\nI'm not terribly pleased with it.\n\n> This comment made me remember a patch I've had for a while, which splits\n> the CHECK_FOR_INTERRUPTS() definition in two -- one of them is\n> INTERRUPTS_PENDING_CONDITION() which let us test the condition\n> separately; that allows the lock we hold to be released prior to\n> actually processing the interrupts.\n\nHm. Yeah, I was feeling that this patch is unduly friendly with\nthe innards of CHECK_FOR_INTERRUPTS(). So maybe some refactoring\nis called for, but I'm not quite sure what it should look like.\nTesting the condition separately is fine, but what about the case\nof ProcessInterrupts refusing to pull the trigger?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 May 2021 12:31:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> (Looking again, the nbtpage.c hunk might have been made obsolete by\n> c34787f91058 and other commits).\n\nOK. 
Here's a revision that adopts your idea, except that I left\nout the nbtpage.c change since you aren't sure of that part.\n\nI added a macro that allows spgdoinsert to Assert that it's not\ncalled in a context where the infinite-loop-due-to-InterruptPending\nrisk would arise. This is a little bit fragile, because it'd be\neasy for ill-considered changes to ProcessInterrupts to break it,\nbut it's better than nothing.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 May 2021 13:29:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Here's a patch that attempts to prevent the infinite loop in spgdoinsert\nby checking whether the proposed leaf tuple is getting smaller at each\niteration.\n\nWe can't be totally rigid about that, because for example if the opclass\nshortens a 7-byte string to 5 bytes, that will make no difference in the\ntuple's size after alignment. I first tried to handle that by checking\ndatumGetSize() of the key datum itself, but observed that spgtextproc.c\nhas some cases where it'll return an empty leaf-datum string at two\nlevels before succeeding. Maybe it'd be okay to fail that on the\ngrounds that it can't become any smaller later. But on the whole, and\nconsidering the existing comment's concerns about opclasses that don't\nshorten the datum every time, it seems like a good idea to allow some\nfuzz here. So what this patch does is to allow up to 10 cycles with no\nreduction in the actual leaf tuple size before failing. That way we can\nhandle slop due to alignment roundoff and slop due to opclass corner\ncases with a single, very cheap mechanism. 
It does mean that we might\nbuild a few more useless inner tuples before failing --- but that seems\nlike a good tradeoff for *not* failing in cases where the opclass is\nable to shorten the leaf datum sufficiently.\n\nI have not bothered to tease apart the query-cancel and infinite-loop\nparts of the patch, but probably should do that before committing.\n\nWhat do people think about back-patching this? In existing branches,\nwe've defined it to be an opclass bug if it fails to shorten the leaf\ndatum enough. But not having any defenses against that seems like\nnot a great idea. OTOH, the 10-cycles-to-show-progress rule could be\nargued to be an API break.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 May 2021 18:20:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On 2021-May-13, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > (Looking again, the nbtpage.c hunk might have been made obsolete by\n> > c34787f91058 and other commits).\n> \n> OK. Here's a revision that adopts your idea, except that I left\n> out the nbtpage.c change since you aren't sure of that part.\n\nThanks.\n\n> I added a macro that allows spgdoinsert to Assert that it's not\n> called in a context where the infinite-loop-due-to-InterruptPending\n> risk would arise. This is a little bit fragile, because it'd be\n> easy for ill-considered changes to ProcessInterrupts to break it,\n> but it's better than nothing.\n\nHmm, it looks OK to me, but I wonder why you kept the original\nCHECK_FOR_INTERRUPTS()s since these would be done once we've broken out\nof the loop anyway. I tested Dilip's original test case and while we\nstill die on OOM, we're able to interrupt it before dying.\n\n\nNot related to this patch -- I was bothered by the UnlockReleaseBuffer\ncalls at the bottom of spgdoinsert that leave the buffer still set in\nthe structs. 
It's not a problem if you look only at this routine, but I\nnotice that callee doPickSplit does the same thing, so maybe there is\nsome weird situation in which you could end up with the buffer variable\nset, but we don't hold lock nor pin on the page, so an attempt to clean\nup would break. I don't know enough about spgist to figure out how to\ncraft a test case, maybe it's impossible to reach for some reason, but\nit seems glass-in-the-beach sort of thing.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n", "msg_date": "Thu, 13 May 2021 18:26:56 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On 2021-May-13, Tom Lane wrote:\n\n> What do people think about back-patching this? In existing branches,\n> we've defined it to be an opclass bug if it fails to shorten the leaf\n> datum enough. But not having any defenses against that seems like\n> not a great idea. OTOH, the 10-cycles-to-show-progress rule could be\n> argued to be an API break.\n\nI think if the alternative is to throw an error, we can afford to retry\nquite a few more times than 10 in order not have that called an API\nbreak. Say, retry (MAXIMUM_ALIGNOF << 3) times or so (if you want to\nparameterize on maxalign). It's not like this is going to be a\nperformance drag where not needed .. 
but I think leaving back-branches\nunfixed is not great.\n\nI did run Dilip's test case as well as your new regression test, and\nboth work as intended with your new code (and both OOM-crash the\noriginal code).\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 13 May 2021 18:49:33 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "I think it's good to backpatch the check as it doesn't change acceptable\nbehavior, just replace infinite loop with the acceptable thing.", "msg_date": "Fri, 14 May 2021 02:49:49 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hmm, it looks OK to me, but I wonder why you kept the original\n> CHECK_FOR_INTERRUPTS()s since these would be done once we've broken out\n> of the loop anyway. I tested Dilip's original test case and while we\n> still die on OOM, we're able to interrupt it before dying.\n\nHm. 
My thought was that in the cases where InterruptPending is set for\nsome reason other than a query cancel, we could let ProcessInterrupts\nservice it at less cost than abandoning and retrying the index insertion.\nOn reflection though, that only works for the first CHECK_FOR_INTERRUPTS\nat the top of the loop, and only the first time through, because during\nlater calls we'll be holding buffer locks.\n\nMaybe the best idea is to have one CHECK_FOR_INTERRUPTS at the top of\nthe function, in hopes of clearing out any already-pending interrupts,\nand then just use the condition test inside the loop.\n\n> Not related to this patch -- I was bothered by the UnlockReleaseBuffer\n> calls at the bottom of spgdoinsert that leave the buffer still set in\n> the structs. It's not a problem if you look only at this routine, but I\n> notice that callee doPickSplit does the same thing, so maybe there is\n> some weird situation in which you could end up with the buffer variable\n> set, but we don't hold lock nor pin on the page, so an attempt to clean\n> up would break.\n\nMaybe I'm confused, but aren't those just local variables that are about\nto go out of scope anyway? Clearing them seems more likely to draw\ncompiler warnings about dead stores than accomplish something useful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 19:04:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-13, Tom Lane wrote:\n>> What do people think about back-patching this? In existing branches,\n>> we've defined it to be an opclass bug if it fails to shorten the leaf\n>> datum enough. But not having any defenses against that seems like\n>> not a great idea. 
OTOH, the 10-cycles-to-show-progress rule could be\n>> argued to be an API break.\n\n> I think if the alternative is to throw an error, we can afford to retry\n> quite a few more times than 10 in order not have that called an API\n> break. Say, retry (MAXIMUM_ALIGNOF << 3) times or so (if you want to\n> parameterize on maxalign). It's not like this is going to be a\n> performance drag where not needed .. but I think leaving back-branches\n> unfixed is not great.\n\nHm. Index bloat is not something to totally ignore, though, so I'm\nnot sure what the best cutoff is.\n\nAnyway, here is a patch set teased apart into committable bites,\nand with your other points addressed.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 May 2021 21:01:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "I wrote:\n> Anyway, here is a patch set teased apart into committable bites,\n> and with your other points addressed.\n\nOh, maybe some docs would be a good thing too ...\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 May 2021 21:44:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On Fri, May 14, 2021 at 6:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-May-13, Tom Lane wrote:\n> >> What do people think about back-patching this? In existing branches,\n> >> we've defined it to be an opclass bug if it fails to shorten the leaf\n> >> datum enough. But not having any defenses against that seems like\n> >> not a great idea. OTOH, the 10-cycles-to-show-progress rule could be\n> >> argued to be an API break.\n>\n> > I think if the alternative is to throw an error, we can afford to retry\n> > quite a few more times than 10 in order not have that called an API\n> > break. 
Say, retry (MAXIMUM_ALIGNOF << 3) times or so (if you want to\n> > parameterize on maxalign). It's not like this is going to be a\n> > performance drag where not needed .. but I think leaving back-branches\n> > unfixed is not great.\n>\n> Hm. Index bloat is not something to totally ignore, though, so I'm\n> not sure what the best cutoff is.\n>\n> Anyway, here is a patch set teased apart into committable bites,\n> and with your other points addressed.\n\nI have tested with my original issue and this patch is fixing the\nissue. Thanks!\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 10:57:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Now when checking for shortening of leaf tuple is added LongValuesOK\nbecome slightly redundant. I'd propose to replace it with more legible name\nas LongValuesOK doesn't mean they are warranted to be ok just that we can\ntry to shorten them? What about tryShortening, trySuffixing or\ncan(Try)ShortenTuple?\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 14 May 2021 12:21:28 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": ">\n>\n> Now when checking for shortening of leaf tuple is added LongValuesOK\n> become slightly redundant. 
I'd propose to replace it with more legible name\n> as LongValuesOK doesn't mean they are warranted to be ok just that we can\n> try to shorten them? What about tryShortening, trySuffixing or\n> can(Try)ShortenTuple?\n>\nOr maybe it's even more logical now to invert it and make\ndontTrySuffixing for use in the opclasses for kd-tree, quadtree etc which\nare warranted to have the same key data length at any tree level ?\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 14 May 2021 12:27:14 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "On Thu, May 13, 2021 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> OTOH, the 10-cycles-to-show-progress rule could be\n> argued to be an API break.\n\nNot being familiar with this code, I don't really understand why 10\ncycles to show progress wouldn't, like 640kB, be enough for anyone.\nBut as far as back-patching the code goals, I think the question is\nnot so much whether this restriction could hypothetically break\nanything but whether it will actually break anything, which leads to\nthe question of how many spgist opclasses we think exist outside of\ncore. 
I did a Google search and found some evidence that PostGIS might\nhave such things, and also this:\n\nhttps://github.com/fake-name/pg-spgist_hamming\n\nThere might be other things, but I did not find them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 08:12:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> Now when checking for shortening of leaf tuple is added LongValuesOK\n> become slightly redundant. I'd propose to replace it with more legible name\n> as LongValuesOK doesn't mean they are warranted to be ok just that we can\n> try to shorten them? What about tryShortening, trySuffixing or\n> can(Try)ShortenTuple?\n\nThat field name is part of the opclass API. I fear it's several years\ntoo late to rename it for no compelling reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 09:40:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, May 13, 2021 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> OTOH, the 10-cycles-to-show-progress rule could be\n>> argued to be an API break.\n\n> Not being familiar with this code, I don't really understand why 10\n> cycles to show progress wouldn't, like 640kB, be enough for anyone.\n\nYeah, after further thought I'm thinking that that ought to be plenty.\nIn released branches, that code will never execute at all unless the\ndatum-to-be-indexed is larger than ~8kB. If you are starting with,\nsay, a 30kB string, you'd better be removing a heck of a lot more than\none byte per tree level, or your search performance is going to be\nabysmal. 
Maybe algorithmic oddities would sometimes result in\nseemingly making no progress for one cycle, but I doubt there's need\nfor more slop than that. In this light, a 10-cycle grace period seems\nif anything excessive.\n\n> But as far as back-patching the code goals, I think the question is\n> not so much whether this restriction could hypothetically break\n> anything but whether it will actually break anything, which leads to\n> the question of how many spgist opclasses we think exist outside of\n> core.\n\nThat is also an interesting viewpoint. codesearch.debian.net knows\nof no external SPGiST opclasses other than PostGIS'. They don't\nseem to have indexed the two you found on github, though. None of\nthose four set longValuesOK to true, which means that the whole\ndiscussion is moot for them. So it's entirely plausible that\nspgtextproc.c is the only affected code anywhere.\n\nOf course, that conclusion could lead to the position that there's\nno point in back-patching anyway, since there's no reason to think\nthat spgtextproc.c is buggy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 09:54:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> This comment made me remember a patch I've had for a while, which splits\n> the CHECK_FOR_INTERRUPTS() definition in two -- one of them is\n> INTERRUPTS_PENDING_CONDITION() which let us test the condition\n> separately; that allows the lock we hold to be released prior to\n> actually processing the interrupts.\n\nI've now pushed that macro change ...\n\n> The btree code modified was found to be an actual problem in production\n> when a btree is corrupted in such a way that vacuum would get an\n> infinite loop. 
I don't remember the exact details but I think we saw\n> vacuum running for a couple of weeks, and had to restart the server in\n> order to terminate it (since it wouldn't respond to signals).\n\n... but I think this bit still needs work, if we still want it at all.\nThe problem is that it seems to believe that ProcessInterrupts is\nguaranteed not to return, which is far from the truth. Maybe it was\ntrue once, but we've grown a lot of accretions on it that will just\nclear InterruptPending and return. I see that the \"return false\"\nleads directly to an \"Assert(false)\", which seems unhelpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 16:23:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OOM in spgist insert" } ]
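The fix discussed in the thread above combines two ideas: test for pending interrupts inside spgdoinsert's descent loop, and error out instead of looping forever once the leaf tuple stops getting smaller for several consecutive cycles. The no-progress guard can be sketched as a toy model. This is illustrative Python, not PostgreSQL code: the names `insert_with_progress_check`, `shorten`, `fits` and `MAX_NO_PROGRESS_CYCLES` are invented here, and the real patch tracks leaf tuple sizes inside spgdoinsert rather than string lengths.

```python
# Illustrative model (not PostgreSQL code) of the guard discussed in this
# thread: an opclass-supplied shorten() is expected to make the leaf datum
# smaller as the insertion descends; if no size reduction is seen for too
# many consecutive cycles, fail cleanly instead of looping until OOM.

MAX_NO_PROGRESS_CYCLES = 10  # grace period for alignment/corner-case slop


def insert_with_progress_check(datum, shorten, fits):
    """Shorten `datum` until `fits` accepts it, or give up on no progress."""
    no_progress = 0
    last_size = len(datum)
    while not fits(datum):
        datum = shorten(datum)
        if len(datum) < last_size:
            last_size = len(datum)
            no_progress = 0  # shrinking again: reset the grace counter
        else:
            no_progress += 1
            if no_progress >= MAX_NO_PROGRESS_CYCLES:
                raise RuntimeError(
                    "leaf datum is not shrinking; giving up instead of looping")
    return datum


# A well-behaved "opclass" that strips 7 bytes per level succeeds:
shrunk = insert_with_progress_check(
    "x" * 100, shorten=lambda d: d[:-7], fits=lambda d: len(d) <= 8)

# A broken one that never shortens now errors out instead of looping:
try:
    insert_with_progress_check(
        "x" * 100, shorten=lambda d: d, fits=lambda d: len(d) <= 8)
    gave_up = False
except RuntimeError:
    gave_up = True
```

The grace period plays the same role as the patch's allowance for alignment round-off and opclass corner cases, where a single cycle can legitimately fail to shrink the stored tuple even though the descent is making progress.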
[ { "msg_contents": "Hi!\nI am trying to understand the behaviour of the query planner regarding the\npush-down of the conditions \"through\" the join.\n\nLets say that I have tables a(adate date, aval text) and b(bdate date, bval\ntext), and I create a view:\n\ncreate view v as\n select a.adate, a.aval, b.bval from a join b on (a.adate = b.bdate);\n\nNow, when I do (explain select * from v where adate='2021-05-12') I can see\nthat condition (= '2021-05-12') is used by the planned for table access to\nboth a and b.\n\nHowever, if I use range-like condition (this is probably not a correct\nterminology, but I am not familiar with the correct one) like BETWEEN or\n(>='2021-05-21'), I will see that planner will use this condition to access\na, but not b. It seems that the type of join (inner or left) does not\nreally matter.\n\nDB fiddle that illustrates this;\nhttps://www.db-fiddle.com/f/pT2PwUkhJWuX9skWiBWXoL/0\n\nIn my experiments, I was never able to get an execution plan that \"pushes\ndown\" any condition apart from (=) through to the right side of the join,\nwhich is rather surprising and leads to suboptimal planner estimates and\nexecution plans whenever view like the above is a part of a bigger query\nwith more joins on top.\n\nEqually surprising is that I was unable to find documentation or past\nmailing list discussions of this or similar topic, which leads me to\nbelieve that I am just not familiar with the proper terminology and can't\ncome up with the right search terms.\n\nCan you please tell me what is the proper way to describe this\nbehaviour/phenomenon (so that I can use it as search terms) and/or provide\nme with references to the parts of the source code that determines which\nconditions would be \"pushed down\" and which are not?\n\nPS As far as I can see, this behaviour is consistent between versions 9.5,\n10, 11, 12 and 13.\n\n-- \nD. 
Astapov", "msg_date": "Wed, 12 May 2021 11:41:20 +0100", "msg_from": "Dmitry Astapov <dastapov@gmail.com>", "msg_from_op": true, "msg_subject": "Condition pushdown: why (=) is pushed down into join, but BETWEEN or\n >= is not?" }, { "msg_contents": "Dmitry Astapov <dastapov@gmail.com> writes:\n> I am trying to understand the behaviour of the query planner regarding the\n> push-down of the conditions \"through\" the join.\n\nI think your mental model is wrong. What's actually happening here is\nthat the planner uses equivalence classes to deduce implied conditions.\nThat is, we have the join condition a.adate = b.bdate and then you've\nadded the where condition a.adate = '2021-05-12'. Transitivity implies\nthat b.bdate = '2021-05-12', so we deduce that condition and are able\nto apply it at the relation scan of b. Furthermore, having restricted\nboth a.adate and b.bdate to the same constant value at the scan level,\nwe no longer need to apply the join condition a.adate = b.bdate at all.\nThis is important not only to avoid the (probably minor) inefficiency\nof rechecking the join condition, but because if we believed that all\nthree conditions were independently applicable, we'd come out with a\nserious underestimate of the size of the join result.\n\n> In my experiments, I was never able to get an execution plan that \"pushes\n> down\" any condition apart from (=) through to the right side of the join,\n\nNone of the argument sketched above works for non-equality conditions.\nThere are some situations where you could probably figure out how to\nuse transitivity to deduce some implied condition, but cleaning things\nup so that you don't have redundant conditions fouling up the join\nsize estimates seems like a hard problem.\n\nAnother issue is that we could easily expend a lot of cycles on deductions\nthat lead nowhere, because once you try to open up the mechanism to\nconsider operators other than equality, there will be a lot of things that\nit looks at and then 
fails to do anything with. The equivalence class\nmechanism is tied into the same logic that considers merge and hash joins,\nso we are expending lots of cycles anytime we see an equality operator,\nand not so much for other operators.\n\n> Equally surprising is that I was unable to find documentation or past\n> mailing list discussions of this or similar topic, which leads me to\n> believe that I am just not familiar with the proper terminology and can't\n> come up with the right search terms.\n\nsrc/backend/optimizer/README has a discussion of equivalence classes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 11:54:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join,\n but BETWEEN or >= is not?" }, { "msg_contents": "On Wed, May 12, 2021 at 4:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dmitry Astapov <dastapov@gmail.com> writes:\n> > I am trying to understand the behaviour of the query planner regarding\n> the\n> > push-down of the conditions \"through\" the join.\n>\n> I think your mental model is wrong. What's actually happening here is\n> that the planner uses equivalence classes to deduce implied conditions.\n> That is, we have the join condition a.adate = b.bdate and then you've\n> added the where condition a.adate = '2021-05-12'. Transitivity implies\n> that b.bdate = '2021-05-12', so we deduce that condition and are able\n> to apply it at the relation scan of b. 
Furthermore, having restricted\n> both a.adate and b.bdate to the same constant value at the scan level,\n> we no longer need to apply the join condition a.adate = b.bdate at all.\n> This is important not only to avoid the (probably minor) inefficiency\n> of rechecking the join condition, but because if we believed that all\n> three conditions were independently applicable, we'd come out with a\n> serious underestimate of the size of the join result.\n>\n\nThank you very much, my mental model was indeed incorrect, and the above is\nvery helpful.\nAm I right in thinking that elimination the join condition is actually\nquite important part of the process?\nCould it possibly be the main reason for =ANY/(x IN (..)) not to be\noptimized the same way?\n\n\n>\n> > In my experiments, I was never able to get an execution plan that \"pushes\n> > down\" any condition apart from (=) through to the right side of the join,\n>\n> None of the argument sketched above works for non-equality conditions.\n> There are some situations where you could probably figure out how to\n> use transitivity to deduce some implied condition, but cleaning things\n> up so that you don't have redundant conditions fouling up the join\n> size estimates seems like a hard problem.\n>\n\nI agree about inequality conditions, this problem seems to be rather hard\nto tackle in the general case.\n\nIs it still hard when one thinks about =ANY or (column in (val1, val2,\nval3, ...)) as well?\nI am thinking that =ANY would be a decent workaround for (x BETWEEN a AND\nb) in quite a lot of cases, if it was propagated to all the columns in the\nequivalence class.\n\n\n\n> > Equally surprising is that I was unable to find documentation or past\n> > mailing list discussions of this or similar topic, which leads me to\n> > believe that I am just not familiar with the proper terminology and can't\n> > come up with the right search terms.\n>\n> src/backend/optimizer/README has a discussion of equivalence 
classes.\n>\nThank you, this gives me a plethora of keywords for further searches.\n\nI realize that it is possibly off-topic here, but what about workarounds\nfor inequality constraints, joins and views? Maybe you could give me some\npointers here as well?\n\nMy tables are large to huge (think OLAP, not OLTP). I found out when I have\na view that joins several (2 to 10) tables on the column that is\nsemantically the same in all of them (let's say it is ID and we join on\nID), I do not have many avenues to efficiently select from such view for a\nlist of IDs at the same time.\n\nI could:\n1) Do lots of fast queries and union them:\nselect * from vw where id=ID1 union all select * from vw where id=ID2\n....., which is only really feasible if the query is generated by the\nprogram\n\n2)expose all ID columns from all the tables used in the view body and do:\nselect * from vw where id=ANY() and id1=ANY() and id2=ANY() and id3=ANY()\n.....\nThis only works well if the view hierarchy is flat (no views on views). If\nthere are other views that use this use, re-exports of extra columns\nquickly snowballs, you might need column renaming if same view ends up\nbeing used more than once through two different dependency paths. Plus\npeople not familiar with the problem tend to omit \"clearly superfluous\"\ncolumns from the new views they build on top.\n\n3)forbid views that join tables larger than a certain size/dismantle views\nthat become inefficient (this only works if the problem is detected fast\nenough and the view did not become popular yet)\n\nSo all of the workarounds I see in front of me right now are somewhat sad,\nbut they are necessary, as not doing them means that queries would take\nhours or days instead of minutes.\n\nIs there anything better that I have not considered in terms of workarounds?\n\n\n-- \nD. 
Astapov", "msg_date": "Wed, 12 May 2021 18:56:00 +0100", "msg_from": "Dmitry Astapov <dastapov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" 
}, { "msg_contents": "Dmitry Astapov <dastapov@gmail.com> writes:\n> Am I right in thinking that elimination the join condition is actually\n> quite important part of the process?\n> Could it possibly be the main reason for =ANY/(x IN (..)) not to be\n> optimized the same way?\n\nYup.\n\n> Is it still hard when one thinks about =ANY or (column in (val1, val2,\n> val3, ...)) as well?\n\nYeah. For instance, if you have\n WHERE a = b AND a IN (1,2,3)\nthen yes, you could deduce \"b IN (1,2,3)\", but this would not give you\nlicense to drop the \"a = b\" condition. So now you have to figure out\nwhat the selectivity of that is after the application of the partially\nredundant IN clauses.\n\nI recall somebody (David Rowley, maybe? Too lazy to check archives.)\nworking on this idea awhile ago, but he didn't get to the point of\na committable patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 19:21:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join,\n but BETWEEN or >= is not?" }, { "msg_contents": "On Fri, 14 May 2021 at 11:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I recall somebody (David Rowley, maybe? Too lazy to check archives.)\n> working on this idea awhile ago, but he didn't get to the point of\n> a committable patch.\n\nYeah. Me. The discussion is in [1].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com\n\n\n", "msg_date": "Fri, 14 May 2021 16:21:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" 
}, { "msg_contents": ">\n>\n> So now you have to figure out\n> what the selectivity of that is after the application of the partially\n> redundant IN clauses.\n>\n\nWould marking the new added RestrictInfo.norm_selec > 1 be OK?\n\nclause_selectivity_ext\n\n /*\n * If the clause is marked redundant, always return 1.0.\n */\n if (rinfo->norm_selec > 1)\n return (Selectivity) 1.0;\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Mon, 17 May 2021 10:52:22 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Mon, 17 May 2021 at 14:52, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Would marking the new added RestrictInfo.norm_selec > 1 be OK?\n\nThere would be cases you'd want to not count the additional clauses in\nthe selectivity estimation and there would be cases you would want to.\n\nFor example:\n\nSELECT ... 
FROM t1 INNER JOIN t2 ON t1.dt = t2.dt WHERE t1.dt BETWEEN\n'date1' AND 'date2';\n\nIf you derived that t2.dt is also BETWEEN 'date1' AND 'date2' then\nyou'd most likely want to include those quals for scans feeding merge,\nhash and non-parameterized nested loop joins, so you'd also want to\ncount them in your selectivity estimations, else you'd feed junk\nvalues into the join selectivity estimations.\n\nParameterized nested loop joins might be different as if you were\nlooping up an index for t1.dt values on some index on t2.dt, then\nyou'd likely not want to bother also filtering out the between clause\nvalues too. They're redundant in that case.\n\nI imagined we'd have some functions in equivclass.c that allows you to\nchoose if you wanted the additional filters or not.\n\nTom's example, WHERE a = b AND a IN (1,2,3), if a and b were in the\nsame relation then you'd likely never want to include the additional\nquals. The only reason I could think that it would be a good idea is\nif \"b\" had an index but \"a\" didn't. I've not checked the code, but\nthe index matching code might already allow that to work anyway.\n\nDavid\n\n\n", "msg_date": "Thu, 20 May 2021 00:14:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Wed, May 19, 2021 at 8:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 17 May 2021 at 14:52, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Would marking the new added RestrictInfo.norm_selec > 1 be OK?\n>\n> There would be cases you'd want to not count the additional clauses in\n> the selectivity estimation and there would be cases you would want to.\n>\n> For example:\n>\n> SELECT ... 
FROM t1 INNER JOIN t2 ON t1.dt = t2.dt WHERE t1.dt BETWEEN\n> 'date1' AND 'date2';\n>\n> If you derived that t2.dt is also BETWEEN 'date1' AND 'date2' then\n> you'd most likely want to include those quals for scans feeding merge,\n> hash and non-parameterized nested loop joins, so you'd also want to\n> count them in your selectivity estimations, else you'd feed junk\n> values into the join selectivity estimations.\n>\n>\nYes, you are correct.\n\n\n> Parameterized nested loop joins might be different as if you were\n> looping up an index for t1.dt values on some index on t2.dt, then\n> you'd likely not want to bother also filtering out the between clause\n> values too. They're redundant in that case.\n>\n>\nI do not truly understand this.\n\n\n> I imagined we'd have some functions in equivclass.c that allows you to\n> choose if you wanted the additional filters or not.\n>\n\nSounds like a good idea.\n\n\n>\n> Tom's example, WHERE a = b AND a IN (1,2,3), if a and b were in the\n> same relation then you'd likely never want to include the additional\n> quals. The only reason I could think that it would be a good idea is\n> if \"b\" had an index but \"a\" didn't. I've not checked the code, but\n> the index matching code might already allow that to work anyway.\n>\n>\n+1 for this feature overall.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Thu, 20 May 2021 13:21:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Fri, May 14, 2021 at 12:22 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 14 May 2021 at 11:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I recall somebody (David Rowley, maybe? Too lazy to check archives.)\n> > working on this idea awhile ago, but he didn't get to the point of\n> > a committable patch.\n>\n> Yeah. Me. 
The discussion is in [1].\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com\n\n\nHi:\n\nI read through that thread and summarized the current pending issues as\nbelow, IIUC.\na) The most challenging issue is that this push down misleads the planner's\nrow estimation, which can be worse than the lack of such push down.\nb) The newly generated quals may increase the qual execution cost.\nc) Planning time is also increased, but we may not gain much in return.\nI just tried to address these issues as below,\nbased on the patch David finished a long time ago.\n\nTo address the row estimation issue, the most straightforward fix\nis to ignore the derived clauses when figuring out the RelOptInfo->rows\non a base relation.\nTo note which clause is derived by this patch, I added a new field\n\"EquivalenceClass *derived\" in RestrictInfo, and then added an\nincluded_derived option in clauselist_selectivity_ext;\nin the set_xxx_rel_size functions, we can pass\nincluded_derived=false. This strategy\nshould also be used in get_parameterized_baserel_size. In all the other cases,\ninclude_derived=true is used. This is finished in commit 2. 
(Commit 1 is Daivd's patch, I\njust rebased it)\n\nset enable_hashjoin to off;\nset enable_mergejoin to off;\nset enable_seqscan to on;\nregression=# explain analyze select * from tenk1 a, tenk1 b where\na.thousand = b.thousand and a.thousand < 100;\n\n QUERY\nPLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=27.14..1090.67 rows=10740 width=488) (actual\ntime=0.404..15.006 rows=10000 loops=1)\n -> Bitmap Heap Scan on tenk1 b (cost=26.84..385.26 rows=10000\nwidth=244) (actual time=0.350..1.419 rows=1000 loops=1)\n Recheck Cond: (thousand < 100)\\\n Heap Blocks: exact=324\n -> Bitmap Index Scan on tenk1_thous_tenthous (cost=0.00..24.34\nrows=1074 width=0) (actual time=0.238..0.240 rows=1000 loops=1)\n Index Cond: (thousand < 100)\n -> Memoize (cost=0.30..0.47 rows=1 width=244) (actual\ntime=0.002..0.006 rows=10 loops=1000)\n Cache Key: b.thousand\n Cache Mode: logical\n Hits: 900 Misses: 100 Evictions: 0 Overflows: 0 Memory Usage:\n277kB\n -> Index Scan using tenk1_thous_tenthous on tenk1 a\n (cost=0.29..0.46 rows=1 width=244) (actual time=0.010..0.032 rows=10\nloops=100)\n Index Cond: ((thousand = b.thousand) AND (thousand < 100))\n Planning Time: 2.459 ms\n Execution Time: 15.964 ms\n(14 rows)\n\nAs shown above, with commit 2 the JoinRel's rows estimation is correct\nnow. but it will mislead\nthe DBA to read the plan. See Bitmap Heap Scan on tenk1 b\n (...rows=10000..) (... rows=1000 loops=1)\nThis is because RelOptInfo->rows is not just used to calculate the\njoinrel.rows but also be used to\nshow the set Path.rows at many places. I can't think of a better way than\nadding a new filtered_rows\nin RelOptInfo which the semantics is used for Path.rows purpose only. 
That\nis what commit 3 does.\n\nAfter commit 3, we can see:\n\nregression=# explain analyze select * from tenk1 a, tenk1 b where\na.thousand = b.thousand and a.thousand < 100;\n\n QUERY\nPLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------\n\n Nested Loop (cost=24.90..459.16 rows=10740 width=488) (actual\ntime=0.440..16.966 rows=10000 loops=1)\n -> Bitmap Heap Scan on tenk1 b (cost=24.61..383.03 rows=1074\nwidth=244) (actual time=0.383..1.546 rows=1000 loops=1)\n Recheck Cond: (thousand < 100)\n Heap Blocks: exact=324\n -> Bitmap Index Scan on tenk1_thous_tenthous (cost=0.00..24.34\nrows=1074 width=0) (actual time=0.270..0.272 rows=1000 loops=1)\n Index Cond: (thousand < 100)\n -> Memoize (cost=0.30..0.47 rows=1 width=244) (actual\ntime=0.002..0.008 rows=10 loops=1000)\n Cache Key: b.thousand\n Cache Mode: logical\n Hits: 900 Misses: 100 Evictions: 0 Overflows: 0 Memory Usage:\n277kB\n -> Index Scan using tenk1_thous_tenthous on tenk1 a\n (cost=0.29..0.46 rows=1 width=244) (actual time=0.012..0.050 rows=10\nloops=100)\n Index Cond: ((thousand = b.thousand) AND (thousand < 100))\n Planning Time: 2.578 ms\n Execution Time: 17.929 ms\n(14 rows)\n\n\n\"Bitmap Heap Scan on tenk1 b (... rows=1074 ..) (.. rows=1000 loops=1)\"\nshows the issue fixed. but\nThere is something wrong below.\n\nIndex Scan using tenk1_thous_tenthous on tenk1 a (cost=0.29..0.46 rows=1\nwidth=244) (actual time=0.012..0.050 rows=10 loops=100)\nIndex Cond: ((thousand = b.thousand) AND (thousand < 100))\n\nHere the \" (thousand < 100)\" is from the user, not from this patch. and\n(thousand = b.thousand) AND (thousand < 100)\nhas some correlation. I can't think of a solution for this. 
and fixing\nthis issue is beyond the scope of this patch.\n\nSo at this stage, I think the row estimation issue is gone.\n\nAs the new generated equals increase the execution cost opinion, I think\nit is hard for planners to distinguish which quals deserves adding or not.\nInstead\nI just removed the quals execution during create_plan stage to remove the\nobviously\nduplicated qual executions. I only handled the case that the derived quals\nis executed\nat the same time with the restrinctInfo who's parent_ec is used to generate\nthe\nderived quals. If I understand the RestrictInfo.parent_ec correctly, The\ncost of\nfinding out the correlated quals in this patch are pretty low, see\nis_correlated_derived_clause.\nThis is what commit 4 does. After we apply it, we can see the last demo\nabove becomes to:\n\nregression=# explain analyze select * from tenk1 a join d_tenk2 b on\na.thousand = b.thousand and a.thousand < 100;\n\n QUERY\nPLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------\n\n Nested Loop (cost=10000000000.30..10000002799.78 rows=20020 width=488)\n(actual time=0.051..26.080 rows=20000 loops=1)\n -> Seq Scan on tenk1 a (cost=10000000000.00..10000000470.00 rows=1001\nwidth=244) (actual time=0.018..3.902 rows=1000 loops=1)\n Filter: (thousand < 100)\n Rows Removed by Filter: 9000\n -> Memoize (cost=0.30..3.18 rows=20 width=244) (actual\ntime=0.002..0.008 rows=20 loops=1000)\n Cache Key: a.thousand\n Cache Mode: logical\n Hits: 900 Misses: 100 Evictions: 0 Overflows: 0 Memory Usage:\n546kB\n -> Index Scan using d_tenk2_thousand_idx on d_tenk2 b\n (cost=0.29..3.17 rows=20 width=244) (actual time=0.008..0.037 rows=20\nloops=100)\n Index Cond: (thousand = a.thousand)\n Planning Time: 0.596 ms\n Execution Time: 27.502 ms\n(12 rows)\n\nThe \"thousand < 100\" for b is removed during execution.\n\nCommit 5 reduced the requirements for this path to work. 
Now it\nsupports ScalarArrayOpExpr\nand any perudoconstant filter to support the user case I meet. Commit 6\nadded some testcase\nand they are just used for review since there are two many runtime\nstatistics in the output and\nI can't think of way to fix it.\n\nI also study David's commit 1, and the semantics of ec_filters is so\naccurate and I'm very\nexcited to see it.\n\nThis patch series is still in the PoC stage, so something is not handled at\nall. For commit 2, I didn't\nhandle extended statistics related paths and I just handled plain rel\n(subquery, forign table and so\non are missed). I think it is OK for a PoC.\n\nAt last, I will share some performance testing for this patch. This is the\nreal user case I met.\n\ncreate table p (a int, b int) partition by range(a);\nselect 'create table p_' || i || ' partition of p for values from (' ||\n(i-1) * 100000 || ') to (' || i * 100000 || ');' from generate_series(1,\n50)i; \\gexec\ninsert into p select i, i from generate_series(1, 50 * 100000 -1) i;\ncreate index on p(a);\n\ncreate table q (a int, b int) partition by range(a);\nselect 'create table q_' || i || ' partition of q for values from (' ||\n(i-1) * 100000 || ') to (' || i * 100000 || ');' from generate_series(1,\n50)i; \\gexec\ninsert into q select * from p;\ncreate index on q(a);\n\nselect * from p, q where p.a = q.a and p.a in (3, 200000);\n\nRun the above query in both prepared and no prepared case, I get the\nfollowing results:\n\n| workload | with this feature | w/o this feature |\n|--------------+-------------------+------------------|\n| Prepared | 0.25 ms | 0.8 ms |\n| Non Prepared | 0.890 ms | 4.207 ms |\n\n\nAny thoughts?\n\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 1 Feb 2022 23:07:41 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" 
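The heart of the patch set (attach a filter to an equivalence class, then distribute it to every other member) can be modelled outside the planner. The sketch below is a toy: the real code works on EquivalenceClass/RestrictInfo nodes and has to worry about operator families, volatility and the costing questions discussed above, none of which appear here.

```python
def derive_ec_filters(ec_members, filters):
    # ec_members: columns known to be equal, e.g. ["t1.a", "t2.a"]
    # filters: (column, predicate) pairs, e.g. [("t1.a", "< 100")]
    # Returns the derived (column, predicate) pairs for the other members.
    derived = []
    for col, pred in filters:
        if col in ec_members:
            derived.extend((other, pred)
                           for other in ec_members if other != col)
    return derived
```

With ec_members = ["p.a", "q.a"] and a filter ("p.a", "in (3, 200000)"), this yields the ("q.a", "in (3, 200000)") qual that makes the partitioned-table example above eligible for initial partition pruning on both sides.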
}, { "msg_contents": "> Subject: [PATCH v1 1/6] Rebaee David's patch against the latest code.\n\nIf you use git-am, then the author/commit information is preserved.\nIt's probably good to include a link to the patch in any case.\n\n> Subject: [PATCH v1 4/6] remove duplicated qual executing.\n \n--- \n src/backend/optimizer/path/equivclass.c | 22 +++++++++++++++++++ \n src/backend/optimizer/plan/createplan.c | 29 +++++++++++++++++++++++-- \n src/include/optimizer/paths.h | 2 ++ \n src/test/regress/parallel_schedule | 2 ++ \n 4 files changed, 53 insertions(+), 2 deletions(-) \n\nI think the ./ec_filter test is missing from from this patch.\n\n> Subject: [PATCH v1 6/6] adding some test cases for this feature and fix the existing case \nThe tests should be updated with the corresponding patches. It's common for\nthe patches to be commited separately, like if 0001 is ready but the others are\nstill evolving.\n\nI'm not sure whether you think this patch is ready to be added to a commitfest,\nbut do you know about the CI infrastructure ? It allows running all the cfbot\ntests for a github branch against 4 OS, which helps catch portability issues,\nincluding memory errors and unstable explain output. See: src/tools/ci/README.\nThere's an failure in postgres_fdw, probably the output needs to be updated.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 1 Feb 2022 11:13:29 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but\n BETWEEN or >= is not?" }, { "msg_contents": "Hi Justin:\n\n Thanks for your attention.\n\nOn Wed, Feb 2, 2022 at 1:13 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> > Subject: [PATCH v1 1/6] Rebaee David's patch against the latest code.\n>\n> If you use git-am, then the author/commit information is preserved.\n> It's probably good to include a link to the patch in any case.\n>\n>\nThanks for this reminder, I didn't pay enough attention to this. 
Fixed.\n\n(The original patch looks like a diff file not a commit, I wrote a simple\ncommit\nmessage for this and link to the origin discussion link.)\n\n\n\n> > Subject: [PATCH v1 4/6] remove duplicated qual executing.\n>\n> ---\n>\n>\n> src/backend/optimizer/path/equivclass.c | 22 +++++++++++++++++++\n>\n>\n> src/backend/optimizer/plan/createplan.c | 29 +++++++++++++++++++++++--\n>\n>\n> src/include/optimizer/paths.h | 2 ++\n>\n>\n> src/test/regress/parallel_schedule | 2 ++\n>\n>\n> 4 files changed, 53 insertions(+), 2 deletions(-)\n>\n>\n>\n> I think the ./ec_filter test is missing from from this patch.\n>\n>\nIndeed..\n\n\n> > Subject: [PATCH v1 6/6] adding some test cases for this feature and fix\n> the existing case\n>\n>\n> The tests should be updated with the corresponding patches. It's common\n> for\n> the patches to be commited separately, like if 0001 is ready but the\n> others are\n> still evolving.\n>\n\nYes, I agree with this. Just that in this case, the commit split is just\nfor easy\nreview/discussion. they are unlikely to be able to commit separately. so I\nkeep\nit as it was and improve each commit message.\n\n\n>\n> I'm not sure whether you think this patch is ready to be added to a\n> commitfest,\n> but do you know about the CI infrastructure ? It allows running all the\n> cfbot\n> tests for a github branch against 4 OS, which helps catch portability\n> issues,\n> including memory errors and unstable explain output. See:\n> src/tools/ci/README.\n>\n\nAdded. https://commitfest.postgresql.org/37/3524/\n\n\n> There's an failure in postgres_fdw, probably the output needs to be\n> updated.\n>\n\nFor the postgres_fdw, I just refreshed the content. 
with this patch, the\nplan changed\nfrom\n\n Foreign Scan\n Output: ft5.*, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2\n Relations: (public.ft5) INNER JOIN (public.ft4)\n Remote SQL: SELECT CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1.c1,\nr1.c2, r1.c3) END, r1.c1, r1.c2, r1.c3, r2.c1, r2.c2 FROM (\"S 1\".\"T 4\" r1\nINNER JOIN \"S 1\".\"T 3\" r2 ON (((r1.c1 = r2.c1)) AND ((r2.c1 >= 10)) AND\n((r2.c1 <= 30)))) ORDER BY r1.c1 ASC NULLS LAST\n(4 rows)\n\nto\n\n Sort (cost=108.02..108.04 rows=7 width=62)\n Output: ft5.*, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2\n Sort Key: ft5.c1\n -> Foreign Scan (cost=100.00..107.92 rows=7 width=62)\n Output: ft5.*, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2\n Relations: (public.ft5) INNER JOIN (public.ft4)\n Remote SQL: SELECT CASE WHEN (r1.*)::text IS NOT NULL THEN\nROW(r1.c1, r1.c2, r1.c3) END, r1.c1, r1.c2, r1.c3, r2.c1, r2.c2 FROM (\"S\n1\".\"T 4\" r1 INNER JOIN \"S 1\".\"T 3\" r2 ON (((r1.c1 = r2.c1)) AND ((r2.c1 >=\n10)) AND ((r2.c1 <= 30)) AND ((r1.c1 >= 10)) AND ((r1.\nc1 <= 30))))\n\nBut if I set enable_sort = off, we can still get the previous plan, which\nproves that\nthis patch doesn't make the above path unavailable, it is just not cheaper\nthan\nthe new one. Here is the new commit messages:\n\ncommit e0a7838a09e73f831eecb23b5e7884cc34d71301\nAuthor: David Rowley <dgrowleyml@gmail.com>\nDate: Tue Feb 1 20:56:40 2022 +0800\n\n Introudce ec_filters in EquivalenceClass struct, the semantics is the\nquals can\n\n be applied to any EquivalenceMember in this EC. 
Later this information\nis used\n to generate new RestrictInfo and was distributed to related RelOptInfo\nvery\n soon.\n\n Author: David Rowley at 2015-12 [1]\n Andy Fan rebase this patch to current latest code.\n\n\nhttps://www.postgresql.org/message-id/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com\n\ncommit 73f52d0909374446cd689457f0a4ef52addb035e\nAuthor: Andy Fan <yizhi.fzh@alibaba-inc.com>\nDate: Tue Feb 1 14:54:07 2022 +0800\n\n After distributing the new derived RestrictInfo into RelOptInfo, then\nthe rows\n estimation is wrong at the joinrel part. The reason is well described\nat [1] and\n [2], To fix this issue, I added a new field \"EquivalenceClass\n*derived\" in\n RestrictInfo struct to indicate how this qual is generated. we would\nignore such\n qual during estimate of the rows size. All the set_xx_size should be\ntaken care of, but\n for now, just set_plain_rel_size is taken care of for the PoC purpose.\n\n [1]\n\nhttps://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com\n [2]\n\nhttps://www.postgresql.org/message-id/flat/1727507.1620948117%40sss.pgh.pa.us#52ac3f46cf614acb0bdbddb7128f5bd2\n\ncommit 8439b4818410d860a4ca4be3458b54c04c6f8648\nAuthor: Andy Fan <yizhi.fzh@alibaba-inc.com>\nDate: Tue Feb 1 15:20:10 2022 +0800\n\n Introduce RelOptInfo.filtered_rows.\n\n Previously the Path.rows (shown in the explain output) and\nRelOptInfo.rows\n which would be used to calculating joinrel's estimated rows are same\n at many scan paths, like SeqScan, IndexScan, BitmapHeapScan and so on.\nBut\n they would be different after distributing a new restrictinfo from\nec_filter.\n So I developed RelOptInfo.filtered_rows to take some duty out of\nRelOptInfo.rows.\n\ncommit 11b3395bb5bcc4a2bcff6fed8078dbbf3cda81b1\nAuthor: Andy Fan <yizhi.fzh@alibaba-inc.com>\nDate: Tue Feb 1 17:37:27 2022 +0800\n\n Remove duplicated qual executing for executor.\n\n Take the SELECT * FROM t1, t2 WHERE t1.a = 
t2.a and t2.a > 3 for\nexample,\n we can derive t1.a > 3 with EC filter infrastructure. However if it\ngenerate a\n plan like below, the new generated qual does not deserve to execute.\n\n Nest Loop\n Seq Scan (t1.a > 3)\n Index Scan t2_a\n (a = t1.a) (t2.a > 3)\n\n This patch removes the \"t2.a > 3\" for the above case.\n\ncommit 2875a76136293589b6e409cb6be4defab87ade59\nAuthor: Andy Fan <yizhi.fzh@alibaba-inc.com>\nDate: Wed Feb 2 11:54:24 2022 +0800\n\n Support ScalarArrayOpExpr and perudoconstant on ef_filter.\n\ncommit a4b21ab6fd0fd57902f5471ec962a77b59085158 (HEAD -> cf_v4)\nAuthor: Andy Fan <yizhi.fzh@alibaba-inc.com>\nDate: Wed Feb 2 11:59:53 2022 +0800\n\n Added the testcase for this feature and fix the previous test case\n\n as well. The new added test case needs outputting some runtime\n statistics, which will probably be different at each run. I can think\n of a way to make the test case stable if the patchsets are not wrong\n at the first step.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 2 Feb 2022 15:37:13 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "Hi,\n\nthere's been an interesting case [1] of a slow query on pgsql-general, \nrelated to the topic discussed in this thread. 
It causes the \nquery to run slower by multiple orders of magnitude, and I think it's \ninteresting, so let me present a simple example demonstrating it.\n\n------------------------------------------------------------------------\ncreate table t1 (a int);\ncreate table t2 (a int);\n\ninsert into t1\nselect i from generate_series(1,100000) s(i);\n\ninsert into t2\nselect mod(i,100000) from generate_series(1,10000000) s(i);\n\ncreate index on t1(a);\ncreate index on t2(a);\n\nvacuum analyze t1, t2;\n\n-- we need to force mergejoin\nset enable_nestloop = off;\n------------------------------------------------------------------------\n\nNow, let's run a simple query:\n\nSELECT t1.a, t2.a FROM t1 JOIN t2 USING (a)\n WHERE (t1.a > 99000) ORDER BY t1.a LIMIT 100;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Limit (cost=4.63..224.57 rows=100 width=8)\n (actual time=8999.487..8999.707 rows=100 loops=1)\n -> Merge Join (cost=4.63..209447.97 rows=95226 width=8)\n (actual time=8999.485..8999.620 rows=100 loops=1)\n Merge Cond: (t1.a = t2.a)\n -> Index Only Scan using t1_a_idx on t1\n (cost=0.29..29.25 rows=969 width=4)\n (actual time=0.010..0.011 rows=1 loops=1)\n Index Cond: (a > 99000)\n Heap Fetches: 0\n -> Index Only Scan using t2_a_idx on t2\n (cost=0.43..183464.09 rows=9999977 width=4)\n (actual time=0.026..4594.757 rows=9900200 loops=1)\n Heap Fetches: 0\n Planning Time: 0.338 ms\n Execution Time: 8999.768 ms\n(10 rows)\n\n\nNow, let's do a simple trick and add a condition on t2.a, \"implied\" by the \njoin condition (t1.a = t2.a) and inequality (t1.a > 99000).\n\n\nSELECT t1.a, t2.a FROM t1 JOIN t2 USING (a)\n WHERE (t1.a > 99000) AND (t2.a > 99000) ORDER BY t1.a LIMIT 100;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Limit (cost=0.77..250.39 rows=100 width=8)\n (actual time=0.040..0.294 rows=100 loops=1)\n -> Merge Join (cost=0.77..2297.23 rows=920 width=8)\n (actual 
time=0.039..0.172 rows=100 loops=1)\n Merge Cond: (t1.a = t2.a)\n -> Index Only Scan using t1_a_idx on t1\n (cost=0.29..29.25 rows=969 width=4)\n (actual time=0.031..0.031 rows=1 loops=1)\n Index Cond: (a > 99000)\n Heap Fetches: 0\n -> Index Only Scan using t2_a_idx on t2\n (cost=0.43..2014.87 rows=96596 width=4)\n (actual time=0.005..0.052 rows=100 loops=1)\n Index Cond: (a > 99000)\n Heap Fetches: 0\n Planning Time: 0.222 ms\n Execution Time: 0.414 ms\n(11 rows)\n\nWell, that's quite a difference! From 9000ms to 1ms, pretty good.\n\nWhat is happening in the first plan is the merge join needs t2 sorted by \nt2.a, and the index-only-scan looks like a great way to do that, as it \nhas low startup cost (because LIMIT likes that). But this completely \nmisses that (t1.a > 9900) implies (t2.a > 9900) through the equality in \njoin condition. So we start scanning t2_a_idx, only to throw the first \n99% of tuples away.\n\nIn the original report this is particularly egregious, because the index \nonly scan looks like this:\n\n -> Index Only Scan using data_class_pkey on data_class ta\n (cost=0.57..4935483.78 rows=216964862 width=8)\n (actual time=0.018..35022.908 rows=151321889 loops=1)\n Heap Fetches: 151321889\n\nSo yeah, 151M heap fetches, that's bound to be expensive :-/\n\nAdding the condition on t2.a allows just skipping the first chunk of the \nindex, eliminating the expensive part.\n\nOf course, this breaks the estimates in the faster query, because we now \napply the condition twice - once for the index scan, one as the join \nclause. So instead of ~100k rows the join is estimated as ~1000 rows.\n\nI'm also not claiming this is 100% worth it - queries with a suitable \ncombination of clauses (conditions on the join keys) seems rather \nuncommon. 
But it seems like an interesting example, because it may be \nseen either as missed execution optimization (failing to skip the \ninitial chunk of rows), or an costing issue due to not accounting for \nhaving to process the rows (which would likely result in picking a \ndifferent plan).\n\n\n\nregards\n\n[1] \nhttps://www.postgresql.org/message-id/CA%2B1Wm9U_sP9237f7OH7O%3D-UTab71DWOO4Qc-vnC78DfsJQBCwQ%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 5 Feb 2022 14:32:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "Hi,\n\nOn Sat, Feb 5, 2022 at 9:32 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n> I'm also not claiming this is 100% worth it - queries with a suitable\n> combination of clauses (conditions on the join keys) seems rather\n> uncommon.\n\n\nThanks for showing interest in this. I want to add some other user cases\nwhich seem not very uncommon. a). When we join the key on a foregin\ntable, in which case, push down a qual to foregin key would be pretty\ngood to reduce the data transformed from the network. b). If the people\njoin many partitioned table on partitioned key, but they want to query\nmore than 1 partitions (which means the qual on partition key is not a\nsimple \"partitionKey = Const\"), then we have to do a run-time partition\nprune (lose the chance for initial partition prune). We have big difference\non the performance aspect as well.\n\nI guess some of the people who think we may need this feature are not very\nclear about what bad it would be if we add this feature (Of course Including\nme). I summarized the discussion before and hacked the solution at [1],\nthe\ncurrent state looks reasonable to me. 
I'm not sure if I missed any point.\n\n> Of course, this breaks the estimates in the faster query, because we now\n> apply the condition twice - once for the index scan, one as the join\n> clause. So instead of ~100k rows the join is estimated as ~1000 rows.\n\nI think my patch has addressed this. Here is the example:\n\npostgres=# set geqo to off; -- disable this feature, we have an estimation\nerror.\n -- using geqo guc in patch is\njust for easy testing.\nSET\npostgres=# explain analyze SELECT t1.a, t2.a FROM t1 JOIN t2 USING (a)\n WHERE (t1.a > 99000) and t2.a > 99000;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.73..2408.37 rows=990 width=8)\n (actual time=0.032..21.350 rows=99900 loops=1)\n Merge Cond: (t1.a = t2.a)\n -> Index Only Scan using t1_a_idx on t1 (cost=0.29..29.64 rows=991\nwidth=4)\n (actual time=0.014..0.121\nrows=1000 loops=1)\n Index Cond: (a > 99000)\n Heap Fetches: 0\n -> Index Only Scan using t2_a_idx on t2 (cost=0.43..2113.20\nrows=101301 width=4)\n (actual time=0.013..9.854\nrows=99900 loops=1)\n Index Cond: (a > 99000)\n Heap Fetches: 0\n Planning Time: 0.282 ms\n Execution Time: 24.823 ms\n(10 rows)\n\n\npostgres=# set geqo to on; -- enable this feature and let planner derive\nthe qual by itself, the estimation\n -- is good.\nSET\npostgres=# explain analyze SELECT t1.a, t2.a FROM t1 JOIN t2 USING (a)\n WHERE (t1.a > 99000) ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.73..2408.37 rows=97680 width=8)\n (actual time=0.031..21.296 rows=99900 loops=1)\n Merge Cond: (t1.a = t2.a)\n -> Index Only Scan using t1_a_idx on t1 (cost=0.29..29.64 rows=991\nwidth=4)\n (actual time=0.014..0.116\nrows=1000 loops=1)\n Index Cond: (a > 99000)\n Heap Fetches: 0\n -> Index Only Scan using t2_a_idx on t2 
(cost=0.43..2113.20\nrows=101301 width=4)\n                                              (actual time=0.012..9.751\nrows=99900 loops=1)\n         Index Cond: (a > 99000)\n         Heap Fetches: 0\n Planning Time: 0.269 ms\n Execution Time: 24.749 ms\n(10 rows)\n\n\nSo I think knowing what bad it is to have this feature is the key point to\ndiscussion now.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWpo9z0hMHDWUKuce4Z-NpcybV0J2UVu5%2BDVwyP-CrHCQg%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 6 Feb 2022 10:23:28 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": ">\n>\n>\n> > Of course, this breaks the estimates in the faster query, because we now\n> > apply the condition twice - once for the index scan, one as the join\n> > clause. So instead of ~100k rows the join is estimated as ~1000 rows.\n>\n> I think my patch has addressed this. 
Here is the example: ...\n>\n\n>\nSo I think knowing what bad it is to have this feature is the key point to\n> discussion now.\n>\n> [1]\n> https://www.postgresql.org/message-id/CAKU4AWpo9z0hMHDWUKuce4Z-NpcybV0J2UVu5%2BDVwyP-CrHCQg%40mail.gmail.com\n>\n>\n>\nI forgot to upload these patches, upload them now.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 11 Feb 2022 13:32:20 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": ">\n>>\n> So I think knowing what bad it is to have this feature is the key point to\n>> discussion now.\n>>\n>>\nI re-read the discussion at 2015 [1] and the below topic is added for the\nabove\nquestion. Here is the summary for easy discussion.\n\n====\n From planner aspect:\n\n> While I've only read your description of the patch not the patch itself,\n> the search methodology you propose seems pretty brute-force and\n> unlikely to solve that issue. It's particularly important to avoid\nO(N^2)\n> behaviors when there are N expressions ...\n\nThe patch has 3 steps in general. 1). Gather the filter_qual_list during\nthe deconstruct_jointree. only unmergeable qual is gathered here.\n2). After the root->eq_classes is built, scan each of the above quals to\nfind out if there is a EC match, if yes, add it to the EC. There are\nsome fast paths here. like ec->relids, em->em_relids. 3). compose\nthe qual in ec_filter and members in ec_members, then distribute it to\nthe relations. This step take the most cycles of this feature, and it is\nthe most important part for this feature as well.\n\nFortunately, thousands of partitions of a table would not make it worse\nsince they are not generated at that stage. 
So I'd believe the number of\nECs or EMs in an EC would be pretty small in common cases.\n\n> time would be spent on searches for matching subexpressions whether\n> or not anything was learned (and often nothing would be learned).\n\nThis is about some cases like \"SELECT * FROM t1, t2 WHERE t1.a = t2.a\nand t1.b > 3\". In this case, we still need to go through steps 1 & 2,\nall the fast\npaths don't work and the equal() is unavoidable. However step 3 can be\nignored.\nIf we want to improve this, could we maintain an attr_eq_indexes in\nRelOptInfos\nwhich indicates if the given attribute appears in any one of EC members?\n\n=====\n From executor aspects:\n\n> The reason why the answer isn't automatically \"all of them\"\n> is because, first of all, it's possible that enforcing the condition\n> at a particular table costs more to filter out the rows that we save\n> in execution time at higher levels of the plan tree. For example,\n> consider A JOIN B ON A.X = B.X WHERE A.X > 1000000. It might be that\n> the range of A.X is [0,1000001] but the range of B.X is\n> [1000000,2000000]; so enforcing the inequality against A is very\n> selective but enforcing it against B filters out basically nothing.\n\nI think we can classify this as we push down / execute an qual, the\nqual takes lots of cycles, but it doesn't filter many rows.\n\n> A first cut might be to enforce the inequality against the relation\n> where it's believed to be most selective, equivalence-class column\n> mentioned in the inequality provided that the\n> selectivity is thought to be above some threshold ... 
but I'm not sure\n> this is very principled,\n\nI can only input +1 after some deep thoughts.\n\n>> Furthermore, there are some cases involving parameterized paths where\n>> enforcing the inequality multiple times is definitely bad: for\n>> example, if we've got a nested loop where the outer side is a seq scan\n>> that enforces the condition and the inner side is an index probe, it\n>> is just a waste to retest it on the inner side. We already know that\n>> the outer row passes the inequality, so the inner row will necessarily\n>> pass also. This doesn't apply to merge or hash joins, and it also\n>> doesn't apply to all nested loops: scans that aren't paramaterized by\n>> the equivalence-class column can still benefit from separate\n>> enforcement of the inequality.\n>>\n> I guess that could be fixed by somehow marking these pushed quals as\n> optional and having parameterised scans ignore optional quals.\n\nThis has been done by committing 4.\n\n> Now, all that having been said, I think this is a pretty important\n> optimization. Lots of people have asked for it, and I think it would\n> be worth expending some brainpower to try to figure out a way to be\n> smarter than we are now, which is, in a nutshell, as dumb as possible.\n\n+1. I asked custom to add the derivable quals manually for 10+ of table\neach query last year and gained great results.\n\nAnyone still have interest in this? Or is a better solution really\npossible?\nOr is the current method too bad to rescue?\n\n-- \nBest Regards\nAndy Fan\n\n So I think knowing what bad it is to have this feature is the key point to discussion now. I re-read the discussion at 2015 [1] and the below topic is added for the abovequestion.   Here is the summary for easy discussion. ====From planner aspect: > While I've only read your description of the patch not the patch itself, > the search methodology you propose seems pretty brute-force and > unlikely to solve that issue.  
It's particularly important to avoid O(N^2) > behaviors when there are N expressions ...The patch has 3 steps in general.  1). Gather the filter_qual_list duringthe deconstruct_jointree. only unmergeable qual is gathered here. 2).  After the root->eq_classes is built, scan each of the above quals to find out if there is a EC match,  if yes, add it to the EC.  There are some fast paths here. like ec->relids,  em->em_relids.  3).  compose the qual in ec_filter and members in ec_members, then distribute it tothe relations.  This step take the most cycles of this feature,   and it is the most important part for this feature as well.Fortunately,  thousands of partitions of a table would not make it worsesince they are not generated at that stage.  So I'd believe the number ofECs or EMs in an EC would be pretty small in common cases.  > time would be spent on searches for matching subexpressions whether > or not anything was learned (and often nothing would be learned).  This is about some cases like \"SELECT * FROM t1, t2 WHERE t1.a = t2.aand t1.b > 3\".   In this case,  we still need to go through steps 1 & 2,  all the fast paths don't work and the equal() is unavoidable.  However step 3 can be ignored.  If we want to improve this,  could we maintain an attr_eq_indexes in RelOptInfos which indicates if the given attribute appears in any one of EC members?  =====From executor aspects:> The reason why the answer isn't automatically \"all of them\"> is because, first of all, it's possible that enforcing the condition> at a particular table costs more to filter out the rows that we save> in execution time at higher levels of the plan tree.  For example,> consider A JOIN B ON A.X = B.X WHERE A.X > 1000000.  
It might be that> the range of A.X is [0,1000001] but the range of B.X is> [1000000,2000000]; so enforcing the inequality against A is very> selective but enforcing it against B filters out basically nothing.I think we can classify this as we push down / execute an qual, thequal takes lots of cycles, but it doesn't filter many rows.  > A first cut might be to enforce the inequality against the relation> where it's believed to be most selective, equivalence-class column > mentioned in the inequality provided that the> selectivity is thought to be above some threshold ... but I'm not sure> this is very principled,I can only input +1 after some deep thoughts. >> Furthermore, there are some cases involving parameterized paths where>> enforcing the inequality multiple times is definitely bad: for>> example, if we've got a nested loop where the outer side is a seq scan>> that enforces the condition and the inner side is an index probe, it>> is just a waste to retest it on the inner side.  We already know that>> the outer row passes the inequality, so the inner row will necessarily>> pass also.  This doesn't apply to merge or hash joins, and it also>> doesn't apply to all nested loops: scans that aren't paramaterized by>> the equivalence-class column can still benefit from separate>> enforcement of the inequality.>>> I guess that could be fixed by somehow marking these pushed quals as> optional and having parameterised scans ignore optional quals.This has been done by committing 4. > Now, all that having been said, I think this is a pretty important> optimization.  Lots of people have asked for it, and I think it would> be worth expending some brainpower to try to figure out a way to be> smarter than we are now, which is, in a nutshell, as dumb as possible.+1.  I asked custom to add the derivable quals manually for 10+ of tableeach query last year and gained great results.   Anyone still have interest in this?  Or is a better solution really possible?  
Or is the current method  too bad to rescue? -- Best RegardsAndy Fan", "msg_date": "Fri, 11 Feb 2022 18:37:17 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "So I think knowing what bad it is to have this feature is the key point to discussion now.\r\n\r\n\r\n> While I've only read your description of the patch not the patch itself,\r\n\r\nThis comment applies to me also.\r\n\r\nIs the join selectivity properly calculated in all cases, e.g. in the n:m join case in particular, or in general when you’re not joining to a unique key? (this would be the usual situation here, since it adds a range qual to a join qual)\r\n\r\n>> Furthermore, there are some cases involving parameterized paths where\r\n>> enforcing the inequality multiple times is definitely bad\r\n\r\n\r\n * This has been done by committing 4.\r\n\r\nWhat remaining cases are there where the qual is evaluated redundantly?\r\n\r\n\r\n\r\n * Anyone still have interest in this? Or is a better solution really possible?\r\nOr is the current method too bad to rescue?\r\n\r\nAs you’ve shown, this can potentially be very important, though I don’t think you’ll often see equijoins with an additional range restriction on the join keys. When it happens, though, it could be especially important for joins to partitioned tables with many remote fdw partitions when the join can’t be pushed down to the remote server.\r\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nSo I think knowing what bad it is to have this feature is the key point to discussion now. \n\n\n \n\n\n\n\n\n\n\n \n\n\n> While I've only read your description of the patch not the patch itself, \n \nThis comment applies to me also.  \n \nIs the join selectivity properly calculated in all cases, e.g. in the n:m join case in particular, or in general when you’re not joining to a unique key? 
(this would be the usual situation here, since it adds a range qual to a join qual)\r\n\r\n>> Furthermore, there are some cases involving parameterized paths where\r\n>> enforcing the inequality multiple times is definitely bad\r\n\r\n\r\n * This has been done by committing 4.\r\n\r\nWhat remaining cases are there where the qual is evaluated redundantly?\r\n\r\n\r\n\r\n * Anyone still have interest in this?  Or is a better solution really possible?\r\nOr is the current method  too bad to rescue?\r\n\r\nAs you’ve shown, this can potentially be very important, though I don’t think you’ll often see equijoins with an additional range restriction on the join keys.  When it happens, though, it could be especially important for joins to partitioned tables with many remote fdw partitions when the join can’t be pushed down to the remote server.\r\n", "msg_date": "Thu, 17 Feb 2022 18:55:06 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join,\n but BETWEEN or >= is not?" }, { "msg_contents": "On Tue, Feb 1, 2022 at 10:08 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> To address the row estimation issue, The most straightforward way to fix this is to\n> ignore the derived clauses when figuring out the RelOptInfo->rows on base relation.\n> To note which clause is derived from this patch, I added a new field \"EquivalenceClass *\n> derived\" in RestrictInfo. and then added a included_derived option in clauselist_selectivity_ext,\n> during the set_xxx_rel_size function, we can pass the included_derived=false. This strategy\n> should be used in get_parameterized_baserel_size. In all the other cases, include_derived=true\n> is used. which are finished in commit 2. 
Each of those 5\nRelOptInfos can have its own cardinality estimate, and it seems pretty\nstraightforward to see how to get both the scan cardinality and the\njoin cardinality correct. Now I think this is decidedly non-trivial to\nimplement, and I also hear the voice of Tom Lane saying that's going\nto be expensive in both time and memory, and he's not wrong.\n\nOn the other hand, I completely agree with David's comments on the\nother thread to the effect that holding our breath is not getting us\nanywhere. People don't keep asking for this feature because it's a\nstupid thing that nobody really wants, and when Tom alleges that it\nwill rarely pay off, I think he's pretty far off the mark. The only\ntime we need to consider doing any extra work is when we have\nsomething like the example discussed here, namely A.x = B.x and A.x <\n42. If there is a variable that is part of an equivalence class and\nalso is used in a scan qual, what are the chances that the implied\ninequality is useful? There's no way to estimate that mathematically -\nit's all about what you think human beings are typically going to do -\nbut I'd say it's probably better than 50%. I know that when I was\nregularly doing application programming on top of PostgreSQL I was\nVERY aware of this limitation of the optimizer and habitually thought\nabout which table to write the inequality against. That kept me out of\ntrouble most of the time, but it sure seems like we're punting the\noptimizer's job to the end user.\n\nAnd even then, I still sometimes couldn't stay out of trouble, because\nsometimes I knew that the implied inequality really ought to be\nenforced against both sides of the join to get a decent plan. In that\ncase, the only way to get the optimizer to do what I wanted was to\nduplicate the qual. But that runs headlong into the exact problem that\nwe're talking about here: now the join selectivity is going to be\nmessed up, and then some other part of the plan would get messed up. 
I\nstill remember the frustration associated with that scenario more than\n10 years later. You can't even fix it by uglifying your query with a\nplanner hint, because we don't support those either. Which brings me\nto another point: it's incoherent to simultaneously argue that we\nshouldn't have planner hints but rather focus on improving the\nplanner, and at the same time refuse to improve the planner because it\nwould make planning too expensive. I actually think we should do both,\nbecause I neither believe that it's impossible to fix this particular\nproblem nor that it is possible to create a planner so good that it\nalways makes the right decisions without any explicit input from a\nhuman being. But the only way you can think this problem is unfixable\nand at the same time think we don't need hints is if you think this\nproblem is fake.\n\nIt's not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 15:15:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On 2/17/22 21:15, Robert Haas wrote:\n> On Tue, Feb 1, 2022 at 10:08 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> To address the row estimation issue, The most straightforward way to fix this is to\n>> ignore the derived clauses when figuring out the RelOptInfo->rows on base relation.\n>> To note which clause is derived from this patch, I added a new field \"EquivalenceClass *\n>> derived\" in RestrictInfo. and then added a included_derived option in clauselist_selectivity_ext,\n>> during the set_xxx_rel_size function, we can pass the included_derived=false. This strategy\n>> should be used in get_parameterized_baserel_size. In all the other cases, include_derived=true\n>> is used. which are finished in commit 2. 
(Commit 1 is Daivd's patch, I just rebased it)\n> \n> That doesn't sound correct to me.\n> \n> Suppose that we have A.x = B.x and also A.x < 42. We can choose to\n> enforce A.x < 42 or we can choose to enforce B.x < 42 or we can do\n> both. In general, any of those could be right: if either one of those\n> two is highly selective while the other is not very selective at all,\n> it's going to be fastest to enforce only the more selective qual. But\n> if both are selective then it may be best to enforce both, so let's\n> suppose we do that. If we don't adopt the proposal above and just do\n> nothing, then our row count estimates for both A and B will include\n> the effect of checking x < 42, and so they will be correct, but the\n> row count estimate for join(A, B) will include the effect of checking\n> x < 42 twice, and so it will be too low, which can mess up the plan at\n> higher levels.\n> \n> But discounting the effect of B.x < 42 when estimating the size of B\n> is also incorrect. Now, the row count estimate for join(A, B) will\n> include the effect of x < 42 only once, which is good. However, the\n> row count estimate for B will be too high, because it will not include\n> the effect of B.x < 42. And that means that the cost estimate for\n> join(A, B) will be wrong. It will be too high, because it's going to\n> think that it has more rows coming from the B side of the join than\n> what is actually the case. And that can also mess up the plan at\n> higher levels.\n> \n> I think we could get around this problem by having multiple\n> RelOptInfos (or something similar that is lighter-weight) for each\n> relation. Today, we'd create a RelOptInfo for A, one for B, and one\n> for join(A, B), and the paths for the join are created by joining a\n> path for A to a path for B. Now imagine that we have instead 5\n> RelOptInfos, for {A}, {A|x<42}, {B}, {B|x<42}, and join(A, B). 
The\n> legal paths for that last one can be created by joining {A} to\n> {B|x<42} or {A|x<42} to {B} or {A|x<42} to {B|x<42}. Each of those 5\n> RelOptInfos can have its own cardinality estimate, and it seems pretty\n> straightforward to see how to get both the scan cardinality and the\n> join cardinality correct. Now I think this is decidedly non-trivial to\n> implement, and I also hear the voice of Tom Lane saying that's going\n> to be expensive in both time and memory, and he's not wrong.\n> \n\nIMHO the whole problem is we're unable to estimate the join clause as a\nconditional probability, i.e.\n\n P(A.x = B.x | (A.x < 42) & (B.x < 42))\n\nso maybe instead of trying to generate additional RelOptInfo items we\nshould think about improving that. The extra RelOptInfos don't really\nsolve this, because even if you decide to join A|x<42 to B|x<42 it does\nnothing to improve the join clause estimate.\n\nWith equality clauses we don't have this issue, because if you derive\nclauses at the baserel level, the join clause becomes no-op with\nselecitivity 1.0. But for inequalities that does not work ...\n\nInterestingly enough, the patch [1] tries to do something like this by\napplying extended statistics to joins, and using baserestrictinfos as\n\"conditions\" for statistics on both sides.\n\nIt actually deals with a more general form of this case, because the\nclauses don't need to reference the same attribute - so for example this\nwould work too, assuming there is extended stats object on the columns\non each side:\n\n P(A.c = B.d | (A.e < 42) & (B.f < 42))\n\n\n\n[1] https://commitfest.postgresql.org/36/3055/\n\n\n> On the other hand, I completely agree with David's comments on the\n> other thread to the effect that holding our breath is not getting us\n> anywhere. People don't keep asking for this feature because it's a\n> stupid thing that nobody really wants, and when Tom alleges that it\n> will rarely pay off, I think he's pretty far off the mark. 
The only\n> time we need to consider doing any extra work is when we have\n> something like the example discussed here, namely A.x = B.x and A.x <\n> 42. If there is a variable that is part of an equivalence class and\n> also is used in a scan qual, what are the chances that the implied\n> inequality is useful? There's no way to estimate that mathematically -\n> it's all about what you think human beings are typically going to do -\n> but I'd say it's probably better than 50%. I know that when I was\n> regularly doing application programming on top of PostgreSQL I was\n> VERY aware of this limitation of the optimizer and habitually thought\n> about which table to write the inequality against. That kept me out of\n> trouble most of the time, but it sure seems like we're punting the\n> optimizer's job to the end user.\n> \n\nNot sure. In my experience queries with both a join clause and other\nclauses referencing the same attribute are pretty rare. But I agree if\nwe can do the expensive stuff only when actually needed, with no cost in\nthe 99.999% other cases, I don't see why not. Of course, code complexity\nis a cost too.\n\n> And even then, I still sometimes couldn't stay out of trouble, because\n> sometimes I knew that the implied inequality really ought to be\n> enforced against both sides of the join to get a decent plan. In that\n> case, the only way to get the optimizer to do what I wanted was to\n> duplicate the qual. But that runs headlong into the exact problem that\n> we're talking about here: now the join selectivity is going to be\n> messed up, and then some other part of the plan would get messed up. I\n> still remember the frustration associated with that scenario more than\n> 10 years later. You can't even fix it by uglifying your query with a\n> planner hint, because we don't support those either. 
Which brings me\n> to another point: it's incoherent to simultaneously argue that we\n> shouldn't have planner hints but rather focus on improving the\n> planner, and at the same time refuse to improve the planner because it\n> would make planning too expensive. I actually think we should do both,\n> because I neither believe that it's impossible to fix this particular\n> problem nor that it is possible to create a planner so good that it\n> always makes the right decisions without any explicit input from a\n> human being. But the only way you can think this problem is unfixable\n> and at the same time think we don't need hints is if you think this\n> problem is fake.\n> \n\nIMHO to deal with the estimates it'd be enough to allow calculating\nconditional probabilities.\n\nNo comment regarding hints ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 17 Feb 2022 22:17:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Thu, Feb 17, 2022 at 4:17 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> IMHO the whole problem is we're unable to estimate the join clause as a\n> conditional probability, i.e.\n>\n> P(A.x = B.x | (A.x < 42) & (B.x < 42))\n>\n> so maybe instead of trying to generate additional RelOptInfo items we\n> should think about improving that. The extra RelOptInfos don't really\n> solve this, because even if you decide to join A|x<42 to B|x<42 it does\n> nothing to improve the join clause estimate.\n\nI guess I hadn't considered that angle. I think the extra RelOptInfos\n(or whatever) actually do solve a problem, because enforcing a\nhigh-selectivity join qual against both sides is potentially quite\nwasteful, and you need some way to decide whether to do it on one\nside, the other, or both. 
But it's also true that I was wrong to\nassume independence ... and if we could avoid assuming that, then the\njoin selectivity would work itself out without any of the machinery\nthat I just proposed.\n\n> It actually deals with a more general form of this case, because the\n> clauses don't need to reference the same attribute - so for example this\n> would work too, assuming there is extended stats object on the columns\n> on each side:\n>\n> P(A.c = B.d | (A.e < 42) & (B.f < 42))\n\nThat'd be cool.\n\n> Not sure. In my experience queries with both a join clause and other\n> clauses referencing the same attribute are pretty rare. But I agree if\n> we can do the expensive stuff only when actually needed, with no cost in\n> the 99.999% other cases, I don't see why not. Of course, code complexity\n> is a cost too.\n\nRight. I mean, we could have a planner GUC to control whether the\noptimization is used even in cases where we see that it's possible.\nBut Tom keeps arguing that it is possible in many queries and would\nbenefit few queries, and I'm not seeing why that should be so. I think\nit's likely to benefit many of the queries to which it applies.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 17:16:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Fri, Feb 18, 2022 at 4:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Feb 1, 2022 at 10:08 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > To address the row estimation issue, The most straightforward way to fix\n> this is to\n> > ignore the derived clauses when figuring out the RelOptInfo->rows on\n> base relation.\n> > To note which clause is derived from this patch, I added a new field\n> \"EquivalenceClass *\n> > derived\" in RestrictInfo. 
and then added a included_derived option in\n> clauselist_selectivity_ext,\n> > during the set_xxx_rel_size function, we can pass the\n> included_derived=false. This strategy\n> > should be used in get_parameterized_baserel_size. In all the other\n> cases, include_derived=true\n> > is used. which are finished in commit 2. (Commit 1 is Daivd's patch, I\n> just rebased it)\n>\n> That doesn't sound correct to me.\n>\n> Suppose that we have A.x = B.x and also A.x < 42. We can choose to\n> enforce A.x < 42 or we can choose to enforce B.x < 42 or we can do\n> both. In general, any of those could be right:\n\n\nThis is impressive. To achieve this, we have to treat a.x < 42 and\nb.x < 42 equally rather than b.x < 42 is derived from a.x < 42,\nand enforce the final plan to execute 1 qual in such a group at least.\nThis introduces some more complexity at the first glance, but I think\nit is a great aspect to think about.\n\n\n> .., which is good. However, the\n> row count estimate for B will be too high, because it will not include\n> the effect of B.x < 42. And that means that the cost estimate for\n> join(A, B) will be wrong. It will be too high, because it's going to\n> think that it has more rows coming from the B side of the join than\n> what is actually the case. And that can also mess up the plan at\n> higher levels.\n>\n>\nIIUC, this would not happen if we apply the commit 3.\n\nIn commit 3, the real rows for the scan path are adjusted by\nRelOptInfo->filter_rows,\nwhich take the effect of B.x < 42. 
It is used in cost_{ascan}_path later.\nHere is an example:\n\nregression=# explain analyze select * from tenk1 a, tenk1 b where\na.thousand = b.thousand and a.thousand < 100;\n                                                                  QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=24.90..459.16 rows=10740 width=488) (actual time=0.416..17.459 rows=10000 loops=1)\n   ->  Bitmap Heap Scan on tenk1 b  (cost=24.61..383.03 rows=1074 width=244) (actual time=0.369..1.801 rows=1000 loops=1)\n         Recheck Cond: (thousand < 100)\n         Heap Blocks: exact=324\n         ->  Bitmap Index Scan on tenk1_thous_tenthous  (cost=0.00..24.34 rows=1074 width=0) (actual time=0.251..0.251 rows=1000 loops=1)\n               Index Cond: (thousand < 100)\n   ->  Memoize  (cost=0.30..0.47 rows=1 width=244) (actual time=0.002..0.006 rows=10 loops=1000)\n         Cache Key: b.thousand\n         Cache Mode: logical\n         Hits: 900  Misses: 100  Evictions: 0  Overflows: 0  Memory Usage: 277kB\n         ->  Index Scan using tenk1_thous_tenthous on tenk1 a  (cost=0.29..0.46 rows=1 width=244) (actual time=0.012..0.033 rows=10 loops=100)\n               Index Cond: ((thousand = b.thousand) AND (thousand < 100))\n Planning Time: 0.934 ms\n Execution Time: 18.496 ms\n(14 rows)\n\nb.thousand < 100 is derived from a.thousand < 100; and the final path cost\nis:\n\n   ->  Bitmap Heap Scan on tenk1 b  (cost=24.61..383.03 rows=1074 width=244) (actual time=0.369..1.801 rows=1000 loops=1)\n         Recheck Cond: (thousand < 100)\n         Heap Blocks: exact=324\n         ->  Bitmap Index Scan on tenk1_thous_tenthous  (cost=0.00..24.34 rows=1074 width=0) (actual time=0.251..0.251 rows=1000 loops=1)\n               Index Cond: (thousand < 100)\n\nWhich is exactly the same as select * from tenk1 where thousand < 100;\n\n== Commit 3 [1] message with some modification ==\n\n    Introduce RelOptInfo.filtered_rows.\n\n    Previously the Path.rows (shown in the explain output) and\nRelOptInfo.rows\n    (which would be used to calculate joinrel's estimated rows) are same\n    at many scan paths, like 
SeqScan, IndexScan, BitmapHeapScan and so on. But\n    they would be different after distributing a new restrictinfo from\nec_filter.\n    So I developed RelOptInfo.filtered_rows to take some duty out of\nRelOptInfo.rows.\n    RelOptInfo.filtered_rows would count the effect of the derived qual,\n    and be used for the cost_xxx functions.\n\n\n\n> On the other hand, I completely agree with David's comments on the\n> other thread to the effect that holding our breath is not getting us\n> anywhere.\n\n\n+1 with this as well. The PostgreSQL community has a great reputation\nin the world, mostly because the authority figures in this community\nhave set up a good example for the people following them, and that is\nnot easy. But if the authority figures are too strict about code\nquality, that is not good for the community either; we should encourage\nmore people to have a try, to some extent.\n\nTaking the current feature for example, the estimation issue absolutely\nneeds a fix. I do listen/think carefully about how to reduce the extra\nplanner cost and rethink whether any important items have been missed by\nme, but I also think David's method is not unacceptable at all.\n\nWhat do you think about moving on this feature? The items known by me\nare: 1). Make sure the estimation error can be fixed, or discuss whether\nmy current solution is workable. b). Just distribute some selectivity\nrestrictinfo to RelOptInfo. c). See how hard it is to treat the original\n/ derived qual equally. d). Reduce the extra planner cost as much as\npossible. 
Any other important\nitems I missed?\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWrdeQZ8xvf%3DDVhndUs%3DRGn8oVoSJvYK3Yj7uWq2%3Ddt%3DMg%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 18 Feb 2022 13:56:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Fri, Feb 18, 2022 at 12:56 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> What do you think about moving on this feature? The items known by me\n> are: 1). Make sure the estimation error can be fixed or discuss if my current\n> solution is workable. b). Just distribute some selectivity restrictinfo to\n> RelOptInfo. c). See how hard it is to treat the original / derived qual equally.\n> d). Reduce the extra planner cost at much as possible. Any other important\n> items I missed?\n\nI think it's not realistic to do anything here for PostgreSQL 15.\nConsidering that it's almost the end of February and feature freeze\nwill probably be in perhaps 5-6 weeks, in order to get something\ncommitted at this point, you would need to have (1) sufficient\nconsensus on the design, (2) a set of reasonably complete patches\nimplementing that design at an acceptable level of quality, and (3) a\ncommitter interested in putting in the necessary time to help you get\nthis over the finish line. As far as I can see, you have none of those\nthings. Tom doesn't think we need this at all, and you and I and\nTomas all have somewhat different ideas on what approach we ought to\nbe taking, and the patches appear to be at a POC level at this point\nrather than something that's close to being ready to ship, and no\ncommitter has expressed interest in trying to get them into this\nrelease.\n\nIt seems to me that the thing to do here is see if you can build\nconsensus on an approach. Just saying that we ought to think the\npatches you've already got are good enough is not going to get you\nanywhere. 
I do understand that the political element of this problem\nis frustrating to you, as it is to many people. But consider the\nalternative: suppose the way things worked around here is that any\ncommitter could commit anything they liked without needing the\napproval of any other committer, or even over their objections. Well,\nit would be chaos. People would be constantly reverting or rewriting\nthings that other people had done, and everybody would probably be\npissed off at each other all the time, and the quality would go down\nthe tubes and nobody would use PostgreSQL any more. I'm not saying the\ncurrent system is perfect, not at all. It's frustrating as all get out\nat times. But the reason it's frustrating is because the PostgreSQL\ncommunity is a community of human beings, and there's nothing more\nfrustrating in the world than the stuff other human beings do.\n\nHowever, it's equally true that we get further working together than\nwe would individually. I think Tom is wrong about the merits of doing\nsomething in this area, but I also think he's incredibly smart and\nthoughtful and one of the best technologists I've ever met, and\nprobably just one of the absolute best technologists on Planet Earth.\nAnd I also have to consider, and this is really important, the\npossibility that Tom is right about this issue and I am wrong. So far\nTom hasn't replied to what I wrote, but I hope he does. Maybe he'll\nadmit that I have some valid points. Maybe he'll tell me why he thinks\nI'm wrong. Maybe I'll learn about some problem that I haven't\nconsidered from his response, and maybe that will lead to a refinement\nof the idea that will make it better. I don't know, but it's certainly\nhappened in plenty of other cases. And that's how PostgreSQL gets to\nbe this pretty amazing database that it is. 
So, yeah, building\nconsensus is frustrating and it takes a long time and sometimes it\nfeels like other people are obstructing you needlessly and sometimes\nthat's probably true. But there's not a realistic alternative. Nobody\nhere is smart enough to create software that is as good as what all of\nus create together.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 13:27:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "\n\nOn 2/17/22 23:16, Robert Haas wrote:\n> On Thu, Feb 17, 2022 at 4:17 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> IMHO the whole problem is we're unable to estimate the join clause as a\n>> conditional probability, i.e.\n>>\n>> P(A.x = B.x | (A.x < 42) & (B.x < 42))\n>>\n>> so maybe instead of trying to generate additional RelOptInfo items we\n>> should think about improving that. The extra RelOptInfos don't really\n>> solve this, because even if you decide to join A|x<42 to B|x<42 it does\n>> nothing to improve the join clause estimate.\n> \n> I guess I hadn't considered that angle. I think the extra RelOptInfos\n> (or whatever) actually do solve a problem, because enforcing a\n> high-selectivity join qual against both sides is potentially quite\n> wasteful, and you need some way to decide whether to do it on one\n> side, the other, or both. But it's also true that I was wrong to\n> assume independence ... and if we could avoid assuming that, then the\n> join selectivity would work itself out without any of the machinery\n> that I just proposed.\n> \n\nTrue. 
We kinda already have this issue for the equality clauses, and\nhaving paths with the condition pushed down (or not) seems like a\nnatural approach.\n\n>> It actually deals with a more general form of this case, because the\n>> clauses don't need to reference the same attribute - so for example this\n>> would work too, assuming there is extended stats object on the columns\n>> on each side:\n>>\n>> P(A.c = B.d | (A.e < 42) & (B.f < 42))\n> \n> That'd be cool.\n> \n\nYeah, but the patch implementing this still needs more work.\n\n>> Not sure. In my experience queries with both a join clause and other\n>> clauses referencing the same attribute are pretty rare. But I agree if\n>> we can do the expensive stuff only when actually needed, with no cost in\n>> the 99.999% other cases, I don't see why not. Of course, code complexity\n>> is a cost too.\n> \n> Right. I mean, we could have a planner GUC to control whether the\n> optimization is used even in cases where we see that it's possible.\n> But Tom keeps arguing that it is possible in many queries and would\n> benefit few queries, and I'm not seeing why that should be so. I think\n> it's likely to benefit many of the queries to which it applies.\n> \n\nMaybe. Although the example I linked some time ago shows a pretty\ndramatic improvement, due to picking merge join + index scan, and not\nrealizing we'll have to skip a lot of data. But that's just one\nanecdotal example.\n\nAnyway, I think the best way to deal with these (perfectly legitimate)\nconcerns is to show how expensive it is for queries not not having such\njoin/restriction clauses, with the cost being close to 0. 
And then for\nqueries with such clauses but not benefiting from the change (a bit like\na worst case).\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2B1Wm9U_sP9237f7OH7O%3D-UTab71DWOO4Qc-vnC78DfsJQBCwQ%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 19 Feb 2022 01:16:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "Thanks for the detailed explanation.\n\nOn Sat, Feb 19, 2022 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Feb 18, 2022 at 12:56 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > What do you think about moving on this feature? The items known by me\n> > are: 1). Make sure the estimation error can be fixed or discuss if my\n> current\n> > solution is workable. b). Just distribute some selectivity\n> restrictinfo to\n> > RelOptInfo. c). See how hard it is to treat the original / derived\n> qual equally.\n> > d). Reduce the extra planner cost at much as possible. Any other\n> important\n> > items I missed?\n>\n> I think it's not realistic to do anything here for PostgreSQL 15.\n> Considering that it's almost the end of February and feature freeze\n> will probably be in perhaps 5-6 weeks, in order to get something\n> committed at this point,\n\n\nI didn't expect that we could commit it very soon;) Actually my\nexpectation\nwas that more people would care about the direction of this feature. I care\nabout it, but that's not enough obviously. So I summarized the direction I\nwant to go, and let more people see if that's right.\n\n\n> Tom doesn't think we need this at all, and you and I and\n> Tomas all have somewhat different ideas on what approach we ought to\n> be taking,\n\n\nAgreed. 
IMO, the estimation error looks like a serious issue that we\nall agree to find a solution. But currently we have different ways to\nhandle\nthat. I'd pretty much hope that we can have a discussion about this stuff.\n\nand the patches appear to be at a POC level at this point rather than\n\nsomething that's close to being ready to ship,\n>\n\nThis is very true since no consensus on an approach so far. PoC would\nbe enough for now.\n\n\n> It seems to me that the thing to do here is see if you can build\n> consensus on an approach. Just saying that we ought to think the\n> patches you've already got are good enough is not going to get you\n> anywhere.\n\n\nI truly understand this and no matter which approach I insist on, the\nonly reason is just because I think it is the best one IMO and not because\nit comes from me or not.\n\n\n> I do understand that the political element of this problem\n> is frustrating to you, as it is to many people. But consider the\n> alternative: suppose the way things worked around here is that any\n> committer could commit anything they liked without needing the\n> approval of any other committer, or even over their objections. Well,\n> it would be chaos.\n\n\nThis is the fact I think.\n\n\n> People would be constantly reverting or rewriting\n> things that other people had done, and everybody would probably be\n> pissed off at each other all the time, and the quality would go down\n> the tubes and nobody would use PostgreSQL any more.\n\n\n\n> But the reason it's frustrating is because the PostgreSQL\n> community is a community of human beings, and there's nothing more\n> frustrating in the world than the stuff other human beings do.\n>\n>\nNew knowledge gained from how committers think about other's patch:)\nIt is reasonable. Committing the patch is not my only goal. 
Thinking\nstuff more completely is also an awesome thing to get during discussion.\nJust that sometimes ignorance is frustrating (I also truly understood\nthat everyone's energy is limited).\n\nHowever, it's equally true that we get further working together than\n> we would individually. I think Tom is wrong about the merits of doing\n> something in this area, but I also think he's incredibly smart and\n> thoughtful and one of the best technologists I've ever met, and\n> probably just one of the absolute best technologists on Planet Earth.\n> And I also have to consider, and this is really important, the\n> possibility that Tom is right about this issue and I am wrong. So far\n> Tom hasn't replied to what I wrote, but I hope he does. Maybe he'll\n> admit that I have some valid points. Maybe he'll tell me why he thinks\n> I'm wrong. Maybe I'll learn about some problem that I haven't\n> considered from his response, and maybe that will lead to a refinement\n> of the idea that will make it better.\n\n\n+1. Just to be more precise, are you also confused about why this\nshould not be done at all. IIUC, I get 3 reasons from Tom's reply.\na). Planning cost. b). estimation error. c) extra qual execution is bad.\n\n\n\n> I don't know, but it's certainly\n> happened in plenty of other cases. And that's how PostgreSQL gets to\n> be this pretty amazing database that it is. So, yeah, building\n> consensus is frustrating and it takes a long time and sometimes it\n> feels like other people are obstructing you needlessly and sometimes\n> that's probably true. But there's not a realistic alternative. Nobody\n> here is smart enough to create software that is as good as what all of\n> us create together.\n>\n\n+1.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 21 Feb 2022 15:31:22 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": ">\n>\n> >> It actually deals with a more general form of this case, because the\n> >> clauses don't need to reference the same attribute - so for example this\n> >> would work too, assuming there is extended stats object on the columns\n> >> on each side:\n> >>\n> >> P(A.c = B.d | (A.e < 42) & (B.f < 42))\n> >\n> > That'd be cool.\n> >\n>\n> Yeah, but the patch implementing this still needs more work.\n>\n>\nThanks for that patch. That patch has been on my study list for a long\ntime and it can fix the other real case I met some day ago. I spent one\nday studying it again yesterday just that the result does not deserve\nsharing at the current stage. As for the purpose here, if we have\nextended statistics, I believe it can work well. But requiring extended\nstatistics for this feature does not look very reasonable to me. Do you\nthink we can go further in direction for the issue here? and it would\nbe super great that you can take a look at the commit 3 [1]. 
IIUC,\nIt can solve the issue and is pretty straightforward.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWrdeQZ8xvf%3DDVhndUs%3DRGn8oVoSJvYK3Yj7uWq2%3Ddt%3DMg%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 21 Feb 2022 15:46:07 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Mon, Feb 21, 2022 at 2:31 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> +1. Just to be more precise, are you also confused about why this\n> should not be done at all. IIUC, I get 3 reasons from Tom's reply.\n> a). Planning cost. b). estimation error. 
c) extra qual execution is bad.\n\nThis topic has been discussed a number of times, and Tom has basically\nalways said that he thinks this would be expensive to plan (which I\nthink is true) and that we wouldn't get much benefit (which I think is\nfalse).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 16:50:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This topic has been discussed a number of times, and Tom has basically\n> always said that he thinks this would be expensive to plan (which I\n> think is true) and that we wouldn't get much benefit (which I think is\n> false).\n\nI think the trick here, as in so many other places, is to not impose\nsignificant extra planning cost on queries that don't end up getting\nany benefit. I'm not in favor of complicating the EquivalenceClass\nmechanism for this, because (a) I don't think that such an approach\nwill lead to success on that metric, and (b) what it definitely will do\nis make ECs harder to understand and reason about. If we develop a\nseparate mechanism that can infer things from inequalities, and it only\nkicks in when there are some inequalities, that might work out okay.\nBut because of that, I don't even like the 0001 patch in this series.\nI've not looked at the subsequent ones.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Mar 2022 17:53:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join,\n but BETWEEN or >= is not?" 
}, { "msg_contents": "On Tue, Mar 1, 2022 at 5:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > This topic has been discussed a number of times, and Tom has basically\n> > always said that he thinks this would be expensive to plan (which I\n> > think is true) and that we wouldn't get much benefit (which I think is\n> > false).\n>\n> I think the trick here, as in so many other places, is to not impose\n> significant extra planning cost on queries that don't end up getting\n> any benefit.\n\nI agree. My question is: why shouldn't every case where we can deduce\nan implied inequality be reasonably likely to show a benefit? If the\nquery specifies that a.x = b.x and also that a.x < 42, the only reason\nto suppose that evaluating a.x < 42 rather than b.x < 42 or in\naddition to b.x < 42 is likely to be better is if we assume the user\nknows how the query optimizer works and has employed that knowledge in\ncrafting the query. And admittedly, sophisticated users are probably\nlikely to do that, and even unsophisticated users may do it more\nlikely than chance would dictate. But it still feels like we have a\ngood chance of landing of coming out ahead pretty often unless the\nuser really knows what they are doing. And even then, any mechanism we\nadd here can have an off switch.\n\n> I'm not in favor of complicating the EquivalenceClass\n> mechanism for this, because (a) I don't think that such an approach\n> will lead to success on that metric, and (b) what it definitely will do\n> is make ECs harder to understand and reason about. If we develop a\n> separate mechanism that can infer things from inequalities, and it only\n> kicks in when there are some inequalities, that might work out okay.\n> But because of that, I don't even like the 0001 patch in this series.\n> I've not looked at the subsequent ones.\n\nI don't think 0001 is right either, although maybe for somewhat\ndifferent reasons. 
First, I think it only considers VAR OP CONST style\nclauses, but that is leaving money on the table, because given a.x =\nb.x AND mumble(a.x), we can decide to instead test mumble(b.x) if the\nequality operator in question has is-binary-identical semantics. It\ndoes not seem necessary for a first patch to deal with both that and\nthe somewhat more pleasing case where we're making deductions based on\noperator families ... but we shouldn't commit to a design for the VAR\nOP CONST case without understanding how it could be generalized.\nSecond, it looks to me like the patch takes the rather naive strategy\nof enforcing the derived clauses everywhere that they can legally be\nput, which seems certain not to be optimal.\n\nI don't know whether attaching something to the equivalence class data\nstructure is the right idea or not. Presumably, we don't want to make\nan extra pass over the query tree to gather the information needed for\nthis kind of optimization, and it feels like we need to know which\nvars are EMs before we try to derive alternate/additional quals. So I\nguess we'd want to study clauses for possible use by this kind of\nmechanism after we've derived ECs but before we do any costing stuff,\nyet without introducing a whole new pass. Once we do derive that\ninformation, where are we going to put it? We have to be able to tell\nefficiently when looking at a baserel whether there are any implied\ninequalities that we should be thinking about ... and there's nothing\nobvious tying all of the relevant places together other than the EM.\nBut I'm kind of blathering here: I feel like there are a lot of\ncomplexities I haven't thought hard enough about to have an\nintelligent opinion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 20:11:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I agree. My question is: why shouldn't every case where we can deduce\n> an implied inequality be reasonably likely to show a benefit?\n\nMaybe it will be, if we can deal with the issue you already mentioned\nabout not misestimating the resulting partially-redundant conditions.\n\n> Second, it looks to me like the patch takes the rather naive strategy\n> of enforcing the derived clauses everywhere that they can legally be\n> put, which seems certain not to be optimal.\n\nI'm not sure about that ... it's basically what we do with derived\nequalities. However, there's enough structure in the equivalence-class\ncase that we don't end up enforcing redundant quals. It's not clear\nto me whether the same can be said here.\n\n> I don't know whether attaching something to the equivalence class data\n> structure is the right idea or not. Presumably, we don't want to make\n> an extra pass over the query tree to gather the information needed for\n> this kind of optimization, and it feels like we need to know which\n> vars are EMs before we try to derive alternate/additional quals.\n\nYeah, we don't want to make an additional pass over the tree, and\nwe also would rather not add an additional set of per-operator\ncatalog lookups. We might be able to generalize the code that looks\nfor equality operators so that it looks for \"any btree operator\"\nwith the same number of lookups, and then have it feed the results\ndown either the EquivalenceClass path or the inequality path\nas appropriate. At the end, after we've formed all the ECs, we\ncould have a go at matching up the inequality structures with the\nECs. 
But I don't agree that ECs are a necessary prerequisite.\nHere are a couple of other patterns that might be worth looking for:\n\n* \"a > b AND b > c\" allows deducing \"a > c\", whether or not any\nof those values appears in an EC.\n\n* \"a > const1 AND a > const2\" can be simplified to either \"a > const1\"\nor \"a > const2\" depending on which constant is larger. (The predicate\nproof mechanism already has a form of this, but we don't typically\napply it in a way that would result in dropping the redundant qual.)\n\nIt's entirely possible that one or both of these patterns is not\nworth looking for. But I would say that it's equally unproven\nthat deriving \"a > c\" from \"a = b AND b > c\" is worth the cycles.\nI'll grant that it's most likely going to be a win if we can use\nany of these patterns to generate a restriction clause from what\nhad been join clauses. Beyond that it's much less clear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Mar 2022 21:05:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join,\n but BETWEEN or >= is not?" }, { "msg_contents": "Thanks Tom for joining.\n\n\n> I'm not in favor of complicating the EquivalenceClass\n> mechanism for this, because .... (b) what it definitely will do\n> is make ECs harder to understand and reason about.\n\n\nI don't mean to oppose this on purpose, and I don't insist on the current\nstrategy, but I can't understand the comment here, and I'm not sure how\nothers read it, so I just point it out. IMO, the semantics of ec_filter is that every EM\nin this\nEC can have this filter. I do like this method very much. If we need\nsomething\nto improve that, it may be that the content of ec_filter is not generic\nenough. For example:\n\nselect * from t1, t2 where t1.a = t2.a and t2.a > 3;\n\n
Why is it a \"t2.a\" rather than a more\ngeneric type to show \"any EM\" in this EC, I can double check the\npatch to see if this can be any helpful.\n\nMaybe I'm focusing on the current situation too much, could you describe\nmore about the badness of this semantics level?\n\n\n\n> If we develop a\n> separate mechanism that can infer things from inequalities, and it\n\n_only_\n\nkicks in when there are some inequalities, that might work out okay.\n>\n\nI will try to make this part clearer. The current mechanism includes 3\nsteps.\n1). Gather the inequalities_qual_list during the deconstruct_jointree. 2).\nAfter the root->eq_classes is built, scan each of the above quals to find\nout if there is an EC match, if yes, add it to the EC. There are some\nfast paths here.\n3). compose the qual in ec_filter and members in ec_members, then\ndistribute it to the relations.\n\nStep 1 would make sure only inequalities is checked. Are you unhappy with\nthe\ncost of step 2 here? for the case like\n\nSELECT * FROM t1, t2 WHERE t1.a = t2.a AND t1.b > 3;\n\nwe have to go step 2 and get nothing finally. As for the case like \"FROM\nt1, t2, t3\nWHERE t1.a = t2.a and t3.c > 3\". t3.c > 3 can be discard quickly with\nEC->relids checking.\n\nBut because of that, I don't even like the 0001 patch in this series.\n> I've not looked at the subsequent ones.\n>\n>\nI agree with 0001 patch should be the first one to reach an agreement .\n\n-- \nBest Regards\nAndy Fan\n\nThanks Tom for joining. I'm not in favor of complicating the EquivalenceClass\nmechanism for this, because ....  (b) what it definitely will do\nis make ECs harder to understand and reason about.  I'm not willing to show opposition on purpose, and I'm not insist on currentstrategy,  but I can't understand the comment here, not sure how others.So I just point it out.  IMO, the semantics of ec_filter is that every EMs in this EC can  have this filter.  I do like this method very much.  
If we need somethingto improve that, it may be the content in ec_filter is not genericenough.  For example:select * from t1, t2 where t1.a = t2.a and t2.a > 3; Then the EC filter is \"t2.a > 3\".  Why is it a \"t2.a\"  rather than a moregeneric type to show \"any EM\" in this EC,  I can double check thepatch to see if this can be any helpful. Maybe I'm focusing on the current situation too much,  could you describemore about the badness of this semantics level?  If we develop a\nseparate mechanism that can infer things from inequalities, and it _only_ \nkicks in when there are some inequalities, that might work out okay.I will try to make this part clearer.  The current mechanism includes 3 steps.1). Gather the inequalities_qual_list during the deconstruct_jointree. 2).  After the root->eq_classes is built,  scan each of the above quals to findout if there is an EC match,  if yes, add it to the EC.  There are some fast paths here. 3).  compose  the qual in ec_filter and members in ec_members, then distribute it to the relations.  Step 1 would make sure only inequalities is checked.   Are you unhappy with the cost of step 2 here?  for the case like SELECT * FROM t1, t2 WHERE t1.a = t2.a AND t1.b > 3;  we have to go step 2 and get nothing finally.  As for the case like \"FROM t1, t2, t3 WHERE t1.a = t2.a and t3.c > 3\".  t3.c > 3 can be discard quickly with EC->relids checking. \nBut because of that, I don't even like the 0001 patch in this series.\nI've not looked at the subsequent ones.I agree with 0001 patch should be the first one to reach an agreement .  -- Best RegardsAndy Fan", "msg_date": "Wed, 2 Mar 2022 10:43:15 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": ">\n>\n> I don't think 0001 is right either, although maybe for somewhat\n> different reasons. 
First, I think it only considers VAR OP CONST style\n> clauses, but that is leaving money on the table, because given a.x =\n> b.x AND mumble(a.x), we can decide to instead test mumble(b.x) if the\n> equality operator in question has is-binary-identical semantics. It\n> does not seem necessary for a first patch to deal with both that and\n> the somewhat more pleasing case where we're making deductions based on\n> operator families ... but we shouldn't commit to a design for the VAR\n> OP CONST case without understanding how it could be generalized.\n>\n\nI can follow up with this and +1 with the statement.\n\n\n> Second, it looks to me like the patch takes the rather naive strategy\n> of enforcing the derived clauses everywhere that they can legally be\n> put, which seems certain not to be optimal.\n\n\nIf we can have some agreement (after more discussion) the EC filter is\nacceptable on semantics level, I think we may have some chances to\nimprove something at execution level.\n\n-- \nBest Regards\nAndy Fan\n\n\n", "msg_date": "Wed, 2 Mar 2022 11:00:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Tue, Mar 1, 2022 at 9:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I agree. My question is: why shouldn't every case where we can deduce\n> > an implied inequality be reasonably likely to show a benefit?\n>\n> Maybe it will be, if we can deal with the issue you already mentioned\n> about not misestimating the resulting partially-redundant conditions.\n\nOK.\n\n> > Second, it looks to me like the patch takes the rather naive strategy\n> > of enforcing the derived clauses everywhere that they can legally be\n> > put, which seems certain not to be optimal.\n>\n> I'm not sure about that ... it's basically what we do with derived\n> equalities. However, there's enough structure in the equivalence-class\n> case that we don't end up enforcing redundant quals. It's not clear\n> to me whether the same can be said here.\n\nI mean, to go back to the example of a.x < 42 and a.x = b.x, there are\nthree possible choices as to where to enforce the qual (a, b, both).\nThat's a meaningful choice, independent of any estimation issue. I\nthink it is reasonably common to have cases where a.x < 42 is very\nselective and b.x < 42 hardly filters out anything at all, or the\nother way around. Certainly, that kind of situation came up a lot in\nPostgreSQL-based applications that I wrote myself back in the day.
If\nwe're just talking about btree operators, *maybe* we can say it's\ncheap enough that we don't care, but color me a tad skeptical.\n\n> > I don't know whether attaching something to the equivalence class data\n> > structure is the right idea or not. Presumably, we don't want to make\n> > an extra pass over the query tree to gather the information needed for\n> > this kind of optimization, and it feels like we need to know which\n> > vars are EMs before we try to derive alternate/additional quals.\n>\n> Yeah, we don't want to make an additional pass over the tree, and\n> we also would rather not add an additional set of per-operator\n> catalog lookups. We might be able to generalize the code that looks\n> for equality operators so that it looks for \"any btree operator\"\n> with the same number of lookups, and then have it feed the results\n> down either the EquivalenceClass path or the inequality path\n> as appropriate. At the end, after we've formed all the ECs, we\n> could have a go at matching up the inequality structures with the\n> ECs.\n\nInteresting idea.\n\n> But I don't agree that ECs are a necessary prerequisite.\n> Here are a couple of other patterns that might be worth looking for:\n>\n> * \"a > b AND b > c\" allows deducing \"a > c\", whether or not any\n> of those values appears in an EC.\n>\n> * \"a > const1 AND a > const2\" can be simplified to either \"a > const1\"\n> or \"a > const2\" depending on which constant is larger. (The predicate\n> proof mechanism already has a form of this, but we don't typically\n> apply it in a way that would result in dropping the redundant qual.)\n>\n> It's entirely possible that one or both of these patterns is not\n> worth looking for. But I would say that it's equally unproven\n> that deriving \"a > c\" from \"a = b AND b > c\" is worth the cycles.\n> I'll grant that it's most likely going to be a win if we can use\n> any of these patterns to generate a restriction clause from what\n> had been join clauses. 
Beyond that it's much less clear.\n\nPretty much all of the cases that I've run across involve an equijoin\nplus an inequality, so if somebody asked me which problem we ought to\nput most effort into solving, I'd say that one. Cases like \"a>1 and\na>2\" or a same-table case like \"a=b and b>3\" haven't been as common in\nmy experience, and haven't caused as much trouble when they do happen.\nPart of that is because if you have something like \"a>1 and a>2\" in\nyour query, it may be easy for you to just tweak the query generation\nto avoid it, and if \"a=b and b>3\" is coming up a lot, you may choose\nto adjust your data model (e.g. choose to store NULL in b to indicate\nsame-as-a), whereas if you have something like\n\"orders.orderno=order_lines.orderno and order_lines.orderno<10000,\"\nwhat are you going to do to avoid that exactly? If you normalize your\norder data and then want to find the old orders, this problem arises\nineluctably.\n\nBut having said that, I'm not *against* doing something about those\ncases if it's cheap or falls out naturally. If we could detect for\nfree that the user had written a>1 and a>2, it'd certainly be\nbeneficial to drop the former qual and keep only the latter. If the\nuser writes a>b and b>c and all those columns are in one table I don't\nsee how it helps to derive a>c, because we're still going to need to\ncheck the other two quals anyway so we've just created more work. But\nif those columns are not all in the same table then I'd say chances\nare really pretty good. Like, suppose it's x.a>y.b and y.b>x.c. Well,\nlike I say, I don't really see people writing queries like that\nmyself, but if they do, it seems pretty obvious that deriving x.a>x.c\nhas the potential to save a LOT of trouble. If it's x.a>y.b and\ny.b>z.c I don't feel quite so optimistic, but it may be that we would\nlike to do the x-z join first, and if we do, enforcing x.a>z.c at that\nlevel to shrink the join product seems like a very strong idea. 
It is\na slight loss if we run that qual on lots of rows and it never fails,\nbut it is a gigantic win if it filters out a bunch of stuff. I bet a\nlot of users would be VERY happy to pay the cost of testing x.a>z.c at\nthe x-z join level even on queries where the statistics suggest that\nit will be entirely useless, because it won't cost that much to check\nit, and if by some chance the statistics are misleading, it might\nprevent a really bad outcome where the query runs for a super-long\ntime and they get paged. So the questions in my mind here are all\nabout whether we can detect this stuff cheaply and whether anybody\nwants to do the work to make it happen, not whether we'd get a benefit\nin the cases where it kicks in.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Mar 2022 10:46:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> So the questions in my mind here are all\n> about whether we can detect this stuff cheaply and whether anybody\n> wants to do the work to make it happen, not whether we'd get a benefit\n> in the cases where it kicks in.\n\nRight, my worries are mostly about the first point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Mar 2022 11:09:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join,\n but BETWEEN or >= is not?" 
}, { "msg_contents": "On Wed, Mar 2, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > So the questions in my mind here are all\n> > about whether we can detect this stuff cheaply and whether anybody\n> > wants to do the work to make it happen, not whether we'd get a benefit\n> > in the cases where it kicks in.\n>\n> Right, my worries are mostly about the first point.\n\nOK, cool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Mar 2022 12:29:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Thu, Mar 3, 2022 at 1:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Mar 2, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > So the questions in my mind here are all\n> > > about whether we can detect this stuff cheaply and whether anybody\n> > > wants to do the work to make it happen, not whether we'd get a benefit\n> > > in the cases where it kicks in.\n> >\n> > Right, my worries are mostly about the first point.\n>\n> OK, cool.\n>\n\nI have finished the PoC for planning timing improvement and joinrel rows\nestimation.\nThe design considers the requirement that we can enforce any corrective quals\nduring\nexecution (rather than having to execute the RestrictInfo the user provides),\nbut\nnothing is coded for that part so far.\n\nCopy the commit message here for easy discussion.\n\n== Planning timing part ==\nPatch 1: expand the duties of check_mergejoinable to check non-equal btree\n operators as well to support the EC Filter function. A new field\n named btreeineqopfamilies is added in RestrictInfo and it is set\n with the same round syscache search for check_mergejoinable.
Because\n of this, check_mergejoinable is renamed to check_btreeable.\n The bad part of this is that it only works for opclause so far.\n\nPatch 2: Introduce ec_filters in EquivalenceClass struct; the semantics\nis that the quals can\n be applied to any EquivalenceMember in this EC. Later this information\nis used\n to generate new RestrictInfo and it is distributed to related RelOptInfo\nvery\n soon. There are 3 major steps here:\n\n a). In distribute_qual_to_rels, gather the ineq quallist.\n b). After deconstruct_jointree, distribute_filter_quals_to_eclass\ndistributes\n these ineq-quallist to the related EC's ef_filters.\n c). generate_base_implied_equalities_no_const scans the ec_filters and\ndistributes\n the restrictinfo to related RelOptInfo.\n\nPatch 3: Reduce some planning cost for deriving quals for the EC filter\nfeature.\n Main changes include:\n 1. Check if the qual is simple enough by checking rinfo->right_relids\nand\n info->right_relids, to save the pull_varnos of rinfo->clause calls.\n 2. Check contain_volatile_functions against RestrictInfo, so that\n the result can be shared with following calls.\n 3. By employing the RestrictInfo->btreeineqopfamilies which is calculated\n with the same round of calculating RestrictInfo->mergeopfamilies. In\nthis\n way we save some calls for syscache.\n 4. 
Calculate the opfamily and amstrategy at\n distribute_filter_quals_to_eclass and cache the results in\nEquivalenceFilter.\n If no suitable opfamily and amstrategy are found, bypass the qual\nimmediately,\n and at last use the cached value in\ngenerate_base_implied_equalities_no_const.\n\n After this change, there is a testcase changed unexpectedly in\nequivclass.out\n (compared with the Patch-2 expectation file).\n\n create user regress_user_ectest;\n grant select on ec0 to regress_user_ectest;\n grant select on ec1 to regress_user_ectest;\n\n set session authorization regress_user_ectest;\n\n -- with RLS active, the non-leakproof a.ff = 43 clause is not treated\n -- as a suitable source for an EquivalenceClass; currently, this is true\n -- even though the RLS clause has nothing to do directly with the EC\n explain (costs off)\n select * from ec0 a, ec1 b\n where a.ff = b.ff and a.ff = 43::bigint::int8alias1;\n\n The b.ff = 43 has disappeared from ec1 b. But since it was not shown even\n before the EC filter, I'm not sure whether my changes here make something\nwrong;\n maybe they fix an issue by accident?\n\n== Join Rel size estimation part ==\n\nI have revisited the previous strategy. The main reasons are: 1). we should\nconsider every\nqual *equally*. 2). In the past, I just wanted to get the same result as if the\nec filter didn't\nhappen, but later I found that even if there is no ec filter, we still\nhave some clear estimation\nerror. 
For example:\n\ncreate table ec_t1000 (a int);\ninsert into ec_t1000 select i from generate_series(1, 1000) i;\ncreate table ec_t110 (a int);\ninsert into ec_t110 select i from generate_series(1, 110) i;\ncreate table ec_t200 (a int);\ninsert into ec_t200 select i from generate_series(1, 200) i;\nanalyze ec_t1000, ec_t110, ec_t200;\n\n\nquery 1: explain select * from ec_t1000, ec_t110 where ec_t1000.a =\nec_t110.a and ec_t1000.a > 100; -- (0.9)\n\nquery 2: explain select * from ec_t1000, ec_t110 where ec_t1000.a =\nec_t110.a and ec_t110.a > 100; -- (0.1)\n\nquery 3: explain select * from ec_t1000, ec_t110, ec_t200 where ec_t1000.a\n= ec_t110.a and ec_t110.a = ec_t200.a and ec_t1000.a > 100;\n\nquery 4: explain select * from ec_t1000, ec_t110, ec_t200 where ec_t1000.a\n= ec_t110.a and ec_t110.a = ec_t200.a and ec_t110.a > 100;\n\nquery 5: explain select * from ec_t1000, ec_t110, ec_t200 where ec_t1000.a\n= ec_t110.a and ec_t110.a = ec_t200.a and ec_t200.a > 100;\n\nWe can see that query 1 should get the same estimate as query 2, and queries\n3/4/5 should all be the same as well. In fact they are not. Here is the\nresult on the current master and the patched version.\n\n| Query Id | Real rows | Est. Rows at master | Est. rows with patched |\n|----------+-----------+---------------------+------------------------|\n|        1 |        10 |                  99 |                     10 |\n|        2 |        10 |                  10 |                     10 |\n|        3 |        10 |                  20 |                     11 |\n|        4 |        10 |                   2 |                     11 |\n|        5 |        10 |                  11 |                     11 |\n\n\nPatch 4: Prepare the code for the CorrectiveQual structure.\n Just refactor the method for the 2-level loop in\n generate_base_implied_equalities_no_const; nothing else is changed.\n\nPatch 5: struct CorrectiveQuals is as simple as a List of RestrictInfo.\nThe properties\nof it are: a). only one restrictinfo in this group should be counted for\nany joinrel\nsize estimation. b). at least 1 restrictinfo in this group should be\nexecuted during\nexecution. 
In this commit, only the size estimation issue is tried.\n\n PlannerInfo.correlative_quals is added to manage all the\nCorrectiveQuals at\n subquery level. RelOptInfo.cqual_indexes is a List * to indicate which\n CorrectiveQuals this relation is related to. This is designed to make it easy\nto check whether\n both sides of a joinrel correlate to the same CorrectiveQuals. The\nreason for\n \"List *\" will be explained later.\n\n The overall design of handling the joinrel size estimation is:\n a). At the base relation level, we just count everything with the\ncorrelative\n quals. b). During any level of joinrel size estimation, we keep only 1\nside's\n cqual (short for corrective qual) selectivity by eliminating the other\none, so\n the size estimation for a mergeable join selectivity becomes:\n\n rows = R1.rows X R2.rows X 1 / Max (ndistinct_of_colA,\n ndistinct_of_colB) X 1 /\n Selectivity(R1's CorrectiveQual).\n\n R1.rows X 1 / Selectivity(R1's CorrectiveQual) eliminates the impact of\n the CorrectiveQual on R1. After this, the JoinRel of (R1, R2) is still\nimpacted by\n this CorrectiveQual, but \"just once\" at this level. Later, if\nJoinRel(R1, R2) needs\n to join with R3, and R3 is impacted by this CorrectiveQual as well, we\n need to keep one and eliminate the other one as above again.\n\n The algorithm for which Selectivity should be eliminated and which one\nshould be\n kept is:\n\n When we join inner_rel and outer_rel with a mergeable join\nrestrictinfo, and\n both sides are impacted by the same CorrectiveQual, we first choose\nwhich \"side\"\n to eliminate based on which side of the restrictinfo has a higher\ndistinct\n value. The reason for this is more or less because we used\n\"Max\"(ndistinctValT1,\n ndistinctValT2). 
After deciding which \"side\" to eliminate, the real\neliminated\n selectivity is RelOptInfo->cqual_selectivity[n]. The remaining one still\ntakes effect\n and is noted in the joinrel->cqual_selectivity[n].\n\n Introduction of RelOptInfo->cqual_selectivity:\n\n The number of elements in cqual_selectivity equals\n the length of cqual_indexes. The semantics is which\n Selectivity in the corresponding CorrectiveQual's qual\n list is taking effect. At any time, only 1 Qual\n Selectivity is counted for any level of joinrel size estimation.\n\n In reality, it is possible to have many CorrectiveQuals, but for design\n discussion, the current implementation only takes care of 1\nCorrectiveQual.\n This would be helpful for PoC/review/discussion.\n\n Some flow for the key data:\n\n 1. root->corrective_quals is initialized at the\n generate_base_implied_equalities_no_const stage. We create a\nCorrectiveQual in\n this list for each ec_filter and fill the RestrictInfo part for it. At\n the same time, we note which RelOptInfo (cqual_indexes) is related\nto this cqual.\n\n 2. RelOptInfo->cqual_selectivity for a baserel is set at the end of\nset_rel_size;\n at this time, the selectivity for every RestrictInfo is calculated, so we\ncan just\n fetch the cached value. As for a joinrel, it is maintained in\n calc_join_cqual_selectivity; this function returns the Selectivity\nto\n eliminate and sets the above value.\n\n Limitations in this PoC:\n 1. Only support 1 CorrectiveQual in root->correlative_quals\n 2. Only tested with INNER_JOIN.\n 3. Inherited tables are not supported.\n\nI find it is hard to explain things clearly without the code. Any feedback\nis welcome.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 8 Mar 2022 21:44:37 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" 
}, { "msg_contents": "Hi:\n\nI just tested more cases for the estimation issue for this feature, and we\ncan\nfind **we get a more accurate/stable estimation than before**. Here is the\ntest\ncases and result (by comparing the master version and patched version).\n\ncreate table ec_t110 as select i::int as a from generate_series(1, 110) i;\ncreate table ec_t200 as select i::int as a from generate_series(1, 200) i;\ncreate table ec_t500 as select i::int as a from generate_series(1, 500) i;\ncreate table ec_t800 as select i::int as a from generate_series(1, 800) i;\ncreate table ec_t1000 as select i::int as a from generate_series(1, 1000) i;\n\nanalyze;\n\n-- 2 table joins.\nexplain analyze select * from ec_t1000, ec_t110 where ec_t1000.a =\nec_t110.a and ec_t1000.a > 100; -- (0.9)\nexplain analyze select * from ec_t1000, ec_t110 where ec_t1000.a =\nec_t110.a and ec_t110.a > 100; -- (0.1)\n\n-- 3 table joins.\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200 where ec_t1000.a\n= ec_t110.a and ec_t110.a = ec_t200.a and ec_t1000.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200 where ec_t1000.a\n= ec_t110.a and ec_t110.a = ec_t200.a and ec_t110.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200 where ec_t1000.a\n= ec_t110.a and ec_t110.a = ec_t200.a and ec_t200.a > 100;\n\n-- 4 table joins.\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500 where\nec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a = ec_t200.a\nand ec_t1000.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500 where\nec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a = ec_t200.a\nand ec_t110.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500 where\nec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a = ec_t200.a\nand ec_t200.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500 where\nec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and 
ec_t500.a = ec_t200.a\nand ec_t500.a > 100;\n\n-- 5 table joins.\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500, ec_t800\nwhere ec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a =\nec_t200.a and ec_t500.a = ec_t800.a and ec_t1000.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500, ec_t800\nwhere ec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a =\nec_t200.a and ec_t500.a = ec_t800.a and ec_t110.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500, ec_t800\nwhere ec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a =\nec_t200.a and ec_t500.a = ec_t800.a and ec_t200.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500, ec_t800\nwhere ec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a =\nec_t200.a and ec_t500.a = ec_t800.a and ec_t500.a > 100;\nexplain analyze select * from ec_t1000, ec_t110 , ec_t200, ec_t500, ec_t800\nwhere ec_t1000.a = ec_t110.a and ec_t110.a = ec_t200.a and ec_t500.a =\nec_t200.a and ec_t500.a = ec_t800.a and ec_t800.a > 100;\n\n| Query Id | Real rows | Est. Rows at master | Est. rows with patched |\ntable # |\n|----------+-----------+---------------------+------------------------+---------|\n| 1 | 10 | 99 | 10 |\n 2 |\n| 2 | 10 | 10 | 10 |\n 2 |\n| 3 | 10 | 20 | 11 |\n 3 |\n| 4 | 10 | 2 | 11 |\n 3 |\n| 5 | 10 | 11 | 11 |\n 3 |\n| 6 | 10 | 10 | 9 |\n 4 |\n| 7 | 10 | 1 | 9 |\n 4 |\n| 8 | 10 | 6 | 9 |\n 4 |\n| 9 | 10 | 9 | 9 |\n 4 |\n| 10 | 10 | 8 | 8 |\n 5 |\n| 11 | 10 | 1 | 8 |\n 5 |\n| 12 | 10 | 5 | 8 |\n 5 |\n| 13 | 10 | 7 | 8 |\n 5 |\n| 14 | 10 | 8 | 8 |\n 5 |\n\n\nIn the past, we can just use the qual user provided to do estimation. As for\nnow, since we introduce the CorrectiveQuals design, we still keep just only\n1\nqual counted, but we can choose the best one in CorrectiveQuals no matter\nwhich\none is provided by the user. 
we gain a better and stable estimation because\nof this.\n\nI'm happy about the overall design but not pretty confident about the\nmethod to\n\"choose the best one to keep\". So I did some test case as many as I can to\nfind\nsomething is wrong, so far so good.\n\nI'm also happy with how to keep only one qual in CorrectiveQuals (not\nchoose the\nbest one). Assume we just have 1 EC filter in this query for simplicity. At\nthe\nbeginning, all the baserel have been impacted by CorrectiveQual. When join 2\nrelations, we rollback 1 side and keep the other one. when we join this\njoinrel\nwith another rel, we rollback 1 side and keep the other one and so forth.\n\n(rollback is not changing some things which we already computed, it is only\nused when estimating size for coming joinrel).\n\nThe patchset can be applied cleanly with\n9e98583898c347e007958c8a09911be2ea4acfb9.", "msg_date": "Thu, 17 Mar 2022 11:13:15 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "Hi,\n\nOn 2022-03-08 21:44:37 +0800, Andy Fan wrote:\n> I have finished the PoC for planning timing improvement and joinrel rows\n> estimation.\n\nThis currently crashes on cfbot:\nhttps://api.cirrus-ci.com/v1/task/6158455839916032/logs/cores.log\nhttps://cirrus-ci.com/task/6158455839916032\n\nAs this is clearly not 15 material, I've set the target version as 16. 
But it\nmight be good to just move the whole entry to the next CF...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:41:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but\n BETWEEN or >= is not?" }, { "msg_contents": "Hi:\n\nThanks for taking care of this.\n\nOn Tue, Mar 22, 2022 at 9:41 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-03-08 21:44:37 +0800, Andy Fan wrote:\n> > I have finished the PoC for planning timing improvement and joinrel rows\n> > estimation.\n>\n> This currently crashes on cfbot:\n> https://api.cirrus-ci.com/v1/task/6158455839916032/logs/cores.log\n> https://cirrus-ci.com/task/6158455839916032\n>\n>\nThe crash happens at my own Assert statement. I assumed we know the\nSelectivity\nfor a RestrictInfo after set_rel_size, however this is not true for foreign\ntables with use_remote_estimate=true. Since we are in a design discussion\nstage,\nI just disable this feature for foreign tables and can fix it later. Would\nthis be the\nright way to go?\n\n\n> As this is clearly not 15 material, I've set the target version as 16. But\n> it\n> might be good to just move the whole entry to the next CF...\n>\n>\nThanks for doing that. I tried but didn't find how to move it to the next\nCF.\n\nHere is the latest code. I have rebased the code with the latest master\na1bc4d3590b.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 24 Mar 2022 10:21:41 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Thu, Mar 24, 2022 at 3:22 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Here is the latest code. 
I have rebased the code with the latest master a1bc4d3590b.\n\nFYI this is failing with an unexpected plan in the partition_join test:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6090435050340352/log/src/test/regress/regression.diffs\n\n\n", "msg_date": "Tue, 17 May 2022 10:51:52 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On Tue, May 17, 2022 at 6:52 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Mar 24, 2022 at 3:22 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Here is the latest code. I have rebased the code with the latest master\n> a1bc4d3590b.\n>\n> FYI this is failing with an unexpected plan in the partition_join test:\n>\n> https://api.cirrus-ci.com/v1/artifact/task/6090435050340352/log/src/test/regress/regression.diffs\n>\n\nThanks. But I will wait to see if anyone will show interest with this. Or\nelse\nMoving alone is not a great experience.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 17 May 2022 10:00:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" }, { "msg_contents": "On 17/5/2022 05:00, Andy Fan wrote:\n> Thanks. 
}, { "msg_contents": "Hi:\n\n\n> cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch.\n>\n\nThank you Ian & Andrey for taking care of this! I am planning to start\na new thread for this topic in 2 weeks, and will post an update patch\nat that time.\n\n-- \nBest Regards\nAndy Fan\n\nHi:  \ncfbot reports the patch no longer applies [1].  As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch. Thank you Ian & Andrey for taking care of this!  I am planning to start a new  thread for this topic in 2 weeks,  and will post an update patchat  that time.  -- Best RegardsAndy Fan", "msg_date": "Tue, 8 Nov 2022 17:04:00 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Condition pushdown: why (=) is pushed down into join, but BETWEEN\n or >= is not?" } ]
[ { "msg_contents": "Hi,\n\nDuring the discussion about memory contexts dumping[1], there\nwas a comment that exposing not only memory contexts but also\nquery plans and untruncated query string would be useful.\n\nI also feel that it would be nice when thinking about situations\nsuch as troubleshooting a long-running query on production\nenvironments where we cannot use debuggers.\n\nAt that point of the above comment, I was considering exposing\nsuch information on the shared memory.\nHowever, since memory contexts are now exposed on the log by\npg_log_backend_memory_contexts(PID), I'm thinking about\ndefining a function that logs the plan of a running query and\nuntruncated query string on the specified PID in the same way\nas below.\n\n postgres=# SELECT * FROM pg_log_current_plan(2155192);\n pg_log_current_plan\n ---------------------\n t\n (1 row)\n\n $ tail -f data/log/postgresql-2021-05-12.log\n\n 2021-05-12 17:37:19.481 JST [2155192] LOG: logging the plan of \nrunning query on PID 2155192\n Query Text: SELECT a.filler FROM pgbench_accounts a JOIN \npgbench_accounts b ON a.aid = b.aid;\n Merge Join (cost=0.85..83357.85 rows=1000000 width=85)\n Merge Cond: (a.aid = b.aid)\n -> Index Scan using pgbench_accounts_pkey on \npgbench_accounts a (cost=0.42..42377.43 rows=1000000 width=89)\n -> Index Only Scan using pgbench_accounts_pkey on \npgbench_accounts b (cost=0.42..25980.42 rows=1000000 width=4)\n\n\nAttached a PoC patch.\n\nAny thoughts?\n\n[1] \nhttps://www.postgresql.org/message-id/CA%2BTgmobkpFV0UB67kzXuD36--OFHwz1bs%3DL_6PZbD4nxKqUQMw%40mail.gmail.com\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 12 May 2021 20:24:04 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "RFC: Logging plan of the running query" }, { "msg_contents": "st 12. 5. 
2021 v 13:24 odesílatel torikoshia <torikoshia@oss.nttdata.com>\nnapsal:\n\n> Hi,\n>\n> During the discussion about memory contexts dumping[1], there\n> was a comment that exposing not only memory contexts but also\n> query plans and untruncated query string would be useful.\n>\n> I also feel that it would be nice when thinking about situations\n> such as troubleshooting a long-running query on production\n> environments where we cannot use debuggers.\n>\n> At that point of the above comment, I was considering exposing\n> such information on the shared memory.\n> However, since memory contexts are now exposed on the log by\n> pg_log_backend_memory_contexts(PID), I'm thinking about\n> defining a function that logs the plan of a running query and\n> untruncated query string on the specified PID in the same way\n> as below.\n>\n> postgres=# SELECT * FROM pg_log_current_plan(2155192);\n> pg_log_current_plan\n> ---------------------\n> t\n> (1 row)\n>\n> $ tail -f data/log/postgresql-2021-05-12.log\n>\n> 2021-05-12 17:37:19.481 JST [2155192] LOG: logging the plan of\nrunning query on PID 2155192\n> Query Text: SELECT a.filler FROM pgbench_accounts a JOIN\npgbench_accounts b ON a.aid = b.aid;\n> Merge Join (cost=0.85..83357.85 rows=1000000 width=85)\n> Merge Cond: (a.aid = b.aid)\n> -> Index Scan using pgbench_accounts_pkey on\npgbench_accounts a (cost=0.42..42377.43 rows=1000000 width=89)\n> -> Index Only Scan using pgbench_accounts_pkey on\npgbench_accounts b (cost=0.42..25980.42 rows=1000000 width=4)\n>\n>\n> Attached a PoC patch.\n>\n> Any thoughts?\n>\n\n+1\n\nPavel\n\n\n> [1]\n>\n> https://www.postgresql.org/message-id/CA%2BTgmobkpFV0UB67kzXuD36--OFHwz1bs%3DL_6PZbD4nxKqUQMw%40mail.gmail.com\n>\n>\n> Regards,\n>\n> --\n> Atsushi Torikoshi\n> NTT DATA CORPORATION", "msg_date": "Wed, 12 May 2021 14:07:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 
Wed, May 12, 2021 at 4:54 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> During the discussion about memory contexts dumping[1], there\n> was a comment that exposing not only memory contexts but also\n> query plans and untruncated query string would be useful.\n>\n> I also feel that it would be nice when thinking about situations\n> such as troubleshooting a long-running query on production\n> environments where we cannot use debuggers.\n>\n> At that point of the above comment, I was considering exposing\n> such information on the shared memory.\n> However, since memory contexts are now exposed on the log by\n> pg_log_backend_memory_contexts(PID), I'm thinking about\n> defining a function that logs the plan of a running query and\n> untruncated query string on the specified PID in the same way\n> as below.\n>\n> postgres=# SELECT * FROM pg_log_current_plan(2155192);\n> pg_log_current_plan\n> ---------------------\n> t\n> (1 row)\n>\n> $ tail -f data/log/postgresql-2021-05-12.log\n>\n> 2021-05-12 17:37:19.481 JST [2155192] LOG: logging the plan of\n> running query on PID 2155192\n> Query Text: SELECT a.filler FROM pgbench_accounts a JOIN\n> pgbench_accounts b ON a.aid = b.aid;\n> Merge Join (cost=0.85..83357.85 rows=1000000 width=85)\n> Merge Cond: (a.aid = b.aid)\n> -> Index Scan using pgbench_accounts_pkey on\n> pgbench_accounts a (cost=0.42..42377.43 rows=1000000 width=89)\n> -> Index Only Scan using pgbench_accounts_pkey on\n> pgbench_accounts b (cost=0.42..25980.42 rows=1000000 width=4)\n>\n>\n> Attached a PoC patch.\n>\n> Any thoughts?\n>\n> [1]\n> https://www.postgresql.org/message-id/CA%2BTgmobkpFV0UB67kzXuD36--OFHwz1bs%3DL_6PZbD4nxKqUQMw%40mail.gmail.com\n\n+1 for the idea. It looks like pg_log_current_plan is allowed to run\nby superusers. 
Since it also shows up the full query text and the plan\nin the server log as plain text, there are chances that the sensitive\ninformation might be logged into the server log which is a risky thing\nfrom security standpoint. There's another thread (see [1] below) which\ndiscusses this issue by having a separate role for all debugging\npurposes. Note that final consensus is not reached yet. We may want to\nuse the same role for this patch as well.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmoZz%3DK1bQRp0Ug%3D6uMGFWg-6kaxdHe6VSWaxq0U-YkppYQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 18:03:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, 12 May 2021 at 13:24, torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> During the discussion about memory contexts dumping[1], there\n> was a comment that exposing not only memory contexts but also\n> query plans and untruncated query string would be useful.\n>\n> I also feel that it would be nice when thinking about situations\n> such as troubleshooting a long-running query on production\n> environments where we cannot use debuggers.\n>\n> At that point of the above comment, I was considering exposing\n> such information on the shared memory.\n> However, since memory contexts are now exposed on the log by\n> pg_log_backend_memory_contexts(PID), I'm thinking about\n> defining a function that logs the plan of a running query and\n> untruncated query string on the specified PID in the same way\n> as below.\n>\n> postgres=# SELECT * FROM pg_log_current_plan(2155192);\n> pg_log_current_plan\n> ---------------------\n> t\n> (1 row)\n>\n> $ tail -f data/log/postgresql-2021-05-12.log\n>\n> 2021-05-12 17:37:19.481 JST [2155192] LOG: logging the plan of\n> running query on PID 
2155192\n> Query Text: SELECT a.filler FROM pgbench_accounts a JOIN\n> pgbench_accounts b ON a.aid = b.aid;\n> Merge Join (cost=0.85..83357.85 rows=1000000 width=85)\n> Merge Cond: (a.aid = b.aid)\n> -> Index Scan using pgbench_accounts_pkey on\n> pgbench_accounts a (cost=0.42..42377.43 rows=1000000 width=89)\n> -> Index Only Scan using pgbench_accounts_pkey on\n> pgbench_accounts b (cost=0.42..25980.42 rows=1000000 width=4)\n>\n>\n> Attached a PoC patch.\n>\n> Any thoughts?\n\nGreat idea. One feature I'd suggest would be adding a 'format' option\nas well, if such feature would be feasable.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 12 May 2021 14:55:14 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, May 12, 2021 at 08:24:04PM +0900, torikoshia wrote:\n> Hi,\n> \n> During the discussion about memory contexts dumping[1], there\n> was a comment that exposing not only memory contexts but also\n> query plans and untruncated query string would be useful.\n> \n> I also feel that it would be nice when thinking about situations\n> such as troubleshooting a long-running query on production\n> environments where we cannot use debuggers.\n> \n> At that point of the above comment, I was considering exposing\n> such information on the shared memory.\n> However, since memory contexts are now exposed on the log by\n> pg_log_backend_memory_contexts(PID), I'm thinking about\n> defining a function that logs the plan of a running query and\n> untruncated query string on the specified PID in the same way\n> as below.\n> \n> postgres=# SELECT * FROM pg_log_current_plan(2155192);\n> pg_log_current_plan\n> ---------------------\n> t\n> (1 row)\n> \n> $ tail -f data/log/postgresql-2021-05-12.log\n> \n> 2021-05-12 17:37:19.481 JST [2155192] LOG: logging the plan of running\n> query on PID 2155192\n> Query Text: SELECT 
a.filler FROM pgbench_accounts a JOIN\n> pgbench_accounts b ON a.aid = b.aid;\n> Merge Join (cost=0.85..83357.85 rows=1000000 width=85)\n> Merge Cond: (a.aid = b.aid)\n> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts a\n> (cost=0.42..42377.43 rows=1000000 width=89)\n> -> Index Only Scan using pgbench_accounts_pkey on\n> pgbench_accounts b (cost=0.42..25980.42 rows=1000000 width=4)\n\nI didn't read the POC patch yet, but +1 for having that feature.\n\n\n", "msg_date": "Wed, 12 May 2021 22:40:46 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, 2021-05-12 at 18:03 +0530, Bharath Rupireddy wrote:\n> On Wed, May 12, 2021 at 4:54 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> > During the discussion about memory contexts dumping[1], there\n> > was a comment that exposing not only memory contexts but also\n> > query plans and untruncated query string would be useful.\n> > \n> > postgres=# SELECT * FROM pg_log_current_plan(2155192);\n> > pg_log_current_plan\n> > ---------------------\n> > t\n> > (1 row)\n> > \n> > $ tail -f data/log/postgresql-2021-05-12.log\n> > \n> > 2021-05-12 17:37:19.481 JST [2155192] LOG: logging the plan of\n> > running query on PID 2155192\n> > Query Text: SELECT a.filler FROM pgbench_accounts a JOIN\n> > pgbench_accounts b ON a.aid = b.aid;\n> > Merge Join (cost=0.85..83357.85 rows=1000000 width=85)\n> > Merge Cond: (a.aid = b.aid)\n> > -> Index Scan using pgbench_accounts_pkey on\n> > pgbench_accounts a (cost=0.42..42377.43 rows=1000000 width=89)\n> > -> Index Only Scan using pgbench_accounts_pkey on\n> > pgbench_accounts b (cost=0.42..25980.42 rows=1000000 width=4)\n\nI love the idea, but I didn't look at the patch.\n\n> Since it also shows up the full query text and the plan\n> in the server log as plain text, there are chances that the sensitive\n> information might be logged into the server log which 
is a risky thing\n> from security standpoint.\n\nI think that is irrelevant.\n\nA superuser can already set \"log_statement = 'all'\" to get this.\nThere is no protection from superusers, and it is pointless to require that.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 12 May 2021 18:08:07 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "Thank you all for your positive comments.\n\nOn 2021-05-12 21:55, Matthias van de Meent wrote:\n\n> Great idea. One feature I'd suggest would be adding a 'format' option\n> as well, if such a feature would be feasible.\n\nThanks for the comment!\n\nDuring the development of pg_log_backend_memory_contexts(), I tried to\nmake the number of contexts to record configurable by making it GUC\nvariable or putting it on the shared memory, but the former seemed an\noverkill and the latter introduced some ugly behaviors, so we decided\nto make it a static number[1].\nI think we face the same difficulty here.\n\nAllowing to select the format would be better as auto_explain does by\nauto_explain.log_format, but I'm a bit doubtful that it is worth the\ncosts.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/6738f309-a41b-cbe6-bb57-a1c58ce9f05a%40oss.nttdata.com#e687d583080c96563b2fa2e32dbf3fb6\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 13 May 2021 17:23:33 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-05-13 01:08, Laurenz Albe wrote:\n> On Wed, 2021-05-12 at 18:03 +0530, Bharath Rupireddy wrote:\n>> Since it also shows up the full query text and the plan\n>> in the server log as plain text, there are chances that the sensitive\n>> information might be logged into the server log which is a risky thing\n>> from security standpoint.\n\nThanks for the 
notification!\n\n> I think that is irrelevant.\n> \n> A superuser can already set \"log_statement = 'all'\" to get this.\n> There is no protection from superusers, and it is pointless to require \n> that.\n\nAFAIU, since that discussion is whether or not users other than \nsuperusers\nshould be given the privilege to execute the backtrace printing \nfunction,\nI think it might be applicable to pg_log_current_plan().\n\nSince restricting privilege to superusers is stricter, I'm going to \nproceed\nas it is for now, but depending on the above discussion, it may be \nbetter to\nchange it.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 13 May 2021 17:26:20 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, May 12, 2021 at 4:54 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> During the discussion about memory contexts dumping[1], there\n> was a comment that exposing not only memory contexts but also\n> query plans and untruncated query string would be useful.\n>\n> I also feel that it would be nice when thinking about situations\n> such as troubleshooting a long-running query on production\n> environments where we cannot use debuggers.\n>\n> At that point of the above comment, I was considering exposing\n> such information on the shared memory.\n> However, since memory contexts are now exposed on the log by\n> pg_log_backend_memory_contexts(PID), I'm thinking about\n> defining a function that logs the plan of a running query and\n> untruncated query string on the specified PID in the same way\n> as below.\n>\n> postgres=# SELECT * FROM pg_log_current_plan(2155192);\n> pg_log_current_plan\n> ---------------------\n> t\n> (1 row)\n\n+1 for the idea. I did not read the complete patch but while reading\nthrough the patch, I noticed that you using elevel as LOG for printing\nthe stack trace. 
But I think the backend whose pid you have passed,\nthe connected client to that backend might not have superuser\nprivileges and if you use elevel LOG then that message will be sent to\nthat connected client as well and I don't think that is secure. So\ncan we use LOG_SERVER_ONLY so that we can prevent\nit from sending to the client.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 14:43:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> +1 for the idea. I did not read the complete patch but while reading\n> through the patch, I noticed that you using elevel as LOG for printing\n> the stack trace. But I think the backend whose pid you have passed,\n> the connected client to that backend might not have superuser\n> privileges and if you use elevel LOG then that message will be sent to\n> that connected client as well and I don't think that is secure. So\n> can we use LOG_SERVER_ONLY so that we can prevent\n> it from sending to the client.\n\nTrue, we should use LOG_SERVER_ONLY and not send any logs to the client.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 14:57:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 2:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Thu, May 13, 2021 at 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > +1 for the idea. I did not read the complete patch but while reading\n> > through the patch, I noticed that you using elevel as LOG for printing\n> > the stack trace. 
But I think the backend whose pid you have passed,\n> > the connected client to that backend might not have superuser\n> > privileges and if you use elevel LOG then that message will be sent to\n> > that connected client as well and I don't think that is secure. So\n> > can we use LOG_SERVER_ONLY so that we can prevent\n> > it from sending to the client.\n>\n> True, we should use LOG_SERVER_ONLY and not send any logs to the client.\n\nI further tend to think that, is it correct to log queries with LOG\nlevel when log_statement GUC is set? Or should it also be\nLOG_SERVER_ONLY?\n\n /* Log immediately if dictated by log_statement */\n if (check_log_statement(parsetree_list))\n {\n ereport(LOG,\n (errmsg(\"statement: %s\", query_string),\n errhidestmt(true),\n errdetail_execute(parsetree_list)));\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 15:06:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 1:56 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> On 2021-05-13 01:08, Laurenz Albe wrote:\n> > On Wed, 2021-05-12 at 18:03 +0530, Bharath Rupireddy wrote:\n> >> Since it also shows up the full query text and the plan\n> >> in the server log as plain text, there are chances that the sensitive\n> >> information might be logged into the server log which is a risky thing\n> >> from security standpoint.\n>\n> Thanks for the notification!\n>\n> > I think that is irrelevant.\n> >\n> > A superuser can already set \"log_statement = 'all'\" to get this.\n> > There is no protection from superusers, and it is pointless to require\n> > that.\n>\n> AFAIU, since that discussion is whether or not users other than\n> superusers\n> should be given the privilege to execute the backtrace printing\n> function,\n> I think it might be applicable to 
pg_log_current_plan().\n>\n> Since restricting privilege to superusers is stricter, I'm going to\n> proceed\n> as it is for now, but depending on the above discussion, it may be\n> better to\n> change it.\n\nYeah, we can keep it as superuser-only for now.\n\nMight be unrelated, but just for info - there's another thread\n\"Granting control of SUSET gucs to non-superusers\" at [1] discussing\nthe new roles.\n\n[1] - https://www.postgresql.org/message-id/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 15:08:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 3:06 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 2:57 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Thu, May 13, 2021 at 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > +1 for the idea. I did not read the complete patch but while reading\n> > > through the patch, I noticed that you using elevel as LOG for printing\n> > > the stack trace. But I think the backend whose pid you have passed,\n> > > the connected client to that backend might not have superuser\n> > > privileges and if you use elevel LOG then that message will be sent to\n> > > that connected client as well and I don't think that is secure. So\n> > > can we use LOG_SERVER_ONLY so that we can prevent\n> > > it from sending to the client.\n> >\n> > True, we should use LOG_SERVER_ONLY and not send any logs to the client.\n>\n> I further tend to think that, is it correct to log queries with LOG\n> level when log_statement GUC is set? 
Or should it also be\n> LOG_SERVER_ONLY?\n>\n> /* Log immediately if dictated by log_statement */\n> if (check_log_statement(parsetree_list))\n> {\n> ereport(LOG,\n> (errmsg(\"statement: %s\", query_string),\n> errhidestmt(true),\n> errdetail_execute(parsetree_list)));\n>\n\nWhat is your argument behind logging it with LOG? I mean we are\nsending the signal to all the backend and some backend might have the\nclient who is not connected as a superuser so sending the plan to\nthose clients is not a good idea from a security perspective.\nAnyways, LOG_SERVER_ONLY is not an exposed logging level it is used\nfor an internal purpose. So IMHO it should be logged with\nLOG_SERVER_ONLY level.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 15:19:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-05-13 18:36, Bharath Rupireddy wrote:\n> On Thu, May 13, 2021 at 2:57 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> On Thu, May 13, 2021 at 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> \n>> wrote:\n>> > +1 for the idea. I did not read the complete patch but while reading\n>> > through the patch, I noticed that you using elevel as LOG for printing\n>> > the stack trace. But I think the backend whose pid you have passed,\n>> > the connected client to that backend might not have superuser\n>> > privileges and if you use elevel LOG then that message will be sent to\n>> > that connected client as well and I don't think that is secure. 
So\n>> > can we use LOG_SERVER_ONLY so that we can prevent\n>> > it from sending to the client.\n>> \n>> True, we should use LOG_SERVER_ONLY and not send any logs to the \n>> client.\n\nThanks, agree with changing it to LOG_SERVER_ONLY.\n\n> I further tend to think that, is it correct to log queries with LOG\n> level when log_statement GUC is set? Or should it also be\n> LOG_SERVER_ONLY?\n\nI feel it's OK to log with LOG_SERVER_ONLY since the log from\nlog_statement GUC would be printed already and independently.\nISTM people don't expect to log_statement GUC works even on\npg_log_current_plan(), do they?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 13 May 2021 19:12:39 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 3:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 3:06 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 2:57 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > On Thu, May 13, 2021 at 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > +1 for the idea. I did not read the complete patch but while reading\n> > > > through the patch, I noticed that you using elevel as LOG for printing\n> > > > the stack trace. But I think the backend whose pid you have passed,\n> > > > the connected client to that backend might not have superuser\n> > > > privileges and if you use elevel LOG then that message will be sent to\n> > > > that connected client as well and I don't think that is secure. 
So\n> > > > can we use LOG_SERVER_ONLY so that we can prevent\n> > > > it from sending to the client.\n> > >\n> > > True, we should use LOG_SERVER_ONLY and not send any logs to the client.\n> >\n> > I further tend to think that, is it correct to log queries with LOG\n> > level when log_statement GUC is set? Or should it also be\n> > LOG_SERVER_ONLY?\n> >\n> > /* Log immediately if dictated by log_statement */\n> > if (check_log_statement(parsetree_list))\n> > {\n> > ereport(LOG,\n> > (errmsg(\"statement: %s\", query_string),\n> > errhidestmt(true),\n> > errdetail_execute(parsetree_list)));\n>\n> What is your argument behind logging it with LOG? I mean we are\n> sending the signal to all the backend and some backend might have the\n> client who is not connected as a superuser so sending the plan to\n> those clients is not a good idea from a security perspective.\n> Anyways, LOG_SERVER_ONLY is not an exposed logging level it is used\n> for an internal purpose. So IMHO it should be logged with\n> LOG_SERVER_ONLY level.\n\nI'm saying that - currently, queries are logged with LOG level when\nthe log_statement GUC is set. The queries might be sent to the\nnon-superuser clients. So, your point of \"sending the plan to those\nclients is not a good idea from a security perspective\" gets violated\nright? Should the log level be changed(in the below code) from \"LOG\"\nto \"LOG_SERVER_ONLY\"? 
I think we can discuss this separately so as not\nto sidetrack the main feature.\n\n /* Log immediately if dictated by log_statement */\n if (check_log_statement(parsetree_list))\n {\n ereport(LOG,\n (errmsg(\"statement: %s\", query_string),\n errhidestmt(true),\n errdetail_execute(parsetree_list)));\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 16:16:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 4:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I'm saying that - currently, queries are logged with LOG level when\n> the log_statement GUC is set. The queries might be sent to the\n> non-superuser clients. So, your point of \"sending the plan to those\n> clients is not a good idea from a security perspective\" gets violated\n> right? Should the log level be changed(in the below code) from \"LOG\"\n> to \"LOG_SERVER_ONLY\"? 
I think we can discuss this separately so as not\n> to sidetrack the main feature.\n>\n> /* Log immediately if dictated by log_statement */\n> if (check_log_statement(parsetree_list))\n> {\n> ereport(LOG,\n> (errmsg(\"statement: %s\", query_string),\n> errhidestmt(true),\n> errdetail_execute(parsetree_list)));\n>\n\nYes, that was my exact point, that in this particular code log with\nLOG_SERVER_ONLY.\n\nLike this.\n /* Log immediately if dictated by log_statement */\n if (check_log_statement(parsetree_list))\n {\n ereport(LOG_SERVER_ONLY,\n.....\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 17:13:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 4:16 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I'm saying that - currently, queries are logged with LOG level when\n> > the log_statement GUC is set. The queries might be sent to the\n> > non-superuser clients. So, your point of \"sending the plan to those\n> > clients is not a good idea from a security perspective\" gets violated\n> > right? Should the log level be changed(in the below code) from \"LOG\"\n> > to \"LOG_SERVER_ONLY\"? 
I think we can discuss this separately so as not\n> > to sidetrack the main feature.\n> >\n> > /* Log immediately if dictated by log_statement */\n> > if (check_log_statement(parsetree_list))\n> > {\n> > ereport(LOG,\n> > (errmsg(\"statement: %s\", query_string),\n> > errhidestmt(true),\n> > errdetail_execute(parsetree_list)));\n> >\n>\n> Yes, that was my exact point, that in this particular code log with\n> LOG_SERVER_ONLY.\n>\n> Like this.\n> /* Log immediately if dictated by log_statement */\n> if (check_log_statement(parsetree_list))\n> {\n> ereport(LOG_SERVER_ONLY,\n\nAgree, but let's discuss that in a separate thread.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 17:15:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 5:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 4:16 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > I'm saying that - currently, queries are logged with LOG level when\n> > > the log_statement GUC is set. The queries might be sent to the\n> > > non-superuser clients. So, your point of \"sending the plan to those\n> > > clients is not a good idea from a security perspective\" gets violated\n> > > right? Should the log level be changed(in the below code) from \"LOG\"\n> > > to \"LOG_SERVER_ONLY\"? 
I think we can discuss this separately so as not\n> > > to sidetrack the main feature.\n> > >\n> > > /* Log immediately if dictated by log_statement */\n> > > if (check_log_statement(parsetree_list))\n> > > {\n> > > ereport(LOG,\n> > > (errmsg(\"statement: %s\", query_string),\n> > > errhidestmt(true),\n> > > errdetail_execute(parsetree_list)));\n> > >\n> >\n> > Yes, that was my exact point, that in this particular code log with\n> > LOG_SERVER_ONLY.\n> >\n> > Like this.\n> > /* Log immediately if dictated by log_statement */\n> > if (check_log_statement(parsetree_list))\n> > {\n> > ereport(LOG_SERVER_ONLY,\n>\n> Agree, but let's discuss that in a separate thread.\n\nDid not understand why separate thread? this is part of this thread\nno? but anyways now everyone agreed that we will log with\nLOG_SERVER_ONLY.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 17:18:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Thu, May 13, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, May 13, 2021 at 4:16 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > I'm saying that - currently, queries are logged with LOG level when\n> > > > the log_statement GUC is set. The queries might be sent to the\n> > > > non-superuser clients. So, your point of \"sending the plan to those\n> > > > clients is not a good idea from a security perspective\" gets violated\n> > > > right? Should the log level be changed(in the below code) from \"LOG\"\n> > > > to \"LOG_SERVER_ONLY\"? 
I think we can discuss this separately so as not\n> > > > to sidetrack the main feature.\n> > > >\n> > > > /* Log immediately if dictated by log_statement */\n> > > > if (check_log_statement(parsetree_list))\n> > > > {\n> > > > ereport(LOG,\n> > > > (errmsg(\"statement: %s\", query_string),\n> > > > errhidestmt(true),\n> > > > errdetail_execute(parsetree_list)));\n> > > >\n> > >\n> > > Yes, that was my exact point, that in this particular code log with\n> > > LOG_SERVER_ONLY.\n> > >\n> > > Like this.\n> > > /* Log immediately if dictated by log_statement */\n> > > if (check_log_statement(parsetree_list))\n> > > {\n> > > ereport(LOG_SERVER_ONLY,\n> >\n> > Agree, but let's discuss that in a separate thread.\n>\n> Did not understand why separate thread? this is part of this thread\n> no? but anyways now everyone agreed that we will log with\n> LOG_SERVER_ONLY.\n\nBharat offlist pointed to me that here he was talking about another\nlog that is logging the query and not specific to this patch, so let's\nnot discuss this point here.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 18:27:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-05-13 21:57, Dilip Kumar wrote:\n> On Thu, May 13, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> \n> wrote:\n>> \n>> On Thu, May 13, 2021 at 5:15 PM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> >\n>> > On Thu, May 13, 2021 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > >\n>> > > On Thu, May 13, 2021 at 4:16 PM Bharath Rupireddy\n>> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > > >\n>> > > > I'm saying that - currently, queries are logged with LOG level when\n>> > > > the log_statement GUC is set. The queries might be sent to the\n>> > > > non-superuser clients. 
So, your point of \"sending the plan to those\n>> > > > clients is not a good idea from a security perspective\" gets violated\n>> > > > right? Should the log level be changed(in the below code) from \"LOG\"\n>> > > > to \"LOG_SERVER_ONLY\"? I think we can discuss this separately so as not\n>> > > > to sidetrack the main feature.\n>> > > >\n>> > > > /* Log immediately if dictated by log_statement */\n>> > > > if (check_log_statement(parsetree_list))\n>> > > > {\n>> > > > ereport(LOG,\n>> > > > (errmsg(\"statement: %s\", query_string),\n>> > > > errhidestmt(true),\n>> > > > errdetail_execute(parsetree_list)));\n>> > > >\n>> > >\n>> > > Yes, that was my exact point, that in this particular code log with\n>> > > LOG_SERVER_ONLY.\n>> > >\n>> > > Like this.\n>> > > /* Log immediately if dictated by log_statement */\n>> > > if (check_log_statement(parsetree_list))\n>> > > {\n>> > > ereport(LOG_SERVER_ONLY,\n>> >\n>> > Agree, but let's discuss that in a separate thread.\n>> \n>> Did not understand why separate thread? this is part of this thread\n>> no? 
but anyways now everyone agreed that we will log with\n>> LOG_SERVER_ONLY.\n\nModified elevel from LOG to LOG_SERVER_ONLY.\n\nI also modified the patch to log JIT Summary and GUC settings \ninformation.\nIf there is other useful information to log, I would appreciate it if \nyou could point it out.\n\n> Bharat offlist pointed to me that here he was talking about another\n> log that is logging the query and not specific to this patch, so let's\n> not discuss this point here.\n\nThanks for sharing the situation!\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Fri, 28 May 2021 15:51:37 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-05-28 15:51, torikoshia wrote:\n> On 2021-05-13 21:57, Dilip Kumar wrote:\n>> On Thu, May 13, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> \n>> wrote:\n>>> \n>>> On Thu, May 13, 2021 at 5:15 PM Bharath Rupireddy\n>>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>> >\n>>> > On Thu, May 13, 2021 at 5:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>> > >\n>>> > > On Thu, May 13, 2021 at 4:16 PM Bharath Rupireddy\n>>> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>> > > >\n>>> > > > I'm saying that - currently, queries are logged with LOG level when\n>>> > > > the log_statement GUC is set. The queries might be sent to the\n>>> > > > non-superuser clients. So, your point of \"sending the plan to those\n>>> > > > clients is not a good idea from a security perspective\" gets violated\n>>> > > > right? Should the log level be changed(in the below code) from \"LOG\"\n>>> > > > to \"LOG_SERVER_ONLY\"? 
I think we can discuss this separately so as not\n>>> > > > to sidetrack the main feature.\n>>> > > >\n>>> > > > /* Log immediately if dictated by log_statement */\n>>> > > > if (check_log_statement(parsetree_list))\n>>> > > > {\n>>> > > > ereport(LOG,\n>>> > > > (errmsg(\"statement: %s\", query_string),\n>>> > > > errhidestmt(true),\n>>> > > > errdetail_execute(parsetree_list)));\n>>> > > >\n>>> > >\n>>> > > Yes, that was my exact point, that in this particular code log with\n>>> > > LOG_SERVER_ONLY.\n>>> > >\n>>> > > Like this.\n>>> > > /* Log immediately if dictated by log_statement */\n>>> > > if (check_log_statement(parsetree_list))\n>>> > > {\n>>> > > ereport(LOG_SERVER_ONLY,\n>>> >\n>>> > Agree, but let's discuss that in a separate thread.\n>>> \n>>> Did not understand why separate thread? this is part of this thread\n>>> no? but anyways now everyone agreed that we will log with\n>>> LOG_SERVER_ONLY.\n> \n> Modified elevel from LOG to LOG_SERVER_ONLY.\n> \n> I also modified the patch to log JIT Summary and GUC settings \n> information.\n> If there is other useful information to log, I would appreciate it if\n> you could point it out.\n\nUpdated the patch.\n\n- reordered superuser check which was pointed out in another thread[1]\n- added a regression test\n\n[1] https://postgr.es/m/YLxw1uVGIAP5uMPl@paquier.xyz\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 09 Jun 2021 16:44:55 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "\n\nOn 2021/06/09 16:44, torikoshia wrote:\n> Updated the patch.\n\nThanks for updating the patch!\n\nauto_explain can log the plan of even nested statement\nif auto_explain.log_nested_statements is enabled.\nBut ISTM that pg_log_current_plan() cannot log that plan.\nIs this intentional?\nI think that it's better to make pg_log_current_plan() log\nthe plan of even nested 
statement.\n\n\n+\tes->format = EXPLAIN_FORMAT_TEXT;\n+\tes->settings = true;\n\nSince pg_log_current_plan() is usually used to investigate and\ntrouble-shoot the long running queries, IMO it's better to\nenable es->verbose by default and get additional information\nabout the queries. Thought?\n\n\n+ * pg_log_current_plan\n+ *\t\tSignal a backend process to log plan the of running query.\n\n\"plan the of\" is typo?\n\n\n+ * Only superusers are allowed to signal to log plan because any users to\n+ * issue this request at an unbounded rate would cause lots of log messages\n+ * and which can lead to denial of service.\n\n\"because any users\" should be \"because allowing any users\"\nlike the comment for pg_log_backend_memory_contexts()?\n\n\n+ * All the actual work is deferred to ProcessLogExplainInterrupt(),\n\n\"ProcessLogExplainInterrupt()\" should be \"ProcessLogCurrentPlanInterrupt()\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 9 Jun 2021 23:04:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-06-09 23:04, Fujii Masao wrote:\n\nThanks for your review!\n\n> auto_explain can log the plan of even nested statement\n> if auto_explain.log_nested_statements is enabled.\n> But ISTM that pg_log_current_plan() cannot log that plan.\n> Is this intentional?\n\n> I think that it's better to make pg_log_current_plan() log\n> the plan of even nested statement.\n\n+1. 
It would be better.\nBut currently plan information is got from ActivePortal and ISTM there \nare no easy way to retrieve plan information of nested statements from \nActivePortal.\nAnyway I'll do some more research.\n\n\nI think you are right about the following comments.\nI'll fix them.\n\n> +\tes->format = EXPLAIN_FORMAT_TEXT;\n> +\tes->settings = true;\n> \n> Since pg_log_current_plan() is usually used to investigate and\n> trouble-shoot the long running queries, IMO it's better to\n> enable es->verbose by default and get additional information\n> about the queries. Thought?\n> + * pg_log_current_plan\n> + *\t\tSignal a backend process to log plan the of running query.\n> \n> \"plan the of\" is typo?\n> \n> \n> + * Only superusers are allowed to signal to log plan because any users \n> to\n> + * issue this request at an unbounded rate would cause lots of log \n> messages\n> + * and which can lead to denial of service.\n> \n> \"because any users\" should be \"because allowing any users\"\n> like the comment for pg_log_backend_memory_contexts()?\n> \n> \n> + * All the actual work is deferred to ProcessLogExplainInterrupt(),\n> \n> \"ProcessLogExplainInterrupt()\" should be \n> \"ProcessLogCurrentPlanInterrupt()\"?\n> \n> Regards,\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 10 Jun 2021 11:09:22 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, Jun 9, 2021 at 1:14 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> Updated the patch.\n\nThanks for the patch. 
Here are some comments on v3 patch:\n\n1) We could just say \"Requests to log query plan of the presently\nrunning query of a given backend along with an untruncated query\nstring in the server logs.\"\nInstead of\n+ They will be logged at <literal>LOG</literal> message level and\n+ will appear in the server log based on the log\n+ configuration set (See <xref linkend=\"runtime-config-logging\"/>\n\n2) It's better to do below, for reference see how pg_backend_pid,\npg_terminate_backend, pg_relpages and so on are used in the tests.\n+SELECT pg_log_current_plan(pg_backend_pid());\nrather than using the function in the FROM clause.\n+SELECT * FROM pg_log_current_plan(pg_backend_pid());\nIf okay, also change it for pg_log_backend_memory_contexts.\n\n3) Since most of the code looks same in pg_log_backend_memory_contexts\nand pg_log_current_plan, I think we can have a common function\nsomething like below:\nbool\nSendProcSignalForLogInfo(ProcSignalReason reason)\n{\nAssert(reason == PROCSIG_LOG_MEMORY_CONTEXT || reason ==\nPROCSIG_LOG_CURRENT_PLAN);\n\nif (!superuser())\n{\nif (reason == PROCSIG_LOG_MEMORY_CONTEXT)\nerrmsg(\"must be a superuser to log memory contexts\")\nelse if (reason == PROCSIG_LOG_CURRENT_PLAN)\nerrmsg(\"must be a superuser to log plan of the running query\")\n}\n\nif (SendProcSignal(pid, reason, proc->backendId) < 0)\n{\n}\n}\nThen we could just do:\nDatum\npg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n{\nPG_RETURN_BOOL(SendProcSignalForLogInfo(PROCSIG_LOG_MEMORY_CONTEXT));\n}\nDatum\npg_log_current_plan(PG_FUNCTION_ARGS)\n{\nPG_RETURN_BOOL(SendProcSignalForLogInfo(PROCSIG_LOG_CURRENT_PLAN));\n}\nWe can have SendProcSignalForLogInfo function defined in procsignal.c\nand procsignal.h\n\n4) I think we can have a sample function usage and how it returns true\nvalue, how the plan looks for a simple query(select 1 or some other\nsimple/complex generic query or simply select\npg_log_current_plan(pg_backend_pid());) in the documentation, much\nlike 
pg_log_backend_memory_contexts.\n\n5) Instead of just showing the true return value of the function\npg_log_current_plan in the sql test, which just shows that the signal\nis sent, but it doesn't mean that the backend has processed that\nsignal and logged the plan. I think we can add the test using TAP\nframework, one\n\n6) Do we unnecessarily need to signal the processes such as autovacuum\nlauncher/workers, logical replication launcher/workers, wal senders,\nwal receivers and so on. only to emit a \"PID %d is not executing\nqueries now\" message? Moreover, those processes will be waiting in\nloops for timeouts to occur, then as soon as they wake up do they need\nto process this extra uninformative signal?\nWe could choose to not signal those processes at all which might or\nmight not be possible.\nOtherwise, we could just emit messages like \"XXXX process cannot run a\nquery\" in ProcessInterrupts.\n\n7)Instead of\n(errmsg(\"logging plan of the running query on PID %d\\n%s\",\nhow about below?\n(errmsg(\"plan of the query running on backend with PID %d is:\\n%s\",\n\n8) Instead of\nerrmsg(\"PID %d is not executing queries now\")\nhow about below?\nerrmsg(\"Backend with PID %d is not running a query\")\n\n9) We could just do:\nvoid\nProcessLogCurrentPlanInterrupt(void)\n{\nExplainState *es;\nLogCurrentPlanPending = false;\nif (!ActivePortal || !ActivePortal->queryDesc)\nerrmsg(\"PID %d is not executing queries now\");\nes = NewExplainState();\nExplainQueryText();\nExplainPrintPlan();\n\n10) How about renaming the function pg_log_current_plan to\npg_log_query_plan or pg_log_current_query_plan?\n\n11) What happens if pg_log_current_plan is called for a parallel worker?\n\nWith Regards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 10 Jun 2021 21:50:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-06-11 01:20, Bharath 
Rupireddy wrote:\n\nThanks for your review!\n\n> On Wed, Jun 9, 2021 at 1:14 PM torikoshia <torikoshia@oss.nttdata.com> \n> wrote:\n>> Updated the patch.\n> \n> Thanks for the patch. Here are some comments on v3 patch:\n> \n> 1) We could just say \"Requests to log query plan of the presently\n> running query of a given backend along with an untruncated query\n> string in the server logs.\"\n> Instead of\n> + They will be logged at <literal>LOG</literal> message level \n> and\n> + will appear in the server log based on the log\n> + configuration set (See <xref \n> linkend=\"runtime-config-logging\"/>\n\nActually this explanation is almost the same as that of\npg_log_backend_memory_contexts().\nDo you think we should change both of them?\nI think it may be too detailed but accurate.\n\n> 2) It's better to do below, for reference see how pg_backend_pid,\n> pg_terminate_backend, pg_relpages and so on are used in the tests.\n> +SELECT pg_log_current_plan(pg_backend_pid());\n> rather than using the function in the FROM clause.\n> +SELECT * FROM pg_log_current_plan(pg_backend_pid());\n> If okay, also change it for pg_log_backend_memory_contexts.\n\nAgreed.\n\n> 3) Since most of the code looks same in pg_log_backend_memory_contexts\n> and pg_log_current_plan, I think we can have a common function\n> something like below:\n\nAgreed. 
I'll do some refactoring.\n\n> bool\n> SendProcSignalForLogInfo(ProcSignalReason reason)\n> {\n> Assert(reason == PROCSIG_LOG_MEMORY_CONTEXT || reason ==\n> PROCSIG_LOG_CURRENT_PLAN);\n> \n> if (!superuser())\n> {\n> if (reason == PROCSIG_LOG_MEMORY_CONTEXT)\n> errmsg(\"must be a superuser to log memory contexts\")\n> else if (reason == PROCSIG_LOG_CURRENT_PLAN)\n> errmsg(\"must be a superuser to log plan of the running query\")\n> }\n> \n> if (SendProcSignal(pid, reason, proc->backendId) < 0)\n> {\n> }\n> }\n> Then we could just do:\n> Datum\n> pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_BOOL(SendProcSignalForLogInfo(PROCSIG_LOG_MEMORY_CONTEXT));\n> }\n> Datum\n> pg_log_current_plan(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_BOOL(SendProcSignalForLogInfo(PROCSIG_LOG_CURRENT_PLAN));\n> }\n> We can have SendProcSignalForLogInfo function defined in procsignal.c\n> and procsignal.h\n> \n> 4) I think we can have a sample function usage and how it returns true\n> value, how the plan looks for a simple query(select 1 or some other\n> simple/complex generic query or simply select\n> pg_log_current_plan(pg_backend_pid());) in the documentation, much\n> like pg_log_backend_memory_contexts.\n\n+1.\n\n> 5) Instead of just showing the true return value of the function\n> pg_log_current_plan in the sql test, which just shows that the signal\n> is sent, but it doesn't mean that the backend has processed that\n> signal and logged the plan. I think we can add the test using TAP\n> framework, one\n\nI once made a tap test for pg_log_backend_memory_contexts(), but it\nseemed an overkill and we agreed that it was appropriate just ensuring\nthe function working as below discussion.\n\n \nhttps://www.postgresql.org/message-id/bbecd722d4f8e261b350186ac4bf68a7%40oss.nttdata.com\n\n> 6) Do we unnecessarily need to signal the processes such as autovacuum\n> launcher/workers, logical replication launcher/workers, wal senders,\n> wal receivers and so on. 
only to emit a \"PID %d is not executing\n> queries now\" message? Moreover, those processes will be waiting in\n> loops for timeouts to occur, then as soon as they wake up do they need\n> to process this extra uninformative signal?\n> We could choose to not signal those processes at all which might or\n> might not be possible.\n> Otherwise, we could just emit messages like \"XXXX process cannot run a\n> query\" in ProcessInterrupts.\n\nAgreed.\n\nHowever it needs to distinguish backends which can execute queries and\nother processes such as autovacuum launcher, I don't come up with\neasy ways to do so.\nI'm going to think about it.\n\n> 7)Instead of\n> (errmsg(\"logging plan of the running query on PID %d\\n%s\",\n> how about below?\n> (errmsg(\"plan of the query running on backend with PID %d is:\\n%s\",\n\n+1.\n\n> 8) Instead of\n> errmsg(\"PID %d is not executing queries now\")\n> how about below?\n> errmsg(\"Backend with PID %d is not running a query\")\n\n+1.\n\n> \n> 9) We could just do:\n> void\n> ProcessLogCurrentPlanInterrupt(void)\n> {\n> ExplainState *es;\n> LogCurrentPlanPending = false;\n> if (!ActivePortal || !ActivePortal->queryDesc)\n> errmsg(\"PID %d is not executing queries now\");\n> es = NewExplainState();\n> ExplainQueryText();\n> ExplainPrintPlan();\n> \n> 10) How about renaming the function pg_log_current_plan to\n> pg_log_query_plan or pg_log_current_query_plan?\n\n+1.\n\n> 11) What happens if pg_log_current_plan is called for a parallel \n> worker?\n\nAFAIU Parallel worker doesn't have ActivePortal, so it would always\nemit the message 'PID %d is not executing queries now'.\nAs 6), it would be better to distinguish the parallel worker and normal\nbackend.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 14 Jun 2021 21:18:51 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Mon, Jun 14, 
2021 at 5:48 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> > 1) We could just say \"Requests to log query plan of the presently\n> > running query of a given backend along with an untruncated query\n> > string in the server logs.\"\n> > Instead of\n> > + They will be logged at <literal>LOG</literal> message level\n> > and\n> > + will appear in the server log based on the log\n> > + configuration set (See <xref\n> > linkend=\"runtime-config-logging\"/>\n>\n> Actually this explanation is almost the same as that of\n> pg_log_backend_memory_contexts().\n> Do you think we should change both of them?\n> I think it may be too detailed but accurate.\n\nI withdraw my comment. We can keep the explanation similar to\npg_log_backend_memory_contexts as it was agreed upon and committed\ntext. If the wordings are similar, then it will be easier for users to\nunderstand the documentation.\n\n> > 5) Instead of just showing the true return value of the function\n> > pg_log_current_plan in the sql test, which just shows that the signal\n> > is sent, but it doesn't mean that the backend has processed that\n> > signal and logged the plan. I think we can add the test using TAP\n> > framework, one\n>\n> I once made a tap test for pg_log_backend_memory_contexts(), but it\n> seemed an overkill and we agreed that it was appropriate just ensuring\n> the function working as below discussion.\n>\n> https://www.postgresql.org/message-id/bbecd722d4f8e261b350186ac4bf68a7%40oss.nttdata.com\n\nOkay. I withdraw my comment.\n\n> > 6) Do we unnecessarily need to signal the processes such as autovacuum\n> > launcher/workers, logical replication launcher/workers, wal senders,\n> > wal receivers and so on. only to emit a \"PID %d is not executing\n> > queries now\" message? 
Moreover, those processes will be waiting in\n> > loops for timeouts to occur, then as soon as they wake up do they need\n> > to process this extra uninformative signal?\n> > We could choose to not signal those processes at all which might or\n> > might not be possible.\n> > Otherwise, we could just emit messages like \"XXXX process cannot run a\n> > query\" in ProcessInterrupts.\n>\n> Agreed.\n>\n> However it needs to distinguish backends which can execute queries and\n> other processes such as autovacuum launcher, I don't come up with\n> easy ways to do so.\n> I'm going to think about it.\n\nI'm not sure if there is any information in the shared memory\naccessible to all the backends/sessions that can say a PID is\nautovacuum launcher/worker, logical replication launcher/worker or any\nother background or parallel worker. If we were to invent a new\nmechanism just for addressing the above comment, I would rather choose\nto not do that as it seems like an overkill. We can leave it up to the\nuser whether or not to unnecessarily signal those processes which are\nbound to print \"PID XXX is not executing queries now\" message.\n\n> > 11) What happens if pg_log_current_plan is called for a parallel\n> > worker?\n>\n> AFAIU Parallel worker doesn't have ActivePortal, so it would always\n> emit the message 'PID %d is not executing queries now'.\n> As 6), it would be better to distinguish the parallel worker and normal\n> backend.\n\nAs I said, above, I think it will be a bit tough to do. 
If done, it\nseems like an overkill.\n\nWith Regards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 15 Jun 2021 09:57:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-06-15 13:27, Bharath Rupireddy wrote:\n> On Mon, Jun 14, 2021 at 5:48 PM torikoshia <torikoshia@oss.nttdata.com> \n> wrote:\n>> > 1) We could just say \"Requests to log query plan of the presently\n>> > running query of a given backend along with an untruncated query\n>> > string in the server logs.\"\n>> > Instead of\n>> > + They will be logged at <literal>LOG</literal> message level\n>> > and\n>> > + will appear in the server log based on the log\n>> > + configuration set (See <xref\n>> > linkend=\"runtime-config-logging\"/>\n>> \n>> Actually this explanation is almost the same as that of\n>> pg_log_backend_memory_contexts().\n>> Do you think we should change both of them?\n>> I think it may be too detailed but accurate.\n> \n> I withdraw my comment. We can keep the explanation similar to\n> pg_log_backend_memory_contexts as it was agreed upon and committed\n> text. If the wordings are similar, then it will be easier for users to\n> understand the documentation.\n> \n>> > 5) Instead of just showing the true return value of the function\n>> > pg_log_current_plan in the sql test, which just shows that the signal\n>> > is sent, but it doesn't mean that the backend has processed that\n>> > signal and logged the plan. I think we can add the test using TAP\n>> > framework, one\n>> \n>> I once made a tap test for pg_log_backend_memory_contexts(), but it\n>> seemed an overkill and we agreed that it was appropriate just ensuring\n>> the function working as below discussion.\n>> \n>> https://www.postgresql.org/message-id/bbecd722d4f8e261b350186ac4bf68a7%40oss.nttdata.com\n> \n> Okay. 
I withdraw my comment.\n> \n>> > 6) Do we unnecessarily need to signal the processes such as autovacuum\n>> > launcher/workers, logical replication launcher/workers, wal senders,\n>> > wal receivers and so on. only to emit a \"PID %d is not executing\n>> > queries now\" message? Moreover, those processes will be waiting in\n>> > loops for timeouts to occur, then as soon as they wake up do they need\n>> > to process this extra uninformative signal?\n>> > We could choose to not signal those processes at all which might or\n>> > might not be possible.\n>> > Otherwise, we could just emit messages like \"XXXX process cannot run a\n>> > query\" in ProcessInterrupts.\n>> \n>> Agreed.\n>> \n>> However it needs to distinguish backends which can execute queries and\n>> other processes such as autovacuum launcher, I don't come up with\n>> easy ways to do so.\n>> I'm going to think about it.\n> \n> I'm not sure if there is any information in the shared memory\n> accessible to all the backends/sessions that can say a PID is\n> autovacuum launcher/worker, logical replication launcher/worker or any\n> other background or parallel worker.\n\nAs far as I looked around, there seems no easy ways to do so.\n\n> If we were to invent a new\n> mechanism just for addressing the above comment, I would rather choose\n> to not do that as it seems like an overkill. We can leave it up to the\n> user whether or not to unnecessarily signal those processes which are\n> bound to print \"PID XXX is not executing queries now\" message.\n\n+1. 
I'm going to proceed in this direction.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 16 Jun 2021 20:36:08 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-06-16 20:36, torikoshia wrote:\n>> other background or parallel worker.\n> \n> As far as I looked around, there seems no easy ways to do so.\n> \n>> If we were to invent a new\n>> mechanism just for addressing the above comment, I would rather choose\n>> to not do that as it seems like an overkill. We can leave it up to the\n>> user whether or not to unnecessarily signal those processes which are\n>> bound to print \"PID XXX is not executing queries now\" message.\n> \n> +1. I'm going to proceed in this direction.\n\nUpdated the patch.\n\n\nOn Thu, Jun 10, 2021 at 11:09 AM torikoshia <torikoshia@oss.nttdata.com> \nwrote:\n> On 2021-06-09 23:04, Fujii Masao wrote:\n\n> > auto_explain can log the plan of even nested statement\n> > if auto_explain.log_nested_statements is enabled.\n> > But ISTM that pg_log_current_plan() cannot log that plan.\n> > Is this intentional?\n> \n> > I think that it's better to make pg_log_current_plan() log\n> > the plan of even nested statement.\n> \n> +1. 
It would be better.\n> But currently plan information is got from ActivePortal and ISTM there\n> are no easy way to retrieve plan information of nested statements from\n> ActivePortal.\n> Anyway I'll do some more research.\n\nI haven't found a proper way yet but it seems necessary to use something \nother than ActivePortal and I'm now thinking this could be a separate \npatch in the future.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 22 Jun 2021 11:30:31 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "\n\nOn 2021/06/22 11:30, torikoshia wrote:\n> On Thu, Jun 10, 2021 at 11:09 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>> On 2021-06-09 23:04, Fujii Masao wrote:\n> \n>> > auto_explain can log the plan of even nested statement\n>> > if auto_explain.log_nested_statements is enabled.\n>> > But ISTM that pg_log_current_plan() cannot log that plan.\n>> > Is this intentional?\n>>\n>> > I think that it's better to make pg_log_current_plan() log\n>> > the plan of even nested statement.\n>>\n>> +1. It would be better.\n>> But currently plan information is got from ActivePortal and ISTM there\n>> are no easy way to retrieve plan information of nested statements from\n>> ActivePortal.\n>> Anyway I'll do some more research.\n> \n> I haven't found a proper way yet but it seems necessary to use something other than ActivePortal and I'm now thinking this could be a separate patch in the future. \n\n DO $$\n BEGIN\n PERFORM pg_sleep(100);\n END$$;\n\nWhen I called pg_log_current_query_plan() to send the signal to\nthe backend executing the above query, I got the following log message.\nI think that this is not expected message. I guess this issue happened\nbecause the information about query text and plan is retrieved\nfrom ActivePortal. 
If this understanding is right, ISTM that we should\nimplement new mechanism so that we can retrieve those information\neven while nested query is being executed.\n\n LOG: backend with PID 42449 is not running a query\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 1 Jul 2021 15:34:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Tue, Jun 22, 2021 at 8:00 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> Updated the patch.\n\nThanks for the patch. Here are some comments on the v4 patch:\n\n1) Can we do + ExplainState *es = NewExplainState(); and es\nassignments after if (!ActivePortal || !ActivePortal->queryDesc), just\nto avoid unnecessary call in case of error hit? Also note that, we can\neasily hit the error case.\n\n2) It looks like there's an improper indentation. MyProcPid and\nes->str->data, should start from the \".\n+ ereport(LOG_SERVER_ONLY,\n+ (errmsg(\"backend with PID %d is not running a query\",\n+ MyProcPid)));\n\n+ ereport(LOG_SERVER_ONLY,\n+ (errmsg(\"plan of the query running on backend with PID %d is:\\n%s\",\n+ MyProcPid,\n+ es->str->data),\nFor reference see errmsg(\"unrecognized value for EXPLAIN option \\\"%s\\\": \\\"%s\\\"\",\n\n3)I prefer to do this so that any new piece of code can be introduced\nin between easily and it will be more readable as well.\n+Datum\n+pg_log_current_query_plan(PG_FUNCTION_ARGS)\n+{\n+ pid_t pid;\n+ bool result;\n+\n+ pid = PG_GETARG_INT32(0);\n+ result = SendProcSignalForLogInfo(pid, PROCSIG_LOG_CURRENT_PLAN);\n+\n+ PG_RETURN_BOOL(result);\n+}\nIf okay, please also change for the pg_log_backend_memory_contexts.\n\n4) Extra whitespace before the second line i.e. 
2nd line reason should\nbe aligned with the 1st line reason.\n+ Assert(reason == PROCSIG_LOG_MEMORY_CONTEXT ||\n+ reason == PROCSIG_LOG_CURRENT_PLAN);\n\n5) How about \"Requests to log the plan of the query currently running\non the backend with specified process ID along with the untruncated\nquery string\"?\n+ Requests to log the untruncated query string and its plan for\n+ the query currently running on the backend with the specified\n+ process ID.\n\n6) A typo: it is \"nested statements (..) are not\"\n+ Note that nested statements (statements executed inside a function) is not\n\n7) I'm not sure what you mean by \"Some functions output what you want\nto the log.\"\n--- Memory contexts are logged and they are not returned to the function.\n+-- Some functions output what you want to the log.\nInstead, can we say \"These functions return true if the specified\nbackend is successfully signaled, otherwise false. Upon receiving the\nsignal, the backend will log the information to the server log.\"\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 2 Jul 2021 19:51:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-07-02 23:21, Bharath Rupireddy wrote:\n> On Tue, Jun 22, 2021 at 8:00 AM torikoshia <torikoshia@oss.nttdata.com> \n> wrote:\n>> Updated the patch.\n> \n> Thanks for the patch. Here are some comments on the v4 patch:\n\nThanks for your comments and suggestions!\nI agree with you and updated the patch.\n\nOn Thu, Jul 1, 2021 at 3:34 PM Fujii Masao <masao.fujii@oss.nttdata.com> \nwrote:\n\n> DO $$\n> BEGIN\n> PERFORM pg_sleep(100);\n> END$$;\n> \n> When I called pg_log_current_query_plan() to send the signal to\n> the backend executing the above query, I got the following log message.\n> I think that this is not expected message. 
I guess this issue happened\n> because the information about query text and plan is retrieved\n> from ActivePortal. If this understanding is right, ISTM that we should\n> implement new mechanism so that we can retrieve those information\n> even while nested query is being executed.\n\nI'm now working on this comment.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Fri, 09 Jul 2021 14:05:16 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Tue, Jun 22, 2021 at 8:00 AM torikoshia <torikoshia@oss.nttdata.com> \nwrote:\n> Updated the patch.\n\nHi, torikoshi-san\n\nThanks for your great work! I'd like to use this feature in v15.\nI confirmed that it works with queries I tried and make check-world has \nno error.\n\nWhen I tried this feature, I realized two things. So, I share them.\n\n(1) About output contents\n\n> The format of the query plan is the same as when <literal>FORMAT \n> TEXT</literal>\n> and <literal>VEBOSE</literal> are used in the \n> <command>EXPLAIN</command> command.\n> For example:\n\nI think the above needs to add COSTS and SETTINGS options too, and it's \nbetter to use an\nexample which the SETTINGS option works like the following.\n\n```\n2021-07-13 21:59:56 JST 69757 [client backend] LOG: plan of the query \nrunning on backend with PID 69757 is:\n Query Text: PREPARE query2 AS SELECT COUNT(*) FROM \npgbench_accounts t1, pgbench_accounts t2;\n Aggregate (cost=3750027242.84..3750027242.86 rows=1 width=8)\n Output: count(*)\n -> Nested Loop (cost=0.84..3125027242.84 rows=250000000000 \nwidth=0)\n -> Index Only Scan using pgbench_accounts_pkey on \npublic.pgbench_accounts t1 (cost=0.42..12996.42 rows=500000 width=0)\n Output: t1.aid\n -> Materialize (cost=0.42..15496.42 rows=500000 \nwidth=0)\n -> Index Only Scan using pgbench_accounts_pkey on \npublic.pgbench_accounts t2 (cost=0.42..12996.42 rows=500000 
width=0)\n Settings: effective_cache_size = '8GB', work_mem = '16MB'\n```\n\n(2) About EXPLAIN \"BUFFER\" option\n\nWhen I checked EXPLAIN option, I found there is another option \"BUFFER\" \nwhich can be\nused without the \"ANALYZE\" option.\n\nI'm not sure it's useful because your target use-case is analyzing a \nlong-running query,\nnot its planning phase. If so, the planning buffer usage is not so much \nuseful. But, since\nthe overhead to output buffer usages is not high and it's used for \ndebugging use cases,\nI wonder it's not a bad idea to output buffer usages too. Thought?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 13 Jul 2021 23:11:37 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Tue, Jul 13, 2021 at 11:11 PM Masahiro Ikeda \n<ikedamsh@oss.nttdata.com> wrote:\n\n> When I tried this feature, I realized two things. So, I share them.\n\nThanks for your review!\n\n> (1) About output contents\n> \n> > The format of the query plan is the same as when <literal>FORMAT\n> > TEXT</literal>\n> > and <literal>VEBOSE</literal> are used in the\n> > <command>EXPLAIN</command> command.\n> > For example:\n\n> I think the above needs to add COSTS and SETTINGS options too, and it's\n> better to use an\n> example which the SETTINGS option works like the following.\n\nAgreed. Updated the patch.\n\n> (2) About EXPLAIN \"BUFFER\" option\n> \n> When I checked EXPLAIN option, I found there is another option \"BUFFER\"\n> which can be\n> used without the \"ANALYZE\" option.\n> \n> I'm not sure it's useful because your target use-case is analyzing a\n> long-running query,\n> not its planning phase. If so, the planning buffer usage is not so much\n> useful. But, since\n> the overhead to output buffer usages is not high and it's used for\n> debugging use cases,\n> I wonder it's not a bad idea to output buffer usages too. 
Thought?\n\nAs you pointed out, I also think it would be useful when queries are \ntaking a long time in the planning phase.\nHowever, as far as I read ExplainOneQuery(), the buffer usages in the \nplanner phase are not retrieved by default. They are retrieved only when \nBUFFERS is specified in the EXPLAIN.\n\nIf we change it to always get the buffer usages and expose them as a \nglobal variable, we can get them through pg_log_current_plan(), but I \nthink it doesn't pay.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 19 Jul 2021 11:28:40 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "\n\nOn 2021/07/19 11:28, torikoshia wrote:\n> Agreed. Updated the patch.\n\nThanks for updating the patch!\n\n+bool\n+SendProcSignalForLogInfo(pid_t pid, ProcSignalReason reason)\n\nI don't think that procsignal.c is proper place to check the permission and\ncheck whether the specified PID indicates a PostgreSQL server process, etc\nbecause procsignal.c just provides fundamental routines for interprocess\nsignaling. Isn't it better to move the function to signalfuncs.c or elsewhere?\n\n\n+\tExplainQueryText(es, ActivePortal->queryDesc);\n+\tExplainPrintPlan(es, ActivePortal->queryDesc);\n+\tExplainPrintJITSummary(es, ActivePortal->queryDesc);\n\nWhen text format is used, ExplainBeginOutput() and ExplainEndOutput()\ndo nothing. So (I guess) you thought that they don't need to be called and\nimplemented the code in that way. 
But IMO it's better to comment\nwhy they don't need to be called, or to just call both of them\neven if they do nothing in text format.\n\n\n+\tExplainPrintJITSummary(es, ActivePortal->queryDesc);\n\nIt's better to check es->costs before calling this function,\nlike explain_ExecutorEnd() and ExplainOnePlan() do?\n\n\n+\tresult = SendProcSignalForLogInfo(pid, PROCSIG_LOG_CURRENT_PLAN);\n+\n+\tPG_RETURN_BOOL(result);\n\nCurrently SendProcSignalForLogInfo() calls PG_RETURN_BOOL() in some cases,\nbut instead it should just return true/false because pg_log_current_query_plan()\nexpects that?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 19 Jul 2021 15:07:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "\n\nOn 2021/07/09 14:05, torikoshia wrote:\n> On 2021-07-02 23:21, Bharath Rupireddy wrote:\n>> On Tue, Jun 22, 2021 at 8:00 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>>> Updated the patch.\n>>\n>> Thanks for the patch. Here are some comments on the v4 patch:\n> \n> Thanks for your comments and suggestions!\n> I agree with you and updated the patch.\n> \n> On Thu, Jul 1, 2021 at 3:34 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>>      DO $$\n>>      BEGIN\n>>      PERFORM pg_sleep(100);\n>>      END$$;\n>>\n>> When I called pg_log_current_query_plan() to send the signal to\n>> the backend executing the above query, I got the following log message.\n>> I think that this is not expected message. I guess this issue happened\n>> because the information about query text and plan is retrieved\n>> from ActivePortal.
If this understanding is right, ISTM that we should\n>> implement new mechanism so that we can retrieve those information\n>> even while nested query is being executed.\n> \n> I'm now working on this comment.\n\nOne idea is to define new global pointer, e.g., \"QueryDesc *ActiveQueryDesc;\".\nThis global pointer is set to queryDesc in ExecutorRun()\n(also maybe ExecutorStart()). And this is reset to NULL in ExecutorEnd() and\nwhen an error is thrown. Then ProcessLogCurrentPlanInterrupt() can\nget the plan of the currently running query from that global pointer\ninstead of ActivePortal, and log it. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Jul 2021 03:34:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "út 27. 7. 2021 v 20:34 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2021/07/09 14:05, torikoshia wrote:\n> > On 2021-07-02 23:21, Bharath Rupireddy wrote:\n> >> On Tue, Jun 22, 2021 at 8:00 AM torikoshia <torikoshia@oss.nttdata.com>\n> wrote:\n> >>> Updated the patch.\n> >>\n> >> Thanks for the patch. Here are some comments on the v4 patch:\n> >\n> > Thanks for your comments and suggestions!\n> > I agree with you and updated the patch.\n> >\n> > On Thu, Jul 1, 2021 at 3:34 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >\n> >> DO $$\n> >> BEGIN\n> >> PERFORM pg_sleep(100);\n> >> END$$;\n> >>\n> >> When I called pg_log_current_query_plan() to send the signal to\n> >> the backend executing the above query, I got the following log message.\n> >> I think that this is not expected message. I guess this issue happened\n> >> because the information about query text and plan is retrieved\n> >> from ActivePortal. 
If this understanding is right, ISTM that we should\n> >> implement new mechanism so that we can retrieve those information\n> >> even while nested query is being executed.\n> >\n> > I'm now working on this comment.\n>\n> One idea is to define new global pointer, e.g., \"QueryDesc\n> *ActiveQueryDesc;\".\n> This global pointer is set to queryDesc in ExecutorRun()\n> (also maybe ExecutorStart()). And this is reset to NULL in ExecutorEnd()\n> and\n> when an error is thrown. Then ProcessLogCurrentPlanInterrupt() can\n> get the plan of the currently running query from that global pointer\n> instead of ActivePortal, and log it. Thought?\n>\n\nIt cannot work - there can be a lot of nested queries, and at the end you\ncannot reset to null, but you should return back pointer to outer query.\n\nRegards\n\nPavel\n\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n>\n>\n", "msg_date": "Tue, 27 Jul 2021 20:45:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-07-28 03:45, Pavel Stehule wrote:\n> út 27. 7. 2021 v 20:34 odesílatel Fujii Masao\n> <masao.fujii@oss.nttdata.com> napsal:\n> \n>> On 2021/07/09 14:05, torikoshia wrote:\n>>> On 2021-07-02 23:21, Bharath Rupireddy wrote:\n>>>> On Tue, Jun 22, 2021 at 8:00 AM torikoshia\n>> <torikoshia@oss.nttdata.com> wrote:\n>>>>> Updated the patch.\n>>>> \n>>>> Thanks for the patch. 
Here are some comments on the v4 patch:\n>>> \n>>> Thanks for your comments and suggestions!\n>>> I agree with you and updated the patch.\n>>> \n>>> On Thu, Jul 1, 2021 at 3:34 PM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> \n>>>> DO $$\n>>>> BEGIN\n>>>> PERFORM pg_sleep(100);\n>>>> END$$;\n>>>> \n>>>> When I called pg_log_current_query_plan() to send the signal to\n>>>> the backend executing the above query, I got the following log\n>> message.\n>>>> I think that this is not expected message. I guess this issue\n>> happened\n>>>> because the information about query text and plan is retrieved\n>>>> from ActivePortal. If this understanding is right, ISTM that we\n>> should\n>>>> implement new mechanism so that we can retrieve those information\n>>>> even while nested query is being executed.\n>>> \n>>> I'm now working on this comment.\n>> \n>> One idea is to define new global pointer, e.g., \"QueryDesc\n>> *ActiveQueryDesc;\".\n>> This global pointer is set to queryDesc in ExecutorRun()\n>> (also maybe ExecutorStart()). And this is reset to NULL in\n>> ExecutorEnd() and\n>> when an error is thrown. Then ProcessLogCurrentPlanInterrupt() can\n>> get the plan of the currently running query from that global pointer\n>> instead of ActivePortal, and log it. Thought?\n> \n> It cannot work - there can be a lot of nested queries, and at the end\n> you cannot reset to null, but you should return back pointer to outer\n> query.\n\nThanks for your comment!\n\nI'm wondering if we can avoid this problem by saving one outer level \nQueryDesc in addition to the current one.\nI'm going to try it.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Jul 2021 20:44:00 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-07-28 20:44, torikoshia wrote:\n> On 2021-07-28 03:45, Pavel Stehule wrote:\n>> út 27. 7. 
2021 v 20:34 odesílatel Fujii Masao\n>> <masao.fujii@oss.nttdata.com> napsal:\n>> \n>>> On 2021/07/09 14:05, torikoshia wrote:\n>>>> On 2021-07-02 23:21, Bharath Rupireddy wrote:\n>>>>> On Tue, Jun 22, 2021 at 8:00 AM torikoshia\n>>> <torikoshia@oss.nttdata.com> wrote:\n>>>>>> Updated the patch.\n>>>>> \n>>>>> Thanks for the patch. Here are some comments on the v4 patch:\n>>>> \n>>>> Thanks for your comments and suggestions!\n>>>> I agree with you and updated the patch.\n>>>> \n>>>> On Thu, Jul 1, 2021 at 3:34 PM Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>> \n>>>>> DO $$\n>>>>> BEGIN\n>>>>> PERFORM pg_sleep(100);\n>>>>> END$$;\n>>>>> \n>>>>> When I called pg_log_current_query_plan() to send the signal to\n>>>>> the backend executing the above query, I got the following log\n>>> message.\n>>>>> I think that this is not expected message. I guess this issue\n>>> happened\n>>>>> because the information about query text and plan is retrieved\n>>>>> from ActivePortal. If this understanding is right, ISTM that we\n>>> should\n>>>>> implement new mechanism so that we can retrieve those information\n>>>>> even while nested query is being executed.\n>>>> \n>>>> I'm now working on this comment.\n>>> \n>>> One idea is to define new global pointer, e.g., \"QueryDesc\n>>> *ActiveQueryDesc;\".\n>>> This global pointer is set to queryDesc in ExecutorRun()\n>>> (also maybe ExecutorStart()). And this is reset to NULL in\n>>> ExecutorEnd() and\n>>> when an error is thrown. Then ProcessLogCurrentPlanInterrupt() can\n>>> get the plan of the currently running query from that global pointer\n>>> instead of ActivePortal, and log it. 
Thought?\n>> \n>> It cannot work - there can be a lot of nested queries, and at the end\n>> you cannot reset to null, but you should return back pointer to outer\n>> query.\n> \n> Thanks for your comment!\n> \n> I'm wondering if we can avoid this problem by saving one outer level\n> QueryDesc in addition to the current one.\n> I'm going to try it.\n\nI have updated the patch in this way.\n\nIn this patch, getting the plan to the DO statement is as follows.\n\n---------------------------------\n (pid:76608)=# DO $$\n BEGIN\n PERFORM pg_sleep(15);\n END$$;\n\n (pid:74482)=# SELECT pg_log_current_query_plan(76608);\n\n LOG: 00000: plan of the query running on backend with PID 76608 is:\n Query Text: SELECT pg_sleep(15)\n Result (cost=0.00..0.01 rows=1 width=4)\n Output: pg_sleep('15'::double precision)\n\n -- pid:76608 finished DO statement:\n (pid:74482)=# SELECT pg_log_current_query_plan(76608);\n\n LOG: 00000: backend with PID 76608 is not running a query\n---------------------------------\n\nAny thoughts?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 10 Aug 2021 21:22:49 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "\n\nOn 2021/08/10 21:22, torikoshia wrote:\n> I have updated the patch in this way.\n\nThanks for updating the patch!\n\n\n> In this patch, getting the plan to the DO statement is as follows.\n\nLooks good to me.\n\n\n> Any thoughts?\n\n+\tereport(LOG_SERVER_ONLY,\n+\t\t\t(errmsg(\"plan of the query running on backend with PID %d is:\\n%s\",\n+\t\t\t\t\tMyProcPid, es->str->data),\n+\t\t\t errhidestmt(true)));\n\nShouldn't we hide context information by calling errhidecontext(true)?\n\n\n\nWhile \"make installcheck\" regression test was running, I repeated\nexecuting pg_log_current_query_plan() and got the failure of join_hash test\nwith the following diff. 
This means that pg_log_current_query_plan() could\ncause the query that should be completed successfully to fail with the error.\nIsn't this a bug?\n\nI *guess* that the cause of this issue is that ExplainNode() can call\nInstrEndLoop() more than once unexpectedly.\n\n ------------------------------------------------------------------------------\n $$\n select count(*) from simple r join simple s using (id);\n $$);\n- initially_multibatch | increased_batches\n-----------------------+-------------------\n- f | f\n-(1 row)\n-\n+ERROR: InstrEndLoop called on running node\n+CONTEXT: PL/pgSQL function hash_join_batches(text) line 6 at FOR over EXECUTE statement\n rollback to settings;\n -- parallel with parallel-oblivious hash join\n savepoint settings;\n@@ -687,11 +684,9 @@\n left join (select b1.id, b1.t from join_bar b1 join join_bar b2 using (id)) ss\n on join_foo.id < ss.id + 1 and join_foo.id > ss.id - 1;\n $$);\n- multibatch\n-------------\n- t\n-(1 row)\n-\n+ERROR: InstrEndLoop called on running node\n+CONTEXT: parallel worker\n+PL/pgSQL function hash_join_batches(text) line 6 at FOR over EXECUTE statement\n rollback to settings;\n -- single-batch with rescan, parallel-aware\n savepoint settings;\n ------------------------------------------------------------------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 11 Aug 2021 00:21:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-08-11 00:21, Fujii Masao wrote:\n\n> On 2021/08/10 21:22, torikoshia wrote:\n>> I have updated the patch in this way.\n> \n> Thanks for updating the patch!\n> \n> \n>> In this patch, getting the plan to the DO statement is as follows.\n> \n> Looks good to me.\n> \n> \n>> Any thoughts?\n> \n> +\tereport(LOG_SERVER_ONLY,\n> 
+\t\t\t(errmsg(\"plan of the query running on backend with PID %d is:\\n%s\",\n> +\t\t\t\t\tMyProcPid, es->str->data),\n> +\t\t\t errhidestmt(true)));\n> \n> Shouldn't we hide context information by calling errhidecontext(true)?\n\nAgreed.\n\n> While \"make installcheck\" regression test was running, I repeated\n> executing pg_log_current_query_plan() and got the failure of join_hash \n> test\n> with the following diff. This means that pg_log_current_query_plan() \n> could\n> cause the query that should be completed successfully to fail with the \n> error.\n> Isn't this a bug?\n\nThanks for finding the bug.\nI also reproduced it.\n\n> I *guess* that the cause of this issue is that ExplainNode() can call\n> InstrEndLoop() more than once unexpectedly.\n\nAs far as I looked into, pg_log_current_plan() can call InstrEndLoop() \nthrough ExplainNode().\nI added a flag to ExplainState to avoid calling InstrEndLoop() when \nExplainNode() is called from pg_log_current_plan().\n\n> \n> \n> ------------------------------------------------------------------------------\n> $$\n> select count(*) from simple r join simple s using (id);\n> $$);\n> - initially_multibatch | increased_batches\n> -----------------------+-------------------\n> - f | f\n> -(1 row)\n> -\n> +ERROR: InstrEndLoop called on running node\n> +CONTEXT: PL/pgSQL function hash_join_batches(text) line 6 at FOR\n> over EXECUTE statement\n> rollback to settings;\n> -- parallel with parallel-oblivious hash join\n> savepoint settings;\n> @@ -687,11 +684,9 @@\n> left join (select b1.id, b1.t from join_bar b1 join join_bar b2\n> using (id)) ss\n> on join_foo.id < ss.id + 1 and join_foo.id > ss.id - 1;\n> $$);\n> - multibatch\n> -------------\n> - t\n> -(1 row)\n> -\n> +ERROR: InstrEndLoop called on running node\n> +CONTEXT: parallel worker\n> +PL/pgSQL function hash_join_batches(text) line 6 at FOR over EXECUTE \n> statement\n> rollback to settings;\n> -- single-batch with rescan, parallel-aware\n> savepoint 
settings;\n> \n> ------------------------------------------------------------------------------\n> \n> Regards,\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 11 Aug 2021 21:14:05 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "\n\nOn 2021/08/11 21:14, torikoshia wrote:\n> As far as I looked into, pg_log_current_plan() can call InstrEndLoop() through ExplainNode().\n> I added a flag to ExplainState to avoid calling InstrEndLoop() when ExplainNode() is called from pg_log_current_plan().\n\nThanks for updating the patch!\nI tried to test the patch again and encountered two issues.\n\n(1)\nThe following WITH RECURSIVE query failed with the error\n\"ERROR: failed to find plan for CTE sg\" when I ran\npg_log_current_query_plan() against the backend executing that query.\nIs this a bug?\n\n create table graph0( f int, t int, label text );\n insert into graph0 values (1, 2, 'arc 1 -> 2'),(1, 3, 'arc 1 -> 3'),(2, 3, 'arc 2 -> 3'),(1, 4, 'arc 1 -> 4'),(4, 5, 'arc 4 -> 5');\n\n with recursive search_graph(f, t, label, i) as (\n select *, 1||pg_sleep(1)::text from graph0 g\n union distinct\n select g.*,1||pg_sleep(1)::text\n from graph0 g, search_graph sg\n where g.f = sg.t\n ) search breadth first by f, t set seq\n select * from search_graph order by seq;\n\n\n(2)\nWhen I ran pg_log_current_query_plan() while \"make installcheck\" test\nwas running, I got the following assertion failure.\n\nTRAP: FailedAssertion(\"!IsPageLockHeld || (locktag->locktag_type == LOCKTAG_RELATION_EXTEND)\", File: \"lock.c\", Line: 894, PID: 61512)\n\n0 postgres 0x000000010ec23557 ExceptionalCondition + 231\n1 postgres 0x000000010e9eff15 LockAcquireExtended + 1461\n2 postgres 0x000000010e9ed14d LockRelationOid + 61\n3 postgres 0x000000010e41251b relation_open + 91\n4 postgres 0x000000010e509679 table_open + 25\n5 postgres 0x000000010ebf9462 
SearchCatCacheMiss + 274\n6 postgres 0x000000010ebf5979 SearchCatCacheInternal + 761\n7 postgres 0x000000010ebf566c SearchCatCache + 60\n8 postgres 0x000000010ec1a9e0 SearchSysCache + 144\n9 postgres 0x000000010ec1ae03 SearchSysCacheExists + 51\n10 postgres 0x000000010e58ce35 TypeIsVisible + 437\n11 postgres 0x000000010ea98e4c format_type_extended + 1964\n12 postgres 0x000000010ea9900e format_type_with_typemod + 30\n13 postgres 0x000000010eb78d76 get_const_expr + 742\n14 postgres 0x000000010eb79bc8 get_rule_expr + 232\n15 postgres 0x000000010eb8140f get_func_expr + 1247\n16 postgres 0x000000010eb79dcd get_rule_expr + 749\n17 postgres 0x000000010eb81688 get_rule_expr_paren + 136\n18 postgres 0x000000010eb7bf38 get_rule_expr + 9304\n19 postgres 0x000000010eb72ad5 deparse_expression_pretty + 149\n20 postgres 0x000000010eb73463 deparse_expression + 83\n21 postgres 0x000000010e68eaf1 show_plan_tlist + 353\n22 postgres 0x000000010e68adaf ExplainNode + 4991\n23 postgres 0x000000010e688b4b ExplainPrintPlan + 283\n24 postgres 0x000000010e68e1aa ProcessLogCurrentPlanInterrupt + 266\n25 postgres 0x000000010ea133bb ProcessInterrupts + 3435\n26 postgres 0x000000010e738c97 vacuum_delay_point + 55\n27 postgres 0x000000010e42bb4b ginInsertCleanup + 1531\n28 postgres 0x000000010e42d418 gin_clean_pending_list + 776\n29 postgres 0x000000010e74955a ExecInterpExpr + 2522\n30 postgres 0x000000010e7487e2 ExecInterpExprStillValid + 82\n31 postgres 0x000000010e7ae83b ExecEvalExprSwitchContext + 59\n32 postgres 0x000000010e7ae7be ExecProject + 78\n33 postgres 0x000000010e7ae4e9 ExecResult + 345\n34 postgres 0x000000010e764e42 ExecProcNodeFirst + 82\n35 postgres 0x000000010e75ccb2 ExecProcNode + 50\n36 postgres 0x000000010e758301 ExecutePlan + 193\n37 postgres 0x000000010e7581d1 standard_ExecutorRun + 609\n38 auto_explain.so 0x000000010f1df3a7 explain_ExecutorRun + 247\n39 postgres 0x000000010e757f3b ExecutorRun + 91\n40 postgres 0x000000010ea1cb49 PortalRunSelect + 313\n41 postgres 
0x000000010ea1c4dd PortalRun + 861\n42 postgres 0x000000010ea17474 exec_simple_query + 1540\n43 postgres 0x000000010ea164d4 PostgresMain + 2580\n44 postgres 0x000000010e91d159 BackendRun + 89\n45 postgres 0x000000010e91c6a5 BackendStartup + 565\n46 postgres 0x000000010e91b3fe ServerLoop + 638\n47 postgres 0x000000010e918b9d PostmasterMain + 6717\n48 postgres 0x000000010e7efd43 main + 819\n49 libdyld.dylib 0x00007fff6a46e3d5 start + 1\n50 ??? 0x0000000000000003 0x0 + 3\n\nLOG: server process (PID 61512) was terminated by signal 6: Abort trap: 6\nDETAIL: Failed process was running: select gin_clean_pending_list('t_gin_test_tbl_i_j_idx') is not null;\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 20 Aug 2021 01:12:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-08-20 01:12, Fujii Masao wrote:\n> On 2021/08/11 21:14, torikoshia wrote:\n>> As far as I looked into, pg_log_current_plan() can call InstrEndLoop() \n>> through ExplainNode().\n>> I added a flag to ExplainState to avoid calling InstrEndLoop() when \n>> ExplainNode() is called from pg_log_current_plan().\n> \n> Thanks for updating the patch!\n> I tried to test the patch again and encountered two issues.\n\nThanks for finding these issues!\n\n> \n> (1)\n> The following WITH RECURSIVE query failed with the error\n> \"ERROR: failed to find plan for CTE sg\" when I ran\n> pg_log_current_query_plan() against the backend executing that query.\n> Is this a bug?\n> \n> create table graph0( f int, t int, label text );\n> insert into graph0 values (1, 2, 'arc 1 -> 2'),(1, 3, 'arc 1 ->\n> 3'),(2, 3, 'arc 2 -> 3'),(1, 4, 'arc 1 -> 4'),(4, 5, 'arc 4 -> 5');\n> \n> with recursive search_graph(f, t, label, i) as (\n> select *, 1||pg_sleep(1)::text from graph0 g\n> union distinct\n> select 
g.*,1||pg_sleep(1)::text\n> from graph0 g, search_graph sg\n> where g.f = sg.t\n> ) search breadth first by f, t set seq\n> select * from search_graph order by seq;\n\nThis ERROR occurred without applying the patch and just calling \nEXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST.\n\nI'm going to make another thread to discuss it.\n\n> (2)\n> When I ran pg_log_current_query_plan() while \"make installcheck\" test\n> was running, I got the following assertion failure.\n> \n> TRAP: FailedAssertion(\"!IsPageLockHeld || (locktag->locktag_type ==\n> LOCKTAG_RELATION_EXTEND)\", File: \"lock.c\", Line: 894, PID: 61512)\n> \n> 0 postgres 0x000000010ec23557\n> ExceptionalCondition + 231\n> 1 postgres 0x000000010e9eff15\n> LockAcquireExtended + 1461\n> 2 postgres 0x000000010e9ed14d \n> LockRelationOid + 61\n> 3 postgres 0x000000010e41251b \n> relation_open + 91\n> 4 postgres 0x000000010e509679 table_open + \n> 25\n> 5 postgres 0x000000010ebf9462\n> SearchCatCacheMiss + 274\n> 6 postgres 0x000000010ebf5979\n> SearchCatCacheInternal + 761\n> 7 postgres 0x000000010ebf566c \n> SearchCatCache + 60\n> 8 postgres 0x000000010ec1a9e0 \n> SearchSysCache + 144\n> 9 postgres 0x000000010ec1ae03\n> SearchSysCacheExists + 51\n> 10 postgres 0x000000010e58ce35 \n> TypeIsVisible + 437\n> 11 postgres 0x000000010ea98e4c\n> format_type_extended + 1964\n> 12 postgres 0x000000010ea9900e\n> format_type_with_typemod + 30\n> 13 postgres 0x000000010eb78d76 \n> get_const_expr + 742\n> 14 postgres 0x000000010eb79bc8 \n> get_rule_expr + 232\n> 15 postgres 0x000000010eb8140f \n> get_func_expr + 1247\n> 16 postgres 0x000000010eb79dcd \n> get_rule_expr + 749\n> 17 postgres 0x000000010eb81688\n> get_rule_expr_paren + 136\n> 18 postgres 0x000000010eb7bf38 \n> get_rule_expr + 9304\n> 19 postgres 0x000000010eb72ad5\n> deparse_expression_pretty + 149\n> 20 postgres 0x000000010eb73463\n> deparse_expression + 83\n> 21 postgres 0x000000010e68eaf1 \n> show_plan_tlist + 353\n> 22 postgres 0x000000010e68adaf 
ExplainNode \n> + 4991\n> 23 postgres 0x000000010e688b4b\n> ExplainPrintPlan + 283\n> 24 postgres 0x000000010e68e1aa\n> ProcessLogCurrentPlanInterrupt + 266\n> 25 postgres 0x000000010ea133bb\n> ProcessInterrupts + 3435\n> 26 postgres 0x000000010e738c97\n> vacuum_delay_point + 55\n> 27 postgres 0x000000010e42bb4b\n> ginInsertCleanup + 1531\n> 28 postgres 0x000000010e42d418\n> gin_clean_pending_list + 776\n> 29 postgres 0x000000010e74955a \n> ExecInterpExpr + 2522\n> 30 postgres 0x000000010e7487e2\n> ExecInterpExprStillValid + 82\n> 31 postgres 0x000000010e7ae83b\n> ExecEvalExprSwitchContext + 59\n> 32 postgres 0x000000010e7ae7be ExecProject \n> + 78\n> 33 postgres 0x000000010e7ae4e9 ExecResult + \n> 345\n> 34 postgres 0x000000010e764e42\n> ExecProcNodeFirst + 82\n> 35 postgres 0x000000010e75ccb2 ExecProcNode \n> + 50\n> 36 postgres 0x000000010e758301 ExecutePlan \n> + 193\n> 37 postgres 0x000000010e7581d1\n> standard_ExecutorRun + 609\n> 38 auto_explain.so 0x000000010f1df3a7\n> explain_ExecutorRun + 247\n> 39 postgres 0x000000010e757f3b ExecutorRun \n> + 91\n> 40 postgres 0x000000010ea1cb49 \n> PortalRunSelect + 313\n> 41 postgres 0x000000010ea1c4dd PortalRun + \n> 861\n> 42 postgres 0x000000010ea17474\n> exec_simple_query + 1540\n> 43 postgres 0x000000010ea164d4 PostgresMain \n> + 2580\n> 44 postgres 0x000000010e91d159 BackendRun + \n> 89\n> 45 postgres 0x000000010e91c6a5 \n> BackendStartup + 565\n> 46 postgres 0x000000010e91b3fe ServerLoop + \n> 638\n> 47 postgres 0x000000010e918b9d \n> PostmasterMain + 6717\n> 48 postgres 0x000000010e7efd43 main + 819\n> 49 libdyld.dylib 0x00007fff6a46e3d5 start + 1\n> 50 ??? 
0x0000000000000003 0x0 + 3\n> \n> LOG: server process (PID 61512) was terminated by signal 6: Abort \n> trap: 6\n> DETAIL: Failed process was running: select\n> gin_clean_pending_list('t_gin_test_tbl_i_j_idx') is not null;\n\nAs far as I understand, since explaining plans can acquire heavyweight \nlock for example to get column names, when page lock is held at the time \nof the interrupt, this assertion error occurs.\n\nThe attached patch tries to avoid this by checking each LocalLock entry \nand when finding even one, giving up logging the plan.\n\nThoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 07 Sep 2021 12:39:46 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-09-07 12:39, torikoshia wrote:\n> On 2021-08-20 01:12, Fujii Masao wrote:\n>> On 2021/08/11 21:14, torikoshia wrote:\n>>> As far as I looked into, pg_log_current_plan() can call \n>>> InstrEndLoop() through ExplainNode().\n>>> I added a flag to ExplainState to avoid calling InstrEndLoop() when \n>>> ExplainNode() is called from pg_log_current_plan().\n>> \n>> Thanks for updating the patch!\n>> I tried to test the patch again and encountered two issues.\n> \n> Thanks for finding these issues!\n> \n>> \n>> (1)\n>> The following WITH RECURSIVE query failed with the error\n>> \"ERROR: failed to find plan for CTE sg\" when I ran\n>> pg_log_current_query_plan() against the backend executing that query.\n>> Is this a bug?\n>> \n>> create table graph0( f int, t int, label text );\n>> insert into graph0 values (1, 2, 'arc 1 -> 2'),(1, 3, 'arc 1 ->\n>> 3'),(2, 3, 'arc 2 -> 3'),(1, 4, 'arc 1 -> 4'),(4, 5, 'arc 4 -> 5');\n>> \n>> with recursive search_graph(f, t, label, i) as (\n>> select *, 1||pg_sleep(1)::text from graph0 g\n>> union distinct\n>> select g.*,1||pg_sleep(1)::text\n>> from graph0 g, search_graph sg\n>> where g.f = sg.t\n>> ) search 
breadth first by f, t set seq\n>> select * from search_graph order by seq;\n> \n> This ERROR occurred without applying the patch and just calling\n> EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST.\n> \n> I'm going to make another thread to discuss it.\n> \n>> (2)\n>> When I ran pg_log_current_query_plan() while \"make installcheck\" test\n>> was running, I got the following assertion failure.\n>> \n>> TRAP: FailedAssertion(\"!IsPageLockHeld || (locktag->locktag_type ==\n>> LOCKTAG_RELATION_EXTEND)\", File: \"lock.c\", Line: 894, PID: 61512)\n>> \n>> 0 postgres 0x000000010ec23557\n>> ExceptionalCondition + 231\n>> 1 postgres 0x000000010e9eff15\n>> LockAcquireExtended + 1461\n>> 2 postgres 0x000000010e9ed14d \n>> LockRelationOid + 61\n>> 3 postgres 0x000000010e41251b \n>> relation_open + 91\n>> 4 postgres 0x000000010e509679 table_open \n>> + 25\n>> 5 postgres 0x000000010ebf9462\n>> SearchCatCacheMiss + 274\n>> 6 postgres 0x000000010ebf5979\n>> SearchCatCacheInternal + 761\n>> 7 postgres 0x000000010ebf566c \n>> SearchCatCache + 60\n>> 8 postgres 0x000000010ec1a9e0 \n>> SearchSysCache + 144\n>> 9 postgres 0x000000010ec1ae03\n>> SearchSysCacheExists + 51\n>> 10 postgres 0x000000010e58ce35 \n>> TypeIsVisible + 437\n>> 11 postgres 0x000000010ea98e4c\n>> format_type_extended + 1964\n>> 12 postgres 0x000000010ea9900e\n>> format_type_with_typemod + 30\n>> 13 postgres 0x000000010eb78d76 \n>> get_const_expr + 742\n>> 14 postgres 0x000000010eb79bc8 \n>> get_rule_expr + 232\n>> 15 postgres 0x000000010eb8140f \n>> get_func_expr + 1247\n>> 16 postgres 0x000000010eb79dcd \n>> get_rule_expr + 749\n>> 17 postgres 0x000000010eb81688\n>> get_rule_expr_paren + 136\n>> 18 postgres 0x000000010eb7bf38 \n>> get_rule_expr + 9304\n>> 19 postgres 0x000000010eb72ad5\n>> deparse_expression_pretty + 149\n>> 20 postgres 0x000000010eb73463\n>> deparse_expression + 83\n>> 21 postgres 0x000000010e68eaf1 \n>> show_plan_tlist + 353\n>> 22 postgres 0x000000010e68adaf ExplainNode \n>> + 4991\n>> 23 
postgres 0x000000010e688b4b\n>> ExplainPrintPlan + 283\n>> 24 postgres 0x000000010e68e1aa\n>> ProcessLogCurrentPlanInterrupt + 266\n>> 25 postgres 0x000000010ea133bb\n>> ProcessInterrupts + 3435\n>> 26 postgres 0x000000010e738c97\n>> vacuum_delay_point + 55\n>> 27 postgres 0x000000010e42bb4b\n>> ginInsertCleanup + 1531\n>> 28 postgres 0x000000010e42d418\n>> gin_clean_pending_list + 776\n>> 29 postgres 0x000000010e74955a \n>> ExecInterpExpr + 2522\n>> 30 postgres 0x000000010e7487e2\n>> ExecInterpExprStillValid + 82\n>> 31 postgres 0x000000010e7ae83b\n>> ExecEvalExprSwitchContext + 59\n>> 32 postgres 0x000000010e7ae7be ExecProject \n>> + 78\n>> 33 postgres 0x000000010e7ae4e9 ExecResult \n>> + 345\n>> 34 postgres 0x000000010e764e42\n>> ExecProcNodeFirst + 82\n>> 35 postgres 0x000000010e75ccb2 \n>> ExecProcNode + 50\n>> 36 postgres 0x000000010e758301 ExecutePlan \n>> + 193\n>> 37 postgres 0x000000010e7581d1\n>> standard_ExecutorRun + 609\n>> 38 auto_explain.so 0x000000010f1df3a7\n>> explain_ExecutorRun + 247\n>> 39 postgres 0x000000010e757f3b ExecutorRun \n>> + 91\n>> 40 postgres 0x000000010ea1cb49 \n>> PortalRunSelect + 313\n>> 41 postgres 0x000000010ea1c4dd PortalRun + \n>> 861\n>> 42 postgres 0x000000010ea17474\n>> exec_simple_query + 1540\n>> 43 postgres 0x000000010ea164d4 \n>> PostgresMain + 2580\n>> 44 postgres 0x000000010e91d159 BackendRun \n>> + 89\n>> 45 postgres 0x000000010e91c6a5 \n>> BackendStartup + 565\n>> 46 postgres 0x000000010e91b3fe ServerLoop \n>> + 638\n>> 47 postgres 0x000000010e918b9d \n>> PostmasterMain + 6717\n>> 48 postgres 0x000000010e7efd43 main + 819\n>> 49 libdyld.dylib 0x00007fff6a46e3d5 start + 1\n>> 50 ??? 
0x0000000000000003 0x0 + 3\n>> \n>> LOG: server process (PID 61512) was terminated by signal 6: Abort \n>> trap: 6\n>> DETAIL: Failed process was running: select\n>> gin_clean_pending_list('t_gin_test_tbl_i_j_idx') is not null;\n> \n> As far as I understand, since explaining plans can acquire heavyweight \n> lock for example to get column names, when page lock is held at the time \n> of the interrupt, this assertion error occurs.\n> \n> The attached patch tries to avoid this by checking each LocalLock entry \n> and when finding even one, giving up logging the plan.\n> \n> Thoughts?\n\nRegression tests failed on Windows.\nUpdated patch attached.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 08 Sep 2021 21:06:26 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "Hi, hackers!\n\n• The latest version of the patch applies correctly. It changes 8 files under \n/src/backend, and 9 other files.\n\n• I have 1 error and 1 warning during compilation on Mac.\n\nexplain.c:4985:25: error: implicit declaration of function \n'GetLockMethodLocalHash' is invalid in C99 \n[-Werror,-Wimplicit-function-declaration]\n        hash_seq_init(&status, GetLockMethodLocalHash());\nexplain.c:4985:25: warning: incompatible integer to pointer conversion \npassing 'int' to parameter of type 'HTAB *' (aka 'struct HTAB *') \n[-Wint-conversion]\n        hash_seq_init(&status, GetLockMethodLocalHash());\n\nThis error doesn't appear on my second machine, which runs Ubuntu.\n\nI found the reason. You deleted #ifdef USE_ASSERT_CHECKING from the \nimplementation of the function GetLockMethodLocalHash(void), but this ifdef \nstill exists around the function declaration. This can leave the \nimplementation without a declaration, so files using the \nfunction produce errors. 
I created a new version of the patch with a fix for \nthis problem.\n\nI agree that seeing the details of a query is a useful feature, but I \nhave several doubts:\n\n1) There are lots of changes to core code, but not all users need this \nfunctionality. So adding this functionality as an extension seems more \nreasonable.\n\n2) There are many tools available to monitor the status of a query. How \nmuch do we need another one? For example:\n     • pg_stat_progress_* is a set of views with the current status of ANALYZE, \nCREATE INDEX, VACUUM, CLUSTER, COPY, and Base Backup. You can find it in \nthe PostgreSQL documentation [1].\n     • pg_query_state is a contrib module with 2 patches for core (I hope someday \nthe Community will support adding these patches to PostgreSQL). It contains \na function that prints a table with the pid, full query text, plan, and current \nprogress of every node, like a momentary EXPLAIN ANALYSE, for SELECT, \nUPDATE, INSERT, and DELETE. So it supports every EXPLAIN flag and format. \nYou can find the current version of pg_query_state on GitHub [2]. \nI also found an old discussion about its first version in the Community [3].\n\n3) Have you measured the overhead of your feature? It would be really \ninteresting to know the changes in speed and performance.\n\nThank you for working on this issue. 
I would be glad to continue to \nfollow the development of this issue.\n\nLinks above:\n[1] https://www.postgresql.org/docs/current/progress-reporting.html\n[2] https://github.com/postgrespro/pg_query_state\n[3] \nhttps://www.postgresql.org/message-id/dbfb1a42-ee58-88fd-8d77-550498f52bc5@postgrespro.ru\n\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 13 Oct 2021 17:28:30 +0300", "msg_from": "Ekaterina Sokolova <e.sokolova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-10-13 23:28, Ekaterina Sokolova wrote:\n> Hi, hackers!\n> \n> • The last version of patch is correct applied. It changes 8 files\n> from /src/backend, and 9 other files.\n> \n> • I have 1 error and 1 warning during compilation on Mac.\n> \n> explain.c:4985:25: error: implicit declaration of function\n> 'GetLockMethodLocalHash' is invalid in C99\n> [-Werror,-Wimplicit-function-declaration]\n> hash_seq_init(&status, GetLockMethodLocalHash());\n> explain.c:4985:25: warning: incompatible integer to pointer conversion\n> passing 'int' to parameter of type 'HTAB *' (aka 'struct HTAB *')\n> [-Wint-conversion]\n> hash_seq_init(&status, GetLockMethodLocalHash());\n> \n> This error doesn't appear at my second machine with Ubuntu.\n> \n> I found the reason. You delete #ifdef USE_ASSERT_CHECKING from\n> implementation of function GetLockMethodLocalHash(void), but this\n> ifdef exists around function declaration. There may be a situation,\n> when implementation exists without declaration, so files with using of\n> function produce errors. I create new version of patch with fix of\n> this problem.\n\nThanks for fixing that!\n\n> I'm agree that seeing the details of a query is a useful feature, but\n> I have several doubts:\n> \n> 1) There are lots of changes of core's code. But not all users need\n> this functionality. 
So adding this functionality like extension seemed\n> more reasonable.\n\nIt would be good if we can implement this feature in an extension, but \nas pg_query_state extension needs applying patches to PostgreSQL, I \nthink this kind of feature needs PostgreSQL core modification.\nIMHO extensions which need core modification are not easy to use in \nproduction environments..\n\n> 2) There are many tools available to monitor the status of a query.\n> How much do we need another one? For example:\n> • pg_stat_progress_* is set of views with current status of\n> ANALYZE, CREATE INDEX, VACUUM, CLUSTER, COPY, Base Backup. You can\n> find it in PostgreSQL documentation [1].\n> • pg_query_state is contrib with 2 patches for core (I hope\n> someday Community will support adding this patches to PostgreSQL). It\n> contains function with printing table with pid, full query text, plan\n> and current progress of every node like momentary EXPLAIN ANALYSE for\n> SELECT, UPDATE, INSERT, DELETE. So it supports every flags and formats\n> of EXPLAIN. You can find current version of pg_query_state on github\n> [2]. Also I found old discussion about first version of it in\n> Community [3].\n\nThanks for introducing the extension!\n\nI only took a quick look at pg_query_state, I have some questions.\n\npg_query_state seems using shm_mq to expose the plan information, but \nthere was a discussion that this kind of architecture would be tricky to \ndo properly [1].\nDoes pg_query_state handle difficulties listed on the discussion?\n\nIt seems the caller of the pg_query_state() has to wait until the target \nprocess pushes the plan information into shared memory, can it lead to \ndeadlock situations?\nI came up with this question because when trying to make a view for \nmemory contexts of other backends, we encountered deadlock situations.
\nAfter all, we gave up view design and adopted sending signal and \nlogging.\n\nSome of the comments of [3] seem useful for my patch, I'm going to \nconsider them. Thanks!\n\n> 3) Have you measured the overload of your feature? It would be really\n> interesting to know the changes in speed and performance.\n\nI haven't measured it yet, but I believe that the overhead for backends \nwhich are not called pg_log_current_plan() would be slight since the \npatch just adds the logic for saving QueryDesc on ExecutorRun().\nThe overhead for backends which is called pg_log_current_plan() might \nnot slight, but since the target process are assumed dealing with \nlong-running query and the user want to know its plan, its overhead \nwould be worth the cost.\n\n> Thank you for working on this issue. I would be glad to continue to\n> follow the development of this issue.\n\nThanks for your help!\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 Oct 2021 15:17:22 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-10-15 15:17, torikoshia wrote:\n> I only took a quick look at pg_query_state, I have some questions.\n> \n> pg_query_state seems using shm_mq to expose the plan information, but\n> there was a discussion that this kind of architecture would be tricky\n> to do properly [1].\n> Does pg_query_state handle difficulties listed on the discussion?\n\nSorry, I forgot to add the URL.\n[1] \nhttps://www.postgresql.org/message-id/9a50371e15e741e295accabc72a41df1%40oss.nttdata.com\n\n> It seems the caller of the pg_query_state() has to wait until the\n> target process pushes the plan information into shared memory, can it\n> lead to deadlock situations?\n> I came up with this question because when trying to make a view for\n> memory contexts of other backends, we encountered deadlock situations.\n> After all, we gave
up view design and adopted sending signal and\n> logging.\n\nDiscussion at the following URL.\nhttps://www.postgresql.org/message-id/9a50371e15e741e295accabc72a41df1%40oss.nttdata.com\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 Oct 2021 19:12:26 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, Oct 13, 2021 at 05:28:30PM +0300, Ekaterina Sokolova wrote:\n> Hi, hackers!\n> \n> • pg_query_state is contrib with 2 patches for core (I hope someday\n> Community will support adding this patches to PostgreSQL). It contains\n\nI reviewed this version of the patch - I have some language fixes.\n\nI didn't know about pg_query_state, thanks.\n\n> To improve this situation, this patch adds\n> pg_log_current_query_plan() function that requests to log the\n> plan of the specified backend process.\n\nTo me, \"current plan\" seems to mean \"plan of *this* backend\" (which makes no\nsense to log). I think the user-facing function could be called\npg_log_query_plan(). It's true that the implementation is a request to another\nbackend to log its *own* query plan - but users shouldn't need to know about\nthe implementation.\n\n> + Only superusers can request to log plan of the running query.\n\n.. log the plan of a running query.\n\n> + Note that nested statements (statements executed inside a function) are not\n> + considered for logging. Only the deepest nesting query's plan is logged.\n\nOnly the plan of the most deeply nested query is logged.\n\n> +\t\t\t\t(errmsg(\"backend with PID %d is not running a query\",\n> +\t\t\t\t\tMyProcPid)));\n\nThe extra parens around errmsg() are not needed since e3a87b499.\n\n> +\t\t\t\t(errmsg(\"backend with PID %d is now holding a page lock.
Try again\",\n\nremove \"now\"\n\n> +\t\t\t(errmsg(\"plan of the query running on backend with PID %d is:\\n%s\",\n> +\t\t\t\t\tMyProcPid, es->str->data),\n\nMaybe this should say \"query plan running on backend with PID 17793 is:\"\n\n> + * would cause lots of log messages and which can lead to denial of\n\nremove \"and\"\n\n> +\t\t\t\t errmsg(\"must be a superuser to log information about specified process\")));\n\nI think it should say not say \"specified\", since that sounds like the user\nmight have access to log information about some other processes:\n| must be a superuser to log information about processes\n\n> +\n> +\tproc = BackendPidGetProc(pid);\n> +\n> +\t/*\n> +\t * BackendPidGetProc returns NULL if the pid isn't valid; but by the time\n> +\t * we reach kill(), a process for which we get a valid proc here might\n> +\t * have terminated on its own. There's no way to acquire a lock on an\n> +\t * arbitrary process to prevent that. But since this mechanism is usually\n> +\t * used to below purposes, it might end its own first and the information\n\nused for below purposes,\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Nov 2021 12:37:09 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Wed, Oct 13, 2021 at 7:58 PM Ekaterina Sokolova\n<e.sokolova@postgrespro.ru> wrote:\n> Thank you for working on this issue. I would be glad to continue to\n> follow the development of this issue.\n\nThanks for the patch. I'm not sure if v11 is the latest patch, if yes,\nI have the following comments:\n\n1) Firstly, v11 patch isn't getting applied on the master -\nhttp://cfbot.cputube.org/patch_35_3142.log.\n\n2) I think we are moving away from if (!superuser()) checks, see the\ncommit [1]. The goal is to let the GRANT-REVOKE system deal with who\nis supposed to run these system functions. 
Since\npg_log_current_query_plan also writes the info to server logs, I think\nit should do the same thing as commit [1] did for\npg_log_backend_memory_contexts.\n\nWith v11, you are re-introducing the superuser() check in the\npg_log_backend_memory_contexts which is wrong.\n\n3) I think SendProcSignalForLogInfo can be more generic, meaning, it\ncan also send signal to auxiliary processes if asked to do this will\nsimplify the things for pg_log_backend_memory_contexts and other\npatches like pg_print_backtrace. I would imagine it to be \"bool\nSendProcSignalForLogInfo(pid_t pid, ProcSignalReason reason, bool\nsignal_aux_proc);\".\n\n[1] commit f0b051e322d530a340e62f2ae16d99acdbcb3d05\nAuthor: Jeff Davis <jdavis@postgresql.org>\nDate: Tue Oct 26 13:13:52 2021 -0700\n\n Allow GRANT on pg_log_backend_memory_contexts().\n\n Remove superuser check, allowing any user granted permissions on\n pg_log_backend_memory_contexts() to log the memory contexts of any\n backend.\n\n Note that this could allow a privileged non-superuser to log the\n memory contexts of a superuser backend, but as discussed, that does\n not seem to be a problem.\n\n Reviewed-by: Nathan Bossart, Bharath Rupireddy, Michael Paquier,\nKyotaro Horiguchi, Andres Freund\n Discussion:\nhttps://postgr.es/m/e5cf6684d17c8d1ef4904ae248605ccd6da03e72.camel@j-davis.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 13 Nov 2021 18:59:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-11-13 22:29, Bharath Rupireddy wrote:\nThanks for your review!\n\n> On Wed, Oct 13, 2021 at 7:58 PM Ekaterina Sokolova\n> <e.sokolova@postgrespro.ru> wrote:\n>> Thank you for working on this issue. I would be glad to continue to\n>> follow the development of this issue.\n> \n> Thanks for the patch.
I'm not sure if v11 is the latest patch, if yes,\n> I have the following comments:\n> \n> 1) Firstly, v11 patch isn't getting applied on the master -\n> http://cfbot.cputube.org/patch_35_3142.log.\nUpdated the patch.\n\n> 2) I think we are moving away from if (!superuser()) checks, see the\n> commit [1]. The goal is to let the GRANT-REVOKE system deal with who\n> is supposed to run these system functions. Since\n> pg_log_current_query_plan also writes the info to server logs, I think\n> it should do the same thing as commit [1] did for\n> pg_log_backend_memory_contexts.\n> \n> With v11, you are re-introducing the superuser() check in the\n> pg_log_backend_memory_contexts which is wrong.\n\nYeah, I removed superuser() check and make it possible to be executed by \nnon-superusers when users are granted to do so.\n> \n> 3) I think SendProcSignalForLogInfo can be more generic, meaning, it\n> can also send signal to auxiliary processes if asked to do this will\n> simplify the things for pg_log_backend_memory_contexts and other\n> patches like pg_print_backtrace.
I would imagine it to be \"bool\n> SendProcSignalForLogInfo(pid_t pid, ProcSignalReason reason, bool\n> signal_aux_proc);\".\n\nI agree with your idea.\nSince sending signals to auxiliary processes to dump memory contexts and \npg_print_backtrace is still under discussion, IMHO it would be better to \nrefactor SendProcSignalForLogInfo after these patches are commited.\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 15 Nov 2021 21:59:15 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-11-13 03:37, Justin Pryzby wrote:\n\n> I reviewed this version of the patch - I have some language fixes.\n\nThanks for your review!\nAttached patch that reflects your comments.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 15 Nov 2021 23:00:12 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On Mon, Nov 15, 2021 at 6:29 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> > 3) I think SendProcSignalForLogInfo can be more generic, meaning, it\n> > can also send signal to auxiliary processes if asked to do this will\n> > simplify the things for pg_log_backend_memory_contexts and other\n> > patches like pg_print_backtrace. I would imagine it to be \"bool\n> > SendProcSignalForLogInfo(pid_t pid, ProcSignalReason reason, bool\n> > signal_aux_proc);\".\n>\n> I agree with your idea.\n> Since sending signals to auxiliary processes to dump memory contexts and\n> pg_print_backtrace is still under discussion, IMHO it would be better to\n> refactor SendProcSignalForLogInfo after these patches are commited.\n\n+1.\n\nI have another comment: isn't it a good idea that an overloaded\nversion of the new function pg_log_query_plan can take the available\nexplain command options as a text argument?
I'm not sure if it is\npossible to get the stats like buffers, costs etc of a running query,\nif yes, something like pg_log_query_plan(pid, 'buffers',\n'costs'....);? It looks to be an overkill at first sight, but these\ncan be useful to know more detailed plan of the query.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 15 Nov 2021 19:45:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Logging plan of the running query" }, { "msg_contents": "On 2021-11-15 23:15, Bharath Rupireddy wrote:\n\n> I have another comment: isn't it a good idea that an overloaded\n> version of the new function pg_log_query_plan can take the available\n> explain command options as a text argument? I'm not sure if it is\n> possible to get the stats like buffers, costs etc of a running query,\n> if yes, something like pg_log_query_plan(pid, 'buffers',\n> 'costs'....);? It looks to be an overkill at first sight, but these\n> can be useful to know more detailed plan of the query.\n\nI also think the overloaded version would be useful.\nHowever as discussed in [1], it seems to introduce other difficulties.\nI think it would be enough that the first version of pg_log_query_plan \ndoesn't take any parameters.\n\n[1] \nhttps://www.postgresql.org/message-id/ce86e4f72f09d5497e8ad3a162861d33%40oss.nttdata.com\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 16 Nov 2021 20:48:27 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" } ]
[ { "msg_contents": "Right now Tom is doing a bit of work to try and improve the\nperformance of regression test runs with CLOBBER_CACHE_ALWAYS. I'm\non board with making this go faster too.\n\nI did a CLOBBER_CACHE_ALWAYS_RUN today and it took my machine almost 7\nhours to complete. I occasionally checked top -c and was a bit\ndisappointed that the majority of the time just a single backend was\nbusy. The reason for this is that most groups have some test that\ntakes much longer to run than others and I just often caught it once\nit had finished all the faster tests and was stuck on the slow one.\n\nI did a bit of analysis into the runtimes and found that:\n\n1. Without parallelism, the total run-time of all tests was 12.29 hours.\n2. The run took 6.45 hours. (I took the max time from each group and\nsummed that from each group)\n\nThat means the average backends utilized here was about 1.9.\n\nI wondered if there might be a better way to handle how parallel tests\nwork in pg_regress. We have many parallel groups that have reached 20\ntests and we often just create another parallel group because of the\nnot exceeding 20 rule. In many cases, we could get busy running\nanother test instead of sitting around idle.\n\nRight now we start 1 backend for each test in a parallel group then\nwait for the final backend to complete before running the next group.\n\nIs a particular reason for it to work that way?\n\nWhy can we not just have a much larger parallel group and lump all of\nthe tests that have no special needs not to be run concurrently or\nconcurrently with another test in particular and just run all those\nwith up to N workers. Once a worker completes, give it another test\nto process until there are none left. We could still limit the total\nconcurrency with --max-connections=20.
I don't think we'd need to make\nany code changes to make this idea work.\n\nI did the maths on that and if it worked that way, and assuming all\nthe parallel tests don't mind being run at the same time with any\nother parallel test, then the theoretical run-time comes down to 3.75\nhours with 8 workers, or 4.11 with 4 workers. The primary reason it\ndoes not become much faster is due to the \"privileges\" test taking 3\nhours. If I calculate assuming 128 workers the time only drops to 3.46\nhours. Here there are enough workers to start the slow privileges\ntest on a worker that's not done anything else yet. So the 3.46 hours\nis just the time for the privileges test plus the time to do the\nserial tests, one by one.\n\nFor the above, I didn't do anything to change the order of the tests\nto start the long-running ones first, but if I do that, I can get the\ntimes down to 3.46 with just 4 workers. That's 1.86x faster than my\nrun.\n\nI've attached a text file with the method I used to calculate each of\nthe numbers above and I've also attached the results with timings from\nmy CLOBBER_CACHE_ALWAYS run for anyone who'd like to check my maths.\n\nIf I split the \"privileges\" test into 2 even parts, then 8 workers\nwould run the tests in 1.95 hours which is 3.2x faster than my run.\n\nDavid", "msg_date": "Thu, 13 May 2021 00:55:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Do we need to rethink how to parallelize regression tests to speedup\n CLOBBER_CACHE_ALWAYS?"
}, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Right now we start 1 backend for each test in a parallel group then\n> wait for the final backend to complete before running the next group.\n\n> Is a particular reason for it to work that way?\n\nThere are a whole lot of cases where test Y depends on an earlier test X.\nSome of those dependencies are annotated in parallel_schedule, but I fear\nmost are not.\n\nIf we had a full list of such dependencies then we could imagine building\na job scheduler that would dispatch any script that has no remaining\ndependencies.\n\nThe cases where \"script X can't run concurrently with script Y\" are\nalso problematic. It's not as easy to discover those through testing,\nsince it might happen to work depending on timing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 09:50:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we need to rethink how to parallelize regression tests to\n speedup CLOBBER_CACHE_ALWAYS?" }, { "msg_contents": "On Thu, 13 May 2021 at 01:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There are a whole lot of cases where test Y depends on an earlier test X.\n> Some of those dependencies are annotated in parallel_schedule, but I fear\n> most are not.\n>\n> If we had a full list of such dependencies then we could imagine building\n> a job scheduler that would dispatch any script that has no remaining\n> dependencies.\n\nI wonder if it could be done by starting a new parallel group and then\njust move existing tests into it first verifying that:\n\n1. The test does not display results from any pg_catalog table, or if\nit does the filter is restrictive enough that there's no possibility\nthat the results will change due to other sessions changing the\ncatalogues.\n2. If the test creates any new objects that those objects have a name\nthat's unlikely to conflict with other tests. e.g no tablenames like\nt1\n3.
The test does not INSERT/DELETE/UPDATE/VACUUM/ALTER/ANALYZE any\ntables that exist for more than 1 test.\n4. Does not globally modify the system state. e.g ALTER SYSTEM.\n\nWe could document in parallel_schedule that tests in this particular\ngroup must meet the above requirement, plus any others I've not\nthought about. That list of reasons could be updated when we discover\nother things I've neglected to think about.\n\nI hope that now since we no longer have serial_schedule that just\nhaving one source of truth for tests that the comments in the\nparallel_schedule are more likely to be read and kept up to date.\n\nI imagine there are many tests that could also just be run entirely in\na single begin; commit;. That would mean any catalogue changes they\nmade would not be visible to any other test which happens to query\nthat.\n\nDavid\n\n\n", "msg_date": "Thu, 13 May 2021 10:01:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Do we need to rethink how to parallelize regression tests to\n speedup CLOBBER_CACHE_ALWAYS?" } ]
[ { "msg_contents": "Hi,\n\nWhile I was reviewing one of the logical decoding features, I found a\nfew issues in alter subscription drop publication.\n\nAlter subscription drop publication does not support copy_data option,\nthat needs to be removed from tab completion.\n\nDropping all the publications present in the subscription using alter\nsubscription drop publication would throw \"subscription must contain\nat least one publication\". This message was slightly confusing to me.\nAs even though some publication was present on the subscription I was\nnot able to drop. Instead I feel we could throw an error message\nsomething like \"dropping specified publication will result in\nsubscription without any publication, this is not supported\".\n\nmerge_publications can be called after validation of the options\nspecified, I think we should check if the options specified are\ncorrect or not before checking the actual publications.\n\nAttached a patch which contains the fixes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Wed, 12 May 2021 21:55:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "alter subscription drop publication fixes" }, { "msg_contents": "On Wed, May 12, 2021 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:\n> While I was reviewing one of the logical decoding features, I found a\n> few issues in alter subscription drop publication.\n\nThanks!\n\n> Alter subscription drop publication does not support copy_data option,\n> that needs to be removed from tab completion.\n\n+1. You may want to also change set_publication_option(to something\nlike drop_pulication_option with only refresh option) for the drop in\nthe docs?
Because \"Additionally, refresh options as described under\nREFRESH PUBLICATION may be specified.\" doesn't make sense.\n\n> Dropping all the publications present in the subscription using alter\n> subscription drop publication would throw \"subscription must contain\n> at least one publication\". This message was slightly confusing to me.\n> As even though some publication was present on the subscription I was\n> not able to drop. Instead I feel we could throw an error message\n> something like \"dropping specified publication will result in\n> subscription without any publication, this is not supported\".\n\n-1 for that long message. The intention of that error was to throw an\nerror if all the publications of a subscription are dropped. If that's\nso confusing, then you could just let the error message be\n\"subscription must contain at least one publication\", add an error\ndetail \"Subscription without any publication is not allowed to\nexist/is not supported.\" or \"Removing/Dropping all the publications\nfrom a subscription is not allowed/supported.\" or some other better\nwording.\n\n> merge_publications can be called after validation of the options\n> specified, I think we should check if the options specified are\n> correct or not before checking the actual publications.\n\n+1.
That was a miss in the original feature.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 May 2021 22:15:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "\nOn Thu, 13 May 2021 at 00:45, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, May 12, 2021 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:\n>> While I was reviewing one of the logical decoding features, I found a\n>> few issues in alter subscription drop publication.\n>\n> Thanks!\n>\n>> Alter subscription drop publication does not support copy_data option,\n>> that needs to be removed from tab completion.\n>\n> +1. You may want to also change set_publication_option(to something\n> like drop_pulication_option with only refresh option) for the drop in\n> the docs? Because \"Additionally, refresh options as described under\n> REFRESH PUBLICATION may be specified.\" doesn't make sense.\n>\n\n+1. Make sense to remove the unsupported options for tab-complete.\n\n>> Dropping all the publications present in the subscription using alter\n>> subscription drop publication would throw \"subscription must contain\n>> at least one publication\". This message was slightly confusing to me.\n>> As even though some publication was present on the subscription I was\n>> not able to drop. Instead I feel we could throw an error message\n>> something like \"dropping specified publication will result in\n>> subscription without any publication, this is not supported\".\n>\n> -1 for that long message. The intention of that error was to throw an\n> error if all the publications of a subscription are dropped.
If that's\n> so confusing, then you could just let the error message be\n> \"subscription must contain at least one publication\", add an error\n> detail \"Subscription without any publication is not allowed to\n> exist/is not supported.\" or \"Removing/Dropping all the publications\n> from a subscription is not allowed/supported.\" or some other better\n> wording.\n>\n\nAgree with Bharath. We can use a detail message. How about?\n\n if (!oldpublist)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"subscription must contain at least one publication\"),\n errdetail(\"Dropping all the publications from a subscription is not supported\")));\n\n>> merge_publications can be called after validation of the options\n>> specified, I think we should check if the options specified are\n>> correct or not before checking the actual publications.\n>\n> +1. That was a miss in the original feature.\n>\n\n+1.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 13 May 2021 11:15:45 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Thu, May 13, 2021 at 8:45 AM Japin Li <japinli@hotmail.com> wrote:\n> >> Dropping all the publications present in the subscription using alter\n> >> subscription drop publication would throw \"subscription must contain\n> >> at least one publication\". This message was slightly confusing to me.\n> >> As even though some publication was present on the subscription I was\n> >> not able to drop. Instead I feel we could throw an error message\n> >> something like \"dropping specified publication will result in\n> >> subscription without any publication, this is not supported\".\n> >\n> > -1 for that long message. The intention of that error was to throw an\n> > error if all the publications of a subscription are dropped.
If that's\n> > so confusing, then you could just let the error message be\n> > \"subscription must contain at least one publication\", add an error\n> > detail \"Subscription without any publication is not allowed to\n> > exist/is not supported.\" or \"Removing/Dropping all the publications\n> > from a subscription is not allowed/supported.\" or some other better\n> > wording.\n> >\n>\n> Agree with Bharath. We can use a detail message. How about?\n>\n> if (!oldpublist)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"subscription must contain at least one publication\"),\n> errdetail(\"Dropping all the publications from a subscription is not supported\")));\n\nOr how about just errmsg(\"cannot drop all the publications of the\nsubscriber \\\"%s\\\"\", subname) without any error detail?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 09:36:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Thu, May 13, 2021 at 9:36 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 8:45 AM Japin Li <japinli@hotmail.com> wrote:\n> > >> Dropping all the publications present in the subscription using alter\n> > >> subscription drop publication would throw \"subscription must contain\n> > >> at least one publication\". This message was slightly confusing to me.\n> > >> As even though some publication was present on the subscription I was\n> > >> not able to drop. Instead I feel we could throw an error message\n> > >> something like \"dropping specified publication will result in\n> > >> subscription without any publication, this is not supported\".\n> > >\n> > > -1 for that long message. The intention of that error was to throw an\n> > > error if all the publications of a subscription are dropped.
If that's\n> > > so confusing, then you could just let the error message be\n> > > \"subscription must contain at least one publication\", add an error\n> > > detail \"Subscription without any publication is not allowed to\n> > > exist/is not supported.\" or \"Removing/Dropping all the publications\n> > > from a subscription is not allowed/supported.\" or some other better\n> > > wording.\n> > >\n> >\n> > Agree with Bharath. We can use a detail message. How about?\n> >\n> > if (!oldpublist)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"subscription must contain at least one publication\"),\n> > errdetail(\"Dropping all the publications from a subscription is not supported\")));\n>\n> Or how about just errmsg(\"cannot drop all the publications of the\n> subscriber \\\"%s\\\"\", subname) without any error detail?\n\nIMHO, this message without errdetail looks much better.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 11:04:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Wed, May 12, 2021 at 10:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:\n> > While I was reviewing one of the logical decoding features, I found a\n> > few issues in alter subscription drop publication.\n>\n> Thanks!\n>\n> > Alter subscription drop publication does not support copy_data option,\n> > that needs to be removed from tab completion.\n>\n> +1. You may want to also change set_publication_option(to something\n> like drop_pulication_option with only refresh option) for the drop in\n> the docs?
Because \"Additionally, refresh options as described under\n> REFRESH PUBLICATION may be specified.\" doesn't make sense.\n>\n> > Dropping all the publications present in the subscription using alter\n> > subscription drop publication would throw \"subscription must contain\n> > at least one publication\". This message was slightly confusing to me.\n> > As even though some publication was present on the subscription I was\n> > not able to drop. Instead I feel we could throw an error message\n> > something like \"dropping specified publication will result in\n> > subscription without any publication, this is not supported\".\n>\n> -1 for that long message. The intention of that error was to throw an\n> error if all the publications of a subscription are dropped. If that's\n> so confusing, then you could just let the error message be\n> \"subscription must contain at least one publication\", add an error\n> detail \"Subscription without any publication is not allowed to\n> exist/is not supported.\" or \"Removing/Dropping all the publications\n> from a subscription is not allowed/supported.\" or some other better\n> wording.\n>\n\nModified the error message to \"errmsg(\"cannot drop all the\npublications of the subscriber \\\"%s\\\"\", subname)\".\nI have separated the Drop publication documentation contents. There\nare some duplicate contents but the readability is slightly better.\nThoughts?\n\n> > merge_publications can be called after validation of the options\n> > specified, I think we should check if the options specified are\n> > correct or not before checking the actual publications.\n>\n> +1.
That was a miss in the original feature.\n\nAttached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 13 May 2021 19:43:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Thu, May 13, 2021 at 7:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> I have separated the Drop publication documentation contents. There\n> are some duplicate contents but the readability is slightly better.\n> Thoughts?\n\n-ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\nDROP PUBLICATION <replaceable\nclass=\"parameter\">publication_name</replaceable> [, ...] [ WITH (\n<replaceable class=\"parameter\">set_publication_option</replaceable> [=\n<replaceable class=\"parameter\">value</replaceable>] [, ... ] ) ]\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\nDROP PUBLICATION <replaceable\nclass=\"parameter\">publication_name</replaceable> [, ...] [ WITH (\nrefresh [= <replaceable class=\"parameter\">value</replaceable>] ) ]\n\nIMO, let's not list the \"refresh\" option directly here. 
If we don't\nwant to add a new list of operations \"drop_publication_opition\", you\ncould just mention a note \"Except for DROP PUBLICATION, the refresh\noptions as described under REFRESH PUBLICATION may be specified.\" or\n\"Additionally, refresh options as described under REFRESH PUBLICATION\nmay be specified, except for DROP PUBLICATION.\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 20:13:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "\nOn Thu, 13 May 2021 at 22:13, vignesh C <vignesh21@gmail.com> wrote:\n> On Wed, May 12, 2021 at 10:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Wed, May 12, 2021 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:\n>> > While I was reviewing one of the logical decoding features, I found a\n>> > few issues in alter subscription drop publication.\n>>\n>> Thanks!\n>>\n>> > Alter subscription drop publication does not support copy_data option,\n>> > that needs to be removed from tab completion.\n>>\n>> +1. You may want to also change set_publication_option(to something\n>> like drop_pulication_option with only refresh option) for the drop in\n>> the docs? Because \"Additionally, refresh options as described under\n>> REFRESH PUBLICATION may be specified.\" doesn't make sense.\n>>\n>> > Dropping all the publications present in the subscription using alter\n>> > subscription drop publication would throw \"subscription must contain\n>> > at least one publication\". This message was slightly confusing to me.\n>> > As even though some publication was present on the subscription I was\n>> > not able to drop. 
Instead I feel we could throw an error message\n>> > something like \"dropping specified publication will result in\n>> > subscription without any publication, this is not supported\".\n>>\n>> -1 for that long message. The intention of that error was to throw an\n>> error if all the publications of a subscription are dropped. If that's\n>> so confusing, then you could just let the error message be\n>> \"subscription must contain at least one publication\", add an error\n>> detail \"Subscription without any publication is not allowed to\n>> exist/is not supported.\" or \"Removing/Dropping all the publications\n>> from a subscription is not allowed/supported.\" or some other better\n>> wording.\n>>\n>\n> Modified the error message to \"errmsg(\"cannot drop all the\n> publications of the subscriber \\\"%s\\\"\", subname)\".\n> I have separated the Drop publication documentation contents. There\n> are some duplicate contents but the readability is slightly better.\n> Thoughts?\n>\n>> > merge_publications can be called after validation of the options\n>> > specified, I think we should check if the options specified are\n>> > correct or not before checking the actual publications.\n>>\n>> +1. That was a miss in the original feature.\n>\n> Attached patch has the changes for the same.\n>\n\nThanks for updating the patch. 
I have a little comments for the new patch.\n\n- <literal>ADD</literal> adds additional publications,\n- <literal>DROP</literal> removes publications from the list of\n+ <literal>ADD</literal> adds additional publications from the list of\n\nI think, we should change the word 'from' to 'to'.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 14 May 2021 10:11:10 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Thu, May 13, 2021 at 8:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 7:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > I have separated the Drop publication documentation contents. There\n> > are some duplicate contents but the readability is slightly better.\n> > Thoughts?\n>\n> -ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\n> DROP PUBLICATION <replaceable\n> class=\"parameter\">publication_name</replaceable> [, ...] [ WITH (\n> <replaceable class=\"parameter\">set_publication_option</replaceable> [=\n> <replaceable class=\"parameter\">value</replaceable>] [, ... ] ) ]\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\n> DROP PUBLICATION <replaceable\n> class=\"parameter\">publication_name</replaceable> [, ...] [ WITH (\n> refresh [= <replaceable class=\"parameter\">value</replaceable>] ) ]\n>\n> IMO, let's not list the \"refresh\" option directly here.
I also made another change to change set_publication_option to\npublication_option as it is common for SET/ADD & DROP.\n\nRegards,\nVignesh", "msg_date": "Fri, 14 May 2021 19:53:16 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Fri, May 14, 2021 at 7:41 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Thu, 13 May 2021 at 22:13, vignesh C <vignesh21@gmail.com> wrote:\n> > On Wed, May 12, 2021 at 10:15 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Wed, May 12, 2021 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> > While I was reviewing one of the logical decoding features, I found a\n> >> > few issues in alter subscription drop publication.\n> >>\n> >> Thanks!\n> >>\n> >> > Alter subscription drop publication does not support copy_data\noption,\n> >> > that needs to be removed from tab completion.\n> >>\n> >> +1. You may want to also change set_publication_option(to something\n> >> like drop_pulication_option with only refresh option) for the drop in\n> >> the docs? Because \"Additionally, refresh options as described under\n> >> REFRESH PUBLICATION may be specified.\" doesn't make sense.\n> >>\n> >> > Dropping all the publications present in the subscription using alter\n> >> > subscription drop publication would throw \"subscription must contain\n> >> > at least one publication\". This message was slightly confusing to me.\n> >> > As even though some publication was present on the subscription I was\n> >> > not able to drop. Instead I feel we could throw an error message\n> >> > something like \"dropping specified publication will result in\n> >> > subscription without any publication, this is not supported\".\n> >>\n> >> -1 for that long message. The intention of that error was to throw an\n> >> error if all the publications of a subscription are dropped. 
If that's\n> >> so confusing, then you could just let the error message be\n> >> \"subscription must contain at least one publication\", add an error\n> >> detail \"Subscription without any publication is not allowed to\n> >> exist/is not supported.\" or \"Removing/Dropping all the publications\n> >> from a subscription is not allowed/supported.\" or some other better\n> >> wording.\n> >>\n> >\n> > Modified the error message to \"errmsg(\"cannot drop all the\n> > publications of the subscriber \\\"%s\\\"\", subname)\".\n> > I have separated the Drop publication documentation contents. There\n> > are some duplicate contents but the readability is slightly better.\n> > Thoughts?\n> >\n> >> > merge_publications can be called after validation of the options\n> >> > specified, I think we should check if the options specified are\n> >> > correct or not before checking the actual publications.\n> >>\n> >> +1. That was a miss in the original feature.\n> >\n> > Attached patch has the changes for the same.\n> >\n>\n> Thanks for updating the patch. 
I have a little comments for the new patch.\n>\n> - <literal>ADD</literal> adds additional publications,\n> - <literal>DROP</literal> removes publications from the list of\n> + <literal>ADD</literal> adds additional publications from the list\nof\n>\n> I think, we should change the word 'from' to 'to'.\n\nI have changed it to:\n <literal>ADD</literal> adds additional publications,\n- <literal>DROP</literal> removes publications from the list of\n+ <literal>DROP</literal> removes publications to/from the list of\n\nThe changes for the same are shared in v3 patch at [1].\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm3svMg%2BhMA9GsJsUQ75HXtpjpAh2gk%3D8yZfgAnA9BMsnA%40mail.gmail.com\n\nRegards,\nVignesh\n\nOn Fri, May 14, 2021 at 7:41 AM Japin Li <japinli@hotmail.com> wrote:>>> On Thu, 13 May 2021 at 22:13, vignesh C <vignesh21@gmail.com> wrote:> > On Wed, May 12, 2021 at 10:15 PM Bharath Rupireddy> > <bharath.rupireddyforpostgres@gmail.com> wrote:> >>> >> On Wed, May 12, 2021 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:> >> > While I was reviewing one of the logical decoding features, I found a> >> > few issues in alter subscription drop publication.> >>> >> Thanks!> >>> >> > Alter subscription drop publication does not support copy_data option,> >> > that needs to be removed from tab completion.> >>> >> +1. You may want to also change set_publication_option(to something> >> like drop_pulication_option with only refresh option) for the drop in> >> the docs? Because \"Additionally, refresh options as described under> >> REFRESH PUBLICATION may be specified.\" doesn't make sense.> >>> >> > Dropping all the publications present in the subscription using alter> >> > subscription drop publication would throw \"subscription must contain> >> > at least one publication\". This message was slightly confusing to me.> >> > As even though some publication was present on the subscription I was> >> > not able to drop. 
Instead I feel we could throw an error message> >> > something like \"dropping specified publication will result in> >> > subscription without any publication, this is not supported\".> >>> >> -1 for that long message. The intention of that error was to throw an> >> error if all the publications of a subscription are dropped. If that's> >> so confusing, then you could just let the error message be> >> \"subscription must contain at least one publication\", add an error> >> detail \"Subscription without any publication is not allowed to> >> exist/is not supported.\" or \"Removing/Dropping all the publications> >> from a subscription is not allowed/supported.\" or some other better> >> wording.> >>> >> > Modified the error message to \"errmsg(\"cannot drop all the> > publications of the subscriber \\\"%s\\\"\", subname)\".> > I have separated the Drop publication documentation contents. There> > are some duplicate contents but the readability is slightly better.> > Thoughts?> >> >> > merge_publications can be called after validation of the options> >> > specified, I think we should check if the options specified are> >> > correct or not before checking the actual publications.> >>> >> +1. That was a miss in the original feature.> >> > Attached patch has the changes for the same.> >>> Thanks for updating the patch. 
I have a little comments for the new patch.>> -      <literal>ADD</literal> adds additional publications,> -      <literal>DROP</literal> removes publications from the list of> +      <literal>ADD</literal> adds additional publications from the list of>> I think, we should change the word 'from' to 'to'.I have changed it to:       <literal>ADD</literal> adds additional publications,-      <literal>DROP</literal> removes publications from the list of+      <literal>DROP</literal> removes publications to/from the list ofThe changes for the same are shared in v3 patch at [1].[1] - https://www.postgresql.org/message-id/CALDaNm3svMg%2BhMA9GsJsUQ75HXtpjpAh2gk%3D8yZfgAnA9BMsnA%40mail.gmail.comRegards,Vignesh", "msg_date": "Fri, 14 May 2021 19:58:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Fri, May 14, 2021 at 7:58 PM vignesh C <vignesh21@gmail.com> wrote:\n> I have changed it to:\n> <literal>ADD</literal> adds additional publications,\n> - <literal>DROP</literal> removes publications from the list of\n> + <literal>DROP</literal> removes publications to/from the list of\n\nHow about \"Publications are added to or dropped from the existing list\nof publications by <literal>ADD</literal> or <literal>DROP</literal>\nrespectively.\" ?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 20:11:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Fri, May 14, 2021 at 7:58 PM vignesh C <vignesh21@gmail.com> wrote:\n>> I have changed it to:\n>> <literal>ADD</literal> adds additional publications,\n>> - <literal>DROP</literal> removes publications from the list of\n>> + 
<literal>DROP</literal> removes publications to/from the list of\n\n> How about \"Publications are added to or dropped from the existing list\n> of publications by <literal>ADD</literal> or <literal>DROP</literal>\n> respectively.\" ?\n\nWe generally prefer to use the active voice, so I don't think\nrestructuring the sentence that way is an improvement. The quoted\nbit would be better left alone entirely. Or maybe split it into\ntwo sentences, but keep the active voice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 11:26:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Fri, May 14, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Fri, May 14, 2021 at 7:58 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> I have changed it to:\n> >> <literal>ADD</literal> adds additional publications,\n> >> - <literal>DROP</literal> removes publications from the list of\n> >> + <literal>DROP</literal> removes publications to/from the list of\n>\n> > How about \"Publications are added to or dropped from the existing list\n> > of publications by <literal>ADD</literal> or <literal>DROP</literal>\n> > respectively.\" ?\n>\n> We generally prefer to use the active voice, so I don't think\n> restructuring the sentence that way is an improvement. The quoted\n> bit would be better left alone entirely. 
Or maybe split it into\n> two sentences, but keep the active voice.\n\nI felt changing it to the below was better:\nSET replaces the entire list of publications with a new list, ADD adds\nadditional publications to the list of publications and DROP removes\nthe publications from the list of publications.\n\nAttached patch has the change for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Fri, 14 May 2021 23:02:41 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Fri, May 14, 2021 at 11:02 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > On Fri, May 14, 2021 at 7:58 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >> I have changed it to:\n> > >> <literal>ADD</literal> adds additional publications,\n> > >> - <literal>DROP</literal> removes publications from the list of\n> > >> + <literal>DROP</literal> removes publications to/from the list of\n> >\n> > > How about \"Publications are added to or dropped from the existing list\n> > > of publications by <literal>ADD</literal> or <literal>DROP</literal>\n> > > respectively.\" ?\n> >\n> > We generally prefer to use the active voice, so I don't think\n> > restructuring the sentence that way is an improvement. The quoted\n> > bit would be better left alone entirely. Or maybe split it into\n> > two sentences, but keep the active voice.\n>\n> I felt changing it to the below was better:\n> SET replaces the entire list of publications with a new list, ADD adds\n> additional publications to the list of publications and DROP removes\n> the publications from the list of publications.\n>\n> Attached patch has the change for the same.\n> Thoughts?\n\nThanks Vignesh, the patch looks good to me and it works as expected\ni.e. 
doesn't show up the copy_data option in the tab complete for the\nalter subscription drop publication command. While on this, I observed\nthat the new function merge_publications and the error message crossed\nthe 80char limit, I adjusted that and added a commit message. Please\nhave a look, if that is okay, add an entry to the commit fest and pass\nit on to the committer as I have no further comments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 15 May 2021 14:58:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Sat, May 15, 2021 at 2:58 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 11:02 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, May 14, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > > On Fri, May 14, 2021 at 7:58 PM vignesh C <vignesh21@gmail.com>\nwrote:\n> > > >> I have changed it to:\n> > > >> <literal>ADD</literal> adds additional publications,\n> > > >> - <literal>DROP</literal> removes publications from the list\nof\n> > > >> + <literal>DROP</literal> removes publications to/from the\nlist of\n> > >\n> > > > How about \"Publications are added to or dropped from the existing\nlist\n> > > > of publications by <literal>ADD</literal> or\n<literal>DROP</literal>\n> > > > respectively.\" ?\n> > >\n> > > We generally prefer to use the active voice, so I don't think\n> > > restructuring the sentence that way is an improvement. The quoted\n> > > bit would be better left alone entirely. 
Or maybe split it into\n> > > two sentences, but keep the active voice.\n> >\n> > I felt changing it to the below was better:\n> > SET replaces the entire list of publications with a new list, ADD adds\n> > additional publications to the list of publications and DROP removes\n> > the publications from the list of publications.\n> >\n> > Attached patch has the change for the same.\n> > Thoughts?\n>\n> Thanks Vignesh, the patch looks good to me and it works as expected\n> i.e. doesn't show up the copy_data option in the tab complete for the\n> alter subscription drop publication command. While on this, I observed\n> that the new function merge_publications and the error message crossed\n> the 80char limit, I adjusted that and added a commit message. Please\n> have a look, if that is okay, add an entry to the commit fest and pass\n> it on to the committer as I have no further comments.\n\nThanks Bharath, that looks good. I have added a commitfest entry at [1] and\nmarked it to Ready For Committer.\n[1] - https://commitfest.postgresql.org/33/3115/\n\nRegards,\nVignesh\n\nOn Sat, May 15, 2021 at 2:58 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:>> On Fri, May 14, 2021 at 11:02 PM vignesh C <vignesh21@gmail.com> wrote:> >> > On Fri, May 14, 2021 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> > >> > > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:> > > > On Fri, May 14, 2021 at 7:58 PM vignesh C <vignesh21@gmail.com> wrote:> > > >> I have changed it to:> > > >> <literal>ADD</literal> adds additional publications,> > > >> -      <literal>DROP</literal> removes publications from the list of> > > >> +      <literal>DROP</literal> removes publications to/from the list of> > >> > > > How about \"Publications are added to or dropped from the existing list> > > > of publications by <literal>ADD</literal>  or <literal>DROP</literal>> > > > respectively.\" ?> > >> > > We generally prefer to use the active voice, so I don't think> > > 
restructuring the sentence that way is an improvement.  The quoted> > > bit would be better left alone entirely.  Or maybe split it into> > > two sentences, but keep the active voice.> >> > I felt changing it to the below was better:> > SET replaces the entire list of publications with a new list, ADD adds> > additional publications to the list of publications and DROP removes> > the publications from the list of publications.> >> > Attached patch has the change for the same.> > Thoughts?>> Thanks Vignesh, the patch looks good to me and it works as expected> i.e. doesn't show up the copy_data option in the tab complete for the> alter subscription drop publication command. While on this, I observed> that the new function merge_publications and the error message crossed> the 80char limit, I adjusted that and added a commit message. Please> have a look, if that is okay, add an entry to the commit fest and pass> it on to the committer as I have no further comments.Thanks Bharath, that looks good. I have added a commitfest entry at [1] and marked it to Ready For Committer.[1] - https://commitfest.postgresql.org/33/3115/Regards,Vignesh", "msg_date": "Sat, 15 May 2021 18:45:36 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On 15.05.21 15:15, vignesh C wrote:\n> Thanks Bharath, that looks good. I have added a commitfest entry at [1] \n> and marked it to Ready For Committer.\n> [1] - https://commitfest.postgresql.org/33/3115/ \n> <https://commitfest.postgresql.org/33/3115/>\n\nCommitted.\n\nI took out some of the code reformatting. 
We have pgindent for that, \nand it didn't object to the existing formatting, so we don't need to \nworry about that very much.\n\n\n\n", "msg_date": "Fri, 25 Jun 2021 10:00:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: alter subscription drop publication fixes" }, { "msg_contents": "On Fri, Jun 25, 2021 at 1:30 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 15.05.21 15:15, vignesh C wrote:\n> > Thanks Bharath, that looks good. I have added a commitfest entry at [1]\n> > and marked it to Ready For Committer.\n> > [1] - https://commitfest.postgresql.org/33/3115/\n> > <https://commitfest.postgresql.org/33/3115/>\n>\n> Committed.\n>\n> I took out some of the code reformatting. We have pgindent for that,\n> and it didn't object to the existing formatting, so we don't need to\n> worry about that very much.\n\nThanks for committing this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 25 Jun 2021 18:23:37 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: alter subscription drop publication fixes" } ]
[ { "msg_contents": "Hi, Etsuro-san:\nI was looking at\n Fix EXPLAIN ANALYZE for async-capable nodes.\n\nwhich adds the following parameter / field:\n\n+ bool async_mode; /* true if node is in async mode */\n\nasync_mode implies an enum: {sync, async}\nSince there are only two values, the data type is bool. I think it should\nbe named is_async.\n\nWhat do you think?\n\nCheers\n\n", "msg_date": "Wed, 12 May 2021 10:27:01 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "naming of async_mode parameter" }, { "msg_contents": "Hi,\n\nOn Thu, May 13, 2021 at 2:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> I was looking at\n> Fix EXPLAIN ANALYZE for async-capable nodes.\n\nThanks for that!\n\n> which adds the following parameter / field:\n>\n> + bool async_mode; /* true if node is in async mode */\n>\n> async_mode implies an enum: {sync, async}\n> Since there are only two values, the data type is bool. I think it should be named is_async.\n\nBy async_mode, I mean \"is in async mode?\", as commented above. I\nthought the naming is_in_async_mode would be a bit long, so I\nshortened it to async_mode. IIUC, I think another example in our\ncodebase would be the hash_spill_mode parameter in the AggState\nstruct.
So I think async_mode would be acceptable IMO.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 14 May 2021 17:05:28 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: naming of async_mode parameter" }, { "msg_contents": "On Fri, May 14, 2021 at 1:05 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Thu, May 13, 2021 at 2:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > I was looking at\n> > Fix EXPLAIN ANALYZE for async-capable nodes.\n>\n> Thanks for that!\n>\n> > which adds the following parameter / field:\n> >\n> > + bool async_mode; /* true if node is in async mode */\n> >\n> > async_mode implies an enum: {sync, async}\n> > Since there are only two values, the data type is bool. I think it\n> should be named is_async.\n>\n> By async_mode, I mean \"is in async mode?\", as commented above. I\n> thought the naming is_in_async_mode would be a bit long, so I\n> shortened it to async_mode. IIUC, I think another example in our\n> codebase would be the hash_spill_mode parameter in the AggState\n> struct. So I think async_mode would be acceptable IMO.\n>\n> Best regards,\n> Etsuro Fujita\n>\n\nHi,\nSearching postgres codebase reveals the following (partial) examples:\n\nbool is_varlena\nbool is_leaf\n\nI think these are more intuitive.\n\nIf you think is_in_async_mode is too long, how about naming the parameter\nis_async ?\n\nIf you agree, I can send out a patch.\n\nCheers\n\nOn Fri, May 14, 2021 at 1:05 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:Hi,\n\nOn Thu, May 13, 2021 at 2:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> I was looking at\n>   Fix EXPLAIN ANALYZE for async-capable nodes.\n\nThanks for that!\n\n> which adds the following parameter / field:\n>\n> +   bool        async_mode;     /* true if node is in async mode */\n>\n> async_mode implies an enum: {sync, async}\n> Since there are only two values, the data type is bool. 
I think it should be named is_async.\n\nBy async_mode, I mean \"is in async mode?\", as commented above.  I\nthought the naming is_in_async_mode would be a bit long, so I\nshortened it to async_mode.  IIUC, I think another example in our\ncodebase would be the hash_spill_mode parameter in the AggState\nstruct.  So I think async_mode would be acceptable IMO.\n\nBest regards,\nEtsuro FujitaHi,Searching postgres codebase reveals the following (partial) examples:bool is_varlenabool is_leaf I think these are more intuitive.If you think is_in_async_mode is too long, how about naming the parameter is_async ?If you agree, I can send out a patch.Cheers", "msg_date": "Fri, 14 May 2021 05:04:00 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: naming of async_mode parameter" }, { "msg_contents": "Hi,\n\nOn Fri, May 14, 2021 at 9:00 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Searching postgres codebase reveals the following (partial) examples:\n>\n> bool is_varlena\n> bool is_leaf\n>\n> I think these are more intuitive.\n>\n> If you think is_in_async_mode is too long, how about naming the parameter is_async ?\n\nSorry, I don’t think we need to rename the parameter, because I think\nasync_mode is also a good name for it, and the naming follows that of\nexisting parameters in our codebase, such as hash_spill_mode and\ncsv_mode in the CopyFormatOptions struct.\n\nI think this would be just a matter of preference.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sun, 16 May 2021 23:10:49 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: naming of async_mode parameter" } ]
[ { "msg_contents": "Hackers,\n\nI would like to propose bumping PG_CONTROL_VERSION with each release \neven if there are no changes to the ControlFileData struct. Historically \nPG_CONTROL_VERSION has only been bumped when there were changes to \nControlFileData.\n\npgBackRest uses PG_CONTROL_VERSION to identify the version of PostgreSQL \nwhen it is not running. If PG_CONTROL_VERSION does not change from a \nprior version then we also use CATALOG_VERSION_NO to uniquely identify \nthe version.\n\nThis works fine, but is pretty fragile during the alpha/beta releases \nwhen CATALOG_VERSION_NO is likely to change with each release. Of \ncourse, PG_CONTROL_VERSION might change as well but this seems to be \nextremely rare for an alpha/beta release.\n\nThere are a few commits like eeca4cd3 and 99dd8b05a that would seem to \nargue that bumping PG_CONTROL_VERSION at least once for each release is \na good idea in general. It doesn't seem too useful to be able to run \npg_resetwal or pg_controldata against another version, in the few cases \nthat it would actually work, e.g. 9.6/9.5.\n\nThoughts?\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 12 May 2021 14:58:11 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "On 2021-May-12, David Steele wrote:\n\n> pgBackRest uses PG_CONTROL_VERSION to identify the version of PostgreSQL\n> when it is not running. If PG_CONTROL_VERSION does not change from a prior\n> version then we also use CATALOG_VERSION_NO to uniquely identify the\n> version.\n\nWhy don't you use the PG_VERSION file in the datadir?\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"I dream about dreams about dreams\", sang the nightingale\nunder the pale moon (Sandman)\n\n\n", "msg_date": "Wed, 12 May 2021 15:21:45 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?"
}, { "msg_contents": "Hi,\n\nOn 2021-05-12 14:58:11 -0400, David Steele wrote:\n> pgBackRest uses PG_CONTROL_VERSION to identify the version of PostgreSQL\n> when it is not running. If PG_CONTROL_VERSION does not change from a prior\n> version then we also use CATALOG_VERSION_NO to uniquely identify the\n> version.\n\nWhy aren't you using PG_VERSION?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 May 2021 12:34:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "On 5/12/21 3:21 PM, Alvaro Herrera wrote:\n> On 2021-May-12, David Steele wrote:\n> \n>> pgBackRest uses PG_CONTROL_VERSION to identify the version of PostgreSQL\n>> when it is not running. If PG_CONTROL_VERSION does not change from a prior\n>> version then we also use CATALOG_VERSION_NO to uniquely identify the\n>> version.\n> \n> Why don't you use the PG_VERSION file in the datadir?\n\nMostly because there is other data we need in pg_control and it is \nsimpler to read one file than two.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 12 May 2021 16:00:49 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" 
}, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 5/12/21 3:21 PM, Alvaro Herrera wrote:\n>> Why don't you use the PG_VERSION file in the datadir?\n\n> Mostly because there is other data we need in pg_control and it is \n> simpler to read one file than two.\n\nI'm disinclined to change the longstanding rule in this area for\na reason as weak as that.\n\nEven if we did change the rule going forward, you'd still need to\ndo it properly for existing releases, so I don't see that you're\ngoing to save anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 16:18:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "Hi,\n\nOn 2021-05-12 16:18:16 -0400, Tom Lane wrote:\n> Even if we did change the rule going forward, you'd still need to\n> do it properly for existing releases, so I don't see that you're\n> going to save anything.\n\nIt turns out that the last time a major version didn't have a unique\ncontrol file version was 9.5, I assume that's where David is coming\nfrom.\n\nThat said, I don't think it's a good practice to use the control file\nversion as an identifier for the major version. Who knows, it might be\nnecessary to add an optional new format in a minor version at some point\nor such craziness. And then there's the beta stuff you'd mentioned, etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 May 2021 13:30:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" 
}, { "msg_contents": "On 5/12/21 4:18 PM, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> On 5/12/21 3:21 PM, Alvaro Herrera wrote:\n>>> Why don't you use the PG_VERSION file in the datadir?\n> \n>> Mostly because there is other data we need in pg_control and it is\n>> simpler to read one file than two.\n> \n> I'm disinclined to change the longstanding rule in this area for\n> a reason as weak as that.\n\nIt's a bit more than that -- for instance we have pg_control in every \nbackup but in order to get PG_VERSION we may need to reference a prior \nbackup since it never changes. pg_control is also checked on every \narchive_command/restore_command so reading an extra file adds up when \nperformance is paramount. There are also many unit tests that need to \nwrite this data, etc.\n\nIn short, it would be very nice to have one place to get info about a \ncluster.\n\n> Even if we did change the rule going forward, you'd still need to\n> do it properly for existing releases, so I don't see that you're\n> going to save anything.\n\nIt's not really a burden for existing releases. The issue is during the \nalpha/beta phase when the CATALOG_VERSION_NO can change several times in \na few months.\n\nPerhaps it was unwise to frame this in the requirements for an external \ntool, but I still think eeca4cd3 and 99dd8b05a argue for it being a good \nidea.\n\nOr perhaps we could just add the version number to pg_control? At least \nthen pg_controldata could display the version.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 12 May 2021 16:42:44 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "On Wed, May 12, 2021 at 01:30:27PM -0700, Andres Freund wrote:\n> That said, I don't think it's a good practice to use the control file\n> version as an identifier for the major version. 
Who knows, it might be\n> necessary to add an optional new format in a minor version at some point\n> or such crazyness. And then there's the beta stuff you'd mentioned, etc.\n\nYes, PG_VERSION, as you wrote upthread already, is already fine for\nthe job, and FWIW, I have yet to see a case where being able to easily\ndetect the minor version in a data folder matters.\n\nAnd, I am of the opinion to not change the control file version if\nthere is no need to do so.\n--\nMichael", "msg_date": "Thu, 13 May 2021 11:04:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "On Thu, May 13, 2021 at 11:04:54AM +0900, Michael Paquier wrote:\n> On Wed, May 12, 2021 at 01:30:27PM -0700, Andres Freund wrote:\n> > That said, I don't think it's a good practice to use the control file\n> > version as an identifier for the major version. Who knows, it might be\n> > necessary to add an optional new format in a minor version at some point\n> > or such crazyness. And then there's the beta stuff you'd mentioned, etc.\n> \n> Yes, PG_VERSION, as you wrote upthread already, is already fine for\n> the job, and FWIW, I have yet to see a case where being able to easily\n> detect the minor version in a data folder matters.\n\nWould it even make any sense? It could be \"has version X.Y ever started the\ncluster\", or \"was it the last version that started the cluster\", or something\nelse?\n\n> And, I am of the opinion to not change the control file version if\n> there is no need to do so.\n\n+1\n\n\n", "msg_date": "Thu, 13 May 2021 10:15:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" 
}, { "msg_contents": "On 5/12/21 10:04 PM, Michael Paquier wrote:\n> On Wed, May 12, 2021 at 01:30:27PM -0700, Andres Freund wrote:\n>> That said, I don't think it's a good practice to use the control file\n>> version as an identifier for the major version. Who knows, it might be\n>> necessary to add an optional new format in a minor version at some point\n>> or such crazyness. And then there's the beta stuff you'd mentioned, etc.\n> \n> Yes, PG_VERSION, as you wrote upthread already, is already fine for\n> the job, and FWIW, I have yet to see a case where being able to easily\n> detect the minor version in a data folder matters.\n\nRegardless of PG_VERSION doing the job or not, shouldn't there be a bump \nin PG_CONTROL_VERSION whenever there is a structural or semantic change \nin the control file data? And wouldn't the easiest way to ensure that be \nto bump the version with every release?\n\nAlso, can someone give me a good reason NOT to bump the version?\n\n\nThanks, Jan\n\n-- \nJan Wieck\nPostgres User since 1994\n\n\n", "msg_date": "Thu, 13 May 2021 17:42:52 -0400", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> Regardless of PG_VERSION doing the job or not, shouldn't there be a bump \n> in PG_CONTROL_VERSION whenever there is a structural or semantic change \n> in the control file data? And wouldn't the easiest way to ensure that be \n> to bump the version with every release?\n\nNo, the way to do that is to change the version number in the commit\nthat changes the file's contents.\n\n> Also, can someone give me a good reason NOT to bump the version?\n\nIt creates unnecessary churn, not to mention a false sense of\ncomplacency. Bumping the version in the commit that changes\nthings is not optional, because if you don't do that then you'll\nprobably burn some other developer also working on HEAD. 
So\nI don't want people thinking they can skip this because it was\ndone at the beginning of the development cycle. We've learned\nthese things the hard way for CATVERSION. I think the only reason\nthat PG_CONTROL version or WAL version might seem different is\nthat we haven't changed them often enough for people to have fresh\nmemories of problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 18:45:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "On 5/13/21 6:45 PM, Tom Lane wrote:\n> Bumping the version in the commit that changes\n> things is not optional, because if you don't do that then you'll\n> probably burn some other developer also working on HEAD. So\n> I don't want people thinking they can skip this because it was\n> done at the beginning of the development cycle.\n\nAnd we make sure this is done how?\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPostgres User since 1994\n\n\n", "msg_date": "Thu, 13 May 2021 18:59:51 -0400", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> On 5/13/21 6:45 PM, Tom Lane wrote:\n>> Bumping the version in the commit that changes\n>> things is not optional, because if you don't do that then you'll\n>> probably burn some other developer also working on HEAD. So\n>> I don't want people thinking they can skip this because it was\n>> done at the beginning of the development cycle.\n\n> And we make sure this is done how?\n\nPeer pressure? It's not that different from N other ways to\ndo a patch incorrectly, of course.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 19:23:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" 
}, { "msg_contents": "Hi,\n\nOn 2021-05-13 17:42:52 -0400, Jan Wieck wrote:\n> Also, can someone give me a good reason NOT to bump the version?\n\nThere's several types of tools (particularly around backup) that need to\nparse control files. Unnecessarily increasing the numbers of versions\nthat need to be dealt with makes that a bit harder.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 May 2021 15:47:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" }, { "msg_contents": "On Mon, May 17, 2021 at 03:47:01PM -0700, Andres Freund wrote:\n> There's several types of tools (particularly around backup) that need to\n> parse control files. Unnecessarily increasing the numbers of versions\n> that need to be dealt with makes that a bit harder.\n\nI am digressing here, sorry for that..\n\nBut it is worth noting that it is fun to debug issues where a user\nchanges the system locale and breaks some logic that parses the output\nof pg_controldata because of the translations of the text fields.\nI've really wondered over the years whether there should be more\nswitches to pick up only the field values, for the popular ones like\nTLIs for example. pg_config does that.\n--\nMichael", "msg_date": "Tue, 18 May 2021 11:40:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Always bump PG_CONTROL_VERSION?" } ]
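The identification scheme debated in the thread above can be sketched in a few lines. This is a hedged illustration, not pgBackRest code: it assumes ControlFileData begins with uint64 system_identifier, then uint32 pg_control_version, then uint32 catalog_version_no, in the platform's native byte order, and it runs against a synthetic buffer rather than a real pg_control file (all values below are made up).

```python
import struct

# Sketch of how an external tool can identify a cluster from pg_control
# without starting the server. Assumption: the leading ControlFileData
# fields are uint64 system_identifier, uint32 pg_control_version and
# uint32 catalog_version_no, stored in native byte order.

def read_control_versions(data: bytes):
    """Unpack the leading fields of a pg_control image."""
    sysid, control_ver, catalog_ver = struct.unpack_from("=QII", data, 0)
    return sysid, control_ver, catalog_ver

# Synthetic pg_control image for illustration only:
fake = struct.pack("=QII", 7000000000000000001, 1300, 202105121)
print(read_control_versions(fake))
```

Reading only these leading bytes is also why an optional new control-file format in a minor release, as Andres speculates, would be painful for such tools.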
[ { "msg_contents": "Hi,\n\nWhen working on the shared memory stats patch I needed to manufacture\nhaving a lot of stats entries. It seemed cheaper to create functions\nthan relations, for fairly obvious reasons. That required calling the\nfunctions to get those entries.\n\nMy first attempt ran into the following issue:\n\n-- create 100k functions\nDO $d$BEGIN FOR i IN 1..100000 LOOP EXECUTE format('CREATE OR REPLACE FUNCTION func_%1$s() RETURNS VOID LANGUAGE SQL AS $f$SELECT 1$f$;', i);END LOOP;END;$d$;\nTime: 14853.428 ms (00:14.853)\n\n-- call them to create stats\nDO $d$BEGIN FOR i IN 1..100000 LOOP EXECUTE format('SELECT func_%1$s();', i);END LOOP;END;$d$;\nTime: 106291.238 ms (01:46.291)\n\nI had started with 1M functions, but the calls never finished.\n\nIt turns out to work more normally if you create *and* call the\nfunctions after each other:\n\nDO $d$BEGIN FOR i IN 1..100000 LOOP EXECUTE format('CREATE OR REPLACE FUNCTION func_%1$s() RETURNS VOID LANGUAGE SQL AS $f$SELECT 1$f$; SELECT func_%1$s();', i);END LOOP;END;$d$;\nTime: 20043.375 ms (00:20.043)\n\nThe problem is that SearchCatCacheList() is not actually a hash table -\nthere are no buckets, in contrast to SearchCatCacheList(). The hash\nvalues SearchCatCacheList() computes are only used to make the\ncomparison cheaper.\n\nThe only reason that the combined creation / call works out OK\nperformance wise, is that CatCacheInvalidate() destroys *all* lists, so\nthere only ever is one entry to match against.\n\n\nThis seems like a pretty large trap?\n\nIt's been that way since SearchCatCacheList() was introduced:\n\ncommit 0332d65ac4a1c843e1812755db1afc1b1109d0ea\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2002-04-06 06:59:25 +0000\n\n Implement partial-key searching of syscaches, per recent suggestion\n to pghackers. 
Use this to do searching for ambiguous functions ---\n it will get more uses soon.\n\nTom, any chance you remember if this was an oversight, or whether you\njust considered this to be OK, given the likely numbers of objects?\n\n\nI mainly wrote this email because I just remembered this by accident as\npart of another discussion, and thought it'd be good to have a record of\nthe problem...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 May 2021 14:07:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "SearchCatCacheList()/SearchSysCacheList() is O(n)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The problem is that SearchCatCacheList() is not actually a hash table -\n> there are no buckets, in contrast to SearchCatCacheList().\n\nUh, what did you mean to compare to there?\n\n> Tom, any chance you remember if this was an oversight, or whether you\n> just considered this to be OK, given the likely numbers of objects?\n\nI'm pretty sure I didn't think the lists would get large enough to be\na problem. I'm not quite sure which list is a problem here, actually,\nseeing that the functions all have distinct names.\n\nI have a vague recollection that some callers depend on the lists\nbeing ordered by key. I'm also fairly sure that most callers need\nto look at all entries of whichever list they've requested. 
So\nI have doubts that \"use a hash table\" is really going to be a\nproductive idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 17:26:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SearchCatCacheList()/SearchSysCacheList() is O(n)" }, { "msg_contents": "Hi,\n\nOn 2021-05-12 17:26:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The problem is that SearchCatCacheList() is not actually a hash table -\n> > there are no buckets, in contrast to SearchCatCacheList().\n>\n> Uh, what did you mean to compare to there?\n\nOops, copy-and-paste failure. I was trying to reference\nSearchCatCacheInternal().\n\n\n> > Tom, any chance you remember if this was an oversight, or whether you\n> > just considered this to be OK, given the likely numbers of objects?\n>\n> I'm pretty sure I didn't think the lists would get large enough to be\n> a problem. I'm not quite sure which list is a problem here, actually,\n> seeing that the functions all have distinct names.\n\nIt's not an individual \"result\" list that's the issue. In my example\nthey're all exactly one element long. The problem is that CatCache->list\nhas one element for each cached SearchCatCacheList() result, and that\nfor every SearchCatCacheList() we linearly search through CatCache->list\nto find a match.\n\nThis is the profile:\n\n- 88.21% postgres postgres [.] SearchCatCacheList\n - 88.21% SearchCatCacheList\n - 88.21% SearchSysCacheList\n FuncnameGetCandidates\n func_get_detail\n ParseFuncOrColumn\n transformFuncCall\n+ 0.65% postgres postgres [.] AllocSetAlloc\n\nIOW, a single SearchCatCacheInternal() is O(N). With a fairly small\nconstant, but obviously that's still not great for a cache.\n\n\n> I have a vague recollection that some callers depend on the lists\n> being ordered by key. I'm also fairly sure that most callers need\n> to look at all entries of whichever list they've requested. 
So\n> I have doubts that \"use a hash table\" is really going to be a\n> productive idea.\n\nI was only thinking of a hashtable to identify the relvant CatCList, not\nwithin a CatCList or anything. The cache search code between\nSearchCatCacheInternal() and SearchCatCacheList() is pretty similar, but\ncrucially the former uses buckets, the latter doesn't:\n\nstatic inline HeapTuple\nSearchCatCacheInternal(CatCache *cache,\n\t\t\t\t\t int nkeys,\n\t\t\t\t\t Datum v1,\n\t\t\t\t\t Datum v2,\n\t\t\t\t\t Datum v3,\n\t\t\t\t\t Datum v4)\n...\n\n\t/*\n\t * find the hash bucket in which to look for the tuple\n\t */\n\thashValue = CatalogCacheComputeHashValue(cache, nkeys, v1, v2, v3, v4);\n\thashIndex = HASH_INDEX(hashValue, cache->cc_nbuckets);\n\n\t/*\n\t * scan the hash bucket until we find a match or exhaust our tuples\n\t *\n\t * Note: it's okay to use dlist_foreach here, even though we modify the\n\t * dlist within the loop, because we don't continue the loop afterwards.\n\t */\n\tbucket = &cache->cc_bucket[hashIndex];\n\tdlist_foreach(iter, bucket)\n {\n...\n\nSearchCatCacheList(CatCache *cache,\n\t\t\t\t int nkeys,\n\t\t\t\t Datum v1,\n\t\t\t\t Datum v2,\n\t\t\t\t Datum v3)\n...\n\t/*\n\t * compute a hash value of the given keys for faster search. 
We don't\n\t * presently divide the CatCList items into buckets, but this still lets\n\t * us skip non-matching items quickly most of the time.\n\t */\n\tlHashValue = CatalogCacheComputeHashValue(cache, nkeys, v1, v2, v3, v4);\n\n\t/*\n\t * scan the items until we find a match or exhaust our list\n\t *\n\t * Note: it's okay to use dlist_foreach here, even though we modify the\n\t * dlist within the loop, because we don't continue the loop afterwards.\n\t */\n\tdlist_foreach(iter, &cache->cc_lists)\n\t{\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 May 2021 14:37:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: SearchCatCacheList()/SearchSysCacheList() is O(n)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's not an individual \"result\" list that's the issue. In my example\n> they're all exactly one element long. The problem is that CatCache->list\n> has one element for each cached SearchCatCacheList() result, and that\n> for every SearchCatCacheList() we linearly search through CatCache->list\n> to find a match.\n\nAh, now I understand. Yeah, replacing that list with a hash table might\nbe a good idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 17:49:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SearchCatCacheList()/SearchSysCacheList() is O(n)" } ]
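The O(n) behaviour diagnosed in this thread can be illustrated with a toy model: a bucketed lookup in the style of SearchCatCacheInternal() against a single-list scan in the style of SearchCatCacheList(), counting how many cached entries each one visits. The structures below are illustrative Python, not the actual catcache code.

```python
# Toy model of the two lookup strategies discussed above.
# SearchCatCacheInternal() hashes into buckets; SearchCatCacheList()
# scans one shared list, using the hash value only to skip non-matches.

NBUCKETS = 128

def bucketed_lookup(buckets, key):
    """SearchCatCacheInternal-style: probe only the key's bucket."""
    probes = 0
    for k, v in buckets[hash(key) % NBUCKETS]:
        probes += 1
        if k == key:
            return v, probes
    return None, probes

def list_lookup(entries, key):
    """SearchCatCacheList-style: walk the whole list of cached lists."""
    probes = 0
    h = hash(key)
    for k, kh, v in entries:
        probes += 1                     # every cached entry is visited
        if kh == h and k == key:
            return v, probes
    return None, probes

buckets = [[] for _ in range(NBUCKETS)]
entries = []
for i in range(100_000):                # 100k entries, as in the repro
    key = f"func_{i}"
    buckets[hash(key) % NBUCKETS].append((key, i))
    entries.append((key, hash(key), i))

_, p1 = bucketed_lookup(buckets, "func_99999")
_, p2 = list_lookup(entries, "func_99999")
print(p1, p2)   # p2 is 100000; p1 is roughly 100000/NBUCKETS
```

With one entry per cached list result, as in Andres's repro, the list scan still visits every cached result on each SearchCatCacheList() call, which is why replacing cc_lists with a hash table helps.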
[ { "msg_contents": "Hi,\n\nI think it will be useful to allow foreign tables to be VACUUMed if\nthe underlying FDW supports, currently VACUUM doesn't support foreign\ntables, see [1]. In case of postgres_fdw, if foreign tables are\nspecified in the local VACUUM command, a per-server remote VACUUM\ncommand can be prepared with the foreign tables that belong to the\nsame server and sent to the foreign server. This design is similar to\nTRUNCATE on foreign tables committed as part of 8ff1c946. Although,\nthis may not be much useful for FDWs that connect to remote non-MVCC\ndatabases where the concept of VACUUM may not apply, but for\npostgres_fdw and others it might help.\n\nI would like to hear opinions from the hackers. If it's not\npossible/feasible to do this, please let me know the reasons. Thanks.\n\n[1] postgres=# vacuum ft1; ---> ft1 is a foreign table\nWARNING: skipping \"ft1\" --- cannot vacuum non-tables or special system tables\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 09:44:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Support for VACUUMing Foreign Tables" }, { "msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> I think it will be useful to allow foreign tables to be VACUUMed if\r\n> the underlying FDW supports, currently VACUUM doesn't support foreign\r\n> tables, see [1].\r\n\r\nCould you let us imagine more concretely how useful it will be? While TRUNCATE can be part of an application's data processing as alternative to DELETE, I think VACUUM is purely the data storage maintenance that's performed by the DBA and can be done naturally locally on the server where the table resides. 
(The existing ANALYZE on FDW is an exception; it's useful to also have data statistics locally.)\r\n\r\n\r\n> this may not be much useful for FDWs that connect to remote non-MVCC\r\n> databases where the concept of VACUUM may not apply, but for\r\n> postgres_fdw and others it might help.\r\n\r\nCan you show some examples of \"others\"? I believe we should be careful not to make the FDW interface a swamp for functions that are only convenient for PostgreSQL.\r\n\r\nHow about adding a routine to the FDW interface that allows to execute an arbitrary command like the following? VACUUM will be able to use this.\r\n\r\n PGresult *DoCommandPathThrough(ForeignTable *table, const char *command);\r\n\r\nOr, maybe it's more flexible to use ForeignServer instead of ForeignTable.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 14 May 2021 01:05:02 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support for VACUUMing Foreign Tables" }, { "msg_contents": "On Fri, May 14, 2021 at 01:05:02AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> Could you let us imagine more concretely how useful it will be?\n> While TRUNCATE can be part of an application's data processing as\n> alternative to DELETE, I think VACUUM is purely the data storage\n> maintenance that's performed by the DBA and can be done naturally\n> locally on the server where the table resides. (The existing\n> ANALYZE on FDW is an exception; it's useful to also have data\n> statistics locally.)\n\nThe concept of vacuuming applies to PG because of its concepts behind\nMVCC. Thinking broader, in which aspect can that apply to FDWs in\ngeneral?\n\n> How about adding a routine to the FDW interface that allows to\n> execute an arbitrary command like the following? 
VACUUM will be\n> able to use this.\n> \n> PGresult *DoCommandPathThrough(ForeignTable *table, const char *command);\n> \n> Or, maybe it's more flexible to use ForeignServer instead of ForeignTable.\n\nBeing able to pass down to remote servers arbitrary command strings\nsounds like a recipy for security holes, IMO.\n--\nMichael", "msg_date": "Fri, 14 May 2021 14:37:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for VACUUMing Foreign Tables" }, { "msg_contents": "On Fri, May 14, 2021 at 6:35 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > I think it will be useful to allow foreign tables to be VACUUMed if\n> > the underlying FDW supports, currently VACUUM doesn't support foreign\n> > tables, see [1].\n>\n> Could you let us imagine more concretely how useful it will be? While TRUNCATE can be part of an application's data processing as alternative to DELETE, I think VACUUM is purely the data storage maintenance that's performed by the DBA and can be done naturally locally on the server where the table resides. (The existing ANALYZE on FDW is an exception; it's useful to also have data statistics locally.)\n\nI agree that TRUNCATE is a user-visible command so it is good to send\nsuch a command to a remote server. But, sending ANALYZE and VACUUM to\nFDW can have a similar use case. I mean based on the current data\nchanges/operation the DBA can decide whether it needs to generate the\nstatistic or whether it needs to do garbage collection. I think\nideally both these operations can be performed locally on the remote\nserver but if we are sending ANALYZE to the remote server then IMHO\nthere is some merit to sending VACUUM as well. 
Having said that, I\nthink the purpose of ANALYZE can be generic across the databases that\nwe want to update the statistic but VACUUM is different, it too much\ndepends upon how the data is stored (what MVCC mechanism they are\nusing) on the remote server and what is the vacuuming need for that\ndatabase. So maybe garbage collection should be controlled locally by\nthe DBA on that server.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 11:48:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for VACUUMing Foreign Tables" }, { "msg_contents": "On Fri, May 14, 2021 at 6:35 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > I think it will be useful to allow foreign tables to be VACUUMed if\n> > the underlying FDW supports, currently VACUUM doesn't support foreign\n> > tables, see [1].\n>\n> Could you let us imagine more concretely how useful it will be? While TRUNCATE can be part of an application's data processing as alternative to DELETE, I think VACUUM is purely the data storage maintenance that's performed by the DBA and can be done naturally locally on the server where the table resides. (The existing ANALYZE on FDW is an exception; it's useful to also have data statistics locally.)\n\nThis can be useful in situations like where there are many remote\npostgres servers that are connected to a single coordinator on which\nforeign tables are defined for each of the remote tables. In this\ncase, the DBA (or whoever is responsible to do that job) doesn't have\nto figure out which remote server should be logged onto to perform the\nVACUUM. 
They can issue VACUUM command on the foreign table from the\ncoordinator server.\n\n> > this may not be much useful for FDWs that connect to remote non-MVCC\n> > databases where the concept of VACUUM may not apply, but for\n> > postgres_fdw and others it might help.\n>\n> Can you show some examples of \"others\"? I believe we should be careful not to make the FDW interface a swamp for functions that are only convenient for PostgreSQL.\n\nThere are other databases that have MVCC implemented for which the\nbloat clean up might be necessary at some point. They may not have the\nsame terminology that postgres has for cleaning up the bloat. For\ninstance, MySQL (instead of VACUUM they have OPTIMIZE TABLE command)\nwhich can be connected to postgres_fdw using supported fdw\n\nAnd see [1] for the databases that have MVCC support. I'm not sure if\nall of them have a FDW to connect to postgres.\n\n[1] https://dbdb.io/browse?concurrency-control=multi-version-concurrency-control-mvcc\n\n> How about adding a routine to the FDW interface that allows to execute an arbitrary command like the following? VACUUM will be able to use this.\n>\n> PGresult *DoCommandPathThrough(ForeignTable *table, const char *command);\n>\n> Or, maybe it's more flexible to use ForeignServer instead of ForeignTable.\n\nI agree with Michael Paquier's response to this point that it can be\nan issue from a security standpoint. But we could have had such kind\nof a generic API for commands like TRUNCATE, ANALYZE, VACUUM etc. void\nExecRemoteCommand(ForeignServer *server, const char *command, void\n*input_params, void *output_params); with the API knowing all the\nsupported commands and erroring out on unsupported commands. 
Now, we\nhave a separate API for each of the supported commands which looks\nactually cleaner.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 11:56:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support for VACUUMing Foreign Tables" }, { "msg_contents": "On Fri, May 14, 2021 at 11:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 6:35 AM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > > I think it will be useful to allow foreign tables to be VACUUMed if\n> > > the underlying FDW supports, currently VACUUM doesn't support foreign\n> > > tables, see [1].\n> >\n> > Could you let us imagine more concretely how useful it will be? While TRUNCATE can be part of an application's data processing as alternative to DELETE, I think VACUUM is purely the data storage maintenance that's performed by the DBA and can be done naturally locally on the server where the table resides. (The existing ANALYZE on FDW is an exception; it's useful to also have data statistics locally.)\n>\n> I agree that TRUNCATE is a user-visible command so it is good to send\n> such a command to a remote server. But, sending ANALYZE and VACUUM to\n> FDW can have a similar use case. I mean based on the current data\n> changes/operation the DBA can decide whether it needs to generate the\n> statistic or whether it needs to do garbage collection. I think\n> ideally both these operations can be performed locally on the remote\n> server but if we are sending ANALYZE to the remote server then IMHO\n> there is some merit to sending VACUUM as well. 
Having said that, I\n> think the purpose of ANALYZE can be generic across the databases that\n> we want to update the statistic but VACUUM is different, it too much\n> depends upon how the data is stored (what MVCC mechanism they are\n> using) on the remote server and what is the vacuuming need for that\n> database. So maybe garbage collection should be controlled locally by\n> the DBA on that server.\n\nAgree. Different MVCC databases can have different commands to clean\nup the bloat, their implementation of the vacuum's FdwRoutine can be\nimplemented accordingly. postgres_fdw can prepare the \"VACUUM rel;\"\ncommand. Having said that, I don't think all the remote databases will\nhave the same ANALYZE rel; or TRUNCATE rel; commands either. It's\nbetter left to the implementation of the FdwRoutine for a particular\nremote database.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 12:01:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support for VACUUMing Foreign Tables" }, { "msg_contents": "On Thu, 2021-05-13 at 09:44 +0530, Bharath Rupireddy wrote:\n> I think it will be useful to allow foreign tables to be VACUUMed if\n> the underlying FDW supports, currently VACUUM doesn't support foreign\n> tables, see [1]. In case of postgres_fdw, if foreign tables are\n> specified in the local VACUUM command, a per-server remote VACUUM\n> command can be prepared with the foreign tables that belong to the\n> same server and sent to the foreign server. This design is similar to\n> TRUNCATE on foreign tables committed as part of 8ff1c946. Although,\n> this may not be much useful for FDWs that connect to remote non-MVCC\n> databases where the concept of VACUUM may not apply, but for\n> postgres_fdw and others it might help.\n> \n> I would like to hear opinions from the hackers. 
If it's not\n> possible/feasible to do this, please let me know the reasons. Thanks.\n\nI see no value in this.\n\nFirst, it wouldn't make sense for anything except postgres_fdw, so\nI think it should not be part of the FDW API. If anything, it should\nmean that knowledge about postgres_fdw gets hardwired into VACUUM.\n\nBut I don't think that is a smart idea either. Each database cluster\nis in charge of vacuuming its own tables, so the better approach would\nbe to tune autovacuum on the remote side so that it does the right thing.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 14 May 2021 13:11:29 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Support for VACUUMing Foreign Tables" }, { "msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> This can be useful in situations like where there are many remote\r\n> postgres servers that are connected to a single coordinator on which\r\n> foreign tables are defined for each of the remote tables. In this\r\n> case, the DBA (or whoever is responsible to do that job) doesn't have\r\n> to figure out which remote server should be logged onto to perform the\r\n> VACUUM. They can issue VACUUM command on the foreign table from the\r\n> coordinator server.\r\n\r\nI thought the FDW mechanism was initially, and probably still is, designed to access data on other data sources that are operated independently, mostly for data integration. Are you thinking of shared-nothing clustering that consist of tightly coupled servers, because you're referring to a coordinator server? (Is EDB (re-)starting the sharding scale-out?)\r\n\r\n\r\n> There are other databases that have MVCC implemented for which the\r\n> bloat clean up might be necessary at some point. They may not have the\r\n> same terminology that postgres has for cleaning up the bloat. 
For\r\n> instance, MySQL (instead of VACUUM they have OPTIMIZE TABLE command)\r\n> which can be connected to postgres_fdw using supported fdw\r\n\r\nMySQL's OPTIMIZE TABLE looks like VACUUM FULL, not plain VACUUM, although I'm not completely sure.\r\n\r\nHow would the various options map to the FDW interface, such as FREEZE, VERBOSE, vacuum_truncate, index_cleanup? Also, how would the following GUC settings propagate to the foreign server?\r\n\r\nSET vacuum_freeze_table_age = 0;\r\nSET vacuum_freeze_min_age = 0;\r\nVACUUM mytable;\r\n\r\nI think people who want to run manual VACUUM will want to control VACUUM behavior. But I'm afraid VACUUM is too specific to Postgres to be not very good to be incorporated into the FDW interface.\r\n\r\n\r\nWhat's our stance toward the FDW interface? I've thought 1 so far.\r\n\r\n1) We carefully choose FDW routines so that many other data sources can provide implementations for. We want to allow access to various data sources through the frame of Postgres.\r\n\r\n2) We don't care other data sources. We are happy if multiple Postgres instances can interoperate with each other. Other data source providers can choose to implement suitable routines when they can fit their implementations into the Postgres world.\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 17 May 2021 01:17:25 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support for VACUUMing Foreign Tables" } ]
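The per-server batching that the proposal in the thread above borrows from the foreign-table TRUNCATE work (commit 8ff1c946) is: collect the foreign tables named in the local command, group them by foreign server, and send one remote command per server. Below is a standalone sketch of just that grouping logic, written in Python rather than the actual postgres_fdw C code; the server and table names are made up for the example.

```python
from collections import defaultdict

def build_remote_vacuum_commands(foreign_tables):
    """Group (server, table) pairs by server and build one remote
    VACUUM statement per server, the way postgres_fdw batches the
    tables of a remote TRUNCATE."""
    by_server = defaultdict(list)
    for server, table in foreign_tables:
        by_server[server].append(table)
    # VACUUM accepts a comma-separated table list, so a single
    # statement per server covers all of that server's tables.
    return {server: "VACUUM " + ", ".join(tables)
            for server, tables in sorted(by_server.items())}

commands = build_remote_vacuum_commands([
    ("server_a", "public.t1"),
    ("server_b", "public.t2"),
    ("server_a", "public.t3"),
])
```

Each generated statement would then be shipped over that server's existing connection, analogous to what ExecForeignTruncate does for TRUNCATE.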
[ { "msg_contents": "I noticed $SUBJECT in curculio.\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2021-05-11%2018%3A30%3A23\n\nThe failure test report:\nt/020_messages.pl (Wstat: 7424 Tests: 1 Failed: 0)\n Non-zero exit status: 29\n Parse errors: Bad plan. You planned 5 tests but ran 1.\nFiles=21, Tests=149, 79 wallclock secs ( 0.03 usr 0.03 sys + 22.36\ncusr 9.47 csys = 31.89 CPU)\nResult: FAIL\nMakefile:21: recipe for target 'check' failed\ngmake: *** [check] Error 1\n\nAs far as I can understand the failure happens due to error:\n2021-05-11 20:41:38.356 CEST [31177:4] 020_messages.pl ERROR:\nreplication slot \"tap_sub\" is active for PID 7354\n2021-05-11 20:41:38.356 CEST [31177:5] 020_messages.pl STATEMENT:\nSELECT get_byte(data, 1), encode(substr(data, 11, 8), 'escape')\nFROM pg_logical_slot_peek_binary_changes('tap_sub', NULL, NULL,\n'proto_version', '1',\n'publication_names', 'tap_pub',\n'messages', 'true')\nOFFSET 1 LIMIT 1\n\nAs per my initial analysis, this happens because in the test after\ncreating a subscription, we are not waiting for the subscriber to\ncatch up. I think there is a narrow window where it is possible that\nafter creating a subscription, the apply worker starts replication\neven after we have done Alter Subscription .. DISABLE. This is also\nvisible in buildfarm logs:\n\n2021-05-11 20:41:38.345 CEST [4046:9] 020_messages.pl STATEMENT:\nSELECT get_byte(data, 0)\nFROM pg_logical_slot_peek_binary_changes('tap_sub', NULL, NULL,\n'proto_version', '1',\n'publication_names', 'tap_pub',\n'messages', 'true')\n...\n\n2021-05-11 20:41:38.354 CEST [7354:6] tap_sub LOG: received\nreplication command: START_REPLICATION SLOT \"tap_sub\" LOGICAL 0/0\n(proto_version '2', publication_names '\"tap_pub\"')\n\nNotice that we have checked the messages before receiving the start\nreplication command from the subscriber. 
Now, if we would have waited\nfor the subscription to catch up then our check in pg_stat_replication\nwouldn't have given a false positive. I have noticed that we do have\nsuch wait in other subscription tests but somehow missed it here.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 May 2021 15:12:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "subscriptioncheck failure" }, { "msg_contents": "On Thu, May 13, 2021 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I noticed $SUBJECT in curculio.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2021-05-11%2018%3A30%3A23\n>\n> The failure test report:\n> t/020_messages.pl (Wstat: 7424 Tests: 1 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: Bad plan. You planned 5 tests but ran 1.\n> Files=21, Tests=149, 79 wallclock secs ( 0.03 usr 0.03 sys + 22.36\n> cusr 9.47 csys = 31.89 CPU)\n> Result: FAIL\n> Makefile:21: recipe for target 'check' failed\n> gmake: *** [check] Error 1\n>\n> As far as I can understand the failure happens due to error:\n> 2021-05-11 20:41:38.356 CEST [31177:4] 020_messages.pl ERROR:\n> replication slot \"tap_sub\" is active for PID 7354\n> 2021-05-11 20:41:38.356 CEST [31177:5] 020_messages.pl STATEMENT:\n> SELECT get_byte(data, 1), encode(substr(data, 11, 8), 'escape')\n> FROM pg_logical_slot_peek_binary_changes('tap_sub', NULL, NULL,\n> 'proto_version', '1',\n> 'publication_names', 'tap_pub',\n> 'messages', 'true')\n> OFFSET 1 LIMIT 1\n>\n> As per my initial analysis, this happens because in the test after\n> creating a subscription, we are not waiting for the subscriber to\n> catch up. I think there is a narrow window where it is possible that\n> after creating a subscription, the apply worker starts replication\n> even after we have done Alter Subscription .. DISABLE. 
This is also\n> visible in buildfarm logs:\n>\n> 2021-05-11 20:41:38.345 CEST [4046:9] 020_messages.pl STATEMENT:\n> SELECT get_byte(data, 0)\n> FROM pg_logical_slot_peek_binary_changes('tap_sub', NULL, NULL,\n> 'proto_version', '1',\n> 'publication_names', 'tap_pub',\n> 'messages', 'true')\n> ...\n>\n> 2021-05-11 20:41:38.354 CEST [7354:6] tap_sub LOG: received\n> replication command: START_REPLICATION SLOT \"tap_sub\" LOGICAL 0/0\n> (proto_version '2', publication_names '\"tap_pub\"')\n>\n> Notice that we have checked the messages before receiving the start\n> replication command from the subscriber. Now, if we would have waited\n> for the subscription to catch up then our check in pg_stat_replication\n> wouldn't have given a false positive. I have noticed that we do have\n> such wait in other subscription tests but somehow missed it here.\n\nI agree with your analysis, apart from this issue there is one other\nsmall problem. Currently after disabling subscription\npg_stat_replication is used to verify if the walsender is exited. The\nsteps of the walsender exit includes cleaning up of walsender and then\nreleasing a replication slot. There will be random test failure if\npg_logical_slot_peek_binary_changes is called in this narrow time\nwindow. 
I think it is better to fix it by checking the active column\nin pg_replication_slot instead of pg_stat_replication which is more\nreliable.\nAttached patch which has the fixes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 13 May 2021 16:14:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Thu, May 13, 2021 at 04:14:55PM +0530, vignesh C wrote:\n> +$node_publisher->wait_for_catchup('tap_sub');\n> +\n> # Ensure a transactional logical decoding message shows up on the slot\n> $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> \n> # wait for the replication connection to drop from the publisher\n> $node_publisher->poll_query_until('postgres',\n> -\t'SELECT COUNT(*) FROM pg_catalog.pg_stat_replication', 0);\n> +\t\"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE slot_name = 'tap_sub' AND active='f'\", 1);\n\nThere are three places in this test where a slot is disabled, followed\nby a wait to make sure that the slot is gone. 
Perhaps it would be\nbetter to wrap that in a small-ish routine?\n--\nMichael", "msg_date": "Thu, 13 May 2021 20:11:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Thu, May 13, 2021 at 4:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 13, 2021 at 04:14:55PM +0530, vignesh C wrote:\n> > +$node_publisher->wait_for_catchup('tap_sub');\n> > +\n> > # Ensure a transactional logical decoding message shows up on the slot\n> > $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> >\n> > # wait for the replication connection to drop from the publisher\n> > $node_publisher->poll_query_until('postgres',\n> > - 'SELECT COUNT(*) FROM pg_catalog.pg_stat_replication', 0);\n> > + \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE slot_name = 'tap_sub' AND active='f'\", 1);\n>\n> There are three places in this test where a slot is disabled, followed\n> by a wait to make sure that the slot is gone. 
Perhaps it would be\n> better to wrap that in a small-ish routine?\n\nYes that would be better, I will make the changes for this and post a patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 13 May 2021 17:41:43 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Thu, May 13, 2021 at 4:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 13, 2021 at 04:14:55PM +0530, vignesh C wrote:\n> > +$node_publisher->wait_for_catchup('tap_sub');\n> > +\n> > # Ensure a transactional logical decoding message shows up on the slot\n> > $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> >\n> > # wait for the replication connection to drop from the publisher\n> > $node_publisher->poll_query_until('postgres',\n> > - 'SELECT COUNT(*) FROM pg_catalog.pg_stat_replication', 0);\n> > + \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE slot_name = 'tap_sub' AND active='f'\", 1);\n>\n> There are three places in this test where a slot is disabled, followed\n> by a wait to make sure that the slot is gone. Perhaps it would be\n> better to wrap that in a small-ish routine?\n\nThanks for the comments, Please find the attached patch having the changes.\n\nRegards,\nVignesh", "msg_date": "Thu, 13 May 2021 19:05:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Thu, May 13, 2021 at 07:05:55PM +0530, vignesh C wrote:\n> Thanks for the comments, Please find the attached patch having the changes.\n\nCool, thanks for the new version. 
I have spent some time\nunderstanding the initial report from Amit as well as what you are\nproposing here, and refactoring the test so as the set of CREATE/ALTER\nSUBSCRIPTION commands are added within this routine is a good idea.\n\nI would have made the comment on top of setup_subscription a bit more\ntalkative regarding the fact that it may reuse an existing\nsubscription, but that's a nit. Let's wait for Amit and see what he\nthinks about what you are proposing.\n--\nMichael", "msg_date": "Fri, 14 May 2021 14:30:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Thu, May 13, 2021 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 4:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, May 13, 2021 at 04:14:55PM +0530, vignesh C wrote:\n> > > +$node_publisher->wait_for_catchup('tap_sub');\n> > > +\n> > > # Ensure a transactional logical decoding message shows up on the slot\n> > > $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> > >\n> > > # wait for the replication connection to drop from the publisher\n> > > $node_publisher->poll_query_until('postgres',\n> > > - 'SELECT COUNT(*) FROM pg_catalog.pg_stat_replication', 0);\n> > > + \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE slot_name = 'tap_sub' AND active='f'\", 1);\n> >\n> > There are three places in this test where a slot is disabled, followed\n> > by a wait to make sure that the slot is gone. 
Perhaps it would be\n> > better to wrap that in a small-ish routine?\n>\n> Thanks for the comments, Please find the attached patch having the changes.\n>\n\nFew comments:\n1.\n+ # Ensure a transactional logical decoding message shows up on the slot\n+ $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n\nAfter you have encapsulated this command in the function, the above\ncomment doesn't make sense because we do this for both transactional\nand non-transactional messages. 
I suggest we can\nchange it to\nsomething like: \"This is done to ensure a logical decoding message is\nshown up on the slot\".\n\n2.\n+# Setup the subscription before checking pg_logical_slot_peek_binary_changes\n+sub setup_subscription\n\nI think here the functionality is more for the catchup of\nsubscription, so it might be better to name the function as\nsubscription_catchup or catchup_subscription. I think you can expand\nthe comments atop this function a bit as suggested by Michael.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 May 2021 09:06:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Mon, May 17, 2021 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 4:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Few comments:\n> 1.\n> + # Ensure a transactional logical decoding message shows up on the slot\n> + $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n>\n> After you have encapsulated this command in the function, the above\n> comment doesn't make sense because we do this for both transactional\n> and non-transactional messages. I suggest we can change it to\n> something like: \"This is done to ensure a logical decoding message is\n> shown up on the slot\".\n>\n> 2.\n> +# Setup the subscription before checking pg_logical_slot_peek_binary_changes\n> +sub setup_subscription\n>\n> I think here the functionality is more for the catchup of\n> subscription, so it might be better to name the function as\n> subscription_catchup or catchup_subscription. I think you can expand\n> the comments atop this function a bit as suggested by Michael.\n>\n\nOne more point:\n+ $node_publisher->wait_for_catchup('tap_sub');\n+\n+ # Ensure a transactional logical decoding message shows up on the slot\n+ $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n+\n+ # wait for the replication connection to drop from the publisher\n+ $node_publisher->poll_query_until('postgres',\n+ \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE\nslot_name = 'tap_sub' AND active='f'\", 1);\n\nIn the above sequence, wait_for_catchup will query pg_stat_replication\nwhereas after disabling subscription we are checking\npg_replication_slots. I understand from your explanation why we can't\nrely on pg_stat_replication after DISABLE but it might be better to\ncheck that the slot is active before disabling it. 
I think currently,\nthe test assumes that, isn't it better to have an explicit check for\nthat?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 May 2021 10:40:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Mon, May 17, 2021 at 10:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, May 13, 2021 at 4:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Few comments:\n> > 1.\n> > + # Ensure a transactional logical decoding message shows up on the slot\n> > + $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> >\n> > After you have encapsulated this command in the function, the above\n> > comment doesn't make sense because we do this for both transactional\n> > and non-transactional messages. I suggest we can change it to\n> > something like: \"This is done to ensure a logical decoding message is\n> > shown up on the slot\".\n> >\n> > 2.\n> > +# Setup the subscription before checking pg_logical_slot_peek_binary_changes\n> > +sub setup_subscription\n> >\n> > I think here the functionality is more for the catchup of\n> > subscription, so it might be better to name the function as\n> > subscription_catchup or catchup_subscription. 
I think you can expand\n> > the comments atop this function a bit as suggested by Michael.\n> >\n>\n> One more point:\n> + $node_publisher->wait_for_catchup('tap_sub');\n> +\n> + # Ensure a transactional logical decoding message shows up on the slot\n> + $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> +\n> + # wait for the replication connection to drop from the publisher\n> + $node_publisher->poll_query_until('postgres',\n> + \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE\nslot_name = 'tap_sub' AND active='f'\", 1);\n>\n> In the above sequence, wait_for_catchup will query pg_stat_replication\n> whereas after disabling subscription we are checking\n> pg_replication_slots. I understand from your explanation why we can't\n> rely on pg_stat_replication after DISABLE but it might be better to\n> check that the slot is active before disabling it. I think currently,\n> the test assumes that, isn't it better to have an explicit check for\n> that?\n\nI felt this is not required, wait_for_catchup will poll_query_until\nthe state = 'streaming', even if START_REPLICATION takes time, state\nwill be in 'startup' state, this way poll_query_until will take care\nof handling this.\n\nOn further analysis I found that we need to do \"Alter subscription\ntap_sub ENABLE\" and \"ALTER subscription tap_sub DISABLE\" multiple\ntime, Instead we can change pg_logical_slot_peek_binary_changes to\npg_logical_slot_get_binary_changes at appropriate steps. This way the\ncommon function can be removed and the enable/disable multiple times\ncan be removed. If we are going ahead with this approach the above\ncomments provided are no more valid.\n\nI have made the changes in similar lines in the attached patch. 
If you\nare ok we can go ahead with the new approach which will simplify the\nchanges required.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 17 May 2021 17:48:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Mon, May 17, 2021 at 5:48 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 10:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > One more point:\n> > + $node_publisher->wait_for_catchup('tap_sub');\n> > +\n> > + # Ensure a transactional logical decoding message shows up on the slot\n> > + $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> > +\n> > + # wait for the replication connection to drop from the publisher\n> > + $node_publisher->poll_query_until('postgres',\n> > + \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE\n> > slot_name = 'tap_sub' AND active='f'\", 1);\n> >\n> > In the above sequence, wait_for_catchup will query pg_stat_replication\n> > whereas after disabling subscription we are checking\n> > pg_replication_slots. I understand from your explanation why we can't\n> > rely on pg_stat_replication after DISABLE but it might be better to\n> > check that the slot is active before disabling it. I think currently,\n> > the test assumes that, isn't it better to have an explicit check for\n> > that?\n>\n> I felt this is not required, wait_for_catchup will poll_query_until\n> the state = 'streaming', even if START_REPLICATION takes time, state\n> will be in 'startup' state, this way poll_query_until will take care\n> of handling this.\n>\n\nmakes sense, but let's add some comments to clarify the same.\n\n> On further analysis I found that we need to do \"Alter subscription\n> tap_sub ENABLE\" and \"ALTER subscription tap_sub DISABLE\" multiple\n> time, Instead we can change pg_logical_slot_peek_binary_changes to\n> pg_logical_slot_get_binary_changes at appropriate steps. 
This way the\n> common function can be removed and the enable/disable multiple times\n> can be removed.\n>\n\nI think that is a valid point. This was probably kept so that we can\npeek multiple times for the same message to test various things but\nthat can be achieved with the way you have changed the test.\n\nOne more thing, shouldn't we make auto_vacuum=off for this test by\nusing 'append_conf' before starting the publisher. That will avoid the\nrisk of empty transactions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 May 2021 09:09:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Tue, May 18, 2021 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 5:48 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, May 17, 2021 at 10:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > One more point:\n> > > + $node_publisher->wait_for_catchup('tap_sub');\n> > > +\n> > > + # Ensure a transactional logical decoding message shows up on the slot\n> > > + $node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION tap_sub DISABLE\");\n> > > +\n> > > + # wait for the replication connection to drop from the publisher\n> > > + $node_publisher->poll_query_until('postgres',\n> > > + \"SELECT COUNT(*) FROM pg_catalog.pg_replication_slots WHERE\n> > > slot_name = 'tap_sub' AND active='f'\", 1);\n> > >\n> > > In the above sequence, wait_for_catchup will query pg_stat_replication\n> > > whereas after disabling subscription we are checking\n> > > pg_replication_slots. I understand from your explanation why we can't\n> > > rely on pg_stat_replication after DISABLE but it might be better to\n> > > check that the slot is active before disabling it. 
I think currently,\n> > > the test assumes that, isn't it better to have an explicit check for\n> > > that?\n> >\n> > I felt this is not required, wait_for_catchup will poll_query_until\n> > the state = 'streaming', even if START_REPLICATION takes time, state\n> > will be in 'startup' state, this way poll_query_until will take care\n> > of handling this.\n> >\n>\n> makes sense, but let's add some comments to clarify the same.\n>\n\nModified.\n\n> > On further analysis I found that we need to do \"Alter subscription\n> > tap_sub ENABLE\" and \"ALTER subscription tap_sub DISABLE\" multiple\n> > time, Instead we can change pg_logical_slot_peek_binary_changes to\n> > pg_logical_slot_get_binary_changes at appropriate steps. This way the\n> > common function can be removed and the enable/disable multiple times\n> > can be removed.\n> >\n>\n> I think that is a valid point. This was probably kept so that we can\n> peek multiple times for the same message to test various things but\n> that can be achieved with the way you have changed the test.\n>\n> One more thing, shouldn't we make auto_vacuum=off for this test by\n> using 'append_conf' before starting the publisher. 
That will avoid the\n> risk of empty transactions.\n\nI felt that makes sense, added it.\n\nThanks for the comments, the attached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 18 May 2021 11:24:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Tue, May 18, 2021 at 11:25 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments, the attached patch has the changes for the same.\n>\n\nThanks, I have pushed your patch after making minor changes in the comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 May 2021 10:27:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: subscriptioncheck failure" }, { "msg_contents": "On Wed, May 19, 2021 at 10:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 11:25 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached patch has the changes for the same.\n> >\n>\n> Thanks, I have pushed your patch after making minor changes in the comments.\n\nThanks for pushing this patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 19 May 2021 12:24:25 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptioncheck failure" } ]
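The fix committed in the thread above relies on poll_query_until to wait for pg_replication_slots.active = 'f' instead of counting rows in pg_stat_replication. The retry loop behind that TAP helper is a generic poll-until-true pattern; here is a standalone Python sketch of it (the real helper is Perl in the TAP framework, and the timeout values below are arbitrary choices for the example).

```python
import time

def poll_until(condition, timeout=10.0, interval=0.1):
    """Re-run condition() until it returns True or the timeout
    expires, the way poll_query_until re-runs its SQL query."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate a slot that only shows up as inactive on the third poll,
# e.g. while the walsender is still cleaning up and releasing it.
state = {"polls": 0}

def slot_inactive():
    state["polls"] += 1
    return state["polls"] >= 3

became_inactive = poll_until(slot_inactive)
```

Polling a condition that eventually becomes stable (slot released) is what makes the test robust against the narrow window between walsender cleanup and slot release that the thread describes.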
[ { "msg_contents": "Hi,\n\nPer Coverity.\nCID 1453114 (#1 of 1): Explicit null dereferenced (FORWARD_NULL)\n53. var_deref_model: Passing null pointer child_expr to strcmp, which\ndereferences it.\n\nIt is agreed that asserts should be used for error conditions that can\nnever occur in the release.\nBut with errors that can occur, using assert does not make sense.\n\nBetter to make sure that strcmp can be called without risk.\nMeanwhile, fix the strcmp call signature (const char).\n\n#include <stdio.h>\n#include <string.h>\n\nint main()\n{\n const char * s1=\"\";\n const char * s2=\"0\";\n\n if (strstr(s1, s2) != 0) {\n printf(\"found\");\n } else {\n printf(\"not found\");\n }\n}\nnot found!\n\nregards,\nRanier Vilela", "msg_date": "Thu, 13 May 2021 11:04:12 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Explicit NULL dereference (src/backend/commands/tablecmds.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> It is agreed that asserts should be used for error conditions that can\n> never occur in the release.\n> But with errors that can occur, using assert does not make sense.\n\nOn what grounds do you claim that those asserts are wrong?\n\nCoverity's opinion counts for just about nothing these days.\nA test case causing a crash would count, of course.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 10:10:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explicit NULL dereference (src/backend/commands/tablecmds.c)" } ]
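The disagreement above, over whether an Assert is enough before dereferencing a pointer that Coverity flags as possibly NULL, hinges on the fact that assertions vanish from release builds, so they only protect conditions that truly cannot occur. Python's assert is stripped under -O much like C's Assert disappears in non-assert builds, so a reachable condition needs an explicit runtime check. A small illustrative analogue follows; the function name and error text are invented for the example and are not taken from tablecmds.c.

```python
def compare_partition_exprs(child_expr, parent_expr):
    """Check the precondition explicitly instead of only asserting it,
    so an optimized build reports an error rather than crashing."""
    if child_expr is None or parent_expr is None:
        # The C analogue would be an ereport()/elog() call, which
        # survives release builds, unlike Assert().
        raise ValueError("unexpected missing partition expression")
    return child_expr.strip() == parent_expr.strip()

matched = compare_partition_exprs(" (a + 1) ", "(a + 1)")
```

The trade-off is the one debated in the thread: an explicit check costs a branch in release builds, while an assertion documents a condition believed impossible and costs nothing outside assert-enabled builds.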
[ { "msg_contents": "Per sqlsmith.\n\npostgres=# select amvalidate(123);\nERROR: cache lookup failed for operator class 123\npostgres=# \\errverbose \nERROR: XX000: cache lookup failed for operator class 123\nLOCATION: amvalidate, amapi.c:125\n\nThe usual expectation is that sql callable functions should return null rather\nthan hitting elog().\n\n-- \nJustin", "msg_date": "Thu, 13 May 2021 12:01:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Per sqlsmith.\n> postgres=# select amvalidate(123);\n> ERROR: cache lookup failed for operator class 123\n> postgres=# \\errverbose \n> ERROR: XX000: cache lookup failed for operator class 123\n> LOCATION: amvalidate, amapi.c:125\n\n> The usual expectation is that sql callable functions should return null rather\n> than hitting elog().\n\nMeh. I'm not convinced that that position ought to apply to amvalidate.\nUnder what circumstances would you be calling that on an opclass that\nmight be about to be dropped?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 14:22:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "On Thu, May 13, 2021 at 02:22:16PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Per sqlsmith.\n> > postgres=# select amvalidate(123);\n> > ERROR: cache lookup failed for operator class 123\n> > postgres=# \\errverbose \n> > ERROR: XX000: cache lookup failed for operator class 123\n> > LOCATION: amvalidate, amapi.c:125\n> \n> > The usual expectation is that sql callable functions should return null rather\n> > than hitting elog().\n> \n> Meh. 
I'm not convinced that that position ought to apply to amvalidate.\n> Under what circumstances would you be calling that on an opclass that\n> might be about to be dropped?\n\nSure, no problem. I'm just passing on the message :)\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 13 May 2021 13:28:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "On Thu, May 13, 2021 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh. I'm not convinced that that position ought to apply to amvalidate.\n\nI am still of the opinion that we ought to apply it across the board,\nfor consistency. It makes it easier for humans to know which problems\nare known to be reachable and which are thought to be can't-happen and\nthus bugs. If we fix cases like this to return a real error code, then\nanything that comes up as XX000 is likely to be a real bug, whereas if\nwe don't, the things that we're not concerned about have to be\nfiltered out by some other method, probably involving a human being.\nIf the filter that human being has to apply further involves reading\nTom Lane's mind and knowing what he will think about a particular\nreport, or alternatively asking him, it just makes complicated\nsomething that we could have made simple.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 16:12:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, May 13, 2021 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Meh. 
I'm not convinced that that position ought to apply to amvalidate.\n\n> I am still of the opinion that we ought to apply it across the board,\n> for consistency.\n\nThe main reason I'm concerned about applying that rule to amvalidate\nis that then how do you know what's actually an error case?\n\nAs a hardly-irrelevant counterexample, we have a whole bunch of\nregression tests that do something like\n\nSELECT ...\nWHERE NOT amvalidate(oid);\n\nEvery one of those is silently and dangerously wrong if amvalidate\nmight sometimes return null.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 16:49:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "On Thu, May 13, 2021 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The main reason I'm concerned about applying that rule to amvalidate\n> is that then how do you know what's actually an error case?\n>\n> As a hardly-irrelevant counterexample, we have a whole bunch of\n> regression tests that do something like\n>\n> SELECT ...\n> WHERE NOT amvalidate(oid);\n>\n> Every one of those is silently and dangerously wrong if amvalidate\n> might sometimes return null.\n\nOh, I didn't notice previously that Justin's proposal was to make the\nfunctions return NULL. He's correct that this is consistent with other\ncases, and if we go that way, then these queries need to be updated. I\nhad just been imaging using ereport(ERROR, ...) which wouldn't have\nthat problem. 
I think either approach would be an improvement over the\nstatus quo.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 15 May 2021 12:00:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "On Sat, May 15, 2021 at 12:00:37PM -0400, Robert Haas wrote:\n> Oh, I didn't notice previously that Justin's proposal was to make the\n> functions return NULL. He's correct that this is consistent with other\n> cases, and if we go that way, then these queries need to be updated. I\n> had just been imaging using ereport(ERROR, ...) which wouldn't have\n> that problem. I think either approach would be an improvement over the\n> status quo.\n\nFWIW, I am not convinced with what we could gain by sending NULL as\nresult rather than bump on an ERROR with this function. Don't take me\nwrong, I like when system functions return a gentle NULL result if\nsomething does not exist, if the function is something we document and\nif it can be used with large catalog scans a-la-pg_class. I don't\nthink that there is any need to apply that to amvalidate() though, and\nit could mean potential issues with out-of-core modules, rum for one,\nno?\n--\nMichael", "msg_date": "Mon, 17 May 2021 13:48:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "On Thu, May 13, 2021 at 12:01:22PM -0500, Justin Pryzby wrote:\n> postgres=# select amvalidate(123);\n> ERROR: cache lookup failed for operator class 123\n> The usual expectation is that sql callable functions should return null rather\n> than hitting elog().\n\nOn Thu, May 13, 2021 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh. 
I'm not convinced that that position ought to apply to amvalidate.\n\nOn Thu, May 13, 2021 at 04:12:10PM -0400, Robert Haas wrote:\n> I am still of the opinion that we ought to apply it across the board,\n> for consistency. It makes it easier for humans to know which problems\n> are known to be reachable and which are thought to be can't-happen and\n> thus bugs. If we fix cases like this to return a real error code, then\n> anything that comes up as XX000 is likely to be a real bug, whereas if\n> we don't, the things that we're not concerned about have to be\n> filtered out by some other method, probably involving a human being.\n> If the filter that human being has to apply further involves reading\n> Tom Lane's mind and knowing what he will think about a particular\n> report, or alternatively asking him, it just makes complicated\n> something that we could have made simple.\n\nFWIW, here are some other cases from sqlsmith which hit elog()/XX000:\n\npostgres=# select unknownin('');\nERROR: failed to find conversion function from unknown to text\npostgres=# \\errverbose\nERROR: XX000: failed to find conversion function from unknown to text\nLOCATION: coerce_type, parse_coerce.c:542\n\npostgres=# SELECT pg_catalog.interval( '12 seconds'::interval ,3);\nERROR: unrecognized interval typmod: 3\n\npostgres=# SELECT pg_describe_object(1,0,1);\nERROR: invalid non-zero objectSubId for object class 1\n\npostgres=# SELECT pg_read_file( repeat('a',333));\nERROR: could not open file \"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\" for reading: File name too long\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Feb 2023 07:50:53 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", 
"msg_from_op": true, "msg_subject": "Re: amvalidate(): cache lookup failed for operator class 123" }, { "msg_contents": "On Mon, Feb 13, 2023 at 07:50:53AM -0600, Justin Pryzby wrote:\n> On Thu, May 13, 2021 at 12:01:22PM -0500, Justin Pryzby wrote:\n> > postgres=# select amvalidate(123);\n> > ERROR: cache lookup failed for operator class 123\n> > The usual expectation is that sql callable functions should return null rather\n> > than hitting elog().\n> \n> On Thu, May 13, 2021 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Meh. I'm not convinced that that position ought to apply to amvalidate.\n> \n> On Thu, May 13, 2021 at 04:12:10PM -0400, Robert Haas wrote:\n> > I am still of the opinion that we ought to apply it across the board,\n> > for consistency. It makes it easier for humans to know which problems\n> > are known to be reachable and which are thought to be can't-happen and\n> > thus bugs. If we fix cases like this to return a real error code, then\n> > anything that comes up as XX000 is likely to be a real bug, whereas if\n> > we don't, the things that we're not concerned about have to be\n> > filtered out by some other method, probably involving a human being.\n> > If the filter that human being has to apply further involves reading\n> > Tom Lane's mind and knowing what he will think about a particular\n> > report, or alternatively asking him, it just makes complicated\n> > something that we could have made simple.\n> \n> FWIW, here are some other cases from sqlsmith which hit elog()/XX000:\n> \n> postgres=# select unknownin('');\n> ERROR: failed to find conversion function from unknown to text\n> postgres=# \\errverbose\n> ERROR: XX000: failed to find conversion function from unknown to text\n> LOCATION: coerce_type, parse_coerce.c:542\n> \n> postgres=# SELECT pg_catalog.interval( '12 seconds'::interval ,3);\n> ERROR: unrecognized interval typmod: 3\n> \n> postgres=# SELECT pg_describe_object(1,0,1);\n> ERROR: invalid non-zero objectSubId for 
object class 1\n> \n> postgres=# SELECT pg_read_file( repeat('a',333));\n> ERROR: could not open file \"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\" for reading: File name too long\n\npostgres=# SELECT acldefault('a',0);\nERROR: unrecognized object type abbreviation: a\n\npostgres=# SELECT setweight('a','1');\nERROR: unrecognized weight: 49\n\nregression=# select float8_regr_intercept(ARRAY[1]) ;\nERROR: float8_regr_intercept: expected 6-element float8 array\n\nNone of these is new in v16.\n\n\n", "msg_date": "Wed, 12 Apr 2023 16:40:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "more elogs hit by sqlsmith (Re: amvalidate(): cache lookup failed\n for operator class 123)" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nI've attached a small patch that allows specifying only direct members\r\nof a group in pg_hba.conf. The \"+\" prefix offered today matches both\r\ndirect and indirect role members, which may complicate some role\r\nsetups. For example, if you have one set of roles that are members of\r\nthe \"pam\" role and another set that are members of the \"scram-sha-256\"\r\nrole, granting membership in a PAM role to a SCRAM role might\r\ninadvertently modify the desired authentication method for the\r\ngrantee. If only direct membership is considered, no such inadvertent\r\nauthentication method change would occur.\r\n\r\nI chose \"&\" as a new group name prefix for this purpose. This choice\r\nseemed as good as any, but I'm open to changing it if anyone has\r\nsuggestions. For determining direct role membership, I added a new\r\nfunction in acl.c that matches other related functions. I added a new\r\nrole cache type since it seemed to fit in reasonably well, but it seems\r\nunlikely that there is any real performance benefit versus simply\r\nopen-coding the syscache lookup.\r\n\r\nI didn't see any existing authentication tests for groups at first\r\nglance. If folks are interested in this functionality, I can work on\r\nadding some tests for this stuff.\r\n\r\nNathan", "msg_date": "Thu, 13 May 2021 23:38:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "\nOn 5/13/21 7:38 PM, Bossart, Nathan wrote:\n> Hi hackers,\n>\n> I've attached a small patch that allows specifying only direct members\n> of a group in pg_hba.conf. The \"+\" prefix offered today matches both\n> direct and indirect role members, which may complicate some role\n> setups. 
For example, if you have one set of roles that are members of\n> the \"pam\" role and another set that are members of the \"scram-sha-256\"\n> role, granting membership in a PAM role to a SCRAM role might\n> inadvertently modify the desired authentication method for the\n> grantee. If only direct membership is considered, no such inadvertent\n> authentication method change would occur.\n>\n> I chose \"&\" as a new group name prefix for this purpose. This choice\n> seemed as good as any, but I'm open to changing it if anyone has\n> suggestions. For determining direct role membership, I added a new\n> function in acl.c that matches other related functions. I added a new\n> role cache type since it seemed to fit in reasonably well, but it seems\n> unlikely that there is any real performance benefit versus simply\n> open-coding the syscache lookup.\n>\n> I didn't see any existing authentication tests for groups at first\n> glance. If folks are interested in this functionality, I can work on\n> adding some tests for this stuff.\n>\n\nDo we really want to be creating two classes of role membership?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 14 May 2021 08:11:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/13/21 7:38 PM, Bossart, Nathan wrote:\n>> I've attached a small patch that allows specifying only direct members\n>> of a group in pg_hba.conf.\n\n> Do we really want to be creating two classes of role membership?\n\nYeah, this seems to be going against the clear meaning of the\nSQL spec. 
I realize you can argue that pg_hba.conf doesn't have\nto follow the spec, but it doesn't seem like a terribly good idea\nto interpret role membership differently in different places.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 11:23:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On 05/13/21 19:38, Bossart, Nathan wrote:\n> I chose \"&\" as a new group name prefix for this purpose. This choice\n\nIf pg_hba syntax changes are being entertained, I would love to be able\nto set ssl_min_protocol_version locally in a hostssl rule.\n\nSome clients at $work are stuck with ancient SSL libraries, but I would\nmuch rather be able to weaken ssl_min_protocol_version just for them\nthan do it globally.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 14 May 2021 11:33:34 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> If pg_hba syntax changes are being entertained, I would love to be able\n> to set ssl_min_protocol_version locally in a hostssl rule.\n> \n> Some clients at $work are stuck with ancient SSL libraries, but I would\n> much rather be able to weaken ssl_min_protocol_version just for them\n> than do it globally.\n\nThis (unlike what was actually proposed) does seem like it'd be a useful\nimprovement. 
Not sure exactly how it would work but I'm generally on\nboard with the idea.\n\nThanks,\n\nStephen", "msg_date": "Fri, 14 May 2021 14:57:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 5/13/21 7:38 PM, Bossart, Nathan wrote:\n> >> I've attached a small patch that allows specifying only direct members\n> >> of a group in pg_hba.conf.\n> \n> > Do we really want to be creating two classes of role membership?\n> \n> Yeah, this seems to be going against the clear meaning of the\n> SQL spec. I realize you can argue that pg_hba.conf doesn't have\n> to follow the spec, but it doesn't seem like a terribly good idea\n> to interpret role membership differently in different places.\n\nAgreed.\n\nThe lack of any particular justification for wanting this isn't a useful\nway to propose a patch either.\n\nThanks,\n\nStephen", "msg_date": "Fri, 14 May 2021 15:00:01 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Chapman Flack (chap@anastigmatix.net) wrote:\n>> If pg_hba syntax changes are being entertained, I would love to be able\n>> to set ssl_min_protocol_version locally in a hostssl rule.\n>> Some clients at $work are stuck with ancient SSL libraries, but I would\n>> much rather be able to weaken ssl_min_protocol_version just for them\n>> than do it globally.\n\n> This (unlike what was actually proposed) does seem like it'd be a useful\n> improvement. Not sure exactly how it would work but I'm generally on\n> board with the idea.\n\nSeems like putting GUCs directly into pg_hba would be a mess. 
Would\nit be enough to tell people to use ALTER ROLE/DATABASE SET for this,\nand then fix things so that we recheck the protocol version (and\npossibly bail out) after absorbing those settings?\n\nI can think of objections to this:\n\n* If you actually want to tie the restriction to source IP addresses,\nrather than users or databases, this doesn't get the job done.\n\n* The authentication cycle would be completed (or at least mostly\nso) before we bail out; so if the concern is about packet-sniffing\nor MITM attacks, maybe this would expose too much.\n\nBut it does have the advantage of being something it seems like\nwe could get done easily.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 15:35:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On Fri, May 14, 2021 at 8:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Chapman Flack (chap@anastigmatix.net) wrote:\n> > If pg_hba syntax changes are being entertained, I would love to be able\n> > to set ssl_min_protocol_version locally in a hostssl rule.\n> >\n> > Some clients at $work are stuck with ancient SSL libraries, but I would\n> > much rather be able to weaken ssl_min_protocol_version just for them\n> > than do it globally.\n>\n> This (unlike what was actually proposed) does seem like it'd be a useful\n> improvement. Not sure exaclty how it would work but I'm generally on\n> board with the idea.\n\nI agree, but I have no idea how you could do that within the current\npg_hba.conf.\n\nThe row is selected by the combination of username/database/ipaddress.\nBut you have to pick the minimum TLS version before the client has\nsent that... 
Basically we have to make the choice long before we've\neven started looking at pg_hba.\n\nIt would be good to have a way to do it, but I'm not sure pg_hba.conf\nis the place for it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 17 May 2021 22:15:59 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On 05/17/21 16:15, Magnus Hagander wrote:\n> The row is selected by the combination of username/database/ipaddress.\n> But you have to pick the minimum TLS version before the client has\n> sent that... Basically we have to make the choice long before we've\n> even started looking at pg_hba.\n\nUse the peer IP address to pre-filter the available pg_hba entries to\nthose pertaining to that address ... choose a min protocol version that's\nthe min specified among those ... then get the username and database name\n(by which point a protocol has been negotiated), then further filter the\nlist down to those pertaining to that user and database and allowing that\nprotocol version?\n\nYes, clunky, but avoids a more ambitious redesign of pg_hba.\n\nI'm not sure a more ambitious redesign would be a bad thing in principle;\nthe pg_hba.conf syntax seems rather clunky and limiting to begin with,\nand I keep wondering why it isn't in shared tables or something. 
But\nI suppose a lot of external admin tools have some knowledge of it?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 May 2021 16:31:15 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On Mon, May 17, 2021 at 10:31 PM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 05/17/21 16:15, Magnus Hagander wrote:\n> > The row is selected by the combination of username/database/ipaddress.\n> > But you have to pick the minimum TLS version before the client has\n> > sent that... Basically we have to make the choice long before we've\n> > even started looking at pg_hba.\n>\n> Use the peer IP address to pre-filter the available pg_hba entries to\n> those pertaining to that address ... choose a min protocol version that's\n> the min specified among those ... then get the username and database name\n> (by which point a protocol has been negotiated), then further filter the\n> list down to those pertaining to that user and database and allowing that\n> protocol version?\n>\n> Yes, clunky, but avoids a more ambitious redesign of pg_hba.\n\nSo you're saying that some entries in the parameter section would\ndepend on the db/user/ip combo and some would depend just on the ip?\n\nThat seems like an absolutely terrible idea to me, especially since\nthis is about security configuration. Way too easy to get wrong by\npeople who don't know how the internals work. People will *definitely*\nset those parameters thinking that they can do it based on the db and\nuser as well.\n\n\n> I'm not sure a more ambitious redesign would be a bad thing in principle;\n> the pg_hba.conf syntax seems rather clunky and limiting to begin with,\n> and I keep wondering why it isn't in shared tables or something. 
But\n> I suppose a lot of external admin tools have some knowledge of it?\n\nI think we'd either need a redesign of that, or a completely different\nway of configuring pre-authentication settings.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 17 May 2021 22:35:06 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On 05/17/21 16:35, Magnus Hagander wrote:\n> So you're saying that some entries int he parameter section would\n> depend on the db/user/ip combo and some would depend just on the ip?\n\nI don't *think* that's what I was saying. What I was thinking was this:\n\nThe pg_hba.conf file is an ordered list of entries. Each entry can specify\na (broad or narrow) set of IPs it applies to, a (broad or narrow) set of\ndatabases it applies to, and a (broad or narrow) set of users it applies to.\n\nAlso, in this hypothetical, it can specify a min protocol version.\n\nRight now, we're doing something like this:\n\n1. accept an incoming connection, learning the client IP\n2. SSLRequest message leads to negotiating TLS\n3. StartupMessage supplies the desired database and user name\n4. pg_hba entries are consulted once and filtered down to the first one\napplicable to the client IP, database, and username (and SSLness)\n5. that entry is used for authentication\n\n\nI suggested only:\n\nInsert step 1½, filter the pg_hba entries down to only those that could\npossibly accept a connection from this IP address. 
This is an improper\nsubset of the whole list, and an improper superset of the singleton to be\ngenerated later in step 4.\n\nStep 2 still negotiates TLS, but can fail early if the protocol would\nbe older than the oldest allowed in the pre-filtered list.\n\nStep 4 takes that pre-filtered list and completes the restriction down to\nfirst entry matching the IP, database, and username. This should be the\nsame singleton it would generate now. But it can fail-fast if that entry\nwould require a higher protocol version than what's been negotiated,\nbefore sending the corresponding authentication request message, so no\nauthentication data will be exchanged over a less-secure channel than\nintended. However, the user, database name, and options in the Startup\nmessage might have been exposed over a lower TLS version than intended.\nMaybe that's not the end of the world?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 May 2021 17:18:07 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 05/17/21 16:35, Magnus Hagander wrote:\n>> So you're saying that some entries int he parameter section would\n>> depend on the db/user/ip combo and some would depend just on the ip?\n\n> I don't *think* that's what I was saying. 
What I was thinking was this:\n> ...\n\nThis seems pretty horrid to me, not only from a complexity standpoint,\nbut because it would break the principle that pg_hba.conf entries are\napplied in order.\n\nOn the whole, I'm afraid that this idea is going to create a lot\nmore problems than it solves.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 May 2021 17:55:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On 05/17/21 17:55, Tom Lane wrote:\n> This seems pretty horrid to me, not only from a complexity standpoint,\n> but because it would break the principle that pg_hba.conf entries are\n> applied in order.\n\nThis makes twice in a row that I've failed to see how.\n\nIf you go through the entries, in order, and simply prune from the list\nthe ones you can already prove would never apply to this connection, how\ndoes that break the ordering principle?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 May 2021 21:19:00 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On 05/17/21 21:19, Chapman Flack wrote:\n> This makes twice in a row that I've failed to see how.\n> \n> If you go through the entries, in order, and simply prune from the list\n> the ones you can already prove would never apply to this connection, how\n> does that break the ordering principle?\n\n\nOk, I see how what I proposed looks out-of-order just in that it lets the\ninitial TLS negotiation be influenced by the minimum version over all\npotentially-applicable entries.\n\nBut that's just an optimization anyway. 
The same ultimate effect would be\nachieved by unconditionally allowing anything back to TLSv1 to be negotiated\nat SSLRequest time, and then (processing the entries in order as always)\nrejecting the connection if the first one that could apply expects a higher\nprotocol version.\n\nThe pre-scan and use of the minimum version encountered has only the effect\nof fast-failing a TLS negotiation for a version that won't possibly succeed.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 May 2021 21:31:23 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On Mon, May 17, 2021 at 11:18 PM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 05/17/21 16:35, Magnus Hagander wrote:\n> > So you're saying that some entries int he parameter section would\n> > depend on the db/user/ip combo and some would depend just on the ip?\n>\n> I don't *think* that's what I was saying. What I was thinking was this:\n>\n> The pg_hba.conf file is an ordered list of entries. Each entry can specify\n> a (broad or narrow) set of IPs it applies to, a (broad or narrow) set of\n> databases it applies to, and a (broad or narrow) set of users it applies to.\n>\n> Also, in this hypothetical, it can specify a min protocol version.\n>\n> Right now, we're doing something like this:\n>\n> 1. accept an incoming connection, learning the client IP\n> 2. SSLRequest message leads to negotiating TLS\n> 3. StartupMessage supplies the desired database and user name\n> 4. pg_hba entries are consulted once and filtered down to the first one\n> applicable to the client IP, database, and username (and SSLness)\n> 5. that entry is used for authentication\n>\n>\n> I suggested only:\n>\n> Insert step 1½, filter the pg_hba entries down to only those that could\n> possibly accept a connection from this IP address. 
This is an improper\n> subset of the whole list, and an improper superset of the singleton to be\n> generated later in step 4.\n>\n> Step 2 still negotiates TLS, but can fail early if the protocol would\n> be older than the oldest allowed in the pre-filtered list.\n\nNope, this is *exactly* what I'm referring to as being a bad idea.\n\nStep 1 1/2 in this *ignores* the fact that you may have specified a\nrestriction on username and database name in pg_hba.conf, because it\nhasn't seen them yet. Thus, a parameter such as min_tls_version would\nnot respect the username/databasename field, whereas other parameters\nwould. That is a massive risk of misconfiguration.\n\nI mean, if you have\nhostssl somedatabase someuser 10.0.0.0/24 gss\nhostssl somedatabase superuser 10.0.0.0/24 gss tls_min_version=1.3\n\nOne would reasonably expect that \"someuser\" can connect with whatever\nthe default version is for tls_min_version, whereas \"superuser\" would\nrequire a minimum of 1.3. But that's *not* what would happen --\nsuperuser would also be allowed to connect with a lower version if\nthat's allowed in the global set.\n\n\n> Step 4 takes that pre-filtered list and completes the restriction down to\n> first entry matching the IP, database, and username. This should be the\n> same singleton it would generate now. But it can fail-fast if that entry\n> would require a higher protocol version than what's been negotiated,\n> before sending the corresponding authentication request message, so no\n> authentication data will be exchanged over a less-secure channel than\n> intended. However, the user, database name, and options in the Startup\n> message might have been exposed over a lower TLS version than intended.\n> Maybe that's not the end of the world?\n\nThat is exactly the problem. And while that may hold true of current\nauth methods, it may not hold true of all. 
And it could still trigger\nthings like an ident callback if that is allowed etc.\n\nSo I stand by thinking this is the wrong place to solve the problem. I\nagree it would be good to be able to do it, but I don't agree on\noverloading it on pg_hba.conf, which is complicated enough already.\nAnd for security config, simplicity is pretty much always better.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 18 May 2021 10:54:37 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "On 05/18/21 04:54, Magnus Hagander wrote:\n\n> I mean, if you have\n> hostssl somedatabase someuser 10.0.0.0/24 gss\n> hostssl somedatabase supseruser 10.0.0.0/24 gss tls_min_version=1.3\n> \n> One would reasonably expect that \"someuser\" can connect with whatever\n> the default version i for tls_min_versino, whereas \"superuser\" would\n> require a minimum of 1.3. But that's *not* what would happen --\n> superuser would also be allowed to connect with a lower version if\n> that's allowed in the global set.\n\nNegatory. \"superuser\" would be allowed to send a StartupMessage\ncontaining the strings \"somedatabase\" and \"superuser\" (and possibly\nsome settings of options) over a lower version if that's allowed\nin the global set ... 
and would then have the connection rejected\nbecause the negotiated protocol was lower than 1.3, without seeing\nany authentication message or having a chance to send any sensitive\nauthentication credentials.\n\nSo the risk of any information exposure over a too-low TLS version\nis limited to the name of a database, the name of a user, and possibly\nthe settings of some options, and no sensitive authentication data.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 18 May 2021 08:05:32 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" }, { "msg_contents": "\nOn 5/18/21 8:05 AM, Chapman Flack wrote:\n> On 05/18/21 04:54, Magnus Hagander wrote:\n>\n>> I mean, if you have\n>> hostssl somedatabase someuser 10.0.0.0/24 gss\n>> hostssl somedatabase supseruser 10.0.0.0/24 gss tls_min_version=1.3\n>>\n>> One would reasonably expect that \"someuser\" can connect with whatever\n>> the default version i for tls_min_versino, whereas \"superuser\" would\n>> require a minimum of 1.3. But that's *not* what would happen --\n>> superuser would also be allowed to connect with a lower version if\n>> that's allowed in the global set.\n> Negatory. \"superuser\" would be allowed to send a StartupMessage\n> containing the strings \"somedatabase\" and \"superuser\" (and possibly\n> some settings of options) over a lower version if that's allowed\n> in the global set ... and would then have the connection rejected\n> because the negotiated protocol was lower than 1.3, without seeing\n> any authentication message or having a chance to send any sensitive\n> authentication credentials.\n>\n> So the risk of any information exposure over a too-low TLS version\n> is limited to the name of a database, the name of a user, and possibly\n> the settings of some options, and no sensitive authentication data.\n>\n\n\nWe are way off $subject. 
If we want to continue this discussion please\nuse an appropriate subject.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 18 May 2021 10:53:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: allow specifying direct role membership in pg_hba.conf" } ]
[ { "msg_contents": "Hi hackers,\n\nRecently I was involved with some patches [1][2] to fix code which was\nmistakenly using a global \"wrconn\" variable instead of a local one.\n\nThat bug led me to wonder if similar problems might be going\nundetected elsewhere in the code. There is a gcc compiler option [3]\n-Wshadow which informs about the similar scenario where one variable\nis \"shadowing\" another (e.g. redeclaring a variable with the same name\nas one at an outer scope).\n\nPSA a log file from a PG14 build (code from last week) run using the\n-Wshadow flag. In this logfile I have filtered out everything except\nthe shadow warnings.\n\nMy plan initially was to just fix the few warnings found and present\nthe patches here, but it turned out there are far more cases than I\nwas anticipating.\n\nThere seem to be basically 3 categories of shadowing exposed in this logfile:\n1. where a var declaration is shadowing a previously declared local\nvar (205 cases found)\n2. where a var declaration is shadowing a function parameter (14 cases found)\n3. where a var declaration is shadowing a global variable (110 cases found)\n\n~~~\n\nOf the dozen or so cases that I have looked at, so far I have been\nunable to find anything that would result in any *real* errors.\n\nBut that is not to say they are harmless either - at the very least\nIMO they affect code readability in ways that span the full spectrum\nfrom \"meh\" to downright \"dodgy-looking\".\n\nSome examples are possibly deliberate (albeit lazy / unimaginative?)\nlocal re-declarations of variables like \"i\" and \"buf\" etc.\n\nBut many other examples (particularly the global shadows) seemed\nclearly unintentional mistakes to me - like the code evolved and\ncontinued working OK without warnings, so any introduced shadowing\njust went unnoticed.\n\nAnd who knows... maybe there are a few *real* bugs lurking within this list too?\n\n~~~\n\nFor now, I am not sure how to proceed with this information. 
Hence this post...\n\n- Perhaps a consistent convention for global variable names could have\nprevented lots of these cases from occurring.\n\n- Many of these shadow cases look unintentional to me; I feel the code\nwould have been implemented differently had the developer been aware\nof them, so at least advertising their presence seems a useful thing\nto know. Perhaps the -Wshadow flag can be added to one of the\nbuild-farm machines for that purpose?\n\n- Finally, IMO the code is nearly always more confusing when there is\nvariable shadowing, so removal of these warnings seems a worthy goal.\nPerhaps they can be slowly whittled away during the course of PG 15\ndevelopment?\n\nOr am I just jumping at shadows?\n\nThoughts?\n\n----------\n[1] https://github.com/postgres/postgres/commit/4e8c0f1a0d0d095a749a329a216c88a340a455b6\n[2] https://github.com/postgres/postgres/commit/db16c656478b815627a03bb0a31833391a733eb0\n[3] https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 14 May 2021 10:00:09 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "What is lurking in the shadows?" }, { "msg_contents": "On Fri, 14 May 2021 at 12:00, Peter Smith <smithpb2250@gmail.com> wrote:\n> That bug led me to wonder if similar problems might be going\n> undetected elsewhere in the code. There is a gcc compiler option [3]\n> -Wshadow which informs about the similar scenario where one variable\n> is \"shadowing\" another (e.g. redeclaring a variable with the same name\n> as one at an outer scope).\n\n> For now, I am not sure how to proceed with this information. Hence this post...\n\nI'm inclined to think that since a bug has already been found due to a\nlocal variable shadowing a global one that it would be good to review\nthese and then consider if it's worth doing any renaming. 
I think the\nprocess of looking at each warning individually will allow us to\ndetermine if; a) there are any bugs, or; b) if it's worth doing any\nrenaming.\n\nI see GCC also has -Wshadow=compatible-local to warn when there is\nshadowing going on in local vars where both vars have compatible\ntypes. -Wshadow=local is any local var shadowing, then the option you\nused which is the same as -Wshadow=global.\n\nI'd say it might be worth aspiring to reduce the warnings from\nbuilding with these flags. If we reduced these down then it might\nallow us to more easily identify cases where there are actual bugs.\nMaybe we can get to a point where we could enable either\n-Wshadow=compatible-local or -Wshadow=local. I doubt we could ever\nget to a stage where -Wshadow=global would work for us. There's also\nsome quite old discussion in [1] that you might want to review.\n\nI don't pretend to have found the best example of ones that we might\nwant to leave alone, but:\n\npg_controldata.c: In function ‘wal_level_str’:\npg_controldata.c:73:24: warning: declaration of ‘wal_level’ shadows a\nglobal declaration [-Wshadow]\n wal_level_str(WalLevel wal_level)\n ^\nIn file included from pg_controldata.c:24:0:\n../../../src/include/access/xlog.h:187:24: warning: shadowed\ndeclaration is here [-Wshadow]\n extern PGDLLIMPORT int wal_level;\n\nI wonder if it would really clear up much if the parameter name there\nwas renamed not to shadow the GUC variable's name.\n\nAlso, doing any renaming here is not without risk that we break\nsomething, so certainly PG15 at the earliest, unless there is an\nactual bug.\n\nI imagine starting with a patch that fixes the ones where the name\ndoes not have much meaning. e.g. i, buf, tmp, lc\n\nWe also need to take into account that renaming variables here can\nincrease the overhead of backpatching fixes. 
The process of fixing\nthose up to make the patch apply to the back branch does increase the\nchances that bugs could make their way into the back branches.\nHowever, it's probably more likely to end up as a bug if the patch was\nwritten for the back branch then there's a bigger opportunity for the\npatch author to pick the wrong variable name when converting the patch\nto work with master. In the reverse case, that does not seem as likely\ndue to both variables having the same name.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/877k1psmpf.fsf%40mailbox.samurai.com\n\n\n", "msg_date": "Fri, 14 May 2021 13:16:37 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: What is lurking in the shadows?" }, { "msg_contents": "On Fri, May 14, 2021 at 01:16:37PM +1200, David Rowley wrote:\n> I'm inclined to think that since a bug has already been found due to a\n> local variable shadowing a global one that it would be good to review\n> these and then consider if it's worth doing any renaming. I think the\n> process of looking at each warning individually will allow us to\n> determine if; a) there are any bugs, or; b) if it's worth doing any\n> renaming.\n\n70116493 is another instance of that, from a not-so-far past..\n\n> I'd say it might be worth aspiring to reduce the warnings from\n> building with these flags. If we reduced these down then it might\n> allow us to more easily identify cases where there are actual bugs.\n> Maybe we can get to a point where we could enable either\n> -Wshadow=compatible-local or -Wshadow=local. I doubt we could ever\n> get to a stage where -Wshadow=global would work for us. There's also\n> some quite old discussion in [1] that you might want to review.\n\nAgreed, not before the 15 branch opens for business for cosmetic\nchanges. 
compatible-local did not sound that interesting to me\nat first sight, but the report of Peter tells the contrary: most of\nthe conflicts come from local problems. I am not sure that you could\nenable that safely though as PG_TRY() would complain on that, for\nexample in ProcessUtilitySlow().\n\n> We also need to take into account that renaming variables here can\n> increase the overhead of backpatching fixes. The process of fixing\n> those up to make the patch apply to the back branch does increase the\n> chances that bugs could make their way into the back branches.\n> However, it's probably more likely to end up as a bug if the patch was\n> written for the back branch then there's a bigger opportunity for the\n> patch author to pick the wrong variable name when converting the patch\n> to work with master. In the reverse case, that does not seem as likely\n> due to both variables having the same name.\n\nThat may be tricky, even if global or local variables are changed,\nbut I'd like to think that there is room for improvement. Looking at\nthe report, the global conflicts involve:\n- synchronous_commit\n- ssl_key_file\n- wal_segment_size\n- DataDir, with the control data business. \n\nThese seem changeable without many holes for potential back-patches.\n--\nMichael", "msg_date": "Fri, 14 May 2021 15:24:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: What is lurking in the shadows?" }, { "msg_contents": "On Fri, May 14, 2021 at 11:16 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 14 May 2021 at 12:00, Peter Smith <smithpb2250@gmail.com> wrote:\n> > That bug led me to wonder if similar problems might be going\n> > undetected elsewhere in the code. There is a gcc compiler option [3]\n> > -Wshadow which informs about the similar scenario where one variable\n> > is \"shadowing\" another (e.g. 
redeclaring a variable with the same name\n> > as one at an outer scope).\n>\n> > For now, I am not sure how to proceed with this information. Hence this post...\n>\n> I'm inclined to think that since a bug has already been found due to a\n> local variable shadowing a global one that it would be good to review\n> these and then consider if it's worth doing any renaming. I think the\n> process of looking at each warning individually will allow us to\n> determine if; a) there are any bugs, or; b) if it's worth doing any\n> renaming.\n>\n\nHi David, Michael - Thanks for your replies.\n\nYeah, I would like to work my way through all of these warnings in my\nspare time and report back to this thread (after 1-2 months?) with a\ndetailed analysis.\n\nAfter that it should become much clearer what / if any action should\nbe taken next.\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 18 May 2021 12:08:57 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: What is lurking in the shadows?" }, { "msg_contents": "On Tue, May 18, 2021 at 12:08:57PM +1000, Peter Smith wrote:\n> Yeah, I would like to work my way through all of these warnings in my\n> spare time and report back to this thread (after 1-2 months?) with a\n> detailed analysis.\n\nThe next commit fest is at the beginning of July, so there are a\ncouple of weeks of margin here.\n\n> After that it should become much clearer what / if any action should\n> be taken next.\n\nIf you can dive into the details, that would be nice! My take would\nbe to look first at the local-local conflicts and rename all the\nvariables that conflict so as any backpatch done in the areas changed\ncause a compilation failure. Some of the global-local conflicts are\nsimple enough to solve, these could go second. 
Each of them requires\na case-by-case lookup, of course.\n--\nMichael", "msg_date": "Tue, 18 May 2021 11:25:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: What is lurking in the shadows?" }, { "msg_contents": "On Tue, 18 May 2021 at 14:09, Peter Smith <smithpb2250@gmail.com> wrote:\n> Yeah, I would like to work my way through all of these warnings in my\n> spare time and report back to this thread (after 1-2 months?) with a\n> detailed analysis.\n\nI'd recommend for any patches that they come in bite-sized chunks. A\ncommitter is going to have to re-review each change. For me\npersonally, I'll probably run for the hills if I see a patch that\nrenames 200 variables.\n\nI'd think about a dozen would be good. Starting with ones that are\nleast likely to raise objection also seems like a good idea. That way\nyou'll have an idea if you want to trouble yourself with the more\nquestionable ones when the less questionable ones raised too many\nquestions. Like I mentioned, start with ones like i, buf, tmp, lc. If\nthose are accepted then move on to the more difficult ones. Unless\nyou discover bugs, then there's not really any urgency to fix these.\nDoing it in bite-sized chunks is less likely going to cause\nfrustration for you if some of the work is rejected after you've gone\nto all the trouble.\n\nAlso, going by what's mentioned in [1], in particular [2], I'm not so\ncertain that these changes will be received well by everyone. So I\nrecommend just taking it slow.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/877k1psmpf.fsf%40mailbox.samurai.com\n[2] https://www.postgresql.org/message-id/22920.1069708226%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 18 May 2021 14:26:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: What is lurking in the shadows?" } ]
[ { "msg_contents": "Hi,\n\nWhile I was reviewing one of the logical decoding features, I found\nStreaming and binary options were missing in tab completion for the\nalter subscription set option, the attached patch has the changes for\nthe same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Fri, 14 May 2021 12:00:38 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Fri, May 14, 2021 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> While I was reviewing one of the logical decoding features, I found\n> Streaming and binary options were missing in tab completion for the\n> alter subscription set option, the attached patch has the changes for\n> the same.\n> Thoughts?\n\n+1.\n\nWithout patch:\npostgres=# alter subscription testsub set (S\nSLOT_NAME SYNCHRONOUS_COMMIT\n\nWith patch:\npostgres=# alter subscription testsub set (\nBINARY SLOT_NAME STREAMING SYNCHRONOUS_COMMIT\n\nHow about ordering the options alphabetically as the tab complete\noutput anyways shows that way? 
I'm not sure if that's the practice,\nbut just a thought.\nChange:\n+ COMPLETE_WITH(\"binary\", \"slot_name\", \"synchronous_commit\",\n\"streaming\");\nTo:\n+ COMPLETE_WITH(\"binary\", \"slot_name\", \"streaming\",\n\"synchronous_commit\");\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 12:24:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Fri, May 14, 2021 at 12:25 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While I was reviewing one of the logical decoding features, I found\n> > Streaming and binary options were missing in tab completion for the\n> > alter subscription set option, the attached patch has the changes for\n> > the same.\n> > Thoughts?\n>\n> +1.\n>\n> Without patch:\n> postgres=# alter subscription testsub set (S\n> SLOT_NAME SYNCHRONOUS_COMMIT\n>\n> With patch:\n> postgres=# alter subscription testsub set (\n> BINARY SLOT_NAME STREAMING SYNCHRONOUS_COMMIT\n>\n> How about ordering the options alphabetically as the tab complete\n> output anyways shows that way? 
I'm not sure if that's the practice,\n> but just a thought.\n\nI did not see any rule for this, but also did not see any harm in\nkeeping it in alphabetical order, so changed it in the attached patch.\n\nRegards,\nVignesh", "msg_date": "Fri, 14 May 2021 18:50:50 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Fri, May 14, 2021 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 12:25 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, May 14, 2021 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > While I was reviewing one of the logical decoding features, I found\n> > > Streaming and binary options were missing in tab completion for the\n> > > alter subscription set option, the attached patch has the changes for\n> > > the same.\n> > > Thoughts?\n> >\n> > +1.\n> >\n> > Without patch:\n> > postgres=# alter subscription testsub set (S\n> > SLOT_NAME SYNCHRONOUS_COMMIT\n> >\n> > With patch:\n> > postgres=# alter subscription testsub set (\n> > BINARY SLOT_NAME STREAMING SYNCHRONOUS_COMMIT\n> >\n> > How about ordering the options alphabetically as the tab complete\n> > output anyways shows that way? I'm not sure if that's the practice,\n> > but just a thought.\n>\n> I did not see any rule for this, but also did not see any harm in\n> keeping it in alphabetical order, so changed it in the attached patch.\n\nThanks. Just a few nitpicks:\n1) How about patch name: \"Add tab completion for ALTER SUBSCRIPTION\nSET options streaming and binary\"?\n2) How about a detailed message: \"Tab completion for the options\nstreaming and binary were missing in case of ALTER SUBSCRIPTION SET\ncommand. 
This patch adds them.\"?\n\nYou may want to add this in commitfest so that we don't lose track of it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 19:10:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Fri, May 14, 2021 at 7:10 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 14, 2021 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, May 14, 2021 at 12:25 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, May 14, 2021 at 12:00 PM vignesh C <vignesh21@gmail.com>\nwrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > While I was reviewing one of the logical decoding features, I found\n> > > > Streaming and binary options were missing in tab completion for the\n> > > > alter subscription set option, the attached patch has the changes\nfor\n> > > > the same.\n> > > > Thoughts?\n> > >\n> > > +1.\n> > >\n> > > Without patch:\n> > > postgres=# alter subscription testsub set (S\n> > > SLOT_NAME SYNCHRONOUS_COMMIT\n> > >\n> > > With patch:\n> > > postgres=# alter subscription testsub set (\n> > > BINARY SLOT_NAME STREAMING\nSYNCHRONOUS_COMMIT\n> > >\n> > > How about ordering the options alphabetically as the tab complete\n> > > output anyways shows that way? I'm not sure if that's the practice,\n> > > but just a thought.\n> >\n> > I did not see any rule for this, but also did not see any harm in\n> > keeping it in alphabetical order, so changed it in the attached patch.\n>\n> Thanks. 
Just a few nitpicks:\n> 1) How about patch name: \"Add tab completion for ALTER SUBSCRIPTION\n> SET options streaming and binary\"?\n\nModified.\n\n> 2) How about a detailed message: \"Tab completion for the options\n> streaming and binary were missing in case of ALTER SUBSCRIPTION SET\n> command. This patch adds them.\"?\n>\n\nModified.\n\n> You may want to add this in commitfest so that we don't lose track of it.\n\nI have added a commitfest entry at [1].\nThanks for the comments, the attached patch has the changes for the same.\n\n[1] - https://commitfest.postgresql.org/33/3116/\n\nRegards,\nVignesh", "msg_date": "Sat, 15 May 2021 10:44:11 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Sat, May 15, 2021 at 10:44 AM vignesh C <vignesh21@gmail.com> wrote:\n> I have added a commitfest entry at [1].\n> Thanks for the comments, the attached patch has the changes for the same.\n>\n> [1] - https://commitfest.postgresql.org/33/3116/\n\nThanks Vignesh. The v3 patch looks good to me. It applies and compiles\nwell, works as expected i.e. the streaming and binary options are\nshown in the tab-complete of the ALTER SUBSCRIPTION SET command. 
I\nhave no further comments, hence moving it to \"ready for committer\"\nstate.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 15 May 2021 15:03:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On 2021-May-14, vignesh C wrote:\n\n> While I was reviewing one of the logical decoding features, I found\n> Streaming and binary options were missing in tab completion for the\n> alter subscription set option, the attached patch has the changes for\n> the same.\n> Thoughts?\n\nI wish we didn't have to keep knowledge in the psql source on which\noption names are to be used for each command. If we had some function\n SELECT pg_completion_options('alter subscription set');\nthat returned the list of options usable for each command, we wouldn't\nhave to ... psql would just retrieve the list of options for the current\ncommand.\n\nMaintaining such a list does not seem hard -- for example we could just\nhave a function alongside parse_subscription_option() that returns the\nnames that are recognized by that one. 
If we drive the implementation\nof both off a single struct, it would never be outdated.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n\n\n", "msg_date": "Tue, 18 May 2021 11:50:45 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Tue, May 18, 2021 at 9:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-14, vignesh C wrote:\n>\n> > While I was reviewing one of the logical decoding features, I found\n> > Streaming and binary options were missing in tab completion for the\n> > alter subscription set option, the attached patch has the changes for\n> > the same.\n> > Thoughts?\n>\n> I wish we didn't have to keep knowledge in the psql source on which\n> option names are to be used for each command. If we had some function\n> SELECT pg_completion_options('alter subscription set');\n> that returned the list of options usable for each command, we wouldn't\n> have to ... psql would just retrieve the list of options for the current\n> command.\n>\n> Maintaining such a list does not seem hard -- for example we could just\n> have a function alongside parse_subscription_option() that returns the\n> names that are recognized by that one. 
If we drive the implementation\n> of both off a single struct, it would never be outdated.\n\nYeah, having something similar to table_storage_parameters works better.\n\nWhile on this, I found that all the options are not listed for CREATE\nSUBSCRIPTION command in tab-complete.c, missing ones are binary and\nstreaming:\n else if (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\", \"(\"))\n COMPLETE_WITH(\"copy_data\", \"connect\", \"create_slot\", \"enabled\",\n \"slot_name\", \"synchronous_commit\");\n\nSimilarly, CREATE and ALTER PUBLICATION don't have\npublish_via_partition_root option:\n else if (HeadMatches(\"CREATE\", \"PUBLICATION\") && TailMatches(\"WITH\", \"(\"))\n COMPLETE_WITH(\"publish\");\n\nI think having some structures like below in subscriptioncmds.h,\npublicationcmds.h and using them in tab-complete.c would make more\nsense.\n\nstatic const char *const create_subscription_params[] = {\n \"copy_data\",\n \"create_slot\",\n \"enabled\",\n \"slot_name\",\n \"synchronous_commit\",\n \"binary\",\n \"connect\",\n \"streaming\",\n NULL\n};\n\nstatic const char *const alter_subscription_set_params[] = {\n \"binary\",\n \"slot_name\",\n \"streaming\",\n \"synchronous_commit\",\n NULL\n};\n\nstatic const char *const create_or_alter_publication_params[] = {\n \"publish\",\n \"publish_via_partition_root\",\n NULL\n};\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 May 2021 14:03:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Tue, May 18, 2021 at 9:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-14, vignesh C wrote:\n>\n> > While I was reviewing one of the logical decoding features, I found\n> > Streaming and binary options were missing in tab completion for the\n> > alter subscription set option, 
the attached patch has the changes for\n> > the same.\n> > Thoughts?\n>\n> I wish we didn't have to keep knowledge in the psql source on which\n> option names are to be used for each command. If we had some function\n> SELECT pg_completion_options('alter subscription set');\n> that returned the list of options usable for each command, we wouldn't\n> have to ... psql would just retrieve the list of options for the current\n> command.\n>\n> Maintaining such a list does not seem hard -- for example we could just\n> have a function alongside parse_subscription_option() that returns the\n> names that are recognized by that one. If we drive the implementation\n> of both off a single struct, it would never be outdated.\n>\n\nI like the idea of maintaining a common list, that will also prevent\noptions getting missed in the future. I will work on this and provide\na patch for it.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 19 May 2021 19:58:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Tue, May 18, 2021 at 9:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-14, vignesh C wrote:\n>\n> > While I was reviewing one of the logical decoding features, I found\n> > Streaming and binary options were missing in tab completion for the\n> > alter subscription set option, the attached patch has the changes for\n> > the same.\n> > Thoughts?\n>\n> I wish we didn't have to keep knowledge in the psql source on which\n> option names are to be used for each command. If we had some function\n> SELECT pg_completion_options('alter subscription set');\n> that returned the list of options usable for each command, we wouldn't\n> have to ... 
psql would just retrieve the list of options for the current\n> command.\n>\n> Maintaining such a list does not seem hard -- for example we could just\n> have a function alongside parse_subscription_option() that returns the\n> names that are recognized by that one. If we drive the implementation\n> of both off a single struct, it would never be outdated.\n>\n\nOn further analysis, I felt that as psql is a front end client, we\nshould not put any dependency on backend code. I felt that might be\nthe reason it has been coded to mention the options directly in\ntab-complete instead of having any dependency on backend code. We\ncould have a common module to maintain the options and have both\nfrontend and backend access it, or should we retain the changes like\nthe earlier patch?\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 20 May 2021 21:34:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Tue, May 18, 2021 at 9:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> I wish we didn't have to keep knowledge in the psql source on which\n>> option names are to be used for each command. If we had some function\n>> SELECT pg_completion_options('alter subscription set');\n>> that returned the list of options usable for each command, we wouldn't\n>> have to ... psql would just retrieve the list of options for the current\n>> command.\n\n> On further analysis, I felt that as psql is a front end client, we\n> should not put any dependency on backend code. I felt that might be\n> the reason it has been coded to mention the options directly in\n> tab-complete instead of having any dependency on backend code.\n\nWell, the problem with Alvaro's proposal is how do you square it\nwith psql's need to support back versions of the server. 
Maybe\nyou could code tab-complete.c like \"if server >= v15 then do X\nelse do Y\", but since Y would be largely duplicative of the\nserver-side knowledge accessed by X, you haven't really gotten\nrid of the two-places-that-know-this issue. And I'm afraid that\ntab-complete.c would become even more of a mess than it is now;\nalthough maybe somebody can see a cute way to avoid that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 May 2021 12:10:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Thu, May 20, 2021 at 9:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > On Tue, May 18, 2021 at 9:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >> I wish we didn't have to keep knowledge in the psql source on which\n> >> option names are to be used for each command. If we had some function\n> >> SELECT pg_completion_options('alter subscription set');\n> >> that returned the list of options usable for each command, we wouldn't\n> >> have to ... psql would just retrieve the list of options for the current\n> >> command.\n>\n> > On further analysis, I felt that as psql is a front end client, we\n> > should not put any dependency on backend code. I felt that might be\n> > the reason it has been coded to mention the options directly in\n> > tab-complete instead of having any dependency on backend code.\n>\n> Well, the problem with Alvaro's proposal is how do you square it\n> with psql's need to support back versions of the server. Maybe\n> you could code tab-complete.c like \"if server >= v15 then do X\n> else do Y\", but since Y would be largely duplicative of the\n> server-side knowledge accessed by X, you haven't really gotten\n> rid of the two-places-that-know-this issue. 
And I'm afraid that\n> tab-complete.c would become even more of a mess than it is now;\n> although maybe somebody can see a cute way to avoid that.\n\nIn my opinion let's not make that change as part of this fix. I think\nwe can fix the existing problem with the existing way of just\nincluding the options directly in the tab-complete client code because\nthe new design has an impact on the older versions and also could end\nup in duplication like Tom Lane had pointed out. We can start a new\nthread for this and try to get others' opinions on it.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 23 May 2021 16:17:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Wed, May 19, 2021 at 2:03 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 9:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n> >\n> > On 2021-May-14, vignesh C wrote:\n> >\n> > > While I was reviewing one of the logical decoding features, I found\n> > > Streaming and binary options were missing in tab completion for the\n> > > alter subscription set option, the attached patch has the changes for\n> > > the same.\n> > > Thoughts?\n> >\n> > I wish we didn't have to keep knowledge in the psql source on which\n> > option names are to be used for each command. If we had some function\n> > SELECT pg_completion_options('alter subscription set');\n> > that returned the list of options usable for each command, we wouldn't\n> > have to ... psql would just retrieve the list of options for the current\n> > command.\n> >\n> > Maintaining such a list does not seem hard -- for example we could just\n> > have a function alongside parse_subscription_option() that returns the\n> > names that are recognized by that one. 
If we drive the implementation\n> > of both off a single struct, it would never be outdated.\n>\n> Yeah, having something similar to table_storage_parameters works better.\n>\n> While on this, I found that all the options are not listed for CREATE\n> SUBSCRIPTION command in tab-complete.c, missing ones are binary and\n> streaming:\n> else if (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\",\n\"(\"))\n> COMPLETE_WITH(\"copy_data\", \"connect\", \"create_slot\", \"enabled\",\n> \"slot_name\", \"synchronous_commit\");\n>\n\nModified.\n\n> Similarly, CREATE and ALTER PUBLICATION don't have\n> publish_via_partition_root option:\n> else if (HeadMatches(\"CREATE\", \"PUBLICATION\") && TailMatches(\"WITH\",\n\"(\"))\n> COMPLETE_WITH(\"publish\");\n>\n\nModified.\n\n> I think having some structures like below in subscriptioncmds.h,\n> publicationcmds.h and using them in tab-complete.c would make more\n> sense.\n\nThis approach has a few disadvantages that Tom Lane has pointed out in [1].\nLet's use the existing way of adding options directly for tab completion.\n\nThanks for the comments, the attached v4 patch has the fixes for the same.\n[1] -\nhttps://www.postgresql.org/message-id/3690759.1621527026%40sss.pgh.pa.us\n\nRegards,\nVignesh", "msg_date": "Sun, 23 May 2021 16:24:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Sun, May 23, 2021 at 04:24:59PM +0530, vignesh C wrote:\n> \t/* Complete \"CREATE SUBSCRIPTION <name> ... 
WITH ( <opt>\" */\n> \telse if (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\", \"(\"))\n> -\t\tCOMPLETE_WITH(\"copy_data\", \"connect\", \"create_slot\", \"enabled\",\n> -\t\t\t\t\t \"slot_name\", \"synchronous_commit\");\n> +\t\tCOMPLETE_WITH(\"binary\", \"copy_data\", \"connect\", \"create_slot\",\n> +\t\t\t\t\t \"enabled\", \"slot_name\", \"streaming\",\n> +\t\t\t\t\t \"synchronous_commit\");\n\n\"copy_data\" and \"connect\" need to be reversed. Applied.\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 15:56:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Added missing tab completion for alter subscription set option" }, { "msg_contents": "On Fri, Jun 11, 2021 at 12:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, May 23, 2021 at 04:24:59PM +0530, vignesh C wrote:\n> > /* Complete \"CREATE SUBSCRIPTION <name> ... WITH ( <opt>\" */\n> > else if (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\", \"(\"))\n> > - COMPLETE_WITH(\"copy_data\", \"connect\", \"create_slot\", \"enabled\",\n> > - \"slot_name\", \"synchronous_commit\");\n> > + COMPLETE_WITH(\"binary\", \"copy_data\", \"connect\", \"create_slot\",\n> > + \"enabled\", \"slot_name\", \"streaming\",\n> > + \"synchronous_commit\");\n>\n> \"copy_data\" and \"connect\" need to be reversed. Applied.\n\nThanks for committing this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 14 Jun 2021 08:20:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing tab completion for alter subscription set option" } ]
[ { "msg_contents": "I found the following code in multirangetypes.c\n\n>\tif (*ptr == '{')\n>\t\tptr++;\n>\telse\n>\t\tereport(ERROR,\n>\t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n>\t\t\t\t errmsg(\"malformed multirange literal: \\\"%s\\\"\",\n>\t\t\t\t\t\tinput_str),\n>\t\t\t\t errdetail(\"Missing left bracket.\")));\n\nI'm not sure how much we (or people) are strict on the distinction\nbetween the $SUBJECT, isn't '{' a brace generally?\n\npostgres=# select '[1,3]'::int4multirange;\nERROR: malformed multirange literal: \"[1,3]\"\nLINE 1: select '[1,3]'::int4multirange;\n ^\nDETAIL: Missing left bracket.\n\nThe distinction is significant there. It should at least be \"Missing\nleft curly bracket.\" or \"Missing left brace.\" (or left curly brace..?)\n\n'{' is mentioned as \"curly brackets\" in comments of the source file.\nIt is mentioned as \"brace\" in regexp doc [1]. And.. uh.. I found the\nword \"curly braces\" in the doc for coding conventions..\n\n[1]: https://www.postgresql.org/docs/devel/functions-matching.html\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 14 May 2021 15:31:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Bracket, brace, parenthesis" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I'm not sure how much we (or people) are strict on the distinction\n> between the $SUBJECT, isn't '{' a brace generally?\n\n+1. 
I tend to write \"square bracket\" or \"curly brace\" when I want to\nbe extra clear, but I think the bare terms are widely understood to\nhave those meanings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 10:04:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bracket, brace, parenthesis" }, { "msg_contents": "At Fri, 14 May 2021 10:04:57 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > I'm not sure how much we (or people) are strict on the distinction\n> > between the $SUBJECT, isn't '{' a brace generally?\n> \n> +1. I tend to write \"square bracket\" or \"curly brace\" when I want to\n> be extra clear, but I think the bare terms are widely understood to\n> have those meanings.\n\nThanks! I think the message is new in 14 so we can fix it right\naway. The attached is the version with a commit message added.\n\nIf not, I'll register this to the next CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 27 May 2021 15:20:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bracket, brace, parenthesis" }, { "msg_contents": "On Thu, May 27, 2021 at 03:20:10PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 14 May 2021 10:04:57 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> +1. I tend to write \"square bracket\" or \"curly brace\" when I want to\n> >> be extra clear, but I think the bare terms are widely understood to\n> >> have those meanings.\n> \n> Thanks! I think the message is new in 14 so we can fix it right\n> away. The attached is the version with a commit message added.\n\nNo objections from me to fix that on HEAD now for clarity, let's wait\na bit and see if others have more comments. 
You have missed an update\nof multirangetypes.out, by the way.\n--\nMichael", "msg_date": "Thu, 27 May 2021 21:08:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Bracket, brace, parenthesis" }, { "msg_contents": "At Thu, 27 May 2021 21:08:46 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, May 27, 2021 at 03:20:10PM +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 14 May 2021 10:04:57 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> +1. I tend to write \"square bracket\" or \"curly brace\" when I want to\n> >> be extra clear, but I think the bare terms are widely understood to\n> >> have those meanings.\n> > \n> > Thanks! I think the message is new in 14 so we can fix it right\n> > away. The attached is the version with a commit message added.\n> \n> No objections from me to fix that on HEAD now for clarity, let's wait\n> a bit and see if others have more comments. You have missed an update\n> of multirangetypes.out, by the way.\n\nMmm. Thanks. So the test doesn't have a check for the case of trailing\ngarbage. Looking at the discussion about trailing garbage of integer\nvalues, we might need one for the case.\n\nThe attached second file adds a test for trailing garbage for\nmultirangetype.sql and rangetype.sql.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 28 May 2021 15:25:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bracket, brace, parenthesis" }, { "msg_contents": "On Fri, May 28, 2021 at 03:25:40PM +0900, Kyotaro Horiguchi wrote:\n> Mmm. Thanks. So the test doesn't have a check for the case of trailing\n> garbage. 
Looking at the discussion about trailing garbage of integer\n> values, we might need one for the case.\n> \n> The attached second file adds a test for trailing garbage for\n> multirangetype.sql and rangetype.sql.\n\nTrue for the lack of coverage with some junk after the right brace for\nmulti-ranges, but rangetypes.sql has already some coverage. Applied\nwith this small update.\n--\nMichael", "msg_date": "Mon, 31 May 2021 11:36:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Bracket, brace, parenthesis" }, { "msg_contents": "At Mon, 31 May 2021 11:36:23 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, May 28, 2021 at 03:25:40PM +0900, Kyotaro Horiguchi wrote:\n> > Mmm. Thanks. So the test doesn't have a check for the case of trailing\n> > garbage. Looking at the discussion about trailing garbage of integer\n> > values, we might need one for the case.\n> > \n> > The attached second file adds a test for trailing garbage for\n> > multirangetype.sql and rangetype.sql.\n> \n> True for the lack of coverage with some junk after the right brace for\n> multi-ranges, but rangetypes.sql has already some coverage. Applied\n> with this small update.\n\nHmm. Right. Thanks for the check and committing!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 May 2021 14:09:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bracket, brace, parenthesis" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\nCID 1412632 (#1 of 1): Out-of-bounds access (OVERRUN)1.\noverrun-buffer-val: Overrunning buffer pointed to by &c of 1 bytes by\npassing it to a function which accesses it at byte offset 4.\n\nFor some people, Coverity opinions count zero.\nWho knows for others, it helps.\n\nIt doesn't matter whether WideCharToMultiByte will fail or not; the danger\nexists.\nIf WideCharToMultiByte returns 4, memmove will possibly destroy 4 bytes.\n\nThe fix uses the traditional and bogus C style, without tricks.\n\ndiff --git a/src/timezone/zic.c b/src/timezone/zic.c\nindex 0ea6ead2db..a5f7e7f1cd 100644\n--- a/src/timezone/zic.c\n+++ b/src/timezone/zic.c\n@@ -1129,9 +1129,9 @@ static bool\n itssymlink(char const *name)\n {\n #ifdef HAVE_SYMLINK\n- char c;\n+ char linkpath[MAXPGPATH];\n\n- return 0 <= readlink(name, &c, 1);\n+ return 0 <= readlink(name, linkpath, sizeof(linkpath));\n #else\n return false;\n #endif\n\nregards,\nRanier Vilela", "msg_date": "Fri, 14 May 2021 18:43:24 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible memory corruption (src/timezone/zic.c b/src/timezone/zic.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Per Coverity.\n> CID 1412632 (#1 of 1): Out-of-bounds access (OVERRUN)1.\n> overrun-buffer-val: Overrunning buffer pointed to by &c of 1 bytes by\n> passing it to a function which accesses it at byte offset 4.\n\n> For some people, Coverity opinions count zero.\n\nThis particular complaint seems to match a pattern that Coverity has\nbeen generating a lot lately. I've yet to see one that wasn't a\nfalse positive, so it looks like a Coverity bug to me.\n\n> It doesn't matter whether WideCharToMultiByte will fail or not; the danger\n> exists.\n> If WideCharToMultiByte returns 4, memmove will possibly destroy 4 bytes.\n\nThis analysis seems to me to be nonsense.\n\n(1) sizeof(char) is one, per the C standard. 
Therefore, the existing\ncoding in itssymlink accurately describes the size of the buffer it's\nproviding. The alternative you propose also accurately describes\nthe size of the buffer it's providing. It's nonsense to suppose that\none is safer than the other --- if readlink is willing to write past\nthe specified buffer size, they're both equally dangerous. So this\nfix fixes nothing.\n\n(2) As an independent matter, we should worry about whether our\npgreadlink() implementation is capable of writing past the specified\nbuffer size. I don't think that WideCharToMultiByte will do so;\nMicrosoft's documentation clearly says that \"cbMultiByte\" is the\nsize *in bytes* of the buffer indicated by \"lpMultiByteStr\".\nHowever it's fair to question whether that bit of code for deleting\n\"\\??\\\" is safe. I think it is though. Per the Microsoft docs,\nthe return value of WideCharToMultiByte is:\n\n If successful, returns the number of bytes written to the buffer pointed\n to by lpMultiByteStr. If the function succeeds and cbMultiByte is 0, the\n return value is the required size, in bytes, for the buffer indicated by\n lpMultiByteStr. [ but we aren't passing zero for cbMultiByte ]\n\n The function returns 0 if it does not succeed.\n [ and one of the failure cases is: ]\n ERROR_INSUFFICIENT_BUFFER. A supplied buffer size was not large enough,\n or it was incorrectly set to NULL.\n\nSo I don't believe that it will return r > 4 when the supplied buffer size\nis only 1. 
What's going to happen instead is a failure return, because\nthe string doesn't fit.\n\nHence, we do have a problem here, which is that pgreadlink is pretty\nmuch always going to fail when used in the way zic.c is using it, and\nthus zic is going to fail to recognize symlinks when run on Windows.\n\nThe IANA crew are unlikely to care: they're going to say that they're\nusing readlink() per the POSIX specification for it, and they'll be\nright.\n\nSo the question for us is whether it's worth trying to make pgreadlink\nconform to the letter of the POSIX spec in this detail. TBH, I can't\nget excited about that, at least not so far as zic's usage is concerned.\nWhat Windows user is going to be using our version of zic to install\ntimezone files into a subdirectory that has pre-existing symlinks?\n\nBy the same token, I'm pretty unexcited about working around pgreadlink's\ndeficiency by modifying the IANA code in the way you suggest. It's\npainful enough to keep our copy of their code in sync with their updates;\nwe don't need hacks like that added.\n\nIn short, I don't see much of a case for doing anything; but if somebody\nwere really excited about this they could try to make pgreadlink() fill\nthe supplied buffer without failing when it's too short.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 18:32:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible memory corruption (src/timezone/zic.c\n b/src/timezone/zic.c)" }, { "msg_contents": "I wrote:\n> So the question for us is whether it's worth trying to make pgreadlink\n> conform to the letter of the POSIX spec in this detail. TBH, I can't\n> get excited about that, at least not so far as zic's usage is concerned.\n\nHmmm ... on closer inspection, though, it might not be that hard.\npgreadlink is already using a fixed-length buffer (with only enough\nroom for MAX_PATH WCHARs) for the input of WideCharToMultiByte. 
So\nit could use a fixed-length buffer of say 4 * MAX_PATH bytes for the\noutput, and then transfer just the appropriate amount of data to the\ncaller's buffer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 May 2021 18:52:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible memory corruption (src/timezone/zic.c\n b/src/timezone/zic.c)" }, { "msg_contents": "Em sex., 14 de mai. de 2021 às 19:52, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> I wrote:\n> > So the question for us is whether it's worth trying to make pgreadlink\n> > conform to the letter of the POSIX spec in this detail. TBH, I can't\n> > get excited about that, at least not so far as zic's usage is concerned.\n>\n> Hmmm ... on closer inspection, though, it might not be that hard.\n> pgreadlink is already using a fixed-length buffer (with only enough\n> room for MAX_PATH WCHARs) for the input of WideCharToMultiByte. So\n> it could use a fixed-length buffer of say 4 * MAX_PATH bytes for the\n> output, and then transfer just the appropriate amount of data to the\n> caller's buffer.\n>\nFollowing your directions, maybe something like this will solve?\n\nregards,\nRanier Vilela", "msg_date": "Sat, 15 May 2021 11:35:13 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible memory corruption (src/timezone/zic.c\n b/src/timezone/zic.c)" }, { "msg_contents": "At Sat, 15 May 2021 11:35:13 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Em sex., 14 de mai. de 2021 às 19:52, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n> \n> > I wrote:\n> > > So the question for us is whether it's worth trying to make pgreadlink\n> > > conform to the letter of the POSIX spec in this detail. TBH, I can't\n> > > get excited about that, at least not so far as zic's usage is concerned.\n> >\n> > Hmmm ... 
on closer inspection, though, it might not be that hard.\n> > pgreadlink is already using a fixed-length buffer (with only enough\n> > room for MAX_PATH WCHARs) for the input of WideCharToMultiByte. So\n> > it could use a fixed-length buffer of say 4 * MAX_PATH bytes for the\n> > output, and then transfer just the appropriate amount of data to the\n> > caller's buffer.\n> >\n> Following your directions, maybe something like this will solve?\n\n-\tDWORD\t\tattr;\n-\tHANDLE\t\th;\n\nWhy does the patch move the definitions for \"attr\" and \"h\"?\n\n\n+\tAssert(path != NULL && buf != NULL);\n\nI don't think it's required. Even if we want to imitate readlink,\nthey should (maybe) return EFAULT in that case.\n\n\n+\tbuf[r] = '\\0';\n\nreadlink is defined as not appending a terminator. In the first place\nthe \"buf[r] = '\\0'\" is overrunning the given buffer.\n\n\n-\treturn 0 <= readlink(name, &c, 1);\n+\treturn 0 <= readlink(name, linkpath, sizeof(linkpath));\n\nAccording to the discussion, we don't want to modify zic.c at\nall. (Maybe forgot to remove?)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 17 May 2021 10:36:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible memory corruption (src/timezone/zic.c\n b/src/timezone/zic.c)" }, { "msg_contents": "Em dom., 16 de mai. de 2021 às 22:37, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Sat, 15 May 2021 11:35:13 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Em sex., 14 de mai. de 2021 às 19:52, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n> >\n> > > I wrote:\n> > > > So the question for us is whether it's worth trying to make\n> pgreadlink\n> > > > conform to the letter of the POSIX spec in this detail. TBH, I can't\n> > > > get excited about that, at least not so far as zic's usage is\n> concerned.\n> > >\n> > > Hmmm ... 
on closer inspection, though, it might not be that hard.\n> > > pgreadlink is already using a fixed-length buffer (with only enough\n> > > room for MAX_PATH WCHARs) for the input of WideCharToMultiByte. So\n> > > it could use a fixed-length buffer of say 4 * MAX_PATH bytes for the\n> > > output, and then transfer just the appropriate amount of data to the\n> > > caller's buffer.\n> > >\n> > Following your directions, maybe something like this will solve?\n>\n> - DWORD attr;\n> - HANDLE h;\n>\n> Why the patch moves the definitions for \"attr\" and \"h\"?\n>\nHi Kyotaro, thank you for reviewing this.\n\nI changed the declarations of variables for reasons of standardization and\nto avoid fragmentation of memory,\nfollowing the same principles of declaration of structures.\n\n\n>\n> + Assert(path != NULL && buf != NULL);\n>\n> I don't think it's required. Even if we want to imitate readlink,\n> they should (maybe) return EFALUT in that case.\n>\nYes. It is not a requirement.\nBut I try to take every chance to prevent bugs.\nAnd always validating the entries, sooner or later, helps to find errors.\n\n\n>\n>\n> + buf[r] = '\\0';\n>\n> readlink is defined as not appending a terminator. In the first place\n> the \"buf[r] = '\\0'\" is overrunning the given buffer.\n>\nOk. I will remove this.\n\n\n>\n>\n> - return 0 <= readlink(name, &c, 1);\n> + return 0 <= readlink(name, linkpath, sizeof(linkpath));\n>\n> According to the discussion, we don't want to modify zic.c at\n> all. (Maybe forgot to remove?)\n>\nI haven't forgotten.\n\nI just don't agree to use char, as char pointers.\nBut I can remove it from the patch too.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 17 May 2021 08:28:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible memory corruption (src/timezone/zic.c\n b/src/timezone/zic.c)" } ]
[ { "msg_contents": "Hi,\n(Adding Andrew in CC for the buildfarm and PostgresNode parts.)\n\n$subject has been around for a couple of years now, with the following\nthreads:\nhttps://www.postgresql.org/message-id/20180126080026.GI17847@paquier.xyz\nhttps://www.postgresql.org/message-id/CAB7nPqRdaN1A1YNjxNL9T1jUEWct8ttqq29dNv8W_o37+e8wfA@mail.gmail.com\n\nAn advantage of moving to TAP is that we can then remove the support\nfor upgrades within the MSVC scripts, and also remove pg_upgrade's\ntest.sh that has accumulated tweaks that are solved by the TAP tests,\nresulting in cleanup:\n 8 files changed, 230 insertions(+), 403 deletions(-)\n\nBased on the past discussions, there were two obstacles preventing us from\ndoing this switch:\n- Support for tests with older versions, something where the gap has\nbeen closed thanks to Andrew's work in 4c4eaf3d.\n- Buildfarm support, and I am not sure how things need to be extended\nthere.\n\nAnother thing to note is that HEAD uses oldbindir, bindir and libdir\nto track the location of the old and new libraries and binaries. With\nthe infrastructure in place, one can define only an install path for\na PostgresNode, so this version uses only two variables:\n- oldinstall, for the installation path of the version to upgrade\nfrom.\n- oldsrc, to point to the source of the old version.\n\nIt is not difficult to switch to one approach or the other, but\nreducing the logic to a minimum number of variables is a good deal to\ntake IMO.\n\nI have been testing this patch a bit with older versions, down to 12,\nand that was logically working, and PostgresNode may need more to be\nable to work with ~11, as documented in get_new_node(). And I have\nnot tested that with MSVC yet. Anyway, attached is a new patch to\nmake the discussion move on. 
Even if there is still work to be done\nhere, would people here still support this switch?\n\nThanks,\n--\nMichael", "msg_date": "Sat, 15 May 2021 11:26:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 5/14/21 10:26 PM, Michael Paquier wrote:\n> Hi,\n> (Adding Andrew in CC for the buildfarm and PostgresNode parts.)\n>\n> $subject has been around for a couple of years now, with the following\n> threads:\n> https://www.postgresql.org/message-id/20180126080026.GI17847@paquier.xyz\n> https://www.postgresql.org/message-id/CAB7nPqRdaN1A1YNjxNL9T1jUEWct8ttqq29dNv8W_o37+e8wfA@mail.gmail.com\n>\n> An advantage of moving to TAP is that we can then remove the support\n> for upgrades within the MSVC scripts, and also remove pg_upgrade's\n> test.sh that has accumulated tweaks that are solved by the TAP tests,\n> resulting in cleanup:\n> 8 files changed, 230 insertions(+), 403 deletions(-)\n>\n> Based on the past discussions, there were two obstacles preventing to\n> do this switch:\n> - Support for tests with older versions, something where the gap as\n> been closed thanks to Andrew's work in 4c4eaf3d.\n> - Buildfarm support, and I am not sure how things need to be extended\n> there.\n>\n> Another thing to note is that HEAD uses oldbindir, bindir and libdir\n> to track the location of the old and new libraries and binaries. 
With\n> the infrastructure in place, one can define only an install path for\n> a PostgresNode, so this version uses only two variables:\n> - oldinstall, for the installation path of the version to upgrade\n> from.\n> - oldsrc, to point to the source of the old version.\n>\n> It is not difficult to switch to one approach or the other, but\n> reducing the logic to a minimum number of variables is a good deal to\n> take IMO.\n>\n> I have been testing this patch a bit with older versions, down to 12,\n> and that was logically working, and PostgresNode may need more to be\n> able to work with ~11, as documented in get_new_node(). And I have\n> not tested that with MSVC yet. Anyway, attached is a new patch to\n> make the discussion move on. Even if there is still work to be done\n> here, would people here still support this switch?\n\n\n\nPostgresNode is currently able to create nodes suitable for upgrade down\nto release 10. If we add '-w' to the 'pg_ctl start' flags that can\nextend down to release 9.5. (Just tested) I think we should do that\nforthwith. '-w' is now the default, but having it there explicitly does\nno harm.\n\n\nIf people are interested in what's incompatible on older versions, they\ncan look at\n<https://gitlab.com/adunstan/postgresnodeng/-/blob/master/PostgresNode.pm>\nstarting at about line 2764.\n\n\nI don't think this will work, though, unless there is enough data to\nexercise pg_upgrade fairly thoroughly. The approach taken by both\ntest.sh and (somewhat more comprehensively) by the buildfarm cross\nversion upgrade module is to test a cluster where the regression tests\nhave been run. That might be more difficult when testing against older\nversions, so I have published a snapshot of the dumps of each of the\nversions we test against in the buildfarm animal crake. These could be\nloaded into PostgresNode instances and then an upgrade attempted. See\n<https://gitlab.com/adunstan/pg-old-bin/-/tree/master/data>. The data\ngoes back to 9.2. 
These compressed dumps are a couple of megabytes each,\nnot huge.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 15 May 2021 14:22:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, May 15, 2021 at 02:22:24PM -0400, Andrew Dunstan wrote:\n> PostgresNode is currently able to create nodes suitable for upgrade down\n> to release 10. If we add '-w' to the 'pg_ctl start' flags that can\n> extend down to release 9.5. (Just tested) I think we should do that\n> forthwith. '-w' is now the default, but having it there explicitly does\n> no harm.\n\nAgreed. When testing manually, I have personally never worked on any\npatches that required binaries older than 9.4, so I would be fine if\nthe TAP tests are able to work easily down to 9.5, even if pg_upgrade\nis supported down to 8.4.\n\n> If people are interested in what's incompatible on older versions, they\n> can look at\n> <https://gitlab.com/adunstan/postgresnodeng/-/blob/master/PostgresNode.pm>\n> starting at about line 2764.\n\nWe should really have adjust_conf() at some point in the in-core\nmodule.\n\n> I don't think this will work, though, unless there is enough data to\n> exercise pg_upgrade fairly thoroughly. The approach taken by both\n> test.sh and (somewhat more comprehensively) by the buildfarm cross\n> version upgrade module is to test a cluster where the regression tests\n> have been run.\n\nYeah, that's what my patch is doing with pg_regress, FWIW. This\nrequires regress.so from the old version.\n\n> That might be more difficult when testing against older\n> versions, so I have published a snapshot of the dumps of each of the\n> versions we tests against in the buildfarm animal crake. These could be\n> loaded into PostgresNode instances and then an upgrade attempted. 
See\n> <https://gitlab.com/adunstan/pg-old-bin/-/tree/master/data>. The data\n> goes back to 9.2. These compressed dumps are a couple of megabytes each,\n> not huge.\n\nI agree that this can be simpler in some cases. In your experience,\nhow much of an issue is it when it becomes necessary to keep around\nbinaries that rely on libraries older than what a system can support?\nIt is easy to counter issues in this area with OpenSSL and\nnon-necessary things, but we had in the past also cases where we had\ncode that conflicted with the kernel, aka 3e68686.\n\nAt the end of this exercise, what I think we should achieve is to:\n1) Reduce the diff between the buildfarm code and the in-core code.\n2) Get rid of test.sh.\n3) Be able to run easily upgrade tests across major versions for\ndevelopers.\n\nAs of now, 3) requires some extra facilities depending on if this is\ndone by the buildfarm or the in-core tests:\n1) Path to the source code of the old version. This is required once\nit becomes necessary to find out src/test/regress/ for the schedule,\nthe tests to run and its regress.so. There is no need to do that if\nyou have a dump of the old instance.\n2) Path to a custom dump to replace the run with pg_regress from 1).\n3) Location of the old binaries, for pg_upgrade. When it comes to\nPostgresNode, we require an install path, so we cannot use directly\nthe location of the binaries.\n\nLooking at the code of the buildfarm, its code does something smarter\nthan what my patch or HEAD's test.sh does now, as these require the\npath for the old source. The buildfarm code first scans for the\nprobin's used in the regression database and then updates any\nreferences. What test.sh and my patch do is using the path to the old\nsource code and run a single UPDATE. The approach taken by the\nbuildfarm code is more portable, as a path to the old source code\nbecomes necessary only if running pg_regress manually. 
So, what about\ndoing the following thing?\n1) Update the TAP test so as probin entries are updated in the same way\nas the buildfarm.\n2) Allow one to specify a path to a custom dump, or a path to the old\nsource code for pg_regress.\n\nIf we do that, then it should be possible to reduce the code footprint\nin the buildfarm code, while still allowing people to test major\nupgrades in the same old-fashioned way, right? That's assuming that\nPostgresNode is made compatible down to 9.2, of course, as a first\nstep, as that's the range of the dumps you are keeping around for the\nbuildfarm.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 17 May 2021 10:55:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 5/16/21 9:55 PM, Michael Paquier wrote:\n> On Sat, May 15, 2021 at 02:22:24PM -0400, Andrew Dunstan wrote:\n>> PostgresNode is currently able to create nodes suitable for upgrade down\n>> to release 10. If we add '-w' to the 'pg_ctl start' flags that can\n>> extend down to release 9.5. (Just tested) I think we should do that\n>> forthwith. '-w' is now the default, but having it there explicitly does\n>> no harm.\n> Agreed. 
When testing manually, I have personally never worked on any\n> patches that required binaries older than 9.4, so I would be fine if\n> the TAP tests are able to work easily down to 9.5, even if pg_upgrade\n> is supported down to 8.4.\n>\n>> If people are interested in what's incompatible on older versions, they\n>> can look at\n>> <https://gitlab.com/adunstan/postgresnodeng/-/blob/master/PostgresNode.pm>\n>> starting at about line 2764.\n> We should really have adjust_conf() at some point in the in-core\n> module.\n\n\n\nYes, I'm going to be proposing a series of smallish patches including\nthese when the tree is branched (which I hope will be in a few weeks).\n\n\n>> I don't think this will work, though, unless there is enough data to\n>> exercise pg_upgrade fairly thoroughly. The approach taken by both\n>> test.sh and (somewhat more comprehensively) by the buildfarm cross\n>> version upgrade module is to test a cluster where the regression tests\n>> have been run.\n> Yeah, that's what my patch is doing with pg_regress, FWIW. This\n> requires regress.so from the old version.\n>\n>> That might be more difficult when testing against older\n>> versions, so I have published a snapshot of the dumps of each of the\n>> versions we tests against in the buildfarm animal crake. These could be\n>> loaded into PostgresNode instances and then an upgrade attempted. See\n>> <https://gitlab.com/adunstan/pg-old-bin/-/tree/master/data>. The data\n>> goes back to 9.2. These compressed dumps are a couple of megabytes each,\n>> not huge.\n> I agree that this can be simpler in some cases. In your experience,\n> how much of an issue is it when it becomes necessary to keep around\n> binaries that rely on libraries older than what a system can support?\n> It is easy to counter issues in this area with OpenSSL and\n> non-necessary things, but we had in the past also cases where we had\n> code that conflicted with the kernel, aka 3e68686.\n\n\nThat one at least isn't an issue. 
Old versions of postgres didn't have\npg_rewind.\n\n\n\n>\n> At the end of this exercise, what I think we should achieve is to:\n> 1) Reduce the diff between the buildfarm code and the in-core code.\n> 2) Get rid of test.sh.\n> 3) Be able to run easily upgrade tests across major versions for\n> developers.\n>\n> As of now, 3) requires some extra facilities depending on if this is\n> done by the buildfarm or the in-core tests:\n> 1) Path to the source code of the old version. This is required once\n> it becomes necessary to find out src/test/regress/ for the schedule,\n> the tests to run and its regress.so. There is no need to do that if\n> you have a dump of the old instance.\n> 2) Path to a custom dump to replace the run with pg_regress from 1).\n> 3) Location of the old binaries, for pg_upgrade. When it comes to\n> PostgresNode, we require an install path, so we cannot use directly\n> the location of the binaries.\n>\n> Looking at the code of the buildfarm, its code does something smarter\n> than what my patch or HEAD's test.sh does now, as these require the\n> path for the old source. The buildfarm code first scans for the\n> probin's used in the regression database and then updates any\n> references. What test.sh and my patch do is using the path to the old\n> source code and run a single UPDATE. The approach taken by the\n> buildfarm code is more portable, as a path to the old source code\n> becomes necessary only if running pg_regress manually. So, what about\n> doing the following thing?\n> 1) Update the TAP test so as probin entries are updated in the same way\n> as the buildfarm.\n> 2) Allow one to specify a path to a custom dump, or a path to the old\n> source code for pg_regress.\n>\n> If we do that, then it should be possible to reduce the code footprint\n> in the buildfarm code, while still allowing people to test major\n> upgrades in the same old-fashioned way, right? 
That's assuming that\n> PostgresNode is made compatible down to 9.2, of course, as a first\n> step, as that's the range of the dumps you are keeping around for the\n> buildfarm.\n>\n\n\nI'm intending to add some older dumps. -) But for now 9.2 is a good target.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 17 May 2021 12:32:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Mon, May 17, 2021 at 12:32:13PM -0400, Andrew Dunstan wrote:\n> On 5/16/21 9:55 PM, Michael Paquier wrote:\n> Yes, I'm going to be proposing a series of smallish patches including\n> these when the tree is branched (which I hope will be in a few weeks).\n\nThanks! That clearly needs to happen first. I'll help reviewing\nthese.\n\n>> If we do that, then it should be possible to reduce the code footprint\n>> in the buildfarm code, while still allowing people to test major\n>> upgrades in the same old-fashioned way, right? That's assuming that\n>> PostgresNode is made compatible down to 9.2, of course, as a first\n>> step, as that's the range of the dumps you are keeping around for the\n>> buildfarm.\n> \n> I'm intending to add some older dumps. -) But for now 9.2 is a good target.\n\nMakes sense. For now, I'll update this patch set so as it is possible\nto use custom dumps, as an option in parallel of pg_regress when\nspecifying a different source code path. 
I'll also decouple the\nbusiness with probin updates and stick with the approach used by the\nbuildfarm code.\n--\nMichael", "msg_date": "Tue, 18 May 2021 10:49:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Tue, May 18, 2021 at 10:49:39AM +0900, Michael Paquier wrote:\n> Makes sense. For now, I'll update this patch set so as it is possible\n> to use custom dumps, as an option in parallel of pg_regress when\n> specifying a different source code path. I'll also decouple the\n> business with probin updates and stick with the approach used by the\n> buildfarm code.\n\nThis has proved to not be that difficult. With the updated version\nattached, pg_upgrade has two modes to set up the old instance used for\nthe upgrade with older binaries:\n- With oldsrc and oldinstall set, pg_regress gets used, same way as\nHEAD.\n- With olddump and oldinstall set, an old dump is loaded instead in\nthe old instance before launching the upgrade.\n\noldsrc and olddump are exclusive options. Similarly to HEAD, the\ndumps taken from the old and new instances generate diffs that can be\ninspected manually. The updates of probin are done without any\ndependencies to the source path of the old instance, copying from the\nbuildfarm.\n\nWhile on it, I have fixed a couple of things that exist in test.sh but\nwere not reflected in this new script:\n- Handling of postfix operations with ~13 clusters.\n- Handling oldstyle_length for ~9.6 clusters.\n- Handling of EXTRA_REGRESS_OPT.\n\nThis stuff still needs to be expanded depending on how PostgresNode is\nmade backward-compatible, but I'll wait for that to happen before\ngoing further down here. 
I have also spent some time testing all that\nwith MSVC, and the installation paths used for pg_regress&co make the\nscript a tad more confusing, so I have dropped this part for now.\n\nThanks,\n--\nMichael", "msg_date": "Thu, 20 May 2021 15:07:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hello Michael,\n\nThis patch needs the update from 201a76183 -- the function `get_new_node`\nno longer exists.\n\nRunning check tests in the pg_upgrade folder fails for this reason.\n\nThank you,\nRachel\n\nOn Tue, Sep 7, 2021 at 2:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 18, 2021 at 10:49:39AM +0900, Michael Paquier wrote:\n> > Makes sense. For now, I'll update this patch set so as it is possible\n> > to use custom dumps, as an option in parallel of pg_regress when\n> > specifying a different source code path. I'll also decouple the\n> > business with probin updates and stick with the approach used by the\n> > buildfarm code.\n>\n> This has proved to not be that difficult. With the updated version\n> attached, pg_upgrade has two modes to set up the old instance used for\n> the upgrade with older binaries:\n> - With oldsrc and oldinstall set, pg_regress gets used, same way as\n> HEAD.\n> - With olddump and oldinstall set, an old dump is loaded instead in\n> the old instance before launching the upgrade.\n>\n> oldsrc and olddump are exclusive options. Similarly to HEAD, the\n> dumps taken from the old and new instances generate diffs that can be\n> inspected manually. 
The updates of probin are done without any\n> dependencies to the source path of the old instance, copying from the\n> buildfarm.\n>\n> While on it, I have fixed a couple of things that exist in test.sh but\n> were not reflected in this new script:\n> - Handling of postfix operations with ~13 clusters.\n> - Handling oldstyle_length for ~9.6 clusters.\n> - Handling of EXTRA_REGRESS_OPT.\n>\n> This stuff still needs to be expanded depending on how PostgresNode is\n> made backward-compatible, but I'll wait for that to happen before\n> going further down here.  I have also spent some time testing all that\n> with MSVC, and the installation paths used for pg_regress&co make the\n> script a tad more confusing, so I have dropped this part for now.\n>\n> Thanks,\n> --\n> Michael\n>\n", "msg_date": "Tue, 7 Sep 2021 14:43:15 -0700", "msg_from": "Rachel Heaton <rachelmheaton@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Tue, Sep 07, 2021 at 02:43:15PM -0700, Rachel Heaton wrote:\n> Running check tests in the pg_upgrade folder fails for this reason.\n\nThanks, rebased as attached.  Andrew has posted another patch set that\ncompletely reworks the shape of the modules by moving them into a\ndedicated namespace, meaning that this is going to break again.  I'll\nsee about that when we reach this point.\n--\nMichael", "msg_date": "Wed, 8 Sep 2021 15:34:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, May 20, 2021 at 03:07:56PM +0900, Michael Paquier wrote:\n> This stuff still needs to be expanded depending on how PostgresNode is\n> made backward-compatible, but I'll wait for that to happen before\n> going further down here. 
I have also spent some time testing all that\n> with MSVC, and the installation paths used for pg_regress&co make the\n> script a tad more confusing, so I have dropped this part for now.\n\nAndrew, as this is a bit tied to the buildfarm code and any\nsimplifications that could happen there, do you have any comments\nand/or suggestions for this patch?\n\nThis still applies on HEAD and it holds all the properties of the\nexisting test by using PostgresNodes that point to older installations\nfor the business with binaries and libraries business. There is one\npart where pg_upgrade logs into src/test/regress/, which is not good,\nbut that should be easily fixable.\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 15:19:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 10/1/21 2:19 AM, Michael Paquier wrote:\n> On Thu, May 20, 2021 at 03:07:56PM +0900, Michael Paquier wrote:\n>> This stuff still needs to be expanded depending on how PostgresNode is\n>> made backward-compatible, but I'll wait for that to happen before\n>> going further down here. I have also spent some time testing all that\n>> with MSVC, and the installation paths used for pg_regress&co make the\n>> script a tad more confusing, so I have dropped this part for now.\n> Andrew, as this is a bit tied to the buildfarm code and any\n> simplifications that could happen there, do you have any comments\n> and/or suggestions for this patch?\n\n\n\nI haven't looked at the patch closely yet, but from a buildfarm POV I\nthink the only thing that needs to be done is to inhibit the buildfarm\nclient module if the TAP tests are present. The buildfarm code that runs\nTAP tests should automatically detect and run the new test.\n\nI've just counted and there are 116 animals reporting check-pg_upgrade,\nso we'd better put that out pronto. 
It's a little early but I'll try to\npush out a release containing code for it on Monday or Tuesday (it's a\none line addition).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 2 Oct 2021 16:58:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I haven't looked at the patch closely yet, but from a buildfarm POV I\n> think the only thing that needs to be done is to inhibit the buildfarm\n> client module if the TAP tests are present. The buildfarm code that runs\n> TAP tests should automatically detect and run the new test.\n\n> I've just counted and there are 116 animals reporting check-pg_upgrade,\n> so we'd better put that out pronto. It's a little early but I'll try to\n> push out a release containing code for it on Monday or Tuesday (it's a\n> one line addition).\n\nIIUC, the only problem for a non-updated animal would be that it'd\nrun the test twice? Or would it actually fail? If the latter,\nwe'd need to sit on the patch rather longer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Oct 2021 17:03:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 10/2/21 5:03 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I haven't looked at the patch closely yet, but from a buildfarm POV I\n>> think the only thing that needs to be done is to inhibit the buildfarm\n>> client module if the TAP tests are present. The buildfarm code that runs\n>> TAP tests should automatically detect and run the new test.\n>> I've just counted and there are 116 animals reporting check-pg_upgrade,\n>> so we'd better put that out pronto. 
It's a little early but I'll try to\n>> push out a release containing code for it on Monday or Tuesday (it's a\n>> one line addition).\n> IIUC, the only problem for a non-updated animal would be that it'd\n> run the test twice? Or would it actually fail? If the latter,\n> we'd need to sit on the patch rather longer.\n>\n> \t\t\t\n\n\nThe patch removes test.sh, so yes it would break.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 2 Oct 2021 23:32:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 10/2/21 5:03 PM, Tom Lane wrote:\n>> IIUC, the only problem for a non-updated animal would be that it'd\n>> run the test twice? Or would it actually fail? If the latter,\n>> we'd need to sit on the patch rather longer.\n\n> The patch removes test.sh, so yes it would break.\n\nMaybe we could leave test.sh in place for awhile? I'd rather\nnot cause a flag day for buildfarm owners. (Also, how do we\nsee this working in the back branches?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Oct 2021 23:34:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, Oct 02, 2021 at 11:34:38PM -0400, Tom Lane wrote:\n> Maybe we could leave test.sh in place for awhile? I'd rather\n> not cause a flag day for buildfarm owners. (Also, how do we\n> see this working in the back branches?)\n\nI would be fine with test.sh staying around for now.\n\nIf we do that, though, I think that we had better remove the support\nfor upgrades across different major versions in test.sh, and keep this\ncapability in the new script. 
I am not sure that a lot of people use\nthat to begin with, but it would be weird to support that with a\ndifferent configuration layer for both at the same time (test.sh uses\na combination of bin/ and lib/ paths, while TAP uses just installation\npath to accomodate with what PostgresNode.pm is able to do). The\npatch of this thread also adds support for the load of an old dump\ninstead of an installcheck run of the old instance, which is something\nthe buildfarm could use.\n\nI also looked two days ago at a proposal to move all the\npg_upgrade-specific SQLs into a new, separate, file that makes use of\npsql's \\if to do the job encoded now in test.sh. I think that it\nwould be strange to duplicate this logic in a the pg_upgrade TAP test\nand test.sh if we finish by keeping both around for now. So that's a\nsecond item we had better deal with first, in my opinion:\nhttps://www.postgresql.org/message-id/YVa/se5gxr1PsXDy@paquier.xyz\n\nThoughts?\n--\nMichael", "msg_date": "Sun, 3 Oct 2021 16:03:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 10/2/21 11:34 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 10/2/21 5:03 PM, Tom Lane wrote:\n>>> IIUC, the only problem for a non-updated animal would be that it'd\n>>> run the test twice? Or would it actually fail? If the latter,\n>>> we'd need to sit on the patch rather longer.\n>> The patch removes test.sh, so yes it would break.\n> Maybe we could leave test.sh in place for awhile? I'd rather\n> not cause a flag day for buildfarm owners. (Also, how do we\n> see this working in the back branches?)\n>\n> \t\t\t\n\n\nActually, I was wrong. 
The module just does \"make check\" for non-MSVC.\nFor MSVC it calls vcregress.pl, which the patch doesn't touch (it\nshould, I think).\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Oct 2021 08:22:57 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On 03.10.21 09:03, Michael Paquier wrote:\n> On Sat, Oct 02, 2021 at 11:34:38PM -0400, Tom Lane wrote:\n>> Maybe we could leave test.sh in place for awhile? I'd rather\n>> not cause a flag day for buildfarm owners. (Also, how do we\n>> see this working in the back branches?)\n> \n> I would be fine with test.sh staying around for now.\n\ntest.sh could be changed to invoke the TAP test.\n\n\n", "msg_date": "Sun, 10 Oct 2021 16:07:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, Oct 10, 2021 at 04:07:43PM +0200, Peter Eisentraut wrote:\n> On 03.10.21 09:03, Michael Paquier wrote:\n>> On Sat, Oct 02, 2021 at 11:34:38PM -0400, Tom Lane wrote:\n>>> Maybe we could leave test.sh in place for awhile? I'd rather\n>>> not cause a flag day for buildfarm owners. 
(Also, how do we\n>>> see this working in the back branches?)\n>> \n>> I would be fine with test.sh staying around for now.\n> \n> test.sh could be changed to invoke the TAP test.\n\nThat would remove the possibility to run the tests of pg_upgrade with\n--enable-tap-tests, which is the point I think Tom was making, because\nTestUpgrade.pm in the buildfarm code just uses \"make check\" as of the\nfollowing:\n$cmd = \"cd $self->{pgsql}/src/bin/pg_upgrade && $make $instflags check\";\n--\nMichael", "msg_date": "Mon, 11 Oct 2021 09:40:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 10/10/21 10:07 AM, Peter Eisentraut wrote:\n> On 03.10.21 09:03, Michael Paquier wrote:\n>> On Sat, Oct 02, 2021 at 11:34:38PM -0400, Tom Lane wrote:\n>>> Maybe we could leave test.sh in place for awhile?  I'd rather\n>>> not cause a flag day for buildfarm owners.  (Also, how do we\n>>> see this working in the back branches?)\n>>\n>> I would be fine with test.sh staying around for now.\n>\n> test.sh could be changed to invoke the TAP test.\n\n\nKeeping test.sh is not necessary - I mis-remembered what the test module\ndoes.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 09:04:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, Oct 03, 2021 at 08:22:57AM -0400, Andrew Dunstan wrote:\n> Actually, I was wrong.  The module just does \"make check\" for non-MSVC.\n> For MSVC it calls vcregress.pl, which the patch doesn't touch (it\n> should, I think).\n\nYes, it should.  And I'd like to do things so as we replace all the\ninternals of upgradecheck() by a call to tap_check(). 
The patch does\nnot work yet properly with MSVC, and there were some problems in\ngetting the invocation of pg_regress right as far as I recall. That's\nwhy I have left this part for now. I don't see why we could not do\nthe MSVC part as an independent step though, getting rid of test.sh is\nappealing enough in itself.\n--\nMichael", "msg_date": "Tue, 12 Oct 2021 13:45:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Mon, Oct 11, 2021 at 09:04:47AM -0400, Andrew Dunstan wrote:\n> Keeping test.sh is not necessary - I mis-remembered what the test module\n> does.\n\nSo.. Are people fine to remove entirely test.sh at the end, requiring\nthe tests of pg_upgrade to have TAP installed? I'd rather raise the\nbar here, as it would keep the code simpler in the tree in the long\nterm. Or am I misunderstanding something?\n--\nMichael", "msg_date": "Tue, 12 Oct 2021 13:48:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nOn 2021-10-02 23:34:38 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 10/2/21 5:03 PM, Tom Lane wrote:\n> >> IIUC, the only problem for a non-updated animal would be that it'd\n> >> run the test twice? Or would it actually fail? If the latter,\n> >> we'd need to sit on the patch rather longer.\n> \n> > The patch removes test.sh, so yes it would break.\n> \n> Maybe we could leave test.sh in place for awhile? I'd rather\n> not cause a flag day for buildfarm owners. 
(Also, how do we\n> see this working in the back branches?)\n\nSeems like we might get away with making make -C contrib/pg_upgrade check and\nvcregress.pl upgradecheck do nothing?\n\nFor the common case of not testing cross-version stuff, pg_upgrade's tests\nwould just be invoked via run_build.pl:run_bin_tests(). And TestUpgrade.pm\nshould be fine with a test doing nothing.\n\nWe'd not loose coverage with non-updated BF animals unless they have tap tests\ndisabled. Just the cross-version test would need timely work by buildfarm\noperators - but I think Andrew could deal with that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Dec 2021 18:08:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Mon, Dec 13, 2021 at 06:08:24PM -0800, Andres Freund wrote:\n> Seems like we might get away with making make -C contrib/pg_upgrade check and\n> vcregress.pl upgradecheck do nothing?\n\nYou mean #contrib/#src/bin/# here, right? I don't think that we have\nany need to have \"make -C\" do nothing. For vcregress.pl, we should\nIMO just remove upgradecheck.\n\n> For the common case of not testing cross-version stuff, pg_upgrade's tests\n> would just be invoked via run_build.pl:run_bin_tests(). And TestUpgrade.pm\n> should be fine with a test doing nothing.\n\nPerhaps. I am not sure what's the best picture here, TBH. 
One\ndifference between the core stuff and the buildfarm is that in the case\nof the buildfarm, we upgrade from a version that has not only the main\nregression database, but everything from the contrib/ modules.\n\nSpeaking of which, I am going to send a patch for the buildfarm to be\nable to use the SQL file from 0df9641, so as committers gain a bit\nmore control on the cross-version upgrade tests run by the buildfarm,\nusing the in-core code a maximum.\n--\nMichael", "msg_date": "Tue, 14 Dec 2021 14:31:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On 2021-12-14 14:31:24 +0900, Michael Paquier wrote:\n> On Mon, Dec 13, 2021 at 06:08:24PM -0800, Andres Freund wrote:\n> > Seems like we might get away with making make -C contrib/pg_upgrade check and\n> > vcregress.pl upgradecheck do nothing?\n> \n> You mean #contrib/#src/bin/# here, right?  I don't think that we have\n> any need to have \"make -C\" do nothing. 
For vcregress.pl, we should\n> IMO just remove upgradecheck.\n\nTom's point was that the buildfarm scripts do\n\tif ($self->{bfconf}->{using_msvc})\n\t\t@checklog = run_log(\"perl vcregress.pl upgradecheck\");\n\telse\n\t\t\t \"cd $self->{pgsql}/src/bin/pg_upgrade && $make $instflags check\";\n\nif we don't want to break every buildfarm member that has TestUpgrade enabled\nthe moment this is committed, we need to have a backward compat path.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Dec 2021 22:14:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Mon, Dec 13, 2021 at 10:14:49PM -0800, Andres Freund wrote:\n> Tom's point was that the buildfarm scripts do\n> \tif ($self->{bfconf}->{using_msvc})\n> \t\t@checklog = run_log(\"perl vcregress.pl upgradecheck\");\n> \telse\n> \t\t\t \"cd $self->{pgsql}/src/bin/pg_upgrade && $make $instflags check\";\n> \n> if we don't want to break every buildfarm member that has TestUpgrade enabled\n> the moment this is committed, we need to have a backward compat path.\n\nMissed that, thanks! I'll think about all that a bit more before\nsending a long-overdue rebased version.\n--\nMichael", "msg_date": "Wed, 15 Dec 2021 10:47:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, Dec 15, 2021 at 10:47:24AM +0900, Michael Paquier wrote:\n> Missed that, thanks! I'll think about all that a bit more before\n> sending a long-overdue rebased version.\n\nOkay, here is finally a rebase of this patch, where I have fixed a\ncouple of existing issues, and I have extended the patch to the point\nwhere the support range is what I expect should be. In short:\n- Added support for MSVC for the TAP test. 
I have considered making\nupgradecheck silent, but after thinking about it I have just filtered\npg_upgrade from bincheck, and simplified upgradecheck to launch the\nnew test. It is simple to switch from one approach to another. This\nshaves some code in vcregress.pl.\n- Fixed a set of issues with various chdir commands in the previous\npatches. The command of pg_regress has been tweaked so as all results\nare part of src/bin/pg_upgrade/. Any logs generated by pg_upgrade\nstay in this location, same way as HEAD.\n- Adapted to the new modules of src/test/perl/.\n- Support for cross-upgrades now uses upgrade_adapt.sql (I have sent a\npatch for the buildfarm client about that yesterday actually), same\nway as test.sh on HEAD. Like HEAD, attempting to use the\ncross-version HEAD causes diffs between the old and the new dumps.\nBut there is nothing new here. This could be improved more but the\nattached does already a lot. \n- Like the previous versions, this supports two modes when setting up\nthe to-be-upgraded cluster: setup things from an old dump or use\npg_regress. 
The buildfarm does the former for upgrades down to 9.2.\nThe core code does the latter.\n\nI may have missed one thing or two, but I think that's pretty much\nwhat we should be looking for to do the switch to TAP in terms of\ncoverage.\n--\nMichael", "msg_date": "Thu, 16 Dec 2021 11:51:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, Dec 16, 2021 at 11:51:55AM +0900, Michael Paquier wrote:\n> I may have missed one thing or two, but I think that's pretty much\n> what we should be looking for to do the switch to TAP in terms of\n> coverage.\n\nRebased patch to cool down the CF bot, as per the addition of\n--no-sync to pg_upgrade.\n--\nMichael", "msg_date": "Wed, 5 Jan 2022 17:12:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, Jan 05, 2022 at 05:12:41PM +0900, Michael Paquier wrote:\n> Rebased patch to cool down the CF bot, as per the addition of\n> --no-sync to pg_upgrade.\n\nThe CF bot is unhappy, so here is a rebase, with some typo fixes\nreported by Justin offlist.\n--\nMichael", "msg_date": "Tue, 11 Jan 2022 16:14:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 11, 2022 at 04:14:25PM +0900, Michael Paquier wrote:\n> The CF bot is unhappy, so here is a rebase, with some typo fixes\n> reported by Justin offlist.\n\nThe cfbot still complains about this patch on Windows:\nhttps://cirrus-ci.com/task/6411385683836928\nhttps://api.cirrus-ci.com/v1/artifact/task/6411385683836928/tap/src/bin/pg_upgrade/tmp_check/log/regress_log_002_pg_upgrade\n\n# Running: pg_upgrade 
--no-sync -d c:/cirrus/src/bin/pg_upgrade/tmp_check/t_002_pg_upgrade_old_node_data/pgdata -D c:/cirrus/src/bin/pg_upgrade/tmp_check/t_002_pg_upgrade_new_node_data/pgdata -b C:/cirrus/tmp_install/bin -B C:/cirrus/tmp_install/bin -p 56296 -P 56297\n\nlibpq environment variable PGHOST has a non-local server value: C:/Users/ContainerAdministrator/AppData/Local/Temp/FhBIlsw6SV\nFailure, exiting\nnot ok 3 - run of pg_upgrade for new instance\n\n# Failed test 'run of pg_upgrade for new instance'\n# at t/002_pg_upgrade.pl line 255.\n### Starting node \"new_node\"\n# Running: pg_ctl -w -D c:/cirrus/src/bin/pg_upgrade/tmp_check/t_002_pg_upgrade_new_node_data/pgdata -l c:/cirrus/src/bin/pg_upgrade/tmp_check/log/002_pg_upgrade_new_node.log -o --cluster-name=new_node start\nwaiting for server to start.... done\nserver started\n# Postmaster PID for node \"new_node\" is 5748\n# Running: pg_dumpall --no-sync -d port=56297 host=C:/Users/ContainerAdministrator/AppData/Local/Temp/FhBIlsw6SV dbname='postgres' -f C:\\cirrus\\src\\bin\\pg_upgrade\\tmp_check\\tmp_test_X4aZ/dump2.sql\n# Running: diff -q C:\\cirrus\\src\\bin\\pg_upgrade\\tmp_check\\tmp_test_X4aZ/dump1.sql C:\\cirrus\\src\\bin\\pg_upgrade\\tmp_check\\tmp_test_X4aZ/dump2.sql\nFiles C:\\cirrus\\src\\bin\\pg_upgrade\\tmp_check\\tmp_test_X4aZ/dump1.sql and C:\\cirrus\\src\\bin\\pg_upgrade\\tmp_check\\tmp_test_X4aZ/dump2.sql differ\nnot ok 4 - old and new dump match after pg_upgrade\n\n# Failed test 'old and new dump match after pg_upgrade'\n\n\n", "msg_date": "Sat, 15 Jan 2022 13:52:39 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, Jan 15, 2022 at 01:52:39PM +0800, Julien Rouhaud wrote:\n> libpq environment variable PGHOST has a non-local server value: C:/Users/ContainerAdministrator/AppData/Local/Temp/FhBIlsw6SV\n> Failure, exiting\n> not ok 3 - run of pg_upgrade 
for new instance\n\nThere are two things here, as far as I understand:\n1) This is a valid Windows path. So shouldn't we fix pg_upgrade's\nserver.c to be a bit more compliant with Windows paths? The code\naccepts only paths beginning with '/' as local paths, so this breaks.\n2) It looks safer in the long run to disable completely PGHOST and\nPGHOSTADDR when running the pg_upgrade command in the test, and we'd\nbetter not use Cluster::command_ok() or we would fall down to each\nnode's local environment. This could be done in the tests as of the\nattached, I guess, and this would bypass the problem coming from 1).\n\nThe patch needed a refresh as --make-testtablespace-dir has been\nremoved as of d6d317d.\n--\nMichael", "msg_date": "Tue, 18 Jan 2022 11:20:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nOn 2022-01-18 11:20:16 +0900, Michael Paquier wrote:\n> On Sat, Jan 15, 2022 at 01:52:39PM +0800, Julien Rouhaud wrote:\n> > libpq environment variable PGHOST has a non-local server value: C:/Users/ContainerAdministrator/AppData/Local/Temp/FhBIlsw6SV\n> > Failure, exiting\n> > not ok 3 - run of pg_upgrade for new instance\n> \n> There are two things here, as far as I understand:\n> 1) This is a valid Windows path. So shouldn't we fix pg_upgrade's\n> server.c to be a bit more compliant with Windows paths? The code\n> accepts only paths beginning with '/' as local paths, so this breaks.\n\nIt also doesn't handle @ correctly. Makes sense to fix. Should probably use\nthe same logic that libpq, psql, ... use?\n\n\t\t\tif (is_unixsock_path(ch->host))\n\t\t\t\tch->type = CHT_UNIX_SOCKET;\n\nthat'd basically be the same amount of code. 
And easier to understand.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 20:12:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nOn 2022-01-18 11:20:16 +0900, Michael Paquier wrote:\n> +# required for 002_pg_upgrade.pl\n> +REGRESS_SHLIB=$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)\n> +export REGRESS_SHLIB\n\nIt seems weird to propagate this into multiple places. Why don't we define\nthat centrally?\n\nAlthough it's weird for this to use REGRESS_SHLIB, given it's just doing\ndirname() on it. 027_stream_regress.pl has the \"defense\" of not wanting to\nduplicate the variable with 017_shm.pl...\n\nNot that I understand why 017_shm.pl and all the regression test source\nfiles even need $(DLSUFFIX) - expand_dynamic_library_name() should take care of\nit?\n\n\n> +REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/bin/pg_upgrade\n> +export REGRESS_OUTPUTDIR\n\nI don't really understand why 027_stream_regress.pl is using this (and thus\nnot why it's used here). The tap tests create files all the time, why is this\ndifferent?\n\nIt's not like make / msvc put the data in different places:\nsrc/test/recovery/Makefile:REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery/tmp_check\nsrc/tools/msvc/vcregress.pl: $ENV{REGRESS_OUTPUTDIR} = \"$topdir/src/test/recovery/tmp_check\";\n\n> +\n> +# From now on, the test of pg_upgrade consists in setting up an instance.\n\nWhat does \"from now on\" mean?\n\n\n\n> +# Default is the location of this source code for both nodes used with\n> +# the upgrade.\n\nCan't quite parse.\n\n\n\n> +# Initialize a new node for the upgrade. 
This is done early so as it is\n> +# possible to know with which node's PATH the initial dump needs to be\n> +# taken.\n> +my $newnode = PostgreSQL::Test::Cluster->new('new_node');\n> +$newnode->init(extra => [ '--locale=C', '--encoding=LATIN1' ]);\n> +my $newbindir = $newnode->config_data('--bindir');\n> +my $oldbindir = $oldnode->config_data('--bindir');\n\nWhy C/LATIN?\n\nRight now pg_upgrade test.sh uses --wal-segsize 1, and that has helped\nidentify several bugs. So I'd rather not give it up, even if it's a bit weird.\n\n\n\n> +\tmy @regress_command = [\n> +\t\t$ENV{PG_REGRESS},\n> +\t\t'--schedule', \"$oldsrc/src/test/regress/parallel_schedule\",\n> +\t\t'--bindir', $oldnode->config_data('--bindir'),\n> +\t\t'--dlpath', $dlpath,\n> +\t\t'--port', $oldnode->port,\n> +\t\t'--outputdir', $outputdir,\n> +\t\t'--inputdir', $inputdir,\n> +\t\t'--use-existing'\n> +\t];\n\nI think this should use --host (c.f. 7340aceed72). Or is it intending to use\nthe host via env? If so, why is the port specified?\n\n\n> +\t@regress_command = (@regress_command, @extra_opts);\n> +\n> +\t$oldnode->command_ok(@regress_command,\n> +\t\t'regression test run on old instance');\n\nI also think this should take EXTRA_REGRESS_OPTS into account - test.sh did.\n\n\n> +# After dumping, update references to the old source tree's regress.so\n> +# to point to the new tree.\n> +if (defined($ENV{oldinstall}))\n> +{\n\n\nKinda asking for its own function...\n\n> +\n> +# Update the instance.\n> +$oldnode->stop;\n> +\n> +# Time for the real run.\n\nAs opposed to the unreal one?\n\n\n> +# pg_upgrade would complain if PGHOST, so as there are no attempts to\n> +# connect to a different server than the upgraded ones.\n\n\"complain if PGHOST\"?\n\n\n> +# Take a second dump on the upgraded instance.\n\nSounds like you're taking to post-upgrade pg_dumps.\n\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 20:50:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", 
"msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, Feb 13, 2022 at 5:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-18 11:20:16 +0900, Michael Paquier wrote:\n> > +REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/bin/pg_upgrade\n> > +export REGRESS_OUTPUTDIR\n>\n> I don't really understand why 027_stream_regress.pl is using this (and thus\n> not why it's used here). The tap tests create files all the time, why is this\n> different?\n>\n> It's not like make / msvc put the data in different places:\n> src/test/recovery/Makefile:REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery/tmp_check\n> src/tools/msvc/vcregress.pl: $ENV{REGRESS_OUTPUTDIR} = \"$topdir/src/test/recovery/tmp_check\";\n\nAs I wrote in https://www.postgresql.org/message-id/CA%2BhUKGK-%2Bmg6RWiDu0JudF6jWeL5%2BgPmi8EKUm1eAzmdbwiE_A%40mail.gmail.com,\n\n>> > +# required for 027_stream_regress.pl\n>> > +REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery\n>> > +export REGRESS_OUTPUTDIR\n>>\n>> Why do we need this?\n>\n> The Make macro \"prove_check\" (src/Makefile.global.in) always changes\n> to the source directory to run TAP tests. Without an explicit\n> directive to control where regression test output goes, it got\n> splattered all over the source tree in VPATH builds. I didn't see an\n> existing way to adjust that (did I miss something?). Hence desire to\n> pass down a path in the build tree. Better ideas welcome.\n\nI thought it was a goal that VPATH builds shouldn't pollute the source\ntree, but the Make macro prove_check is explicitly doing so by\ndefault. 
Perhaps *that* should be fixed?\n\n\n", "msg_date": "Sun, 13 Feb 2022 18:07:30 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I thought it was a goal that VPATH builds shouldn't pollute the source\n> tree, but the Make macro prove_check is explicitly doing so by\n> default. Perhaps *that* should be fixed?\n\nIndeed. That seems broken by definition.\n\nMore generally, I thought we'd established a convention that\nall files made by TAP tests should be put inside the tmp_check\ndirectory, to simplify cleanup and .gitignore rules. But in\na VPATH build, tmp_check ought to live in the build tree.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Feb 2022 00:13:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nOn 2022-02-13 18:07:30 +1300, Thomas Munro wrote:\n> On Sun, Feb 13, 2022 at 5:50 PM Andres Freund <andres@anarazel.de> wrote:\n> >> > +# required for 027_stream_regress.pl\n> >> > +REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery\n> >> > +export REGRESS_OUTPUTDIR\n> >>\n> >> Why do we need this?\n> >\n> > The Make macro \"prove_check\" (src/Makefile.global.in) always changes\n> > to the source directory to run TAP tests. Without an explicit\n> > directive to control where regression test output goes, it got\n> > splattered all over the source tree in VPATH builds. I didn't see an\n> > existing way to adjust that (did I miss something?). Hence desire to\n> > pass down a path in the build tree. Better ideas welcome.\n> \n> I thought it was a goal that VPATH builds shouldn't pollute the source\n> tree, but the Make macro prove_check is explicitly doing so by\n> default. 
Perhaps *that* should be fixed?\n\nSure, prove changes into the source dir. But we don't put test data / output\ninto the source? That's what TESTDIR is used for:\n\n\t# Determine output directories, and create them. The base path is the\n\t# TESTDIR environment variable, which is normally set by the invoking\n\t# Makefile.\n\t$tmp_check = $ENV{TESTDIR} ? \"$ENV{TESTDIR}/tmp_check\" : \"tmp_check\";\n\t$log_path = \"$tmp_check/log\";\n\nAfaics all the \"regress test inside tap test\" cases would need to do is to pass\n--outputdir=${PostgreSQL::Test::Utils::tmp_check} and you'd get exactly the same path as\nREGRESS_OUTPUTDIR currently provides.\n\nI only use vpath builds, and I don't see any tap test data / log in the source\ntree....\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 21:29:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, Feb 13, 2022 at 6:29 PM Andres Freund <andres@anarazel.de> wrote:\n> Afaics all the \"regress test inside tap test\" cases would need to do is to pass\n> --outputdir=${PostgreSQL::Test::Utils::tmp_check} and you'd get exactly the same path as\n> REGRESS_OUTPUTDIR currently provides.\n\nAhh, right. I assume it still needs perl2host() treatment for MSYS2\nsystems, because jacana's log shows TESTDIR is set to a Unixoid path\nthat I assume pg_regress's runtime can't use. 
That leads to the\nattached.", "msg_date": "Mon, 14 Feb 2022 11:23:18 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On 2022-02-14 11:23:18 +1300, Thomas Munro wrote:\n> On Sun, Feb 13, 2022 at 6:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > Afaics all the \"regress test inside tap test\" cases would need to do is to pass\n> > --outputdir=${PostgreSQL::Test::Utils::tmp_check} and you'd get exactly the same path as\n> > REGRESS_OUTPUTDIR currently provides.\n> \n> Ahh, right. I assume it still needs perl2host() treatment for MSYS2\n> systems, because jacana's log shows TESTDIR is set to a Unixoid path\n> that I assume pg_regress's runtime can't use. That leads to the\n> attached.\n\nLooks sane to me.\n\n\n", "msg_date": "Sun, 13 Feb 2022 14:55:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, Feb 13, 2022 at 02:55:26PM -0800, Andres Freund wrote:\n> On 2022-02-14 11:23:18 +1300, Thomas Munro wrote:\n>> Ahh, right. I assume it still needs perl2host() treatment for MSYS2\n>> systems, because jacana's log shows TESTDIR is set to a Unixoid path\n>> that I assume pg_regress's runtime can't use. That leads to the\n>> attached.\n> \n> Looks sane to me.\n\nThis looks like a nice cleanup, indeed. Nice catch.\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 15:40:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, Feb 12, 2022 at 08:12:42PM -0800, Andres Freund wrote:\n> It also doesn't handle @ correctly. Makes sense to fix. Should probably use\n> the same logic that libpq, psql, ... 
use?\n> \n> \t\t\tif (is_unixsock_path(ch->host))\n> \t\t\t\tch->type = CHT_UNIX_SOCKET;\n> \n> that'd basically be the same amount of code. And easier to understand.\n\nSo, I am catching up with some parts of this thread, and I have\nmanaged to miss is_unixsock_path(). Except if I am missing something\n(now it is close to the end of the day here), a minimal change would\nbe something like that as we'd still want to allow the use of\nlocalhost and others:\n if (value && strlen(value) > 0 &&\n /* check for 'local' host values */\n (strcmp(value, \"localhost\") != 0 && strcmp(value, \"127.0.0.1\") != 0 &&\n- strcmp(value, \"::1\") != 0 && value[0] != '/'))\n+ strcmp(value, \"::1\") != 0 && !is_unixsock_path(value)))\n\nOr perhaps we should restrict more the use of localhost values for\nnon-WIN32? Opinions?\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 17:01:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, Feb 12, 2022 at 08:50:41PM -0800, Andres Freund wrote:\n> On 2022-01-18 11:20:16 +0900, Michael Paquier wrote:\n>> +# required for 002_pg_upgrade.pl\n>> +REGRESS_SHLIB=$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)\n>> +export REGRESS_SHLIB\n> \n> It seems weird to propagate this into multiple places. Why don't we define\n> that centrally?\n> \n> Although it's weird for this to use REGRESS_SHLIB, given it's just doing\n> dirname() on it. 027_stream_regress.pl has the \"defense\" of not wanting to\n> duplicate the variable with 017_shm.pl...\n> \n> Not that I understand why 017_shm.pl and all the regression test source\n> fileseven need $(DLSUFFIX) - expand_dynamic_library_name() should take care of\n> it?\n\nI agree that we should be able to get rid of that in the long-term,\nbut this also feels like a separate issue to me and the patch is\nalready doing a lot. 
I am wondering about the interactions of\ninstallcheck with abs_top_builddir, though. Should it be addressed\nfirst? It does not feel like a mandatory requirement for this\nthread, anyway.\n\n> It's not like make / msvc put the data in different places:\n> src/test/recovery/Makefile:REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery/tmp_check\n> src/tools/msvc/vcregress.pl: $ENV{REGRESS_OUTPUTDIR} = \"$topdir/src/test/recovery/tmp_check\";\n\nYeah, removed.\n\n>> +# From now on, the test of pg_upgrade consists in setting up an instance.\n> \n> What does \"from now on\" mean?\n\nIn this context, the next steps of the test. Removed.\n\n>> +# Default is the location of this source code for both nodes used with\n>> +# the upgrade.\n> \n> Can't quite parse.\n\nReworded, to something hopefully better.\n\n>> +# Initialize a new node for the upgrade. This is done early so as it is\n>> +# possible to know with which node's PATH the initial dump needs to be\n>> +# taken.\n>> +my $newnode = PostgreSQL::Test::Cluster->new('new_node');\n>> +$newnode->init(extra => [ '--locale=C', '--encoding=LATIN1' ]);\n>> +my $newbindir = $newnode->config_data('--bindir');\n>> +my $oldbindir = $oldnode->config_data('--bindir');\n> \n> Why C/LATIN?\n\nWell, these are bits from the patch that I have played with\nextensively, and it took me some time to remember why this was needed.\nThe reason why I introduced this option is that the patch created the\ndatabase \"regression\" using a createdb command that would feed from\ntemplate1 as pg_regress used --use-existing. And this combination\nrequired to enforce --locale=C to avoid two regression diffs in\nint8.sql and numeric.sql. 
It is possible to simplify things by\nremoving --use-existing and the database creation, so as pg_regress\nhandles the creation of the database \"regression\" with template0 to\navoid any problems related to locales.\n\nNow, if you do *not* do that, I have noticed that we run into problems\nwhen testing the TAP script with older versions, where pg_regress\nmay not create the \"regression\" database, hence requiring an\nextra createdb (perhaps that's better with --locale=C and\n--template=template0) with --use-existing present for the pg_regress\ncommand, command coming from the old branch.\n\nHmm. At the end of the day, I am wondering whether we should not give\nup entirely on the concept of running the regression tests on older\nbranches in the TAP script of a newer branch. pg_regress needs to\ncome from the old source tree, meaning that we would most likely need\nto maintain a set of compatibility tweaks that would most probably\nrot over the time, and the buildfarm only cares about the possibility\nto set up old instances by loading dumps rather than running\npg_regress. This would also make the switch to TAP much easier (no\nneed for the extra createdb or --locale AFAIK). 
So I'd rather not give it up, even if it's a bit weird.\n\n--allow-group-access was missing as well.\n\n>> +\tmy @regress_command = [\n>> +\t\t$ENV{PG_REGRESS},\n>> +\t\t'--schedule', \"$oldsrc/src/test/regress/parallel_schedule\",\n>> +\t\t'--bindir', $oldnode->config_data('--bindir'),\n>> +\t\t'--dlpath', $dlpath,\n>> +\t\t'--port', $oldnode->port,\n>> +\t\t'--outputdir', $outputdir,\n>> +\t\t'--inputdir', $inputdir,\n>> +\t\t'--use-existing'\n>> +\t];\n> \n> I think this should use --host (c.f. 7340aceed72). Or is it intending to use\n> the host via env? If so, why is the port specified?\n\nHm. It looks like you are right here, so added.\n\n>> +\t@regress_command = (@regress_command, @extra_opts);\n>> +\n>> +\t$oldnode->command_ok(@regress_command,\n>> +\t\t'regression test run on old instance');\n> \n> I also think this should take EXTRA_REGRESS_OPTS into account - test.sh did.\n\nThis is already taken into account, as of the @extra_opts bits.\n\n>> +# After dumping, update references to the old source tree's regress.so\n>> +# to point to the new tree.\n>> +if (defined($ENV{oldinstall}))\n>> +{\n> \n> Kinda asking for its own function...\n\nI am not sure this is a gain in readability just for this part, FWIW,\nand once you drop support for setting up an old instance with\npg_regress, that would not be needed.\n\n>> +# Update the instance.\n>> +$oldnode->stop;\n>> +\n>> +# Time for the real run.\n> \n> As opposed to the unreal one?\n\nRemoved that.\n\n>> +# pg_upgrade would complain if PGHOST, so as there are no attempts to\n>> +# connect to a different server than the upgraded ones.\n> \n> \"complain if PGHOST\"?\n\nThere is no need for this tweak once check_pghost_envvar() is fixed to\nbe able to understand Windows paths. 
This was not working under the\nCI on Windows anyway, but the check_pghost_envvar() fix does.\n\nA last thing that was missing from the patch, AFAIK, is to scan the\ncontents of pg_upgrade_output.d/log, if anything is left around after\na failure so as the buildfarm is able to report all the logs.\npg_upgrade's .gitignore has no need for a refresh, as well.\n\nI have split the patch set into two parts:\n- 0001 is a fix for check_pghost_envvar() with the addition of a call\nto is_unixsock_path() to make sure that Windows paths are handled.\nThis has proved to be enough to make the CI report green on Windows.\n- 0002 is the test, with all the fixes and adjustments mentioned\nupthread, including making sure that the tests can be run with older\nbranches, for now.\n--\nMichael", "msg_date": "Tue, 15 Feb 2022 13:02:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Tue, Feb 15, 2022 at 01:02:41PM +0900, Michael Paquier wrote:\n> Hmm. At the end of the day, I am wondering whether we should not give\n> up entirely on the concept of running the regression tests on older\n> branches in the TAP script of a newer branch. pg_regress needs to\n> come from the old source tree, meaning that we would most likely need\n> to maintain a set of compatibility tweaks that would most probably\n> rot over the time, and the buildfarm only cares about the possibility\n> to set up old instances by loading dumps rather than running\n> pg_regress. This would also make the switch to TAP much easier (no\n> need for the extra createdb or --locale AFAIK). 
So attempting to\n> maintain all that is going to be a PITA in the long term, and there is\n> nothing running that automatically anyway.\n> \n> There is also the extra requirement to adjust dump files, but that's\n> independent of setting up the old instance to upgrade, and I don't\n> really plan to tackle that as of this thread (note that the buildfarm\n> client has extra tweaks regarding that).\n> \n> Any thoughts about that?\n\nI have been looking at how much simplicity this brings, and I have to\nadmit that it is tempting to just support the loading of dumps when\nsetting up the old instance to upgrade from. We'd still need to do an\nextra effort in terms of cleaning up the diffs for the dump of the old\ninstance with older versions once/if this is plugged into the\nbuildfarm, but that could be addressed later depending on the versions\nthat need to be covered.\n--\nMichael", "msg_date": "Wed, 16 Feb 2022 13:58:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, Feb 16, 2022 at 01:58:10PM +0900, Michael Paquier wrote:\n> I have been looking at how much simplicity this brings, and I have to\n> admit that it is tempting to just support the loading of dumps when\n> setting up the old instance to upgrade from. We'd still need to do an\n> extra effort in terms of cleaning up the diffs for the dump of the old\n> instance with older versions once/if this is plugged into the\n> buildfarm, but that could be addressed later depending on the versions\n> that need to be covered.\n\nThe bug related to the detection of Windows and temporary paths for\npg_upgrade's server.c has been fixed as of dc57366, so attached is the\nremaining rebased piece as perl2host has been recently removed.\n\nDo others have an opinion about a backpatch of the bugfix? 
Nobody has\ncomplained about that since pg_upgrade exists, so I have just done the\nchange on HEAD.\n--\nMichael", "msg_date": "Wed, 2 Mar 2022 15:57:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On 2022-02-15 13:02:41 +0900, Michael Paquier wrote:\n> >> +\t@regress_command = (@regress_command, @extra_opts);\n> >> +\n> >> +\t$oldnode->command_ok(@regress_command,\n> >> +\t\t'regression test run on old instance');\n> > \n> > I also think this should take EXTRA_REGRESS_OPTS into account - test.sh did.\n> \n> This is already taken into account, as of the @extra_opts bits.\n\nBut in a bad way, because EXTRA_REGRESS_OPTS now always wins, even for stuff\nwe want to override. Note how test.sh explicitly specifies port, bindir etc\nafter the pre-existing EXTRA_REGRESS_OPTS.\n\n\n", "msg_date": "Wed, 2 Mar 2022 00:01:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nOn 2022-03-02 15:57:23 +0900, Michael Paquier wrote:\n> Do others have an opinion about a backpatch of the bugfix? Nobody has\n> complained about that since pg_upgrade exists, so I have just done the\n> change on HEAD.\n\nWFM.\n\n\n\n> +++ b/src/bin/pg_upgrade/t/001_basic.pl\n> @@ -0,0 +1,9 @@\n> +use strict;\n> +use warnings;\n> +\n> +use PostgreSQL::Test::Utils;\n> +use Test::More tests => 8;\n\nOutdated.\n\n> +program_help_ok('pg_upgrade');\n> +program_version_ok('pg_upgrade');\n> +program_options_handling_ok('pg_upgrade');\n\nUnrelated. But I kinda wish we'd do this in a saner manner than copying this\ntest into every binary. E.g. by ensuring that all tools installed in the temp\ninstall are tested or such.\n\n\n> +# The test of pg_upgrade consists in setting up an instance. 
This is the\n> +# source instance used for the upgrade. Then a new and fresh instance is\n> +# created, and is used as the target instance for the upgrade.\n\nThis seems a bit repetitive. Lots of \"instance\".\n\n> Before\n> +# running an upgrade, a logical dump of the old instance is taken, and a\n> +# second logical dump of the new instance is taken after the upgrade.\n> +# The upgrade test passes if there are no differences in these two dumps.\n> +\n> +# Testing upgrades with an older instance of PostgreSQL requires setting up\n> +# two environment variables, as of:\n> +# - \"olddump\", to point to a dump file that will be used to set\n> +# up the old instance to upgrade from, the dump being restored in the\n> +# old cluster.\n> +# - \"oldinstall\", to point to the installation path of the old\n> +# instance.\n> +if ( (defined($ENV{olddump}) && !defined($ENV{oldinstall}))\n> +\t|| (!defined($ENV{olddump}) && defined($ENV{oldinstall})))\n\nOdd indentation. Spaces between parens?\n\n\n> +$newnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);\n\nI'd copy the comments from test.sh wrt --wal-segsize,\n--allow-group-access.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 2 Mar 2022 00:07:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, Mar 02, 2022 at 12:01:17AM -0800, Andres Freund wrote:\n> But in a bad way, because EXTRA_REGRESS_OPTS now always wins, even for stuff\n> we want to override. Note how test.sh explicitly specifies port, bindir etc\n> after the pre-existing EXTRA_REGRESS_OPTS.\n\nAh, right. 
Will fix.\n--\nMichael", "msg_date": "Wed, 2 Mar 2022 17:11:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, Mar 02, 2022 at 12:07:29AM -0800, Andres Freund wrote:\n>> +++ b/src/bin/pg_upgrade/t/001_basic.pl\n>> @@ -0,0 +1,9 @@\n>> +use strict;\n>> +use warnings;\n>> +\n>> +use PostgreSQL::Test::Utils;\n>> +use Test::More tests => 8;\n> \n> Outdated.\n\nFixed.\n\n>> +program_help_ok('pg_upgrade');\n>> +program_version_ok('pg_upgrade');\n>> +program_options_handling_ok('pg_upgrade');\n> \n> Unrelated. But I kinda wish we'd do this in a saner manner than copying this\n> test into every binary. E.g. by ensuring that all tools installed in the temp\n> install are tested or such.\n\nPerhaps. I am sticking with the existing style for now.\n\n>> +# The test of pg_upgrade consists in setting up an instance. This is the\n>> +# source instance used for the upgrade. Then a new and fresh instance is\n>> +# created, and is used as the target instance for the upgrade.\n> \n> This seems a bit repetitive. Lots of \"instance\".\n\nIndeed. I have reworked the whole, rather than just those three\nsentences.\n\n>> +if ( (defined($ENV{olddump}) && !defined($ENV{oldinstall}))\n>> +\t|| (!defined($ENV{olddump}) && defined($ENV{oldinstall})))\n> \n> Odd indentation. 
Spaces between parens?\n\nWell, perltidy tells me that this is right.\n\n>> +$newnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);\n> \n> I'd copy the comments from test.sh wrt --wal-segsize,\n> --allow-group-access.\n\nDone.\n--\nMichael", "msg_date": "Thu, 3 Mar 2022 14:03:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "\nOn 3/3/22 00:03, Michael Paquier wrote:\n>>> +if ( (defined($ENV{olddump}) && !defined($ENV{oldinstall}))\n>>> +\t|| (!defined($ENV{olddump}) && defined($ENV{oldinstall})))\n>> Odd indentation. Spaces between parens?\n> Well, perltidy tells me that this is right.\n>\n>\n\nYeah, I haven't found a way to make it stop doing that :-(\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 3 Mar 2022 08:16:13 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, Mar 03, 2022 at 02:03:38PM +0900, Michael Paquier wrote:\n> Indeed. I have reworked the whole, rather than just those three\n> sentences.\n\nSo, any particular feelings about this patch? This has been around\nfor a couple of months/years now, so it could be a good time to do the\nswitch now rather than wait an extra year, or even the beginning of\nthe next release cycle. 
And the buildfarm is already able to handle\nthat in its code based on the last release, by skipping the upgrade\ncheck if it finds a pg_upgrade/t/ subdirectory.\n--\nMichael", "msg_date": "Thu, 31 Mar 2022 13:10:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> So, any particular feelings about this patch? This has been around\n> for a couple of months/years now, so it could be a good time to do the\n> switch now rather than wait an extra year, or even the beginning of\n> the next release cycle. And the buildfarm is already able to handle\n> that in its code based on the last release, by skipping the upgrade\n> check if it finds a pg_upgrade/t/ subdirectory.\n\nThere's still about a third of the buildfarm running older\nclient releases --- I count\n\n 2 REL_8\n 2 REL_10\n 13 REL_11\n 6 REL_12\n 16 REL_13.1\n 89 REL_14\n\nHow well does this patch work with pre-14 buildfarm clients?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 01:00:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "I wrote:\n> There's still about a third of the buildfarm running older\n> client releases --- I count\n\n> 2 REL_8\n> 2 REL_10\n> 13 REL_11\n> 6 REL_12\n> 16 REL_13.1\n> 89 REL_14\n\nWait a minute ... 
actually, what's most relevant here is\nthe population running TAP tests, which seems to be\n\n 2 REL_8\n 4 REL_11\n 1 REL_12\n 7 REL_13.1\n 53 REL_14\n\nSo there are still some people we'd have to nag if it doesn't\nwork pre-v14, but fewer than I thought --- specifically,\nthe owners of\n\nbutterflyfish\ncopperhead\neelpout\nelver\nhalibut\nkittiwake\nmantid\nmarabou\nmassasauga\nmyna\nsnakefly\nsnapper\nspurfowl\ntadarida\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 01:25:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On 2022-03-31 01:00:14 -0400, Tom Lane wrote:\n> How well does this patch work with pre-14 buildfarm clients?\n\nLooks to me like it'll just run the test twice, once via TestUpgrade, once via\ntaptest. It's possible that there could be trouble somehow due to duplicated\nlog files or something?\n\n\n", "msg_date": "Wed, 30 Mar 2022 22:36:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, Mar 30, 2022 at 10:36:16PM -0700, Andres Freund wrote:\n> On 2022-03-31 01:00:14 -0400, Tom Lane wrote:\n> > How well does this patch work with pre-14 buildfarm clients?\n> \n> Looks to me like it'll just run the test twice, once via TestUpgrade, once via\n> taptest. It's possible that there could be trouble somehow due to duplicated\n> log files or something?\n\nHmm. TestUpgrade.pm also uses tmp_check/, and the TAP tests would\nremove this path before running. Still, all the contents of the logs\nwould be printed out before moving to the next tests at the end of\ncheck-pg_upgrade. It does not seem like this double run is going to\nbe an issue on this side. 
\n--\nMichael", "msg_date": "Thu, 31 Mar 2022 16:56:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Mar 30, 2022 at 10:36:16PM -0700, Andres Freund wrote:\n>> On 2022-03-31 01:00:14 -0400, Tom Lane wrote:\n>>> How well does this patch work with pre-14 buildfarm clients?\n\n>> Looks to me like it'll just run the test twice, once via TestUpgrade, once via\n>> taptest. It's possible that there could be trouble somehow due to duplicated\n>> log files or something?\n\n> Hmm. TestUpgrade.pm also uses tmp_check/, and the TAP tests would\n> remove this path before running. Still, all the contents of the logs\n> would be printed out before moving to the next tests at the end of\n> check-pg_upgrade. It does not seem like this double run is going to\n> be an issue on this side. \n\nWell, let's go ahead with it and see what happens. If it's too\nmuch of a mess we can always revert.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:49:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, Mar 31, 2022 at 09:49:50AM -0400, Tom Lane wrote:\n> Well, let's go ahead with it and see what happens. If it's too\n> much of a mess we can always revert.\n\nOkay, done after an extra round of self-review. I have finished by\ntweaking a couple of comments, and adjusted further TESTING to explain\nwhat needs to be done to have a dump compatible with the test. 
Let's\nnow see what goes wrong.\n--\nMichael", "msg_date": "Fri, 1 Apr 2022 10:16:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Apr 01, 2022 at 10:16:48AM +0900, Michael Paquier wrote:\n> On Thu, Mar 31, 2022 at 09:49:50AM -0400, Tom Lane wrote:\n> > Well, let's go ahead with it and see what happens. If it's too\n> > much of a mess we can always revert.\n> \n> Okay, done after an extra round of self-review. I have finished by\n> tweaking a couple of comments, and adjusted further TESTING to explain\n> what needs to be done to have a dump compatible with the test. Let's\n> now see what goes wrong.\n\nThe REL_14 buildfarm client did not grab logs from the first failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-04-01%2001%3A39%3A04\n\nThe failure looked like this:\n\n# Running: diff -q /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_lPFv/dump1.sql /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_lPFv/dump2.sql\n/usr/bin/diff: illegal option -- q\nusage: diff [-bitw] [-c | -e | -f | -h | -n | -u] file1 file2\n diff [-bitw] [-C number | -U number] file1 file2\n diff [-bitw] [-D string] file1 file2\n diff [-bitw] [-c | -e | -f | -h | -n | -u] [-l] [-r] [-s] [-S name] directory1 directory2\nnot ok 4 - old and new dump match after pg_upgrade\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:42:41 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, Mar 31, 2022 at 08:42:41PM -0700, Noah Misch wrote:\n> On Fri, Apr 01, 2022 at 10:16:48AM +0900, Michael Paquier wrote:\n> > On Thu, Mar 31, 2022 at 09:49:50AM -0400, Tom Lane wrote:\n> > > Well, let's go 
ahead with it and see what happens. If it's too\n> > > much of a mess we can always revert.\n> > \n> > Okay, done after an extra round of self-review. I have finished by\n> > tweaking a couple of comments, and adjusted further TESTING to explain\n> > what needs to be done to have a dump compatible with the test. Let's\n> > now see what goes wrong.\n> \n> The REL_14 buildfarm client did not grab logs from the first failure:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-04-01%2001%3A39%3A04\n> \n> The failure looked like this:\n> \n> # Running: diff -q /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_lPFv/dump1.sql /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_lPFv/dump2.sql\n> /usr/bin/diff: illegal option -- q\n> usage: diff [-bitw] [-c | -e | -f | -h | -n | -u] file1 file2\n> diff [-bitw] [-C number | -U number] file1 file2\n> diff [-bitw] [-D string] file1 file2\n> diff [-bitw] [-c | -e | -f | -h | -n | -u] [-l] [-r] [-s] [-S name] directory1 directory2\n> not ok 4 - old and new dump match after pg_upgrade\n\nIs diff -q defined somewhere ? I can't find it in postgres sources nor in\nsources for bf client.\n\nMaybe your bf member could use git diff --exit-code --quiet ?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 31 Mar 2022 22:51:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Apr 01, 2022 at 10:16:48AM +0900, Michael Paquier wrote:\n> Okay, done after an extra round of self-review. I have finished by\n> tweaking a couple of comments, and adjusted further TESTING to explain\n> what needs to be done to have a dump compatible with the test. Let's\n> now see what goes wrong.\n\nSo, the first reports are published, and the buildfarm is rather cool\non the matter. 
wrasse is the only buildfarm member that has reported\na failure, complaining that the dumps generated do not match. I am\nnot completely sure what's going on there, so I have applied an extra\npatch to get more information from the logs on failures, and switched\nthe test to use File::Compare::compare() to check if the dumps match.\nThis last part feels safer in the long run, anyway. There should be a\ndiff command as previous runs used test.sh, so perhaps this is an\nissue with its perl. The next report should tell more.\n--\nMichael", "msg_date": "Fri, 1 Apr 2022 12:54:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, Mar 31, 2022 at 08:42:41PM -0700, Noah Misch wrote:\n> The failure looked like this:\n> \n> # Running: diff -q /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_lPFv/dump1.sql /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/tmp_test_lPFv/dump2.sql\n> /usr/bin/diff: illegal option -- q\n> usage: diff [-bitw] [-c | -e | -f | -h | -n | -u] file1 file2\n> diff [-bitw] [-C number | -U number] file1 file2\n> diff [-bitw] [-D string] file1 file2\n> diff [-bitw] [-c | -e | -f | -h | -n | -u] [-l] [-r] [-s] [-S name] directory1 directory2\n> not ok 4 - old and new dump match after pg_upgrade\n\nAh, thanks for the information! So the problem was that the first\ncommit of the patch took the diff command from the MSVC scripts, and\nthere is no -q on Solaris 11.3. Using File::Compare should be enough\nto fix the problem, then. 
Hopefully.\n--\nMichael", "msg_date": "Fri, 1 Apr 2022 13:00:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:51:59PM -0500, Justin Pryzby wrote:\n> Is diff -q defined somewhere ? I can't find it in postgres sources nor in\n> sources for bf client.\n\n322becb has added such a call, at the end of 002_pg_upgrade.pl.\nvcregress.pl also has one before this commit.\n--\nMichael", "msg_date": "Fri, 1 Apr 2022 15:01:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Apr 01, 2022 at 03:01:38PM +0900, Michael Paquier wrote:\n> On Thu, Mar 31, 2022 at 10:51:59PM -0500, Justin Pryzby wrote:\n>> Is diff -q defined somewhere ? I can't find it in postgres sources nor in\n>> sources for bf client.\n> \n> 322becb has added such a call, at the end of 002_pg_upgrade.pl.\n> vcregress.pl also has one before this commit.\n\nThe Windows animals seem to be in good shape, except hamerkop that\ndies on \"vcregress upgradecheck\" when the TAP tests are disabled\ncausing the buildfarm client to stop. My idea to use upgradecheck\nleads to more code than just moving the test to bincheck so let's\nreuse the suggestion from Andres upthread and disable completely\nupgradecheck, keeping the target around only for compatibility. 
The\nattached does that, and the test of pg_upgrade would go through\nbincheck instead.\n\nIt is late here, I'll try to get that patched up tomorrow.\n--\nMichael", "msg_date": "Fri, 1 Apr 2022 20:53:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Apr 01, 2022 at 08:53:10PM +0900, Michael Paquier wrote:\n> On Fri, Apr 01, 2022 at 03:01:38PM +0900, Michael Paquier wrote:\n> > On Thu, Mar 31, 2022 at 10:51:59PM -0500, Justin Pryzby wrote:\n> >> Is diff -q defined somewhere ? I can't find it in postgres sources nor in\n> >> sources for bf client.\n> > \n> > 322becb has added such a call, at the end of 002_pg_upgrade.pl.\n> > vcregress.pl also has one before this commit.\n> \n> The Windows animals seem to be in good shape, except hamerkop that\n> dies on \"vcregress upgradecheck\" when the TAP tests are disabled\n> causing the buildfarm client to stop. My idea to use upgradecheck\n> leads to more code than just moving the test to bincheck so let's\n> reuse the suggestion from Andres upthread and disable completely\n> upgradecheck, keeping the target around only for compatibility. The\n> attached does that, and the test of pg_upgrade would go through\n> bincheck instead.\n\nIf you do that, should also remove upgradecheck from .cirrus.yaml, which\ncurrently runs the upgradecheck target.\n\nI suspect this'll cause windows CI a bit slower.\nhttps://cirrus-ci.com/task/4703731324289024\n\nAn alternative to your patch to have the buildfarm client avoid calling\nupgradecheck if tap tests are disabled. Under your patch, upgrade check is a\nNOP, so it should stop calling upgradecheck anyway. 
So maybe this is a better\noption ?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 1 Apr 2022 08:34:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Apr 01, 2022 at 08:34:34AM -0500, Justin Pryzby wrote:\n> If you do that, should also remove upgradecheck from .cirrus.yaml, which\n> currently runs the upgradecheck target.\n\nIndeed. It makes no sense to keep that. I have removed this part and\napplied the patch, after one extra run through the CI.\n\n> An alternative to your patch to have the buildfarm client avoid calling\n> upgradecheck if tap tests are disabled. Under your patch, upgrade check is a\n> NOP, so it should stop calling upgradecheck anyway. So maybe this is a better\n> option ?\n\nYeah, there is an extra issue with the buildfarm client here. The\nanimals that have TAP enabled are now running the tests of pg_upgrade\ntwice: once per the optional module TestUpgrade and once in\nrun_bin_tests()@run_build.pl. This is something that needs to be\nchanged in the client code itself, and maybe the best fix is to\ndisable TestUpgrade.pm when running with v15~ or a newer version. A\nfix with this approach would become much easier once REL_15_STABLE is\ncreated, though. I am pretty sure that it should also be possible to\nchange the list of optional modules depending on the branch running,\nbut I have not dug into that..\n--\nMichael", "msg_date": "Sat, 2 Apr 2022 12:10:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Apr 01, 2022 at 10:16:48AM +0900, Michael Paquier wrote:\n> On Thu, Mar 31, 2022 at 09:49:50AM -0400, Tom Lane wrote:\n> > Well, let's go ahead with it and see what happens. 
If it's too\n> > much of a mess we can always revert.\n> \n> Okay, done after an extra round of self-review.\n\ncommit 322becb wrote:\n> --- /dev/null\n> +++ b/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n\n> +# Generate a database with a name made of a range of ASCII characters.\n> +sub generate_db\n> +{\n> +\tmy ($node, $from_char, $to_char) = @_;\n> +\n> +\tmy $dbname = '';\n> +\tfor my $i ($from_char .. $to_char)\n> +\t{\n> +\t\tnext if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR\n> +\t\t$dbname = $dbname . sprintf('%c', $i);\n> +\t}\n> +\t$node->run_log(\n> +\t\t[ 'createdb', '--host', $node->host, '--port', $node->port, $dbname ]\n> +\t);\n\nNothing checks the command result, so the test file passes even if each of\nthese createdb calls fails. Other run_log() calls in this file have the same\nproblem. This particular one should be command_ok() or similar.\n\n--host and --port are redundant in a PostgreSQL::Test::Cluster::run_log call,\nbecause that call puts equivalent configuration in the environment. Other\ncalls in the file have the same redundant operands. (No other test file has\nredundant --host or --port.)\n\n> +\t# Grab any regression options that may be passed down by caller.\n> +\tmy $extra_opts_val = $ENV{EXTRA_REGRESS_OPT} || \"\";\n\nTypo: s/_OPT/_OPTS/\n\n> +\tmy @extra_opts = split(/\\s+/, $extra_opts_val);\n\nsrc/test/recovery/t/027_stream_regress.pl and the makefiles treat\nEXTRA_REGRESS_OPTS as a shell fragment. To be compatible, use the\nsrc/test/recovery/t/027_stream_regress.pl approach. Affected usage patterns\nare not very important, but since the tree has code for it, you may as well\nborrow that code. 
These examples witness the difference:\n\nEXTRA_REGRESS_OPTS='--nosuc\" h\"' MAKEFLAGS= make -C src/bin/pg_upgrade check PROVE_TESTS=t/002_pg_upgrade.pl\n# log has: /home/nm/src/pg/postgresql/src/bin/pg_upgrade/../../../src/test/regress/pg_regress: unrecognized option '--nosuc\"'\nEXTRA_REGRESS_OPTS='--nosuc\" h\"' MAKEFLAGS= make -C src/test/recovery check PROVE_TESTS=t/027_stream_regress.pl\n# log has: /home/nm/src/pg/postgresql/src/test/recovery/../../../src/test/regress/pg_regress: unrecognized option '--nosuc h'\n\n> --- a/src/bin/pg_upgrade/test.sh\n> +++ /dev/null\n\n> -# Create databases with names covering the ASCII bytes other than NUL, BEL,\n> -# LF, or CR. BEL would ring the terminal bell in the course of this test, and\n> -# it is not otherwise a special case. PostgreSQL doesn't support the rest.\n> -dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)\n> -\tif (i != 7 && i != 10 && i != 13) printf \"%c\", i }' </dev/null`\n> -# Exercise backslashes adjacent to double quotes, a Windows special case.\n> -dbname1='\\\"\\'$dbname1'\\\\\"\\\\\\'\n\nThis rewrite dropped the exercise of backslashes adjacent to double quotes.\n\n\n", "msg_date": "Sun, 1 May 2022 21:27:18 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, May 01, 2022 at 09:27:18PM -0700, Noah Misch wrote:\n> commit 322becb wrote:\n\nThanks, Noah. I am out this week, but I should be able to address all\nyour points at the beginning of next week. 
I have added an open item\nfor now.\n--\nMichael", "msg_date": "Mon, 2 May 2022 16:00:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sun, May 01, 2022 at 09:27:18PM -0700, Noah Misch wrote:\n> On Fri, Apr 01, 2022 at 10:16:48AM +0900, Michael Paquier wrote:\n>> commit 322becb wrote:\n>\n> Nothing checks the command result, so the test file passes even if each of\n> these createdb calls fails. Other run_log() calls in this file have the same\n> problem. This particular one should be command_ok() or similar.\n\nAll of them could rely on command_ok(), as they should never fail, so\nswitched this way.\n\n> --host and --port are redundant in a PostgreSQL::Test::Cluster::run_log call,\n> because that call puts equivalent configuration in the environment. Other\n> calls in the file have the same redundant operands. (No other test file has\n> redundant --host or --port.)\n\nRight. Removed all that.\n\n>> +\t# Grab any regression options that may be passed down by caller.\n>> +\tmy $extra_opts_val = $ENV{EXTRA_REGRESS_OPT} || \"\";\n> \n> Typo: s/_OPT/_OPTS/\n\nOops, fixed.\n\n>> +\tmy @extra_opts = split(/\\s+/, $extra_opts_val);\n> \n> src/test/recovery/t/027_stream_regress.pl and the makefiles treat\n> EXTRA_REGRESS_OPTS as a shell fragment. To be compatible, use the\n> src/test/recovery/t/027_stream_regress.pl approach. Affected usage patetrns\n> are not very important, but since the tree has code for it, you may as well\n> borrow that code. These examples witness the difference:\n\nSo the pattern of EXTRA_REGRESS_OPTS being used in the Makefiles is\nthe decision point here. Makes sense.\n\n>> -# Create databases with names covering the ASCII bytes other than NUL, BEL,\n>> -# LF, or CR. BEL would ring the terminal bell in the course of this test, and\n>> -# it is not otherwise a special case. 
PostgreSQL doesn't support the rest.\n>> -dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)\n>> -\tif (i != 7 && i != 10 && i != 13) printf \"%c\", i }' </dev/null`\n>> -# Exercise backslashes adjacent to double quotes, a Windows special case.\n>> -dbname1='\\\"\\'$dbname1'\\\\\"\\\\\\'\n> \n> This rewrite dropped the exercise of backslashes adjacent to double quotes.\n\nDamn, thanks. If I am reading that right, this could be done with the\nfollowing addition in generate_db(), adding double quotes surrounded\nby backslashes before and after the database name: \n$dbname = '\\\\\"\\\\' . $dbname . '\\\\\"\\\\';\n\nAll these fixes lead me to the attached patch. Does that look fine to\nyou?\n\nThanks,\n--\nMichael", "msg_date": "Mon, 9 May 2022 12:18:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Mon, May 09, 2022 at 12:18:39PM +0900, Michael Paquier wrote:\n> All these fixes lead me to the attached patch.\n\nI have applied this stuff as of 7dd3ee5, in time for beta1, and closed\nthe open item. One difference is that I've added one backslash\nsurrounding the double quote at the beginning *and* the end of the\ndatabase name in the patch. 
However, the original case was different,\nwith:\n- At the beginning of the database name, one backslash before and\nafter the double quote.\n- At the end of the database name, two backslashes before the double\nquote and three after the double quote.\n--\nMichael", "msg_date": "Wed, 11 May 2022 10:29:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, May 11, 2022 at 10:29:44AM +0900, Michael Paquier wrote:\n> On Mon, May 09, 2022 at 12:18:39PM +0900, Michael Paquier wrote:\n> > All these fixes lead me to the attached patch.\n> \n> I have applied this stuff as of 7dd3ee5, in time for beta1, and closed\n> the open item. One difference is that I've added one backslash\n> surrounding the double quote at the beginning *and* the end of the\n> database name in the patch. However, the original case was different,\n> with:\n> - At the beginning of the database name, one backslash before and\n> after the double quote.\n> - At the end of the database name, two backslashes before the double\n> quote and three after the double quote.\n\nWhy did you discontinue testing the longstanding test database name?\n\n\n", "msg_date": "Tue, 10 May 2022 22:32:55 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Tue, May 10, 2022 at 10:32:55PM -0700, Noah Misch wrote:\n> Why did you discontinue testing the longstanding test database name?\n\nI am not sure what you mean here. Are you saying that the test should\nbe changed to prefix each database name by \"regression\", as it was the\ncase in test.sh? 
Or do you mean that the backslash/double-quote\nbusiness should only apply to the first database name and not the\nother two, implying that the new generate_db() in 002_pg_upgrade.pl\nhad better have a $prefix and a $suffix like it was originally\nwritten?\n--\nMichael", "msg_date": "Thu, 12 May 2022 14:27:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Thu, May 12, 2022 at 02:27:30PM +0900, Michael Paquier wrote:\n> On Tue, May 10, 2022 at 10:32:55PM -0700, Noah Misch wrote:\n> > On Wed, May 11, 2022 at 10:29:44AM +0900, Michael Paquier wrote:\n> > > On Mon, May 09, 2022 at 12:18:39PM +0900, Michael Paquier wrote:\n> > > > All these fixes lead me to the attached patch.\n> > > \n> > > I have applied this stuff as of 7dd3ee5, in time for beta1, and closed\n> > > the open item. One difference is that I've added one backslash\n> > > surrounding the double quote at the beginning *and* the end of the\n> > > database name in the patch. However, the original case was different,\n> > > with:\n> > > - At the beginning of the database name, one backslash before and\n> > > after the double quote.\n> > > - At the end of the database name, two backslashes before the double\n> > > quote and three after the double quote.\n\nHere, you describe differences between test.sh and your rewrite of test.sh.\n\n> > Why did you discontinue testing the longstanding test database name?\n> \n> I am not sure what you mean here.\n\nHere, I requested the rationale for the differences you had just described.\nYou made a choice to stop testing one list of database names and start testing\na different list of database names. Why?\n\n> Are you saying that the test should\n> be changed to prefix each database name by \"regression\", as it was the\n> case in test.sh? 
Or do you mean that the backslash/double-quote\n> business should only apply to the first database name and not the\n> other two, implying that the new generate_db() in 002_pg_upgrade.pl\n> had better have a $prefix and a $suffix like it was originally\n> written?\n\nNo, I wasn't saying any of those. (Later, I may say one or more of those.)\n\n\n", "msg_date": "Sat, 14 May 2022 01:27:28 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, May 14, 2022 at 01:27:28AM -0700, Noah Misch wrote:\n> Here, I requested the rationale for the differences you had just described.\n> You made a choice to stop testing one list of database names and start testing\n> a different list of database names. Why?\n\nBecause the shape of the new names does not change the test coverage\n(\"regression\" prefix or the addition of the double quotes with\nbackslashes for all the database names), while keeping the code a bit\nsimpler. If you think that the older names are more adapted, I have\nno objections to use them, FWIW, which is something like the patch\nattached would achieve.\n\nThis uses the same convention as vcregress.pl before 322becb, but not\nthe one of test.sh where \"regression\" was appended to the database\nnames.\n--\nMichael", "msg_date": "Mon, 16 May 2022 14:30:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Mon, May 16, 2022 at 02:30:00PM +0900, Michael Paquier wrote:\n> On Sat, May 14, 2022 at 01:27:28AM -0700, Noah Misch wrote:\n> > Here, I requested the rationale for the differences you had just described.\n> > You made a choice to stop testing one list of database names and start testing\n> > a different list of database names. 
Why?\n> \n> Because the shape of the new names does not change the test coverage\n> (\"regression\" prefix or the addition of the double quotes with\n> backslashes for all the database names), while keeping the code a bit\n> simpler. If you think that the older names are more adapted, I have\n> no objections to use them, FWIW, which is something like the patch\n> attached would achieve.\n> \n> This uses the same convention as vcregress.pl before 322becb, but not\n> the one of test.sh where \"regression\" was appended to the database\n> names.\n\nI would have picked the test.sh names, both because test.sh was the senior\nimplementation and because doing so avoids warnings under\n-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. See the warnings here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2022-05-18%2000%3A59%3A35&stg=pg_upgrade-check\n\nMore-notable line from that same log:\nsh: /Users/buildfarm/bf-data/HEAD/pgsql.build/src/bin/pg_upgrade/../../../src/test/regress/pg_regress--port=5678: No such file or directory\n\nCommit 7dd3ee5 adopted much of the 027_stream_regress.pl approach to running\npg_regress, but it didn't grab the \"is($rc, 0, 'regression tests pass')\"\nneeded to make defects like that report a failure.\n\n> --- a/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n> +++ b/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n> @@ -13,18 +13,16 @@ use Test::More;\n> # Generate a database with a name made of a range of ASCII characters.\n> sub generate_db\n> {\n> -\tmy ($node, $from_char, $to_char) = @_;\n> +\tmy ($node, $prefix, $from_char, $to_char, $suffix) = @_;\n> \n> -\tmy $dbname = '';\n> +\tmy $dbname = $prefix;\n> \tfor my $i ($from_char .. $to_char)\n> \t{\n> \t\tnext if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR\n> \t\t$dbname = $dbname . sprintf('%c', $i);\n> \t}\n> \n> -\t# Exercise backslashes adjacent to double quotes, a Windows special\n> -\t# case.\n> -\t$dbname = '\\\\\"\\\\' . $dbname . 
'\\\\\\\\\"\\\\\\\\\\\\';\n> +\t$dbname .= $suffix;\n> \t$node->command_ok([ 'createdb', $dbname ]);\n> }\n> \n> @@ -79,10 +77,12 @@ else\n> {\n> \t# Default is to use pg_regress to set up the old instance.\n> \n> -\t# Create databases with names covering most ASCII bytes\n> -\tgenerate_db($oldnode, 1, 45);\n> -\tgenerate_db($oldnode, 46, 90);\n> -\tgenerate_db($oldnode, 91, 127);\n> +\t# Create databases with names covering most ASCII bytes. The\n> +\t# first name exercises backslashes adjacent to double quotes, a\n> +\t# Windows special case.\n> +\tgenerate_db($oldnode, \"\\\\\\\"\\\\\", 1, 45, \"\\\\\\\\\\\"\\\\\\\\\\\\\");\n> +\tgenerate_db($oldnode, '', 46, 90, '');\n> +\tgenerate_db($oldnode, '', 91, 127, '');\n\nDoes this pass on Windows? I'm 65% confident that released IPC::Run can't\nhandle this input due to https://github.com/toddr/IPC-Run/issues/142. If it's\npassing for you on Windows, then disregard.\n\n\n", "msg_date": "Wed, 18 May 2022 01:03:15 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, May 18, 2022 at 01:03:15AM -0700, Noah Misch wrote:\n> On Mon, May 16, 2022 at 02:30:00PM +0900, Michael Paquier wrote:\n>> Because the shape of the new names does not change the test coverage\n>> (\"regression\" prefix or the addition of the double quotes with\n>> backslashes for all the database names), while keeping the code a bit\n>> simpler. 
If you think that the older names are more adapted, I have\n>> no objections to use them, FWIW, which is something like the patch\n>> attached would achieve.\n>> \n>> This uses the same convention as vcregress.pl before 322becb, but not\n>> the one of test.sh where \"regression\" was appended to the database\n>> names.\n> \n> I would have picked the test.sh names, both because test.sh was the senior\n> implementation and because doing so avoids warnings under\n> -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. See the warnings here:\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2022-05-18%2000%3A59%3A35&stg=pg_upgrade-check\n\nYes, I saw that. This did not bother me much as the TAP tests run in\nisolation, but I am fine to stick to your option and silence these.\n\n> More-notable line from that same log:\n> sh: /Users/buildfarm/bf-data/HEAD/pgsql.build/src/bin/pg_upgrade/../../../src/test/regress/pg_regress--port=5678: No such file or directory\n\nSo you are using EXTRA_REGRESS_OPTS, then, and a space is missing from\nthe first argument of the command used to make that work properly.\n\n> Commit 7dd3ee5 adopted much of the 027_stream_regress.pl approach to running\n> pg_regress, but it didn't grab the \"is($rc, 0, 'regression tests pass')\"\n> needed to make defects like that report a failure.\n\nOkay, added this one.\n\n>> +\tgenerate_db($oldnode, \"\\\\\\\"\\\\\", 1, 45, \"\\\\\\\\\\\"\\\\\\\\\\\\\");\n>> +\tgenerate_db($oldnode, '', 46, 90, '');\n>> +\tgenerate_db($oldnode, '', 91, 127, '');\n> \n> Does this pass on Windows? I'm 65% confident that released IPC::Run can't\n> handle this input due to https://github.com/toddr/IPC-Run/issues/142. If it's\n> passing for you on Windows, then disregard.\n\nHmm. 
The CI has been passing for me with this name pattern in place,\nas of https://github.com/michaelpq/postgres/tree/upgrade_tap_fixes.\n\nAttached is an updated patch to address your concerns.\n--\nMichael", "msg_date": "Wed, 18 May 2022 18:20:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Wed, May 18, 2022 at 06:20:08PM +0900, Michael Paquier wrote:\n> Attached is an updated patch to address your concerns.\n\nLooks successful.\n\n\n", "msg_date": "Fri, 20 May 2022 18:28:01 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, May 20, 2022 at 06:28:01PM -0700, Noah Misch wrote:\n> Looks successful.\n\nThanks a lot for confirming. I have applied that on HEAD, then.\n--\nMichael", "msg_date": "Sat, 21 May 2022 12:03:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "Hi,\n\nI just saw a pg_upgrade failure on my aio branch [1]. Not sure what caused it\nyet. The reason I'm writing in this thread is that I looked at the\nregress_log_* for the failure, and found it to be 14.95MiB (which crashed the\nbrowser on my phone...).\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5167740683026432/log/src/bin/pg_upgrade/tmp_check/log/regress_log_002_pg_upgrade\n\nThat seems way beyond reasonable.\n\nregress_log_002_pg_upgrade.log includes all of 002_pg_upgrade_old_node.log and\n002_pg_upgrade_new_node.log. 
The old node's log includes all pg_dump queries.\n\nFollowed by many MB of diff due to\n\n=== diff of /Users/admin/pgsql/src/bin/pg_upgrade/tmp_check/tmp_test_Q7GQ/dump1.sql and /Users/admin/pgsql/src/bin/pg_upgrade/tmp_check/tmp_test_Q7GQ/dump2.sql\n=== stdout ===\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://cirrus-ci.com/task/5167740683026432\n\n\n", "msg_date": "Fri, 3 Jun 2022 12:53:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Fri, Jun 03, 2022 at 12:53:18PM -0700, Andres Freund wrote:\n> [...]\n\nTRAP: FailedAssertion(\"AmIoWorkerProcess()\", File: \"xlog.c\", Line:\n4860, PID: 35325)\n\n> regress_log_002_pg_upgrade.log includes all of 002_pg_upgrade_old_node.log and\n> 002_pg_upgrade_new_node.log. The old node's log includes all pg_dump queries.\n\nlog_statement = all is the part biting here. It does not seem like\nwe'd lose a lot of context even if this is made less verbose.\n\n> Followed by many MB of diff due to\n> \n> === diff of /Users/admin/pgsql/src/bin/pg_upgrade/tmp_check/tmp_test_Q7GQ/dump1.sql and /Users/admin/pgsql/src/bin/pg_upgrade/tmp_check/tmp_test_Q7GQ/dump2.sql\n> === stdout ===\n\nSomething like 80~85% of the bloat comes from the diffs in your case.\nWell, it is always possible to limit that to an arbitrary amount of\ncharacters (say 50k~100k?) 
to still give some context, and dump the\nwhole in a different file outside the log/ path (aka tmp_check/), so\nthat the buildfarm would show a minimum amount of information, while\nlocal failures would still have an access to everything.\n\nDo you have any preferences?\n--\nMichael", "msg_date": "Sat, 4 Jun 2022 12:35:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" }, { "msg_contents": "On Sat, Jun 04, 2022 at 12:35:45PM +0900, Michael Paquier wrote:\n> Something like 80~85% of the bloat comes from the diffs in your case.\n> Well, it is always possible to limit that to an arbitrary amount of\n> characters (say 50k~100k?) to still give some context, and dump the\n> whole in a different file outside the log/ path (aka tmp_check/), so\n> that the buildfarm would show a minimum amount of information, while\n> local failures would still have an access to everything.\n\nAfter looking a bit around that. Something like the attached, where\nthe characters are limited at 10k, would limit the output generated..\n--\nMichael", "msg_date": "Mon, 6 Jun 2022 13:58:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Rewriting the test of pg_upgrade as a TAP test - take three -\n remastered set" } ]