[
{
"msg_contents": "Hi,\n\nwe have txid_current(), which returns an int8. But there's no convenient\nway to convert that to type 'xid'. Which is fairly inconvenient, given\nthat we expose xids in various places.\n\nMy current need for this was just a regression test to make sure that\nsystem columns (xmin/xmax in particular) don't get broken again for ON\nCONFLICT. But I've needed this before in other scenarios - e.g. age(xid)\ncan be useful to figure out how old a transaction is, but age() doesn't\nwork with txid_current()'s return value.\n\nSeems easiest to just add xid_current(), or add a cast from int8 to xid\n(probably explicit?) that handles the wraparound logic correctly?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Jul 2019 17:06:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 12:06 PM Andres Freund <andres@anarazel.de> wrote:\n> we have txid_current(), which returns an int8. But there's no convenient\n> way to convert that to type 'xid'. Which is fairly inconvenient, given\n> that we expose xids in various places.\n>\n> My current need for this was just a regression test to make sure that\n> system columns (xmin/xmax in particular) don't get broken again for ON\n> CONFLICT. But I've needed this before in other scenarios - e.g. age(xid)\n> can be useful to figure out how old a transaction is, but age() doesn't\n> work with txid_current()'s return value.\n>\n> Seems easiest to just add xid_current(), or add a cast from int8 to xid\n> (probably explicit?) that handles the wraparound logic correctly?\n\nYeah, I was wondering about that. int8 isn't really the right type,\nsince FullTransactionId is unsigned. If we had a SQL type for 64 bit\nxids, it should be convertible to xid, and the reverse conversion\nshould require a more complicated dance. Of course we can't casually\nchange txid_current() without annoying people who are using it, so\nperhaps if we invent a new SQL type we should also make a new function\nthat returns it.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jul 2019 12:20:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-25 12:20:58 +1200, Thomas Munro wrote:\n> On Thu, Jul 25, 2019 at 12:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > we have txid_current(), which returns an int8. But there's no convenient\n> > way to convert that to type 'xid'. Which is fairly inconvenient, given\n> > that we expose xids in various places.\n> >\n> > My current need for this was just a regression test to make sure that\n> > system columns (xmin/xmax in particular) don't get broken again for ON\n> > CONFLICT. But I've needed this before in other scenarios - e.g. age(xid)\n> > can be useful to figure out how old a transaction is, but age() doesn't\n> > work with txid_current()'s return value.\n> >\n> > Seems easiest to just add xid_current(), or add a cast from int8 to xid\n> > (probably explicit?) that handles the wraparound logic correctly?\n> \n> Yeah, I was wondering about that. int8 isn't really the right type,\n> since FullTransactionId is unsigned.\n\nFor now that doesn't seem that big an impediment...\n\n\n> If we had a SQL type for 64 bit xids, it should be convertible to xid,\n> and the reverse conversion should require a more complicated dance.\n> Of course we can't casually change txid_current() without annoying\n> people who are using it, so perhaps if we invent a new SQL type we\n> should also make a new function that returns it.\n\nPossibly we could add a fullxid or xid8, xid64, pg_xid64, ... type, and\nhave an implicit cast to int8?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Jul 2019 17:27:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-25 12:20:58 +1200, Thomas Munro wrote:\n>> On Thu, Jul 25, 2019 at 12:06 PM Andres Freund <andres@anarazel.de> wrote:\n>>> Seems easiest to just add xid_current(), or add a cast from int8 to xid\n>>> (probably explicit?) that handles the wraparound logic correctly?\n\n>> Yeah, I was wondering about that. int8 isn't really the right type,\n>> since FullTransactionId is unsigned.\n\n> For now that doesn't seem that big an impediment...\n\nYeah, I would absolutely NOT recommend that you open that can of worms\nright now. We have looked at adding unsigned integer types in the past\nand it looked like a mess.\n\nI think an explicit cast is a reasonable thing to add, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 20:34:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-24 20:34:39 -0400, Tom Lane wrote:\n> Yeah, I would absolutely NOT recommend that you open that can of worms\n> right now. We have looked at adding unsigned integer types in the past\n> and it looked like a mess.\n\nI assume Thomas was thinking more of another bespoke type like xid, just\nwider. There's some notational advantage in not being able to\nimmediately do math etc on xids.\n\n- Andres\n\n\n",
"msg_date": "Wed, 24 Jul 2019 17:40:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-24 20:34:39 -0400, Tom Lane wrote:\n>> Yeah, I would absolutely NOT recommend that you open that can of worms\n>> right now. We have looked at adding unsigned integer types in the past\n>> and it looked like a mess.\n\n> I assume Thomas was thinking more of another bespoke type like xid, just\n> wider. There's some notational advantage in not being able to\n> immediately do math etc on xids.\n\nWell, we could invent an xid8 type if we want, just don't try to make\nit part of the numeric hierarchy (as indeed xid isn't).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 20:42:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-07-24 20:34:39 -0400, Tom Lane wrote:\n> >> Yeah, I would absolutely NOT recommend that you open that can of worms\n> >> right now. We have looked at adding unsigned integer types in the past\n> >> and it looked like a mess.\n>\n> > I assume Thomas was thinking more of another bespoke type like xid, just\n> > wider. There's some notational advantage in not being able to\n> > immediately do math etc on xids.\n>\n> Well, we could invent an xid8 type if we want, just don't try to make\n> it part of the numeric hierarchy (as indeed xid isn't).\n\nYeah, I meant an xid64/xid8/fxid/pg_something/... type that isn't a\nkind of number.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jul 2019 13:11:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 1:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jul 25, 2019 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-07-24 20:34:39 -0400, Tom Lane wrote:\n> > >> Yeah, I would absolutely NOT recommend that you open that can of worms\n> > >> right now. We have looked at adding unsigned integer types in the past\n> > >> and it looked like a mess.\n> >\n> > > I assume Thomas was thinking more of another bespoke type like xid, just\n> > > wider. There's some notational advantage in not being able to\n> > > immediately do math etc on xids.\n> >\n> > Well, we could invent an xid8 type if we want, just don't try to make\n> > it part of the numeric hierarchy (as indeed xid isn't).\n>\n> Yeah, I meant an xid64/xid8/fxid/pg_something/... type that isn't a\n> kind of number.\n\nI played around with an xid8 type over here (not tested much yet, in\nparticular not tested on 32 bit box):\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGKbQtX8E5TEdcZaYhTxqLqrvcpN1Vjb7eCu2bz5EACZbw%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 22:42:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 10:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jul 25, 2019 at 1:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Jul 25, 2019 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > On 2019-07-24 20:34:39 -0400, Tom Lane wrote:\n> > > >> Yeah, I would absolutely NOT recommend that you open that can of worms\n> > > >> right now. We have looked at adding unsigned integer types in the past\n> > > >> and it looked like a mess.\n> > >\n> > > > I assume Thomas was thinking more of another bespoke type like xid, just\n> > > > wider. There's some notational advantage in not being able to\n> > > > immediately do math etc on xids.\n> > >\n> > > Well, we could invent an xid8 type if we want, just don't try to make\n> > > it part of the numeric hierarchy (as indeed xid isn't).\n> >\n> > Yeah, I meant an xid64/xid8/fxid/pg_something/... type that isn't a\n> > kind of number.\n\nI thought about how to deal with the transition to xid8 for the\ntxid_XXX() family of functions. The best idea I've come up with so\nfar is to create a parallel xid8_XXX() family of functions, and\ndeclare the bigint-based functions to be deprecated, and threaten to\ndrop them from a future release. The C code for the two families can\nbe the same (it's a bit of a dirty trick, but only until the\ntxid_XXX() variants go away). Here's a PoC patch demonstrating that.\nNot tested much, yet, probably needs some more work, but I wanted to\nsee if others thought the idea was terrible first.\n\nI wonder if there is a better way to share hash functions than the\nhack in check_hash_func_signature(), which I had to extend to cover\nxid8.\n\nAdding to CF.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Sun, 1 Sep 2019 17:04:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Sun, Sep 1, 2019 at 5:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Adding to CF.\n\nRebased. An OID clashed so re-roll the dice. Also spotted a typo.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 10 Sep 2019 12:05:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Sep 1, 2019 at 5:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Adding to CF.\n\n> Rebased. An OID clashed so re-roll the dice. Also spotted a typo.\n\nFWIW, I'd move *all* the OIDs added by this patch up to >= 8000.\nI don't feel a strong need to fill in the gaps in the low-numbered\nOIDs, and people who do try that are likely to hit problems of the\nsort you just did.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2019 20:13:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "> \n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Sun, Sep 1, 2019 at 5:04 PM Thomas Munro <thomas.munro@gmail.com> \n>> wrote:\n>>> Adding to CF.\n> \n>> Rebased. An OID clashed so re-roll the dice. Also spotted a typo.\n> \n\nI have some questions in this code.\n\nFirst,\n\"FullTransactionIdPrecedes(xmax, val)\" is not equal to \"val >= xmax\" of \nthe previous code. \"FullTransactionIdPrecedes(xmax, val)\" expresses \n\"val > xmax\". Is it all right?\n\n@@ -384,15 +324,17 @@ parse_snapshot(const char *str)\n \twhile (*str != '\\0')\n \t{\n \t\t/* read next value */\n-\t\tval = str2txid(str, &endp);\n+\t\tval = FullTransactionIdFromU64(pg_strtouint64(str, &endp, 10));\n \t\tstr = endp;\n\n \t\t/* require the input to be in order */\n-\t\tif (val < xmin || val >= xmax || val < last_val)\n+\t\tif (FullTransactionIdPrecedes(val, xmin) ||\n+\t\t\tFullTransactionIdPrecedes(xmax, val) ||\n+\t\t\tFullTransactionIdPrecedes(val, last_val))\n\nIn addition to it, as to current TransactionId(not FullTransactionId) \ncomparison, when we express \">=\" of TransactionId, we use \n\"TransactionIdFollowsOrEquals\". this method is referred by some methods. \nOn the other hand, FullTransactionIdFollowsOrEquals has not implemented \nyet. So, how about implementing this method?\n\nSecond,\nAbout naming rule, \"8\" of xid8 means 8 bytes, but \"8\" has different \nmeaning in each situation. For example, int8 of PostgreSQL means 8 \nbytes, int8 of C language means 8 bits. If 64 is used, it just means 64 \nbits. how about xid64()?\n\nregards,\n\nTakao Fujii\n\n\n\n\n\n",
"msg_date": "Tue, 29 Oct 2019 13:23:07 +0900",
"msg_from": "btfujiitkp <btfujiitkp@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Tue, Oct 29, 2019 at 5:23 PM btfujiitkp <btfujiitkp@oss.nttdata.com> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> On Sun, Sep 1, 2019 at 5:04 PM Thomas Munro <thomas.munro@gmail.com>\n> >> wrote:\n> >>> Adding to CF.\n> >\n> >> Rebased. An OID clashed so re-roll the dice. Also spotted a typo.\n> >\n>\n> I have some questions in this code.\n\nThanks for looking at the patch.\n\n> First,\n> \"FullTransactionIdPrecedes(xmax, val)\" is not equal to \"val >= xmax\" of\n> the previous code. \"FullTransactionIdPrecedes(xmax, val)\" expresses\n> \"val > xmax\". Is it all right?\n>\n> @@ -384,15 +324,17 @@ parse_snapshot(const char *str)\n> while (*str != '\\0')\n> {\n> /* read next value */\n> - val = str2txid(str, &endp);\n> + val = FullTransactionIdFromU64(pg_strtouint64(str, &endp, 10));\n> str = endp;\n>\n> /* require the input to be in order */\n> - if (val < xmin || val >= xmax || val < last_val)\n> + if (FullTransactionIdPrecedes(val, xmin) ||\n> + FullTransactionIdPrecedes(xmax, val) ||\n> + FullTransactionIdPrecedes(val, last_val))\n>\n> In addition to it, as to current TransactionId(not FullTransactionId)\n> comparison, when we express \">=\" of TransactionId, we use\n> \"TransactionIdFollowsOrEquals\". this method is referred by some methods.\n> On the other hand, FullTransactionIdFollowsOrEquals has not implemented\n> yet. So, how about implementing this method?\n\nGood idea. I added the missing variants:\n\n+#define FullTransactionIdPrecedesOrEquals(a, b) ((a).value <= (b).value)\n+#define FullTransactionIdFollows(a, b) ((a).value > (b).value)\n+#define FullTransactionIdFollowsOrEquals(a, b) ((a).value >= (b).value)\n\n> Second,\n> About naming rule, \"8\" of xid8 means 8 bytes, but \"8\" has different\n> meaning in each situation. For example, int8 of PostgreSQL means 8\n> bytes, int8 of C language means 8 bits. If 64 is used, it just means 64\n> bits. 
how about xid64()?\n\nIn C, the typenames use bits, by happy coincidence similar to the C99\nstdint.h typenames (int32_t etc) that we should perhaps eventually\nswitch to.\n\nIn SQL, the types have names based on the number of bytes: int2, int4,\nint8, float4, float8, not conforming to any standard but established\nover 3 decades ago and also understood by a few other SQL systems.\n\nThat's unfortunate, but I can't see that ever changing. I thought\nthat it would make most sense for the SQL type to be called xid8,\nthough admittedly it doesn't quite fit the pattern because xid is not\ncalled xid4. There is another example a bit like that: macaddr (6\nbytes) and macaccdr8 (8 bytes). As for the C type, we use\nTransactionId and FullTransactionId (rather than, say, xid32 and\nxid64).\n\nIn the attached I also took Tom's advice and used unused_oids script\nto pick random OIDs >= 8000 for all new objects (ignoring nearby\ncomments about the range of OIDs used in different sections of the\nfile).",
"msg_date": "Mon, 4 Nov 2019 11:43:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "> On Tue, Oct 29, 2019 at 5:23 PM btfujiitkp <btfujiitkp@oss.nttdata.com> \n> wrote:\n>> > Thomas Munro <thomas.munro@gmail.com> writes:\n>> >> On Sun, Sep 1, 2019 at 5:04 PM Thomas Munro <thomas.munro@gmail.com>\n>> >> wrote:\n>> >>> Adding to CF.\n>> >\n>> >> Rebased. An OID clashed so re-roll the dice. Also spotted a typo.\n>> >\n>> \n>> I have some questions in this code.\n> \n> Thanks for looking at the patch.\n> \n>> First,\n>> \"FullTransactionIdPrecedes(xmax, val)\" is not equal to \"val >= xmax\" \n>> of\n>> the previous code. \"FullTransactionIdPrecedes(xmax, val)\" expresses\n>> \"val > xmax\". Is it all right?\n>> \n>> @@ -384,15 +324,17 @@ parse_snapshot(const char *str)\n>> while (*str != '\\0')\n>> {\n>> /* read next value */\n>> - val = str2txid(str, &endp);\n>> + val = FullTransactionIdFromU64(pg_strtouint64(str, \n>> &endp, 10));\n>> str = endp;\n>> \n>> /* require the input to be in order */\n>> - if (val < xmin || val >= xmax || val < last_val)\n>> + if (FullTransactionIdPrecedes(val, xmin) ||\n>> + FullTransactionIdPrecedes(xmax, val) ||\n>> + FullTransactionIdPrecedes(val, last_val))\n>> \n>> In addition to it, as to current TransactionId(not FullTransactionId)\n>> comparison, when we express \">=\" of TransactionId, we use\n>> \"TransactionIdFollowsOrEquals\". this method is referred by some \n>> methods.\n>> On the other hand, FullTransactionIdFollowsOrEquals has not \n>> implemented\n>> yet. So, how about implementing this method?\n> \n> Good idea. I added the missing variants:\n> \n> +#define FullTransactionIdPrecedesOrEquals(a, b) ((a).value <= \n> (b).value)\n> +#define FullTransactionIdFollows(a, b) ((a).value > (b).value)\n> +#define FullTransactionIdFollowsOrEquals(a, b) ((a).value >= \n> (b).value)\n> \n\nThank you for your patch.\nIt looks good.\n\n\n>> Second,\n>> About naming rule, \"8\" of xid8 means 8 bytes, but \"8\" has different\n>> meaning in each situation. 
For example, int8 of PostgreSQL means 8\n>> bytes, int8 of C language means 8 bits. If 64 is used, it just means \n>> 64\n>> bits. how about xid64()?\n> \n> In C, the typenames use bits, by happy coincidence similar to the C99\n> stdint.h typenames (int32_t etc) that we should perhaps eventually\n> switch to.\n> \n> In SQL, the types have names based on the number of bytes: int2, int4,\n> int8, float4, float8, not conforming to any standard but established\n> over 3 decades ago and also understood by a few other SQL systems.\n> \n> That's unfortunate, but I can't see that ever changing. I thought\n> that it would make most sense for the SQL type to be called xid8,\n> though admittedly it doesn't quite fit the pattern because xid is not\n> called xid4. There is another example a bit like that: macaddr (6\n> bytes) and macaccdr8 (8 bytes). As for the C type, we use\n> TransactionId and FullTransactionId (rather than, say, xid32 and\n> xid64).\n\nThat makes sense.\n\nAnyway,\nIn the pg_proc.dat, \"xid_snapshot_xip\" should be \"xid8_snapshot_xip\".\nAnd some parts of 0002 patch are rejected when I patch 0002 after \npatching 0001.\n\nregards\n\n\n",
"msg_date": "Wed, 13 Nov 2019 18:11:31 +0900",
"msg_from": "btfujiitkp <btfujiitkp@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi Thomas,\r\n\r\nPlease let me ask something about wraparound problem.\r\n\r\n+static FullTransactionId\r\n+convert_xid(TransactionId xid, FullTransactionId next_fxid)\r\n {\r\n-\tuint64\t\tepoch;\r\n+\tTransactionId next_xid = XidFromFullTransactionId(next_fxid);\r\n+\tuint32 epoch = EpochFromFullTransactionId(next_fxid);\r\n \r\n...\r\n \r\n-\t/* xid can be on either side when near wrap-around */\r\n-\tepoch = (uint64) state->epoch;\r\n-\tif (xid > state->last_xid &&\r\n-\t\tTransactionIdPrecedes(xid, state->last_xid))\r\n+\tif (xid > next_xid)\r\n \t\tepoch--;\r\n-\telse if (xid < state->last_xid &&\r\n-\t\t\t TransactionIdFollows(xid, state->last_xid))\r\n-\t\tepoch++;\r\n \r\n-\treturn (epoch << 32) | xid;\r\n+\treturn FullTransactionIdFromEpochAndXid(epoch, xid);\r\n\r\n\r\nISTM codes for wraparound are deleted. Is that correct?\r\nI couldn't have read all related threads about using FullTransactionId but\r\ndoes using FullTransactionId avoid wraparound problem? \r\n\r\nIf we consider below conditions, we can say it's difficult to see wraparound\r\nwith current disk like SSD (2GB/s) or memory DDR4 (34GB/s), but if we can use \r\nmore high-spec hardware like HBM3 (2048GB/s), we can see wraparound. Or do\r\nI say silly things?\r\n\r\n* 10 year system ( < 2^4 )\r\n* 1 year = 31536000 ( = 60 * 60 * 24 * 365) secs ( < 2^25 )\r\n* 2^35 ( = 2^64 / 2^4 / 2^25) transactions we can use in each seconds\r\n* we can write at (2^5 * 2^30 * n) bytes/sec = (32 * n) GB/sec if we use 'n'\r\n bytes for each transactions.\r\n\r\nIs there any agreement we can throw the wraparound problem away if we adopt\r\nFullTransactionId?\r\n\r\n\r\nThanks\r\n--\r\nYoshikazu Imai\r\n\r\n",
"msg_date": "Wed, 20 Nov 2019 04:43:00 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 5:43 PM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n> Is there any agreement we can throw the wraparound problem away if we adopt\n> FullTransactionId?\n\nHere is one argument for why 64 bits ought to be enough: we use 64 bit\nLSNs for the WAL, and it usually takes more than one byte of WAL to\nconsume a transaction. If you write about 500MB of WAL per second,\nyour system will break in about a thousand years due to LSN\nwraparound, that is, assuming the earth hasn't been destroyed to make\nway for a hyperspace bypass, but either way you will probably still\nhave some spare full transaction IDs.\n\nThat's fun to think about, but unfortunately it's not easy to figure\nout how to retrofit FullTransactionId into enough places to make\nwraparounds go away in the traditional heap. It's a goal of at least\na couple of ongoing new AM projects to not have that problem, and I\nfigured it was a good idea to lay down very basic facilities for that,\ntrivial as they might be, and see where else they can be useful...\n\n\n",
"msg_date": "Wed, 20 Nov 2019 23:51:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\nOn 11/3/19 2:43 PM, Thomas Munro wrote:\n> On Tue, Oct 29, 2019 at 5:23 PM btfujiitkp <btfujiitkp@oss.nttdata.com> wrote:\n>>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>>> On Sun, Sep 1, 2019 at 5:04 PM Thomas Munro <thomas.munro@gmail.com>\n>>>> wrote:\n>>>>> Adding to CF.\n>>>\n>>>> Rebased. An OID clashed so re-roll the dice. Also spotted a typo.\n>>>\n>>\n>> I have some questions in this code.\n> \n> Thanks for looking at the patch.\n> \n>> First,\n>> \"FullTransactionIdPrecedes(xmax, val)\" is not equal to \"val >= xmax\" of\n>> the previous code. \"FullTransactionIdPrecedes(xmax, val)\" expresses\n>> \"val > xmax\". Is it all right?\n>>\n>> @@ -384,15 +324,17 @@ parse_snapshot(const char *str)\n>> while (*str != '\\0')\n>> {\n>> /* read next value */\n>> - val = str2txid(str, &endp);\n>> + val = FullTransactionIdFromU64(pg_strtouint64(str, &endp, 10));\n>> str = endp;\n>>\n>> /* require the input to be in order */\n>> - if (val < xmin || val >= xmax || val < last_val)\n>> + if (FullTransactionIdPrecedes(val, xmin) ||\n>> + FullTransactionIdPrecedes(xmax, val) ||\n>> + FullTransactionIdPrecedes(val, last_val))\n>>\n>> In addition to it, as to current TransactionId(not FullTransactionId)\n>> comparison, when we express \">=\" of TransactionId, we use\n>> \"TransactionIdFollowsOrEquals\". this method is referred by some methods.\n>> On the other hand, FullTransactionIdFollowsOrEquals has not implemented\n>> yet. So, how about implementing this method?\n> \n> Good idea. I added the missing variants:\n> \n> +#define FullTransactionIdPrecedesOrEquals(a, b) ((a).value <= (b).value)\n> +#define FullTransactionIdFollows(a, b) ((a).value > (b).value)\n> +#define FullTransactionIdFollowsOrEquals(a, b) ((a).value >= (b).value)\n> \n>> Second,\n>> About naming rule, \"8\" of xid8 means 8 bytes, but \"8\" has different\n>> meaning in each situation. For example, int8 of PostgreSQL means 8\n>> bytes, int8 of C language means 8 bits. 
If 64 is used, it just means 64\n>> bits. how about xid64()?\n> \n> In C, the typenames use bits, by happy coincidence similar to the C99\n> stdint.h typenames (int32_t etc) that we should perhaps eventually\n> switch to.\n> \n> In SQL, the types have names based on the number of bytes: int2, int4,\n> int8, float4, float8, not conforming to any standard but established\n> over 3 decades ago and also understood by a few other SQL systems.\n> \n> That's unfortunate, but I can't see that ever changing. I thought\n> that it would make most sense for the SQL type to be called xid8,\n> though admittedly it doesn't quite fit the pattern because xid is not\n> called xid4. There is another example a bit like that: macaddr (6\n> bytes) and macaccdr8 (8 bytes). As for the C type, we use\n> TransactionId and FullTransactionId (rather than, say, xid32 and\n> xid64).\n> \n> In the attached I also took Tom's advice and used unused_oids script\n> to pick random OIDs >= 8000 for all new objects (ignoring nearby\n> comments about the range of OIDs used in different sections of the\n> file).\n> \n\nThese two patches (v3) no longer apply cleanly. Could you please\nrebase?\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sat, 30 Nov 2019 15:22:26 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 12:22 PM Mark Dilger <hornschnorter@gmail.com> wrote:\n> These two patches (v3) no longer apply cleanly. Could you please\n> rebase?\n\nHi Mark,\nThanks. Here's v4.",
"msg_date": "Sun, 1 Dec 2019 14:14:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\nOn 11/30/19 5:14 PM, Thomas Munro wrote:\n> On Sun, Dec 1, 2019 at 12:22 PM Mark Dilger <hornschnorter@gmail.com> wrote:\n>> These two patches (v3) no longer apply cleanly. Could you please\n>> rebase?\n> \n> Hi Mark,\n> Thanks. Here's v4.\n\nThanks, Thomas.\n\nThe new patches apply cleanly and pass 'installcheck'.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Mon, 2 Dec 2019 09:55:59 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 6:56 AM Mark Dilger <hornschnorter@gmail.com> wrote:\n> On 11/30/19 5:14 PM, Thomas Munro wrote:\n> > On Sun, Dec 1, 2019 at 12:22 PM Mark Dilger <hornschnorter@gmail.com> wrote:\n> >> These two patches (v3) no longer apply cleanly. Could you please\n> >> rebase?\n> >\n> > Hi Mark,\n> > Thanks. Here's v4.\n>\n> Thanks, Thomas.\n>\n> The new patches apply cleanly and pass 'installcheck'.\n\nI rebased, fixed the \"xid_snapshot_xip\" problem spotted by Takao Fujii\nthat I had missed earlier, updated a couple of error messages to refer\nto the new names (even when using the old functions) and ran\ncheck-world and some simple manual tests on an -m32 build just to be\nparanoid. Here are the versions of these patches I'd like to commit.\nDoes anyone want to object to the txid/xid8 type punning scheme or\nlong term txid-sunsetting plan?",
"msg_date": "Tue, 28 Jan 2020 18:05:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\nOn 2020/01/28 14:05, Thomas Munro wrote:\n> On Tue, Dec 3, 2019 at 6:56 AM Mark Dilger <hornschnorter@gmail.com> wrote:\n>> On 11/30/19 5:14 PM, Thomas Munro wrote:\n>>> On Sun, Dec 1, 2019 at 12:22 PM Mark Dilger <hornschnorter@gmail.com> wrote:\n>>>> These two patches (v3) no longer apply cleanly. Could you please\n>>>> rebase?\n>>>\n>>> Hi Mark,\n>>> Thanks. Here's v4.\n>>\n>> Thanks, Thomas.\n>>\n>> The new patches apply cleanly and pass 'installcheck'.\n> \n> I rebased, fixed the \"xid_snapshot_xip\" problem spotted by Takao Fujii\n> that I had missed earlier, updated a couple of error messages to refer\n> to the new names (even when using the old functions) and ran\n> check-world and some simple manual tests on an -m32 build just to be\n> paranoid. Here are the versions of these patches I'd like to commit.\n\nThanks for the patches! Here are my minor comments.\n\nIsn't it better to add also xid8lt, xid8gt, xid8le, and xid8ge?\n\nxid8 and xid8_snapshot should be documented in datatype.sgml like\ntxid_snapshot is?\n\nlogicaldecoding.sgml and monitoring.sgml still referred to txid_xxx.\nThey should be updated so that new xid8_xxx is used?\n\nIn func.sgml, the table \"Snapshot Components\" is described still based\non txid. It should be updated so that it uses xid8, instead?\n\n+# xid_ops\n+{ amopfamily => 'hash/xid8_ops', amoplefttype => 'xid8', amoprighttype => 'xid8',\n+ amopstrategy => '1', amopopr => '=(xid8,xid8)', amopmethod => 'hash' },\n\n\"xid_ops\" in the comment should be \"xid8_ops\".\n\n+{ oid => '9558',\n+ proname => 'xid8neq', proleakproof => 't', prorettype => 'bool',\n+ proargtypes => 'xid8 xid8', prosrc => 'xid8neq' },\n\nBasically the names of not-equal functions for most data types are\nsomething like \"xxxne\" not \"xxxneq\". 
So IMO it's better to use the name\n\"xid8ne\" instead of \"xid8neq\" here.\n\n /*\n- * do a TransactionId -> txid conversion for an XID near the given epoch\n+ * Do a TransactionId -> fxid conversion for an XID that is known to precede\n+ * the given 'next_fxid'.\n */\n-static txid\n-convert_xid(TransactionId xid, const TxidEpoch *state)\n+static FullTransactionId\n+convert_xid(TransactionId xid, FullTransactionId next_fxid)\n\nAs the comment suggests, this function assumes that \"xid\" must\nprecede \"next_fxid\". But there is no check for the assumption.\nIsn't it better to add, e.g., an assertion checking that?\nOr convert_xid() should handle the case where \"xid\" follows\n\"next_fxid\" like the orignal convert_xid() does. That is, don't\napply the following change.\n\n-\tif (xid > state->last_xid &&\n-\t\tTransactionIdPrecedes(xid, state->last_xid))\n+\tif (xid > next_xid)\n \t\tepoch--;\n-\telse if (xid < state->last_xid &&\n-\t\t\t TransactionIdFollows(xid, state->last_xid))\n-\t\tepoch++;\n\n> Does anyone want to object to the txid/xid8 type punning scheme or\n> long term txid-sunsetting plan?\n\nNo. +1 to retire txid someday.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 28 Jan 2020 20:43:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
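The epoch-adjustment rule under review in the message above can be sketched outside of C. This is a hedged Python illustration only — the real logic lives in PostgreSQL's C sources, and the function and constant names below (`widen_xid`, `XID_MASK`) are invented for this sketch:

```python
# Hedged sketch: widening a 32-bit xid to a 64-bit "full" xid, given the
# current next full transaction ID.  If the 32-bit value is numerically
# greater than the low half of next_fxid, the xid must have been assigned
# before the last 32-bit wraparound, so it belongs to the previous epoch.

XID_MASK = 0xFFFFFFFF


def widen_xid(xid, next_fxid):
    """Map a 32-bit xid known to precede next_fxid onto 64 bits."""
    epoch = next_fxid >> 32
    next_xid = next_fxid & XID_MASK
    if xid > next_xid:
        epoch -= 1  # allocated before the most recent wraparound
    return (epoch << 32) | xid
```

The `if (xid > next_xid) epoch--;` branch in the patch hunk corresponds to the single adjustment here; the original two-branch version also handled xids *following* next_xid, which the rewritten function assumes cannot happen.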
{
"msg_contents": "Hello Fujii-san,\n\nThanks for your review!\n\nOn Wed, Jan 29, 2020 at 12:43 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> Isn't it better to add also xid8lt, xid8gt, xid8le, and xid8ge?\n\nxid doesn't have these operators, probably to avoid confusion about\nwraparound. But you're right, we should add them for xid8, especially\nsince the xid_snapshot documentation mentions such comparisons (the\ntype used by the old functions was int8, so that worked). Done. I\nalso added the extra catalogue nuts and bolts required to use xid8 in\nbtree indexes and merge joins.\n\nTo test the operators, I added a new regression test for xid and xid8\ntypes. While doing that, I tried to add some range checks to validate\ninput, but I discovered that it's a bit tricky to do so portably with\nstrtoul(). I suspect '0x100000000'::xid already gives different\nresults on Windows and Unix today, and if better input validation is\ndesired, I think it should be tackled outside this project.\n\nWhile working on those tests, I realised that we probably wanted two\nsets of tests:\n\n1. txid.sql: The existing tests that show that the old txid_XXX()\nfunctions continue to work correctly (with the only user-visible\ndifference being that in their error messages they sometimes mention\nnames including xid8). This test will be dropped when the txid_XXX()\nfunctions are dropped.\n\n2. 
xid.sql: A new set of tests that show that the new xid8_XXX()\nfunctions work correctly.\n\nTo verify that the old tests and the new tests are exactly the same\nexcept for typenames and some casts, use:\n\ndiff -u src/test/regress/expected/txid.out src/test/regress/expected/xid.out\n\n> xid8 and xid8_snapshot should be documented in datatype.sgml like\n> txid_snapshot is?\n\nDone.\n\n> logicaldecoding.sgml and monitoring.sgml still referred to txid_xxx.\n> They should be updated so that new xid8_xxx is used?\n\nDone.\n\n> In func.sgml, the table \"Snapshot Components\" is described still based\n> on txid. It should be updated so that it uses xid8, instead?\n\nDone.\n\n> +# xid_ops\n> +{ amopfamily => 'hash/xid8_ops', amoplefttype => 'xid8', amoprighttype => 'xid8',\n> + amopstrategy => '1', amopopr => '=(xid8,xid8)', amopmethod => 'hash' },\n>\n> \"xid_ops\" in the comment should be \"xid8_ops\".\n\nFixed.\n\n> +{ oid => '9558',\n> + proname => 'xid8neq', proleakproof => 't', prorettype => 'bool',\n> + proargtypes => 'xid8 xid8', prosrc => 'xid8neq' },\n>\n> Basically the names of not-equal functions for most data types are\n> something like \"xxxne\" not \"xxxneq\". So IMO it's better to use the name\n> \"xid8ne\" instead of \"xid8neq\" here.\n\nHuh. You are right, but the existing function xidneq is an exception.\nIt's not clear which one to follow. I will take your advice and use\nxid8ne. We could potentially change xidneq to xidne too, but it's\nuser-visible.\n\n> /*\n> - * do a TransactionId -> txid conversion for an XID near the given epoch\n> + * Do a TransactionId -> fxid conversion for an XID that is known to precede\n> + * the given 'next_fxid'.\n> */\n> -static txid\n> -convert_xid(TransactionId xid, const TxidEpoch *state)\n> +static FullTransactionId\n> +convert_xid(TransactionId xid, FullTransactionId next_fxid)\n>\n> As the comment suggests, this function assumes that \"xid\" must\n> precede \"next_fxid\". 
But there is no check for the assumption.\n> Isn't it better to add, e.g., an assertion checking that?\n> Or convert_xid() should handle the case where \"xid\" follows\n> \"next_fxid\" like the original convert_xid() does. That is, don't\n> apply the following change.\n>\n> - if (xid > state->last_xid &&\n> - TransactionIdPrecedes(xid, state->last_xid))\n> + if (xid > next_xid)\n> epoch--;\n> - else if (xid < state->last_xid &&\n> - TransactionIdFollows(xid, state->last_xid))\n> - epoch++;\n\nI need to think about this part some more, but I wanted to share\nresponses to the rest of your review now. I'll return to this point\nnext week.\n\n> > Does anyone want to object to the txid/xid8 type punning scheme or\n> > long term txid-sunsetting plan?\n>\n> No. +1 to retire txid someday.\n\nCool. Let's do that in a couple of years.",
"msg_date": "Fri, 14 Feb 2020 18:31:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Fri, Feb 14, 2020 at 6:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jan 29, 2020 at 12:43 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > /*\n> > - * do a TransactionId -> txid conversion for an XID near the given epoch\n> > + * Do a TransactionId -> fxid conversion for an XID that is known to precede\n> > + * the given 'next_fxid'.\n> > */\n> > -static txid\n> > -convert_xid(TransactionId xid, const TxidEpoch *state)\n> > +static FullTransactionId\n> > +convert_xid(TransactionId xid, FullTransactionId next_fxid)\n> >\n> > As the comment suggests, this function assumes that \"xid\" must\n> > precede \"next_fxid\". But there is no check for the assumption.\n> > Isn't it better to add, e.g., an assertion checking that?\n> > Or convert_xid() should handle the case where \"xid\" follows\n> > \"next_fxid\" like the orignal convert_xid() does. That is, don't\n> > apply the following change.\n> >\n> > - if (xid > state->last_xid &&\n> > - TransactionIdPrecedes(xid, state->last_xid))\n> > + if (xid > next_xid)\n> > epoch--;\n> > - else if (xid < state->last_xid &&\n> > - TransactionIdFollows(xid, state->last_xid))\n> > - epoch++;\n\nI don't think it is reachable. I have renamed the function to\nwiden_snapshot_xid() and rewritten the comments to explain the logic.\n\nThe other changes in this version:\n\n* updated OIDs to avoid collisions\n* added btequalimage to btree/xid8_ops",
"msg_date": "Sat, 21 Mar 2020 11:14:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Sat, Mar 21, 2020 at 11:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> * updated OIDs to avoid collisions\n> * added btequalimage to btree/xid8_ops\n\nHere's the version I'm planning to commit tomorrow, if no one objects. Changes:\n\n* txid.c renamed to xid8funcs.c\n* remaining traces of \"txid\" replaced various internal identifiers\n* s/backwards compatible/backward compatible/ in funcs.sgml (en_GB -> en_US)",
"msg_date": "Thu, 2 Apr 2020 16:21:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\n> On Apr 1, 2020, at 8:21 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Sat, Mar 21, 2020 at 11:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> * updated OIDs to avoid collisions\n>> * added btequalimage to btree/xid8_ops\n> \n> Here's the version I'm planning to commit tomorrow, if no one objects. Changes:\n> \n> * txid.c renamed to xid8funcs.c\n> * remaining traces of \"txid\" replaced various internal identifiers\n> * s/backwards compatible/backward compatible/ in funcs.sgml (en_GB -> en_US)\n> <v8-0001-Add-SQL-type-xid8-to-expose-FullTransactionId-to-.patch><v8-0002-Introduce-xid8_XXX-functions-to-replace-txid_XXX.patch>\n\nHi Thomas, Thanks for working on this.\n\nI'm taking a quick look at your patches. It's not a big deal, and certainly not a show stopper if you want to go ahead with the commit, but you've left some mentions of \"txid_current\" that might better be modified to use the new name \"xid8_current\". At least one mention of \"txid_current\" is needed to check that the old name still works, but leaving this many in the regression test suite may lead other developers to follow that lead and use txid_current() in newly developed code. 
(\"xid8_current\" is not exercised by name anywhere in the regression suite, that I can see.)\n\n> contrib/test_decoding/expected/ddl.out:SELECT txid_current() != 0; -- so no fixed xid apears in the outfile\n> contrib/test_decoding/expected/decoding_in_xact.out:SELECT txid_current() = 0;\n> contrib/test_decoding/expected/decoding_in_xact.out:SELECT txid_current() = 0;\n> contrib/test_decoding/expected/decoding_in_xact.out:SELECT txid_current() = 0;\n> contrib/test_decoding/expected/oldest_xmin.out:step s0_getxid: SELECT txid_current() IS NULL;\n> contrib/test_decoding/expected/ondisk_startup.out:step s2txid: SELECT txid_current() IS NULL;\n> contrib/test_decoding/expected/ondisk_startup.out:step s3txid: SELECT txid_current() IS NULL;\n> contrib/test_decoding/expected/ondisk_startup.out:step s2txid: SELECT txid_current() IS NULL;\n> contrib/test_decoding/expected/snapshot_transfer.out:step s0_log_assignment: SELECT txid_current() IS NULL;\n> contrib/test_decoding/expected/snapshot_transfer.out:step s0_log_assignment: SELECT txid_current() IS NULL;\n> contrib/test_decoding/specs/oldest_xmin.spec:step \"s0_getxid\" { SELECT txid_current() IS NULL; }\n> contrib/test_decoding/specs/ondisk_startup.spec:step \"s2txid\" { SELECT txid_current() IS NULL; }\n> contrib/test_decoding/specs/ondisk_startup.spec:step \"s3txid\" { SELECT txid_current() IS NULL; }\n> contrib/test_decoding/specs/snapshot_transfer.spec:step \"s0_log_assignment\" { SELECT txid_current() IS NULL; }\n> contrib/test_decoding/sql/ddl.sql:SELECT txid_current() != 0; -- so no fixed xid apears in the outfile\n> contrib/test_decoding/sql/decoding_in_xact.sql:SELECT txid_current() = 0;\n> contrib/test_decoding/sql/decoding_in_xact.sql:SELECT txid_current() = 0;\n> contrib/test_decoding/sql/decoding_in_xact.sql:SELECT txid_current() = 0;\n> \n> src/test/modules/commit_ts/t/004_restart.pl: SELECT txid_current();\n> src/test/modules/commit_ts/t/004_restart.pl: EXECUTE 'SELECT txid_current()';\n> 
src/test/modules/commit_ts/t/004_restart.pl: SELECT txid_current();\n> src/test/recovery/t/003_recovery_targets.pl: \"SELECT pg_current_wal_lsn(), txid_current();\");\n> src/test/recovery/t/011_crash_recovery.pl:SELECT txid_current();\n> src/test/recovery/t/011_crash_recovery.pl:cmp_ok($node->safe_psql('postgres', 'SELECT txid_current()'),\n> src/test/regress/expected/alter_table.out: where transactionid = txid_current()::integer)\n> src/test/regress/expected/alter_table.out: where transactionid = txid_current()::integer)\n> src/test/regress/expected/hs_standby_functions.out:select txid_current();\n> src/test/regress/expected/hs_standby_functions.out:ERROR: cannot execute txid_current() during recovery\n> src/test/regress/expected/hs_standby_functions.out:select length(txid_current_snapshot()::text) >= 4;\n> src/test/regress/expected/txid.out:select txid_current() >= txid_snapshot_xmin(txid_current_snapshot());\n> src/test/regress/expected/txid.out:select txid_visible_in_snapshot(txid_current(), txid_current_snapshot());\n> src/test/regress/expected/txid.out:-- test txid_current_if_assigned\n> src/test/regress/expected/txid.out:SELECT txid_current_if_assigned() IS NULL;\n> src/test/regress/expected/txid.out:SELECT txid_current() \\gset\n> src/test/regress/expected/txid.out:SELECT txid_current_if_assigned() IS NOT DISTINCT FROM BIGINT :'txid_current';\n> src/test/regress/expected/txid.out:SELECT txid_current() AS committed \\gset\n> src/test/regress/expected/txid.out:SELECT txid_current() AS rolledback \\gset\n> src/test/regress/expected/txid.out:SELECT txid_current() AS inprogress \\gset\n> src/test/regress/expected/update.out:CREATE FUNCTION xid_current() RETURNS xid LANGUAGE SQL AS $$SELECT (txid_current() % ((1::int8<<32)))::text::xid;$$;\n> src/test/regress/sql/alter_table.sql: where transactionid = txid_current()::integer)\n> src/test/regress/sql/alter_table.sql: where transactionid = txid_current()::integer)\n> 
src/test/regress/sql/hs_standby_functions.sql:select txid_current();\n> src/test/regress/sql/hs_standby_functions.sql:select length(txid_current_snapshot()::text) >= 4;\n> src/test/regress/sql/txid.sql:select txid_current() >= txid_snapshot_xmin(txid_current_snapshot());\n> src/test/regress/sql/txid.sql:select txid_visible_in_snapshot(txid_current(), txid_current_snapshot());\n> src/test/regress/sql/txid.sql:-- test txid_current_if_assigned\n> src/test/regress/sql/txid.sql:SELECT txid_current_if_assigned() IS NULL;\n> src/test/regress/sql/txid.sql:SELECT txid_current() \\gset\n> src/test/regress/sql/txid.sql:SELECT txid_current_if_assigned() IS NOT DISTINCT FROM BIGINT :'txid_current';\n> src/test/regress/sql/txid.sql:SELECT txid_current() AS committed \\gset\n> src/test/regress/sql/txid.sql:SELECT txid_current() AS rolledback \\gset\n> src/test/regress/sql/txid.sql:SELECT txid_current() AS inprogress \\gset\n> src/test/regress/sql/update.sql:CREATE FUNCTION xid_current() RETURNS xid LANGUAGE SQL AS $$SELECT (txid_current() % ((1::int8<<32)))::text::xid;$$;\n\nA reasonable argument could be made for treating txid_current as the preferred form, and xid8_current merely as a synonym, but then I can't make sense of the change your patch makes to the docs:\n\n+ <para>\n+ In releases of <productname>PostgreSQL</productname> before 13 there was\n+ no <type>xid8</type> type, so variants of these functions were provided\n+ that used <type>bigint</type>. The older functions with\n+ <literal>txid</literal>\n+ in the name are still supported for backward compatibility, but may be\n+ removed from a future release. The <type>bigint</type> variants are shown\n+ in <xref linkend=\"functions-txid-snapshot\"/>.\n+ </para>\n\nwhich looks like a txid deprecation warning to me.\n\nAm I reading all this wrong? 
If I'm reading this right, then FYI there are similar s/txid_(.*)/xid8_$1/g changes to be made that I didn't bother listing here by name.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 2 Apr 2020 09:06:10 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\n> On Apr 2, 2020, at 9:06 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> (\"xid8_current\" is not exercised by name anywhere in the regression suite, that I can see.)\n\nI spoke too soon. That's exercised in the new xid.sql test file. It didn't show up in my 'git diff', because it's new. Sorry about that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 2 Apr 2020 09:12:28 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On 2020-Apr-02, Thomas Munro wrote:\n\n> On Sat, Mar 21, 2020 at 11:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > * updated OIDs to avoid collisions\n> > * added btequalimage to btree/xid8_ops\n> \n> Here's the version I'm planning to commit tomorrow, if no one objects. Changes:\n> \n> * txid.c renamed to xid8funcs.c\n> * remaining traces of \"txid\" replaced various internal identifiers\n> * s/backwards compatible/backward compatible/ in funcs.sgml (en_GB -> en_US)\n\nHmm, for some reason I had it in my head that we would make these use an\n\"epoch/val\" output format rather than raw uint64 values. Are we really\ngoing to do it this way? Myself I can convert values easily enough, but\nI'm not sure this is user-friendly. (If somebody were to tell me that\nLSNs are going to be straight uint64 values, I would not be happy.)\n\nOr maybe it's the other way around -- this is fine for everyone except\nme -- and we should never expose the epoch as a separate quantity. \n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:33:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-02 14:33:18 -0300, Alvaro Herrera wrote:\n> On 2020-Apr-02, Thomas Munro wrote:\n> \n> > On Sat, Mar 21, 2020 at 11:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > * updated OIDs to avoid collisions\n> > > * added btequalimage to btree/xid8_ops\n> > \n> > Here's the version I'm planning to commit tomorrow, if no one objects. Changes:\n> > \n> > * txid.c renamed to xid8funcs.c\n> > * remaining traces of \"txid\" replaced various internal identifiers\n> > * s/backwards compatible/backward compatible/ in funcs.sgml (en_GB -> en_US)\n> \n> Hmm, for some reason I had it in my head that we would make these use an\n> \"epoch/val\" output format rather than raw uint64 values.\n\nWhy would we do that? IMO the goal should be to reduce awareness of the\n32bitness of normal xids from as many places as possible, and treat them\nas an internal space optimization.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Apr 2020 11:01:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\n> On Apr 2, 2020, at 11:01 AM, Andres Freund <andres@anarazel.de> wrote:\n> \n>> \n>> Hmm, for some reason I had it in my head that we would make these use an\n>> \"epoch/val\" output format rather than raw uint64 values.\n> \n> Why would we do that? IMO the goal should be to reduce awareness of the\n> 32bitness of normal xids from as many places as possible, and treat them\n> as an internal space optimization.\n\nI agree with transitioning to 64-bit xids with 32 bit xid/epoch pairs as an internal implementation and storage detail only, but we still have user facing views that don't treat it that way. pg_stat_get_activity still returns backend_xid and backend_xmin as 32-bit, not 64-bit. Should this function change to be consistent? I'm curious what the user experience will be during the transitional period where some user facing xids are 64 bit and others (perhaps the same xids but viewed elsewhere) will be 32 bit. That might make it difficult for users to match them up.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 2 Apr 2020 11:47:32 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-02 14:33:18 -0300, Alvaro Herrera wrote:\n>> Hmm, for some reason I had it in my head that we would make these use an\n>> \"epoch/val\" output format rather than raw uint64 values.\n\n> Why would we do that? IMO the goal should be to reduce awareness of the\n> 32bitness of normal xids from as many places as possible, and treat them\n> as an internal space optimization.\n\nIf they're just int64s then you don't need special functions to do\nthings like finding the min or max in a column of them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Apr 2020 15:20:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Thu, Apr 2, 2020 at 2:47 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Apr 2, 2020, at 11:01 AM, Andres Freund <andres@anarazel.de> wrote:\n> >\n> >>\n> >> Hmm, for some reason I had it in my head that we would make these use an\n> >> \"epoch/val\" output format rather than raw uint64 values.\n> >\n> > Why would we do that? IMO the goal should be to reduce awareness of the\n> > 32bitness of normal xids from as many places as possible, and treat them\n> > as an internal space optimization.\n>\n> I agree with transitioning to 64-bit xids with 32 bit xid/epoch pairs as an internal implementation and storage detail only, but we still have user facing views that don't treat it that way. pg_stat_get_activity still returns backend_xid and backend_xmin as 32-bit, not 64-bit. Should this function change to be consistent? I'm curious what the user experience will be during the transitional period where some user facing xids are 64 bit and others (perhaps the same xids but viewed elsewhere) will be 32 bit. That might make it difficult for users to match them up.\n\n\nAgreed. The \"benefit\" (at least in the short term) of using the\nepoch/value style is that it makes (visual, at least) comparison with\nother (32-bit) xid values easier.\n\nI'm not sure if that's worth it, or if it's worth making a change\ndepend on changing all of those views too.\n\nJames\n\n\n",
"msg_date": "Thu, 2 Apr 2020 16:59:46 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-02 11:47:32 -0700, Mark Dilger wrote:\n> I agree with transitioning to 64-bit xids with 32 bit xid/epoch pairs\n> as an internal implementation and storage detail only, but we still\n> have user facing views that don't treat it that way.\n\nNote that epochs are not really a thing internally anymore. The xid\ncounter is a FullTransactionId.\n\n\n> pg_stat_get_activity still returns backend_xid and backend_xmin as\n> 32-bit, not 64-bit. Should this function change to be consistent? I'm\n> curious what the user experience will be during the transitional period\n> where some user facing xids are 64 bit and others (perhaps the same xids\n> but viewed elsewhere) will be 32 bit. That might make it difficult for\n> users to match them up.\n\nI think we probably should switch them over at some point, but I would\nstrongly advise against coupling that with Thomas' patch. That patch\ndoesn't make the current situation around 32bit / 64bit any worse, as\nfar as I can tell.\n\nGiven that txid_current() \"always\" has been a plain 64 bit integer, and\nthe various txid_* functions always have returned 64 bit integers, I\nreally don't think arguing for some 32bit/32bit situation now makes\nsense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:13:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\n> On Apr 2, 2020, at 2:13 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2020-04-02 11:47:32 -0700, Mark Dilger wrote:\n>> I agree with transitioning to 64-bit xids with 32 bit xid/epoch pairs\n>> as an internal implementation and storage detail only, but we still\n>> have user facing views that don't treat it that way.\n> \n> Note that epochs are not really a thing internally anymore. The xid\n> counter is a FullTransactionId.\n> \n> \n>> pg_stat_get_activity still returns backend_xid and backend_xmin as\n>> 32-bit, not 64-bit. Should this function change to be consistent? I'm\n>> curious what the user experience will be during the transitional period\n>> where some user facing xids are 64 bit and others (perhaps the same xids\n>> but viewed elsewhere) will be 32 bit. That might make it difficult for\n>> users to match them up.\n> \n> I think we probably should switch them over at some point, but I would\n> strongly advise against coupling that with Thomas' patch. That patch\n> doesn't make the current situation around 32bit / 64bit any worse, as\n> far as I can tell.\n\nI agree with that.\n\n> Given that txid_current() \"always\" has been a plain 64 bit integer, and\n> the various txid_* functions always have returned 64 bit integers, I\n> really don't think arguing for some 32bit/32bit situation now makes\n> sense.\n\nYeah, I'm not arguing for that, though I can see how my email might have been ambiguous on that point.\n\nSince Thomas's patch is really just focused on transitioning from txid -> xid8, I think this conversation is a bit beyond scope for this patch, except that \"xid8\" sounds an awful lot like the new type that all user facing xid output will transition to. Maybe I'm wrong about that. Are we going to change the definition of the \"xid\" type to 8 bytes? That sounds dangerous, from a compatibility standpoint.\n\nOn balance, I'd rather have xid8in and xid8out work just as Thomas has it. 
I'm not asking for any change there. But I'm curious if the whole community is on the same page regarding where this is all heading.\n\nI'm contemplating the statement that \"the goal should be to reduce awareness of the 32bitness of normal xids from as many places as possible\", which I support, and what that means for the eventual signatures of functions like pg_stat_get_activity, including:\n\n\t(..., backend_xid XID, backend_xmin XID, ...) pg_stat_get_activity(pid INTEGER)\n\n\t(..., transactionid XID, ...) pg_lock_status()\n\n\t(transaction XID, ...) pg_prepared_xact()\n\n\ttimestamptz pg_xact_commit_timestamp(XID)\n\n\t(xid XID, ...) pg_last_committed_xact()\n\n\t(..., xmin XID, catalog_xmin XID, ...) pg_get_replication_slots()\n\n\t... more that I'm too lazy to copy-and-paste just now ...\n\nI would expect that, eventually, these would be upgraded to xid8. If that happened seamlessly in one release, then there would be no problem with some functions returning 4-byte xids and some returning 8-byte xids, but otherwise there would be a transition period where some have been reworked to return xid8 but others not, and users during that transition period might be happier with Alvaro's suggestion of treating epoch/xid as two fields in xid8in and xid8out. I'm also doubtful that these functions would be \"upgraded\". It seems far more likely that alternate versions, perhaps named something with /xid8/ in them, would exist side-by-side with the originals.\n\nSo I'm really just wondering where others on this list think all this is heading, and if there are any arguments brewing about that which could be avoided by making assumptions clear right up front.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:26:41 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-02 14:26:41 -0700, Mark Dilger wrote:\n> Since Thomas's patch is really just focused on transitioning from txid\n> -> xid8, I think this conversation is a bit beyond scope for this\n> patch, except that \"xid8\" sounds an awful lot like the new type that\n> all user facing xid output will transition to. Maybe I'm wrong about\n> that.\n\nSeveral at least. For me it'd e.g. make no sense to change pageinspect\netc.\n\n\n> Are we going to change the definition of the \"xid\" type to 8 bytes?\n> That sounds dangerous, from a compatibility standpoint.\n\nNo, I can't see that happening.\n\n\n> On balance, I'd rather have xid8in and xid8out work just as Thomas has\n> it. I'm not asking for any change there. But I'm curious if the\n> whole community is on the same page regarding where this is all\n> heading.\n>\n> I'm contemplating the statement that \"the goal should be to reduce\n> awareness of the 32bitness of normal xids from as many places as\n> possible\", which I support, and what that means for the eventual\n> signatures of functions like pg_stat_get_activity, including:\n\nMaybe. Aiming to do things like this all-at-once just makes it less\nlikely for anything to ever happen.\n\n\n> but otherwise there would be a transition period where some have been\n> reworked to return xid8 but others not, and users during that\n> transition period might be happier with Alvaro's suggestion of\n> treating epoch/xid as two fields in xid8in and xid8out.\n\n-countless\n\nI can only restate my point that we've had 8 byte xids exposed for many\nyears. We've had very few epoch/xid values exposed. I think it'd be\ninsane to now start to expose that more widely.\n\nIt's just about impossible for normal users to compare xids. Once one\nwrapped around, it's just too hard/mindbending. Yes, an accompanying\nepoch makes it easier, but it still can be quite confusing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:37:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
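The difficulty Andres describes — that ordering wrapped-around 32-bit xids is "mindbending" for users — can be made concrete with a hedged Python sketch of circular comparison. PostgreSQL's `TransactionIdPrecedes()` uses the same signed modulo-2^32 difference idea, though the real C code also special-cases permanent xids, which this sketch ignores; the name `xid_precedes` is invented here:

```python
# Hedged sketch of circular (modulo-2^32) xid comparison: the difference
# is interpreted as a signed 32-bit value, so ordering is only meaningful
# when the two xids are less than ~2 billion transactions apart.

def xid_precedes(a, b):
    """True if 32-bit xid a logically precedes b under wraparound."""
    diff = (a - b) & 0xFFFFFFFF
    if diff >= 0x80000000:      # top bit set: interpret as negative
        diff -= 0x100000000
    return diff < 0
```

Under this rule an xid near 2^32 "precedes" a small xid just after wraparound, even though it is numerically larger — which is exactly why raw numeric comparison of xid columns misleads users, and why xid8's non-wrapping 64-bit comparison is simpler.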
{
"msg_contents": "On Fri, Apr 3, 2020 at 10:37 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-04-02 14:26:41 -0700, Mark Dilger wrote:\n> > On balance, I'd rather have xid8in and xid8out work just as Thomas has\n> > it. I'm not asking for any change there. But I'm curious if the\n> > whole community is on the same page regarding where this is all\n> > heading.\n> >\n> > I'm contemplating the statement that \"the goal should be to reduce\n> > awareness of the 32bitness of normal xids from as many places as\n> > possible\", which I support, and what that means for the eventual\n> > signatures of functions like pg_stat_get_activity, including:\n>\n> Maybe. Aiming to do things like this all-at-once just makes it less\n> likely for anything to ever happen.\n\nAgreed. Let's just keep chipping away at this stuff.\n\n> > but otherwise there would be a transition period where some have been\n> > reworked to return xid8 but others not, and users during that\n> > transition period might be happier with Alvaro's suggestion of\n> > treating epoch/xid as two fields in xid8in and xid8out.\n>\n> -countless\n>\n> I can only restate my point that we've had 8 byte xids exposed for many\n> years. We've had very few epoch/xid values exposed. I think it'd be\n> insane to now start to expose that more widely.\n>\n> It's just about impossible for normal users to compare xids. Once one\n> wrapped around, it's just too hard/mindbending. Yes, an accompanying\n> epoch makes it easier, but it still can be quite confusing.\n\nJust by the way, any xid8 values can be sliced with ::xid, so that\nshould help with comparisons. 
I'm not keen to allow users to convert\nin the other direction though, due to the hard-to-understand\ninterlocking requirements of modulo xids (as belaboured elsewhere).\n\nAs Mark noted, I'd left a few uses of txid_XXX stuff in other tests.\nSo here's a 0003 patch that upgrades all of those too, so that the\nonly remaining usage is in the txid.sql tests (= the tests that the\nbackwards compat functions still work). No change to 0001 and 0002,\nother than a commit message tweak (reviewer email address change).",
"msg_date": "Fri, 3 Apr 2020 15:39:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
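The one-way `::xid` slice Thomas mentions can be illustrated with a hedged Python sketch (the actual cast is implemented in C; `slice_xid8` is an invented name for this illustration). The low 32 bits of an xid8 are what the cast exposes, and distinct xid8 values can collapse to the same xid, which is why no general reverse cast is offered:

```python
XID_MASK = 0xFFFFFFFF


def slice_xid8(fxid):
    """Low 32 bits of a 64-bit transaction ID, as an xid8::xid cast would show."""
    return fxid & XID_MASK


# Two full xids from different epochs slice to the same 32-bit value, so
# the 32-bit xid alone cannot be widened back without a reference point
# such as the next full transaction ID.
a = (1 << 32) | 7
b = (2 << 32) | 7
assert slice_xid8(a) == slice_xid8(b) == 7
```

This loss of the epoch half is the information-theoretic reason the reverse conversion needs the careful interlocking discussed elsewhere in the thread.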
{
"msg_contents": "\n\n> On Apr 2, 2020, at 7:39 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> <v9-0001-Add-SQL-type-xid8-to-expose-FullTransactionId-to-.patch><v9-0002-Introduce-xid8_XXX-functions-to-replace-txid_XXX.patch><v9-0003-Replace-all-txid_XXX-usage-in-tests-with-xid8_XXX.patch>\n\nThese apply cleanly, build and pass check-world on mac, and the documentation and regression test changes surrounding txid look good to me.\n\nFYI, (not the responsibility of this patch), we never quite define what the abbreviation \"xip\" stands for. If \"Active xid8s at the time of the snapshot.\" were rewritten as \"In progress xid8s at the time of the snapshot\", it might be slightly easier for the reader to figure out that \"xip\" = \"Xid8s In Progress\". As it stands, nothing in the docs seems to explain the abbreviation. See doc/src/sgml/func.sgml\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 3 Apr 2020 08:45:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Sat, Apr 4, 2020 at 4:45 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> FYI, (not the responsibility of this patch), we never quite define what the abbreviation \"xip\" stands for. If \"Active xid8s at the time of the snapshot.\" were rewritten as \"In progress xid8s at the time of the snapshot\", it might be slightly easier for the reader to figure out that \"xip\" = \"Xid8s In Progress\". As it stands, nothing in the docs seems to explain the abbrevation. See doc/src/sgml/func.sgml\n\nYou're right. Done.\n\nI also noticed that \"xid8s\" didn't flow very well here, so I changed\nit to \"transaction IDs\" in a couple of places like that (which I think\nis fine in these English sentences, to mean xid, xid8 or bigint\ndepending on the context).\n\nI also removed a sentence about values being \"extended with an epoch\",\nbecause that's not really how we want people to think about this stuff\nanymore. It's rather the other way around: transaction IDs begin life\nas 64 bit numbers and then get sliced.\n\nI noticed that the description of xmax was flat out wrong (it didn't\nknow about ancient commit 6bd4f401), so I rewrote it. And then while\nexercising my backspace key, it bothered me that the description of\nxip_list said almost the same thing twice so I kept only the more\naccurate of two sentences.\n\nHowever, I am getting cold feet about the new function names. The\nexisting naming structure made sense when all this stuff originated in\na contrib module with \"txid_\" as a prefix all over the place, but now\nthat 64 bit IDs are a core concept, I wonder if we shouldn't aim for\nsomething that looks a little more like core functionality and doesn't\nhave those \"xid8_\" warts in the names. 
Here's what I now propose:\n\nTransaction ID functions, using names that fit with others (cf\npg_xact_commit_timestamp()):\n\n pg_current_xact_id()\n pg_current_xact_id_if_assigned()\n pg_xact_status(xid8)\n\nSnapshot functions (cf pg_export_snapshot()):\n\n pg_current_snapshot()\n pg_snapshot_xmin(pg_snapshot)\n pg_snapshot_xmax(pg_snapshot)\n pg_snapshot_xip(pg_snapshot)\n pg_visible_in_snapshot(xid8, pg_snapshot)\n\nHere's a patch set like that (0003 has been squashed into 0002).",
"msg_date": "Sun, 5 Apr 2020 02:11:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\n> On Apr 4, 2020, at 7:11 AM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Sat, Apr 4, 2020 at 4:45 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> FYI, (not the responsibility of this patch), we never quite define what the abbreviation \"xip\" stands for. If \"Active xid8s at the time of the snapshot.\" were rewritten as \"In progress xid8s at the time of the snapshot\", it might be slightly easier for the reader to figure out that \"xip\" = \"Xid8s In Progress\". As it stands, nothing in the docs seems to explain the abbrevation. See doc/src/sgml/func.sgml\n> \n> You're right. Done.\n\nThanks!\n\n> However, I am getting cold feet about the new function names. The\n> existing naming structure made sense when all this stuff originated in\n> a contrib module with \"txid_\" as a prefix all over the place, but now\n> that 64 bit IDs are a core concept, I wonder if we shouldn't aim for\n> something that looks a little more like core functionality and doesn't\n> have those \"xid8_\" warts in the names. \n\nThe \"xid8_\" warts are partly motivated by having a type named \"xid8\", which is a bit of a wart in itself.\n\n> Here's what I now propose:\n> \n> Transaction ID functions, using names that fit with others (cf\n> pg_xact_commit_timestamp()):\n> \n> pg_current_xact_id()\n> pg_current_xact_id_if_assigned()\n> pg_xact_status(xid8)\n> \n> Snapshot functions (cf pg_export_snapshot()):\n> \n> pg_current_snapshot()\n> pg_snapshot_xmin(pg_snapshot)\n> pg_snapshot_xmax(pg_snapshot)\n> pg_snapshot_xip(pg_snapshot)\n> pg_visible_in_snapshot(xid8, pg_snapshot)\n\nI like some aspects of this, but not others. Function pg_stat_get_activity(), which gets exposed through view pg_stat_activity exposes both \"backend_xid\" and \"backend_xmin\" as (32-bit) xid. 
Your new function names are not sufficiently distinct from these older names for users to easily remember the difference:\n\nselect pg_snapshot_xmax(st.snap)\n from snapshot_test st, pg_stat_activity sa\n where pg_snapshot_xmin(st.snap) = sa.backend_xmin;\nERROR: operator does not exist: xid8 = xid\n\nSELECT * FROM pg_stat_activity s WHERE backend_xid = pg_current_xact_id();\nERROR: operator does not exist: xid = xid8\n\nSELECT pg_xact_status(backend_xmin), * FROM pg_stat_activity;\nERROR: function pg_xact_status(xid) does not exist\n\nIt's not the end of the world, and users can figure out to put a cast on those, but it's kind of ugly.\n\nIt was cleaner before v10, when \"backend_xid = xid8_current()\" clearly had a xid vs. xid8 mismatch. On the other hand, if the xid columns in pg_stat_activity and elsewhere eventually get upgraded to xid8 fields, then the new naming convention in v10 will be cleaner.\n\nAs such, I'm ±0 for the change. \n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 4 Apr 2020 15:34:38 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Sun, Apr 5, 2020 at 10:34 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Apr 4, 2020, at 7:11 AM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > However, I am getting cold feet about the new function names. The\n> > existing naming structure made sense when all this stuff originated in\n> > a contrib module with \"txid_\" as a prefix all over the place, but now\n> > that 64 bit IDs are a core concept, I wonder if we shouldn't aim for\n> > something that looks a little more like core functionality and doesn't\n> > have those \"xid8_\" warts in the names.\n>\n> The \"xid8_\" warts are partly motivated by having a type named \"xid8\", which is a bit of a wart in itself.\n\nJust a thought for the future, not sure if it's a good one: would it\nseem less warty in years to come if we introduced xid4 as an alias for\nxid, and preferred the name xid4? Then it wouldn't look so much like\nxid is the \"real\" transaction ID type and xid8 is some kind of freaky\nextended version; instead it would look like xid4 and xid8 are narrow\nand wide transaction IDs, and xid is just a historical name for xid4.\n\n> > Here's what I now propose:\n> >\n> > Transaction ID functions, using names that fit with others (cf\n> > pg_xact_commit_timestamp()):\n> >\n> > pg_current_xact_id()\n> > pg_current_xact_id_if_assigned()\n> > pg_xact_status(xid8)\n> >\n> > Snapshot functions (cf pg_export_snapshot()):\n> >\n> > pg_current_snapshot()\n> > pg_snapshot_xmin(pg_snapshot)\n> > pg_snapshot_xmax(pg_snapshot)\n> > pg_snapshot_xip(pg_snapshot)\n> > pg_visible_in_snapshot(xid8, pg_snapshot)\n>\n> I like some aspects of this, but not others. Function pg_stat_get_activity(), which gets exposed through view pg_stat_activity exposes both \"backend_xid\" and \"backend_xmin\" as (32-bit) xid. 
Your new function names are not sufficiently distinct from these older names for users to easily remember the difference:\n>\n> select pg_snapshot_xmax(st.snap)\n> from snapshot_test st, pg_stat_activity sa\n> where pg_snapshot_xmin(st.snap) = sa.backend_xmin;\n> ERROR: operator does not exist: xid8 = xid\n>\n> SELECT * FROM pg_stat_activity s WHERE backend_xid = pg_current_xact_id();\n> ERROR: operator does not exist: xid = xid8\n\nIt's quite tempting to go and widen pg_stat_activity etc... but in\nany case I'm sure it'll happen for PG14.\n\n> SELECT pg_xact_status(backend_xmin), * FROM pg_stat_activity;\n> ERROR: function pg_xact_status(xid) does not exist\n>\n> It's not the end of the world, and users can figure out to put a cast on those, but it's kind of ugly.\n\nThat particular one can't really be fixed with a cast, either before\nor after this patch (I mean, if you add the right casts you can get\nthe query to run with this function or its txid ancestor, but it'll\nonly give the right answers during epoch 0 so I would call this\nfriction a good case of the type system doing its job during the\ntransition).\n\n> It was cleaner before v10, when \"backend_xid = xid8_current()\" clearly had a xid vs. xid8 mismatch. On the other hand, if the xid columns in pg_stat_activity and elsewhere eventually get upgraded to xid8 fields, then the new naming convention in v10 will be cleaner.\n\nYeah. Well, my cold feet with the v9 names came from thinking about\nhow all this is going to look in a couple of years as xid8 flows into\nmore administration interfaces. It seems inevitable that there will\nbe some friction along the way, but it seems like a nice goal to have\nwider values everywhere possible from functions and views with\nnon-warty names, and use cast to get narrow values if needed for some\nreason.\n\n> As such, I'm ±0 for the change.\n\nI'll let this sit for another day and see if some more reactions appear.\n\n\n",
"msg_date": "Sun, 5 Apr 2020 11:31:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Sun, Apr 5, 2020 at 11:31 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Apr 5, 2020 at 10:34 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > The \"xid8_\" warts are partly motivated by having a type named \"xid8\", which is a bit of a wart in itself.\n>\n> Just a thought for the future, not sure if it's a good one: would it\n> seem less warty in years to come if we introduced xid4 as an alias for\n> xid, and preferred the name xid4? Then it wouldn't look so much like\n> xid is the \"real\" transaction ID type and xid8 is some kind of freaky\n> extended version; instead it would look like xid4 and xid8 are narrow\n> and wide transaction IDs, and xid is just a historical name for xid4.\n\nI'll look into proposing that for PG14. One reason I like that idea\nis that system view names like backend_xid could potentially retain\ntheir names while switching to xid8 type, (maybe?) breaking fewer\nqueries and avoiding ugly names, on the theory that _xid doesn't\nspecify whether it's xid4 or an xid8.\n\n> > > pg_current_xact_id()\n> > > pg_current_xact_id_if_assigned()\n> > > pg_xact_status(xid8)\n\n> > > pg_current_snapshot()\n> > > pg_snapshot_xmin(pg_snapshot)\n> > > pg_snapshot_xmax(pg_snapshot)\n> > > pg_snapshot_xip(pg_snapshot)\n> > > pg_visible_in_snapshot(xid8, pg_snapshot)\n\n> > As such, I'm ±0 for the change.\n>\n> I'll let this sit for another day and see if some more reactions appear.\n\nHearing no objections, pushed. Happy to reconsider these names before\nrelease if someone finds a problem with this scheme.\n\n\n",
"msg_date": "Tue, 7 Apr 2020 12:14:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "\n\n> On Apr 6, 2020, at 5:14 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Sun, Apr 5, 2020 at 11:31 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Sun, Apr 5, 2020 at 10:34 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> The \"xid8_\" warts are partly motivated by having a type named \"xid8\", which is a bit of a wart in itself.\n>> \n>> Just a thought for the future, not sure if it's a good one: would it\n>> seem less warty in years to come if we introduced xid4 as an alias for\n>> xid, and preferred the name xid4? Then it wouldn't look so much like\n>> xid is the \"real\" transaction ID type and xid8 is some kind of freaky\n>> extended version; instead it would look like xid4 and xid8 are narrow\n>> and wide transaction IDs, and xid is just a historical name for xid4.\n> \n> I'll look into proposing that for PG14. One reason I like that idea\n> is that system view names like backend_xid could potentially retain\n> their names while switching to xid8 type, (maybe?) breaking fewer\n> queries and avoiding ugly names, on the theory that _xid doesn't\n> specify whether it's xid4 or an xid8.\n> \n>>>> pg_current_xact_id()\n>>>> pg_current_xact_id_if_assigned()\n>>>> pg_xact_status(xid8)\n> \n>>>> pg_current_snapshot()\n>>>> pg_snapshot_xmin(pg_snapshot)\n>>>> pg_snapshot_xmax(pg_snapshot)\n>>>> pg_snapshot_xip(pg_snapshot)\n>>>> pg_visible_in_snapshot(xid8, pg_snapshot)\n> \n>>> As such, I'm ±0 for the change.\n>> \n>> I'll let this sit for another day and see if some more reactions appear.\n> \n> Hearing no objections, pushed. 
Happy to reconsider these names before\n> release if someone finds a problem with this scheme.\n\nHmm, I should have spoken sooner...\n\nsrc/backend/replication/walsender.c:static bool TransactionIdInRecentPast(TransactionId xid, uint32 epoch);\nsrc/backend/utils/adt/xid8funcs.c:TransactionIdInRecentPast(FullTransactionId fxid, TransactionId *extracted_xid)\n\nI don't care much for having two different functions with the same name and related semantics but different argument types.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 16 Apr 2020 08:44:36 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Fri, Apr 17, 2020 at 3:44 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Hmm, I should have spoken sooner...\n>\n> src/backend/replication/walsender.c:static bool TransactionIdInRecentPast(TransactionId xid, uint32 epoch);\n> src/backend/utils/adt/xid8funcs.c:TransactionIdInRecentPast(FullTransactionId fxid, TransactionId *extracted_xid)\n>\n> I don't care much for having two different functions with the same name and related semantics but different argument types.\n\nMaybe that's not ideal, but it's not because of this patch. Those\nfunctions are from 5737c12df05 and 857ee8e391f.\n\n\n",
"msg_date": "Fri, 17 Apr 2020 05:56:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Thu, Apr 2, 2020 at 5:13 PM Andres Freund <andres@anarazel.de> wrote:\n> Given that txid_current() \"always\" has been a plain 64 bit integer, and\n> the various txid_* functions always have returned 64 bit integers, I\n> really don't think arguing for some 32bit/32bit situation now makes\n> sense.\n\nI'm not sure what the best thing to do is here, but the reality is\nthat there are many places where 32-bit XIDs are going to be showing\nup for years to come. With the format printed as a raw 64-bit\nquantity, people troubleshooting stuff are going to spend a lot of\ntime figuring what x%2^32 is. And I can't do that in my head. So I\nthink saying that the proposal does not make sense is a gross\noverstatement. It may not be what we want to do. But it definitely\nwould make sense.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Apr 2020 13:33:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
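As a concrete sketch of the epoch/xid split discussed above, here is a hypothetical helper (plain Python, not a PostgreSQL function) that renders a 64-bit txid in "epoch:xid" form, so nobody has to compute x % 2^32 by hand while troubleshooting:

```python
def format_txid(txid64: int) -> str:
    # Hypothetical troubleshooting helper: split a 64-bit txid, e.g.
    # the value returned by txid_current(), into its two halves.
    epoch = txid64 >> 32          # high 32 bits: the epoch
    xid = txid64 & 0xFFFFFFFF     # low 32 bits: the familiar 32-bit xid
    return f"{epoch}:{xid}"
```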
{
"msg_contents": "Hi,\n\nOn 2020-04-17 13:33:53 -0400, Robert Haas wrote:\n> On Thu, Apr 2, 2020 at 5:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > Given that txid_current() \"always\" has been a plain 64 bit integer, and\n> > the various txid_* functions always have returned 64 bit integers, I\n> > really don't think arguing for some 32bit/32bit situation now makes\n> > sense.\n> \n> I'm not sure what the best thing to do is here, but the reality is\n> that there are many places where 32-bit XIDs are going to be showing\n> up for years to come. With the format printed as a raw 64-bit\n> quantity, people troubleshooting stuff are going to spend a lot of\n> time figuring what x%2^32 is. And I can't do that in my head. So I\n> think saying that the proposal does not makes sense is a gross\n> overstatement. It may not be what we want to do. But it definitely\n> would make sense.\n\nYou seem to be entirely disregarding my actual point, namely that\ntxid_current(), as well as some other txid_* functions, have returned\n64bit xids for many many years. txid_current() is the only function to\nget the current xid in a reasonable way. I don't understand how a\nproposal to add a 32/32 bit representation *in addition* to the existing\n32 and 64bit representations is going to improve the situation. Nor do I\nsee changing txid_current()'s return format as something we're going to\ngo for.\n\nI did not argue against a function to turn 64bit xids into epoch/32bit\nxid or such.\n\n?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Apr 2020 10:45:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Fri, Apr 17, 2020 at 1:45 PM Andres Freund <andres@anarazel.de> wrote:\n> You seem to be entirely disregarding my actual point, namely that\n> txid_current(), as well as some other txid_* functions, have returned\n> 64bit xids for many many years. txid_current() is the only function to\n> get the current xid in a reasonable way. I don't understand how a\n> proposal to add a 32/32 bit representation *in addition* to the existing\n> 32 and 64bit representations is going to improve the situation. Nor do I\n> see changing txid_current()'s return format as something we're going to\n> go for.\n>\n> I did not argue against a function to turn 64bit xids into epoch/32bit\n> xid or such.\n\nI thought we were talking about how the new xid8 type ought to behave.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Apr 2020 14:07:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-17 14:07:07 -0400, Robert Haas wrote:\n> On Fri, Apr 17, 2020 at 1:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > You seem to be entirely disregarding my actual point, namely that\n> > txid_current(), as well as some other txid_* functions, have returned\n> > 64bit xids for many many years. txid_current() is the only function to\n> > get the current xid in a reasonable way. I don't understand how a\n> > proposal to add a 32/32 bit representation *in addition* to the existing\n> > 32 and 64bit representations is going to improve the situation. Nor do I\n> > see changing txid_current()'s return format as something we're going to\n> > go for.\n> >\n> > I did not argue against a function to turn 64bit xids into epoch/32bit\n> > xid or such.\n> \n> I thought we were talking about how the new xid8 type ought to behave.\n\nYes? But that type doesn't exist in isolation. Having yet another\nsignificantly different representation of 64bit xids (as plain 64 bit\nintegers, and as some 32/32 epoch/xid split) would make an already\nconfusing situation even more complex.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Apr 2020 11:42:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On 2020-Apr-17, Andres Freund wrote:\n\n> Yes? But that type doesn't exist in isolation. Having yet another\n> significantly different representation of 64bit xids (as plain 64 bit\n> integers, and as some 32/32 epoch/xid split) would make an already\n> confusing situation even more complex.\n\nOn the contrary -- I think it would clarify a confusing situation.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Apr 2020 15:26:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
},
{
"msg_contents": "On Tue, Apr 7, 2020 at 12:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Apr 5, 2020 at 11:31 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sun, Apr 5, 2020 at 10:34 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> > > The \"xid8_\" warts are partly motivated by having a type named \"xid8\", which is a bit of a wart in itself.\n> >\n> > Just a thought for the future, not sure if it's a good one: would it\n> > seem less warty in years to come if we introduced xid4 as an alias for\n> > xid, and preferred the name xid4? Then it wouldn't look so much like\n> > xid is the \"real\" transaction ID type and xid8 is some kind of freaky\n> > extended version; instead it would look like xid4 and xid8 are narrow\n> > and wide transaction IDs, and xid is just a historical name for xid4.\n>\n> I'll look into proposing that for PG14. One reason I like that idea\n> is that system view names like backend_xid could potentially retain\n> their names while switching to xid8 type, (maybe?) breaking fewer\n> queries and avoiding ugly names, on the theory that _xid doesn't\n> specify whether it's xid4 or an xid8.\n\nHere's a patch that renames xid to xid4, but I realised that we lack\nthe technology to create a suitable backwards compat alias. The\nbigint/int8 keyword trick doesn't work here, because it would break\nexisting queries using xid as, for example, a function argument name.\nPerhaps we could invent a new kind of type that is a simple alias for\nanother type, and is entirely replaced by the base type early in\nprocessing, so that you can do type aliases without bigint-style\nkeywords. Perhaps all of this is not worth the churn just for a\nneatnik project.",
"msg_date": "Sat, 18 Apr 2020 09:17:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we add xid_current() or a int8->xid cast?"
}
]
[
{
"msg_contents": "Hi folks\n\nI recently tracked down a race in shutdown of logical walsenders that can\ncause PostgreSQL shutdown to hang for wal_sender_timeout/2 before it\ncontinues to a normal shutdown. With a long timeout that can be quite\ndisruptive.\n\nTL;DR: The logical walsender may be signalled to stop, then read the last\nWAL record before the shutdown checkpoint is due to be written and go to\nsleep. The checkpointer will wait for it to acknowledge the shutdown and\nthe walsender will wait for new WAL. The deadlock is eventually broken by\nthe walsender timeout keepalive timer.\n\nPatch attached.\n\nThe issue arises from the difference between logical and physical walsender\nshutdown as introduced by commit c6c3334364 \"Prevent possibility of panics\nduring shutdown checkpoint\". It's probably fairly hard to trigger. I ran\ninto a case where it happened regularly only because of an unrelated patch\nthat caused some WAL to be written just before the checkpointer issued\nwalsender shutdown requests. But it's still a legit bug.\n\nIf you hit the issue you'll see that walsender(s) can be seen to be\nsleeping in WaitLatchOrSocket in WalSndLoop. They'll keep sleeping until\nwoken by the keepalive timeout. The checkpointer will be waiting in\nWalSndWaitStopping() for the walsenders to enter WALSNDSTATE_STOPPING or\nexit, whichever happens first. The postmaster will be waiting in ServerLoop\nfor the checkpointer to finish the shutdown checkpoint.\n\nThe checkpointer waits in WalSndWaitStopping() for all walsenders to either\nexit or enter WALSNDSTATE_STOPPING state. Logical walsenders never enter\nWALSNDSTATE_STOPPING, they go straight to exiting, so the checkpointer\ncan't finish WalSndWaitStopping() and write the shutdown checkpoint. 
A\nlogical walsender usually notices the shutdown request and exits as soon as\nit has flushed all WAL up to the server's flushpoint, while physical\nwalsenders enter WALSNDSTATE_STOPPING.\n\nBut there's a race where a logical walsender may read the final available\nrecord and notice it has caught up - but not notice that it has reached\nend-of-WAL and check whether it should exit. This happens on the following\n(simplified) code path in XLogSendLogical:\n\n if (record != NULL)\n {\n XLogRecPtr flushPtr = GetFlushRecPtr();\n LogicalDecodingProcessRecord(...);\n sentPtr = ...;\n if (sentPtr >= flushPtr)\n WalSndCaughtUp = true; // <-- HERE\n }\n\nbecause the test for got_STOPPING that sets got_SIGUSR2 is only on the\nother branch where getting a record returns `NULL`; this branch can sleep\nbefore checking if shutdown was requested.\n\nSo if the walsender read the last WAL record available, when it's >= the\nflush pointer and it already handled the SIGUSR1 latch wakeup for the WAL\nwrite, it might go back to sleep and not wake up until the timeout.\n\nThe checkpointer already sent PROCSIG_WALSND_INIT_STOPPING to the\nwalsenders in the prior WalSndInitStopping() call so the walsender won't be\nwoken by a signal from the checkpointer. No new WAL will be written because\nthe walsender just consumed the final record written before the\ncheckpointer went to sleep, and the checkpointer won't write anything more\nuntil the walsender exits. The client might not be due a keepalive for some\ntime.The only reason this doesn't turn into a total deadlock is that\nkeepalive wakeup.\n\nAn alternative fix would be to have the logical walsender set\nWALSNDSTATE_STOPPING instead of faking got_SIGUSR2, then go to sleep\nwaiting for more WAL. Logical decoding would need to check if it was\nrunning during shutdown and Assert(...) then ERROR if it saw any WAL\nrecords that result in output plugin calls or snapshot management calls. 
I\navoided this approach as it's more intrusive and I'm not confident I can\nconcoct a reliable test to trigger it.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 25 Jul 2019 09:24:26 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Race condition in logical walsender causes long postgresql\n shutdown delay"
},
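The control-flow fix described above can be sketched as follows. This is a Python stand-in for the C logic in XLogSendLogical(), with names mirroring walsender.c but heavily simplified; it is not the actual patch:

```python
def xlog_send_logical(record, flush_ptr, state):
    # Simplified stand-in for XLogSendLogical() in walsender.c.
    if record is not None:
        state["sent_ptr"] = record["end_lsn"]   # decoding/sending elided
        if state["sent_ptr"] >= flush_ptr:
            state["caught_up"] = True
    else:
        state["caught_up"] = True
    # The fix: this check runs on BOTH branches, so a walsender that
    # reads the final record before the shutdown checkpoint still
    # notices the stop request instead of sleeping until a keepalive
    # wakes it.
    if state["caught_up"] and state["got_stopping"]:
        state["got_sigusr2"] = True             # proceed to shutdown
```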
{
"msg_contents": "On 2019-Jul-25, Craig Ringer wrote:\n\n> Patch attached.\n\nHere's a non-broken version of this patch. I have not done anything\nother than reflowing the new comment.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 3 Sep 2019 18:18:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Race condition in logical walsender causes long\n postgresql shutdown delay"
},
{
"msg_contents": "On 2019-Sep-03, Alvaro Herrera wrote:\n\n> On 2019-Jul-25, Craig Ringer wrote:\n> \n> > Patch attached.\n> \n> Here's a non-broken version of this patch. I have not done anything\n> other than reflowing the new comment.\n\nReading over this code, I noticed that the detection of the catch-up\nstate ends up being duplicate code, so I would rework that function as\nin the attached patch.\n\nThe naming of those flags (got_SIGUSR2, got_STOPPING) is terrible, but\nI'm not going to change that in a backpatchable bug fix.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 11 Sep 2019 16:52:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Race condition in logical walsender causes long\n postgresql shutdown delay"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 3:52 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n>\n> Reading over this code, I noticed that the detection of the catch-up\n> state ends up being duplicate code, so I would rework that function as\n> in the attached patch.\n>\n> The naming of those flags (got_SIGUSR2, got_STOPPING) is terrible, but\n> I'm not going to change that in a backpatchable bug fix.\n>\n\nHi Alvaro, does this count as a review? And Craig, do you agree with\nAlvaro's patch as a replacement for your own?\n\nThanks,\n\nJeff",
"msg_date": "Thu, 26 Sep 2019 19:57:51 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Race condition in logical walsender causes long\n postgresql shutdown delay"
},
{
"msg_contents": "On 2019-Sep-26, Jeff Janes wrote:\n\n> On Wed, Sep 11, 2019 at 3:52 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> \n> > Reading over this code, I noticed that the detection of the catch-up\n> > state ends up being duplicate code, so I would rework that function as\n> > in the attached patch.\n> >\n> > The naming of those flags (got_SIGUSR2, got_STOPPING) is terrible, but\n> > I'm not going to change that in a backpatchable bug fix.\n> \n> Hi Alvaro, does this count as a review?\n\nWell, I'm already a second pair of eyes for Craig's code, so I think it\ndoes :-) I would have liked confirmation from Craig that my change\nlooks okay to him too, but maybe we'll have to go without that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 22:22:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Race condition in logical walsender causes long\n postgresql shutdown delay"
},
{
"msg_contents": "On 2019-Sep-26, Alvaro Herrera wrote:\n\n> On 2019-Sep-26, Jeff Janes wrote:\n\n> > Hi Alvaro, does this count as a review?\n> \n> Well, I'm already a second pair of eyes for Craig's code, so I think it\n> does :-) I would have liked confirmation from Craig that my change\n> looks okay to him too, but maybe we'll have to go without that.\n\nThere not being a third pair of eyes, I have pushed this.\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 17 Oct 2019 10:13:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Race condition in logical walsender causes long\n postgresql shutdown delay"
},
{
"msg_contents": "On Thu, 17 Oct 2019 at 21:19, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Sep-26, Alvaro Herrera wrote:\n>\n> > On 2019-Sep-26, Jeff Janes wrote:\n>\n> > > Hi Alvaro, does this count as a review?\n> >\n> > Well, I'm already a second pair of eyes for Craig's code, so I think it\n> > does :-) I would have liked confirmation from Craig that my change\n> > looks okay to him too, but maybe we'll have to go without that.\n>\n> There not being a third pair of eyes, I have pushed this.\n>\n> Thanks!\n>\n>\n> Thanks.\n\nI'm struggling to keep up with my own threads right now...\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 18 Oct 2019 18:14:59 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Race condition in logical walsender causes long\n postgresql shutdown delay"
}
] |
[
{
"msg_contents": "Hi,\n\nInitdb fails when following path is provided as input:\ndatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n\nAlso the cleanup also tends to fail in the cleanup path.\n\nCould be something to do with path handling.\nI'm not sure if this is already known.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jul 2019 11:08:40 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Initdb failure"
},
{
"msg_contents": "On Thu, 25 Jul 2019 at 07:39, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> Initdb fails when following path is provided as input:\n> datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n>\n> Also the cleanup also tends to fail in the cleanup path.\n>\n> Could be something to do with path handling.\nThis is because the value of MAXPGPATH is 1024 and the path you are\nproviding is more than that. Hence, when it is trying to read\nPG_VERSION in ValidatePgVersion it is going to a wrong path with just\n1024 characters.\n\n> I'm not sure if this is already known.\nI am also not sure if it is known or intentional. 
On the other hand I\nalso don't know if it is practical to give such long names for\ndatabase directory anyway.\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 25 Jul 2019 13:21:52 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 4:52 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> On Thu, 25 Jul 2019 at 07:39, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Initdb fails when following path is provided as input:\n> > datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n> >\n> > Also the cleanup also tends to fail in the cleanup path.\n> >\n> > Could be something to do with path handling.\n> This is because the value of MAXPGPATH is 1024 and the path you are\n> providing is more than that. 
Hence, when it is trying to read\n> PG_VERSION in ValidatePgVersion it is going to a wrong path with just\n> 1024 characters.\n>\n\nThe error occurs at quite a late point, after performing initial\nwork like creating the directory. I'm thinking we should check this at\nthe beginning, throw the error message there, and exit\ncleanly.\n\n>\n> > I'm not sure if this is already known.\n> I am also not sure if it is known or intentional. On the other hand I\n> also don't know if it is practical to give such long names for\n> database directory anyway.\n>\n\nUsually nobody will be using such a long path, but this is one of the\nodd scenarios.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jul 2019 17:20:24 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On Thu, 25 Jul 2019 at 13:50, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Jul 25, 2019 at 4:52 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> >\n> > On Thu, 25 Jul 2019 at 07:39, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Initdb fails when following path is provided as input:\n> > > datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n> > >\n> > > Also the cleanup also tends to fail in the cleanup path.\n> > >\n> > > Could be something to do with path handling.\n> > This is because the value of MAXPGPATH is 1024 and the path you are\n> > providing is more than that. 
Hence, when it is trying to read\n> > PG_VERSION in ValidatePgVersion it is going to a wrong path with just\n> > 1024 characters.\n> >\n>\n> The error occurs at a very later point after performing the initial\n> work like creating directory. I'm thinking we should check this in\n> the beginning and throw the error message at the beginning and exit\n> cleanly.\n>\nNow that you say this, it does make sense to at least inform about the\ncorrect error, and that too earlier. Something like the attached patch\nwould make sense.\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Thu, 25 Jul 2019 16:10:47 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "Rafia Sabih <rafia.pghackers@gmail.com> writes:\n> On Thu, 25 Jul 2019 at 13:50, vignesh C <vignesh21@gmail.com> wrote:\n>>> Initdb fails when following path is provided as input:\n>>> datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n\n> Now that you say this, it does make sense to atleast inform about the\n> correct error and that too earlier. Something like the attached patch\n> would make sense.\n\nI am not terribly excited about putting effort into this at all, because\nI don't think that any actual user anywhere will ever get any benefit.\nThe proposed test case is just silly.\n\nHowever, if we are going to put effort into it, it needs to be more than\nthis. First off, what is the actual failure point? 
(It's surely less\nthan MAXPGPATH, because we tend to append subdirectory/file names onto\nwhatever is given.) Second, what of absolute versus relative paths?\nIf converting the given path to absolute makes it exceed MAXPGPATH,\nis that a problem?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jul 2019 10:44:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On Thu, 25 Jul 2019 at 16:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Rafia Sabih <rafia.pghackers@gmail.com> writes:\n> > On Thu, 25 Jul 2019 at 13:50, vignesh C <vignesh21@gmail.com> wrote:\n> >>> Initdb fails when following path is provided as input:\n> >>> datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n>\n> > Now that you say this, it does make sense to atleast inform about the\n> > correct error and that too earlier. 
Something like the attached patch\n> > would make sense.\n>\n> I am not terribly excited about putting effort into this at all, because\n> I don't think that any actual user anywhere will ever get any benefit.\n> The proposed test case is just silly.\n\nThat I totally agree upon!\nBut on the other hand, emitting the right error message would at least\nbe good for the sake of correctness if nothing else. But yes, that\ndefinitely should be weighed against the effort required for\nthis.\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 25 Jul 2019 17:09:04 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 8:39 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> On Thu, 25 Jul 2019 at 16:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Rafia Sabih <rafia.pghackers@gmail.com> writes:\n> > > On Thu, 25 Jul 2019 at 13:50, vignesh C <vignesh21@gmail.com> wrote:\n> > >>> Initdb fails when following path is provided as input:\n> > >>> datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/datasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafddsdatasadfasfdsafdds/\n> >\n> > > Now that you say this, it does make sense to atleast inform about the\n> > > correct error and that too earlier. 
Something like the attached patch\n> > > would make sense.\n> >\n> > I am not terribly excited about putting effort into this at all, because\n> > I don't think that any actual user anywhere will ever get any benefit.\n> > The proposed test case is just silly.\n>\n> That I totally agree upon!\n> But on the other hand emitting the right error message atleast would\n> be good for the sake of correctness if nothing else. But yes that\n> definitely should be weighed against what is the effort required for\n> this.\n>\nThanks Tom for your opinion.\nThanks Rafia for your thoughts and effort in making the patch.\n\nI'm not sure if we are planning to fix this.\nIf we are planning to fix it, one suggestion from my side: we can\nchoose a safe length which would account for the subdirectory\nand file paths. I think one of these will be the longest:\nbase/database_oid/tables\npg_wal/archive_status/\npg_wal/archive_file\n\nFix can be something like:\nMAXPGPATH - (LONGEST_PATH_FROM_ABOVE)\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jul 2019 22:25:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On 2019-07-25 17:09, Rafia Sabih wrote:\n> But on the other hand emitting the right error message atleast would\n> be good for the sake of correctness if nothing else. But yes that\n> definitely should be weighed against what is the effort required for\n> this.\n\nI think if you want to make this more robust, get rid of the fixed-size\narray, use dynamic allocation with PQExpBuffer, and let the operating\nsystem complain if it doesn't like the directory name length.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 27 Jul 2019 08:22:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 2:22 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I think if you want to make this more robust, get rid of the fixed-size\n> array, use dynamic allocation with PQExpBuffer, and let the operating\n> system complain if it doesn't like the directory name length.\n\nAgreed, but I think we should just do nothing. To actually fix this\nin general, we'd have to get rid of every instance of MAXPGPATH in the\nsource tree:\n\n[rhaas pgsql]$ git grep MAXPGPATH | wc -l\n 611\n\nIf somebody feels motivated to spend that amount of effort improving\nthis, I will stand back and cheer from the sidelines, but that's gonna\nbe a LOT of work for a problem that, as Tom says, is probably not\nreally affecting very many people.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Jul 2019 14:51:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Agreed, but I think we should just do nothing. To actually fix this\n> in general, we'd have to get rid of every instance of MAXPGPATH in the\n> source tree:\n> [rhaas pgsql]$ git grep MAXPGPATH | wc -l\n> 611\n\nI don't think it'd really be necessary to go that far. One of the\nreasons we chdir to the data directory at postmaster start is so that\n(pretty nearly) all filenames that backends deal with are relative\npathnames of very predictable, short lengths. So a lot of those\nMAXPGPATH uses are probably perfectly safe, indeed likely overkill.\n\nHowever, identifying which ones are not safe would still take looking\nat every use case, so I agree there'd be a lot of work here.\n\nWould there be any value in teaching initdb to do something similar,\nie chdir to the parent of the target datadir location? Then the set\nof places in initdb that have to deal with long pathnames would be\npretty well constrained.\n\nOn the whole though, I don't have a problem with the \"do nothing\"\nanswer. There's no security risk here, and no issue that seems\nlikely to arise in actual use cases rather than try-to-break-it\ntest scripts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 15:30:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 03:30:12PM -0400, Tom Lane wrote:\n> On the whole though, I don't have a problem with the \"do nothing\"\n> answer. There's no security risk here, and no issue that seems\n> likely to arise in actual use cases rather than try-to-break-it\n> test scripts.\n\n+1.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 10:42:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Initdb failure"
}
] |
[
{
"msg_contents": "Hi folks,\n\nI’ve run into a planning conundrum with my query rewriting extension for MVs when attempting to rewrite a RECURSIVE CTE.\n\nRECURSIVE CTEs are expensive — and presumably tricky to optimise — and so a good use case for query rewrite against an MV; all the more so if Yugo’s Incremental View Maintenance concept gets traction.\n\nI want to add an alternative Path for the UPPERREL_FINAL of the CTE root, but my new MV scan path (which is actually a thin CustomScan atop a scan of the MV) is rejected in favour of the existing paths. \n\nThis seems to be because my Path is more expensive than the Rel’s existing Paths when considered alone. (The CTE’s final scan is actually a scan Path over a worktable, so it really is much lighter.)\n\nHowever, if I factor back in the cost of the InitPlan, things net out much more in favour of a scan against the MV. Of course, the add_path() comparison logic doesn’t include the InitPlan cost, so the point is moot. \n\nI’m wondering how I should approach this problem. First pass, I can’t see how to achieve an amicable solution with existing infrastructure.\n\nI have a few possible solutions. Do any of the following make sense?\n\n1. Override the add_path() logic to force my Path to win? This was initially my least favourite approach, but perhaps it’s actually the most pragmatic. Advantage is I think I could do this entirely in my EXTENSION. \n\n2. Make a new version of add_path() which is more aware of dependencies.\n\nSeems #2 could have utility in PG generally. If I’m not wrong, my guess is that one of the reasons for the >=2-references-for-materialising-a-CTE;1-for-inlining policy is that we don’t have the planner logic to trade off materialisation versus inlining.\n\nAlso, I am wondering if my MV rewrite logic misses cases where the planner decides to materialise an intermediate result as an InitPlan for later processing. \n\n3. 
I considered creating a new root PlannerInfo structure, and burying the existing one another level down, alongside my MV scan, in a Gather-like arrangement. That converts the costing conundrum to a choice between roots. Obviously that will include the InitPlan costs. I figured I could eliminate one sub-root much as Path elimination works. But on reflection, I’m not sure PG has enough flexibility in the Path concept to support this route forward.\n\nI’d welcome any views, ideas or advice. \n\nd.\n\n\n",
"msg_date": "Thu, 25 Jul 2019 09:17:54 +0100",
"msg_from": "Dent John <denty@qqdd.eu>",
"msg_from_op": true,
"msg_subject": "add_path() for Path without InitPlan: cost comparison vs. Paths that\n require one"
},
{
"msg_contents": "Dent John <denty@qqdd.eu> writes:\n> However, if I factor back in the cost of the InitPlan, things net out much more in favour of a scan against the MV. Of course, the add_path() comparison logic doesn’t include the InitPlan cost, so the point is moot. \n\nPlease explain yourself. InitPlans will, as a rule, get stuck into the\nsame place in the plan tree regardless of which paths are chosen; that's\nwhy we need not consider them in path cost comparisons. Moreover, once\nthe initplan's own subplan is determined, it's going to be the same\nregardless of the shape of the outer query --- so if we did factor it\nin, it'd contribute the same cost to every outer path, and thus it\nstill wouldn't change any decisions. So I don't follow what you're\non about here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jul 2019 09:25:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add_path() for Path without InitPlan: cost comparison vs. Paths\n that require one"
},
{
"msg_contents": "Hi Tom,\n\n> On 25 Jul 2019, at 14:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Please explain yourself. InitPlans will, as a rule, get stuck into the\n> same place in the plan tree regardless of which paths are chosen; that's\n> why we need not consider them in path cost comparisons.\n\nAh that’s true. I didn’t realise that at the time I wrote. \n\nBut I think my problem is still real...\n\n> Moreover, once\n> the initplan's own subplan is determined, it's going to be the same\n> regardless of the shape of the outer query ---\n\nYes that’s true too. \n\n> so if we did factor it\n> in, it'd contribute the same cost to every outer path, and thus it\n> still wouldn't change any decisions. \n\nI think I’m exposed to the problem because I’m changing how certain queries are fulfilled. \n\nAnd in the case of a RECURSIVE CTE, the plan ends up being an InitPlan that materializes the CTE, and then a scan of that materialized result.\n\nThe problem is that I can fulfil the entire query with a scan against an MV table. Point is it’s an alternative that achieves both the InitPlan (because it’s unnecessary) and the final scan.\n\nBut the cost comparison during add_path() is only taking into account the cost of the final scan, which is so cheap that it is preferable even to a simple scan of an MV. \n\n> So I don't follow what you're\n> on about here.\n\nHmm. Having written the above, I realise I’m not clear on why my extension isn’t offered the opportunity to materialise the work table for the InitPlan.\n\nSorry. I should have thought about that question first. It might just be an error in my code. I’ll follow up with an answer.\n\n> \n> regards, tom lane\n\n\n\n",
"msg_date": "Thu, 25 Jul 2019 17:52:01 +0100",
"msg_from": "Dent John <denty@qqdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: add_path() for Path without InitPlan: cost comparison vs. Paths\n that require one"
}
] |
[
{
"msg_contents": "We've discussed this internally many times, but today finally decided\nto write up a doc patch.\n\nAutovacuum holds a SHARE UPDATE EXCLUSIVE lock, but other processes\ncan cancel autovacuum if blocked by that lock unless the autovacuum is\nto prevent wraparound. This can result in very surprising behavior:\nimagine a system that needs to run ANALYZE manually before batch jobs\nto ensure reasonable query plans. That ANALYZE will interrupt attempts\nto run autovacuum, and pretty soon the table is far more bloated than\nexpected, and query plans (ironically) degrade further.\n\nAttached is a patch to document that behavior (as opposed to just in\nthe code at src/backend/storage/lmgr/proc.c:1320-1321).\n\nJames Coleman",
"msg_date": "Thu, 25 Jul 2019 15:14:48 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "[DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 1:45 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> We've discussed this internally many times, but today finally decided\n> to write up a doc patch.\n>\n\nThanks, I think something along the lines of what you have written can\nhelp some users to understand the behavior in this area, and there\ndoesn't seem to be any harm in giving such information to the user.\n\n> Autovacuum holds a SHARE UPDATE EXCLUSIVE lock, but other processes\n> can cancel autovacuum if blocked by that lock unless the autovacuum is\n> to prevent wraparound.This can result in very surprising behavior:\n> imagine a system that needs to run ANALYZE manually before batch jobs\n> to ensure reasonable query plans. That ANALYZE will interrupt attempts\n> to run autovacuum, and pretty soon the table is far more bloated than\n> expected, and query plans (ironically) degrade further.\n>\n\n+ If a process attempts to acquire a <literal>SHARE UPDATE\nEXCLUSIVE</literal>\n+ lock (the lock type held by autovacuum), lock acquisition will interrupt\n+ the autovacuum.\n\nI think it applies not only to a process that tries to acquire a lock in\nSHARE UPDATE EXCLUSIVE mode, but rather to any process that tries to acquire\na lock mode that conflicts with SHARE UPDATE EXCLUSIVE. For the\nconflicting lock modes, you can refer to the docs [1] (see Table 13.2,\nConflicting Lock Modes).\n\n[1] - https://www.postgresql.org/docs/devel/explicit-locking.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 1 Sep 2019 08:21:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Sat, Aug 31, 2019 at 10:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 26, 2019 at 1:45 AM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > We've discussed this internally many times, but today finally decided\n> > to write up a doc patch.\n> >\n>\n> Thanks, I think something on the lines of what you have written can\n> help some users to understand the behavior in this area and there\n> doesn't seem to be any harm in giving such information to the user.\n>\n> > Autovacuum holds a SHARE UPDATE EXCLUSIVE lock, but other processes\n> > can cancel autovacuum if blocked by that lock unless the autovacuum is\n> > to prevent wraparound.This can result in very surprising behavior:\n> > imagine a system that needs to run ANALYZE manually before batch jobs\n> > to ensure reasonable query plans. That ANALYZE will interrupt attempts\n> > to run autovacuum, and pretty soon the table is far more bloated than\n> > expected, and query plans (ironically) degrade further.\n> >\n>\n> + If a process attempts to acquire a <literal>SHARE UPDATE\n> EXCLUSIVE</literal>\n> + lock (the lock type held by autovacuum), lock acquisition will interrupt\n> + the autovacuum.\n>\n> I think it is not only for a process that tries to acquire a lock in\n> SHARE UPDATE EXCLUSIVE mode, rather when a process tries to acquire\n> any lock mode that conflicts with SHARE UPDATE EXCLUSIVE. For the\n> conflicting lock modes, you can refer docs [1] (See Table 13.2.\n> Conflicting Lock Modes).\n>\n> [1] - https://www.postgresql.org/docs/devel/explicit-locking.html\n\nUpdated patch attached. I changed the wording to be about conflicting\nlocks rather than a single lock type, added a link to the conflicting\nlocks table, and fixed a few sgml syntax issues in the original.\n\nJames Coleman",
"msg_date": "Fri, 13 Sep 2019 14:29:28 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 11:59 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sat, Aug 31, 2019 at 10:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Updated patch attached. I changed the wording to be about conflicting\n> locks rather than a single lock type, added a link to the conflicting\n> locks table, and fixed a few sgml syntax issues in the original.\n>\n\nI see an error while compiling this patch on HEAD. See the below error:\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\npostgres.sgml:833: element xref: validity error : IDREF attribute\nlinkend references an unknown ID\n\"mvcc-locking-tables-table-lock-compatibility\"\nmake: *** [check] Error 4\n\nThe tag id mvcc-locking-tables-table-lock-compatibility is wrong. The\nother problem I see is the wrong wording in one of the literals. I\nhave fixed both of these issues and slightly tweaked one of the\nsentences. See the updated patch attached. On which version are you\npreparing this patch? I see both HEAD and 9.4 have the problems fixed\nby me.\n\nLet me know what you think of the attached. I think we can back-patch\nthis patch. What do you think? Does anyone else have an opinion on\nthis patch, especially if we see any problem in back-patching this?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 17 Sep 2019 11:51:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 2:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 13, 2019 at 11:59 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > On Sat, Aug 31, 2019 at 10:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > Updated patch attached. I changed the wording to be about conflicting\n> > locks rather than a single lock type, added a link to the conflicting\n> > locks table, and fixed a few sgml syntax issues in the original.\n> >\n>\n> I see error while compiling this patch on HEAD. See the below error:\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> postgres.sgml:833: element xref: validity error : IDREF attribute\n> linkend references an unknown ID\n> \"mvcc-locking-tables-table-lock-compatibility\"\n> make: *** [check] Error 4\n>\n> The tag id mvcc-locking-tables-table-lock-compatibility is wrong.\n\nMy apologies; I'd fixed that on my local copy before sending my last\nemail, but I must have somehow grabbed the wrong patch file to attach\nto the email.\n\n> The\n> other problem I see is the wrong wording in one of the literals. I\n> have fixed both of these issues and slightly tweaked one of the\n> sentence. See the updated patch attached. On which version, are you\n> preparing this patch? I see both HEAD and 9.4 has the problems fixed\n> by me.\n>\n> Let me know what you think of attached? I think we can back-patch\n> this patch. What do you think? Does anyone else have an opinion on\n> this patch especially if we see any problem in back-patching this?\n\nThe attached looks great!\n\nI was working on HEAD for the patch, but this concern has been an\nissue for quite a long time. We were running into it on 9.6 in\nproduction, for example. And given how frequently it seems like there\nare large-scale production issues related to auto vacuum, I think any\namount of back patching we can do to make that footgun less likely\nwould be a good thing.\n\nJames Coleman\n\n\n",
"msg_date": "Tue, 17 Sep 2019 08:18:41 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 5:48 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Tue, Sep 17, 2019 at 2:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Let me know what you think of attached? I think we can back-patch\n> > this patch. What do you think? Does anyone else have an opinion on\n> > this patch especially if we see any problem in back-patching this?\n>\n> The attached looks great!\n>\n> I was working on HEAD for the patch, but this concern has been an\n> issue for quite a long time. We were running into it on 9.6 in\n> production, for example. And given how frequently it seems like there\n> are large-scale production issues related to auto vacuum, I think any\n> amount of back patching we can do to make that footgun less likely\n> would be a good thing.\n>\n\nOkay, I will commit this tomorrow unless someone has any comments or objections.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Sep 2019 10:25:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 10:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 17, 2019 at 5:48 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > On Tue, Sep 17, 2019 at 2:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Let me know what you think of attached? I think we can back-patch\n> > > this patch. What do you think? Does anyone else have an opinion on\n> > > this patch especially if we see any problem in back-patching this?\n> >\n> > The attached looks great!\n> >\n> > I was working on HEAD for the patch, but this concern has been an\n> > issue for quite a long time. We were running into it on 9.6 in\n> > production, for example. And given how frequently it seems like there\n> > are large-scale production issues related to auto vacuum, I think any\n> > amount of back patching we can do to make that footgun less likely\n> > would be a good thing.\n> >\n>\n> Okay, I will commit this tomorrow unless someone has any comments or objections.\n>\n\nPushed with minor changes. There was one extra space in a few lines\nand the tag for back-branches (from 10~9.4) was slightly different.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 15:04:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 5:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 18, 2019 at 10:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 17, 2019 at 5:48 PM James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 17, 2019 at 2:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Let me know what you think of attached? I think we can back-patch\n> > > > this patch. What do you think? Does anyone else have an opinion on\n> > > > this patch especially if we see any problem in back-patching this?\n> > >\n> > > The attached looks great!\n> > >\n> > > I was working on HEAD for the patch, but this concern has been an\n> > > issue for quite a long time. We were running into it on 9.6 in\n> > > production, for example. And given how frequently it seems like there\n> > > are large-scale production issues related to auto vacuum, I think any\n> > > amount of back patching we can do to make that footgun less likely\n> > > would be a good thing.\n> > >\n> >\n> > Okay, I will commit this tomorrow unless someone has any comments or objections.\n> >\n>\n> Pushed with minor changes. There was one extra space in a few lines\n> and the tag for back-branches (from 10~9.4) was slightly different.\n\nI completely forgot to reply to this; thanks Amit for working on this.\n\nJames\n\n\n",
"msg_date": "Fri, 14 Feb 2020 16:14:01 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document auto vacuum interruption"
}
] |
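The cancellation rule documented in this thread can be sketched as a toy model (Python, not PostgreSQL source; the function name and structure are invented for illustration). The lock-mode names and the conflict set for SHARE UPDATE EXCLUSIVE follow the "Conflicting Lock Modes" table the patch links to.

```python
# Toy model of the autovacuum-cancellation rule discussed above.
# The lock-mode strings are real PostgreSQL lock modes; the conflict
# set for SHARE UPDATE EXCLUSIVE is taken from the docs' lock table.

CONFLICTS_WITH_SHARE_UPDATE_EXCLUSIVE = {
    "SHARE UPDATE EXCLUSIVE",   # e.g. ANALYZE, VACUUM, CREATE INDEX CONCURRENTLY
    "SHARE",                    # e.g. CREATE INDEX
    "SHARE ROW EXCLUSIVE",
    "EXCLUSIVE",
    "ACCESS EXCLUSIVE",         # e.g. ALTER TABLE, DROP TABLE
}

def blocker_cancels_autovacuum(requested_mode: str, anti_wraparound: bool) -> bool:
    """Autovacuum holds SHARE UPDATE EXCLUSIVE; a waiter whose requested
    mode conflicts with it gets the autovacuum cancelled, unless the run
    is a to-prevent-wraparound vacuum."""
    if anti_wraparound:
        return False                      # wraparound vacuum is never cancelled
    return requested_mode in CONFLICTS_WITH_SHARE_UPDATE_EXCLUSIVE
```

Under this model, the manual ANALYZE from the scenario above (which takes SHARE UPDATE EXCLUSIVE) cancels a plain autovacuum, while ordinary DML (ROW EXCLUSIVE) leaves it running.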
[
{
"msg_contents": "Hello,\r\n\r\nI am just starting to get my feet wet with PostgreSQL development and am starting to understand the source, so please be kind 😊. I am working on the REL_11_4 tag.\r\n\r\nWhen CachedPlanSource instances are created the field query_string is filled with pstrdup(query_string) in CreateCachedPlan, plancache.c:182, which is just a wrapper for strdup. According to the docs the returned pointer should be freed with “free” sometimes later.\r\n\r\nI believe in DropCachedPlan the free should take place but I don’t see it. Is it just missing or is memory allocated by strdup and friends automatically created in the current MemoryContext? It so, why do I need to use palloc() instead of malloc()?\r\n\r\nKind regards,\r\nDaniel Migowski\r\n\n\n\n\n\n\n\n\n\nHello,\n \nI am just starting to get my feet wet with PostgreSQL development and am starting to understand the source, so please be kind\r\n😊. I am working on the REL_11_4 tag.\r\n\n \nWhen CachedPlanSource instances are created the field query_string is filled with pstrdup(query_string) in CreateCachedPlan, plancache.c:182, which is just a wrapper for strdup. According to the docs the returned pointer\r\n should be freed with “free” sometimes later. \n \nI believe in DropCachedPlan the free should take place but I don’t see it. Is it just missing or is memory allocated by strdup and friends automatically created in the current MemoryContext? It so, why do I need to use\r\n palloc() instead of malloc()?\n \nKind regards,\nDaniel Migowski",
"msg_date": "Thu, 25 Jul 2019 20:21:06 +0000",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Question about MemoryContexts / possible memory leak in\n CachedPlanSource usage"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-25 20:21:06 +0000, Daniel Migowski wrote:\n> When CachedPlanSource instances are created the field query_string is\n> filled with pstrdup(query_string) in CreateCachedPlan,\n> plancache.c:182, which is just a wrapper for strdup. According to the\n> docs the returned pointer should be freed with “free” sometimes later.\n\nNote pstrdup is *not* just a wrapper for strdup:\n\n\treturn MemoryContextStrdup(CurrentMemoryContext, in);\n\ni.e. it explicitly allocates memory in the current memory context.\n\nPerhaps you looked at the version of pstrdup() in\nsrc/common/fe_memutils.c? That's just for \"frontend\" code (we call code\nthat doesn't run in the server frontend, for reasons). There we don't\nhave the whole memory context infrastructure... It's only there so we\ncan reuse code that uses pstrdup() between frontend and server.\n\n\n> I believe in DropCachedPlan the free should take place but I don’t see\n> it. Is it just missing or is memory allocated by strdup and friends\n> automatically created in the current MemoryContext? It so, why do I\n> need to use palloc() instead of malloc()?\n\nWe don't intercept malloc itself (doing so would have a *lot* of issues,\nstarting from palloc internally using malloc, and ending with having\nlots of problems with libraries because their malloc would suddenly\nbehave differently).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Jul 2019 13:30:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Question about MemoryContexts / possible memory leak in\n CachedPlanSource usage"
},
{
"msg_contents": "Ah, you are right, I looked in fe_memutils.c. Makes sense now, thanks!!\r\n\r\n\r\n-----Ursprüngliche Nachricht-----\r\nVon: Andres Freund <andres@anarazel.de> \r\nGesendet: Donnerstag, 25. Juli 2019 22:31\r\nAn: Daniel Migowski <dmigowski@ikoffice.de>\r\nCc: pgsql-hackers@postgresql.org\r\nBetreff: Re: Question about MemoryContexts / possible memory leak in CachedPlanSource usage\r\n\r\nHi,\r\n\r\nOn 2019-07-25 20:21:06 +0000, Daniel Migowski wrote:\r\n> When CachedPlanSource instances are created the field query_string is \r\n> filled with pstrdup(query_string) in CreateCachedPlan, \r\n> plancache.c:182, which is just a wrapper for strdup. According to the \r\n> docs the returned pointer should be freed with “free” sometimes later.\r\n\r\nNote pstrdup is *not* just a wrapper for strdup:\r\n\r\n\treturn MemoryContextStrdup(CurrentMemoryContext, in);\r\n\r\ni.e. it explicitly allocates memory in the current memory context.\r\n\r\nPerhaps you looked at the version of pstrdup() in src/common/fe_memutils.c? That's just for \"frontend\" code (we call code that doesn't run in the server frontend, for reasons). There we don't have the whole memory context infrastructure... It's only there so we can reuse code that uses pstrdup() between frontend and server.\r\n\r\n\r\n> I believe in DropCachedPlan the free should take place but I don’t see \r\n> it. Is it just missing or is memory allocated by strdup and friends \r\n> automatically created in the current MemoryContext? It so, why do I \r\n> need to use palloc() instead of malloc()?\r\n\r\nWe don't intercept malloc itself (doing so would have a *lot* of issues, starting from palloc internally using malloc, and ending with having lots of problems with libraries because their malloc would suddenly behave differently).\r\n\r\n\r\nGreetings,\r\n\r\nAndres Freund\r\n",
"msg_date": "Thu, 25 Jul 2019 20:40:31 +0000",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "AW: Question about MemoryContexts / possible memory leak in\n CachedPlanSource usage"
}
] |
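The distinction Andres describes — pstrdup() copying into the *current* memory context, so the copy is reclaimed wholesale when that context is deleted rather than freed individually — can be sketched with a toy analogue (Python stand-in, not the server's C code; the class and method names here are invented):

```python
# Toy analogue of a memory context: every allocation is tracked by the
# context, and deleting the context releases everything at once.  This is
# why DropCachedPlan needs no free(query_string) call -- the string lives
# in the plan's context and dies with it.

class MemoryContext:
    def __init__(self, name: str):
        self.name = name
        self.chunks = []              # every allocation made in this context

    def strdup(self, s: str) -> str:
        copy = str(s)                 # stands in for MemoryContextStrdup()
        self.chunks.append(copy)
        return copy

    def delete(self) -> int:
        """Delete the context: all chunks are released in one step."""
        n = len(self.chunks)
        self.chunks.clear()
        return n

# CreateCachedPlan: the query string is duplicated into the plan's context.
plan_context = MemoryContext("CachedPlanSource")
query_string = plan_context.strdup("SELECT 1")

# DropCachedPlan: deleting the context frees query_string implicitly.
freed = plan_context.delete()
```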
[
{
"msg_contents": "Hello.\n\nWhile looking [1], I noticed that NOTICE messages of the next\ncommand is processed before PQgetResult returns. Clients can\nreceive such spurious NOTICE messages.\n\nLooking pqParseInput3, its main loop seems considered to exit\nafter complete messages is processed. (As I read.)\n\n> * Loop to parse successive complete messages available in the buffer.\n\nBut actually, 'C' message doesn't work that way. I think we\nshould do as the comment suggests. Clients still can process\nasync messages or (somehow issued) NOTICE messages in later\ncalls.\n\n\n[1]: https://www.postgresql.org/message-id/alpine.DEB.2.21.1904132231510.8961@lancre\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 26 Jul 2019 13:18:01 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "pqParseInput3 overruns"
}
] |
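The behavior Kyotaro suggests — exiting the parse loop once the current command's result is complete, leaving later messages buffered for subsequent calls — can be sketched as a toy model (Python, not libpq code; only the message-type letters 'D', 'C', and 'N' correspond to real protocol tags, the loop itself is invented for illustration):

```python
# Toy sketch of the suggested behaviour: once a CommandComplete ('C') for
# the current query has been parsed, the loop returns instead of consuming
# messages -- such as a NOTICE ('N') -- that belong to the *next* command;
# those stay buffered for later calls.

from collections import deque

def parse_input(buffer: deque) -> list:
    """Consume messages for one result; stop after CommandComplete."""
    consumed = []
    while buffer:
        msgtype, _payload = buffer[0]
        consumed.append(buffer.popleft())
        if msgtype == 'C':        # CommandComplete: this result is finished
            break                 # leave following messages for later calls
    return consumed

buf = deque([('D', 'row'), ('C', 'SELECT 1'),     # first query's messages
             ('N', 'notice for next command')])   # belongs to the next query
first = parse_input(buf)
```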
[
{
"msg_contents": "Hi,\n\nI have noticed in some cases the warning messages appear twice, one such\ninstance is given below:\npostgres=# begin;\nBEGIN\npostgres=# prepare transaction 't1';\nPREPARE TRANSACTION\npostgres=# rollback;\n\n*WARNING: there is no transaction in progressWARNING: there is no\ntransaction in progress*\nROLLBACK\n\nHowever if logging is enabled, the warning message appears only once.\n\nI'm not sure if this is already known.\nI'm not sure if this is widely used scenario or not.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nHi,I have noticed in some cases the warning messages appear twice, one such instance is given below:postgres=# begin;BEGINpostgres=# prepare transaction 't1';PREPARE TRANSACTIONpostgres=# rollback;WARNING: there is no transaction in progressWARNING: there is no transaction in progressROLLBACKHowever if logging is enabled, the warning message appears only once.I'm not sure if this is already known.I'm not sure if this is widely used scenario or not.Regards,VigneshEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Jul 2019 11:03:40 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Warning messages appearing twice"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> I have noticed in some cases the warning messages appear twice, one such instance is given below:\n> postgres=# begin;\n> BEGIN\n> postgres=# prepare transaction 't1';\n> PREPARE TRANSACTION\n> postgres=# rollback;\n> WARNING: there is no transaction in progress\n> WARNING: there is no transaction in progress\n> ROLLBACK\n>\n> However if logging is enabled, the warning message appears only once.\n\nSeems like you are seeing one message from the client and the other\none from the server log as you have not enabled the logging collector\nthe WARNING is printed on your console.\n\n>\n> I'm not sure if this is already known.\n> I'm not sure if this is widely used scenario or not.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2019 11:23:26 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Warning messages appearing twice"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 11:23 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Fri, Jul 26, 2019 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I have noticed in some cases the warning messages appear twice, one such\n> instance is given below:\n> > postgres=# begin;\n> > BEGIN\n> > postgres=# prepare transaction 't1';\n> > PREPARE TRANSACTION\n> > postgres=# rollback;\n> > WARNING: there is no transaction in progress\n> > WARNING: there is no transaction in progress\n> > ROLLBACK\n> >\n> > However if logging is enabled, the warning message appears only once.\n>\n> Seems like you are seeing one message from the client and the other\n> one from the server log as you have not enabled the logging collector\n> the WARNING is printed on your console.\n>\n>\nThanks for the clarification Dilip. <http://www.enterprisedb.com>\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, Jul 26, 2019 at 11:23 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Fri, Jul 26, 2019 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> I have noticed in some cases the warning messages appear twice, one such instance is given below:\n> postgres=# begin;\n> BEGIN\n> postgres=# prepare transaction 't1';\n> PREPARE TRANSACTION\n> postgres=# rollback;\n> WARNING: there is no transaction in progress\n> WARNING: there is no transaction in progress\n> ROLLBACK\n>\n> However if logging is enabled, the warning message appears only once.\n\nSeems like you are seeing one message from the client and the other\none from the server log as you have not enabled the logging collector\nthe WARNING is printed on your console.\n Thanks for the clarification Dilip.\nRegards,VigneshEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Jul 2019 11:28:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Warning messages appearing twice"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile looking [1], I noticed that pg_walfile_name_offset behaves\nsomewhat oddly at segment boundary.\n\nselect * from (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as t(lsn), lateral pg_walfile_name_offset(lsn::pg_lsn);\n lsn | file_name | file_offset \n------------+--------------------------+-------------\n 0/16ffffff | 000000020000000000000016 | 16777215\n 0/17000000 | 000000020000000000000016 | 0\n 0/17000001 | 000000020000000000000017 | 1\n\n\nThe file names are right as defined, but the return value of the\nsecond line wrong, or at least misleading. It should be (16,\n1000000) or (16, FFFFFF). The former is out-of-domain so we would\nhave no way than choosing the latter. I'm not sure the purpose of\nthe second output parameter, thus the former might be right\ndecision.\n\n\nThe function returns the following result after this patch is\napplied.\n\nselect * from (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as t(lsn), lateral pg_walfile_name_offset(lsn::pg_lsn);\n lsn | file_name | file_offset \n------------+--------------------------+-------------\n 0/16ffffff | 000000020000000000000016 | 16777214\n 0/17000000 | 000000020000000000000016 | 16777215\n 0/17000001 | 000000020000000000000017 | 0\n\n\nregards.\n\n[1]: https://www.postgresql.org/message-id/20190725193808.1648ddc8@firost\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 26 Jul 2019 17:21:20 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Fri, 26 Jul 2019 17:21:20 +0900 (Tokyo Standard Time)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Hello.\n> \n> While looking [1], I noticed that pg_walfile_name_offset behaves\n> somewhat oddly at segment boundary.\n> \n> select * from (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as\n> t(lsn), lateral pg_walfile_name_offset(lsn::pg_lsn);\n> lsn | file_name | file_offset\n> ------------+--------------------------+-------------\n> 0/16ffffff | 000000020000000000000016 | 16777215\n> 0/17000000 | 000000020000000000000016 | 0\n> 0/17000001 | 000000020000000000000017 | 1\n> \n> \n> The file names are right as defined, but the return value of the\n> second line wrong, or at least misleading.\n\n+1\nI noticed it as well and put this report on hold while working on my patch.\nThanks for reporting this!\n\n> It should be (16, 1000000) or (16, FFFFFF). The former is out-of-domain so we\n> would have no way than choosing the latter. I'm not sure the purpose of\n> the second output parameter, thus the former might be right\n> decision.\n>\n> The function returns the following result after this patch is\n> applied.\n> \n> select * from (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as\n> t(lsn), lateral pg_walfile_name_offset(lsn::pg_lsn);\n> lsn | file_name | file_offset\n> ------------+--------------------------+-------------\n> 0/16ffffff | 000000020000000000000016 | 16777214\n> 0/17000000 | 000000020000000000000016 | 16777215\n> 0/17000001 | 000000020000000000000017 | 0\n\nSo you shift the file offset for all LSN by one byte? 
This could lead to\nregression in various tools relying on this function.\n\nMoreover, it looks weird as the LSN doesn't reflect the given offset anymore\n(FFFFFF <> 16777214, 000001 <> 0, etc).\n\nAnother solution might be to return the same result when for both 0/16ffffff and\n0/17000000, but it doesn't feel right either.\n\nSo in fact, returning 0x1000000 seems to be the cleaner result to me.\n\nRegards,\n\n\n",
"msg_date": "Fri, 26 Jul 2019 11:30:19 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 11:30:19AM +0200, Jehan-Guillaume de Rorthais wrote:\n> On Fri, 26 Jul 2019 17:21:20 +0900 (Tokyo Standard Time)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> > Hello.\n> > \n> > While looking [1], I noticed that pg_walfile_name_offset behaves\n> > somewhat oddly at segment boundary.\n> > \n> > select * from (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as\n> > t(lsn), lateral pg_walfile_name_offset(lsn::pg_lsn);\n> > lsn | file_name | file_offset\n> > ------------+--------------------------+-------------\n> > 0/16ffffff | 000000020000000000000016 | 16777215\n> > 0/17000000 | 000000020000000000000016 | 0\n> > 0/17000001 | 000000020000000000000017 | 1\n> > \n> > \n> > The file names are right as defined, but the return value of the\n> > second line wrong, or at least misleading.\n> \n> +1\n> I noticed it as well and put this report on hold while working on my patch.\n> Thanks for reporting this!\n> \n> > It should be (16, 1000000) or (16, FFFFFF). The former is out-of-domain so we\n> > would have no way than choosing the latter. I'm not sure the purpose of\n> > the second output parameter, thus the former might be right\n> > decision.\n> >\n> > The function returns the following result after this patch is\n> > applied.\n> > \n> > select * from (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as\n> > t(lsn), lateral pg_walfile_name_offset(lsn::pg_lsn);\n> > lsn | file_name | file_offset\n> > ------------+--------------------------+-------------\n> > 0/16ffffff | 000000020000000000000016 | 16777214\n> > 0/17000000 | 000000020000000000000016 | 16777215\n> > 0/17000001 | 000000020000000000000017 | 0\n> \n> So you shift the file offset for all LSN by one byte? 
This could lead to\n> regression in various tools relying on this function.\n> \n> Moreover, it looks weird as the LSN doesn't reflect the given offset anymore\n> (FFFFFF <> 16777214, 000001 <> 0, etc).\n> \n> Another solution might be to return the same result when for both 0/16ffffff and\n> 0/17000000, but it doesn't feel right either.\n> \n> So in fact, returning 0x1000000 seems to be the cleaner result to me.\n\nI know this bug report is four years old, but it is still a\npg_walfile_name_offset() bug. Here is the bug:\n\n\tSELECT *\n\tFROM (VALUES ('0/16ffffff'), ('0/17000000'), ('0/17000001')) AS t(lsn), \n\t LATERAL pg_walfile_name_offset(lsn::pg_lsn);\n\n\t lsn | file_name | file_offset\n\t------------+--------------------------+-------------\n\t 0/16ffffff | 000000010000000000000016 | 16777215\n-->\t 0/17000000 | 000000010000000000000016 | 0\n\t 0/17000001 | 000000010000000000000017 | 1\n\nThe bug is in the indicated line --- it shows the filename as 00016 but\noffset as zero, when clearly the LSN is pointing to 17/0. The bug is\nessentially that the code for pg_walfile_name_offset() uses the exact\noffset from the LSN, but uses the file name from the previous byte of\nthe LSN.\n\nThe fix involves deciding what the description or purpose of\npg_walfile_name_offset() means, and adjusting it to be clearer. 
The\ncurrent documentation says:\n\n\tConverts a write-ahead log location to a WAL file name and byte\n\toffset within that file.\n\nFix #1: If we assume write-ahead log location means LSN, it is saying\nshow the file/offset of the LSN, and that is most clearly:\n\n\t lsn | file_name | file_offset\n\t------------+--------------------------+-------------\n\t 0/16ffffff | 000000010000000000000016 | 16777215\n\t 0/17000000 | 000000010000000000000017 | 0\n\t 0/17000001 | 000000010000000000000017 | 1\n\nFix #2: Now, there are some who have said they want the output to be\nthe last written WAL byte (the byte before the LSN), not the current\nLSN, for archiving purposes. However, if we do that, we have to update\nthe docs to clarify it. Its output would be:\n\n\t lsn | file_name | file_offset\n\t------------+--------------------------+-------------\n\t 0/16ffffff | 000000010000000000000016 | 16777214\n\t 0/17000000 | 000000010000000000000016 | 16777215\n\t 0/17000001 | 000000010000000000000017 | 0\n\nThe email thread also considered having the second row offset be 16777216\n(2^24), which is not a valid offset for a file if we assume a zero-based\noffset.\n\nLooking further, pg_walfile_name() also returns the filename for the\nprevious byte:\n\n\tSELECT *\n\tFROM (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as t(lsn),\n\t LATERAL pg_walfile_name_offset(lsn::pg_lsn), LATERAL pg_walfile_name(lsn::pg_lsn);\n\t lsn | file_name | file_offset | pg_walfile_name\n\t------------+--------------------------+-------------+--------------------------\n\t 0/16ffffff | 000000010000000000000016 | 16777215 | 000000010000000000000016\n\t 0/17000000 | 000000010000000000000016 | 0 | 000000010000000000000016\n\t 0/17000001 | 000000010000000000000017 | 1 | 000000010000000000000017\n\nI have attached fix #1 as offset1.diff and fix #2 as offset2.diff.\n\nI think the most logical fix is #1, but pg_walfile_name() would need to\nbe modified. 
If the previous file/byte offset are what is desired, fix\n#2 will need doc changes for both functions. This probably needs to be\ndocumented as a backward incompatibility for either fix.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 9 Nov 2023 14:22:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 20:22, Bruce Momjian <bruce@momjian.us> wrote:\n> I know this bug report is four years old, but it is still a\n> pg_walfile_name_offset() bug. Here is the bug:\n>\n> SELECT *\n> FROM (VALUES ('0/16ffffff'), ('0/17000000'), ('0/17000001')) AS t(lsn),\n> LATERAL pg_walfile_name_offset(lsn::pg_lsn);\n>\n> lsn | file_name | file_offset\n> ------------+--------------------------+-------------\n> 0/16ffffff | 000000010000000000000016 | 16777215\n> --> 0/17000000 | 000000010000000000000016 | 0\n> 0/17000001 | 000000010000000000000017 | 1\n>\n> The bug is in the indicated line --- it shows the filename as 00016 but\n> offset as zero, when clearly the LSN is pointing to 17/0. The bug is\n> essentially that the code for pg_walfile_name_offset() uses the exact\n> offset from the LSN, but uses the file name from the previous byte of\n> the LSN.\n\nYes, that's definitely not a correct result.\n\n> The fix involves deciding what the description or purpose of\n> pg_walfile_name_offset() means, and adjusting it to be clearer. The\n> current documentation says:\n>\n> Converts a write-ahead log location to a WAL file name and byte\n> offset within that file.\n>\n> Fix #1: If we assume write-ahead log location means LSN, it is saying\n> show the file/offset of the LSN, and that is most clearly:\n>\n> lsn | file_name | file_offset\n> ------------+--------------------------+-------------\n> 0/16ffffff | 000000010000000000000016 | 16777215\n> 0/17000000 | 000000010000000000000017 | 0\n> 0/17000001 | 000000010000000000000017 | 1\n>\n> Fix #2: Now, there are some who have said they want the output to be\n> the last written WAL byte (the byte before the LSN), not the current\n> LSN, for archiving purposes. However, if we do that, we have to update\n> the docs to clarify it. 
Its output would be:\n>\n> lsn | file_name | file_offset\n> ------------+--------------------------+-------------\n> 0/16ffffff | 000000010000000000000016 | 16777214\n> 0/17000000 | 000000010000000000000016 | 16777215\n> 0/17000001 | 000000010000000000000017 | 0\n>\n> I have attached fix #1 as offset1.diff and fix #2 as offset2.diff.\n\nI believe you got the references wrong; fix #1 looks like the output\nof offset2's changes, and fix #2 looks like the result of offset1's\nchanges.\n\nEither way, I think fix #1 is most correct (as was attached in\noffset2.diff, and quoted verbatim here), because that has no chance of\nhaving surprising underflowing behaviour when you use '0/0'::lsn as\ninput.\n\n> diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c\n> index 45a70668b1..e65502d51e 100644\n> --- a/src/backend/access/transam/xlogfuncs.c\n> +++ b/src/backend/access/transam/xlogfuncs.c\n> @@ -414,7 +414,7 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)\n> /*\n> * xlogfilename\n> */\n> - XLByteToPrevSeg(locationpoint, xlogsegno, wal_segment_size);\n> + XLByteToSeg(locationpoint, xlogsegno, wal_segment_size);\n> XLogFileName(xlogfilename, GetWALInsertionTimeLine(), xlogsegno,\n> wal_segment_size);\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 9 Nov 2023 21:49:48 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 09:49:48PM +0100, Matthias van de Meent wrote:\n> > I have attached fix #1 as offset1.diff and fix #2 as offset2.diff.\n> \n> I believe you got the references wrong; fix #1 looks like the output\n> of offset2's changes, and fix #2 looks like the result of offset1's\n> changes.\n\nSorry, I swaped them around when I realized the order I was posting them\nin the email, and got it wrong.\n\n> Either way, I think fix #1 is most correct (as was attached in\n> offset2.diff, and quoted verbatim here), because that has no chance of\n> having surprising underflowing behaviour when you use '0/0'::lsn as\n> input.\n\nAttached is the full patch that changes pg_walfile_name_offset() and\npg_walfile_name(). There is no need for doc changes. We need to\ndocument this as incompatible in case users are realying on the old\nbehavior for WAL archiving purposes. If they want the old behavior they\nneed to check for an offset of zero and subtract one from the file name.\n\nCan someone check that all other calls to XLByteToPrevSeg() are correct?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 9 Nov 2023 16:14:07 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Thu, Nov 09, 2023 at 04:14:07PM -0500, Bruce Momjian wrote:\n> Attached is the full patch that changes pg_walfile_name_offset() and\n> pg_walfile_name(). There is no need for doc changes. We need to\n> document this as incompatible in case users are realying on the old\n> behavior for WAL archiving purposes. If they want the old behavior they\n> need to check for an offset of zero and subtract one from the file name.\n\nFWIW, I am not really convinced that there is a strong need to\nbackpatch any of that. There's a risk that some queries relying on\nthe old behavior suddenly break after a minor release, and that's\nalways annoying. A HEAD-only change seems like a safer bet to me.\n\n> Can someone check that all other calls to XLByteToPrevSeg() are correct?\n\nOn a quick check, all the other calls use that for end record LSNs, so\nthat looks fine.\n--\nMichael",
"msg_date": "Fri, 10 Nov 2023 08:25:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 08:25:35AM +0900, Michael Paquier wrote:\n> On Thu, Nov 09, 2023 at 04:14:07PM -0500, Bruce Momjian wrote:\n> > Attached is the full patch that changes pg_walfile_name_offset() and\n> > pg_walfile_name(). There is no need for doc changes. We need to\n> > document this as incompatible in case users are realying on the old\n> > behavior for WAL archiving purposes. If they want the old behavior they\n> > need to check for an offset of zero and subtract one from the file name.\n> \n> FWIW, I am not really convinced that there is a strong need to\n> backpatch any of that. There's a risk that some queries relying on\n> the old behavior suddenly break after a minor release, and that's\n> always annoying. A HEAD-only change seems like a safer bet to me.\n\nYes, this cannot be backpatched, clearly.\n\n> > Can someone check that all other calls to XLByteToPrevSeg() are correct?\n> \n> On a quick check, all the other calls use that for end record LSNs, so\n> that looks fine.\n\nThank you.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 9 Nov 2023 18:35:42 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-09 16:14:07 -0500, Bruce Momjian wrote:\n> On Thu, Nov 9, 2023 at 09:49:48PM +0100, Matthias van de Meent wrote:\n> > Either way, I think fix #1 is most correct (as was attached in\n> > offset2.diff, and quoted verbatim here), because that has no chance of\n> > having surprising underflowing behaviour when you use '0/0'::lsn as\n> > input.\n> \n> Attached is the full patch that changes pg_walfile_name_offset() and\n> pg_walfile_name(). There is no need for doc changes.\n\nI think this needs to add tests \"documenting\" the changed behaviour and\nperhaps also for a few other edge cases. You could e.g. test\n SELECT * FROM pg_walfile_name_offset('0/0')\nwhich today returns bogus values, and which is independent of the wal segment\nsize.\n\nAnd with\nSELECT setting::int8 AS segment_size FROM pg_settings WHERE name = 'wal_segment_size' \\gset\nyou can test real things too, e.g.:\nSELECT segment_number, file_offset FROM pg_walfile_name_offset('0/0'::pg_lsn + :segment_size), pg_split_walfile_name(file_name);\nSELECT segment_number, file_offset FROM pg_walfile_name_offset('0/0'::pg_lsn + :segment_size + 1), pg_split_walfile_name(file_name);\nSELECT segment_number, file_offset = :segment_size - 1 FROM pg_walfile_name_offset('0/0'::pg_lsn + :segment_size - 1), pg_split_walfile_name(file_name);\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 19:59:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 07:59:43PM -0800, Andres Freund wrote:\n> I think this needs to add tests \"documenting\" the changed behaviour and\n> perhaps also for a few other edge cases. You could e.g. test\n> SELECT * FROM pg_walfile_name_offset('0/0')\n> which today returns bogus values, and which is independent of the wal segment\n> size.\n\n+1.\n--\nMichael",
"msg_date": "Mon, 13 Nov 2023 08:31:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 07:59:43PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-09 16:14:07 -0500, Bruce Momjian wrote:\n> > On Thu, Nov 9, 2023 at 09:49:48PM +0100, Matthias van de Meent wrote:\n> > > Either way, I think fix #1 is most correct (as was attached in\n> > > offset2.diff, and quoted verbatim here), because that has no chance of\n> > > having surprising underflowing behaviour when you use '0/0'::lsn as\n> > > input.\n> > \n> > Attached is the full patch that changes pg_walfile_name_offset() and\n> > pg_walfile_name(). There is no need for doc changes.\n> \n> I think this needs to add tests \"documenting\" the changed behaviour and\n> perhaps also for a few other edge cases. You could e.g. test\n> SELECT * FROM pg_walfile_name_offset('0/0')\n> which today returns bogus values, and which is independent of the wal segment\n> size.\n> \n> And with\n> SELECT setting::int8 AS segment_size FROM pg_settings WHERE name = 'wal_segment_size' \\gset\n> you can test real things too, e.g.:\n> SELECT segment_number, file_offset FROM pg_walfile_name_offset('0/0'::pg_lsn + :segment_size), pg_split_walfile_name(file_name);\n> SELECT segment_number, file_offset FROM pg_walfile_name_offset('0/0'::pg_lsn + :segment_size + 1), pg_split_walfile_name(file_name);\n> SELECT segment_number, file_offset = :segment_size - 1 FROM pg_walfile_name_offset('0/0'::pg_lsn + :segment_size - 1), pg_split_walfile_name(file_name);\n\nSure, I have added these tests.\n\nFYI, pg_walfile_name_offset() has this C comment, which I have removed\nin this patch;\n\n\t* Note that a location exactly at a segment boundary is taken to be in\n\t* the previous segment. 
This is usually the right thing, since the\n\t* expected usage is to determine which xlog file(s) are ready to archive.\n\nThere is also a documentation mention of this behavior:\n\n When the given write-ahead log location is exactly at a write-ahead log file boundary, both\n these functions return the name of the preceding write-ahead log file.\n This is usually the desired behavior for managing write-ahead log archiving\n behavior, since the preceding file is the last one that currently\n needs to be archived.\n\nAfter seeing the doc mention, I started digging into the history of this\nfeature. It was originally called pg_current_xlogfile_offset() and\nproposed in this email thread, which started on 2006-07-31:\n\n\thttps://www.postgresql.org/message-id/flat/1154384790.3226.21.camel%40localhost.localdomain\n\nIn the initial patch by Simon Riggs, there was no \"previous segment\nfile\" behavior, just a simple filename/offset calculation.\n\nThis was applied on 2006-08-06 with this commit:\n\n\tcommit 704ddaaa09\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Sun Aug 6 03:53:44 2006 +0000\n\t\n\t Add support for forcing a switch to a new xlog file; cause such a switch\n\t to happen automatically during pg_stop_backup(). Add some functions for\n\t interrogating the current xlog insertion point and for easily extracting\n\t WAL filenames from the hex WAL locations displayed by pg_stop_backup\n\t and friends. Simon Riggs with some editorialization by Tom Lane.\n\nand the email of the commit said:\n\n\thttps://www.postgresql.org/message-id/18457.1154836638%40sss.pgh.pa.us\n\n\tI also made the new user-level functions a bit more orthogonal, so\n\tthat filenames could be extracted from the existing functions like\n\tpg_stop_backup.\n\nThere is later talk about returning last write pointer vs. 
the current\ninsert pointer, and having it match what is returned by pg_stop_backup():\n\n\thttps://www.postgresql.org/message-id/1155124565.2368.95.camel%40localhost.localdomain\n\t\n\tMethinks it should be the Write pointer all of the time, since I can't\n\tthink of a valid reason for wanting to know where the Insert pointer is\n\t*before* we've written to the xlog file. Having it be the Insert pointer\n\tcould lead to some errors.\n\nand I suspect that it was the desire to return the last write pointer\nthat caused the function to return the previous segment on a boundary\noffset. This was intended to simplify log shipping implementations, I\nthink.\n\nThe function eventually was renamed in the xlog-to-wal renaming and moved\nfrom xlog.c to xlogfuncs.c. This thread in 2022 mentioned the\ninconsistency for 0/0, but didn't seem to talk about the inconsistency\nof offset vs file name:\n\n\thttps://www.postgresql.org/message-id/flat/20220204225057.GA1535307%40nathanxps13#d964275c9540d8395e138efc0a75f7e8\n\nand it concluded with:\n\n\tYes, its the deliberate choice of design, or a kind of\n\tquestionable-but-unoverturnable decision. I think there are many\n\texternal tools conscious of this behavior.\n\nHowever, with the report about the inconsistency, the attached patch\nfixes the behavior and removes the documentation about the odd behavior.\nThis will need to be mentioned as an incompatibility in the major\nversion release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 13 Nov 2023 12:14:57 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 12:14:57 -0500, Bruce Momjian wrote:\n> +SELECT *\n> +FROM (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as t(lsn),\n> + LATERAL pg_walfile_name_offset(lsn::pg_lsn),\n> + LATERAL pg_walfile_name(lsn::pg_lsn);\n> + lsn | file_name | file_offset | pg_walfile_name \n> +------------+--------------------------+-------------+--------------------------\n> + 0/16ffffff | 000000010000000000000016 | 16777215 | 000000010000000000000016\n> + 0/17000000 | 000000010000000000000017 | 0 | 000000010000000000000017\n> + 0/17000001 | 000000010000000000000017 | 1 | 000000010000000000000017\n> +(3 rows)\n\nThese would break when testing with a different segment size. Today that's not\nthe case...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 09:49:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 09:49:53AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-13 12:14:57 -0500, Bruce Momjian wrote:\n> > +SELECT *\n> > +FROM (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as t(lsn),\n> > + LATERAL pg_walfile_name_offset(lsn::pg_lsn),\n> > + LATERAL pg_walfile_name(lsn::pg_lsn);\n> > + lsn | file_name | file_offset | pg_walfile_name \n> > +------------+--------------------------+-------------+--------------------------\n> > + 0/16ffffff | 000000010000000000000016 | 16777215 | 000000010000000000000016\n> > + 0/17000000 | 000000010000000000000017 | 0 | 000000010000000000000017\n> > + 0/17000001 | 000000010000000000000017 | 1 | 000000010000000000000017\n> > +(3 rows)\n> \n> These would break when testing with a different segment size. Today that's not\n> the case...\n\nOkay, test removed in the updated patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 13 Nov 2023 14:12:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 02:12:12PM -0500, Bruce Momjian wrote:\n> On Mon, Nov 13, 2023 at 09:49:53AM -0800, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-11-13 12:14:57 -0500, Bruce Momjian wrote:\n> > > +SELECT *\n> > > +FROM (values ('0/16ffffff'), ('0/17000000'), ('0/17000001')) as t(lsn),\n> > > + LATERAL pg_walfile_name_offset(lsn::pg_lsn),\n> > > + LATERAL pg_walfile_name(lsn::pg_lsn);\n> > > + lsn | file_name | file_offset | pg_walfile_name \n> > > +------------+--------------------------+-------------+--------------------------\n> > > + 0/16ffffff | 000000010000000000000016 | 16777215 | 000000010000000000000016\n> > > + 0/17000000 | 000000010000000000000017 | 0 | 000000010000000000000017\n> > > + 0/17000001 | 000000010000000000000017 | 1 | 000000010000000000000017\n> > > +(3 rows)\n> > \n> > These would break when testing with a different segment size. Today that's not\n> > the case...\n> \n> Okay, test removed in the updated patch.\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 24 Nov 2023 19:44:25 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_walfile_name_offset can return inconsistent values"
}
] |
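An aside for readers following the file-name/offset arithmetic in the thread above: a WAL location maps to a segment by integer division by wal_segment_size, and to a file name by formatting the timeline ID plus the segment number split into two 8-hex-digit fields. The sketch below is Python for illustration only (the server does this in C via the XLByteToSeg and XLogFileName macros); it mirrors the post-fix behavior adopted in the thread, where a boundary LSN belongs to the containing segment with offset 0 rather than to the preceding file:

```python
DEFAULT_SEG_SIZE = 16 * 1024 * 1024  # the default wal_segment_size (16MB)

def walfile_name_offset(lsn, timeline=1, seg_size=DEFAULT_SEG_SIZE):
    """Map an LSN (as an integer) to (wal_file_name, file_offset).

    A location exactly on a segment boundary maps to the containing
    segment with offset 0 -- the consistent behavior the patch in this
    thread adopts -- instead of the previous file name with offset 0.
    """
    segno = lsn // seg_size          # roughly XLByteToSeg
    offset = lsn % seg_size
    # File names are timeline, then segno split across two 8-hex-digit
    # fields (the historical "log id" / "segment" halves).
    segs_per_logid = 0x100000000 // seg_size
    name = "%08X%08X%08X" % (timeline,
                             segno // segs_per_logid,
                             segno % segs_per_logid)  # roughly XLogFileName
    return name, offset

# '0/17000000' sits exactly on a 16MB boundary: file ...17, offset 0,
# matching the regression-test output quoted later in the thread.
print(walfile_name_offset(0x17000000))
```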
[
{
"msg_contents": "Hi All,\n\nI'm able to insert data into a table column marked as GENERATED ALWAYS\nusing COPY command however, it fails with INSERT command. Isn't that a\nbug with COPY command?\n\nHere is the test-case for more clarity.\n\npostgres=# create table tab_always (i int generated always as identity, j int);\nCREATE TABLE\n\npostgres=# insert into tab_always values(1, 10);\nERROR: cannot insert into column \"i\"\nDETAIL: Column \"i\" is an identity column defined as GENERATED ALWAYS.\nHINT: Use OVERRIDING SYSTEM VALUE to override.\n\n[ashu@localhost bin]$ cat /tmp/always.csv\n13 10\n14 20\n15 30\n16 40\n\npostgres=# copy tab_always from '/tmp/always.csv';\nCOPY 4\npostgres=# select * from tab_always;\n i | j\n----+----\n 13 | 10\n 14 | 20\n 15 | 30\n 16 | 40\n(4 rows)\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2019 15:12:28 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "COPY command on a table column marked as GENERATED ALWAYS"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 03:12:28PM +0530, Ashutosh Sharma wrote:\n> Hi All,\n> \n> I'm able to insert data into a table column marked as GENERATED ALWAYS\n> using COPY command however, it fails with INSERT command. Isn't that a\n> bug with COPY command?\n\nPer the documentation in the section for GENERATED ALWAYS:\nhttps://www.postgresql.org/docs/devel/sql-createtable.html\n\n\"The clauses ALWAYS and BY DEFAULT determine how the sequence value is\ngiven precedence over a user-specified value in an INSERT\nstatement. If ALWAYS is specified, a user-specified value is only\naccepted if the INSERT statement specifies OVERRIDING SYSTEM VALUE. If\nBY DEFAULT is specified, then the user-specified value takes\nprecedence. See INSERT for details. (In the COPY command,\nuser-specified values are always used regardless of this setting.)\"\n\nSo it behaves as documented.\n--\nMichael",
"msg_date": "Mon, 29 Jul 2019 10:57:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: COPY command on a table column marked as GENERATED ALWAYS"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 7:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 26, 2019 at 03:12:28PM +0530, Ashutosh Sharma wrote:\n> > Hi All,\n> >\n> > I'm able to insert data into a table column marked as GENERATED ALWAYS\n> > using COPY command however, it fails with INSERT command. Isn't that a\n> > bug with COPY command?\n>\n> Per the documentation in the section for GENERATED ALWAYS:\n> https://www.postgresql.org/docs/devel/sql-createtable.html\n>\n> \"The clauses ALWAYS and BY DEFAULT determine how the sequence value is\n> given precedence over a user-specified value in an INSERT\n> statement. If ALWAYS is specified, a user-specified value is only\n> accepted if the INSERT statement specifies OVERRIDING SYSTEM VALUE. If\n> BY DEFAULT is specified, then the user-specified value takes\n> precedence. See INSERT for details. (In the COPY command,\n> user-specified values are always used regardless of this setting.)\"\n>\n> So it behaves as documented.\n\nOkay, Thanks for the pointer!\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jul 2019 09:45:31 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY command on a table column marked as GENERATED ALWAYS"
}
] |
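As a toy model of the documented precedence rules quoted above (names invented for illustration; the real logic lives in the server's INSERT rewriting and COPY code paths), the behavior can be sketched as:

```python
def identity_value(command, user_value=None, overriding=False,
                   next_seq=lambda: 42):
    """Toy model of GENERATED ALWAYS AS IDENTITY precedence:

    - COPY: a user-supplied value is always used, regardless of ALWAYS.
    - INSERT: a user-supplied value is rejected unless the statement
      specifies OVERRIDING SYSTEM VALUE; otherwise the sequence wins.
    """
    if user_value is None:
        return next_seq()            # no explicit value: use the sequence
    if command == "COPY" or overriding:
        return user_value
    raise ValueError("cannot insert into identity column; "
                     "use OVERRIDING SYSTEM VALUE")

print(identity_value("COPY", 13))    # accepted, mirroring the thread's example
```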
[
{
"msg_contents": "Hello devs,\n\nAs pointed out by Kyotaro Horiguchi in\n\nhttps://www.postgresql.org/message-id/20190726.131704.86173346.horikyota.ntt@gmail.com\n\nFETCH_COUNT does not work with combined queries, and probably has never \nworked since 2006.\n\nWhat seems to happen is that ExecQueryUsingCursor is hardcoded to handle \none simple query. It simply inserts the cursor generation in front of the \nquery believed to be a select:\n\n DECLARE ... <query>\n\nFor combined queries, say two selects, it results in:\n\n DECLARE ... <first select>; <second select>\n\nThen PQexec returns the result of the second one, and nothing is printed.\n\nHowever, if the second query is not a select, eg: \"select ... \\; update \n... ;\", the result of the *first* query is shown.\n\nHow fun!\n\nThis is because PQexec returns the second result. The cursor declaration \nexpects a PGRES_COMMAND_OK before proceeding. With a select it gets \nPGRES_TUPLES_OK so decides it is an error and silently skips to the end. \nWith the update it indeed obtains the expected PGRES_COMMAND_OK, not \nreally for the command it sent but who cares, and proceeds to show the \ncursor results.\n\nBasically, the whole logic is broken.\n\nThe minimum is to document that it does not work properly with combined \nqueries. Attached patch does that, so that the bug becomes a documented \nbug, aka a feature:-)\n\nOtherwise, probably psql lexer could detect, with some efforts, that it is \na combined query (detect embedded ; and check that they are not empty \nqueries), so that it could skip the feature if it is the case.\n\nAnother approach would be to try to detect if the returned result does not \ncorrespond to the cursor one reliably. Maybe some result counting could be \nadded somewhere so that the number of results under PQexec is accessible \nto the user, i.e. result struct would contain its own number. 
Hmmm.\n\nA more complex approach would be to keep the position of embedded queries, \nand to insert cursor declarations where needed, currently the last one if \nit is a SELECT. However, for the previous ones the allocation and such \ncould be prohibitive as no cursor would be used. Not sure it is worth the \neffort as the bug has not been detected for 13 years.\n\n-- \nFabien.",
"msg_date": "Fri, 26 Jul 2019 10:02:14 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "> FETCH_COUNT does not work with combined queries, and probably has never \n> worked since 2006.\n\nFor the record, this bug has already been reported & discussed by Daniel \nVᅵritᅵ a few months back, see:\n\nhttps://www.postgresql.org/message-id/flat/a0a854b6-563c-4a11-bf1c-d6c6f924004d%40manitou-mail.org\n\nGiven the points of view expressed on this thread, especially Tom's view \nagainst improving the lexer used by psql to known where query ends, it \nlooks that my documentation patch is the way to go in the short term, and \nthat if the \"always show query results\" patch is committed, it might \nsimplify a bit solving this issue as the PQexec call is turned into \nPQsendQuery.\n\n\n-- \nFabien.",
"msg_date": "Fri, 26 Jul 2019 16:13:23 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 04:13:23PM +0000, Fabien COELHO wrote:\n>\n>>FETCH_COUNT does not work with combined queries, and probably has \n>>never worked since 2006.\n>\n>For the record, this bug has already been reported & discussed by \n>Daniel V�rit� a few months back, see:\n>\n>https://www.postgresql.org/message-id/flat/a0a854b6-563c-4a11-bf1c-d6c6f924004d%40manitou-mail.org\n>\n>Given the points of view expressed on this thread, especially Tom's \n>view against improving the lexer used by psql to known where query \n>ends, it looks that my documentation patch is the way to go in the \n>short term, and that if the \"always show query results\" patch is \n>committed, it might simplify a bit solving this issue as the PQexec \n>call is turned into PQsendQuery.\n>\n\nAssuming there's no one willing to fix the behavior (and that seems\nunlikely given we're in the middle of a 2020-01 commitfest) it makes\nsense to at least document the behavior.\n\nThat being said, I think the proposed patch may be a tad too brief. I\ndon't think we need to describe the exact behavior, of course, but maybe\nit'd be good to mention that the behavior depends on the type of the\nlast command etc. Also, I don't think \"combined query\" is a term used\nanywhere in the code/docs - I think the proper term is \"multi-query\nstring\".\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jan 2020 23:20:13 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "Hi Fabien,\n\nOn 1/6/20 5:20 PM, Tomas Vondra wrote:\n> On Fri, Jul 26, 2019 at 04:13:23PM +0000, Fabien COELHO wrote:\n>>\n>>> FETCH_COUNT does not work with combined queries, and probably has \n>>> never worked since 2006.\n>>\n>> For the record, this bug has already been reported & discussed by \n>> Daniel Vérité a few months back, see:\n>>\n>> https://www.postgresql.org/message-id/flat/a0a854b6-563c-4a11-bf1c-d6c6f924004d%40manitou-mail.org \n>>\n>>\n>> Given the points of view expressed on this thread, especially Tom's \n>> view against improving the lexer used by psql to known where query \n>> ends, it looks that my documentation patch is the way to go in the \n>> short term, and that if the \"always show query results\" patch is \n>> committed, it might simplify a bit solving this issue as the PQexec \n>> call is turned into PQsendQuery.\n>>\n> \n> Assuming there's no one willing to fix the behavior (and that seems\n> unlikely given we're in the middle of a 2020-01 commitfest) it makes\n> sense to at least document the behavior.\n> \n> That being said, I think the proposed patch may be a tad too brief. I\n> don't think we need to describe the exact behavior, of course, but maybe\n> it'd be good to mention that the behavior depends on the type of the\n> last command etc. Also, I don't think \"combined query\" is a term used\n> anywhere in the code/docs - I think the proper term is \"multi-query\n> string\".\n\nAny thoughts on Tomas' comments?\n\nI'll mark this Waiting on Author because I feel like some response \nand/or a new patch is needed.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 11:36:34 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": ">> Assuming there's no one willing to fix the behavior (and that seems\n>> unlikely given we're in the middle of a 2020-01 commitfest) it makes\n>> sense to at least document the behavior.\n>> \n>> That being said, I think the proposed patch may be a tad too brief. I\n>> don't think we need to describe the exact behavior, of course, but maybe\n>> it'd be good to mention that the behavior depends on the type of the\n>> last command etc. Also, I don't think \"combined query\" is a term used\n>> anywhere in the code/docs - I think the proper term is \"multi-query\n>> string\".\n>\n> Any thoughts on Tomas' comments?\n\nSorry, I did not notice them.\n\nI tried \"multi-command\", matching some wording use elsewhere in the file. \nI'm at lost about how to describe the insane behavior the feature has when \nthey are used, which is really just an unfixed bug, so I expanded a minima \nin the attached v2.\n\n-- \nFabien.",
"msg_date": "Fri, 27 Mar 2020 23:53:03 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nFabien,\r\n\r\nThere is a minor typo (works -> work) but overall I'm not a fan of the wording.\r\n\r\nInstead of a note maybe add a paragraph stating:\r\n\r\n\"This setting is ignored when multiple statements are sent to the server as a single command (i.e., via the -c command line option or the \\; meta-command). Additionally, it is possible for certain combinations of statements sent in that manner to instead return no results, or even be ignored, instead of returning the result set of the last query. In short, when FETCH_COUNT is non-zero, and you send a multi-statement command to the server, the results are undefined. This combination in presently allowed for backward compatibility.\"\r\n\r\nI would suggest, however, adding some tests in src/test/psql.sql demonstrating the broken behavior. A potentially useful test setup would be:\r\ncreate temp table testtbl (idx int, div int);\r\ninsert into testtbl (1,1), (2,1),(3,1),(4,0),(5,1);\r\nand combine that with FETCH_COUNT 3\r\n\r\nSome other things I tried, with and without FETCH_COUNT:\r\n\r\nbegin \\; select 2 \\; commit \\; select 1 / div from (select div from testtbl order by idx) as src;\r\nselect 1/0 \\; select 1 / div from (select div from testtbl where div <> 0 order by idx) as src;\r\nbegin \\; select 2 \\; select 1 \\; commit;\r\nbegin \\; select 2 \\; select 1;\r\ncommit;\r\n\r\nI'm not sure how to go about getting a equivalent result of \"select pg_sleep(2) \\; select 1;\" not sleeping. If I put select 1/0 instead of pg_sleep I get an error and any DML seems to end up working just fine (minus actual batch fetching)\r\nupdate testtbl set val = 2 \\; select 1; --result (1)\r\n\r\nDavid J.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 16 Jul 2020 23:08:08 +0000",
"msg_from": "David Johnston <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "On 2020-Jul-16, David Johnston wrote:\n\n> Instead of a note maybe add a paragraph stating:\n> \n> \"This setting is ignored when multiple statements are sent to the server as a single command (i.e., via the -c command line option or the \\; meta-command). Additionally, it is possible for certain combinations of statements sent in that manner to instead return no results, or even be ignored, instead of returning the result set of the last query. In short, when FETCH_COUNT is non-zero, and you send a multi-statement command to the server, the results are undefined. This combination in presently allowed for backward compatibility.\"\n\nI personally dislike documenting things that don't work, if worded in a\nway that don't leave open the possibility of fixing it in the future.\nSo I didn't like Fabien's original wording either, though I was thinking\nthat just adding \"This may change in a future release of Postgres\" might\nhave been enough. That seems more difficult with your proposed wording,\nbut maybe you see a good way to do it?\n\nAfter rereading it a few times, I think it's too specific about how it\nfails. Maybe it's possible to reduce to just the last two phrases,\nalong the lines of\n\n> When FETCH_COUNT is non-zero, and you send a multi-statement command\n> to the server (for example via -c or the \\; meta-command), the results\n> are undefined. This combination in presently allowed for backward\n> compatibility and may be changed in a future release.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jul 2020 19:24:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "On Thu, Jul 16, 2020 at 4:24 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Jul-16, David Johnston wrote:\n>\n> > Instead of a note maybe add a paragraph stating:\n> >\n> > \"This setting is ignored when multiple statements are sent to the server\n> as a single command (i.e., via the -c command line option or the \\;\n> meta-command). Additionally, it is possible for certain combinations of\n> statements sent in that manner to instead return no results, or even be\n> ignored, instead of returning the result set of the last query. In short,\n> when FETCH_COUNT is non-zero, and you send a multi-statement command to the\n> server, the results are undefined. This combination in presently allowed\n> for backward compatibility.\"\n>\n> I personally dislike documenting things that don't work, if worded in a\n> way that don't leave open the possibility of fixing it in the future.\n> So I didn't like Fabien's original wording either, though I was thinking\n> that just adding \"This may change in a future release of Postgres\" might\n> have been enough. That seems more difficult with your proposed wording,\n> but maybe you see a good way to do it?\n>\n> After rereading it a few times, I think it's too specific about how it\n> fails. Maybe it's possible to reduce to just the last two phrases,\n> along the lines of\n>\n> > When FETCH_COUNT is non-zero, and you send a multi-statement command\n> > to the server (for example via -c or the \\; meta-command), the results\n> > are undefined. This combination in presently allowed for backward\n> > compatibility and may be changed in a future release.\n>\n>\nEverything may change in a future release so I am generally opposed to\nincluding that. But I'm ok with being less descriptive on the observed\nfailure modes and sweeping it all under \"undefined\". 
Need to fix my typo,\n\"This combination is presently allowed\".\n\nWhen FETCH_COUNT is non-zero, and you send a multi-statement command\nto the server (for example via -c or the \\; meta-command), the results\nare undefined. This combination is presently allowed for backward\ncompatibility.\n\nDavid J.",
"msg_date": "Thu, 16 Jul 2020 16:44:47 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
},
{
"msg_contents": "On Thu, Jul 16, 2020 at 04:44:47PM -0700, David G. Johnston wrote:\n> When FETCH_COUNT is non-zero, and you send a multi-statement command\n> to the server (for example via -c or the \\; meta-command), the results\n> are undefined. This combination is presently allowed for backward\n> compatibility.\n\nThis has been waiting on author for two months now, so I have marked\nthe patch as RwF in the CF app.\n--\nMichael",
"msg_date": "Wed, 30 Sep 2020 15:27:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql FETCH_COUNT feature does not work with combined queries"
}
] |
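The undefined behavior discussed in the thread above comes down to an applicability test: psql's FETCH_COUNT machinery wraps a single SELECT in a cursor, which it cannot do for a buffer combining several statements. The sketch below is a hypothetical, deliberately naive version of such a check (it is not psql's actual code, and a real implementation would also have to skip semicolons inside quotes, dollar-quoting, and comments):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical helper: decide whether cursor-based FETCH_COUNT
 * execution can be used for the given query buffer.  Any statement
 * combined after a ';' forces a fall-back to plain one-shot execution,
 * where the documentation wording proposed in this thread declares the
 * results undefined.
 */
static bool
fetch_count_applies(const char *buf, int fetch_count)
{
    const char *semi;

    if (fetch_count <= 0)
        return false;           /* feature disabled */

    semi = strchr(buf, ';');
    if (semi != NULL)
    {
        /* non-whitespace after the first ';' means a combined command */
        for (semi++; *semi; semi++)
        {
            if (*semi != ' ' && *semi != '\t' && *semi != '\n')
                return false;
        }
    }
    return true;
}
```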
[
{
"msg_contents": "Hi,\n\nWhen specifying a PGC_POSTMASTER variable on the command line\n(i.e. -c something=other) the config processing blurts a wrong warning\nabout not being able to change that value. E.g. when specifying\nshared_buffers via -c, I get:\n\n2019-07-26 16:28:04.795 PDT [14464][] LOG: 00000: received SIGHUP, reloading configuration files\n2019-07-26 16:28:04.795 PDT [14464][] LOCATION: SIGHUP_handler, postmaster.c:2629\n2019-07-26 16:28:04.798 PDT [14464][] LOG: 55P02: parameter \"shared_buffers\" cannot be changed without restarting the server\n2019-07-26 16:28:04.798 PDT [14464][] LOCATION: set_config_option, guc.c:7107\n2019-07-26 16:28:04.798 PDT [14464][] LOG: F0000: configuration file \"/srv/dev/pgdev-dev/postgresql.conf\" contains errors; unaffected changes were applied\n2019-07-26 16:28:04.798 PDT [14464][] LOCATION: ProcessConfigFileInternal, guc-file.l:502\n\nISTM that the code blocks throwing these warnings:\n\n if (prohibitValueChange)\n {\n if (*conf->variable != newval)\n {\n record->status |= GUC_PENDING_RESTART;\n ereport(elevel,\n (errcode(ERRCODE_CANT_CHANGE_RUNTIME_PARAM),\n errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\n name)));\n return 0;\n }\n record->status &= ~GUC_PENDING_RESTART;\n return -1;\n }\n\nought to only enter the error path if changeVal indicates that we're\nactually intending to apply the value. I.e. something roughly like the\nattached.\n\n\nTwo more things I noticed when looking at this code:\n\n1) Aren't we leaking memory if prohibitValueChange is set, but newextra\n is present? The cleanup path for that:\n\n /* Perhaps we didn't install newextra anywhere */\n if (newextra && !extra_field_used(&conf->gen, newextra))\n free(newextra);\n\n isn't reached in the prohibitValueChange path shown above. ISTM the\n return -1 in the prohibitValueChange ought to be removed?\n\n2) The amount of PGC_* dependent code duplication in set_config_option()\n imo is over the top. 
ISTM that they should be merged, and\n a call_*_check_hook wrapper take care of invoking the check hooks,\n and another wrapper should take care of calling the assign hook,\n ->variable, and reset_val processing.\n\n Those wrappers could probably also reduce the amount of code in\n InitializeOneGUCOption(), parse_and_validate_value(),\n ResetAllOptions(), AtEOXact_GUC().\n\n I'm also wondering whether we shouldn't just use config_var_value for at\n least config_*->{reset_val, boot_val}. It seems pretty clear that\n reset_extra ought to be moved?\n\n I'm even wondering whether the various hooks shouldn't actually just take\n config_var_value. But changing that would probably cause more pain to\n external users - in contrast to looking directly at reset_val,\n boot_val, reset_extra they're much more likely to have hooks.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 26 Jul 2019 18:37:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "warning on reload for PGC_POSTMASTER, guc.c duplication, ..."
}
] |
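The fix proposed in the thread above gates the "cannot be changed without restarting" complaint on `changeVal`, so that merely re-reading an unchanged `-c` setting on SIGHUP stays silent. The following is a stripped-down sketch of that control flow; the struct and function names (`IntGuc`, `set_postmaster_guc`, the `complained` flag standing in for `ereport()`) are hypothetical stand-ins for guc.c's real machinery, not PostgreSQL's actual code.

```c
#include <assert.h>
#include <stdbool.h>

#define GUC_PENDING_RESTART 0x0001

/* hypothetical stand-in for guc.c's struct config_int */
typedef struct
{
    int        *variable;       /* current value */
    int         status;
} IntGuc;

/*
 * Sketch of the prohibitValueChange branch with the proposed extra
 * changeVal test: only complain (and mark GUC_PENDING_RESTART) when the
 * caller actually intends to apply the new value.  When the config file
 * is re-read and the value would be overridden anyway
 * (changeVal == false), return quietly instead of warning.
 */
static int
set_postmaster_guc(IntGuc *conf, int newval, bool changeVal, bool *complained)
{
    *complained = false;

    if (*conf->variable != newval)
    {
        if (changeVal)          /* the proposed extra condition */
        {
            conf->status |= GUC_PENDING_RESTART;
            *complained = true; /* stands in for the ereport() call */
            return 0;
        }
        return -1;              /* silently leave the value alone */
    }
    conf->status &= ~GUC_PENDING_RESTART;
    return -1;
}
```

With `changeVal == false` a differing value no longer triggers the spurious LOG message seen in the report above; with `changeVal == true` the restart-pending warning still fires as before.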
[
{
"msg_contents": "Since we have three or four different NOTIFY improvement proposals\nfloating around in the current CF, I got a bit distressed at the lack\nof test coverage for that functionality. While the code coverage\nreport makes it look like commands/async.c isn't so badly covered,\nthat's all coming from src/test/regress/sql/async.sql and\nsrc/test/isolation/specs/async-notify.spec. A look at those files\nshows that nowhere is there any actual verification that \"NOTIFY foo\"\nresults in a report of \"foo\" being received; let alone any\nmore-advanced questions such as whether de-duplication of reports\nhappens.\n\nThe reason for this is that psql's report of a notification event\nincludes the sending backend's PID, making it impossible for the\ntest output to be stable; neither the core nor isolation regression\ntest frameworks can cope with unpredictable output.\n\nWe've occasionally batted around ideas for making it possible for\nthese test frameworks to verify not-entirely-fixed output, and that\nwould be a good thing to do, but I'm not volunteering for that today.\n\nSo, if we'd like to have more thorough NOTIFY coverage without going\nto that much work, what to do? I thought of a few alternatives:\n\n1. Write a TAP test instead of using the old test frameworks, and\nuse regexps to check the expected output. But this seems ugly and\nhard to get right. In particular, our TAP infrastructure doesn't\nseem to be (easily?) capable of running concurrent psql sessions,\nso it doesn't seem like there's any good way to test cross-session\nnotifies that way.\n\n2. Change psql so that there's a way to get NOTIFY messages without\nthe sending PID. For testing purposes, it'd be sufficient to know\nwhether the sending PID is our own backend's PID or not. 
This idea\nis not horrible, and it might even be useful for outside purposes\nif we made it flexible enough; which leads to thoughts like allowing\nthe psql user to set a format-style string, similar to the PROMPT\nstrings but with escapes for channel name, payload, etc. I foresee\nbikeshedding, but we could probably come to an agreement on a feature\nlike that.\n\n3. On the other hand, that doesn't help much for the isolation tester\nbecause it doesn't go through psql. In fact, AFAICS it doesn't have\nany provision for dealing with notify messages at all; probably,\nin the async-notify.spec test, the listening session builds up a\nqueue of notifies that it never reads. So we could imagine addressing\nthe testing gap strictly inside the isolation-tester framework, if we\nadded the ability for it to detect and print notifications in a\ntest-friendly format (no explicit PIDs).\n\nI'm finding alternative #3 the most attractive, because we really\nwant isolation-style testing for LISTEN/NOTIFY, and this solution\ndoesn't require designing a psql feature that we'd need to get\nconsensus on.\n\nBefore I start coding that, any thoughts or better ideas?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 12:46:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 12:46:51 -0400, Tom Lane wrote:\n> So, if we'd like to have more thorough NOTIFY coverage without going\n> to that much work, what to do? I thought of a few alternatives:\n> \n> 1. Write a TAP test instead of using the old test frameworks, and\n> use regexps to check the expected output. But this seems ugly and\n> hard to get right. In particular, our TAP infrastructure doesn't\n> seem to be (easily?) capable of running concurrent psql sessions,\n> so it doesn't seem like there's any good way to test cross-session\n> notifies that way.\n\nIt's not that hard to have concurrent psql sessions -\ne.g. src/test/recovery/t/013_crash_restart.pl does so. Writing tests by\ninteractively controlling psql is pretty painful regardless.\n\nI'm inclined to think that this is better tested using isolationtester\nthan a tap test.\n\n\n> 2. Change psql so that there's a way to get NOTIFY messages without\n> the sending PID. For testing purposes, it'd be sufficient to know\n> whether the sending PID is our own backend's PID or not. This idea\n> is not horrible, and it might even be useful for outside purposes\n> if we made it flexible enough; which leads to thoughts like allowing\n> the psql user to set a format-style string, similar to the PROMPT\n> strings but with escapes for channel name, payload, etc. I foresee\n> bikeshedding, but we could probably come to an agreement on a feature\n> like that.\n\nI was wondering about just tying it to VERBOSITY. But that'd not allow\nus to see whether our backend was the sender. I'm mildly inclined to\nthink that that might still be a good idea, even if we mostly go with\n3) - some basic plain regression test coverage of actually receiving\nnotifies would be good.\n\n\n> 3. On the other hand, that doesn't help much for the isolation tester\n> because it doesn't go through psql. 
In fact, AFAICS it doesn't have\n> any provision for dealing with notify messages at all; probably,\n> in the async-notify.spec test, the listening session builds up a\n> queue of notifies that it never reads. So we could imagine addressing\n> the testing gap strictly inside the isolation-tester framework, if we\n> added the ability for it to detect and print notifications in a\n> test-friendly format (no explicit PIDs).\n> \n> I'm finding alternative #3 the most attractive, because we really\n> want isolation-style testing for LISTEN/NOTIFY, and this solution\n> doesn't require designing a psql feature that we'd need to get\n> consensus on.\n\nYea. I think that's really what we need. As you say, the type of test we\nreally need is what isolationtester provides. We can reimplement it\nawkwardly in perl, but there seems to be little point in doing so.\nEspecially as what we're talking about is an additional ~15 lines or so\nof code in isolationtester.\n\nIt'd be kinda neat if we had other information in the notify\nmessage. E.g. having access to the sender's application name would be\nuseful for isolationtester, to actually verify where the message came\nfrom. But it's probably not worth investing a lot in that.\n\nPerhaps we could just have isolationtester check to which\nisolationtester session the backend pid belongs? And then print the\nsession name instead of the pid? That should be fairly easy, and would\nprobably give us all we need?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 10:42:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-27 12:46:51 -0400, Tom Lane wrote:\n>> I'm finding alternative #3 the most attractive, because we really\n>> want isolation-style testing for LISTEN/NOTIFY, and this solution\n>> doesn't require designing a psql feature that we'd need to get\n>> consensus on.\n\n> Perhaps we could just have isolationtester check to which\n> isolationtester session the backend pid belongs? And then print the\n> session name instead of the pid? That should be fairly easy, and would\n> probably give us all we need?\n\nOh, that's a good idea -- it's already tracking all the backend PIDs,\nso probably not much extra work to do it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 13:53:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
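The idea agreed above amounts to a reverse lookup from a backend PID to the spec-defined session name in the tables isolationtester already tracks. Here is a minimal, self-contained sketch of that lookup; the function name, array layout, and the `<unknown>` fallback are illustrative assumptions, not isolationtester's actual code.

```c
#include <assert.h>
#include <string.h>

/*
 * Map a NOTIFY sender's backend PID back to the session name the test
 * harness already knows, so expected output contains stable session
 * names instead of run-dependent PIDs.
 */
static const char *
notify_source_name(int be_pid, const int *backend_pids,
                   const char **session_names, int nconns)
{
    for (int i = 0; i < nconns; i++)
    {
        if (backend_pids[i] == be_pid)
            return session_names[i];
    }
    return "<unknown>";         /* a sender outside the spec's sessions */
}
```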
{
"msg_contents": "While I'm looking at isolationtester ... my eye was immediately drawn\nto this bit, because it claims to be dealing with NOTIFY messages ---\nthough that's wrong, it's really blocking NOTICE messages:\n\n /*\n * Suppress NOTIFY messages, which otherwise pop into results at odd\n * places.\n */\n res = PQexec(conns[i], \"SET client_min_messages = warning;\");\n if (PQresultStatus(res) != PGRES_COMMAND_OK)\n {\n fprintf(stderr, \"message level setup failed: %s\", PQerrorMessage(conns[i]));\n exit(1);\n }\n PQclear(res);\n\nThis seems to me to be a great example of terrible test design.\nIt's not isolationtester's job to impose a client_min_messages level\non the test scripts; if they want a non-default level, they can\nperfectly well set it for themselves in their setup sections.\nFurthermore, if I remove this bit, the only NOTICE messages I'm\nactually seeing come from explicit RAISE NOTICE messages in the\ntest scripts themselves, which means this is overriding the express\nintent of individual test authors. And my testing isn't detecting\nany instability in when those come out, although of course the\nbuildfarm might have a different opinion.\n\nSo I think we should apply something like the attached, and if the\nbuildfarm shows any instability as a result, dealing with that by\ntaking out the RAISE NOTICE commands.\n\nI'm a little inclined to remove the notice anyway in the\nplpgsql-toast test, as the bulk-to-value ratio doesn't seem good.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 27 Jul 2019 14:15:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 14:15:39 -0400, Tom Lane wrote:\n> So I think we should apply something like the attached, and if the\n> buildfarm shows any instability as a result, dealing with that by\n> taking out the RAISE NOTICE commands.\n\n+1\n\n> diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out b/src/test/isolation/expected/insert-conflict-specconflict.out\n> index 5726bdb..20cc421 100644\n> --- a/src/test/isolation/expected/insert-conflict-specconflict.out\n> +++ b/src/test/isolation/expected/insert-conflict-specconflict.out\n> @@ -13,7 +13,11 @@ pg_advisory_locksess lock\n> step controller_show: SELECT * FROM upserttest;\n> key data \n> \n> +s1: NOTICE: called for k1\n> +s1: NOTICE: blocking 3\n> step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; <waiting ...>\n> +s2: NOTICE: called for k1\n> +s2: NOTICE: blocking 3\n\nHm, that actually makes the test - which is pretty complicated - easier\nto understand.\n\n\n> diff --git a/src/test/isolation/expected/plpgsql-toast.out b/src/test/isolation/expected/plpgsql-toast.out\n> index 4341153..39a7bbe 100644\n> --- a/src/test/isolation/expected/plpgsql-toast.out\n> +++ b/src/test/isolation/expected/plpgsql-toast.out\n> @@ -35,6 +35,7 @@ step unlock:\n> pg_advisory_unlock\n> \n> t \n> +s1: NOTICE: x = foofoofoofo\n\nYea, there indeed is not much point in this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 11:55:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-27 14:15:39 -0400, Tom Lane wrote:\n>> So I think we should apply something like the attached, and if the\n>> buildfarm shows any instability as a result, dealing with that by\n>> taking out the RAISE NOTICE commands.\n\n> +1\n\n>> diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out b/src/test/isolation/expected/insert-conflict-specconflict.out\n>> index 5726bdb..20cc421 100644\n>> --- a/src/test/isolation/expected/insert-conflict-specconflict.out\n>> +++ b/src/test/isolation/expected/insert-conflict-specconflict.out\n>> @@ -13,7 +13,11 @@ pg_advisory_locksess lock\n>> step controller_show: SELECT * FROM upserttest;\n>> key data \n>> \n>> +s1: NOTICE: called for k1\n>> +s1: NOTICE: blocking 3\n>> step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; <waiting ...>\n>> +s2: NOTICE: called for k1\n>> +s2: NOTICE: blocking 3\n\n> Hm, that actually makes the test - which is pretty complicated - easier\n> to understand.\n\nUnfortunately, I just found out that on a slow enough machine\n(prairiedog's host) there *is* some variation in when that test's\nnotices come out. I am unsure whether that's to be expected or\nwhether there's something wrong there --- Peter, any thoughts?\n\nWhat I will do for the moment is remove the client_min_messages=WARNING\nsetting from isolationtester.c and instead put it into\ninsert-conflict-specconflict.spec, which seems like a saner\nway to manage this. If we can get these messages to appear\nstably, we can just fix that spec file.\n\n>> +s1: NOTICE: x = foofoofoofo\n\n> Yea, there indeed does not not much point in this.\n\nMaybe we could just log the lengths of the strings... if there's\nanything broken, we could expect that the decompressed output\nwould be a different length.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 15:39:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 15:39:44 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out b/src/test/isolation/expected/insert-conflict-specconflict.out\n> >> index 5726bdb..20cc421 100644\n> >> --- a/src/test/isolation/expected/insert-conflict-specconflict.out\n> >> +++ b/src/test/isolation/expected/insert-conflict-specconflict.out\n> >> @@ -13,7 +13,11 @@ pg_advisory_locksess lock\n> >> step controller_show: SELECT * FROM upserttest;\n> >> key data \n> >> \n> >> +s1: NOTICE: called for k1\n> >> +s1: NOTICE: blocking 3\n> >> step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; <waiting ...>\n> >> +s2: NOTICE: called for k1\n> >> +s2: NOTICE: blocking 3\n> \n> > Hm, that actually makes the test - which is pretty complicated - easier\n> > to understand.\n> \n> Unfortunately, I just found out that on a slow enough machine\n> (prairiedog's host) there *is* some variation in when that test's\n> notices come out. I am unsure whether that's to be expected or\n> whether there's something wrong there\n\nHm. Any chance you could show the diff? I don't immediately see why.\n\n\n> --- Peter, any thoughts?\n\nThink that's my transgression :/\n\n\n> What I will do for the moment is remove the client_min_messages=WARNING\n> setting from isolationtester.c and instead put it into\n> insert-conflict-specconflict.spec, which seems like a saner\n> way to manage this. If we can get these messages to appear\n> stably, we can just fix that spec file.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 13:19:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 12:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Unfortunately, I just found out that on a slow enough machine\n> (prairiedog's host) there *is* some variation in when that test's\n> notices come out. I am unsure whether that's to be expected or\n> whether there's something wrong there --- Peter, any thoughts?\n\nI don't know why this happens, but it's worth noting that the plpgsql\nfunction that raises these notices (\"blurt_and_lock()\") is marked\nIMMUTABLE (not sure if you noticed that already). This is a deliberate\nmisrepresentation which is needed to acquire advisory locks at just\nthe right points during execution.\n\nIf I had to guess, I'd guess that it had something to do with that. I\nmight be able to come up with a better explanation if I saw the diff.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 27 Jul 2019 13:53:35 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Perhaps we could just have isolationtester check to which\n>> isolationtester session the backend pid belongs? And then print the\n>> session name instead of the pid? That should be fairly easy, and would\n>> probably give us all we need?\n\n> Oh, that's a good idea -- it's already tracking all the backend PIDs,\n> so probably not much extra work to do it like that.\n\nI found out that to avoid confusion, one really wants the message to\nidentify both the sending and receiving sessions. Here's a patch\nthat does it that way and extends the async-notify.spec test to\nperform basic end-to-end checks on LISTEN/NOTIFY.\n\nI intentionally made the test show the lack of NOTIFY de-duplication\nthat currently happens with subtransactions. If we change this as I\nproposed in <17822.1564186806@sss.pgh.pa.us>, this test output will\nchange.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 27 Jul 2019 18:20:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-27 15:39:44 -0400, Tom Lane wrote:\n>> Unfortunately, I just found out that on a slow enough machine\n>> (prairiedog's host) there *is* some variation in when that test's\n>> notices come out. I am unsure whether that's to be expected or\n>> whether there's something wrong there\n\n> Hm. Any chance you could show the diff? I don't immediately see why.\n\nSure. If I remove the client_min_messages hack from HEAD, then on\nmy dev workstation I get the attached test diff; that reproduces\nquite reliably on a couple of machines. However, running that\ndiff on prairiedog's host gets the failure attached second more\noften than not. (Sometimes it will pass.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 27 Jul 2019 18:48:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 18:20:52 -0400, Tom Lane wrote:\n> diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c\n> index 6ab19b1..98e5bf2 100644\n> --- a/src/test/isolation/isolationtester.c\n> +++ b/src/test/isolation/isolationtester.c\n> @@ -23,10 +23,12 @@\n> \n> /*\n> * conns[0] is the global setup, teardown, and watchdog connection. Additional\n> - * connections represent spec-defined sessions.\n> + * connections represent spec-defined sessions. We also track the backend\n> + * PID, in numeric and string formats, for each connection.\n> */\n> static PGconn **conns = NULL;\n> -static const char **backend_pids = NULL;\n> +static int *backend_pids = NULL;\n> +static const char **backend_pid_strs = NULL;\n> static int\tnconns = 0;\n\nHm, a bit sad to have both of those around. Not worth getting bothered\nabout memory wise, but it does irk me somewhat.\n\n\n> @@ -187,26 +191,9 @@ main(int argc, char **argv)\n> \t\t\t\t\t\t\t\t blackholeNoticeProcessor,\n> \t\t\t\t\t\t\t\t NULL);\n> \n> -\t\t/* Get the backend pid for lock wait checking. */\n> -\t\tres = PQexec(conns[i], \"SELECT pg_catalog.pg_backend_pid()\");\n> -\t\tif (PQresultStatus(res) == PGRES_TUPLES_OK)\n> -\t\t{\n> -\t\t\tif (PQntuples(res) == 1 && PQnfields(res) == 1)\n> -\t\t\t\tbackend_pids[i] = pg_strdup(PQgetvalue(res, 0, 0));\n> -\t\t\telse\n> -\t\t\t{\n> -\t\t\t\tfprintf(stderr, \"backend pid query returned %d rows and %d columns, expected 1 row and 1 column\",\n> -\t\t\t\t\t\tPQntuples(res), PQnfields(res));\n> -\t\t\t\texit(1);\n> -\t\t\t}\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> -\t\t\tfprintf(stderr, \"backend pid query failed: %s\",\n> -\t\t\t\t\tPQerrorMessage(conns[i]));\n> -\t\t\texit(1);\n> -\t\t}\n> -\t\tPQclear(res);\n> +\t\t/* Save each connection's backend PID for subsequent use. 
*/\n> +\t\tbackend_pids[i] = PQbackendPID(conns[i]);\n> +\t\tbackend_pid_strs[i] = psprintf(\"%d\", backend_pids[i]);\n\nHeh.\n\n\n> @@ -738,7 +728,7 @@ try_complete_step(Step *step, int flags)\n> \t\t\t\tbool\t\twaiting;\n> \n> \t\t\t\tres = PQexecPrepared(conns[0], PREP_WAITING, 1,\n> -\t\t\t\t\t\t\t\t\t &backend_pids[step->session + 1],\n> +\t\t\t\t\t\t\t\t\t &backend_pid_strs[step->session + 1],\n> \t\t\t\t\t\t\t\t\t NULL, NULL, 0);\n> \t\t\t\tif (PQresultStatus(res) != PGRES_TUPLES_OK ||\n> \t\t\t\t\tPQntuples(res) != 1)\n\nWe could of course just send the pids in binary ;). No, not worth it\njust to avoid a small redundant array ;)\n\n\n> +\t/* Report any available NOTIFY messages, too */\n> +\tPQconsumeInput(conn);\n> +\twhile ((notify = PQnotifies(conn)) != NULL)\n> +\t{\n\nHm. I wonder if all that's happening with prairedog is that the notice\nis sent a bit later. I think that could e.g. conceivably happen because\nit TCP_NODELAY isn't supported on prariedog? Or just because the machine\nis very slow?\n\nThe diff you showed with the reordering afaict only reordered the NOTIFY\naround statements that are marked as <waiting ...>. As the waiting\ndetection is done over a separate connection, there's afaict no\nguarantee that we see all notices/notifies that occurred before the\nquery started blocking. It's possible we could make this practically\nrobust enough by checking for notice/notifies on the blocked connection\njust before printing out the <waiting ...>? That still leaves the\npotential issue that the different backend connection deliver data out\nof order, but that seems not very likely?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 16:18:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> We could of course just send the pids in binary ;). No, not worth it\n> just to avoid a small redundant array ;)\n\nIIRC, we'd have to do htonl on them, so we'd still end up with\ntwo representations ...\n\n> Hm. I wonder if all that's happening with prairedog is that the notice\n> is sent a bit later. I think that could e.g. conceivably happen because\n> it TCP_NODELAY isn't supported on prariedog? Or just because the machine\n> is very slow?\n\nThe notices (not notifies) are coming out in the opposite order from\nexpected. I haven't really thought hard about what's causing that;\nit seems odd, because isolationtester isn't supposed to give up waiting\nfor a session until it's visibly blocked according to pg_locks. Maybe\nit needs to recheck for incoming data once more after seeing that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 19:27:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Hm. I wonder if all that's happening with prairedog is that the notice\n>> is sent a bit later. I think that could e.g. conceivably happen because\n>> it TCP_NODELAY isn't supported on prariedog? Or just because the machine\n>> is very slow?\n\n> The notices (not notifies) are coming out in the opposite order from\n> expected. I haven't really thought hard about what's causing that;\n> it seems odd, because isolationtester isn't supposed to give up waiting\n> for a session until it's visibly blocked according to pg_locks. Maybe\n> it needs to recheck for incoming data once more after seeing that?\n\nAh-hah, that seems to be the answer. With the attached patch I'm\ngetting reliable-seeming passes on prairiedog.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 27 Jul 2019 19:45:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 19:27:17 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > We could of course just send the pids in binary ;). No, not worth it\n> > just to avoid a small redundant array ;)\n> \n> IIRC, we'd have to do htonl on them, so we'd still end up with\n> two representations ...\n\nYea. Although that'd could just be done in a local variable. Anyway,\nit's obviously not important.\n\n\n> > Hm. I wonder if all that's happening with prairedog is that the notice\n> > is sent a bit later. I think that could e.g. conceivably happen because\n> > it TCP_NODELAY isn't supported on prariedog? Or just because the machine\n> > is very slow?\n> \n> The notices (not notifies) are coming out in the opposite order from\n> expected. I haven't really thought hard about what's causing that;\n> it seems odd, because isolationtester isn't supposed to give up waiting\n> for a session until it's visibly blocked according to pg_locks. Maybe\n> it needs to recheck for incoming data once more after seeing that?\n\nYea, that's precisely what I was trying to refer to / suggesting. What I\nthink is happening is that both queries get sent to the server, we\nPQisBusy();select() and figure out they're not done yet. On most\nmachines the raise NOTICE will have been processed by that time, after\nit's a trivial query. But on prariedog (and I suspect even more likely\non valgrind / clobber cache animals), they're not that far yet. So we\nsend the blocking query, until we've seen that it blocks. But there's no\ninterlock guaranteeing that we'll have seen the notices before the\n*other* connection has detected us blocking. As the blocking query is\nmore complex to plan and execute, that window isn't that small.\n\nPolling for notices on the blocked connection before printing anything\nought to practically be reliable. Theoretically I think it still allows\nfor some reordering, e.g. 
because there was packet loss on one, but not\nthe other connection.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 16:51:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Polling for notices on the blocked connection before printing anything\n> ought to practically be reliable. Theoretically I think it still allows\n> for some reordering, e.g. because there was packet loss on one, but not\n> the other connection.\n\nAs long as it's a local connection, packet loss shouldn't be a problem\n;-). I'm slightly more worried about the case of more than one bufferful\nof NOTICE messages: calling PQconsumeInput isn't entirely guaranteed to\nabsorb *all* available input. But for the cases we actually need to\ndeal with, I think probably the patch as I sent it is OK. We could\ncomplicate matters by going around the loop extra time(s) to verify\nthat select() thinks no data is waiting, but I doubt it's worth the\ncomplexity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 20:02:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 20:02:13 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> I'm slightly more worried about the case of more than one bufferful\n> of NOTICE messages: calling PQconsumeInput isn't entirely guaranteed to\n> absorb *all* available input. But for the cases we actually need to\n> deal with, I think probably the patch as I sent it is OK. We could\n> complicate matters by going around the loop extra time(s) to verify\n> that select() thinks no data is waiting, but I doubt it's worth the\n> complexity.\n\nIt'd just be one continue; right? Except that we don't know if\nPQconsumeInput() actually did anything... So we'd need to do something\nlike executing a select and only call PQconsumeInput() if the select\nsignals that there's data? And then always retry? Yea, that seems too\ncomplicated.\n\nKinda annoying that we don't expose pqReadData()'s return value anywhere\nthat I can see. Not so much for this, but in general. Travelling back\ninto the past, ISTM, PQconsumeInput() should have returned a different\nreturn code if either pqReadData() or pqFlush() did anything.\n\nI wonder if there aren't similar dangers around the notify handling. In\nyour patch we don't print them particularly eagerly. Doesn't that also\nopen us up to timing concerns? In particular, for notifies sent out\nwhile idle, we might print them together with the *last* command\nexecuted - as far as I can tell, if they arrive before the\nPQconsumeInput(), we'll process them all in the PQisBusy() call at the\ntop of try_complete_step()'s loop? Am I missing some interlock here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 17:54:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if there aren't similar dangers around the notify handling. In\n> your patch we don't print them particularly eagerly. Doesn't that also\n> open us up to timing concerns?\n\nI think probably not, because of the backend-side restrictions on when\nnotify messages will be sent. The corresponding case for the NOTICE\nbug we just fixed would be if a backend sent a NOTIFY before blocking;\nbut it can't do that internally to a transaction, and anyway the proposed\ntest script isn't doing anything that tricky.\n\nI did spend some time thinking about how isolationtester might report\nnotifys that are sent spontaneously (without any \"triggering\" query)\nbut I didn't feel that that was worth messing with. We'd have to\nhave the program checking all the connections not just the one that's\nrunning what it thinks is the currently active step.\n\nWe might be approaching a time where it's worth scrapping the\nisolationtester logic and starting over. I'm not volunteering though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 21:08:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-27 12:46:51 -0400, Tom Lane wrote:\n>> 2. Change psql so that there's a way to get NOTIFY messages without\n>> the sending PID. For testing purposes, it'd be sufficient to know\n>> whether the sending PID is our own backend's PID or not. This idea\n>> is not horrible, and it might even be useful for outside purposes\n>> if we made it flexible enough; which leads to thoughts like allowing\n>> the psql user to set a format-style string, similar to the PROMPT\n>> strings but with escapes for channel name, payload, etc. I foresee\n>> bikeshedding, but we could probably come to an agreement on a feature\n>> like that.\n\n> I was wondering about just tying it to VERBOSITY. But that'd not allow\n> us to see whether our backend was the sender. I'm mildly inclined to\n> think that that might still be a good idea, even if we mostly go with\n> 3) - some basic plain regression test coverage of actually receiving\n> notifies would be good.\n\nBTW, as far as that goes, do you think we could get away with changing\npsql to print \"from self\" instead of \"from PID n\" when it's a self-notify?\nThat would be enough to make the output stable for cases that we'd be\nable to check in the core test infrastructure.\n\nSo far as the backend is concerned, doing anything there is redundant\nwith the isolation tests I just committed --- but it would allow psql's\nown notify code to be covered, so maybe it's worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jul 2019 12:07:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Testing LISTEN/NOTIFY more effectively"
}
] |
[
{
"msg_contents": "Hello,\n\nI just implemented a small change that adds another column \"mem_usage\" to the system view \"pg_prepared_statements\". It returns the memory usage total of CachedPlanSource.context, CachedPlanSource.query_content and if available CachedPlanSource.gplan.context.\n\nLooks like this:\n\nIKOffice_Daume=# prepare test as select * from vw_report_salesinvoice where salesinvoice_id = $1;\nPREPARE\nIKOffice_Daume=# select * from pg_prepared_statements;\nname | statement | prepare_time | parameter_types | from_sql | mem_usage\n------+----------------------------------------------------------------------------------+------------------------------+-----------------+----------+-----------\ntest | prepare test as select * from vw_report_salesinvoice where salesinvoice_id = $1; | 2019-07-27 20:21:12.63093+02 | {integer} | t | 33580232\n(1 row)\n\nI did this in preparation of reducing the memory usage of prepared statements and believe that this gives client application an option to investigate which prepared statements should be dropped. Also this makes it possible to directly examine the results of further changes and their effectiveness on reducing the memory load of prepared_statements.\n\nIs a patch welcome or is this feature not of interest?\n\nAlso I wonder why the \"prepare test as\" is part of the statement column. I isn't even part of the real statement that is prepared as far as I would assume. Would prefer to just have the \"select *...\" in that column.\n\nKind regards,\nDaniel Migowski\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI just implemented a small change that adds another column “mem_usage” to the system view “pg_prepared_statements”. 
It returns the memory usage total of CachedPlanSource.context, CachedPlanSource.query_content and if\n available CachedPlanSource.gplan.context.\n \nLooks like this:\n \nIKOffice_Daume=# prepare test as select * from vw_report_salesinvoice where salesinvoice_id = $1;\nPREPARE\nIKOffice_Daume=# select * from pg_prepared_statements;\nname | statement | prepare_time | parameter_types | from_sql | mem_usage\n------+----------------------------------------------------------------------------------+------------------------------+-----------------+----------+-----------\ntest | prepare test as select * from vw_report_salesinvoice where salesinvoice_id = $1; | 2019-07-27 20:21:12.63093+02 | {integer} | t | \n33580232\n(1 row)\n \nI did this in preparation of reducing the memory usage of prepared statements and believe that this gives client application an option to investigate which prepared statements should be dropped. Also this makes it possible\n to directly examine the results of further changes and their effectiveness on reducing the memory load of prepared_statements.\n \nIs a patch welcome or is this feature not of interest?\n \nAlso I wonder why the “prepare test as” is part of the statement column. I isn’t even part of the real statement that is prepared as far as I would assume. Would prefer to just have the “select *…” in that column.\n\n \nKind regards,\nDaniel Migowski",
"msg_date": "Sat, 27 Jul 2019 18:29:23 +0000",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 18:29:23 +0000, Daniel Migowski wrote:\n> I just implemented a small change that adds another column \"mem_usage\"\n> to the system view \"pg_prepared_statements\". It returns the memory\n> usage total of CachedPlanSource.context,\n> CachedPlanSource.query_content and if available\n> CachedPlanSource.gplan.context.\n\nFWIW, it's generally easier to comment if you actually provide the\npatch, even if it's just POC, as that gives a better handle on how much\nadditional complexity it introduces.\n\nI think this could be a useful feature. I'm not so sure we want it tied\nto just cached statements however - perhaps we ought to generalize it a\nbit more.\n\n\nRegarding the prepared statements specific considerations: I don't think\nwe ought to explicitly reference CachedPlanSource.query_content, and\nCachedPlanSource.gplan.context.\n\nIn the case of actual prepared statements (rather than oneshot plans)\nCachedPlanSource.query_context IIRC should live under\nCachedPlanSource.context. I think there's no relevant cases where\ngplan.context isn't a child of CachedPlanSource.context either, but not\nquite sure.\n\nThen we ought to just include child contexts in the memory computation\n(cf. logic in MemoryContextStatsInternal(), although you obviously\nwouldn't need all that). That way, if the cached statements has child\ncontexts, we're going to stay accurate.\n\n\n> Also I wonder why the \"prepare test as\" is part of the statement\n> column. I isn't even part of the real statement that is prepared as\n> far as I would assume. Would prefer to just have the \"select *...\" in\n> that column.\n\nIt's the statement that was executed. Note that you'll not see that in\nthe case of protocol level prepared statements. It will sometimes\ninclude relevant information, e.g. about the types specified as part of\nthe prepare (as in PREPARE foo(int, float, ...) AS ...).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 12:11:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
}
] |
[
{
"msg_contents": "Hi,\n\nThe discussion in [1]\nagain reminded me how much I dislike that we currently issue database\nqueries in tap tests by forking psql and writing/reading from it's\nstdin/stdout.\n\nThat's quite cumbersome to write, and adds a good number of additional\nfailure scenarios to worry about. For a lot of tasks you then have to\nreparse psql's output to separate out columns etc.\n\nI think we have a better approach for [1] than using tap tests, but I\nthink it's a more general issue preventing us from having more extensive\ntest coverage, and especially from having more robust coverage.\n\nI think we seriously ought to consider depending on a proper perl\ndatabase driver. I can see various approaches:\n\n1) Just depend on DBD::Pg being installed. It's fairly common, after\n all. It'd be somewhat annoying that we'd often end up using a\n different version of libpq than what we're testing against. But in\n most cases that'd not be particularly problematic.\n\n2) Depend on DBD::PgPP, a pure perl driver. It'd likely often be more\n lightweight to install. On the other hand, it's basically\n unmaintained (last commit 2010), and is a lot less commonly already\n installed than DBD::Pg. Also depends on DBI, which isn't part of a\n core perl IIRC.\n\n3) Vendor a test-only copy of one of those libraries, and build them as\n part of the test setup. That'd cut down on the number of\n dependencies.\n\n But probably not that much, because we'd still depend on DBI, which\n IIRC isn't part of core perl.\n\n DBI by default does include C code, and is not that small. There's\n however a pure perl version maintained as part of DBI, and it looks\n like it might be reasonably enough sized. 
If we vendored that, and\n DBD::PgPP, we'd not have any additional dependencies.\n\n I suspect that the licensing (dual GPL *version 1* / Artistic\n License, also V1), makes this too complicated, however.\n\n4) We develop a fairly minimal pure perl database driver, that doesn't\n depend on DBI. Include it somewhere as part of the test code, instead\n of src/interfaces, so it's clearer that it's not ment as an actual\n official driver.\n\n The obvious disadvantage is that this would be a noticable amount of\n code. But it's also not that crazily much.\n\n One big advantage I can see is that that'd make it a lot easier to\n write low-level protocol tests. Right now we either don't have them,\n or they have to go through libpq, which quite sensibly doesn't expose\n all the details to the outside. IMO it'd be really nice if we had a\n way to to write low level protocol tests, especially for testing\n things like the v2 protocol.\n\nI'm not volunteering to do 4), my perl skills aren't great (if the test\ninfra were python, otoh... I have the skeleton of a pure perl driver\nthat I used for testing somewhere). But I am leaning towards that being\nthe most reasonable choice.\n\nCraig, IIRC you'd thought about this before too?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/31304.1564246011%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sat, 27 Jul 2019 12:15:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "tap tests driving the database via psql"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 12:15:23PM -0700, Andres Freund wrote:\n> Hi,\n> \n> The discussion in [1]\n> again reminded me how much I dislike that we currently issue database\n> queries in tap tests by forking psql and writing/reading from it's\n> stdin/stdout.\n> \n> That's quite cumbersome to write, and adds a good number of additional\n> failure scenarios to worry about. For a lot of tasks you then have to\n> reparse psql's output to separate out columns etc.\n> \n> I think we have a better approach for [1] than using tap tests, but I\n> think it's a more general issue preventing us from having more extensive\n> test coverage, and especially from having more robust coverage.\n> \n> I think we seriously ought to consider depending on a proper perl\n> database driver. I can see various approaches:\n> \n> 1) Just depend on DBD::Pg being installed. It's fairly common, after\n> all. It'd be somewhat annoying that we'd often end up using a\n> different version of libpq than what we're testing against. But in\n> most cases that'd not be particularly problematic.\n> \n> 2) Depend on DBD::PgPP, a pure perl driver. It'd likely often be more\n> lightweight to install. On the other hand, it's basically\n> unmaintained (last commit 2010), and is a lot less commonly already\n> installed than DBD::Pg. Also depends on DBI, which isn't part of a\n> core perl IIRC.\n> \n> 3) Vendor a test-only copy of one of those libraries, and build them as\n> part of the test setup. That'd cut down on the number of\n> dependencies.\n> \n> But probably not that much, because we'd still depend on DBI, which\n> IIRC isn't part of core perl.\n> \n> DBI by default does include C code, and is not that small. There's\n> however a pure perl version maintained as part of DBI, and it looks\n> like it might be reasonably enough sized. 
If we vendored that, and\n> DBD::PgPP, we'd not have any additional dependencies.\n> \n> I suspect that the licensing (dual GPL *version 1* / Artistic\n> License, also V1), makes this too complicated, however.\n> \n> 4) We develop a fairly minimal pure perl database driver, that doesn't\n> depend on DBI. Include it somewhere as part of the test code, instead\n> of src/interfaces, so it's clearer that it's not ment as an actual\n> official driver.\n\nThere's one that may or may not need updates that's basically just a\nwrapper around libpq.\n\nhttps://ftp.postgresql.org/pub/projects/gborg/pgperl/stable/\n\n> The obvious disadvantage is that this would be a noticable amount of\n> code. But it's also not that crazily much.\n> \n> One big advantage I can see is that that'd make it a lot easier to\n> write low-level protocol tests. Right now we either don't have them,\n> or they have to go through libpq, which quite sensibly doesn't expose\n> all the details to the outside. IMO it'd be really nice if we had a\n> way to to write low level protocol tests, especially for testing\n> things like the v2 protocol.\n\nThat sounds worth doing as a separate thing, and an obvious\napplication of it would be something like a febesmith, which would get\nus a better idea as to whether we've implemented the protocol we say\nwe have.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 27 Jul 2019 22:32:37 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 22:32:37 +0200, David Fetter wrote:\n> On Sat, Jul 27, 2019 at 12:15:23PM -0700, Andres Freund wrote:\n> > 4) We develop a fairly minimal pure perl database driver, that doesn't\n> > depend on DBI. Include it somewhere as part of the test code, instead\n> > of src/interfaces, so it's clearer that it's not ment as an actual\n> > official driver.\n> \n> There's one that may or may not need updates that's basically just a\n> wrapper around libpq.\n> \n> https://ftp.postgresql.org/pub/projects/gborg/pgperl/stable/\n\nThat's pretty darn old however (2002). Needs to be compiled. And is GPL\nv1 / Artistic v1 licensed. I think all of the other alternatives are\nbetter than this.\n\n\n> > The obvious disadvantage is that this would be a noticable amount of\n> > code. But it's also not that crazily much.\n> > \n> > One big advantage I can see is that that'd make it a lot easier to\n> > write low-level protocol tests. Right now we either don't have them,\n> > or they have to go through libpq, which quite sensibly doesn't expose\n> > all the details to the outside. IMO it'd be really nice if we had a\n> > way to to write low level protocol tests, especially for testing\n> > things like the v2 protocol.\n> \n> That sounds worth doing as a separate thing\n\nWhat would be the point of doing this separately? If we have a small\ndriver for writing protocol tests, why would we want something\nseparate for the tap tests?\n\n\n> and an obvious application of it would be something like a febesmith,\n> which would get us a better idea as to whether we've implemented the\n> protocol we say we have.\n\nHm, not convinced that's useful. And fairly sure that's pretty\nindependent of what I was writing about.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 14:14:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "\nOn 7/27/19 3:15 PM, Andres Freund wrote:\n> Hi,\n>\n> The discussion in [1]\n> again reminded me how much I dislike that we currently issue database\n> queries in tap tests by forking psql and writing/reading from it's\n> stdin/stdout.\n>\n> That's quite cumbersome to write, and adds a good number of additional\n> failure scenarios to worry about. For a lot of tasks you then have to\n> reparse psql's output to separate out columns etc.\n>\n> I think we have a better approach for [1] than using tap tests, but I\n> think it's a more general issue preventing us from having more extensive\n> test coverage, and especially from having more robust coverage.\n>\n> I think we seriously ought to consider depending on a proper perl\n> database driver. I can see various approaches:\n>\n> 1) Just depend on DBD::Pg being installed. It's fairly common, after\n> all. It'd be somewhat annoying that we'd often end up using a\n> different version of libpq than what we're testing against. But in\n> most cases that'd not be particularly problematic.\n>\n> 2) Depend on DBD::PgPP, a pure perl driver. It'd likely often be more\n> lightweight to install. On the other hand, it's basically\n> unmaintained (last commit 2010), and is a lot less commonly already\n> installed than DBD::Pg. Also depends on DBI, which isn't part of a\n> core perl IIRC.\n>\n> 3) Vendor a test-only copy of one of those libraries, and build them as\n> part of the test setup. That'd cut down on the number of\n> dependencies.\n>\n> But probably not that much, because we'd still depend on DBI, which\n> IIRC isn't part of core perl.\n>\n> DBI by default does include C code, and is not that small. There's\n> however a pure perl version maintained as part of DBI, and it looks\n> like it might be reasonably enough sized. 
If we vendored that, and\n> DBD::PgPP, we'd not have any additional dependencies.\n>\n> I suspect that the licensing (dual GPL *version 1* / Artistic\n> License, also V1), makes this too complicated, however.\n>\n> 4) We develop a fairly minimal pure perl database driver, that doesn't\n> depend on DBI. Include it somewhere as part of the test code, instead\n> of src/interfaces, so it's clearer that it's not ment as an actual\n> official driver.\n>\n> The obvious disadvantage is that this would be a noticable amount of\n> code. But it's also not that crazily much.\n>\n> One big advantage I can see is that that'd make it a lot easier to\n> write low-level protocol tests. Right now we either don't have them,\n> or they have to go through libpq, which quite sensibly doesn't expose\n> all the details to the outside. IMO it'd be really nice if we had a\n> way to to write low level protocol tests, especially for testing\n> things like the v2 protocol.\n>\n> I'm not volunteering to do 4), my perl skills aren't great (if the test\n> infra were python, otoh... I have the skeleton of a pure perl driver\n> that I used for testing somewhere). But I am leaning towards that being\n> the most reasonable choice.\n>\n\n\n+1 for #4.\n\n\nI'll be happy to participate in any effort.\n\n\nAbout 22 years ago I wrote a pure perl implementation of a wire protocol\nof roughly similar complexity (RFC1459). I got it basically working in\njust a few days, so this sort of thing is very doable. Let's see your\nskeleton and maybe it's a good starting point.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 27 Jul 2019 17:48:39 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 17:48:39 -0400, Andrew Dunstan wrote:\n> On 7/27/19 3:15 PM, Andres Freund wrote:\n> > I'm not volunteering to do 4), my perl skills aren't great (if the test\n> > infra were python, otoh... I have the skeleton of a pure perl driver\n> > that I used for testing somewhere). But I am leaning towards that being\n> > the most reasonable choice.\n> \n> +1 for #4.\n> \n> \n> I'll be happy to participate in any effort.\n> \n> \n> About 22 years ago I wrote a pure perl implementation of a wire protocol\n> of roughly similar complexity (RFC1459). I got it basically working in\n> just a few days, so this sort of thing is very doable. Let's see your\n> skeleton and maybe it's a good starting point.\n\nThe skeleton's in python though, not sure how helpful that is?\n\n/me once more regrets that perl, not python, has been chosen as the\nscripting language of choice for postgres...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 15:37:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "\nOn 7/27/19 6:37 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-07-27 17:48:39 -0400, Andrew Dunstan wrote:\n>> On 7/27/19 3:15 PM, Andres Freund wrote:\n>>> I'm not volunteering to do 4), my perl skills aren't great (if the test\n>>> infra were python, otoh... I have the skeleton of a pure perl driver\n>>> that I used for testing somewhere). But I am leaning towards that being\n>>> the most reasonable choice.\n>> +1 for #4.\n>>\n>>\n>> I'll be happy to participate in any effort.\n>>\n>>\n>> About 22 years ago I wrote a pure perl implementation of a wire protocol\n>> of roughly similar complexity (RFC1459). I got it basically working in\n>> just a few days, so this sort of thing is very doable. Let's see your\n>> skeleton and maybe it's a good starting point.\n> The skeleton's in python though, not sure how helpful that is?\n\n\nMaybe I don't write much python but I can read it without too much\ndifficulty :-)\n\n\nBut you did say your skeleton was pure perl ... slip of the fingers?\n\n\n>\n> /me once more regrets that perl, not python, has been chosen as the\n> scripting language of choice for postgres...\n>\n\n\nThat ship has sailed long ago.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 27 Jul 2019 18:57:58 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-27 18:57:58 -0400, Andrew Dunstan wrote:\n> Maybe I don't write much python but I can read it without too much\n> difficulty :-)\n> \n> \n> But you did say your skeleton was pure perl ... slip of the fingers?\n\nOoops, yea.\n\n\n> > /me once more regrets that perl, not python, has been chosen as the\n> > scripting language of choice for postgres...\n\n> That ship has sailed long ago.\n\nIndeed. Not proposing to change that. Just sad about it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Jul 2019 16:01:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "On Sun, 28 Jul 2019 at 03:15, Andres Freund <andres@anarazel.de> wrote:\n\n> 1) Just depend on DBD::Pg being installed. It's fairly common, after\n> all. It'd be somewhat annoying that we'd often end up using a\n> different version of libpq than what we're testing against. But in\n> most cases that'd not be particularly problematic.\n>\n\nI advocated for this in the past, and still think it's the best option.\n\n>\n> 4) We develop a fairly minimal pure perl database driver, that doesn't\n> depend on DBI. Include it somewhere as part of the test code, instead\n> of src/interfaces, so it's clearer that it's not ment as an actual\n> official driver.\n>\n\nWhy not write a new language interpreter while we're at it, and maybe a\ncompiler and runtime? :p\n\nThe community IMO wastes *so* much time on not-invented-here make-work and\non jumping through hoops to avoid depending on anything newer than the late\n'90s. I'd rather not go deeper down that path. If someone on SunOS or SCO\nOpenServer or whatever doesn't want to install DBD::Pg, have the TAP tests\njust skip the tests on that platform and make it the platform owner's\nproblem.\n\n\n> Craig, IIRC you'd thought about this before too?\n>\n\nRight. But IIRC Tom vetoed it on grounds of not wanting to expect buildfarm\noperators to install it, something like that.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Sun, 28 Jul 2019 at 03:15, Andres Freund <andres@anarazel.de> wrote:\n1) Just depend on DBD::Pg being installed. It's fairly common, after\n all. It'd be somewhat annoying that we'd often end up using a\n different version of libpq than what we're testing against. But in\n most cases that'd not be particularly problematic.I advocated for this in the past, and still think it's the best option.\n4) We develop a fairly minimal pure perl database driver, that doesn't\n depend on DBI. 
Include it somewhere as part of the test code, instead\n of src/interfaces, so it's clearer that it's not ment as an actual\n official driver.Why not write a new language interpreter while we're at it, and maybe a compiler and runtime? :pThe community IMO wastes *so* much time on not-invented-here make-work and on jumping through hoops to avoid depending on anything newer than the late '90s. I'd rather not go deeper down that path. If someone on SunOS or SCO OpenServer or whatever doesn't want to install DBD::Pg, have the TAP tests just skip the tests on that platform and make it the platform owner's problem. \nCraig, IIRC you'd thought about this before too?Right. But IIRC Tom vetoed it on grounds of not wanting to expect buildfarm operators to install it, something like that. -- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 30 Jul 2019 14:13:40 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Craig Ringer <craig@2ndquadrant.com> writes:\n> On Sun, 28 Jul 2019 at 03:15, Andres Freund <andres@anarazel.de> wrote:\n>> 1) Just depend on DBD::Pg being installed. It's fairly common, after\n>> all. It'd be somewhat annoying that we'd often end up using a\n>> different version of libpq than what we're testing against. But in\n>> most cases that'd not be particularly problematic.\n\n> I advocated for this in the past, and still think it's the best option.\n\nI think the not-same-libpq issue would be a much bigger problem than either\nof you are accounting for. Some off-the-top-of-the-head reasons:\n\n* Since we'd have to presume a possibly-very-back-rev libpq, we could\nnever add tests related to any recent libpq bug fixes or new features.\n\n* The system libpq might have a different idea of default port\nand/or socket directory than the test build. Yeah, we could work\naround that by forcing the choice all the time, but it would be a\nconstant hazard.\n\n* We don't have control over what else gets brought in beside libpq.\nDepending on the whims of the platform packagers, there might need\nto be other parts of the platform's default postgres installation,\neg psql, sitting in one's PATH. Again, this wouldn't be too much\nof a hazard for pre-debugged test scripts --- but it would be a huge\nhazard for developers, who do lots of things manually and would always be\nat risk of invoking the wrong psql. I learned long ago not to have any\npart of a platform's postgres packages in place on my development systems,\nand I will fight hard against any test procedure change that requires me\nto do differently.\n\nNow, none of these things are really a problem with DBD/DBI as such\n--- rather, they are reasons not to depend on a pre-packaged build\nof DBD::Pg that depends on a pre-packaged build of libpq.so.\nI haven't looked at the size, or the license, of DBD::Pg ... 
but\ncould it be sane to include our own modified copy in the tree?\n\n(Forking DBD::Pg would also let us add custom testing features\nto it ...)\n\n> The community IMO wastes *so* much time on not-invented-here make-work and\n> on jumping through hoops to avoid depending on anything newer than the late\n> '90s.\n\nThat's an unnecessary, and false, ad-hominem attack.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 09:40:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-30 09:40:54 -0400, Tom Lane wrote:\n> Craig Ringer <craig@2ndquadrant.com> writes:\n> > On Sun, 28 Jul 2019 at 03:15, Andres Freund <andres@anarazel.de> wrote:\n> >> 1) Just depend on DBD::Pg being installed. It's fairly common, after\n> >> all. It'd be somewhat annoying that we'd often end up using a\n> >> different version of libpq than what we're testing against. But in\n> >> most cases that'd not be particularly problematic.\n> \n> > I advocated for this in the past, and still think it's the best option.\n> \n> I think the not-same-libpq issue would be a much bigger problem than either\n> of you are accounting for. Some off-the-top-of-the-head reasons:\n\nI came to the same conclusion?\n\n\n> Now, none of these things are really a problem with DBD/DBI as such\n> --- rather, they are reasons not to depend on a pre-packaged build\n> of DBD::Pg that depends on a pre-packaged build of libpq.so.\n> I haven't looked at the size, or the license, of DBD::Pg ... but\n> could it be sane to include our own modified copy in the tree?\n\nI had that as an alternative too. I think the license (Artistic v1/GPL\nv1) probably makes that non-feasible. The pure-perl version of DBI\nprobably would otherwise be realistic.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 07:42:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-30 09:40:54 -0400, Tom Lane wrote:\n>> Now, none of these things are really a problem with DBD/DBI as such\n>> --- rather, they are reasons not to depend on a pre-packaged build\n>> of DBD::Pg that depends on a pre-packaged build of libpq.so.\n>> I haven't looked at the size, or the license, of DBD::Pg ... but\n>> could it be sane to include our own modified copy in the tree?\n\n> I had that as an alternative too. I think the license (Artistic v1/GPL\n> v1) probably makes that non-feasible. The pure-perl version of DBI\n> probably would otherwise be realistic.\n\nOK, so just lifting DBD::Pg in toto is out for license reasons.\nHowever, maybe we could consider writing a new DBD driver from\nscratch (while using a platform-provided DBI layer) rather than\ndoing everything from scratch. I'm not sure how much actual\nfunctionality is in the DBI layer, so maybe that approach\nwouldn't buy much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 10:48:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "On 2019-Jul-30, Tom Lane wrote:\n\n> OK, so just lifting DBD::Pg in toto is out for license reasons.\n> However, maybe we could consider writing a new DBD driver from\n> scratch (while using a platform-provided DBI layer) rather than\n> doing everything from scratch. I'm not sure how much actual\n> functionality is in the DBI layer, so maybe that approach\n> wouldn't buy much.\n\nThen again, maybe we don't *need* all the functionality that DBI offers.\nDBI is enormous, has a lot of extensibility, cross-database\ncompatibility ... and, well, just the fact that it's a layered design\n(requiring a DBD on top of it before it even works) makes it even more\ncomplicated.\n\nI think a pure-perl standalone driver might be a lot simpler than\nmaintanining our own DBD ... and we don't have to convince animal\nmaintainers to install the right version of DBI in the first place.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Jul 2019 14:39:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "On Tue, 30 Jul 2019 at 21:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Craig Ringer <craig@2ndquadrant.com> writes:\n> > On Sun, 28 Jul 2019 at 03:15, Andres Freund <andres@anarazel.de> wrote:\n> >> 1) Just depend on DBD::Pg being installed. It's fairly common, after\n> >> all. It'd be somewhat annoying that we'd often end up using a\n> >> different version of libpq than what we're testing against. But in\n> >> most cases that'd not be particularly problematic.\n>\n> > I advocated for this in the past, and still think it's the best option.\n>\n> I think the not-same-libpq issue would be a much bigger problem than either\n> of you are accounting for.\n\n\nOK. So rather than building our own $everything from scratch, lets look at\nsolving that. I'd argue that the PostgreSQL community has enough work to do\nmaintaining the drivers that are fairly directly community-owned like\nPgJDBC and psqlODBC, let alone writing new ones for less-in-demand\nlanguages just for test use.\n\nPerl modules support local installations. Would it be reasonable to require\nDBI to be present, then rebuild DBD::Pg against our libpq during our own\ntest infra compilation, and run it with a suitable LD_LIBRARY_PATH or using\nrpath? That'd actually get us a side-benefit of some test coverage of\nDBD::Pg and libpq.\n\nNote that I'm not saying this is the only or right way. Your concerns about\nusing a system libpq are entirely valid, as are your concerns about trying\nto link to our libpq using a DBD::Pg originally built against the system\nlibpq. I've just been dealing with issues similar to those today and I know\nhow much of a hassle it can be. However, I don't think \"it's hard\" is a\ngood reason to write a whole new driver and potentially do a\nNetscape/Mozilla.\n\n\n> * Since we'd have to presume a possibly-very-back-rev libpq, we could\n> never add tests related to any recent libpq bug fixes or new features.\n>\n\nWe can feature-test. 
We're not dealing with pg_regress where it's\nessentially impossible to do anything conditionally without blocks of\nplpgsql. We're dealing with Perl and Test::More where we can make tests\nconditional, we can probe the runtime environment to decide whether we can\nrun a given test, etc.\n\nAlso, lets compare to the status quo. Will it be worse than exec()ing psql\nwith a string argument and trying to do sane database access via stdio?\nDefinitely. Not.\n\n\n> * The system libpq might have a different idea of default port\n> and/or socket directory than the test build. Yeah, we could work\n> around that by forcing the choice all the time, but it would be a\n> constant hazard.\n>\n\nI'd call that a minor irritation personally, as all we have to do is set up\nthe environment and we're done.\n\n\n> * We don't have control over what else gets brought in beside libpq.\n>\n\nThat's a very significant point that we must pay attention to.\n\nAnyone who's worked with nss will probably know the pain of surprise\nlibrary conflicts arising from nss plugins being loaded unexpectedly into\nthe program. I still twitch thinking about libnss-ldap.\n\nIt'd make a lot of sense to capture \"ldd\" output and/or do a minimal run\nwith LD_DEBUG set during buildfarm test runs to help identify any\ninteresting surprises.\n\n\n> Depending on the whims of the platform packagers, there might need\n> to be other parts of the platform's default postgres installation,\n> eg psql, sitting in one's PATH. Again, this wouldn't be too much\n> of a hazard for pre-debugged test scripts --- but it would be a huge\n> hazard for developers, who do lots of things manually and would always be\n> at risk of invoking the wrong psql. I learned long ago not to have any\n> part of a platform's postgres packages in place on my development systems,\n> and I will fight hard against any test procedure change that requires me\n> to do differently.\n>\n\nTBH I think that's a habit/procedure issue. 
That doesn't make it invalid,\nbut it might not be as big a concern for everyone as for you. I have a\nshell alias that sets up my environment and another that prints the current\nsetup. I never run into issues like this, despite often multi-tasking while\ntired. Not only that, I find platform postgres extremely useful to have on\nmy systems to use for simple cross-version tests.\n\nIf we adopt something like I suggested above where we (re)build DBD::Pg\nagainst our installed pg and libpq, that wouldn't be much of an issue\nanyway. We'd want a nice way to set up the runtime environment in a shell\nfor manual testing of course, like a generated .sh we can source. But\nfrankly I'd find that a useful thing to have in postgres anyway. I can't\ncount the number of times I've wished there was an easy way to pause\npg_regress and launch a psql session against its temp instance, or do the\nsame with a TAP test.\n\nNow, none of these things are really a problem with DBD/DBI as such\n> --- rather, they are reasons not to depend on a pre-packaged build\n> of DBD::Pg that depends on a pre-packaged build of libpq.so.\n>\n\nYeah. While I don't agree with your conclusion in terms of how you weigh\nthe priorities, I agree that all your points are valid and make sense. I'd\npersonally go ahead anyway, because the status quo is pretty bad and an\nincremental improvement would still be significant. But I understand your\nreluctance a bit better.\n\n\n> I haven't looked at the size, or the license, of DBD::Pg ... but\n> could it be sane to include our own modified copy in the tree?\n>\n\nOr ... clone it on demand? I won't suggest git submodules, because they're\nawkward bodgy hacks, but we can reasonably enough just fetch the sources\nwhen we need them so long as they aren't a hard dependency on building\npostgres. 
Which they wouldn't need to be; at worst we'd have to skip the\nTAP tests if no DBD::Pg was found.\n\nWe could also permit falling back to system DBD::Pg with a suitable\nvisible message in logs to ensure nobody gets confused.\n\nI'd very strongly prefer not to have a copy in-tree polluting the postgres\nhistory and generally creating a mess. We don't even have \"blessed\" drivers\nthat live on community infra like psqlODBC or PgJDBC in-tree.\n\n\n> (Forking DBD::Pg would also let us add custom testing features\n> to it ...)\n>\n\nLets not. Unless utterly unavoidable. There's enough to do already.\n\n\n> > The community IMO wastes *so* much time on not-invented-here make-work\n> and\n> > on jumping through hoops to avoid depending on anything newer than the\n> late\n> > '90s.\n>\n> That's an unnecessary, and false, ad-hominem attack.\n>\n\nIt's none of the above. It's an honest opinion, not an attack, and it's not\ndirected at you or any one other person.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 31 Jul 2019 09:32:10 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 09:32:10 +0800, Craig Ringer wrote:\n> OK. So rather than building our own $everything from scratch, lets look at\n> solving that.\n\nIDK, a minimal driver that just does what we need it to do is a few\nhundred lines, not more. And there's plenty of stuff that we simply\nwon't be able to test with any driver that's not purposefully written\nfor testing. There's e.g. basically no way with any of the drivers to\ntest intentionally bogus sequences of protocol messages, yet that's\nsomething we really ought to test.\n\nI've written custom hacks^Wprograms to tests things like that a few\ntimes. That really shouldn't be necessary.\n\n\n> Perl modules support local installations. Would it be reasonable to require\n> DBI to be present, then rebuild DBD::Pg against our libpq during our own\n> test infra compilation, and run it with a suitable LD_LIBRARY_PATH or using\n> rpath? That'd actually get us a side-benefit of some test coverage of\n> DBD::Pg and libpq.\n\nYou're intending to download DBD::Pg? Or how would you get around the\nlicensing issues? Downloading it will have some issues too: For one at least I\noften want to be able to run tests when offline; there's security\nconcerns too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 19:20:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 07:20:53PM -0700, Andres Freund wrote:\n> IDK, a minimal driver that just does what we need it to do is a few\n> hundred lines, not more. And there's plenty of stuff that we simply\n> won't be able to test with any driver that's not purposefully written\n> for testing. There's e.g. basically no way with any of the drivers to\n> test intentionally bogus sequences of protocol messages, yet that's\n> something we really ought to test.\n\nYes, I don't know if a hundred of lines measure that correctly, but\nreally we should avoid bloating the stuff we rely on like by fetching\nan independent driver if we finish by not using most of its features.\n300~500 lines would be fine for me, 10k definitely not.\n\n> I've written custom hacks^Wprograms to tests things like that a few\n> times. That really shouldn't be necessary.\n\nMine are usually plain hacks :), and usually on top of libpq but I\nhave done some python-ish script to send bogus and hardcoded protocol\nmessages. If we can automate a bit this area, that could be useful.\n\n>> Perl modules support local installations. Would it be reasonable to require\n>> DBI to be present, then rebuild DBD::Pg against our libpq during our own\n>> test infra compilation, and run it with a suitable LD_LIBRARY_PATH or using\n>> rpath? That'd actually get us a side-benefit of some test coverage of\n>> DBD::Pg and libpq.\n> \n> You're intending to download DBD::Pg? Or how would you get around the\n> licensing issues? Downloading it will have some issues too: For one at least I\n> often want to be able to run tests when offline; there's security\n> concerns too.\n\nAs Craig has mentioned CPAN can be configured to install all the\nlibraries for a local user, so there is no need to be online if a copy\nhas been already downloaded.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 13:20:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
},
{
"msg_contents": "On Wed, 31 Jul 2019 at 10:20, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-07-31 09:32:10 +0800, Craig Ringer wrote:\n> > OK. So rather than building our own $everything from scratch, lets look\n> at\n> > solving that.\n>\n> IDK, a minimal driver that just does what we need it to do is a few\n> hundred lines, not more. And there's plenty of stuff that we simply\n> won't be able to test with any driver that's not purposefully written\n> for testing. There's e.g. basically no way with any of the drivers to\n> test intentionally bogus sequences of protocol messages, yet that's\n> something we really ought to test.\n>\n> I've written custom hacks^Wprograms to tests things like that a few\n> times. That really shouldn't be necessary.\n>\n>\nThat's a good point. I've had to write simple protocol hacks myself, and\nhaving a reusable base tool for it would indeed be valuable.\n\nI'm just worried it'll snowball into Yet Another Driver We Don't Need, or\nland up as a half-finished poorly-maintained burden nobody wants to touch.\nThough the general fondness for and familiarity with Perl in the core\ncircle of Pg contributors and committers would probably help here, since\nit'd inevitably land up being written in Perl...\n\nI'm interested in evolution of the protocol and ways to enhance it, and I\ncan see something we can actively use for protocol testing being valuable.\nIf done right, the protocol implementation could probably pass-through\ninspect messages from regular libpq etc as well as serving as either a\nclient or even as a dumb server simulator for pregenerated responses for\nclient testing.\n\nWe certainly couldn't do anything like that with existing tools and by\nreusing existing drivers.\n\nI wonder if there's any way to make writing and maintaining it less painful\nthough. 
Having to make everything work with Perl 5.8.8 and no non-core\nmodules leads to a whole pile of bespoke code and reinvention.\n\n\n> > Perl modules support local installations. Would it be reasonable to\n> require\n> > DBI to be present, then rebuild DBD::Pg against our libpq during our own\n> > test infra compilation, and run it with a suitable LD_LIBRARY_PATH or\n> using\n> > rpath? That'd actually get us a side-benefit of some test coverage of\n> > DBD::Pg and libpq.\n>\n> You're intending to download DBD::Pg?\n\n\nThat'd be a reasonable option IMO, yes. Either via git with https, or a\ntarball where we check a signature. So long as it can use any pre-existing\nlocal copy without needing to hit the Internet, and the build can also\nsucceed without the presence of it at all, I think that'd be reasonable\nenough.\n\nBut I'm probably getting contaminated by excessive exposure to Apache Maven\nwhen I have to work with Java. I'm rapidly getting used to downloading\nthings being a part of builds. Personally I'd expect that unless\nspecifically testing new libpq functionality etc, most people would just be\nusing their system DBD::Pg... and I consider linking the system DBD::Pg\nagainst our new-compiled libpq more feature than bug, as if we retain the\nsame soname we should be doing a reasonable job of not breaking upward\ncompatibility anyway. (It'd potentially be an issue if you have a very new\nsystem libpq and are running tests on an old postgres, I guess).\n\n\n> Or how would you get around the licensing issues? Downloading it will have\n> some issues too: For one at least I\n> often want to be able to run tests when offline; there's security\n> concerns too.\n>\n\nRoughly what I was thinking of was:\n\nFor Pg source builds (git, or dist tarball), we'd generally curl a tarball\nof DBD::Pg from a HTTPs URL specified in Makefile variables (with =? 
so it\ncan be overridden to point to an alt location, different version, local\npath, etc) and unpack it, then build it. The makefile can also store a\nsigning key fingerprint so we can download the sig file and check the sig\nis by the expected signer if the user has imported the signing key for\nDBD::Pg.\n\nWe could test if the target directory already exists and is populated and\nre-use it, so people can git-clone DBD::Pg if they prefer.\n\nAnd we'd allow users to specify --with-system-dbd-pg at configure time, or\n--without-dbd-pg .\n\nThe Pg perl libraries for our TAP test harness/wrappers would offer a\nsimple function to skip a test if DBD::Pg is missing, a simple function to\nskip a test if the loaded DBD::Pg lacks $some_feature_or_attribute, etc.\n\n\n\n\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 31 Jul 2019 13:08:12 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tap tests driving the database via psql"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that after\n\ncommit 8255c7a5eeba8f1a38b7a431c04909bde4f5e67d\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2019-05-22 13:04:48 -0400\n\n Phase 2 pgindent run for v12.\n \n Switch to 2.1 version of pg_bsd_indent. This formats\n multiline function declarations \"correctly\", that is with\n additional lines of parameter declarations indented to match\n where the first line's left parenthesis is.\n \n Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com\n\na few prototypes look odd. It appears to be cases where previously the\nodd indentation was put to some use, by indenting parameters less:\n\nextern void DefineCustomBoolVariable(\n const char *name,\n const char *short_desc,\n const char *long_desc,\n bool *valueAddr,\n bool bootValue,\n GucContext context,\n int flags,\n GucBoolCheckHook check_hook,\n GucBoolAssignHook assign_hook,\n GucShowHook show_hook);\n\nbut now that looks odd:\n\nextern void DefineCustomBoolVariable(\n const char *name,\n const char *short_desc,\n const char *long_desc,\n bool *valueAddr,\n bool bootValue,\n GucContext context,\n int flags,\n GucBoolCheckHook check_hook,\n GucBoolAssignHook assign_hook,\n GucShowHook show_hook);\n\nUnless somebody protests I'm going to remove the now pretty useless\nlooking newline in the cases I can find. I used\nack --type cc --type cpp '^[a-zA-Z_].*\\(\\n'\nto find the ones I did. Not sure that catches everything.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 27 Jul 2019 18:37:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "minor fixes after pgindent prototype fixes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> a few prototypes look odd. It appears to be cases where previously the\n> odd indentation was put to some use, by indenting parameters less:\n> ...\n> but now that looks odd:\n> extern void DefineCustomBoolVariable(\n> const char *name,\n> const char *short_desc,\n\n> Unless somebody protests I'm going to remove the now pretty useless\n> looking newline in the cases I can find.\n\n+1. I think Alvaro was muttering something about doing this,\nbut you beat him to it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jul 2019 00:09:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: minor fixes after pgindent prototype fixes"
},
{
"msg_contents": "On 2019-Jul-28, Tom Lane wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > a few prototypes look odd. It appears to be cases where previously the\n> > odd indentation was put to some use, by indenting parameters less:\n> > ...\n> > but now that looks odd:\n> > extern void DefineCustomBoolVariable(\n> > const char *name,\n> > const char *short_desc,\n> \n> > Unless somebody protests I'm going to remove the now pretty useless\n> > looking newline in the cases I can find.\n> \n> +1. I think Alvaro was muttering something about doing this,\n> but you beat him to it.\n\nNo, this is a different issue ... I was talking about function *calls*\nending in parens, and it changed because of the previous round of\npgindent changes, not the last one. The number of affected places was a\nlot larger than the patch Andres posted.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Jul 2019 14:47:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: minor fixes after pgindent prototype fixes"
},
{
"msg_contents": "On 2019-07-28 00:09:51 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > a few prototypes look odd. It appears to be cases where previously the\n> > odd indentation was put to some use, by indenting parameters less:\n> > ...\n> > but now that looks odd:\n> > extern void DefineCustomBoolVariable(\n> > const char *name,\n> > const char *short_desc,\n> \n> > Unless somebody protests I'm going to remove the now pretty useless\n> > looking newline in the cases I can find.\n> \n> +1. I think Alvaro was muttering something about doing this,\n> but you beat him to it.\n\nAnd pushed...\n\n- Andres\n\n\n",
"msg_date": "Wed, 31 Jul 2019 00:17:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: minor fixes after pgindent prototype fixes"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the next set of typos and inconsistencies in the\ntree:\n8.1. LABORT -> LIKE_ABORT\n8.2. LagTrackerWriter -> LagTrackerWrite\n8.3. lag_with_offset_and_default, * ->\nwindow_lag_with_offset_and_default, window_* (in windowfuncs.c)\n8.4. language-name -> language_name\n8.5. lastOverflowedXID -> lastOverflowedXid\n8.6. last_processed -> last_processing\n8.7. last-query -> last_query\n8.8. lastsysoid -> datlastsysoid\n8.9. lastUsedPage -> lastUsedPages\n8.10. lbv -> lbsv\n8.11. leafSegment -> leafSegmentInfo\n8.12. LibraryName/SymbolName -> remove (orphaned after f9143d10)\n8.13. licence -> license\n8.14. LINE_ALLOC -> remove (orphaned since 12ee6ec7)\n8.15. local_ip_addr, local_port_addr -> remove and update a comment\n(orphaned since b4cea00a)\n8.16. local_passwd.c -> update a comment (see\nhttp://cvsweb.netbsd.org/bsdweb.cgi/src/usr.bin/passwd/local_passwd.c.diff?r1=1.19&r2=1.20\n)\n8.17. localTransactionid -> localTransactionId\n8.18. LocalTransactionID -> localTransactionId\n8.19. LOCKDEF_H_ -> LOCKDEFS_H_\n8.20. LOCK_H -> LOCK_H_\n8.21. lockid -> lock\n8.22. LOGICAL_PROTO_VERSION_NUM, PGLOGICAL_PROTO_MIN_VERSION_NUM ->\nLOGICALREP_PROTO_VERSION_NUM, LOGICALREP_PROTO_MIN_VERSION_NUM\n8.23. LOGICALREP_PROTO_H -> LOGICAL_PROTO_H\n8.24. LogicalRewriteHeapCheckpoint -> CheckPointLogicalRewriteHeap\n8.25. log_snap_interval_ms -> LOG_SNAPSHOT_INTERVAL_MS\n8.26. from LVT -> form LVT\n8.27. lwlockMode -> lwWaitMode\n8.28. LWLockWait -> LWLockWaitForVar\n8.29. MacroAssert -> AssertMacro\n8.30. maintainer-check -> remove (orphaned after 5dd41f35)\n8.31. manip.c -> remove (not present since PG95-1_01)\n8.32. markleftchild -> markfollowright\n8.33. mask_page_lsn -> mask_page_lsn_and_checksum\n8.34. mdfd_seg_fds -> md_seg_fds\n8.35. md_update -> px_md_update\n8.36. meg -> 1 MB\n8.37. MIGRATOR_API_VERSION -> remove (orphaned after 6f56b41a)\n8.38. min_apply_delay -> recovery_min_apply_delay\n8.39. 
min_multi -> cutoff_multi\n8.40. minwg -> mingw\n8.41. missingok -> missing_ok\n8.42. mksafefunc/mkunsafefunc -> mkfunc (orphaned after 1f474d29)\n8.43. MSG000001.bin -> MSG00001.bin\n8.44. MSPACE -> MSSPACE\n8.45. mtransfunc -> mtransfn\n8.46. MULTI_QUERY -> PORTAL_MULTI_QUERY\n8.47. MultixactId -> MultiXactId\n8.48. MVDistinctItem -> MVNDistinctItem\n\nIn passing, I found a legacy script, FAQ2txt, that should be deleted as\nunusable.\n\nBest regards,\nAlexander",
"msg_date": "Sun, 28 Jul 2019 07:44:44 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typos and inconsistencies for HEAD (take 8)"
},
{
"msg_contents": "On Sun, Jul 28, 2019 at 07:44:44AM +0300, Alexander Lakhin wrote:\n> 8.3. lag_with_offset_and_default, * ->\n> window_lag_with_offset_and_default, window_* (in windowfuncs.c)\n\nThe intention here is to refer to the SQL-visible names.\n\n> In passing, I found a legacy script, FAQ2txt, that should be deleted as\n> unusable.\n\nRight. This got forgotten with the cleanup from f3f45c87 and\nbf4497cc.\n\nApplied. Thanks, as usual.\n--\nMichael",
"msg_date": "Mon, 29 Jul 2019 12:39:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 8)"
}
] |
[
{
"msg_contents": "Hello Andres,\n\nhow do you want to generalize it? Are you thinking about a view solely for the display of the memory usage of different objects? Like functions or views (that also have a plan associated with it, when I think about it)? While being interesting I still believe monitoring the mem usage of prepared statements is a bit more important than that of other objects because of how they change memory consumption of the server without using any DDL or configuration options and I am not aware of other objects with the same properties, or are there some? And for the other volatile objects like tables and indexes and their contents PostgreSQL already has it's information functions. \n\nRegardless of that here is the patch for now. I didn't want to fiddle to much with MemoryContexts yet, so it still doesn't recurse in child contexts, but I will change that also when I try to build a more compact MemoryContext implementation and see how that works out.\n\nThanks for pointing out the relevant information in the statement column of the view.\n\nRegards,\nDaniel Migowski\n\n-----Ursprüngliche Nachricht-----\nVon: Andres Freund <andres@anarazel.de> \nGesendet: Samstag, 27. Juli 2019 21:12\nAn: Daniel Migowski <dmigowski@ikoffice.de>\nCc: pgsql-hackers@lists.postgresql.org\nBetreff: Re: Adding column \"mem_usage\" to view pg_prepared_statements\n\nHi,\n\nOn 2019-07-27 18:29:23 +0000, Daniel Migowski wrote:\n> I just implemented a small change that adds another column \"mem_usage\"\n> to the system view \"pg_prepared_statements\". It returns the memory \n> usage total of CachedPlanSource.context, \n> CachedPlanSource.query_content and if available \n> CachedPlanSource.gplan.context.\n\nFWIW, it's generally easier to comment if you actually provide the patch, even if it's just POC, as that gives a better handle on how much additional complexity it introduces.\n\nI think this could be a useful feature. 
I'm not so sure we want it tied to just cached statements however - perhaps we ought to generalize it a bit more.\n\n\nRegarding the prepared statements specific considerations: I don't think we ought to explicitly reference CachedPlanSource.query_content, and CachedPlanSource.gplan.context.\n\nIn the case of actual prepared statements (rather than oneshot plans) CachedPlanSource.query_context IIRC should live under CachedPlanSource.context. I think there's no relevant cases where gplan.context isn't a child of CachedPlanSource.context either, but not quite sure.\n\nThen we ought to just include child contexts in the memory computation (cf. logic in MemoryContextStatsInternal(), although you obviously wouldn't need all that). That way, if the cached statements has child contexts, we're going to stay accurate.\n\n\n> Also I wonder why the \"prepare test as\" is part of the statement \n> column. I isn't even part of the real statement that is prepared as \n> far as I would assume. Would prefer to just have the \"select *...\" in \n> that column.\n\nIt's the statement that was executed. Note that you'll not see that in the case of protocol level prepared statements. It will sometimes include relevant information, e.g. about the types specified as part of the prepare (as in PREPARE foo(int, float, ...) AS ...).\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 28 Jul 2019 06:20:40 +0000",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Hello,\n\nWill my patch be considered for 12.0? The calculation of the mem_usage value might be improved later on but because the system catalog is changed I would love to add it before 12.0 becomes stable. \n\nRegards,\nDaniel Migowski\n\n-----Ursprüngliche Nachricht-----\nVon: Daniel Migowski <dmigowski@ikoffice.de> \nGesendet: Sonntag, 28. Juli 2019 08:21\nAn: Andres Freund <andres@anarazel.de>\nCc: pgsql-hackers@lists.postgresql.org\nBetreff: Re: Adding column \"mem_usage\" to view pg_prepared_statements\n\nHello Andres,\n\nhow do you want to generalize it? Are you thinking about a view solely for the display of the memory usage of different objects? Like functions or views (that also have a plan associated with it, when I think about it)? While being interesting I still believe monitoring the mem usage of prepared statements is a bit more important than that of other objects because of how they change memory consumption of the server without using any DDL or configuration options and I am not aware of other objects with the same properties, or are there some? And for the other volatile objects like tables and indexes and their contents PostgreSQL already has it's information functions. \n\nRegardless of that here is the patch for now. I didn't want to fiddle to much with MemoryContexts yet, so it still doesn't recurse in child contexts, but I will change that also when I try to build a more compact MemoryContext implementation and see how that works out.\n\nThanks for pointing out the relevant information in the statement column of the view.\n\nRegards,\nDaniel Migowski\n\n-----Ursprüngliche Nachricht-----\nVon: Andres Freund <andres@anarazel.de>\nGesendet: Samstag, 27. 
Juli 2019 21:12\nAn: Daniel Migowski <dmigowski@ikoffice.de>\nCc: pgsql-hackers@lists.postgresql.org\nBetreff: Re: Adding column \"mem_usage\" to view pg_prepared_statements\n\nHi,\n\nOn 2019-07-27 18:29:23 +0000, Daniel Migowski wrote:\n> I just implemented a small change that adds another column \"mem_usage\"\n> to the system view \"pg_prepared_statements\". It returns the memory \n> usage total of CachedPlanSource.context, \n> CachedPlanSource.query_content and if available \n> CachedPlanSource.gplan.context.\n\nFWIW, it's generally easier to comment if you actually provide the patch, even if it's just POC, as that gives a better handle on how much additional complexity it introduces.\n\nI think this could be a useful feature. I'm not so sure we want it tied to just cached statements however - perhaps we ought to generalize it a bit more.\n\n\nRegarding the prepared statements specific considerations: I don't think we ought to explicitly reference CachedPlanSource.query_content, and CachedPlanSource.gplan.context.\n\nIn the case of actual prepared statements (rather than oneshot plans) CachedPlanSource.query_context IIRC should live under CachedPlanSource.context. I think there's no relevant cases where gplan.context isn't a child of CachedPlanSource.context either, but not quite sure.\n\nThen we ought to just include child contexts in the memory computation (cf. logic in MemoryContextStatsInternal(), although you obviously wouldn't need all that). That way, if the cached statements has child contexts, we're going to stay accurate.\n\n\n> Also I wonder why the \"prepare test as\" is part of the statement \n> column. I isn't even part of the real statement that is prepared as \n> far as I would assume. Would prefer to just have the \"select *...\" in \n> that column.\n\nIt's the statement that was executed. Note that you'll not see that in the case of protocol level prepared statements. It will sometimes include relevant information, e.g. 
about the types specified as part of the prepare (as in PREPARE foo(int, float, ...) AS ...).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 22:01:09 +0000",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "AW: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Daniel Migowski <dmigowski@ikoffice.de> writes:\n> Will my patch be considered for 12.0? The calculation of the mem_usage value might be improved later on but because the system catalog is changed I would love to add it before 12.0 becomes stable. \n\nv12 has been feature-frozen for months, and it's pretty hard to paint\nthis as a bug fix, so I'd say the answer is \"no\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:09:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Ok, just have read about the commitfest thing. Is the patch OK for that? Or is there generally no love for a mem_usage column here? If it was, I really would add some memory monitoring in our app regarding this.\n\n-----Ursprüngliche Nachricht-----\nVon: Tom Lane <tgl@sss.pgh.pa.us> \nGesendet: Mittwoch, 31. Juli 2019 00:09\nAn: Daniel Migowski <dmigowski@ikoffice.de>\nCc: Andres Freund <andres@anarazel.de>; pgsql-hackers@lists.postgresql.org\nBetreff: Re: AW: Adding column \"mem_usage\" to view pg_prepared_statements\n\nDaniel Migowski <dmigowski@ikoffice.de> writes:\n> Will my patch be considered for 12.0? The calculation of the mem_usage value might be improved later on but because the system catalog is changed I would love to add it before 12.0 becomes stable. \n\nv12 has been feature-frozen for months, and it's pretty hard to paint this as a bug fix, so I'd say the answer is \"no\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 22:15:09 +0000",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "AW: AW: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 10:01:09PM +0000, Daniel Migowski wrote:\n>Hello,\n>\n>Will my patch be considered for 12.0? The calculation of the mem_usage\n>value might be improved later on but because the system catalog is\n>changed I would love to add it before 12.0 becomes stable.\n>\n\nNope. Code freeze for PG12 data was April 7, 2019. You're a bit late for\nthat, unfortunately. We're in PG13 land now.\n\nFWIW not sure what mail client you're using, but it seems to be breaking\nthe threads for some reason, splitting it into two - see [1].\n\nAlso, please stop top-posting. It makes it way harder to follow the\ndiscussion.\n\n[1] https://www.postgresql.org/message-id/flat/41ED3F5450C90F4D8381BC4D8DF6BBDCF02E10B4%40EXCHANGESERVER.ikoffice.de\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 31 Jul 2019 00:17:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Daniel Migowski <dmigowski@ikoffice.de> writes:\n> Ok, just have read about the commitfest thing. Is the patch OK for that? Or is there generally no love for a mem_usage column here? If it was, I really would add some memory monitoring in our app regarding this.\n\nYou should certainly put it into the next commitfest. We might or\nmight not accept it, but if it's not listed in the CF we might\nnot remember to even review it. (The CF app is really a to-do\nlist for patches ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:29:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Am 31.07.2019 um 00:17 schrieb Tomas Vondra:\n> FWIW not sure what mail client you're using, but it seems to be breaking\n> the threads for some reason, splitting it into two - see [1].\n>\n> Also, please stop top-posting. It makes it way harder to follow the\n> discussion.\n\nWas using Outlook because it's my companies mail client but switching to \nThunderbird now for communication with the list, trying to be a good \ncitizen now.\n\nRegards,\nDaniel Migowski\n\n\n\n",
"msg_date": "Wed, 31 Jul 2019 00:32:46 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "\nAm 31.07.2019 um 00:29 schrieb Tom Lane:\n> Daniel Migowski <dmigowski@ikoffice.de> writes:\n>> Ok, just have read about the commitfest thing. Is the patch OK for that? Or is there generally no love for a mem_usage column here? If it was, I really would add some memory monitoring in our app regarding this.\n> You should certainly put it into the next commitfest. We might or\n> might not accept it, but if it's not listed in the CF we might\n> not remember to even review it. (The CF app is really a to-do\n> list for patches ...)\n\nOK, added it there. Thanks for your patience :).\n\nRegards,\nDaniel Migowski\n\n\n\n",
"msg_date": "Wed, 31 Jul 2019 00:38:50 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 10:01:09PM +0000, Daniel Migowski wrote:\n> Hello,\n> \n> Will my patch be considered for 12.0? The calculation of the\n> mem_usage value might be improved later on but because the system\n> catalog is changed I would love to add it before 12.0 becomes\n> stable. \n\nFeature freeze was April 8, so no.\n\nhttps://www.mail-archive.com/pgsql-hackers@lists.postgresql.org/msg37059.html\n\nYou're in plenty of time for 13, though!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 31 Jul 2019 02:33:55 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "\n\nOn 31.07.2019 1:38, Daniel Migowski wrote:\n>\n> Am 31.07.2019 um 00:29 schrieb Tom Lane:\n>> Daniel Migowski <dmigowski@ikoffice.de> writes:\n>>> Ok, just have read about the commitfest thing. Is the patch OK for \n>>> that? Or is there generally no love for a mem_usage column here? If \n>>> it was, I really would add some memory monitoring in our app \n>>> regarding this.\n>> You should certainly put it into the next commitfest. We might or\n>> might not accept it, but if it's not listed in the CF we might\n>> not remember to even review it. (The CF app is really a to-do\n>> list for patches ...)\n>\n> OK, added it there. Thanks for your patience :).\n>\n> Regards,\n> Daniel Migowski\n>\n\nThe patch is not applied to the most recent source because extra \nparameter was added to CreateTemplateTupleDesc method.\nPlease rebase it - the fix is trivial.\n\nI think that including in pg_prepared_statements information about \nmemory used this statement is very useful.\nCachedPlanMemoryUsage function may be useful not only for this view, but \nfor example it is also need in my autoprepare patch.\n\nI wonder if you consider go further and not only report but control \nmemory used by prepared statements?\nFor example implement some LRU replacement discipline on top of prepared \nstatements cache which can\nevict rarely used prepared statements to avoid memory overflow.\nWe have such patch for PgPro-EE but it limits only number of prepared \nstatement, not taken in account amount of memory used by them.\nI think that memory based limit will be more accurate (although it adds \nmore overhead).\n\nIf you want, I can be reviewer of your patch.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 5 Aug 2019 19:30:08 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-28 06:20:40 +0000, Daniel Migowski wrote:\n> how do you want to generalize it? Are you thinking about a view solely\n> for the display of the memory usage of different objects?\n\nI'm not quite sure. I'm just not sure that adding separate\ninfrastructure for various objects is a sutainable approach. We'd likely\nwant to have this for prepared statements, for cursors, for the current\nstatement, for various caches, ...\n\nI think an approach would be to add an 'owning_object' field to memory\ncontexts, which has to point to a Node type if set. A table returning reporting\nfunction could recursively walk through the memory contexts, starting at\nTopMemoryContext. Whenever it encounters a context with owning_object\nset, it prints a string version of nodeTag(owning_object). For some node\ntypes it knows about (e.g. PreparedStatement, Portal, perhaps some of\nthe caches), it prints additional metadata specific to the type (so for\nprepared statement it'd be something like 'prepared statement', '[name\nof prepared statement]'), and prints information about the associated\ncontext and all its children.\n\nThe general context information probably should be something like:\ncontext_name, context_ident,\ncontext_total_bytes, context_total_blocks, context_total_freespace, context_total_freechunks, context_total_used, context_total_children\ncontext_self_bytes, context_self_blocks, context_self_freespace, context_self_freechunks, context_self_used, context_self_children,\n\nIt might make sense to have said function return a row for the contexts\nit encounters that do not have an owner set too (that way we'd e.g. get\nCacheMemoryContext handled), but then still recurse.\n\n\nArguably the proposed owning_object field would be a bit redundant with\nthe already existing ident/MemoryContextSetIdentifier field, which\ne.g. already associates the query string with the contexts used for a\nprepared statement. 
But I'm not convinced that's going to be enough\ncontext in a lot of cases, because e.g. for prepared statements it could\nbe interesting to have access to both the prepared statement name, and\nthe statement.\n\nThe reason I like something like this is that we wouldn't add new\ncolumns to a number of views, and lack views to associate such\ninformation to for some objects. And it'd be disproportional to add all\nthe information to numerous places anyway.\n\nOne counter-argument is that it'd be more expensive to get information\nspecific to prepared statements (or other object types) that way. I'm\nnot sure I buy that that's a problem - this isn't something that's\nlikely going to be used at a high frequency. But if it becomes a\nproblem, we can add a function that starts that process at a distinct\nmemory context (e.g. a function that does this just for a single\nprepared statement, identified by name) - but I'd not start there.\n\n\n> While being interesting I still believe monitoring the mem usage of\n> prepared statements is a bit more important than that of other objects\n> because of how they change memory consumption of the server without\n> using any DDL or configuration options and I am not aware of other\n> objects with the same properties, or are there some? And for the other\n> volatile objects like tables and indexes and their contents PostgreSQL\n> already has it's information functions.\n\nPlenty other objects have that property. E.g. cursors. And for the\ncatalog/relation/... caches it's even more pernicious - the client might\nhave closed all its \"handles\", but we still use memory (and it's\nabsolutely crucial for performance).\n\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 10:16:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
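The recursive walk Andres sketches above — starting at TopMemoryContext, emitting one row per context with self and total (self plus descendants) figures, and labeling contexts that have an owning object — can be modeled outside the server. The following is a minimal, hypothetical Python sketch of that walk, not PostgreSQL code; all names (MemoryContext, total_bytes, report, the owner tuples) are made up for illustration, and a real implementation would pull the numbers from each context's stats callback rather than storing them directly:

```python
# Minimal model (not PostgreSQL source) of the proposed reporting walk:
# each context knows its own allocation; "total" adds all descendants,
# mirroring the context_self_* vs. context_total_* columns sketched above.

class MemoryContext:
    def __init__(self, name, self_bytes, owner=None):
        self.name = name
        self.self_bytes = self_bytes
        self.owner = owner        # stands in for the proposed owning_object field
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child


def total_bytes(ctx):
    """Self plus everything below, like a context_total_bytes column."""
    return ctx.self_bytes + sum(total_bytes(c) for c in ctx.children)


def report(ctx, rows=None):
    """Depth-first walk emitting one row per context, owner included."""
    if rows is None:
        rows = []
    rows.append((ctx.name, ctx.owner, ctx.self_bytes, total_bytes(ctx)))
    for c in ctx.children:
        report(c, rows)
    return rows


# Illustrative tree: a prepared statement's contexts hanging under the cache.
top = MemoryContext("TopMemoryContext", 8192)
cache = top.add_child(MemoryContext("CacheMemoryContext", 16384))
stmt = cache.add_child(MemoryContext("CachedPlanSource", 4096,
                                     owner=("prepared statement", "stmt1")))
stmt.add_child(MemoryContext("CachedPlan", 2048))
```

Rows for unowned contexts (like CacheMemoryContext here) are still emitted, matching the suggestion above that the function report them but keep recursing.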
{
"msg_contents": "Am 05.08.2019 um 18:30 schrieb Konstantin Knizhnik:\n> On 31.07.2019 1:38, Daniel Migowski wrote:\n>> Am 31.07.2019 um 00:29 schrieb Tom Lane:\n>>> Daniel Migowski <dmigowski@ikoffice.de> writes:\n>>>> Ok, just have read about the commitfest thing. Is the patch OK for \n>>>> that? Or is there generally no love for a mem_usage column here? If \n>>>> it was, I really would add some memory monitoring in our app \n>>>> regarding this.\n>>> You should certainly put it into the next commitfest. We might or\n>>> might not accept it, but if it's not listed in the CF we might\n>>> not remember to even review it. (The CF app is really a to-do\n>>> list for patches ...)\n>> OK, added it there. Thanks for your patience :).\n> The patch is not applied to the most recent source because extra \n> parameter was added to CreateTemplateTupleDesc method.\n> Please rebase it - the fix is trivial.\nOK, will do.\n> I think that including in pg_prepared_statements information about \n> memory used this statement is very useful.\n> CachedPlanMemoryUsage function may be useful not only for this view, \n> but for example it is also need in my autoprepare patch.\nI would love to use your work if it's done, and would also love to work \ntogether here. I am quite novice in C thought, I might take my time to \nget things right.\n> I wonder if you consider go further and not only report but control \n> memory used by prepared statements?\n> For example implement some LRU replacement discipline on top of \n> prepared statements cache which can\n> evict rarely used prepared statements to avoid memory overflow.\n\nTHIS! Having some kind of safety net here would finally make sure that \nmy precious processes will not grow endlessly until all mem is eaten up, \neven with prep statement count limits.\n\nWhile working on stuff I noticed there are three things stored in a \nCachedPlanSource. 
The raw query tree (a relatively small thing), the \nquery list (the analyzed-and-rewritten query tree) which takes up the most \nmemory (at least here, maybe different with your usecases), and (often \nafter the 6th call) the CachedPlan, which is more optimized than the \nquery list and often needs less memory (half of the query list here).\n\nThe query list seems to take the most time to create here, because I hit \nthe GEQO engine here, but it could be recreated easily (up to 500ms for \nsome queries). Creating the CachedPlan afterwards takes 60ms in some \nusecases. If we could just invalidate them from time to time, that would \nbe great.\n\nAlso, invalidating just the queries or the CachedPlan would not \ninvalidate the whole prepared statement, which would break clients' \nexpectations, but just make them a bit slower, adding much to the stability \nof the system. I would pay that price, because I just don't use manually \nnamed prepared statements anyway and just autogenerate them as \nperformance sugar without thinking about what really needs to be \nprepared anyway. There is an option in the JDBC driver to use prepared \nstatements automatically after you have used them a few times.\n\n> We have such patch for PgPro-EE but it limits only number of prepared \n> statement, not taken in account amount of memory used by them.\n> I think that memory based limit will be more accurate (although it \n> adds more overhead).\n\nLimiting them by number is already done automatically here and would \nreally not be of much value, but having a mem limit would be great. We \ncould have a combined memory limit for your autoprepared statements as \nwell as the manually prepared ones, so clients can know for sure the \nserver processes won't eat up more than e.g. 800MB for prepared \nstatements. And also I would like to have this value spread across all \nclient processes, e.g. 
specifying max_prepared_statement_total_mem=5G \nfor the server, and maybe max_prepared_statement_mem=1G for client \nprocesses. Of course we would have to implement cross-client-process \ninvalidation here, and I don't know if communicating client processes \nare even intended.\n\nAnyway, a memory limit won't really add that much more overhead. At \nleast not more than having no prepared statements at all because of the \nfear of server OOMs, or having just a small count of those statements. I \nwas even thinking about a prepared statement reaper that checks \npg_prepared_statements every few minutes to clean things up manually, \nbut having this in the server would be of great value to me.\n\n> If you want, I can be reviewer of your patch.\n\nI'd love to have you as my reviewer.\n\nRegards,\nDaniel Migowski\n\n\n\n",
"msg_date": "Mon, 5 Aug 2019 21:35:13 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
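The memory-based eviction policy discussed in the message above — keep prepared statements until their summed memory exceeds a budget, then evict the least-recently-used ones — can be sketched as a small standalone model. This is purely illustrative: it is neither PgPro-EE's patch nor anything in PostgreSQL, and all names (PreparedStatementCache, use, mem_limit) are invented for the sketch:

```python
# Sketch of an LRU prepared-statement cache bounded by total memory,
# modeling the policy discussed above (not an actual server implementation).
from collections import OrderedDict

class PreparedStatementCache:
    def __init__(self, mem_limit):
        self.mem_limit = mem_limit
        self.stmts = OrderedDict()   # name -> mem_usage, oldest entry first
        self.total_mem = 0

    def use(self, name, mem_usage):
        """Record a (re)use of a statement; evict LRU entries over budget."""
        if name in self.stmts:
            # Re-insert to mark the statement as most recently used.
            self.total_mem -= self.stmts.pop(name)
        self.stmts[name] = mem_usage
        self.total_mem += mem_usage
        # Evict least-recently-used statements until we fit the budget,
        # always keeping at least the statement just used.
        while self.total_mem > self.mem_limit and len(self.stmts) > 1:
            _evicted, mem = self.stmts.popitem(last=False)
            self.total_mem -= mem
```

A count-based limit falls out of the same structure by testing `len(self.stmts)` instead of `total_mem`; the memory-based test is what makes the bound match what the server actually spends.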
{
"msg_contents": "Am 05.08.2019 um 19:16 schrieb Andres Freund:\n> On 2019-07-28 06:20:40 +0000, Daniel Migowski wrote:\n>> how do you want to generalize it? Are you thinking about a view solely\n>> for the display of the memory usage of different objects?\n> I'm not quite sure. I'm just not sure that adding separate\n> infrastructure for various objects is a sutainable approach. We'd likely\n> want to have this for prepared statements, for cursors, for the current\n> statement, for various caches, ...\n>\n> I think an approach would be to add an 'owning_object' field to memory\n> contexts, which has to point to a Node type if set. A table returning reporting\n> function could recursively walk through the memory contexts, starting at\n> TopMemoryContext. Whenever it encounters a context with owning_object\n> set, it prints a string version of nodeTag(owning_object). For some node\n> types it knows about (e.g. PreparedStatement, Portal, perhaps some of\n> the caches), it prints additional metadata specific to the type (so for\n> prepared statement it'd be something like 'prepared statement', '[name\n> of prepared statement]'), and prints information about the associated\n> context and all its children.\nI understand. So it would be something like the output of \nMemoryContextStatsInternal, but in table form with some extra columns. 
I \nwould have loved this extra information already in \nMemoryContextStatsInternal btw., so it might be a good idea to upgrade \nit first to find the information and wrap a table function over it \nafterwards.\n> The general context information probably should be something like:\n> context_name, context_ident,\n> context_total_bytes, context_total_blocks, context_total_freespace, context_total_freechunks, context_total_used, context_total_children\n> context_self_bytes, context_self_blocks, context_self_freespace, context_self_freechunks, context_self_used, context_self_children,\n>\n> It might make sense to have said function return a row for the contexts\n> it encounters that do not have an owner set too (that way we'd e.g. get\n> CacheMemoryContext handled), but then still recurse.\nA nice way to learn about the internals of the server and to analyze the \neffects of memory reducing enhancements.\n> Arguably the proposed owning_object field would be a bit redundant with\n> the already existing ident/MemoryContextSetIdentifier field, which\n> e.g. already associates the query string with the contexts used for a\n> prepared statement. But I'm not convinced that's going to be enough\n> context in a lot of cases, because e.g. for prepared statements it could\n> be interesting to have access to both the prepared statement name, and\n> the statement.\nThe identifier seems to be more like a category at the moment, because \nit does not seem to hold any relevant information about the object in \nquestion. So a more specific name would be nice.\n> The reason I like something like this is that we wouldn't add new\n> columns to a number of views, and lack views to associate such\n> information to for some objects. 
And it'd be disproportional to add all\n> the information to numerous places anyway.\nI understand your argumentation, but things like Cursors and Portals are \nrather short living while prepared statements seem to be the place where \nmemory really builds up.\n> One counter-argument is that it'd be more expensive to get information\n> specific to prepared statements (or other object types) that way. I'm\n> not sure I buy that that's a problem - this isn't something that's\n> likely going to be used at a high frequency. But if it becomes a\n> problem, we can add a function that starts that process at a distinct\n> memory context (e.g. a function that does this just for a single\n> prepared statement, identified by name) - but I'd not start there.\nI also see no problem here, and with Konstantin Knizhnik's autoprepare I \nwouldn't use this very often anyway, more just for monitoring purposes, \nwhere I don't care if my query is a bit more complex.\n>> While being interesting I still believe monitoring the mem usage of\n>> prepared statements is a bit more important than that of other objects\n>> because of how they change memory consumption of the server without\n>> using any DDL or configuration options and I am not aware of other\n>> objects with the same properties, or are there some? And for the other\n>> volatile objects like tables and indexes and their contents PostgreSQL\n>> already has it's information functions.\n> Plenty other objects have that property. E.g. cursors. And for the\n> catalog/relation/... caches it's even more pernicious - the client might\n> have closed all its \"handles\", but we still use memory (and it's\n> absolutely crucial for performance).\n\nMaybe we can do both? Add a single column to pg_prepared_statements, and \nadd another table for the output of MemoryContextStatsDetail? This has \nthe advantage that the single real memory indicator useful for end users \n(to the question: How much mem takes my sh*t up?) 
is in \npg_prepared_statements and some more intrinsic information in a detail \nview.\n\nThinking about the latter I am against such a table, at least in the \nform where it gives information like context_total_freechunks, because \nit would just be useful for us developers. Why should any end user care \nfor how many chunks are still open in a MemoryContext, except when he is \nworking on C-style extensions. Could just be a source of confusion for \nthem.\n\nLet's think about the goal this should have: The end user should be able \nto monitor the memory consumption of things he's in control of or that \ncould affect the system performance. Should such a table automatically \naggregate some information? I think so. I would not add more than two \nmemory columns to the view, just mem_used and mem_reserved. And even \nmem_used is questionable, because in his eyes only the memory he cannot \nuse for other stuff because of object x is important for him (that was \nthe reason I just added one column). He would even ask: WHY is there 50% \nmore memory reserved than used, and how can I optimize it? (Would lead \nto more curious PostgreSQL developers maybe, so that's maybe a plus).\n\nSomething that also clearly speaks FOR such a table and against my \nproposal is that if someone cares for memory, he would most likely care \nfor ALL his memory, and in that case monitoring prepared statements \nwould just be a small subset of stuff to monitor. Ok, I am defeated and \nwill rewrite my patch if the next proposal finds approval:\n\nI would propose a table pg_mem_usage containing the columns \nobject_class, name, detail, mem_usage (rename them if it fits the style \nof the other tables more). The name would be empty for some objects like \nthe unnamed prepared statement, the query strings would be in the detail \ncolumn. One could add a final \"Other\" row containing the memory that no \nspecific output line has accounted for. 
Also it could contain lines for \nCursors and other stuff I am too novice to think of here.\n\nAnd last: A reason why we still need a child-parent-relationship in this \ntable (and distinct this_ and total_ mem functions) is that prepared \nstatements start to use much more memory when the Generic Plan is \nstored in it after a few uses. As a user I always had the assumption \nthat preparing a statement would already do all the required work to be \nfast, but a statement just becomes blazingly fast when the Generic Plan \nis available (and used), and it would be nice to see for which \nstatements that plan has already been generated to consume its memory. I \nbelieve the reason for this would be the fear of excessive memory usage.\n\nOn the other hand: The Generic Plan had been created for the first \ninvocation of the prepared statement, so why not store it immediately? It is \na named statement for a reason: it is intended to be reused, even \nwhen it is just twice, and since memory seems not to be seen as a scarce \nresource in this context, why not store it immediately. That would drop the \nneed for a hierarchy here also.\n\nAny comments?\n\nRegards,\nDaniel Migowski\n\n\n\n",
"msg_date": "Mon, 5 Aug 2019 22:46:47 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 22:46:47 +0200, Daniel Migowski wrote:\n> > Arguably the proposed owning_object field would be a bit redundant with\n> > the already existing ident/MemoryContextSetIdentifier field, which\n> > e.g. already associates the query string with the contexts used for a\n> > prepared statement. But I'm not convinced that's going to be enough\n> > context in a lot of cases, because e.g. for prepared statements it could\n> > be interesting to have access to both the prepared statement name, and\n> > the statement.\n> The identifier seems to be more like a category at the moment, because it\n> does not seem to hold any relevant information about the object in question.\n> So a more specific name would be nice.\n\nI think you might be thinking of the context's name, not ident? E.g. for\nprepared statements the context name is:\n\n\tsource_context = AllocSetContextCreate(CurrentMemoryContext,\n\t\t\t\t\t\t\t\t\t\t \"CachedPlanSource\",\n\t\t\t\t\t\t\t\t\t\t ALLOCSET_START_SMALL_SIZES);\n\nwhich is obviously the same for every statement. But then there's\n\n\tMemoryContextSetIdentifier(source_context, plansource->query_string);\n\nwhich obviously differs.\n\n\n> > The reason I like something like this is that we wouldn't add new\n> > columns to a number of views, and lack views to associate such\n> > information to for some objects. And it'd be disproportional to add all\n> > the information to numerous places anyway.\n> I understand your argumentation, but things like Cursors and Portals are\n> rather short living while prepared statements seem to be the place where\n> memory really builds up.\n\nThat's not necessarily true, especially given WITH HOLD cursors. 
Nor\ndoes one only run out of memory in the context of long-lived objects.\n\n\n> > > While being interesting I still believe monitoring the mem usage of\n> > > prepared statements is a bit more important than that of other objects\n> > > because of how they change memory consumption of the server without\n> > > using any DDL or configuration options and I am not aware of other\n> > > objects with the same properties, or are there some? And for the other\n> > > volatile objects like tables and indexes and their contents PostgreSQL\n> > > already has it's information functions.\n\n> > Plenty other objects have that property. E.g. cursors. And for the\n> > catalog/relation/... caches it's even more pernicious - the client might\n> > have closed all its \"handles\", but we still use memory (and it's\n> > absolutely crucial for performance).\n> \n> Maybe we can do both? Add a single column to pg_prepared_statements, and add\n> another table for the output of MemoryContextStatsDetail? This has the\n> advantage that the single real memory indicator useful for end users (to the\n> question: How much mem takes my sh*t up?) is in pg_prepared_statements and\n> some more intrinsic information in a detail view.\n\nI don't see why we'd want to do both. Just makes pg_prepared_statements\nconsiderably more expensive. And that's used by some applications /\nclients in an automated manner.\n\n\n> Thinking about the latter I am against such a table, at least in the form\n> where it gives information like context_total_freechunks, because it would\n> just be useful for us developers.\n\nDevelopers are also an audience for us. I mean we certainly can use this\ninformation during development. But even for bug reports such information\nwould be useful.\n\n\n> Why should any end user care for how many\n> chunks are still open in a MemoryContext, except when he is working on\n> C-style extensions. Could just be a source of confusion for them.\n\nMeh. 
As long as the crucial stuff is first, that's imo enough.\n\n\n> Let's think about the goal this should have: The end user should be able to\n> monitor the memory consumption of things he's in control of or could affect\n> the system performance. Should such a table automatically aggregate some\n> information? I think so. I would not add more than two memory columns to the\n> view, just mem_used and mem_reserved. And even mem_used is questionable,\n> because in his eyes only the memory he cannot use for other stuff because of\n> object x is important for him (that was the reason I just added one column).\n> He would even ask: WHY is there 50% more memory reserved than used, and how\n> I can optimize it? (Would lead to more curious PostgreSQL developers maybe,\n> so that's maybe a plus).\n\nIt's important because it influences how memory usage will grow.\n\n\n> On the other hand: The Generic Plan had been created for the first\n> invocation of the prepared statement, why not store it immediatly. It is a\n> named statement for a reason that it is intended to be reused, even when it\n> is just twice, and since memory seems not to be seen as a scarce resource in\n> this context why not store that immediately. Would drop the need for a\n> hierarchy here also.\n\nWell, we'll maybe never use it, so ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:03:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "\n\nOn 05.08.2019 22:35, Daniel Migowski wrote:\n> .\n>> I think that including in pg_prepared_statements information about \n>> memory used this statement is very useful.\n>> CachedPlanMemoryUsage function may be useful not only for this view, \n>> but for example it is also need in my autoprepare patch.\n> I would love to use your work if it's done, and would also love to \n> work together here. I am quite novice in C thought, I might take my \n> time to get things right.\n\nRight now I reused your implementation of CachedPlanMemoryUsage function:)\nBefore, I took into account only memory used by plan->context, but not of \nplan->query_context and plan->gplan->context (although query_context for \nraw parse tree seems to be much smaller).\n\n\n>> I wonder if you consider go further and not only report but control \n>> memory used by prepared statements?\n>> For example implement some LRU replacement discipline on top of \n>> prepared statements cache which can\n>> evict rarely used prepared statements to avoid memory overflow.\n>\n> THIS! Having some kind of safety net here would finally make sure that \n> my precious processes will not grow endlessly until all mem is eaten \n> up, even with prep statement count limits.\n>\n> While working on stuff I noticed there are three things stored in a \n> CachedPlanSource. The raw query tree (a relatively small thing), the \n> query list (analyzed-and-rewritten query tree) which takes up the most \n> memory (at least here, maybe different with your usecases), and (often \n> after the 6th call) the CachedPlan, which is more optimized that the \n> query list and often needs less memory (half of the query list here).\n>\n> The query list seems to take the most time to create here, because I \n> hit the GEQO engine here, but it could be recreated easily (up to \n> 500ms for some queries). Creating the CachedPlan afterwards takes 60ms \n> in some usecase. 
IF we could just invalidate them from time to time, \n> that would be great.\n>\n> Also, invalidating just the queries or the CachedPlan would not \n> invalidate the whole prepared statement, which would break clients' \n> expectations, but just make them slower, adding much to the \n> stability of the system. I would pay that price, because I just don't \n> use manually named prepared statements anyway and just autogenerate \n> them as performance sugar without thinking about what really needs to \n> be prepared anyway. There is an option in the JDBC driver to use \n> prepared statements automatically after you have used them a few times.\n\nI have noticed that cached plans for implicitly prepared statements in \nstored procedures are not shown in the pg_prepared_statements view.\nIt may not be a problem in your case (if you are accessing Postgres \nthrough JDBC and not using prepared statements),\nbut can cause memory overflow in applications actively using stored \nprocedures, because unlike explicitly created prepared statements, it is \nvery difficult\nto estimate and control statements implicitly prepared by plpgsql.\n\nI am not sure what will be the best solution in this case. Adding yet \nanother view for implicitly prepared statements? Or include them in \nthe pg_prepared_statements view?\n>\n>> We have such patch for PgPro-EE but it limits only number of prepared \n>> statement, not taken in account amount of memory used by them.\n>> I think that memory based limit will be more accurate (although it \n>> adds more overhead).\n>\n> Limiting them by number is already done automatically here and would \n> really not be of much value, but having a mem limit would be great. We \n> could have a combined memory limit for your autoprepared statements as \n> well as the manually prepared ones, so clients can know for sure the \n> server processes won't eat up more than e.g. 800MB for prepared \n> statements. 
And also I would like to have this value spread across all \n> client processes, e.g. specifying max_prepared_statement_total_mem=5G \n> for the server, and maybe max_prepared_statement_mem=1G for client \n> processes. Of course we would have to implement cross client process \n> invalidation here, and I don't know if communicating client processes \n> are even intended.\n>\n> Anyway, a memory limit won't really add that much more overhead. At \n> least not more than having no prepared statements at all because of \n> the fear of server OOMs, or have just a small count of those \n> statements. I was even think about a prepared statement reaper that \n> checks the pg_prepared_statements every few minutes to clean things up \n> manually, but having this in the server would be of great value to me.\n\nRight now a memory context has no field containing the amount of currently \nused memory.\nThis is why the context->methods->stats function implementation has to \ntraverse all blocks to calculate the size of memory used by a context.\nIt may not be so fast for large contexts. But I do not expect that \ncontexts of prepared statements will be very large, although\nI have dealt with customers who issued queries with a query text length \nlarger than a few megabytes. I'm afraid to estimate the size of the plan \nfor such queries.\nThis is the reason for my concern that calculating memory context size \nmay have a negative effect on performance. But it has to be done only once \nwhen the statement is prepared. So maybe it is not a problem at all.\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 6 Aug 2019 10:48:31 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
},
{
"msg_contents": "Hello Daniel,\n\nThis patch no longer applies. Please submit an updated version. Also,\nthere's voluminous discussion that doesn't seem to have resulted in any\nrevision of the code. Please fix that too.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 16:56:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding column \"mem_usage\" to view pg_prepared_statements"
}
] |
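The per-context reporting discussed in the thread above later took shape in core PostgreSQL as the `pg_backend_memory_contexts` view (added in PostgreSQL 14, after this exchange). A hedged sketch of the kind of aggregate query the posters were after, assuming that view is available:

```sql
-- Sketch only: pg_backend_memory_contexts (PostgreSQL 14+) exposes
-- per-context statistics for the current backend, close to the
-- MemoryContextStatsDetail output discussed above.
SELECT name,
       ident,                    -- e.g. the query string for CachedPlanSource
       sum(used_bytes)  AS mem_used,
       sum(total_bytes) AS mem_reserved
FROM pg_backend_memory_contexts
WHERE name = 'CachedPlanSource'  -- contexts backing prepared statements
GROUP BY name, ident
ORDER BY mem_reserved DESC;
```

Note this only covers the current backend; the cross-backend reporting wished for later in the thread needs `pg_log_backend_memory_contexts()` (also PostgreSQL 14+) or external tooling.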
[
{
"msg_contents": "Hi,\n\nI could not locate the caller of ParsePrepareRecord function in twophase.c.\nAny idea how it gets called?\nor\nIs it a dead function?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 29 Jul 2019 13:39:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 4:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> I could not locate the caller of ParsePrepareRecord function in twophase.c.\n> Any idea how it gets called?\n> or\n> Is it a dead function?\n\nIt looks like it's not only dead, but stillborn. Commit\n1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf introduced it but without\nintroducing any code that called it, and nothing has changed since\nthen.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 29 Jul 2019 10:54:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 8:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jul 29, 2019 at 4:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > I could not locate the caller of ParsePrepareRecord function in twophase.c.\n> > Any idea how it gets called?\n> > or\n> > Is it a dead function?\n>\n> It looks like it's not only dead, but stillborn. Commit\n> 1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf introduced it but without\n> introducing any code that called it, and nothing has changed since\n> then.\n>\nI feel the code can be safely removed.\nPatch for the same is attached.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 29 Jul 2019 21:33:31 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On 2019-Jul-29, vignesh C wrote:\n\n> On Mon, Jul 29, 2019 at 8:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jul 29, 2019 at 4:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > I could not locate the caller of ParsePrepareRecord function in twophase.c.\n> > > Any idea how it gets called?\n> > > or\n> > > Is it a dead function?\n> >\n> > It looks like it's not only dead, but stillborn. Commit\n> > 1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf introduced it but without\n> > introducing any code that called it, and nothing has changed since\n> > then.\n>\n> I feel the code can be safely removed.\n> Patch for the same is attached.\n\nI think there's a patch from Fujii Masao that touches that? Might be\npremature to remove it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Jul 2019 17:04:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 2:34 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jul-29, vignesh C wrote:\n>\n> > On Mon, Jul 29, 2019 at 8:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 29, 2019 at 4:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > I could not locate the caller of ParsePrepareRecord function in twophase.c.\n> > > > Any idea how it gets called?\n> > > > or\n> > > > Is it a dead function?\n> > >\n> > > It looks like it's not only dead, but stillborn. Commit\n> > > 1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf introduced it but without\n> > > introducing any code that called it, and nothing has changed since\n> > > then.\n> >\n> > I feel the code can be safely removed.\n> > Patch for the same is attached.\n>\n> I think there's a patch from Fujii Masao that touches that? Might be\n> premature to remove it.\n>\n\nOkay, can you point to that patch? Recently, Robert/Thomas has raised\na comment on undo machinery wherein we are considering to store\nFullTransactionId in two-phase file. So, in that connection, we need\nto modify this function as well. It is not impossible to test some\nunused function (we can try it by superficially calling it at\nsomeplace in code for the purpose of test), but it would have been\nbetter if it is used in someplace.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmob1Oby7Wc5ryB_VBccU9N%2BuSKjXXocgT9dY_edfxqSA8Q%40mail.gmail.com\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jul 2019 08:42:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On 2019-Jul-30, Amit Kapila wrote:\n\n> On Tue, Jul 30, 2019 at 2:34 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > I think there's a patch from Fujii Masao that touches that? Might be\n> > premature to remove it.\n> \n> Okay, can you point to that patch?\n\nhttps://postgr.es/m/CAHGQGwFQgRWMOoRfbOOHXy1VdGM-YkwdwvWr_bD0TQXFTjD9Tw@mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Jul 2019 23:45:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 08:42:06AM +0530, Amit Kapila wrote:\n> Okay, can you point to that patch?\n\nHere you go:\nhttps://commitfest.postgresql.org/23/2105/\nThe thread is mostly waiting after Fujii-san for an update.\n--\nMichael",
"msg_date": "Tue, 30 Jul 2019 12:49:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 2:34 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jul-29, vignesh C wrote:\n>\n> > On Mon, Jul 29, 2019 at 8:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 29, 2019 at 4:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > I could not locate the caller of ParsePrepareRecord function in twophase.c.\n> > > > Any idea how it gets called?\n> > > > or\n> > > > Is it a dead function?\n> > >\n> > > It looks like it's not only dead, but stillborn. Commit\n> > > 1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf introduced it but without\n> > > introducing any code that called it, and nothing has changed since\n> > > then.\n> >\n> > I feel the code can be safely removed.\n> > Patch for the same is attached.\n>\n> I think there's a patch from Fujii Masao that touches that? Might be\n> premature to remove it.\n>\nOk, it makes sense not to remove it when there is some work being done in\ndifferent thread.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jul 2019 09:33:28 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is ParsePrepareRecord dead function"
}
] |
[
{
"msg_contents": "Hi!\n\nDuring my work on bringing the jsonpath patchset to commit, I was always\nkeeping in mind that we need to make the jsonb_path_*() functions\nimmutable. Having these functions immutable, users can build\nexpression indexes over them. Naturally, in the majority of cases one\ndoesn't need to index whole json documents, but only some parts of\nthem. jsonpath provides great facilities to extract indexable parts of a\ndocument, much more powerful than our current operator set.\n\nHowever, we've spotted some deviations between the standard and our implementation.\n * The like_regex predicate uses our regular expression engine, which\ndeviates from the standard.\n * We always do numeric computations using the numeric datatype. Even if\nthe user explicitly calls the .double() method. Probably, our current\nimplementation still fits the standard. But in the future we may like to use\nfloating point computation in some cases for performance optimization.\n\nThese deviations don't look critical by themselves. But immutable\nfunctions make it problematic to fix them in the future. Also, I'm not sure\nthis is a complete list of the deviations we have. We might have, for\nexample, hidden deviations in handling strict/lax modes, which are\nhard to detect and understand.\n\nTherefore, I'm going to mark the jsonb_path_*() functions stable, not\nimmutable. Nevertheless users will still have multiple options for\nindexing:\n1) jsonb_path_ops supports jsonpath matching operators in some cases.\n2) One can wrap jsonb_path_*() in a pl/* function and mark it as\nimmutable at his own risk. This approach is widely used to build\nindexes over to_date()/to_timestamp().\n3) We're going to provide support for jsonpath operators in the jsquery\nextension before the release of PostgreSQL 12.\n\nI'd like to note I don't mean we wouldn't ever have immutable\nfunctions for jsonpath evaluation. 
I think once we are sure enough that\nwe know the immutable subset of jsonpath, we may define immutable\nfunctions for its evaluation.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 29 Jul 2019 17:25:23 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Define jsonpath functions as stable"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> During my work on bringing jsonpath patchset to commit, I was always\n> keeping in mind that we need to make jsonb_path_*() functions\n> immutable. Having these functions immutable, users can build\n> expression indexes over them.\n\nRight.\n\n> However, we've spotted some deviations between standard and our implementation.\n> * like_regex predicate uses our regular expression engine, which\n> deviates from standard.\n> * We always do numeric computations using numeric datatype. Even if\n> user explicitly calls .double() method. Probably, our current\n> implementation still fits standard. But in future we may like to use\n> floating point computation in some cases for performance optimization.\n> ...\n> Therefore, I'm going to mark jsonb_path_*() functions stable, not\n> immutable.\n\nI dunno, I think you are applying a far more rigorous definition of\n\"immutable\" than we ever have in the past. The possibility that we\nmight change the implementation in the future should not be enough\nto disqualify a function from being immutable --- if that were the\ncriterion, nothing more complex than int4pl could be immutable.\n\nWouldn't it be better that, in the hypothetical major version where\nwe change the implementation, we tell users that they must reindex\nany affected indexes?\n\nAs a comparison point, we allow people to build indexes on tsvector\nresults, which are *easy* to change just by adjusting configuration\nfiles. The fact that this might force the need for reindexing hasn't\nmade it unworkable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2019 10:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "Hi,\n\nOn 7/29/19 10:25 AM, Alexander Korotkov wrote:\n\n> * like_regex predicate uses our regular expression engine, which\n> deviates from standard.\n\nI still favor adding some element to the syntax (like a 'posix' or 'pg'\nkeyword in the grammar for like_regex) that identifies it as using\na different regexp flavor, so the way forward to a possible compliant\nversion later is not needlessly blocked (or consigned to a\nstandard_conforming_strings-like experience).\n\nThat would also resolve much of the case against calling that\npredicate immutable.\n\nIt looks as if, in my first implementation of XQuery regexps, there\nwill have to be a \"not-quite-standard\" flag for those too, because\nit turns out the SQL committee made some tweaks to XQuery regexps[1],\nwhereas any XQuery library one relies on is going to provide untweaked\nXQuery regexps out of the box. (The differences only affect ^ $ . \\s \\S)\n\nRegards,\n-Chap\n\n\n[1]\nhttps://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards#XML_Query_regular_expressions\n\n\n",
"msg_date": "Mon, 29 Jul 2019 10:53:36 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 5:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > However, we've spotted some deviations between standard and our implementation.\n> > * like_regex predicate uses our regular expression engine, which\n> > deviates from standard.\n> > * We always do numeric computations using numeric datatype. Even if\n> > user explicitly calls .double() method. Probably, our current\n> > implementation still fits standard. But in future we may like to use\n> > floating point computation in some cases for performance optimization.\n> > ...\n> > Therefore, I'm going to mark jsonb_path_*() functions stable, not\n> > immutable.\n>\n> I dunno, I think you are applying a far more rigorous definition of\n> \"immutable\" than we ever have in the past. The possibility that we\n> might change the implementation in the future should not be enough\n> to disqualify a function from being immutable --- if that were the\n> criterion, nothing more complex than int4pl could be immutable.\n>\n> Wouldn't it be better that, in the hypothetical major version where\n> we change the implementation, we tell users that they must reindex\n> any affected indexes?\n>\n> As a comparison point, we allow people to build indexes on tsvector\n> results, which are *easy* to change just by adjusting configuration\n> files. The fact that this might force the need for reindexing hasn't\n> made it unworkable.\n\nThank you for the explanation. Given that, there is no need to mark\nthe existing jsonb_path_*() functions as stable. We can just advise users\nto rebuild their indexes if we have incompatible changes.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 30 Jul 2019 01:25:25 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Define jsonpath functions as stable"
},
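Option (2) from the message above — wrapping the function in a user-defined wrapper marked IMMUTABLE at one's own risk so it can back an expression index — can be sketched as follows. The wrapper name, the table `documents`, and its `payload` column are hypothetical:

```sql
-- Hedged sketch of the "wrap and mark immutable" approach (user's own
-- risk, as with the to_date()/to_timestamp() wrappers mentioned above);
-- all names here are illustrative.
CREATE FUNCTION jsonb_path_exists_imm(doc jsonb, path jsonpath)
RETURNS boolean
LANGUAGE sql IMMUTABLE
AS 'SELECT jsonb_path_exists(doc, path)';

CREATE INDEX documents_urgent_idx ON documents
    ((jsonb_path_exists_imm(payload, '$.tags[*] ? (@ == "urgent")')));
```

If the underlying behavior ever changes incompatibly, such an index must be rebuilt with REINDEX, which is exactly the trade-off discussed in this thread.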
{
"msg_contents": "On Mon, Jul 29, 2019 at 5:55 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> On 7/29/19 10:25 AM, Alexander Korotkov wrote:\n>\n> > * like_regex predicate uses our regular expression engine, which\n> > deviates from standard.\n>\n> I still favor adding some element to the syntax (like a 'posix' or 'pg'\n> keyword in the grammar for like_regex) that identifies it as using\n> a different regexp flavor, so the way forward to a possible compliant\n> version later is not needlessly blocked (or consigned to a\n> standard_conforming_strings-like experience).\n\nWhat do you think about renaming the existing operator from like_regex to\npg_like_regex? Or introducing a special flag indicating that the PostgreSQL\nregex engine is used ('p' for instance)?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 30 Jul 2019 01:27:56 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 07/29/19 18:27, Alexander Korotkov wrote:\n\n> What do you think about renaming existing operator from like_regex to\n> pg_like_regex? Or introducing special flag indicating that PostgreSQL\n> regex engine is used ('p' for instance)?\n\nRenaming the operator is simple and certainly solves the problem.\n\nI don't have a strong technical argument for or against any of:\n\n\n$.** ? (@ pg_like_regex \"O(w|v)\" flag \"i\")\n$.** ? (@ pg_like_regex \"O(w|v)\")\n\n\n$.** ? (@ like_regex \"O(w|v)\" pg flag \"i\")\n$.** ? (@ like_regex \"O(w|v)\" pg)\n\n\n$.** ? (@ like_regex \"O(w|v)\" flag \"ip\")\n$.** ? (@ like_regex \"O(w|v)\" flag \"p\")\n\n\nIt seems more of an aesthetic judgment (on which I am no particular\nauthority).\n\nI think I would be -0.3 on the third approach just because of the need\nto still spell out ' flag \"p\"' when there is no other flag you want.\n\nI assume the first two approaches would be about equally easy to\nimplement, assuming there's a parser that already has an optional\nproduction for \"flag\" STRING.\n\nBoth of the first two seem pretty safe from colliding with a\nfuture addition to the standard.\n\nTo my aesthetic sense, pg_like_regex feels like \"another operator\nto remember\" while like_regex ... 
pg feels like \"ok, a slight variant\non the operator from the spec\".\n\nLater on, if a conformant version is added, the grammar might be a bit\nsimpler with just one name and an optional pg.\n\nGoing with a flag, there is some question of the likelihood of\nthe chosen flag letter being usurped by the standard at some point.\n\nI'm leaning toward a flag for now in my own effort to provide the five SQL\nfunctions (like_regex, occurrences_regex, position_regex, substring_regex,\nand translate_regex), as for the time being it will be as an extension,\nso no custom grammar for me, and I don't really want to make five\npg_* variant function names, and have that expand to ten function names\nsomeday if the real ones are implemented. (Hmm, I suppose I could add\nan optional function argument, distinct from flags; that would be\nanalogous to adding a pg in the grammar ... avoids overloading the flags,\navoids renaming the functions.)\n\nI see in the Saxon library there is already a convention where it\nallows a few flags undefined by the standard, after a semicolon in the\nflag string. That has no official status; the XQuery spec\ndefines [smixq] and requires an error for anything else. But it\ndoes have the advantage that the flag string can just be chopped\nat the semicolon to eliminate all but the standard flags, and the\nadvantage (?) that at least one thing is already doing it.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 29 Jul 2019 20:33:47 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "Hi,\n\nOn 7/29/19 8:33 PM, Chapman Flack wrote:\n> On 07/29/19 18:27, Alexander Korotkov wrote:\n> \n>> What do you think about renaming existing operator from like_regex to\n>> pg_like_regex? Or introducing special flag indicating that PostgreSQL\n>> regex engine is used ('p' for instance)?\n> \n> Renaming the operator is simple and certainly solves the problem.\n> \n> I don't have a strong technical argument for or against any of:\n> \n> \n> $.** ? (@ pg_like_regex \"O(w|v)\" flag \"i\")\n> $.** ? (@ pg_like_regex \"O(w|v)\")\n> \n> \n> $.** ? (@ like_regex \"O(w|v)\" pg flag \"i\")\n> $.** ? (@ like_regex \"O(w|v)\" pg)\n> \n> \n> $.** ? (@ like_regex \"O(w|v)\" flag \"ip\")\n> $.** ? (@ like_regex \"O(w|v)\" flag \"p\")\n> \n> \n> It seems more of an aesthetic judgment (on which I am no particular\n> authority).\n> \n> I think I would be -0.3 on the third approach just because of the need\n> to still spell out ' flag \"p\"' when there is no other flag you want.\n> \n> I assume the first two approaches would be about equally easy to\n> implement, assuming there's a parser that already has an optional\n> production for \"flag\" STRING.\n> \n> Both of the first two seem pretty safe from colliding with a\n> future addition to the standard.\n> \n> To my aesthetic sense, pg_like_regex feels like \"another operator\n> to remember\" while like_regex ... 
pg feels like \"ok, a slight variant\n> on the operator from the spec\".\n> \n> Later on, if a conformant version is added, the grammar might be a bit\n> simpler with just one name and an optional pg.\n> \n> Going with a flag, there is some question of the likelihood of\n> the chosen flag letter being usurped by the standard at some point.\n> \n> I'm leaning toward a flag for now in my own effort to provide the five SQL\n> functions (like_regex, occurrences_regex, position_regex, substring_regex,\n> and translate_regex), as for the time being it will be as an extension,\n> so no custom grammar for me, and I don't really want to make five\n> pg_* variant function names, and have that expand to ten function names\n> someday if the real ones are implemented. (Hmm, I suppose I could add\n> an optional function argument, distinct from flags; that would be\n> analogous to adding a pg in the grammar ... avoids overloading the flags,\n> avoids renaming the functions.)\n\nLooking at this thread and[1] and the current state of open items[2], a\nfew thoughts:\n\nIt sounds like the easiest path to completion without potentially adding\nfutures headaches pushing back the release too far would be that, e.g.\nthese examples:\n\n\t$.** ? (@ like_regex \"O(w|v)\" pg flag \"i\")\n\t$.** ? (@ like_regex \"O(w|v)\" pg)\n\nIf it's using POSIX regexp, I would +1 using \"posix\" instead of \"pg\"\n\nThat said, from a user standpoint, it's slightly annoying to have to\ninclude that keyword every time, and could potentially mean changing /\ntesting quite a bit of code once we do support XQuery regexps. 
Based on\nhow we currently handle regular expressions, we've already conditioned\nusers to expect a certain behavior, and it would be inconsistent if we\ndo one thing in one place, and another thing here, so I would like for\nus to be cognizant of that.\n\nReading the XQuery spec that Chapman provided[3], it sounds like there\nare some challenges present if we were to try to implement XQuery-based\nregexps.\n\nI do agree with Alvaro's comment (\"We have an opportunity to do\nbetter\")[4], but I think we have to weigh the likelihood of actually\nsupporting the XQuery behaviors before we add more burden to our users.\nBased on what needs to be done, it does not sound like it is any time soon.\n\nMy first choice would be to leave it as is. We can make it abundantly\nclear that if we make changes in a future version we advise our users on\nwhat actions to take, and counsel on any behavior changes.\n\nMy second choice is to have a flag that makes it clear what kind of\nregexes are being used, in which case \"posix\" -- this is abundantly\nclearer to the user, but still default, at present, to using \"posix\"\nexpressions. If we ever do add the XQuery ones, we can debate whether we\ndefault to the standard at that time, and if we do, we treat it like we\ntreat other deprecation issues and make abundantly clear what the\nbehavior is now.\n\nThanks,\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/flat/5CF28EA0.80902%40anastigmatix.net\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n[3]\nhttps://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards#XML_Query_regular_expressions\n[4]\nhttps://www.postgresql.org/message-id/20190618154907.GA6049%40alvherre.pgsql",
"msg_date": "Mon, 16 Sep 2019 10:55:17 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> It sounds like the easiest path to completion without potentially adding\n> futures headaches pushing back the release too far would be that, e.g.\n> these examples:\n\n> \t$.** ? (@ like_regex \"O(w|v)\" pg flag \"i\")\n> \t$.** ? (@ like_regex \"O(w|v)\" pg)\n\n> If it's using POSIX regexp, I would +1 using \"posix\" instead of \"pg\"\n\nI agree that we'd be better off to say \"POSIX\". However, having just\nlooked through the references Chapman provided, it seems to me that\nthe regex language Henry Spencer's library provides is awful darn\nclose to what XPath is asking for. The main thing I see in the XML/XPath\nspecs that we don't have is a bunch of character class escapes that are\nspecifically tied to Unicode character properties. We could possibly\nadd code to implement those, but I'm not sure how it'd work in non-UTF8\ndatabase encodings. There may also be subtle differences in the behavior\nof character class escapes that we do have in common, such as \"\\s\" for\nwhite space; but again I'm not sure that those are any different than\nwhat you get naturally from encoding or locale variations.\n\nI think we could possibly get away with not having any special marker\non regexes, but just explaining in the documentation that \"features\nso-and-so are not implemented\". Writing that text would require closer\nanalysis than I've seen in this thread as to exactly what the differences\nare.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Sep 2019 11:20:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/16/19 11:20 AM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> It sounds like the easiest path to completion without potentially adding\n>> futures headaches pushing back the release too far would be that, e.g.\n>> these examples:\n> \n>> \t$.** ? (@ like_regex \"O(w|v)\" pg flag \"i\")\n>> \t$.** ? (@ like_regex \"O(w|v)\" pg)\n> \n>> If it's using POSIX regexp, I would +1 using \"posix\" instead of \"pg\"\n> \n> I agree that we'd be better off to say \"POSIX\". However, having just\n> looked through the references Chapman provided, it seems to me that\n> the regex language Henry Spencer's library provides is awful darn\n> close to what XPath is asking for. The main thing I see in the XML/XPath\n> specs that we don't have is a bunch of character class escapes that are\n> specifically tied to Unicode character properties. We could possibly\n> add code to implement those, but I'm not sure how it'd work in non-UTF8\n> database encodings.\n\nMaybe taking a page from the pg_saslprep implementation. For some cases\nwhere the string in question would issue a \"reject\" under normal\nSASLprep[1] considerations (really stringprep[2]), PostgreSQL just lets\nthe string passthrough to the next step, without alteration.\n\nWhat's implied here is if the string is UTF-8, it goes through SASLprep,\nbut if not, it is just passed through.\n\nSo perhaps the answer is that if we implement XQuery, the escape for\nUTF-8 character properties are only honored if the encoding is set to be\nUTF-8, and ignored otherwise. 
We would have to document that said\nescapes only work on UTF-8 encodings.\n\n> There may also be subtle differences in the behavior\n> of character class escapes that we do have in common, such as \"\\s\" for\n> white space; but again I'm not sure that those are any different than\n> what you get naturally from encoding or locale variations.\n>\n> I think we could possibly get away with not having any special marker\n> on regexes, but just explaining in the documentation that \"features\n> so-and-so are not implemented\". Writing that text would require closer\n> analysis than I've seen in this thread as to exactly what the differences\n> are.\n\n+1, and likely would need some example strings too that highlight the\ndifference in how they are processed.\n\nAnd again, if we end up updating the behavior in the future, it becomes\na part of our standard deprecation notice at the beginning of the\nrelease notes, though one that could require a lot of explanation.\n\nJonathan\n\n[1] https://tools.ietf.org/html/rfc4013\n[2] https://www.ietf.org/rfc/rfc3454.txt",
"msg_date": "Mon, 16 Sep 2019 13:36:29 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/16/19 11:20 AM, Tom Lane wrote:\n>> I think we could possibly get away with not having any special marker\n>> on regexes, but just explaining in the documentation that \"features\n>> so-and-so are not implemented\". Writing that text would require closer\n>> analysis than I've seen in this thread as to exactly what the differences\n>> are.\n\n> +1, and likely would need some example strings too that highlight the\n> difference in how they are processed.\n\nI spent an hour digging through these specs. I was initially troubled\nby the fact that XML Schema regexps are implicitly anchored, ie must\nmatch the whole string; that's a huge difference from POSIX. However,\n19075-6 says that jsonpath like_regex works the same as the LIKE_REGEX\npredicate in SQL; and SQL:2011 \"9.18 XQuery regular expression matching\"\ndefines LIKE_REGEX to work exactly like XQuery's fn:matches function,\nexcept for some weirdness around newline matching; and that spec\nclearly says that fn:matches treats its pattern argument as NOT anchored.\nSo it looks like we end up in the same place as POSIX for this.\n\nOtherwise, the pattern language differences I could find are all details\nof character class expressions (bracket expressions, such as \"[a-z0-9]\")\nand escapes that are character class shorthands:\n\n* We don't have \"character class subtraction\". I'd be pretty hesitant\nto add that to our regexp language because it seems to change \"-\" into\na metacharacter, which would break an awful lot of regexps. I might\nbe misunderstanding their syntax for it, because elsewhere that spec\nexplicitly claims that \"-\" is not a metacharacter.\n\n* Character class elements can be #xNN (NN being hex digits), which seems\nequivalent to POSIX \\xNN as long as you're using UTF8 encoding. 
Again,\nthe compatibility costs of allowing that don't seem attractive, since #\nisn't a metacharacter today.\n\n* Character class elements can be \\p{UnicodeProperty} or\nthe complement \\P{UnicodeProperty}, where there are a bunch of different\npossible properties. Perhaps we could add that someday; since there's no\nreason to escape \"p\" or \"P\" today, this doesn't seem like it'd be a huge\ncompatibility hit. But I'm content to document this as unimplemented\nfor now.\n\n* XQuery adds character class shorthands \\i (complement \\I) for \"initial\nname characters\" and \\c (complement \\C) for \"NameChar\". Same as above;\nmaybe add someday, but no hurry.\n\n* It looks like XQuery's \\w class might allow more characters than our\ninterpretation does, and hence \\W allows fewer. But since \\w devolves\nto what libc thinks the \"alnum\" class is, it's at least possible that\nsome locales might do the same thing XQuery calls for.\n\n* Likewise, any other discrepancies between the Unicode-centric character\nclass definitions in XQuery and what our stuff does are well within the\nboundaries of locale variances. So I don't feel too bad about that.\n\n* The SQL-spec newline business mentioned above is a possible exception:\nit appears to require that when '.' is allowed to match newlines, a\nsingle '.' should match a '\\r\\n' Windows newline. I think we can\ndocument that and move on.\n\n* The x flag in XQuery is defined as ignoring all whitespace in\nthe pattern except within character class expressions. Spencer's\nx flag does mostly that, but it thinks that \"\\ \" means a literal space\nwhereas XQuery explicitly says that the space is ignored and the\nbackslash applies to the next non-space character. (That's just\nweird, in my book.) 
Also, Spencer's x mode causes # to begin\na comment extending to EOL, which is a nice thing XQuery hasn't\ngot, and it says you can't put spaces within multi-character\nsymbols like \"(?:\", which presumably is allowed with XQuery's \"x\".\n\nI feel a bit uncomfortable with these inconsistencies in x-flag\nrules. We could probably teach the regexp library to have an\nalternate expanded mode that matches XQuery's rules, but that's\nnot a project to tackle for v12. I tentatively recommend that\nwe remove the jsonpath \"x\" flag for the time being.\n\nAlso, I noted some things that seem to be flat out sloppiness\nin the XQuery flag conversions:\n\n* The newline-matching flags (m and s flags) can be mapped to\nfeatures of Spencer's library, but jsonpath_gram.y does so\nincorrectly.\n\n* XQuery says that the q flag overrides m, s, and x flags, which is\nexactly the opposite of what our code does; besides which the code\nis flag-order-sensitive which is just wrong.\n\nThese last two are simple to fix and we should just go do it.\nOtherwise, I think we're okay with regarding Spencer's library\nas being a sufficiently close approximation to LIKE_REGEX.\nWe need some documentation work though.\n\n\t\t\tregards, tom lane\n\n\n",
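The anchoring point made above (XML Schema regexps are implicitly anchored, while XQuery's fn:matches and POSIX search are not) can be illustrated with a quick sketch. This uses Python's re module purely as a stand-in engine, not Spencer's library or PostgreSQL itself:

```python
import re

pattern = "O(w|v)"

# XQuery fn:matches / POSIX semantics: the pattern is NOT anchored,
# so it may match any substring of the input.
assert re.search(pattern, "Owl and Oven") is not None

# XML Schema semantics: the pattern is implicitly anchored and must
# consume the entire string, as if wrapped in ^(?:...)$.
assert re.fullmatch(pattern, "Owl") is None      # trailing "l" is unmatched
assert re.fullmatch(pattern, "Ow") is not None   # whole string matched
```

The practical upshot matches the conclusion above: since LIKE_REGEX is defined via fn:matches, the unanchored behavior lines up with what POSIX users already expect.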
"msg_date": "Mon, 16 Sep 2019 17:10:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/16/19 5:10 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/16/19 11:20 AM, Tom Lane wrote:\n>>> I think we could possibly get away with not having any special marker\n>>> on regexes, but just explaining in the documentation that \"features\n>>> so-and-so are not implemented\". Writing that text would require closer\n>>> analysis than I've seen in this thread as to exactly what the differences\n>>> are.\n> \n>> +1, and likely would need some example strings too that highlight the\n>> difference in how they are processed.\n> \n> I spent an hour digging through these specs.\n\nThanks! That sounds like quite the endeavor...\n\n> I was initially troubled\n> by the fact that XML Schema regexps are implicitly anchored, ie must\n> match the whole string; that's a huge difference from POSIX. However,\n> 19075-6 says that jsonpath like_regex works the same as the LIKE_REGEX\n> predicate in SQL; and SQL:2011 \"9.18 XQuery regular expression matching\"\n> defines LIKE_REGEX to work exactly like XQuery's fn:matches function,\n> except for some weirdness around newline matching; and that spec\n> clearly says that fn:matches treats its pattern argument as NOT anchored.\n> So it looks like we end up in the same place as POSIX for this.\n> \n> Otherwise, the pattern language differences I could find are all details\n> of character class expressions (bracket expressions, such as \"[a-z0-9]\")\n> and escapes that are character class shorthands:\n> \n> * We don't have \"character class subtraction\". I'd be pretty hesitant\n> to add that to our regexp language because it seems to change \"-\" into\n> a metacharacter, which would break an awful lot of regexps. I might\n> be misunderstanding their syntax for it, because elsewhere that spec\n> explicitly claims that \"-\" is not a metacharacter.\n\nUsing something I could understand[1] it looks like the syntax is like:\n\n\t[a-z-[aeiou]\n\ne.g. all the consonants of the alphabet. 
I don't believe that would\nbreak many, if any, regexps. I also don't know what kind of effort it\nwould take to add that in given I had not looked at the regexp code\nuntil today (and only at some of the amusing comments in the header\nfile, which seemed like it wasn't expected the code would be read 20\nyears later), but it would likely not be a v12 problem.\n\n> * Character class elements can be #xNN (NN being hex digits), which seems\n> equivalent to POSIX \\xNN as long as you're using UTF8 encoding. Again,\n> the compatibility costs of allowing that don't seem attractive, since #\n> isn't a metacharacter today.\n\nSeems reasonable.\n\n> * Character class elements can be \\p{UnicodeProperty} or\n> the complement \\P{UnicodeProperty}, where there are a bunch of different\n> possible properties. Perhaps we could add that someday; since there's no\n> reason to escape \"p\" or \"P\" today, this doesn't seem like it'd be a huge\n> compatibility hit. But I'm content to document this as unimplemented\n> for now.\n\n+1.\n\n> * XQuery adds character class shorthands \\i (complement \\I) for \"initial\n> name characters\" and \\c (complement \\C) for \"NameChar\". Same as above;\n> maybe add someday, but no hurry.\n\n+1.\n\n> * It looks like XQuery's \\w class might allow more characters than our\n> interpretation does, and hence \\W allows fewer. But since \\w devolves\n> to what libc thinks the \"alnum\" class is, it's at least possible that\n> some locales might do the same thing XQuery calls for.\n\nI'd still add this to the \"to document\" list.\n\n> * The SQL-spec newline business mentioned above is a possible exception:\n> it appears to require that when '.' is allowed to match newlines, a\n> single '.' should match a '\\r\\n' Windows newline. I think we can\n> document that and move on.\n\n+1.\n\n> * The x flag in XQuery is defined as ignoring all whitespace in\n> the pattern except within character class expressions. 
Spencer's\n> x flag does mostly that, but it thinks that \"\\ \" means a literal space\n> whereas XQuery explicitly says that the space is ignored and the\n> backslash applies to the next non-space character. (That's just\n> weird, in my book.) Also, Spencer's x mode causes # to begin\n> a comment extending to EOL, which is a nice thing XQuery hasn't\n> got, and it says you can't put spaces within multi-character\n> symbols like \"(?:\", which presumably is allowed with XQuery's \"x\".\n> \n> I feel a bit uncomfortable with these inconsistencies in x-flag\n> rules. We could probably teach the regexp library to have an\n> alternate expanded mode that matches XQuery's rules, but that's\n> not a project to tackle for v12.\n\nThat does not sound fun by any means. But likely that would be a part of\nan overall effort to implement XQuery rules.\n\n> I tentatively recommend that\n> we remove the jsonpath \"x\" flag for the time being.\n\nI would add an alternative suggestion of just removing that \"x\" is\nsupported in the documentation...but likely better to just remove the\nflag + docs.\n\n> Also, I noted some things that seem to be flat out sloppiness\n> in the XQuery flag conversions:\n> \n> * The newline-matching flags (m and s flags) can be mapped to\n> features of Spencer's library, but jsonpath_gram.y does so\n> incorrectly\n> * XQuery says that the q flag overrides m, s, and x flags, which is\n> exactly the opposite of what our code does; besides which the code\n> is flag-order-sensitive which is just wrong.\n> \n> These last two are simple to fix and we should just go do it.\n\n+1.\n\n> Otherwise, I think we're okay with regarding Spencer's library\n> as being a sufficiently close approximation to LIKE_REGEX.\n> We need some documentation work though.\n\nMy main question is \"where\" -- I'm thinking somewhere in the JSON\npath[2] section. 
After reading your email 3 times, I may have enough\nknowledge to attempt some documentation on the regexp in JSON path.\n\nJonathan\n\n[1] https://www.regular-expressions.info/charclasssubtract.html\n[2]\nhttps://www.postgresql.org/docs/12/functions-json.html#FUNCTIONS-SQLJSON-PATH",
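For the class-subtraction form discussed above (written in full as [a-z-[aeiou]], i.e. the lowercase consonants), engines without that syntax can usually emulate it. A hedged sketch in Python, whose re engine also lacks subtraction but supports negative lookahead:

```python
import re

# Emulate XQuery class subtraction [a-z-[aeiou]] (lowercase consonants):
# match [a-z] only at positions where [aeiou] does not match.
consonant = re.compile(r"(?![aeiou])[a-z]")

assert consonant.fullmatch("b") is not None
assert consonant.fullmatch("e") is None
# Applied across a string, only the consonants survive.
assert consonant.findall("facetious") == ["f", "c", "t", "s"]
```

Whether PostgreSQL's engine would grow subtraction syntax or an emulation like this is exactly the open design question; the sketch only shows the target character set is expressible without making "-" a metacharacter.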
"msg_date": "Mon, 16 Sep 2019 18:39:40 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 09/16/19 17:10, Tom Lane wrote:\n\n> I was initially troubled\n> by the fact that XML Schema regexps are implicitly anchored, ie must\n> match the whole string; that's a huge difference from POSIX. However,\n> 19075-6 says that jsonpath like_regex works the same as the LIKE_REGEX\n> predicate in SQL; and SQL:2011 \"9.18 XQuery regular expression matching\"\n> defines LIKE_REGEX to work exactly like XQuery's fn:matches function,\n> except for some weirdness around newline matching; and that spec\n> clearly says that fn:matches treats its pattern argument as NOT anchored.\n\nYeah, it's a layer cake. XML Schema regexps[1] are implicitly anchored and\ndon't have any metacharacters devoted to anchoring.\n\nXQuery regexps layer onto[2] XML Schema regexps, adding ^ and $ anchors,\nrescinding the implicit anchored-ness, adding reluctant quantifiers,\ncapturing groups, and back-references, and defining flags.\n\nThen ISO SQL adds a third layer changing the newline semantics, affecting\n^, $, ., \\s, and \\S.\n\nRegards,\n-Chap\n\n\n[1] https://www.w3.org/TR/xmlschema-2/#regexs\n[2] https://www.w3.org/TR/xpath-functions-31/#regex-syntax\n\n\n",
"msg_date": "Mon, 16 Sep 2019 19:11:13 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/16/19 6:39 PM, Jonathan S. Katz wrote:\n\n> My main question is \"where\" -- I'm thinking somewhere in the JSON\n> path[2] section. After reading your email 3 times, I may have enough\n> knowledge to attempt some documentation on the regexp in JSON path.\n\nHere is said attempt to document. Notes:\n\n- I centered it around the specification for LIKE_REGEX, which uses\nXQuery, but primarily noted where our implementation of POSIX regex's\ndiffers from what is specified for LIKE_REGEX vis-a-vis XQuery\n\n- I put the pith of the documentation in a subsection off of \"POSIX\nregular expressions\"\n\n- I noted that LIKE_REGEX is specified in SQL:2008, which I read on the\nInternet(tm) but was not able to confirm in the spec as I do not have a copy\n\n- For my explanation about the \"x\" flag differences, I talked about how\nwe extended it, but I could not capture how Tom described the nuances above.\n\n- From the SQL/JSON path docs, I added a section on regular expressions\nstating what the behavior is, and referring back to the main regex docs\n\n- I removed the \"x\" flag being supported for like_regex in JSON path\n\nI also presume it needs a bit of wordsmithing / accuracy checks, but\nhope it's a good start and does not require a massive rewrite.\n\nThanks,\n\nJonathan",
"msg_date": "Tue, 17 Sep 2019 11:38:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 2019-09-17 17:38, Jonathan S. Katz wrote:\n> On 9/16/19 6:39 PM, Jonathan S. Katz wrote:\n> [regex.patch]\n\nA few things/typos caught my eye:\n\n1.\n'implementation' seems the wrong word in sentence:\n\n\"Several other parts of the SQL standard\nalso define LIKE_REGEX equivalents that refer\nto this implementation, including the\nSQL/JSON path like_regex filter.\"\n\nAs I understand this text, 'concept' seems better.\nI'd drop 'also', too.\n\n2.\n'whereas the POSIX will those' should be\n'whereas POSIX will regard those'\n or maybe 'read those'\n\n3.\n+ The SQL/JSON standard borrows its definition for how regular \nexpressions\n+ from the <literal>LIKE_REGEX</literal> operator, which in turns \nuses the\n+ XQuery standard.\nThat sentence needs the verb 'work', no? 'for how regular expressions \nwork [..]'\nOr alternatively drop 'how'.\n\n\nthanks,\n\nErik Rijkers\n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2019 18:09:01 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/17/19 12:09 PM, Erik Rijkers wrote:\n> On 2019-09-17 17:38, Jonathan S. Katz wrote:\n>> [regex.patch]\n\nThanks for the review!\n\n> \"Several other parts of the SQL standard\n> also define LIKE_REGEX equivalents that refer\n> to this implementation, including the\n> SQL/JSON path like_regex filter.\"\n> \n> As I understand this text, 'concept' seems better.\n> I'd drop 'also', too.\n\nI rewrote this to be:\n\n\"Several other parts of the SQL standard refer to the LIKE_REGEX\nspecification to define similar operations, including...\"\n\n> 2.\n> 'whereas the POSIX will those' should be\n> 'whereas POSIX will regard those'\n> or maybe 'read those'\n\nI used \"treat those\"\n\n> \n> 3.\n> + The SQL/JSON standard borrows its definition for how regular\n> expressions\n> + from the <literal>LIKE_REGEX</literal> operator, which in turns\n> uses the\n> + XQuery standard.\n> That sentence needs the verb 'work', no? 'for how regular expressions\n> work [..]'\n> Or alternatively drop 'how'.\n\nI dropped the \"how\".\n\nv2 attached. Thanks!\n\nJonathan",
"msg_date": "Tue, 17 Sep 2019 13:58:31 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> v2 attached. Thanks!\n\nI whacked this around some (well, quite a bit actually); notably,\nI thought we'd better describe things that are in our engine but\nnot XQuery, as well as vice-versa.\n\nAfter a re-read of the XQuery spec, it seems to me that the character\nentry form that they have and we don't is actually \"&#NNNN;\" like\nHTML, rather than just \"#NN\". Can anyone double-check that? Does\nit work outside bracket expressions, or only inside?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 17 Sep 2019 18:40:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/17/19 6:40 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> v2 attached. Thanks!\n> \n> I whacked this around some (well, quite a bit actually);\n\nSo I see :) Thanks.\n\n> notably,\n> I thought we'd better describe things that are in our engine but\n> not XQuery, as well as vice-versa.\n\nYeah, that makes sense. Overall it reads really well. One question I had\nin my head (and probably should have asked) was answered around the \\w\ncharacter class wrt collation.\n\n> After a re-read of the XQuery spec, it seems to me that the character\n> entry form that they have and we don't is actually \"&#NNNN;\" like\n> HTML, rather than just \"#NN\". Can anyone double-check that?\n\nClicking through the XQuery spec eventual got me to here[1] (which warns\nme that its out of date, but that is what its \"current\" specs linked me\nto), which describes being able to use \"&#[0-9]+;\" and \"&#[0-9a-fA-F]+;\"\nto specify characters (which I recognize as a character escape from\nHTML, XML et al.).\n\nSo based on that, my answer is \"yes.\"\n\n> Does\n> it work outside bracket expressions, or only inside?\n\nLooking at the parse tree (start with the \"atom\"[2]), I read it as being\nable to use that syntax both inside and outside the bracket expressions.\n\nHere is a v4. I added some more paragraphs the bullet point that\nexplains the different flags to make it feel a bit less dense.\n\nThanks,\n\nJonathan\n\n[1] https://www.w3.org/TR/2000/WD-xml-2e-20000814#dt-charref\n[2] https://www.w3.org/TR/xmlschema-2/#nt-atom",
"msg_date": "Tue, 17 Sep 2019 21:13:18 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 09/17/19 21:13, Jonathan S. Katz wrote:\n\n> to), which describes being able to use \"&#[0-9]+;\" and \"&#[0-9a-fA-F]+;\"\n\nEr, that is, \"&#[0-9]+;\" and \"&#x[0-9a-fA-F]+;\" (with x for the hex case).\n\n>> Does\n>> it work outside bracket expressions, or only inside?\n> \n> Looking at the parse tree (start with the \"atom\"[2]), I read it as being\n> able to use that syntax both inside and outside the bracket expressions.\n\nMaybe I can plug a really handy environment for messin'-around-in-XQuery,\nBaseX: http://basex.org/\n\nAll the buzzwords on the landing page make it seem as if it's going to be\nsome monstrous thing to download and set up, but on the downloads page,\nthe \"Core Package\" option is a single standalone 3.8 MB jar file:\n\n http://files.basex.org/releases/9.2.4/BaseX924.jar\n\n\"java -jar BaseX924.jar\" is all it takes to start up, and wham, you're\nin a nice IDE-like environment where the editor pane is syntax-aware\nfor XQuery and will run your code and show results with a click of the\ngo button.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 17 Sep 2019 22:00:44 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/17/19 10:00 PM, Chapman Flack wrote:\n> On 09/17/19 21:13, Jonathan S. Katz wrote:\n> \n>> to), which describes being able to use \"&#[0-9]+;\" and \"&#[0-9a-fA-F]+;\"\n> \n> Er, that is, \"&#[0-9]+;\" and \"&#x[0-9a-fA-F]+;\" (with x for the hex case).\n\nCorrect, I missed the \"x\".\n\nThanks,\n\nJonathan",
"msg_date": "Tue, 17 Sep 2019 22:07:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 4:13 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Here is a v4. I added some more paragraphs the bullet point that\n> explains the different flags to make it feel a bit less dense.\n\nSorry that I didn't participate this discussion till now. FWIW, I\nagree with selected approach to document differences with XQuery regex\nand and forbid 'x' from jsonpath like_regex. Patch also looks good\nfor me at the first glance.\n\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 18 Sep 2019 13:29:35 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/17/19 6:40 PM, Tom Lane wrote:\n>> After a re-read of the XQuery spec, it seems to me that the character\n>> entry form that they have and we don't is actually \"&#NNNN;\" like\n>> HTML, rather than just \"#NN\". Can anyone double-check that?\n\n> Clicking through the XQuery spec eventually got me to here[1] (which warns\n> me that it's out of date, but that is what its \"current\" specs linked me\n> to), which describes being able to use \"&#[0-9]+;\" and \"&#[0-9a-fA-F]+;\"\n> to specify characters (which I recognize as a character escape from\n> HTML, XML et al.).\n\nAfter further reading, it seems like what that text is talking about\nis not actually a regex feature, but an outgrowth of the fact that\nthe regex pattern is being expressed as a string literal in a language\nfor which XML character entities are a native aspect of the string\nliteral syntax. So it looks to me like the entities get folded to\nraw characters in a string-literal parser before the regex engine\never sees them.\n\nAs such, I think this doesn't apply to SQL/JSON. The SQL/JSON spec\nseems to defer to Javascript/ECMAscript for syntax details, and\nin either of those languages you have backslash escape sequences\nfor writing weird characters, *not* XML entities. You certainly\nwouldn't have use of such entities in a native implementation of\nLIKE_REGEX in SQL.\n\nSo now I'm thinking we can just remove the handwaving about entities.\nOn the other hand, this points up a large gap in our docs about\nSQL/JSON, which is that nowhere does it even address the question of\nwhat the string literal syntax is within a path expression. Much\nless point out that that syntax is nothing like native SQL strings.\nGood luck finding out from the docs that you'd better double any\nbackslashes you'd like to have in your regex --- but a moment's\ntesting proves that that is the case in our code as it stands.\nHave we misread the spec badly enough to get this wrong?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2019 17:12:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 09/18/19 17:12, Tom Lane wrote:\n\n> After further reading, it seems like what that text is talking about\n> is not actually a regex feature, but an outgrowth of the fact that\n> the regex pattern is being expressed as a string literal in a language\n> for which XML character entities are a native aspect of the string\n> literal syntax. So it looks to me like the entities get folded to\n> raw characters in a string-literal parser before the regex engine\n> ever sees them.\n\nHmm. That occurred to me too, but I thought the explicit mention of\n'character reference' in the section specific to regexes[1] might not\nmean that. It certainly could have been clearer.\n\nBut you seem to have the practical agreement of both BaseX:\n\nlet $foo := codepoints-to-string((38,35,120,54,49,59))\nreturn ($foo, matches('a', $foo))\n------\na\nfalse\n\nand the Saxon-based pljava example:\n\nselect occurrences_regex('a', 'a', w3cNewlines => true);\n occurrences_regex\n-------------------\n 0\n\n> As such, I think this doesn't apply to SQL/JSON. The SQL/JSON spec\n> seems to defer to Javascript/ECMAscript for syntax details, and\n> in either of those languages you have backslash escape sequences\n> for writing weird characters, *not* XML entities. You certainly\n> wouldn't have use of such entities in a native implementation of\n> LIKE_REGEX in SQL.\n\nSo yeah, that seems to be correct.\n\nThe upshot seems to be a two-parter:\n\n1. Whatever string literal syntax is used in front of the regex engine\n had better have some way to represent any character you could want\n to match, and\n2. There is only one way to literally match a character that is a regex\n metacharacter, namely, to precede it with a backslash (that the regex\n engine will see; therefore doubled if necessary). 
Whatever codepoint\n escape form might be available in the string literal syntax does not\n offer another way to do that, because it happens too early, before\n the regex engine can see it.\n\n> So now I'm thinking we can just remove the handwaving about entities.\n> On the other hand, this points up a large gap in our docs about\n> SQL/JSON, which is that nowhere does it even address the question of\n> what the string literal syntax is within a path expression.\n\nThat does seem like it ought to be covered.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 18 Sep 2019 18:47:15 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 09/18/19 17:12, Tom Lane wrote:\n>> As such, I think this doesn't apply to SQL/JSON. The SQL/JSON spec\n>> seems to defer to Javascript/ECMAscript for syntax details, and\n>> in either of those languages you have backslash escape sequences\n>> for writing weird characters, *not* XML entities. You certainly\n>> wouldn't have use of such entities in a native implementation of\n>> LIKE_REGEX in SQL.\n\n> So yeah, that seems to be correct.\n\nThanks for double-checking. I removed that para from the patch.\n\n>> So now I'm thinking we can just remove the handwaving about entities.\n>> On the other hand, this points up a large gap in our docs about\n>> SQL/JSON, which is that nowhere does it even address the question of\n>> what the string literal syntax is within a path expression.\n\n> That does seem like it ought to be covered.\n\nI found a spot that seemed like a reasonable place, and added some\ncoverage of the point. Updated patch attached.\n\nIt seems to me that there are some discrepancies between what the spec\nsays and what jsonpath_scan.l actually does, so maybe we should take a\nhard look at that code too. The biggest issue is that jsonpath_scan.l\nseems to allow single- and double-quoted strings interchangeably, which is\nOK per ECMAScript, but then the SQL/JSON spec seems to be saying that only\ndouble-quoted strings are allowed. I'd rather be conservative about this\nthan get out in front of the spec and use syntax space that they might do\nsomething else with someday.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 18 Sep 2019 19:41:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "I wrote:\n> I found a spot that seemed like a reasonable place, and added some\n> coverage of the point. Updated patch attached.\n\nDoc patch pushed.\n\n> It seems to me that there are some discrepancies between what the spec\n> says and what jsonpath_scan.l actually does, so maybe we should take a\n> hard look at that code too. The biggest issue is that jsonpath_scan.l\n> seems to allow single- and double-quoted strings interchangeably, which is\n> OK per ECMAScript, but then the SQL/JSON spec seems to be saying that only\n> double-quoted strings are allowed. I'd rather be conservative about this\n> than get out in front of the spec and use syntax space that they might do\n> something else with someday.\n\nThe attached proposed patch makes these changes:\n\n1. Remove support for single-quoted literals in jsonpath.\n\n2. Treat an unrecognized escape (e.g., \"\\z\") as meaning the escaped\n character, rather than throwing an error.\n\n3. A few cosmetic adjustments to make the jsonpath_scan code shorter and\n clearer (IMHO).\n\nAs for #1, although the SQL/JSON tech report does reference ECMAScript\nwhich allows both single- and double-quoted strings, it seems to me\nthat their intent is to allow only the double-quoted variant. They\nspecifically reference JSON string literals at one point, and of course\nJSON only allows double-quoted. Also, all of their discussion and\nexamples use double-quoted. Plus you'd have to be pretty nuts to want\nto use single-quoted when writing a jsonpath string literal inside a SQL\nliteral (and the tech report seems to contemplate that jsonpaths MUST be\nstring literals, though of course our implementation does not require\nthat).\n\nAs for #2, the existing code throws an error, but this is contrary\nto clear statements in every single one of the relevant standards.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 19 Sep 2019 12:25:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/19/19 12:25 PM, Tom Lane wrote:\n> I wrote:\n>> I found a spot that seemed like a reasonable place, and added some\n>> coverage of the point. Updated patch attached.\n> \n> Doc patch pushed.\n\nThanks! I did not get to review them last night but upon review not too\nlong ago, they looked great.\n\n>> It seems to me that there are some discrepancies between what the spec\n>> says and what jsonpath_scan.l actually does, so maybe we should take a\n>> hard look at that code too. The biggest issue is that jsonpath_scan.l\n>> seems to allow single- and double-quoted strings interchangeably, which is\n>> OK per ECMAScript, but then the SQL/JSON spec seems to be saying that only\n>> double-quoted strings are allowed. I'd rather be conservative about this\n>> than get out in front of the spec and use syntax space that they might do\n>> something else with someday.\n\nI agree with erring on the side of the spec vs. what ECMAScript does. In\nJSON, strings, identifiers, etc. are double-quoted. Anything that is\nsingle quoted will throw an error in a compliant JSON parser.\n\nLooking at the user documentation for some other databases with\nSQL/JSON support, this seems to back up your analysis.\n\n> \n> The attached proposed patch makes these changes:\n> \n> 1. Remove support for single-quoted literals in jsonpath.\n> \n> 2. Treat an unrecognized escape (e.g., \"\\z\") as meaning the escaped\n> character, rather than throwing an error.\n> \n> 3. A few cosmetic adjustments to make the jsonpath_scan code shorter and\n> clearer (IMHO).\n\nIf this refers to s/any/other/, yes I would agree it's clearer.\n\n> As for #1, although the SQL/JSON tech report does reference ECMAScript\n> which allows both single- and double-quoted strings, it seems to me\n> that their intent is to allow only the double-quoted variant. They\n> specifically reference JSON string literals at one point, and of course\n> JSON only allows double-quoted. Also, all of their discussion and\n> examples use double-quoted. Plus you'd have to be pretty nuts to want\n> to use single-quoted when writing a jsonpath string literal inside a SQL\n> literal (and the tech report seems to contemplate that jsonpaths MUST be\n> string literals, though of course our implementation does not require\n> that).\n\nI agree with the above (though wrt single-quoting and literals, I have\nseen stranger things).\n\n> As for #2, the existing code throws an error, but this is contrary\n> to clear statements in every single one of the relevant standards.\n\nMakes sense.\n\nI looked at the patch, but did not test it. From what I can see, it\nlooks good, but perhaps we add a test in it to show that single-quoted\nliterals are unsupported?\n\nJonathan",
"msg_date": "Thu, 19 Sep 2019 14:37:29 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I looked at the patch, but did not test it. From what I can see, it\n> looks good, but perhaps we add a test in it to show that single-quoted\n> literals are unsupported?\n\nI thought about that, but it seems like it'd be memorializing some\nother weird behavior:\n\nregression=# select '''foo'''::jsonpath;\nERROR: syntax error, unexpected IDENT_P at end of jsonpath input\nLINE 1: select '''foo'''::jsonpath;\n ^\n\nregression=# select '''foo'' <= ''bar'''::jsonpath;\nERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\nLINE 1: select '''foo'' <= ''bar'''::jsonpath;\n ^\n\nThere isn't anything I like about these error messages. Seems like\nthe error handling in jsonpath_gram.y could use some cleanup too\n... although I don't think it's a task to tackle while we're\nrushing to get v12 shippable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 15:48:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/19/19 3:48 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> I looked at the patch, but did not test it. From what I can see, it\n>> looks good, but perhaps we add a test in it to show that single-quoted\n>> literals are unsupported?\n> \n> I thought about that, but it seems like it'd be memorializing some\n> other weird behavior:\n> \n> regression=# select '''foo'''::jsonpath;\n> ERROR: syntax error, unexpected IDENT_P at end of jsonpath input\n> LINE 1: select '''foo'''::jsonpath;\n> ^\n> \n> regression=# select '''foo'' <= ''bar'''::jsonpath;\n> ERROR: syntax error, unexpected IDENT_P at or near \" \" of jsonpath input\n> LINE 1: select '''foo'' <= ''bar'''::jsonpath;\n> ^\n\nAh yeah, those are some interesting errors.\n\n> There isn't anything I like about these error messages.\n\nAgreed. It would be nice to have tests around it, but yes, I think\nlooking at the regression output one may scratch their head.\n\n> Seems like\n> the error handling in jsonpath_gram.y could use some cleanup too\n> ... although I don't think it's a task to tackle while we're\n> rushing to get v12 shippable.\n\nIIRC if we want to change the contents of an error message we wait until\nmajor releases. Is there anything we can do before 12 to avoid messages\nlike \"unexpected IDENT_P\" coming to a user? Would that be something\nacceptable to fix as a 12.1 or would it have to wait until 13?\n\nJonathan",
"msg_date": "Thu, 19 Sep 2019 18:04:02 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/19/19 3:48 PM, Tom Lane wrote:\n>> Seems like\n>> the error handling in jsonpath_gram.y could use some cleanup too\n>> ... although I don't think it's a task to tackle while we're\n>> rushing to get v12 shippable.\n\n> IIRC if we want to change the contents of an error message we wait until\n> major releases. Is there anything we can do before 12 to avoid messages\n> like \"unexpected IDENT_P\" coming to a user? Would that be something\n> acceptable to fix as a 12.1 or would it have to wait until 13?\n\nI think these messages are sufficiently confusing that we could call\nit a bug fix to improve them. As long as we don't change the SQLSTATE\nthat's thrown, it's hard to claim that there's any real application\ncompatibility hazard from changing them.\n\nI just don't want to call this point a release blocker. It's not\nabout changing any semantics or the set of things that work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 18:18:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
},
{
"msg_contents": "On 9/19/19 6:18 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/19/19 3:48 PM, Tom Lane wrote:\n>>> Seems like\n>>> the error handling in jsonpath_gram.y could use some cleanup too\n>>> ... although I don't think it's a task to tackle while we're\n>>> rushing to get v12 shippable.\n> \n>> IIRC if we want to change the contents of an error message we wait until\n>> major releases. Is there anything we can do before 12 to avoid messages\n>> like \"unexpected IDENT_P\" coming to a user? Would that be something\n>> acceptable to fix as a 12.1 or would it have to wait until 13?\n> \n> I think these messages are sufficiently confusing that we could call\n> it a bug fix to improve them. As long as we don't change the SQLSTATE\n> that's thrown, it's hard to claim that there's any real application\n> compatibility hazard from changing them.\n\nGreat. +1 on that.\n\n> I just don't want to call this point a release blocker. It's not\n> about changing any semantics or the set of things that work.\n\n+100 on that.\n\nJonathan",
"msg_date": "Thu, 19 Sep 2019 18:20:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Define jsonpath functions as stable"
}
]
[
{
"msg_contents": "Hello,\n\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\n\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\n\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\n\nIf so would it be sane to keep a copy of the original quals to make the partial join possible?\n\n\nRegards\n\nArne",
"msg_date": "Mon, 29 Jul 2019 16:43:05 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Partial join"
},
{
"msg_contents": "Hello,\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\nIf so would it be sane to keep a copy of the original quals to make the partial join possible? Do you have better ideas?\n\n\nRegards\nArne",
"msg_date": "Thu, 1 Aug 2019 08:07:25 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Partial join"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 5:38 PM Arne Roland <A.Roland@index.de> wrote:\n\n> Hello,\n>\n> I attached one example of a partitioned table with multi column partition\n> key. I also attached the output.\n> Disabling the hash_join is not really necessary, it just shows the more\n> drastic result in the case of low work_mem.\n>\n> Comparing the first and the second query I was surprised to see that SET\n> enable_partitionwise_join could cause the costs to go up. Shouldn't the\n> paths of the first query be generated as well?\n>\n> The third query seems to have a different issue. That one is close to my\n> original performance problem. It looks to me like the push down of the sl\n> condition stops the optimizer considering a partial join.\n> If so would it be sane to keep a copy of the original quals to make the\n> partial join possible? Do you have better ideas?\n>\n\nFor the third query, a rough investigation shows that, the qual 'sl =\n5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\nimplied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\ndown to the base rels. One consequence of the deduction is when\nconstructing restrict lists for the joinrel, we lose the original\nrestrict 'sc.sl = sg.sl', and this would fail the check\nhave_partkey_equi_join(), which checks if there exists an equi-join\ncondition for each pair of partition keys. As a result, this joinrel\nwould not be considered as an input to further partitionwise joins.\n\nWe need to fix this.\n\nThanks\nRichard",
"msg_date": "Thu, 1 Aug 2019 19:14:44 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "Hello Richard,\n\n\nthanks for your quick reply.\n\n\n> We need to fix this.\n\n\nDo you have a better idea than just keeping the old quals - possibly just the ones that get eliminated - in a separate data structure? Is the push down of quals the only case of elimination of quals, only counting the ones which happen before the restrict lists are generated?\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Richard Guo <riguo@pivotal.io>\nSent: Thursday, August 1, 2019 1:14:44 PM\nTo: Arne Roland\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: Partial join\n\n\nOn Thu, Aug 1, 2019 at 5:38 PM Arne Roland <A.Roland@index.de> wrote:\nHello,\n\nI attached one example of a partitioned table with multi column partition key. I also attached the output.\nDisabling the hash_join is not really necessary, it just shows the more drastic result in the case of low work_mem.\n\nComparing the first and the second query I was surprised to see that SET enable_partitionwise_join could cause the costs to go up. Shouldn't the paths of the first query be generated as well?\n\nThe third query seems to have a different issue. That one is close to my original performance problem. It looks to me like the push down of the sl condition stops the optimizer considering a partial join.\nIf so would it be sane to keep a copy of the original quals to make the partial join possible? Do you have better ideas?\n\nFor the third query, a rough investigation shows that, the qual 'sl =\n5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\nimplied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\ndown to the base rels. One consequence of the deduction is when\nconstructing restrict lists for the joinrel, we lose the original\nrestrict 'sc.sl = sg.sl', and this would fail the check\nhave_partkey_equi_join(), which checks if there exists an equi-join\ncondition for each pair of partition keys. As a result, this joinrel\nwould not be considered as an input to further partitionwise joins.\n\nWe need to fix this.\n\nThanks\nRichard",
"msg_date": "Thu, 1 Aug 2019 11:46:08 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "Richard Guo <riguo@pivotal.io> writes:\n> For the third query, a rough investigation shows that, the qual 'sl =\n> 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> down to the base rels. One consequence of the deduction is when\n> constructing restrict lists for the joinrel, we lose the original\n> restrict 'sc.sl = sg.sl', and this would fail the check\n> have_partkey_equi_join(), which checks if there exists an equi-join\n> condition for each pair of partition keys. As a result, this joinrel\n> would not be considered as an input to further partitionwise joins.\n\n> We need to fix this.\n\nUh ... why? The pushed-down restrictions should result in pruning\naway any prunable partitions at the scan level, leaving nothing for\nthe partitionwise join code to do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 10:14:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> Uh ... why? The pushed-down restrictions should result in pruning\n> away any prunable partitions at the scan level, leaving nothing for\n> the partitionwise join code to do.\n\nIt seems reasonable to me that the join condition can no longer be verified, since 'sc.sl = sg.sl' is now replaced by 'sg.sl = 5'.\n\nIt's true that the pruning would prune everything but one partition, in case we'd just have a single column partition key. But we don't. I don't see how pruning partitions should help in this case, since we are left with multiple partitions for both relations.\n\nRegards\nArne\n\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Thursday, August 1, 2019 4:14:54 PM\nTo: Richard Guo\nCc: Arne Roland; pgsql-hackers@lists.postgresql.org\nSubject: Re: Partial join\n\nRichard Guo <riguo@pivotal.io> writes:\n> For the third query, a rough investigation shows that, the qual 'sl =\n> 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> down to the base rels. One consequence of the deduction is when\n> constructing restrict lists for the joinrel, we lose the original\n> restrict 'sc.sl = sg.sl', and this would fail the check\n> have_partkey_equi_join(), which checks if there exists an equi-join\n> condition for each pair of partition keys. As a result, this joinrel\n> would not be considered as an input to further partitionwise joins.\n\n> We need to fix this.\n\nUh ... why? The pushed-down restrictions should result in pruning\naway any prunable partitions at the scan level, leaving nothing for\nthe partitionwise join code to do.\n\n regards, tom lane",
"msg_date": "Thu, 1 Aug 2019 16:29:21 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 10:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <riguo@pivotal.io> writes:\n> > For the third query, a rough investigation shows that, the qual 'sl =\n> > 5' and 'sc.sl = sg.sl' will form an equivalence class and generate two\n> > implied equalities: 'sc.sl = 5' and 'sg.sl = 5', which can be pushed\n> > down to the base rels. One consequence of the deduction is when\n> > constructing restrict lists for the joinrel, we lose the original\n> > restrict 'sc.sl = sg.sl', and this would fail the check\n> > have_partkey_equi_join(), which checks if there exists an equi-join\n> > condition for each pair of partition keys. As a result, this joinrel\n> > would not be considered as an input to further partitionwise joins.\n>\n> > We need to fix this.\n>\n> Uh ... why? The pushed-down restrictions should result in pruning\n> away any prunable partitions at the scan level, leaving nothing for\n> the partitionwise join code to do.\n>\n\nHmm. In the case of multiple partition keys, for range partitioning, if\nwe have no clauses for a given key, any later keys would not be\nconsidered for partition pruning.\n\nThat is to say, for table 'p partition by range (k1, k2)', quals like\n'k2 = Const' would not prune partitions.\n\nFor query:\n\nselect * from p as t1 join p as t2 on t1.k1 = t2.k1 and t1.k2 = t2.k2\nand t1.k2 = 2;\n\nSince we don't consider ECs containing consts when generating join\nclauses, we don't have restriction 't1.k2 = t2.k2' when building the\njoinrel. As a result, partitionwise join is not considered as it\nrequires there existing an equi-join condition for each pair of\npartition keys.\n\nIs this a problem? What's your opinion?\n\nThanks\nRichard",
"msg_date": "Fri, 2 Aug 2019 17:33:39 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 7:46 PM Arne Roland <A.Roland@index.de> wrote:\n\n> Hello Richard,\n>\n> thanks for your quick reply.\n>\n>\n> > We need to fix this.\n>\n>\n> Do you have a better idea than just keeping the old quals - possibly just\n> the ones that get eliminated - in a separate data structure? Is the push\n> down of quals the only case of elimination of quals, only counting the ones\n> which happen before the restrict lists are generated?\n>\nIn you case, the restriction 'sl = sl' is just not generated for the\njoin, because it forms an EC with const, which is not considered when\ngenerating join clauses.\n\nPlease refer to the code snippet below:\n\n@@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root,\n List *sublist = NIL;\n\n /* ECs containing consts do not need any further\nenforcement */\n if (ec->ec_has_const)\n continue;\n\nThanks\nRichard\n\nOn Thu, Aug 1, 2019 at 7:46 PM Arne Roland <A.Roland@index.de> wrote:\n\n\nHello Richard,\n\n\nthanks for your quick reply.\n\n\n\n> We need to fix this.\n\n\nDo you have a better idea than just keeping the old quals - possibly just the ones that get eliminated - in a separate data structure? Is the push down of quals the only case of elimination of quals, only counting the ones which happen before the restrict\n lists are generated?In you case, the restriction 'sl = sl' is just not generated for thejoin, because it forms an EC with const, which is not considered whengenerating join clauses.Please refer to the code snippet below:@@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root, List *sublist = NIL; /* ECs containing consts do not need any further enforcement */ if (ec->ec_has_const) continue;ThanksRichard",
"msg_date": "Fri, 2 Aug 2019 18:00:01 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Partial join"
},
{
"msg_contents": "Richard Guo <riguo@pivotal.io> wrote:\n> Please refer to the code snippet below:\n>\n> @@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root,\n> List *sublist = NIL;\n>\n> /* ECs containing consts do not need any further enforcement */\n> if (ec->ec_has_const)\n> continue;\n\nSorry, I'm quite busy currently. And thanks! That was a good read.\n\nI might be wrong, but I think have_partkey_equi_join in joinrels.c should be aware of the const case. My naive approach would be keeping pointers to the first few constant clauses, which are referencing to a yet unmatched partition key, to keep the memory footprint feasible in manner similar to pk_has_clause. The question would be what to do, if there are a lot of const expressions on the part keys. One could palloc additional memory in that case, hoping that it will be quite rare. Or is there a different, better way to go about that?\nThank you for your feedback!\n\nRegards\nArne\n\n\n\n\n\n\n\n\n\nRichard Guo <riguo@pivotal.io> wrote:\n\n\n> Please refer to the code snippet below:\n> \n\n> @@ -1164,8 +1164,8 @@ generate_join_implied_equalities(PlannerInfo *root,\n> List *sublist = NIL;\n> \n> /* ECs containing consts do not need any further enforcement */\n> if (ec->ec_has_const)\n> continue;\n\n\n\nSorry, I'm quite busy currently. And thanks! That was a good read.\n\n\n\nI might be wrong, but I think have_partkey_equi_join in\njoinrels.c should be aware of the const case. My naive approach would be keeping pointers to the first few constant clauses, which are referencing to a yet unmatched partition key, to keep the memory footprint feasible in manner similar to pk_has_clause.\n The question would be what to do, if there are a lot of const expressions on the part keys. One could palloc additional memory in that case, hoping that it will be quite rare. Or is there a different, better way to go about that?\nThank you for your feedback!\n\n\n\nRegards\nArne",
"msg_date": "Mon, 19 Aug 2019 15:17:21 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Partial join"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile reviewing some code around pg_lsn_in() I came across a couple of\n(potential?) issues:\n\n1.\nCommit 21f428eb moves lsn conversion functionality from pg_lsn_in() to a new\nfunction pg_lsn_in_internal(). It takes two parameters the lsn string and a\npointer to a boolean (*have_error) to indicate if there was an error while\nconverting string format to XLogRecPtr.\n\npg_lsn_in_internal() only sets the *have_error to 'true' if there is an\nerror,\nbut leaves it for the caller to make sure it was passed by initializing as\n'false'. Currently it is only getting called from pg_lsn_in() and\ntimestamptz_in()\nwhere it has been taken care that the flag is set to false before making the\ncall. But I think in general it opens the door for unpredictable bugs if\npg_lsn_in_internal() gets called from other locations in future (if need\nmaybe) and by mistake, it just checks the return value of the flag without\nsetting it to false before making a call.\n\nI am attaching a patch that makes sure that *have_error is set to false in\npg_lsn_in_internal() itself, rather than being caller dependent.\n\nAlso, I think there might be callers who may not care if there had been an\nerror\nwhile converting and just ok to use InvalidXLogRecPtr against return value,\nand\nmay pass just a NULL boolean pointer to this function, but for now, I have\nleft\nthat untouched. Maybe just adding an Assert would improve the situation for\ntime being.\n\nI have attached a patch (fix_have_error_flag.patch) to take care of above.\n\n2.\nI happened to peep in test case pg_lsn.sql, and I started exploring the\nmacros\naround lsn.\n\nFollowing macros:\n\n{code}\n/*\n * Zero is used indicate an invalid pointer. 
Bootstrap skips the first\npossible\n * WAL segment, initializing the first WAL page at WAL segment size, so no\nXLOG\n * record can begin at zero.\n\n */\n#define InvalidXLogRecPtr 0\n#define XLogRecPtrIsInvalid(r) ((r) == InvalidXLogRecPtr)\n{code}\n\nIIUC, in the comment above we clearly want to mark 0 as an invalid lsn (also\nfurther IIUC the comment states - lsn would start from (walSegSize + 1)).\nGiven\nthis, should not it be invalid to allow \"0/0\" as the value of type pg_lsn,\nor\nfor that matter any number < walSegSize?\n\nThere is a test scenario in test case pg_lsn.sql which tests insertion of\n\"0/0\"\nin a table having a pg_lsn column. I think this is contradictory to the\ncomment.\n\nI am not sure of thought behind this and might be wrong while making the\nabove\nassumption. But, I tried to look around a bit in hackers emails and could\nnot\nlocate any related discussion.\n\nI have attached a patch (mark_lsn_0_invalid.patch) that makes above changes.\n\nThoughts?\n\nRegards,\nJeevan Ladhe",
"msg_date": "Mon, 29 Jul 2019 22:55:29 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "concerns around pg_lsn"
},
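The out-parameter convention proposed in the message above — the callee resets *have_error itself instead of trusting the caller to initialize it — can be sketched in plain C. This is an illustrative stand-in, not PostgreSQL's actual pg_lsn_in_internal(); the type alias and function name are invented for the example, and "%X/%X" mirrors pg_lsn's text format:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for PostgreSQL's XLogRecPtr; not the real pg code. */
typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr ((XLogRecPtr) 0)

/*
 * Minimal sketch of the defensive convention: the parsing routine itself
 * resets *have_error to false on entry, so a caller that forgot to
 * initialize the flag cannot read garbage from it afterwards.
 */
static XLogRecPtr
lsn_in_internal(const char *str, bool *have_error)
{
    unsigned int hi;
    unsigned int lo;

    *have_error = false;        /* never leave the out parameter unset */

    if (sscanf(str, "%X/%X", &hi, &lo) != 2)
    {
        *have_error = true;     /* malformed input: report via the flag */
        return InvalidXLogRecPtr;
    }
    return ((XLogRecPtr) hi << 32) | lo;
}
```

Note that "0/0" parses successfully here to the same value as InvalidXLogRecPtr with the flag left false — exactly the ambiguity discussed later in this thread.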
{
"msg_contents": "On Mon, Jul 29, 2019 at 10:55:29PM +0530, Jeevan Ladhe wrote:\n> I am attaching a patch that makes sure that *have_error is set to false in\n> pg_lsn_in_internal() itself, rather than being caller dependent.\n\nAgreed about making the code more defensive as you do. I would keep\nthe initialization in check_recovery_target_lsn and pg_lsn_in_internal\nthough. That does not hurt and makes the code easier to understand,\naka we don't expect an error by default in those paths.\n\n> IIUC, in the comment above we clearly want to mark 0 as an invalid lsn (also\n> further IIUC the comment states - lsn would start from (walSegSize + 1)).\n> Given this, should not it be invalid to allow \"0/0\" as the value of\n> type pg_lsn, or for that matter any number < walSegSize?\n\nYou can rely on \"0/0\" as a base point to calculate the offset in a\nsegment, so my guess is that we could break applications by generating\nan error. Please note that the behavior is much older than the\nintroduction of pg_lsn, as the original parsing logic has been removed\nin 6f289c2b with validate_xlog_location() in xlogfuncs.c. \n--\nMichael",
"msg_date": "Tue, 30 Jul 2019 13:12:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "Hi Michael,\n\nThanks for your inputs, really appreciate.\n\nOn Tue, Jul 30, 2019 at 9:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jul 29, 2019 at 10:55:29PM +0530, Jeevan Ladhe wrote:\n> > I am attaching a patch that makes sure that *have_error is set to false\n> in\n> > pg_lsn_in_internal() itself, rather than being caller dependent.\n>\n> Agreed about making the code more defensive as you do. I would keep\n> the initialization in check_recovery_target_lsn and pg_lsn_in_internal\n> though. That does not hurt and makes the code easier to understand,\n> aka we don't expect an error by default in those paths.\n>\n\nSure, understood. I am ok with this.\n\n> IIUC, in the comment above we clearly want to mark 0 as an invalid lsn\n> (also\n> > further IIUC the comment states - lsn would start from (walSegSize + 1)).\n> > Given this, should not it be invalid to allow \"0/0\" as the value of\n> > type pg_lsn, or for that matter any number < walSegSize?\n>\n> You can rely on \"0/0\" as a base point to calculate the offset in a\n> segment, so my guess is that we could break applications by generating\n> an error.\n\n\nAgree that it may break the applications.\n\nPlease note that the behavior is much older than the\n> introduction of pg_lsn, as the original parsing logic has been removed\n> in 6f289c2b with validate_xlog_location() in xlogfuncs.c.\n>\n\nMy only concern was something that we internally treat as invalid, why do\nwe allow, that as a valid value for that type. 
While I am not trying to\nreinvent the wheel here, I am trying to understand if there had been any\nidea behind this and I am missing it.\n\nRegards,\nJeevan Ladhe\n\nHi Michael,Thanks for your inputs, really appreciate.On Tue, Jul 30, 2019 at 9:42 AM Michael Paquier <michael@paquier.xyz> wrote:On Mon, Jul 29, 2019 at 10:55:29PM +0530, Jeevan Ladhe wrote:> I am attaching a patch that makes sure that *have_error is set to false in> pg_lsn_in_internal() itself, rather than being caller dependent.\nAgreed about making the code more defensive as you do. I would keepthe initialization in check_recovery_target_lsn and pg_lsn_in_internalthough. That does not hurt and makes the code easier to understand,aka we don't expect an error by default in those paths.Sure, understood. I am ok with this.> IIUC, in the comment above we clearly want to mark 0 as an invalid lsn (also> further IIUC the comment states - lsn would start from (walSegSize + 1)).> Given this, should not it be invalid to allow \"0/0\" as the value of> type pg_lsn, or for that matter any number < walSegSize?\nYou can rely on \"0/0\" as a base point to calculate the offset in asegment, so my guess is that we could break applications by generatingan error. Agree that it may break the applications.Please note that the behavior is much older than theintroduction of pg_lsn, as the original parsing logic has been removedin 6f289c2b with validate_xlog_location() in xlogfuncs.c. My only concern was something that we internally treat as invalid, why dowe allow, that as a valid value for that type. While I am not trying toreinvent the wheel here, I am trying to understand if there had been anyidea behind this and I am missing it.Regards,Jeevan Ladhe",
"msg_date": "Tue, 30 Jul 2019 14:22:30 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 4:52 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> My only concern was something that we internally treat as invalid, why do\n> we allow, that as a valid value for that type. While I am not trying to\n> reinvent the wheel here, I am trying to understand if there had been any\n> idea behind this and I am missing it.\n\nWell, the word \"invalid\" can mean more than one thing. Something can\nbe valid or invalid depending on context. I can't have -2 dollars in\nmy wallet, but I could have -2 dollars in my bank account, because the\nbank will allow me to pay out slightly more money than I actually have\non the idea that I will pay them back later (and with interest!). So\nas an amount of money in my wallet, -2 is invalid, but as an amount of\nmoney in my bank account, it is valid.\n\n0/0 is not a valid LSN in the sense that (in current releases) we\nnever write a WAL record there, but it's OK to compute with it.\nSubtracting '0/0'::pg_lsn seems useful as a way to convert an LSN to\nan absolute byte-index in the WAL stream.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Jul 2019 08:36:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
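Robert's point — that subtracting '0/0'::pg_lsn yields an absolute byte index — follows from an LSN being a 64-bit byte position in the WAL stream, so offset math is plain unsigned arithmetic. A minimal C sketch of that arithmetic (the type alias and helper names are illustrative, not PostgreSQL code; the subtraction helper models the existing operator(-)(pg_lsn,pg_lsn), the addition helper the operator Craig proposes later in this thread):

```c
#include <stdint.h>

/* Hypothetical stand-in for pg_lsn's internal representation. */
typedef uint64_t XLogRecPtr;

/* Byte distance between two WAL positions. */
static uint64_t
lsn_diff(XLogRecPtr a, XLogRecPtr b)
{
    return a - b;
}

/* Advance a WAL position by a byte offset. */
static XLogRecPtr
lsn_add(XLogRecPtr lsn, uint64_t nbytes)
{
    return lsn + nbytes;
}

/* Subtracting '0/0' converts an LSN into an absolute byte index. */
static uint64_t
lsn_byte_index(XLogRecPtr lsn)
{
    return lsn_diff(lsn, 0);
}
```

For example, the LSN printed as 16/B374D848 is byte 0x16B374D848 of the WAL stream.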
{
"msg_contents": "On Tue, Jul 30, 2019 at 02:22:30PM +0530, Jeevan Ladhe wrote:\n> On Tue, Jul 30, 2019 at 9:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Agreed about making the code more defensive as you do. I would keep\n>> the initialization in check_recovery_target_lsn and pg_lsn_in_internal\n>> though. That does not hurt and makes the code easier to understand,\n>> aka we don't expect an error by default in those paths.\n>>\n> \n> Sure, understood. I am ok with this.\n\nI am adding Peter Eisentraut in CC as 21f428e is his commit. I think\nthat the first patch is a good idea, so I would be fine to apply it,\nbut let's see the original committer's opinion first.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 09:51:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 6:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 30, 2019 at 4:52 AM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > My only concern was something that we internally treat as invalid, why do\n> > we allow, that as a valid value for that type. While I am not trying to\n> > reinvent the wheel here, I am trying to understand if there had been any\n> > idea behind this and I am missing it.\n>\n> Well, the word \"invalid\" can mean more than one thing. Something can\n> be valid or invalid depending on context. I can't have -2 dollars in\n> my wallet, but I could have -2 dollars in my bank account, because the\n> bank will allow me to pay out slightly more money than I actually have\n> on the idea that I will pay them back later (and with interest!). So\n> as an amount of money in my wallet, -2 is invalid, but as an amount of\n> money in my bank account, it is valid.\n>\n> 0/0 is not a valid LSN in the sense that (in current releases) we\n> never write a WAL record there, but it's OK to compute with it.\n> Subtracting '0/0'::pg_lsn seems useful as a way to convert an LSN to\n> an absolute byte-index in the WAL stream.\n>\n\nThanks Robert for such a nice and detailed explanation.\nI now understand why LSN '0/0' can still be useful.\n\nRegards,\nJeevan Ladhe\n\nOn Tue, Jul 30, 2019 at 6:06 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jul 30, 2019 at 4:52 AM Jeevan Ladhe<jeevan.ladhe@enterprisedb.com> wrote:> My only concern was something that we internally treat as invalid, why do> we allow, that as a valid value for that type. While I am not trying to> reinvent the wheel here, I am trying to understand if there had been any> idea behind this and I am missing it.\nWell, the word \"invalid\" can mean more than one thing. Something canbe valid or invalid depending on context. 
I can't have -2 dollars inmy wallet, but I could have -2 dollars in my bank account, because thebank will allow me to pay out slightly more money than I actually haveon the idea that I will pay them back later (and with interest!). Soas an amount of money in my wallet, -2 is invalid, but as an amount ofmoney in my bank account, it is valid.\n0/0 is not a valid LSN in the sense that (in current releases) wenever write a WAL record there, but it's OK to compute with it.Subtracting '0/0'::pg_lsn seems useful as a way to convert an LSN toan absolute byte-index in the WAL stream.Thanks Robert for such a nice and detailed explanation.I now understand why LSN '0/0' can still be useful.Regards,Jeevan Ladhe",
"msg_date": "Wed, 31 Jul 2019 17:26:42 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 09:51:30AM +0900, Michael Paquier wrote:\n> I am adding Peter Eisentraut in CC as 21f428e is his commit. I think\n> that the first patch is a good idea, so I would be fine to apply it,\n> but let's see the original committer's opinion first.\n\nOn further review of the first patch, I think that it could be a good\nidea to apply the same safeguards within float8in_internal_opt_error.\nJeevan, what do you think?\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 09:46:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On the topic of pg_lsn, I recently noticed that there's no\noperator(+)(pg_lsn,bigint) nor is there an operator(-)(pg_lsn,bigint) so\nyou can't compute offsets easily. We don't have a cast between pg_lsn and\nbigint because we don't expose an unsigned bigint type in SQL, so you can't\nwork around it that way.\n\nI may be missing the obvious, but I suggest (and will follow with a patch\nfor) adding + and - operators for computing offsets. I was considering an\nage() function for it too, but I think it's best to force the user to be\nclear about what current LSN they want to compare with so I'll skip that.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn the topic of pg_lsn, I recently noticed that there's no operator(+)(pg_lsn,bigint) nor is there an operator(-)(pg_lsn,bigint) so you can't compute offsets easily. We don't have a cast between pg_lsn and bigint because we don't expose an unsigned bigint type in SQL, so you can't work around it that way.I may be missing the obvious, but I suggest (and will follow with a patch for) adding + and - operators for computing offsets. I was considering an age() function for it too, but I think it's best to force the user to be clear about what current LSN they want to compare with so I'll skip that.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 1 Aug 2019 10:45:50 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "Hi Michael,\n\n\n> On further review of the first patch, I think that it could be a good\n> idea to apply the same safeguards within float8in_internal_opt_error.\n> Jeevan, what do you think?\n>\n\nSure, agree, it makes sense to address float8in_internal_opt_error(),\nthere might be more occurrences of such instances in other functions\nas well. I think if we agree, as and when encounter them while touching\nthose areas we should fix them.\n\nWhat is more dangerous with float8in_internal_opt_error() is, it has\nthe have_error flag, which is never ever set or used in that function.\nFurther\nmore risks are - the callers of this function e.g.\nexecuteItemOptUnwrapTarget()\nare passing a non-null pointer to it(default set to false) and expect to\nthrow\nan error if it sees some error during float8in_internal_opt_error(), *but*\nfloat8in_internal_opt_error() has actually never touched the have_error\nflag.\nSo, in this case it is fine because the flag was set to false, if it was not\nset, then the garbage value would always result in true and keep on throwing\nan error!\nHere is relevant code from function executeItemOptUnwrapTarget():\n\n{code}\n 975 if (jb->type == jbvNumeric)\n 976 {\n 977 char *tmp =\nDatumGetCString(DirectFunctionCall1(numeric_out,\n 978\n NumericGetDatum(jb->val.numeric)));\n 979 bool have_error = false;\n 980\n 981 (void) float8in_internal_opt_error(tmp,\n 982 NULL,\n 983 \"double\nprecision\",\n 984 tmp,\n 985 &have_error);\n 986\n 987 if (have_error)\n 988 RETURN_ERROR(ereport(ERROR,\n 989\n (errcode(ERRCODE_NON_NUMERIC_JSON_ITEM),\n 990 errmsg(\"jsonpath item\nmethod .%s() can only be applied to a numeric value\",\n 991\n jspOperationName(jsp->type)))));\n 992 res = jperOk;\n 993 }\n 994 else if (jb->type == jbvString)\n 995 {\n 996 /* cast string as double */\n 997 double val;\n 998 char *tmp = pnstrdup(jb->val.string.val,\n 999 jb->val.string.len);\n1000 bool have_error = false;\n1001\n1002 val = 
float8in_internal_opt_error(tmp,\n1003 NULL,\n1004 \"double\nprecision\",\n1005 tmp,\n1006 &have_error);\n1007\n1008 if (have_error || isinf(val))\n1009 RETURN_ERROR(ereport(ERROR,\n1010\n (errcode(ERRCODE_NON_NUMERIC_JSON_ITEM),\n1011 errmsg(\"jsonpath item\nmethod .%s() can only be applied to a numeric value\",\n1012\n jspOperationName(jsp->type)))));\n1013\n1014 jb = &jbv;\n1015 jb->type = jbvNumeric;\n1016 jb->val.numeric =\nDatumGetNumeric(DirectFunctionCall1(float8_numeric,\n1017\n Float8GetDatum(val)));\n1018 res = jperOk;\n1019 }\n{code}\n\nI will further check if by mistake any further commits have removed\nreferences\nto assignments from float8in_internal_opt_error(), evaluate it, and set out\na\npatch.\n\nThis is one of the reason, I was saying it can be taken as a good practice\nto\nlet the function who is accepting an out parameter sets the value for sure\nto\nsome or other value.\n\nRegards,\nJeevan Ladhe\n\nHi Michael, On further review of the first patch, I think that it could be a goodidea to apply the same safeguards within float8in_internal_opt_error.Jeevan, what do you think?Sure, agree, it makes sense to address float8in_internal_opt_error(),there might be more occurrences of such instances in other functionsas well. I think if we agree, as and when encounter them while touchingthose areas we should fix them.What is more dangerous with float8in_internal_opt_error() is, it hasthe have_error flag, which is never ever set or used in that function. Furthermore risks are - the callers of this function e.g. 
executeItemOptUnwrapTarget()are passing a non-null pointer to it(default set to false) and expect to throwan error if it sees some error during float8in_internal_opt_error(), *but*float8in_internal_opt_error() has actually never touched the have_error flag.So, in this case it is fine because the flag was set to false, if it was notset, then the garbage value would always result in true and keep on throwingan error!Here is relevant code from function executeItemOptUnwrapTarget():{code} 975 if (jb->type == jbvNumeric) 976 { 977 char *tmp = DatumGetCString(DirectFunctionCall1(numeric_out, 978 NumericGetDatum(jb->val.numeric))); 979 bool have_error = false; 980 981 (void) float8in_internal_opt_error(tmp, 982 NULL, 983 \"double precision\", 984 tmp, 985 &have_error); 986 987 if (have_error) 988 RETURN_ERROR(ereport(ERROR, 989 (errcode(ERRCODE_NON_NUMERIC_JSON_ITEM), 990 errmsg(\"jsonpath item method .%s() can only be applied to a numeric value\", 991 jspOperationName(jsp->type))))); 992 res = jperOk; 993 } 994 else if (jb->type == jbvString) 995 { 996 /* cast string as double */ 997 double val; 998 char *tmp = pnstrdup(jb->val.string.val, 999 jb->val.string.len);1000 bool have_error = false;1001 1002 val = float8in_internal_opt_error(tmp,1003 NULL,1004 \"double precision\",1005 tmp,1006 &have_error);1007 1008 if (have_error || isinf(val))1009 RETURN_ERROR(ereport(ERROR,1010 (errcode(ERRCODE_NON_NUMERIC_JSON_ITEM),1011 errmsg(\"jsonpath item method .%s() can only be applied to a numeric value\",1012 jspOperationName(jsp->type)))));1013 1014 jb = &jbv;1015 jb->type = jbvNumeric;1016 jb->val.numeric = DatumGetNumeric(DirectFunctionCall1(float8_numeric,1017 Float8GetDatum(val)));1018 res = jperOk;1019 }{code}I will further check if by mistake any further commits have removed referencesto assignments from float8in_internal_opt_error(), evaluate it, and set out apatch.This is one of the reason, I was saying it can be taken as a good practice tolet the function who is 
accepting an out parameter sets the value for sure tosome or other value.Regards,Jeevan Ladhe",
"msg_date": "Thu, 1 Aug 2019 11:14:32 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "Hi Michael,\n\nWhat is more dangerous with float8in_internal_opt_error() is, it has\n> the have_error flag, which is never ever set or used in that function.\n> Further\n> more risks are - the callers of this function e.g.\n> executeItemOptUnwrapTarget()\n> are passing a non-null pointer to it(default set to false) and expect to\n> throw\n> an error if it sees some error during float8in_internal_opt_error(), *but*\n> float8in_internal_opt_error() has actually never touched the have_error\n> flag.\n>\n\nMy bad, I see there's this macro call in float8in_internal_opt_error() and\nthat\nset the flag:\n\n{code}\n#define RETURN_ERROR(throw_error) \\\ndo { \\\n if (have_error) { \\\n *have_error = true; \\\n return 0.0; \\\n } else { \\\n throw_error; \\\n } \\\n} while (0)\n{code}\n\nMy patch on way, thanks.\n\nRegards,\nJeevan Ladhe\n\nHi Michael,What is more dangerous with float8in_internal_opt_error() is, it hasthe have_error flag, which is never ever set or used in that function. Furthermore risks are - the callers of this function e.g. executeItemOptUnwrapTarget()are passing a non-null pointer to it(default set to false) and expect to throwan error if it sees some error during float8in_internal_opt_error(), *but*float8in_internal_opt_error() has actually never touched the have_error flag.My bad, I see there's this macro call in float8in_internal_opt_error() and thatset the flag:{code}#define RETURN_ERROR(throw_error) \\do { \\ if (have_error) { \\ *have_error = true; \\ return 0.0; \\ } else { \\ throw_error; \\ } \\} while (0){code}My patch on way, thanks.Regards,Jeevan Ladhe",
"msg_date": "Thu, 1 Aug 2019 11:29:54 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
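The RETURN_ERROR macro quoted above implements a "soft error" path: with a non-NULL have_error the callee reports failure through the flag and returns a dummy value; with a NULL flag it escalates to a hard error. A compilable sketch of that pattern, with an invented parser standing in for float8in_internal_opt_error() and exit() standing in for ereport(ERROR):

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch of PostgreSQL's soft-error macro.  Like the original, it picks
 * up the local variable `have_error` implicitly from the enclosing scope.
 */
#define RETURN_ERROR(throw_error)   \
    do {                            \
        if (have_error) {           \
            *have_error = true;     \
            return 0.0;             \
        } else {                    \
            throw_error;            \
        }                           \
    } while (0)

/* Invented example parser: rejects strings with trailing garbage. */
static double
strict_strtod(const char *str, bool *have_error)
{
    char   *end;
    double  val;

    if (have_error)
        *have_error = false;    /* defensive reset, per the thread */

    val = strtod(str, &end);
    if (end == str || *end != '\0')
        RETURN_ERROR((fprintf(stderr, "invalid float: \"%s\"\n", str), exit(1)));
    return val;
}
```

The implicit capture of `have_error` inside the macro is the very thing Álvaro objects to later in this thread; passing the flag as an explicit macro argument would make the dependency visible at each call site.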
{
"msg_contents": "On Thu, Aug 01, 2019 at 11:14:32AM +0530, Jeevan Ladhe wrote:\n> Sure, agree, it makes sense to address float8in_internal_opt_error(),\n> there might be more occurrences of such instances in other functions\n> as well. I think if we agree, as and when encounter them while touching\n> those areas we should fix them.\n\nI have spotted a third area within make_result_opt_error in numeric.c\nwhich could gain readability by initializing have_error if the pointer\nis defined.\n\n> I will further check if by mistake any further commits have removed\n> references to assignments from float8in_internal_opt_error(),\n> evaluate it, and set out a patch.\n\nThanks, Jeevan!\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 15:01:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "Hi Michael,\n\n> I will further check if by mistake any further commits have removed\n> > references to assignments from float8in_internal_opt_error(),\n> > evaluate it, and set out a patch.\n>\n> Thanks, Jeevan!\n>\n\nHere is a patch that takes care of addressing the flag issue including\npg_lsn_in_internal() and others.\n\nI have further also fixed couple of other functions,\nnumeric_div_opt_error() and\nnumeric_mod_opt_error() which are basically callers of\nmake_result_opt_error().\n\nKindly do let me know if you have any comments.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Thu, 1 Aug 2019 12:39:26 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 12:39:26PM +0530, Jeevan Ladhe wrote:\n> Here is a patch that takes care of addressing the flag issue including\n> pg_lsn_in_internal() and others.\n\nYour original patch for pg_lsn_in_internal() was right IMO, and the\nnew one is not. In the numeric and float code paths, we have this\nkind of pattern:\nif (have_error)\n{\n *have_error = true;\n return;\n}\nelse\n elog(ERROR, \"Boom. Show is over.\");\n\nBut the pg_lsn.c portion does not have that. have_error cannot be\nNULL or the caller may fall into the trap of setting it to NULL and\nmiss some errors at parsing-time. So I think that keeping the\nassertion on (have_error != NULL) is necessary.\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 17:21:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "Hi Michael,\n\nOn Thu, Aug 1, 2019 at 1:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Aug 01, 2019 at 12:39:26PM +0530, Jeevan Ladhe wrote:\n> > Here is a patch that takes care of addressing the flag issue including\n> > pg_lsn_in_internal() and others.\n>\n> Your original patch for pg_lsn_in_internal() was right IMO, and the\n> new one is not. In the numeric and float code paths, we have this\n> kind of pattern:\n> if (have_error)\n> {\n> *have_error = true;\n> return;\n> }\n> else\n> elog(ERROR, \"Boom. Show is over.\");\n>\n> But the pg_lsn.c portion does not have that. have_error cannot be\n> NULL or the caller may fall into the trap of setting it to NULL and\n> miss some errors at parsing-time. So I think that keeping the\n> assertion on (have_error != NULL) is necessary.\n>\n\nThanks for your concern.\n\nIn pg_lsn_in_internal() changes, the caller will get the invalid lsn\nin case there are errors:\n\n{code}\n if (len1 < 1 || len1 > MAXPG_LSNCOMPONENT || str[len1] != '/')\n {\n if (have_error)\n *have_error = true;\n\n return InvalidXLogRecPtr;\n }\n{code}\n\nThe only thing is that, if the caller cares about the error during\nthe parsing or not. For some callers just making sure if the given\nstring was valid lsn or not might be ok, and the return value\n'InvalidXLogRecPtr' will tell that. That caller may not unnecessary\ndeclare the flag and pass a pointer to it.\n\nRegards,\nJeevan Ladhe\n\nHi Michael,On Thu, Aug 1, 2019 at 1:51 PM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Aug 01, 2019 at 12:39:26PM +0530, Jeevan Ladhe wrote:> Here is a patch that takes care of addressing the flag issue including> pg_lsn_in_internal() and others.\nYour original patch for pg_lsn_in_internal() was right IMO, and thenew one is not. In the numeric and float code paths, we have thiskind of pattern:if (have_error){ *have_error = true; return;}else elog(ERROR, \"Boom. Show is over.\");\nBut the pg_lsn.c portion does not have that. 
have_error cannot beNULL or the caller may fall into the trap of setting it to NULL andmiss some errors at parsing-time. So I think that keeping theassertion on (have_error != NULL) is necessary.Thanks for your concern.In pg_lsn_in_internal() changes, the caller will get the invalid lsnin case there are errors:{code} if (len1 < 1 || len1 > MAXPG_LSNCOMPONENT || str[len1] != '/') { if (have_error) *have_error = true; return InvalidXLogRecPtr; }{code}The only thing is that, if the caller cares about the error duringthe parsing or not. For some callers just making sure if the givenstring was valid lsn or not might be ok, and the return value'InvalidXLogRecPtr' will tell that. That caller may not unnecessarydeclare the flag and pass a pointer to it.Regards,Jeevan Ladhe",
"msg_date": "Thu, 1 Aug 2019 14:10:08 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 02:10:08PM +0530, Jeevan Ladhe wrote:\n> The only thing is that, if the caller cares about the error during\n> the parsing or not.\n\nThat's where the root of the problem is. We should really make things\nso as the caller of this routine cares about errors. With your patch\na caller could do pg_lsn_in_internal('G/G', NULL), and then get\nInvalidXLogRecPtr which is plain wrong. It is true that a caller may\nnot care about the error, but the idea is to make callers *think*\nabout the error case when they implement something and decide if it is\nvalid or not. The float and numeric code paths do that, not pg_lsn\nwith this patch. It would actually be fine to move ereport(ERROR)\nfrom pg_lsn_in to pg_lsn_in_internal and trigger these if have_error\nis NULL, but that means a duplication and the code is simple.\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 18:12:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "Sure Michael, in the attached patch I have reverted the checks from\npg_lsn_in_internal() and added Assert() per my original patch.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Sun, 4 Aug 2019 09:11:09 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On 2019-Aug-04, Jeevan Ladhe wrote:\n\n> Sure Michael, in the attached patch I have reverted the checks from\n> pg_lsn_in_internal() and added Assert() per my original patch.\n\nCan we please change the macro definition so that have_error is one of\nthe arguments? Having the variable be used inside the macro definition\nbut not appear literally in the call is quite confusing.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 3 Aug 2019 23:57:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Sat, Aug 03, 2019 at 11:57:01PM -0400, Alvaro Herrera wrote:\n> Can we please change the macro definition so that have_error is one of\n> the arguments? Having the variable be used inside the macro definition\n> but not appear literally in the call is quite confusing.\n\nGood idea. This needs some changes only in float.c.\n--\nMichael",
"msg_date": "Sun, 4 Aug 2019 15:43:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Sun, Aug 4, 2019 at 12:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Aug 03, 2019 at 11:57:01PM -0400, Alvaro Herrera wrote:\n> > Can we please change the macro definition so that have_error is one of\n> > the arguments? Having the variable be used inside the macro definition\n> > but not appear literally in the call is quite confusing.\n>\n\nCan't agree more. This is where I also got confused initially and thought\nthe flag is unused.\n\nGood idea. This needs some changes only in float.c.\n\n\nPlease find attached patch with the changes to RETURN_ERROR and\nits references in float.c\n\nRegards,\nJeevan Ladhe",
"msg_date": "Mon, 5 Aug 2019 09:15:02 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 09:15:02AM +0530, Jeevan Ladhe wrote:\n> Please find attached patch with the changes to RETURN_ERROR and\n> it's references in float.c\n\nThanks. Committed after applying some tweaks to it. I have noticed\nthat you forgot numeric_int4_opt_error() in the set.\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 15:36:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: concerns around pg_lsn"
},
{
"msg_contents": ">\n> Thanks. Committed after applying some tweaks to it. I have noticed\n> that you forgot numeric_int4_opt_error() in the set.\n\n\nOops. Thanks for the commit, Michael.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Mon, 5 Aug 2019 12:25:46 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: concerns around pg_lsn"
}
] |
[
{
"msg_contents": "Hi,\n\nIn contrib/pgcrypto/pgp.c we have a struct member int_name in digest_info which\nisn’t used, and seems to have never been used (a potential copy/pasteo from the\ncipher_info struct?). Is there a reason for keeping this, or can it be removed\nas per the attached?\n\ncheers ./daniel",
"msg_date": "Tue, 30 Jul 2019 17:48:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Unused struct member in pgcrypto pgp.c"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 05:48:49PM +0200, Daniel Gustafsson wrote:\n> In contrib/pgcrypto/pgp.c we have a struct member int_name in digest_info which\n> isn’t used, and seems to have never been used (a potential copy/pasteo from the\n> cipher_info struct?). Is there a reason for keeping this, or can it be removed\n> as per the attached?\n\nI don't see one as this is not used in any logic for the digest\nlookups. So agreed and applied. This originally comes from e94dd6a.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 10:23:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unused struct member in pgcrypto pgp.c"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 9:19 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> Hi,\n>\n> In contrib/pgcrypto/pgp.c we have a struct member int_name in digest_info which\n> isn’t used, and seems to have never been used (a potential copy/pasteo from the\n> cipher_info struct?). Is there a reason for keeping this, or can it be removed\n> as per the attached?\n>\nAgreed.\nIt seems the member is not being used anywhere, only code and name members\nare being used in digest lookup.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:08:11 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused struct member in pgcrypto pgp.c"
}
] |
[
{
"msg_contents": "Hi,\n\nI've occasionally wished for a typesafe version of pg_printf() and other\nvarargs functions. The compiler warnings are nice, but also far from\ncomplete.\n\nHere's a somewhat crazy hack/prototype for how printf could get actual\nargument types. I'm far from certain it's worth pursuing this\nfurther... Nor the contrary.\n\nNote that this requires removing the parentheses from VA_ARGS_NARGS_'s\nresult (i.e. (N) -> N). To me those parens don't make much sense, we're\npretty much guaranteed to only ever have a number there.\n\nWith the following example e.g.\n\n\tmyprintf(\"boring fmt\", 1, 0.1, (char)'c', (void*)0, \"crazy stuff\");\n\tmyprintf(\"empty argument fmt\");\n\nyield\n\nformat string \"boring fmt\", 5 args\n\targ number 0 is of type int: 1\n\targ number 1 is of type double: 0.100000\n\targ number 2 is of type bool: true\n\targ number 3 is of type void*: (nil)\n\targ number 4 is of type char*: crazy stuff\nformat string \"empty argument fmt\", 0 args\n\n\nwhich'd obviously allow for error checking inside myprintf.\n\n\n#include \"c.h\"\n\n// hack pg version out of the way\n#undef printf\n\n// FIXME: This doesn't correctly work with zero arguments\n#define VA_ARGS_EACH(wrap, ...) \\\n\tVA_ARGS_EACH_EXPAND(VA_ARGS_NARGS(__VA_ARGS__)) (wrap, __VA_ARGS__)\n#define VA_ARGS_EACH_EXPAND(count) VA_ARGS_EACH_EXPAND_REALLY(VA_ARGS_EACH_INT_, count)\n#define VA_ARGS_EACH_EXPAND_REALLY(prefix, count) prefix##count\n\n#define VA_ARGS_EACH_INT_1(wrap, el1) wrap(el1)\n#define VA_ARGS_EACH_INT_2(wrap, el1, ...) wrap(el1), VA_ARGS_EACH_INT_1(wrap, __VA_ARGS__)\n#define VA_ARGS_EACH_INT_3(wrap, el1, ...) wrap(el1), VA_ARGS_EACH_INT_2(wrap, __VA_ARGS__)\n#define VA_ARGS_EACH_INT_4(wrap, el1, ...) wrap(el1), VA_ARGS_EACH_INT_3(wrap, __VA_ARGS__)\n#define VA_ARGS_EACH_INT_5(wrap, el1, ...) wrap(el1), VA_ARGS_EACH_INT_4(wrap, __VA_ARGS__)\n#define VA_ARGS_EACH_INT_6(wrap, el1, ...) 
wrap(el1), VA_ARGS_EACH_INT_5(wrap, __VA_ARGS__)\n#define VA_ARGS_EACH_INT_7(wrap, el1, ...) wrap(el1), VA_ARGS_EACH_INT_6(wrap, __VA_ARGS__)\n\n\ntypedef enum printf_arg_type\n{\n\tPRINTF_ARG_BOOL,\n\tPRINTF_ARG_CHAR,\n\tPRINTF_ARG_INT,\n\tPRINTF_ARG_DOUBLE,\n\tPRINTF_ARG_CHARP,\n\tPRINTF_ARG_VOIDP,\n} printf_arg_type;\n\ntypedef struct arginfo\n{\n\tprintf_arg_type tp;\n} arginfo;\n\n// hackfix empty argument case\n#define myprintf(...) myprintf_wrap(__VA_ARGS__, \"dummy\")\n#define myprintf_wrap(fmt, ... ) \\\n\tmyprintf_impl(fmt, VA_ARGS_NARGS(__VA_ARGS__) - 1, ((arginfo[]){ VA_ARGS_EACH(blurttype, __VA_ARGS__) }), __VA_ARGS__)\n\n// FIXME: Obviously not enough types\n#define blurttype(x) ((arginfo){_Generic(x, char: PRINTF_ARG_BOOL, int: PRINTF_ARG_INT, double: PRINTF_ARG_DOUBLE, char *: PRINTF_ARG_CHARP, void *: PRINTF_ARG_VOIDP)})\n\nstatic const char*\nprintf_arg_typename(printf_arg_type tp)\n{\n\tswitch (tp)\n\t{\n\t\tcase PRINTF_ARG_BOOL:\n\t\t\treturn \"bool\";\n\t\tcase PRINTF_ARG_CHAR:\n\t\t\treturn \"char\";\n\t\tcase PRINTF_ARG_INT:\n\t\t\treturn \"int\";\n\t\tcase PRINTF_ARG_DOUBLE:\n\t\t\treturn \"double\";\n\t\tcase PRINTF_ARG_CHARP:\n\t\t\treturn \"char*\";\n\t\tcase PRINTF_ARG_VOIDP:\n\t\t\treturn \"void*\";\n\t}\n\n\treturn \"\";\n}\n\nstatic void\nmyprintf_impl(char *fmt, size_t nargs, arginfo arg[], ...)\n{\n\tva_list args;\n\tva_start(args, arg);\n\n\tprintf(\"format string \\\"%s\\\", %zu args\\n\", fmt, nargs);\n\tfor (int argno = 0; argno < nargs; argno++)\n\t{\n\t\tprintf(\"\\targ number %d is of type %s: \",\n\t\t\t argno,\n\t\t\t printf_arg_typename(arg[argno].tp));\n\n\t\tswitch (arg[argno].tp)\n\t\t{\n\t\t\tcase PRINTF_ARG_BOOL:\n\t\t\t\tprintf(\"%s\", ((bool) va_arg(args, int)) ? 
\"true\" : \"false\");\n\t\t\t\tbreak;\n\t\t\tcase PRINTF_ARG_CHAR:\n\t\t\t\tprintf(\"%c\", (char) va_arg(args, int));\n\t\t\t\tbreak;\n\t\t\tcase PRINTF_ARG_INT:\n\t\t\t\tprintf(\"%d\", va_arg(args, int));\n\t\t\t\tbreak;\n\t\t\tcase PRINTF_ARG_DOUBLE:\n\t\t\t\tprintf(\"%f\", va_arg(args, double));\n\t\t\t\tbreak;\n\t\t\tcase PRINTF_ARG_CHARP:\n\t\t\t\tprintf(\"%s\", va_arg(args, char *));\n\t\t\t\tbreak;\n\t\t\tcase PRINTF_ARG_VOIDP:\n\t\t\t\tprintf(\"%p\", va_arg(args, void *));\n\t\t\t\tbreak;\n\t\t}\n\n\t\tprintf(\"\\n\");\n\t}\n}\n\nint main(int argc, char **argv)\n{\n\tmyprintf(\"boring fmt\", 1, 0.1, (char)'c', (void*)0, \"crazy stuff\");\n\tmyprintf(\"empty argument fmt\");\n}\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 11:18:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "typesafe printf hackery"
}
] |
[
{
"msg_contents": "Logs are important to diagnose problems or monitor operations, but logs\ncan contain sensitive information which is often unnecessary for these\npurposes. Redacting the sensitive information would enable easier\naccess and simpler integration with analysis tools without compromising\nthe sensitive information.\n\nThe challenge is that nobody wants to classify all of the log messages;\nand even if someone did that today, there would be never-ending work in\nthe future to try to maintain that classification.\n\nMy proposal is:\n\n * redact every '%s' in an ereport by having a special mode for\nsnprintf.c (this is possible because we now own snprintf)\n * generate both redacted and unredacted messages (if redaction is\nenabled)\n * choose which destinations (stderr, eventlog, syslog, csvlog) get\nredacted or plain messages\n * emit_log_hook always has both redacted and plain messages available\n * allow specifying a custom redaction function, e.g. a function that\nhashes the string rather than completely redacting it\n\nI think '%s' in a log message is a pretty close match to the kind of\ninformation that might be sensitive. All data goes through type output\nfunctions (e.g. the conflicting datum for a unique constraint violation\nmessage), and most other things that a user might type would go through\n%s. A lot of other information useful in logs, like LSNs, %m's, PIDs,\netc. would be preserved.\n\nAll object names would be redacted, but that's not as bad as it sounds:\n (a) You can specify a custom redaction function that hashes rather\nthan completely redacts. 
That allows you to see if different messages\nrefer to the same object, and also map back to suspected objects if you\nreally need to.\n (b) The unredacted object names are still a part of ErrorData, so you\ncan do something interesting with emit_log_hook.\n (c) You still might have the unredacted logs in a more protected\nplace, and can access them when you really need to.\n\nA weakness of this proposal is that it could be confusing to use\nereport() in combination with snprintf(). If using snprintf to build\nthe format string, nothing would be redacted, so you'd have to be\ncareful not to expand any %s that might be sensitive. If using snprintf\nto build up an argument, the entire argument would be redacted. The\nfirst case should not be common, because good coding generally avoids\nnon-constant format strings. The second case is just over-redaction,\nwhich is not necessarily bad.\n\nOne annoying case would be if some of the arguments to ereport() are\nused for things like the right number of commas or tabs -- redacting\nthose would just make the message look horrible. I didn't find such\ncases but I'm pretty sure they exist. Another annoying case is time,\nwhich is useful for debugging, but formatted with %s so it gets\nredacted (I did find plenty of these cases).\n\nBut I don't see a better solution. Right now, it's a pain to treat log\nfiles as sensitive things when there are so many ways they can help\nwith smooth operations and so many tools available to analyze them.\nThis proposal seems like a practical solution to enable better use of\nlog files while protecting potentially-sensitive information.\n\nAttached is a WIP patch.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 30 Jul 2019 11:54:55 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Redacting information from logs"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 11:54:55AM -0700, Jeff Davis wrote:\n>Logs are important to diagnose problems or monitor operations, but logs\n>can contain sensitive information which is often unnecessary for these\n>purposes. Redacting the sensitive information would enable easier\n>access and simpler integration with analysis tools without compromising\n>the sensitive information.\n>\n\nOK, that's a worthwhile goal. I assume by \"sensitive data\" you mean user\ndata, right?\n\n>The challenge is that nobody wants to classify all of the log messages;\n>and even if someone did that today, there would be never-ending work in\n>the future to try to maintain that classification.\n>\n>My proposal is:\n>\n> * redact every '%s' in an ereport by having a special mode for\n>snprintf.c (this is possible because we now own snprintf)\n> * generate both redacted and unredacted messages (if redaction is\n>enabled)\n> * choose which destinations (stderr, eventlog, syslog, csvlog) get\n>redacted or plain messages\n> * emit_log_hook always has both redacted and plain messages available\n> * allow specifying a custom redaction function, e.g. a function that\n>hashes the string rather than completely redacting it\n>\n>I think '%s' in a log message is a pretty close match to the kind of\n>information that might be sensitive. All data goes through type output\n>functions (e.g. the conflicting datum for a unique constraint violation\n>message), and most other things that a user might type would go through\n>%s. A lot of other information useful in logs, like LSNs, %m's, PIDs,\n>etc. would be preserved.\n>\n\nIMHO the crucial part here is 'might be sensitive'. How often is that\nactually true? My guess is 99% of places using %s are not sensitive at\nall, and are used for things like filenames, table/attribute names,\nand so on. 
And redacting those parts will make the logs essentially\nuseless, because we'll get things like this:\n\n ERROR: column \"******\" does not exist at character 10\n\n ERROR: division by zero\n CONTEXT: SQL function \"******\" during inlining\n\nI'm not sure those are the logs I'd like to see on a production system\nwhile investigating an issue.\n\n>All object names would be redacted, but that's not as bad as it sounds:\n> (a) You can specify a custom redaction function that hashes rather\n>than completely redacts. That allows you to see if different messages\n>refer to the same object, and also map back to suspected objects if you\n>really need to.\n> (b) The unredacted object names are still a part of ErrorData, so you\n>can do something interesting with emit_log_hook.\n\nIsn't hashing essentially an information leak, i.e. somewhat undesirable\nfor sensitive data?\n\n> (c) You still might have the unredacted logs in a more protected\n>place, and can access them when you really need to.\n>\n\nThe question is whether that's actually an acceptable solution for\ndeployments that do handle sensitive data ...\n\n>A weakness of this proposal is that it could be confusing to use\n>ereport() in combination with snprintf(). If using snprintf to build\n>the format string, nothing would be redacted, so you'd have to be\n>careful not to expand any %s that might be sensitive. If using snprintf\n>to build up an argument, the entire argument would be redacted. The\n>first case should not be common, because good coding generally avoids\n>non-constant format strings. The second case is just over-redaction,\n>which is not necessarily bad.\n>\n>One annoying case would be if some of the arguments to ereport() are\n>used for things like the right number of commas or tabs -- redacting\n>those would just make the message look horrible. I didn't find such\n>cases but I'm pretty sure they exist. 
Another annoying case is time,\n>which is useful for debugging, but formatted with %s so it gets\n>redacted (I did find plenty of these cases).\n>\n>But I don't see a better solution. Right now, it's a pain to treat log\n>files as sensitive things when there are so many ways they can help\n>with smooth operations and so many tools available to analyze them.\n>This proposal seems like a practical solution to enable better use of\n>log files while protecting potentially-sensitive information.\n>\n\nHmm. I wonder how difficult it would be to actually go through the\nereport calls and classify those that can leak sensitive data, and then\ndo redaction only for those. That's about the only alternative approach\nI can think of.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 3 Aug 2019 23:34:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-30 11:54:55 -0700, Jeff Davis wrote:\n> My proposal is:\n>\n> * redact every '%s' in an ereport by having a special mode for\n> snprintf.c (this is possible because we now own snprintf)\n\nI'm extremely doubtful this is a sane approach. We use snprintf for a\nheck of a lot of things. The likelihood of this having unintended\nconsequences seems high (consider an error being thrown while trying to\nreport another error message and such). Nor do I think that snprintf.c\nis a good layer to perform redaction - it's too low level. It's used for\nboth frontend/backend. It's used for both non-error and error purposes.\n\nI also don't think you're actually going to get that far with it -\nthere's plenty of places where we concatenate error messages without using\n*printf, but e.g. appendStringInfoString().\n\n\n> But I don't see a better solution. Right now, it's a pain to treat log\n> files as sensitive things when there are so many ways they can help\n> with smooth operations and so many tools available to analyze them.\n> This proposal seems like a practical solution to enable better use of\n> log files while protecting potentially-sensitive information.\n\nI don't really see a low-effort way either. But I'm fairly certain that\nthis will cause at least as many problems as it'll help solve.\n\nI think incrementally moving to messages where portions of information\nare separated out (e.g. the things we'd inline with %s) is, although a\nlengthy process, the better approach. It'll make richer output formats\npossible, it'll allow for proper redaction, etc.\n\nI.e. 
something very roughly like\n\nereport(ERROR,\n errmsg_rich(\"string with %{named}s references to %{parameter}s\"),\n errparam(\"named\", somevar),\n errparam(\"parameter\", othervar, .redact = CONTEXT));\n\nWhich would allow us to annotate whether a specific parameter needs\nto be redacted for certain contexts.\n\nI'd probably add an errredact(bool) to annotate whether a message needs\nto be redacted, mostly so we can easily flag a lot of current messages\nas OK. When not present, I'd redact the entire message when errmsg() is\nbeing used, and redact nothing if errmsg_rich() is used, and none of the\nparameters flag an error.\n\nThat'd then also allow us to reference parameters that clients /\nexception handlers may not see, e.g. the arguments to leakproof\nfunctions. Which currently makes a lot of issues harder to debug,\nbecause we don't get the values for e.g. overflows, input syntax errors\netc.\n\nAllowing errparam()s to be specified that are not used in the error\nmessages, we can provide more detail to errors for people using richer\nlog outputs. I'd assume we'd fairly quickly have logfmt/json logging\ntarget/format.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 3 Aug 2019 15:47:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "On 7/30/19 2:54 PM, Jeff Davis wrote:\n> Logs are important to diagnose problems or monitor operations, but logs\n> can contain sensitive information which is often unnecessary for these\n> purposes. Redacting the sensitive information would enable easier\n> access and simpler integration with analysis tools without compromising\n> the sensitive information.\n\nThis seems like a thread that could contain a link to this other thread:\n\nhttps://www.postgresql.org/message-id/1055919c-98ea-56d2-9ea7-f8a7c72e16b4%40anastigmatix.net\n\nImplementing ideas are different, but motivating concerns the same.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sat, 3 Aug 2019 18:57:44 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-30 11:54:55 -0700, Jeff Davis wrote:\n>> My proposal is:\n>> * redact every '%s' in an ereport by having a special mode for\n>> snprintf.c (this is possible because we now own snprintf)\n\n> I'm extremely doubtful this is a sane approach.\n\nYeah, it's really hard to believe that messing with snprintf isn't\ngoing to have a lot of unintended consequences. If we want to do\nsomething about this, we're going to have to bite the bullet and\ngo around to annotate all those arguments with redaction-worthiness\ninfo. It'd be a lot of work, but as long as we can make sure to\ndo it incrementally, we could get there. (It's not like ereport\nitself has been there forever ...)\n\n> I.e. something very roughly like\n\n> ereport(ERROR,\n> errmsg_rich(\"string with %{named}s references to %{parameter}s\"),\n> errparam(\"named\", somevar),\n> errparam(\"parameter\", othervar, .redact = CONTEXT));\n\nI'm not terribly attracted by that specific approach, though. With\nthis, we'd get to maintain *two* variants of snprintf, and I think\nthe one with annotation knowledge would have significant performance\nproblems. (Something I'm sensitive to, on account of certain persons\nbeating me over the head about snprintf.c's performance.) It'd be\nan amazing pain in the rear for translators, too, on account of\ntheir tools not understanding this format-string language.\n\nIt seems to me that it'd be sufficient to do the annotation by\ninserting wrapper functions, like the errparam() you suggest above.\nIf we just had errparam() choosing whether to return \"...\" instead of\nits argument string, we'd have what we need, without messing with\nthe format language.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2019 19:14:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-03 19:14:13 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I.e. something very roughly like\n>\n> > ereport(ERROR,\n> > errmsg_rich(\"string with %{named}s references to %{parameter}s\"),\n> > errparam(\"named\", somevar),\n> > errparam(\"parameter\", othervar, .redact = CONTEXT));\n>\n> I'm not terribly attracted by that specific approach, though. With\n> this, we'd get to maintain *two* variants of snprintf, and I think\n> the one with annotation knowledge would have significant performance\n> problems. (Something I'm sensitive to, on account of certain persons\n> beating me over the head about snprintf.c's performance.)\n\nI don't think that the performance issues of *printf in general - which\nwe use in a lot of hot paths like output functions - are the same as\nwhen we log messages/throw errors. My gut feeling is that at least >=\nDEBUG1 the difference would be hard to measure. And given how we\ne.g. build up the log_line_prefix, we could probably also recoup a lot of\npotentially lost performance around logging.\n\n\nWe would probably just allow both positional references and named\nreferences. I.e. for smaller messages we can just use the current\nformat, and just for the very long ones with many parameters we'd\nnormally use a positional reference - but still have the name associated\nwith the parameter. Even if we don't want to allow referencing parameters\nby name in the format string, I think it's worthwhile. See below.\n\n\nIt's really kind of a shame that we constantly re-parse the format\nstrings. With something like modern C++ it'd be fairly easy to not do\nthat (by using compile time evaluated constexprs when possible) - but I\nwonder if we couldn't also manage to do that within plain C, at the cost\nof a branch and some macro magic. I'm imagining something like\n\n#define pg_printf(fmt, ...) 
\\\n do { \\\n if ( __builtin_constant_p(fmt)) \\\n { \\\n static processed_fmt processed_fmt_ = {.format = fmt}; \\\n if (unlikely(!processed_fmt_.parsed)) \\\n preprocess_format_string(&processed_fmt_); \\\n pg_printf_processed(&processed_fmt_, __VA_ARGS__); \\\n } \\\n else \\\n pg_printf_unprocessed(fmt, __VA_ARGS__); \\\n } while (0) \\\n\nhaving to use a static variable for this purpose sucks somewhat, but I\ncan't think of something in C that'd let us avoid that.\n\n\n\n> It'd be an amazing pain in the rear for translators, too, on account\n> of their tools not understanding this format-string language.\n\nThe limitations of that world are starting to be really frustrating.\n\n\n> It seems to me that it'd be sufficient to do the annotation by\n> inserting wrapper functions, like the errparam() you suggest above.\n> If we just had errparam() choosing whether to return \"...\" instead of\n> its argument string, we'd have what we need, without messing with\n> the format language.\n\nI agree that if redaction were the only reason, the expanded format\nstrings wouldn't be worth it. And perhaps they still aren't.\n\nI think we've quite repeatedly had requests for a log format that can be\nparsed reasonably, and annotated with additional information that's too\nverbose for the main message. It's a pain to have to parse quite\ncomplex strings in production, just to extract the value of the actual\ninformation in the message.\n\nAs an *easy* example of something I wanted to do in the past: Plot the\ntotal wait time of backends that had to wait for locks longer than\ndeadlock_timeout. In that case, because we saw the system failing with\ntoo much waiting, but didn't really know when that started to become problematic, and what the normal amount of waiting is.\nSo we need to parse\n ereport(LOG,\n (errmsg(\"process %d acquired %s on %s after %ld.%03d ms\",\n MyProcPid, modename, buf.data, msecs, usecs)));\ninto something usable. 
First problem is of course that the\nlog_line_prefix is variable, and that we'll likely need information from\nit to hunt down the source of locks. Parsing the above with regexes\nisn't too hard - but then we also need to deal with the fact that the\nmessage can be translated. Oh, and finding the associated statement\nrequires looking forward in the log stream by a hard to predict amount,\nand requires knowledge about what log verbosity was set to.\n\nIn the past I had also wanted to parse the log to understand how much\nwork manual and auto-vacuum an installation was doing, and whether\nthat's perhaps correlated to problems. But for that one needs a parser\nthat can handle this mess:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/vacuumlazy.c;h=a3c4a1df3b4b28953ab1dfcd43f3d566eecac555;hb=HEAD#l396\n\nwhich is obviously much harder.\n\n\nIf we instead had the relevant variables separated out in a named\nfashion, we could - like many other tools these days - have a richer log\ntarget, that'd first have the typical log message as a human readable\nstring, but then also all the interpolated values separately.\n\nE.g. in the aforementioned simpler case it could be something like\n\nts=\"2019-08-04 10:19:21.286 PDT\" pid=32416 vxid=12/9 sev=LOG \\\n msg=\"Prozess 15027 erlangte AccessShareLock-Sperre auf Relation 1259 der Datenbank 13416 nach 29808,865 ms\" \\\n fmt=\"process %d acquired %s on %s after %ld.%03d ms\" \\\n p:modename=\"AccessShareLock-Sperre auf Relation 1259\" \\\n p:msec=18343 p:usec=173 \\\n p:locktag=190345/134343/0/0/relation/default \\\n c:statement=\"LOCK TABLE pg_class;\" \\\n l:func=ProcSleep l:file=proc.c l:line=1495\n\nSo, the prefix fields are just included as k=v, the evaluated message as\none field, the *untranslated* format as another. All the parameters are\nincluded prefixed with p:, context information prefixed with c:, and\nlocation with l:. Note how e.g. 
p:modename is a pre-translated string,\nwhich is why p:locktag was added as an intentionally untranslated\nparameter that's not included in the message.\n\nThis isn't perfect, but still a heck of a lot easier to parse than what\nwe have now. All of the data is one block rather than in consecutive log\nlines that may or may not belong together, it's key=value in a way that\ncan be parsed, the untranslated format string is available, so tools can\nactually analyze logs independent of the current locale.\n\nThis isn't a proposal to adopt precisely the above log format - that's\njust something I came up with while writing this email - but to have\nenough information available to actually produce something like it when\nemitting logs.\n\nIf we have enough information to pass to the logging hook, we don't even\nneed to define how all of this is going to look exactly (although\nI'd probably argue that a logfmt or json target ought to be in core).\n\n\nThe cool part - and people from other projects/company projects might be\nlaughing about me describing it as cool in this day and age - is that\nonce you have a format that can readily be parsed it's pretty simple to\nwrite tools that process a logfile and extract just the information you\nwant.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 4 Aug 2019 11:17:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "On Sat, 2019-08-03 at 19:14 -0400, Tom Lane wrote:\n> It seems to me that it'd be sufficient to do the annotation by\n> inserting wrapper functions, like the errparam() you suggest above.\n> If we just had errparam() choosing whether to return \"...\" instead of\n> its argument string, we'd have what we need, without messing with\n> the format language.\n\nI'm having trouble getting the ergonomics to work out here so that it\ncan generate both a redacted and an unredacted message.\n\nIf errparam() is a normal argument to errmsg(), then errparam() will be\nevaluated first. Will it return the redacted version, the unredacted\nversion, or a special type that holds both?\n\nIf I try to use macros to force multiple evaluation (to get one\nredacted and one unredacted string), then it seems like that would\nhappen for all arguments (not just errparam arguments), which would be\nbad.\n\nSuggestions?\n\nRegards,\n Jeff Davis\n\n\n\n\n",
"msg_date": "Mon, 05 Aug 2019 13:37:50 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 13:37:50 -0700, Jeff Davis wrote:\n> On Sat, 2019-08-03 at 19:14 -0400, Tom Lane wrote:\n> > It seems to me that it'd be sufficient to do the annotation by\n> > inserting wrapper functions, like the errparam() you suggest above.\n> > If we just had errparam() choosing whether to return \"...\" instead of\n> > its argument string, we'd have what we need, without messing with\n> > the format language.\n> \n> I'm having trouble getting the ergonomics to work out here so that it\n> can generate both a redacted and an unredacted message.\n> \n> If errparam() is a normal argument to errmsg(), then errparam() will be\n> evaluated first. Will it return the redacted version, the unredacted\n> version, or a special type that holds both?\n\nI was thinking that it'd just store a struct ErrParam, which'd reference\nthe passed value and metadata like the name (for structured log\noutput) and redaction category. The bigger problem I see is handling\nthe different types of arguments - but perhaps the answer there would be\nto just make the macro typesafe? Or have one for scalar values and one\nfor pointer types?\n\nWe could even allocate the necessary information for this on the stack,\nwith some macro trickery. Not sure if it's worth it, but ...\n\nDoing something like this does however require not directly using plain\nsprintf. I'm personally not bothered by that, but I'd not be surprised\nif e.g. Tom saw that differently.\n\n\n> If I try to use macros to force multiple evaluation (to get one\n> redacted and one unredacted string), then it seems like that would\n> happen for all arguments (not just errparam arguments), which would be\n> bad.\n\nYea, multiple evaluation clearly is a no-go.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:10:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "On Sun, 2019-08-04 at 11:17 -0700, Andres Freund wrote:\n> I'm imagining something like\n> \n> #define pg_printf(fmt, ...) \\\n> do { \\\n> if ( __builtin_constant_p(fmt)) \\\n> { \\\n> static processed_fmt processed_fmt_ = {.format = fmt}; \\\n> if (unlikely(!processed_fmt_.parsed)) \\\n> preprocess_format_string(&processed_fmt_) \\\n> pg_printf_processed(&processed_fmt_, __VA_ARGS__); \\\n> } \\\n> else \\\n> pg_printf_unprocessed(fmt, __VA_ARGS__); \\\n> } while (0) \\\n\nWhat would you do in the preprocessing exactly? Create a list of\nindexes into the string where the format codes are?\n\n> I think we've quite repeatedly had requests for a log format that can\n> be\n> parsed reasonably, and annotated with additional information that's\n> too\n> verbose for the main message. It's a pain to have to parse quite\n> complex strings in production, just to extract the value of the\n> actual\n> information in the message.\n> \n> ts=\"2019-08-04 10:19:21.286 PDT\" pid=32416 vxid=12/9 sev=LOG \\\n> msg=\"Prozess 15027 erlangte AccessShareLock-Sperre auf Relation\n> 1259 der Datenbank 13416 nach 29808,865 ms\" \\\n> fmt=\"process %d acquired %s on %s after %ld.%03d ms\" \\\n> p:modename=\"AccessShareLock-Sperre auf Relation 1259\" \\\n> p:msec=18343 p:usec=173 \\\n> p:locktag=190345/134343/0/0/relation/default \\\n> c:statement=\"LOCK TABLE pg_class;\" \\\n> l:func=ProcSleep l:file=proc.c l:line=1495\n> \n\n...\n\n> If we have enough information to pass to the logging hook, we don't\n> even\n> need to define how all of this is going to look like exactly\n> (although\n> I'd probably argue that a logfmt or json target ought to be in core).\n\nI think I see where you are going with this now: it is almost\northogonal to your new-style format strings ( %{foo}s ), but not quite.\n\nYou're suggesting that we save all of the arguments, along with some\nannotation, in the ErrorData struct, and then let emit_log_hook sort it\nout (perhaps by constructing some JSON 
message, perhaps translating the\nmessage_id, etc.).\n\nI like the idea, but still playing with the ergonomics a bit, and how\nit interacts with various message parts (msg, hint, detail, etc.). If\nwe had the name-based format strings, then the message parts could all\nshare a common set of parameters; but with the current positional\nformat strings, I think each message part would need its own set of\narguments.\n\nPositional:\n\n ereport(ERROR,\n (errcode(ERRCODE_UNIQUE_VIOLATION),\n errmsg_params(\"duplicate key value violates unique\nconstraint \\\"%s\\\"\",\n errparam(\"constraintname\", MSGDEFAULT,\n RelationGetRelationName(rel)),\n errdetail_params(\"Key %s already exists.\",\n errparam(\"key\", MSGUSERDATA, key_desc)))\n );\n\nNamed:\n\n ereport(ERROR,\n (errcode(ERRCODE_UNIQUE_VIOLATION),\n errmsg_rich(\"duplicate key value violates unique constraint\n\\\"%{constraintname}s\\\"\"),\n errdetail_rich(\"Key %{key}s already exists.\"),\n errparam(\"key\", MSGUSERDATA, key_desc))\n errparam(\"constraintname\", MSGDEFAULT, \n RelationGetRelationName(rel)))\n );\n\nI think either one needs some ergonomic improvements, but it seems like\nwe are going in the right direction.\n\nMaybe we can make the parameters common to different message parts by\nusing an integer index to reference the parameter, like:\n\n ereport(ERROR,\n (errcode(ERRCODE_UNIQUE_VIOLATION),\n errmsg_rich(\"duplicate key value violates unique constraint\n\\\"%s\\\"\", 1 /* second errparam */),\n errdetail_rich(\"Key %s already exists.\", 0 /* first\nerrparam */),\n errparam(\"key\", MSGUSERDATA, key_desc))\n errparam(\"constraintname\", MSGDEFAULT, \n RelationGetRelationName(rel)))\n );\n\nNot quite ideal, but might get us closer.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 05 Aug 2019 14:26:44 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "On Mon, 2019-08-05 at 14:10 -0700, Andres Freund wrote:\n> I was thinking that it'd just store a struct ErrParam, which'd\n> reference\n> the passed value and metadata like the name (for structured log\n> output) and redaction category. The bigger problem I see is handling\n> the different types of arguments - but perhaps the answer there would\n> be\n> to just make the macro typesafe? Or have one for scalar values and\n> one\n> for pointer types?\n\nLosing the compile-time checks for compatibility between format codes\nand arguments would be a shame. Are you saying there's a potential\nsolution for that?\n\n> Doing something like this does however require not directly using\n> plain\n> sprintf. I'm personally not bothered by that, but I'd not be\n> surprised\n> if e.g. Tom saw that differently.\n\nIt may be possible to still use sprintf if we translate the ErrParams\ninto plain values first.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 05 Aug 2019 14:32:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 14:26:44 -0700, Jeff Davis wrote:\n> On Sun, 2019-08-04 at 11:17 -0700, Andres Freund wrote:\n> > I'm imagining something like\n> > \n> > #define pg_printf(fmt, ...) \\\n> > do { \\\n> > if ( __builtin_constant_p(fmt)) \\\n> > { \\\n> > static processed_fmt processed_fmt_ = {.format = fmt}; \\\n> > if (unlikely(!processed_fmt_.parsed)) \\\n> > preprocess_format_string(&processed_fmt_) \\\n> > pg_printf_processed(&processed_fmt_, __VA_ARGS__); \\\n> > } \\\n> > else \\\n> > pg_printf_unprocessed(fmt, __VA_ARGS__); \\\n> > } while (0) \\\n> \n> What would you do in the preprocessing exactly? Create a list of\n> indexes into the string where the format codes are?\n\nYea, basically. If you look at snprint.c's dopr(), there's a fair bit of\nparsing going on that is going to stay the same from call to call in\nnearly all cases (and the cases where not we possibly ought to fix). And\nthings like the $ processing for argument order, or having named\narguments as I suggest, make that more pronounced.\n\n\n> > If we have enough information to pass to the logging hook, we don't\n> > even\n> > need to define how all of this is going to look like exactly\n> > (although\n> > I'd probably argue that a logfmt or json target ought to be in core).\n> \n> I think I see where you are going with this now: it is almost\n> orthogonal to your new-style format strings ( %{foo}s ), but not quite.\n> \n> You're suggesting that we save all of the arguments, along with some\n> annotation, in the ErrorData struct, and then let emit_log_hook sort it\n> out (perhaps by constructing some JSON message, perhaps translating the\n> message_id, etc.).\n\nRight.\n\n\n> I like the idea, but still playing with the ergonomics a bit, and how\n> it interacts with various message parts (msg, hint, detail, etc.). 
If\n> we had the name-based format strings, then the message parts could all\n> share a common set of parameters; but with the current positional\n> format strings, I think each message part would need its own set of\n> arguments.\n\nRight, I think that's a good part of where I was coming from.\n\n\n> Maybe we can make the parameters common to different message parts by\n> using an integer index to reference the parameter, like:\n> \n> ereport(ERROR,\n> (errcode(ERRCODE_UNIQUE_VIOLATION),\n> errmsg_rich(\"duplicate key value violates unique constraint\n> \\\"%s\\\"\", 1 /* second errparam */),\n> errdetail_rich(\"Key %s already exists.\", 0 /* first\n> errparam */),\n> errparam(\"key\", MSGUSERDATA, key_desc))\n> errparam(\"constraintname\", MSGDEFAULT, \n> RelationGetRelationName(rel)))\n> );\n> \n> Not quite ideal, but might get us closer.\n\nIf we insist that errmsg_rich/errdetail_rich may not have parameters,\nthen they can just use the same set of arguments, without any of this,\nat the cost of sometimes more complicated % syntax (i.e. %1$d to refer\nto the first argument).\n\nI think the probable loss of gcc format warnings would be the biggest\nissue with this whole proposal, and potential translator trouble the\nbiggest impediment for named parameters.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:44:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 14:32:36 -0700, Jeff Davis wrote:\n> On Mon, 2019-08-05 at 14:10 -0700, Andres Freund wrote:\n> > I was thinking that it'd just store a struct ErrParam, which'd\n> > reference\n> > the passed value and metadata like the name (for structured log\n> > output) and redaction category. The bigger problem I see is handling\n> > the different types of arguments - but perhaps the answer there would\n> > be\n> > to just make the macro typesafe? Or have one for scalar values and\n> > one\n> > for pointer types?\n> \n> Losing the compile-time checks for compatibility between format codes\n> and arguments would be a shame. Are you saying there's a potential\n> solution for that?\n\nYea, I'd just written that in another reply to yours. I did actually\nthink about this some recently:\nhttps://www.postgresql.org/message-id/20190730181845.jyyk4selyohagwnf%40alap3.anarazel.de\n\nNot sure if any of that is really applicable. Once more I really am\nregretting that PG is C rather than C++.\n\n\n> > Doing something like this does however require not directly using\n> > plain\n> > sprintf. I'm personally not bothered by that, but I'd not be\n> > surprised\n> > if e.g. Tom saw that differently.\n> \n> It may be possible to still use sprintf if we translate the ErrParams\n> into plain values first.\n\nIt's pretty hard to call those functions with a runtime variable number\nof arguments. va_list as an interface is not great...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:49:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Redacting information from logs"
},
{
"msg_contents": "On Mon, 2019-08-05 at 14:44 -0700, Andres Freund wrote:\n> at the cost of sometimes more complicated % syntax (i.e. %1$d to\n> refer\n> to the first argument).\n> \n> I think the probable loss of gcc format warnings would be the biggest\n> issue with this whole proposal, and potential translator trouble the\n> biggest impediment for named parameters.\n\nI'd be OK with '%1$d' syntax.\n\nThat leaves type safety as the main problem. Your solution involving\n_Generic is interesting -- I didn't even know that existed. I don't\nthink it needs to be supported on all compilers, as long as we are\ngetting errors from somewhere. They would be runtime errors instead of\ncompile-time errors, though.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 05 Aug 2019 16:05:11 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Redacting information from logs"
}
] |
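The key=value log target sketched in the thread above is straightforward to consume programmatically. The following is a minimal, hypothetical sketch (plain Python, not PostgreSQL code); the `p:`/`c:`/`l:` prefixes and the field names are taken purely from Andres's illustrative example, not from any implemented format:

```python
import re

# One regex per key=value pair: the value is either a double-quoted
# string (with backslash escapes) or a bare run of non-whitespace.
FIELD_RE = re.compile(r'([\w:]+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_log_line(line):
    """Split a key=value log line into plain prefix fields and the
    parameter (p:), context (c:) and location (l:) groups."""
    out = {"fields": {}, "params": {}, "context": {}, "location": {}}
    for key, raw in FIELD_RE.findall(line):
        value = raw[1:-1] if raw.startswith('"') else raw
        if key.startswith("p:"):
            out["params"][key[2:]] = value
        elif key.startswith("c:"):
            out["context"][key[2:]] = value
        elif key.startswith("l:"):
            out["location"][key[2:]] = value
        else:
            out["fields"][key] = value
    return out

# Shortened variant of the example line from the thread
line = ('pid=32416 sev=LOG fmt="process %d acquired %s on %s" '
        'p:msec=18343 c:statement="LOCK TABLE pg_class;" l:func=ProcSleep')
parsed = parse_log_line(line)
```

This is exactly the property being argued for: once fields are named and quoted consistently, extracting "all statements that waited on a lock" is a dictionary lookup rather than a locale-dependent regex hunt.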
[
{
"msg_contents": "Hi,\n\n\nI wrote a patch for adding CORRESPONDING/CORRESPONDING BY to set operations.\nIt is a task on the todo list. This is how the patch works:\n\n\nI modified transformSetOperationStmt() to get an intersection target list,\nwhich is the intersection of the target lists of the left clause and right\nclause of a set operation statement (sostmt). The intersection target list\nis calculated in transformSetOperationTree(), and then I modified the target\nlists of the larg and rarg of sostmt to make them equal to the intersection\ntarget list. I also changed the target list in pstate->p_rtable in\norder to make it consistent with the intersection target list.\n\n\nI attached a scratch version of this patch to the email. I am not sure\nwhether the method used in the patch is acceptable or not, but any\nsuggestions are appreciated. I will add tests and other related things to\nthe patch if the method used in this patch is acceptable.\n\n\n\nBest,\n\nRuijia",
"msg_date": "Tue, 30 Jul 2019 14:43:05 -0700",
"msg_from": "=?UTF-8?B?5q+b55Ge5ZiJ?= <alanmao94@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Patch] Adding CORRESPONDING/CORRESPONDING BY to set operation"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 02:43:05PM -0700, 毛瑞嘉 wrote:\n> Hi,\n> \n> \n> I wrote a patch for adding CORRESPONDING/CORRESPONDING BY to set operation.\n> It is a task in the todo list. This is how the patch works:\n> \n> I modified transformSetOperationStmt() to get an intersection target list\n> which is the intersection of the target lists of the left clause and right\n> clause for a set operation statement (sostmt). The intersection target list\n> is calculated in transformSetOperationTree() and then I modified the target\n> lists of the larg and rarg of sostmt to make them equal to the intersection\n> target list. Also, I also changed the target list in pstate->p_rtable in\n> order to make it consistent with the intersection target list.\n> \n> \n> I attached the scratch version of this patch to the email. I am not sure\n> whether the method used in the patch is acceptable or not, but any\n> suggestions are appreciated. I will add tests and other related things to\n> the patch if the method used in this patch is acceptable.\n\nThanks for sending this!\n\nIt needs documentation and tests so people can see whether it does\nwhat it's supposed to do. Would you be so kind as to include those in\nthe next revision of the patch? You can just attach the patch(es)\nwithout zipping them.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 30 Jul 2019 23:55:57 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Adding CORRESPONDING/CORRESPONDING BY to set operation"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 02:43:05PM -0700, 毛瑞嘉 wrote:\n> Hi,\n> \n> \n> I wrote a patch for adding CORRESPONDING/CORRESPONDING BY to set operation.\n> It is a task in the todo list. This is how the patch works:\n> \n> \n> I modified transformSetOperationStmt() to get an intersection target list\n> which is the intersection of the target lists of the left clause and right\n> clause for a set operation statement (sostmt). The intersection target list\n> is calculated in transformSetOperationTree() and then I modified the target\n> lists of the larg and rarg of sostmt to make them equal to the intersection\n> target list. Also, I also changed the target list in pstate->p_rtable in\n> order to make it consistent with the intersection target list.\n> \n> \n> I attached the scratch version of this patch to the email. I am not sure\n> whether the method used in the patch is acceptable or not, but any\n> suggestions are appreciated. I will add tests and other related things to\n> the patch if the method used in this patch is acceptable.\n\nI tried adding documentation based on what I could infer about the\nbehavior of this patch. Is that documentation correct?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 3 Aug 2019 17:56:04 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Adding CORRESPONDING/CORRESPONDING BY to set operation"
},
{
"msg_contents": "On Sat, Aug 03, 2019 at 05:56:04PM +0200, David Fetter wrote:\n> On Tue, Jul 30, 2019 at 02:43:05PM -0700, 毛瑞嘉 wrote:\n> > Hi,\n> > \n> > \n> > I wrote a patch for adding CORRESPONDING/CORRESPONDING BY to set operation.\n> > It is a task in the todo list. This is how the patch works:\n> > \n> > \n> > I modified transformSetOperationStmt() to get an intersection target list\n> > which is the intersection of the target lists of the left clause and right\n> > clause for a set operation statement (sostmt). The intersection target list\n> > is calculated in transformSetOperationTree() and then I modified the target\n> > lists of the larg and rarg of sostmt to make them equal to the intersection\n> > target list. Also, I also changed the target list in pstate->p_rtable in\n> > order to make it consistent with the intersection target list.\n> > \n> > \n> > I attached the scratch version of this patch to the email. I am not sure\n> > whether the method used in the patch is acceptable or not, but any\n> > suggestions are appreciated. I will add tests and other related things to\n> > the patch if the method used in this patch is acceptable.\n> \n> I tried adding documentation based on what I could infer about the\n> behavior of this patch. Is that documentation correct?\n\nThis time, with the patch attached.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 3 Aug 2019 18:22:29 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Adding CORRESPONDING/CORRESPONDING BY to set operation"
}
] |
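The semantics under discussion in this thread are those of the SQL standard's CORRESPONDING clause: the set operation is restricted to the columns common to both inputs (or, with CORRESPONDING BY, to the listed columns, which must exist in both). The sketch below is a hypothetical Python illustration of those semantics only, not the patch's C logic in transformSetOperationTree():

```python
def corresponding_union(left_cols, left_rows, right_cols, right_rows, by=None):
    """Emulate UNION CORRESPONDING [BY (...)]: project both inputs onto
    the commonly-named columns, then take the duplicate-free union."""
    common = [c for c in left_cols if c in right_cols]
    cols = list(by) if by is not None else common
    if any(c not in common for c in cols):
        raise ValueError("CORRESPONDING column not present in both inputs")
    li = [left_cols.index(c) for c in cols]
    ri = [right_cols.index(c) for c in cols]
    projected = [tuple(row[i] for i in li) for row in left_rows]
    projected += [tuple(row[i] for i in ri) for row in right_rows]
    # UNION removes duplicates; keep first-seen order for determinism
    seen, result = set(), []
    for row in projected:
        if row not in seen:
            seen.add(row)
            result.append(row)
    return cols, result

# (a, b, c) UNION CORRESPONDING (b, a): only a and b survive,
# in the left input's column order.
cols, rows = corresponding_union(["a", "b", "c"], [(1, 2, 3), (4, 5, 6)],
                                 ["b", "a"], [(2, 1), (9, 8)])
```

Note that the right input's columns are matched by name, not position, which is exactly why the patch has to rewrite both branches' target lists to the computed intersection before the set operation runs.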
[
{
"msg_contents": "Hello,\n\nWhen we add a new path using add_path(), it checks the estimated cost and path-keys,\nand it also removes dominated paths, if any.\nDo we have a reasonable way to retain these \"dominated\" paths? Even if a path\nis considered a lesser path at one level, it may have a cheaper combined cost\nwith an upper path node.\n\nIn my case, PG-Strom adds a CustomPath to support JOIN/GROUP BY\nworkloads that utilize a\nGPU and NVME storage. If GpuPreAgg and GpuJoin are executed\nconsecutively, the output buffer\nof GpuJoin simultaneously serves as the input buffer of GpuPreAgg in GPU\ndevice memory.\nSo, it allows us to avoid senseless DMA transfers between the GPU and CPU/RAM.\nThis behavior\naffects cost estimation during the path construction steps - GpuPreAgg\ndiscounts the DMA cost if its\ninput path is a GpuJoin.\nOn the other hand, it looks to me like add_path() does not consider upper-level\noptimizations\nother than sorting path-keys. As long as we can keep these \"lesser\"\npath nodes that have a\nfurther optimization chance, it will help in building more reasonable query plans.\n\nDo we have any reasonable way to retain these paths at add_path(), even\nif they are dominated\nby other paths? Any ideas welcome.\n\nBest regards,\n\n[*] GpuJoin + GpuPreAgg combined GPU kernel\nhttps://www.slideshare.net/kaigai/20181016pgconfeussd2gpumulti/13\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 31 Jul 2019 12:24:37 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> When we add a new path using add_path(), it checks estimated cost and path-keys,\n> then it also removes dominated paths, if any.\n> Do we have a reasonable way to retain these \"dominated\" paths? Once it\n> is considered\n> lesser paths at a level, however, it may have a combined cheaper cost\n> with upper pathnode.\n\nYou do *not* want to have add_path fail to remove dominated paths in\ngeneral. Don't even think about it, because otherwise you will have\nplenty of time to regret your folly while you wait for the planner\nto chew through an exponential number of possible join plans.\n\nWhat you'd want to do for something like the above, I think, is to\nhave some kind of figure of merit or other special marking for paths\nthat will have some possible special advantage in later planning\nsteps.  Then you can teach add_path that that's another dimension it\nshould consider, in the same way that paths with different sort orders\nor parallelizability attributes don't dominate each other.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:07:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What you'd want to do for something like the above, I think, is to\n> have some kind of figure of merit or other special marking for paths\n> that will have some possible special advantage in later planning\n> steps.  Then you can teach add_path that that's another dimension it\n> should consider, in the same way that paths with different sort orders\n> or parallizability attributes don't dominate each other.\n\nYeah, but I have to admit that this whole design makes me kinda\nuncomfortable. Every time somebody comes up with a new figure of\nmerit, it increases not only the number of paths retained but also the\ncost of comparing two paths to possibly reject one of them. A few\nyears ago, you came up with the (good) idea of rejecting some join\npaths before actually creating the paths, and I wonder if we ought to\ntry to go further with that somehow. Or maybe, as Peter Geoghegan has\nbeen saying, we ought to think about planning top-down with\nmemoization instead of bottom up (yeah, I know that's a huge change).\nIt just feels like the whole idea of a list of paths ordered by cost\nbreaks down when there are so many ways that a not-cheapest path can\nstill be worth keeping. Not sure exactly what would be better, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:44:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Yeah, but I have to admit that this whole design makes me kinda\n> uncomfortable. Every time somebody comes up with a new figure of\n> merit, it increases not only the number of paths retained but also the\n> cost of comparing two paths to possibly reject one of them. A few\n> years ago, you came up with the (good) idea of rejecting some join\n> paths before actually creating the paths, and I wonder if we ought to\n> try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n> been saying, we ought to think about planning top-down with\n> memoization instead of bottom up (yeah, I know that's a huge change).\n> It just feels like the whole idea of a list of paths ordered by cost\n> breaks down when there are so many ways that a not-cheapest path can\n> still be worth keeping. Not sure exactly what would be better, though.\n\nYeah, I agree that add_path is starting to feel creaky. I don't\nknow what to do instead though. Changing to a top-down design\nsounds like it would solve some problems while introducing others\n(not to mention the amount of work and breakage involved).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 12:41:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "2019年8月1日(木) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Yeah, but I have to admit that this whole design makes me kinda\n> > uncomfortable. Every time somebody comes up with a new figure of\n> > merit, it increases not only the number of paths retained but also the\n> > cost of comparing two paths to possibly reject one of them. A few\n> > years ago, you came up with the (good) idea of rejecting some join\n> > paths before actually creating the paths, and I wonder if we ought to\n> > try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n> > been saying, we ought to think about planning top-down with\n> > memoization instead of bottom up (yeah, I know that's a huge change).\n> > It just feels like the whole idea of a list of paths ordered by cost\n> > breaks down when there are so many ways that a not-cheapest path can\n> > still be worth keeping. Not sure exactly what would be better, though.\n>\n> Yeah, I agree that add_path is starting to feel creaky.  I don't\n> know what to do instead though.  Changing to a top-down design\n> sounds like it would solve some problems while introducing others\n> (not to mention the amount of work and breakage involved).\n>\nHmm... It looks like the problem of revising path construction\nis much larger than I expected, and it is uncertain to me how much work\nis needed.\n\nAlthough it might only be a workaround until a fundamental reconstruction,\nhow about having a margin on the estimated cost when rejecting paths?\nThe current add_path() immediately rejects lesser paths if their cost is\neven a little more expensive than the compared one. 
On the other hand,\na little cost difference may reverse the final optimizer decision in some cases.\nFor example, if we retain lesser paths whose cost is less than 10% more\nthan the current cheapest path at the same level, we may be able to\nkeep the number of retained lesser paths small and still use a simple cost comparison\nto decide which paths survive.\n\nI understand it is not an essential re-design of the path-construction logic, and\nit may have limitations. However, the amount of work is reasonable and there are\nno side effects. (current behavior = 0% threshold)\nHow about your opinions?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 1 Aug 2019 15:11:33 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 2:12 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n\n> 2019年8月1日(木) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n> >\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > Yeah, but I have to admit that this whole design makes me kinda\n> > > uncomfortable. Every time somebody comes up with a new figure of\n> > > merit, it increases not only the number of paths retained but also the\n> > > cost of comparing two paths to possibly reject one of them. A few\n> > > years ago, you came up with the (good) idea of rejecting some join\n> > > paths before actually creating the paths, and I wonder if we ought to\n> > > try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n> > > been saying, we ought to think about planning top-down with\n> > > memoization instead of bottom up (yeah, I know that's a huge change).\n> > > It just feels like the whole idea of a list of paths ordered by cost\n> > > breaks down when there are so many ways that a not-cheapest path can\n> > > still be worth keeping. Not sure exactly what would be better, though.\n> >\n> > Yeah, I agree that add_path is starting to feel creaky. I don't\n> > know what to do instead though. Changing to a top-down design\n> > sounds like it would solve some problems while introducing others\n> > (not to mention the amount of work and breakage involved).\n> >\n> Hmm... It looks the problem we ought to revise about path construction\n> is much larger than my expectation, and uncertain for me how much works\n> are needed.\n>\n> Although it might be a workaround until fundamental reconstruction,\n> how about to have a margin of estimated cost to reject paths?\n> Current add_path() immediately rejects lesser paths if its cost is\n> even a little more expensive than the compared one. One the other hands,\n>\n\nHmm.. I don't think so. 
Currently add_path() uses fuzzy comparisons on\ncosts of two paths, although the fuzz factor (1%) is hard coded and not\nuser-controllable.\n\n\n> I understand it is not an essential re-design of path-construction logic,\n> and\n> may have limitation. However, amount of works are reasonable and no side-\n> effect. (current behavior = 0% threshold).\n> How about your opinions?\n>\n>\nHow's about Tom's suggestion on adding another dimension in add_path()\nto be considered, just like how it considers paths of better sort order\nor parallel-safe?\n\nThanks\nRichard",
"msg_date": "Thu, 1 Aug 2019 15:19:44 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "2019年8月1日(木) 16:19 Richard Guo <riguo@pivotal.io>:\n>\n> On Thu, Aug 1, 2019 at 2:12 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>>\n>> 2019年8月1日(木) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n>> >\n>> > Robert Haas <robertmhaas@gmail.com> writes:\n>> > > Yeah, but I have to admit that this whole design makes me kinda\n>> > > uncomfortable. Every time somebody comes up with a new figure of\n>> > > merit, it increases not only the number of paths retained but also the\n>> > > cost of comparing two paths to possibly reject one of them. A few\n>> > > years ago, you came up with the (good) idea of rejecting some join\n>> > > paths before actually creating the paths, and I wonder if we ought to\n>> > > try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n>> > > been saying, we ought to think about planning top-down with\n>> > > memoization instead of bottom up (yeah, I know that's a huge change).\n>> > > It just feels like the whole idea of a list of paths ordered by cost\n>> > > breaks down when there are so many ways that a not-cheapest path can\n>> > > still be worth keeping. Not sure exactly what would be better, though.\n>> >\n>> > Yeah, I agree that add_path is starting to feel creaky. I don't\n>> > know what to do instead though. Changing to a top-down design\n>> > sounds like it would solve some problems while introducing others\n>> > (not to mention the amount of work and breakage involved).\n>> >\n>> Hmm... It looks the problem we ought to revise about path construction\n>> is much larger than my expectation, and uncertain for me how much works\n>> are needed.\n>>\n>> Although it might be a workaround until fundamental reconstruction,\n>> how about to have a margin of estimated cost to reject paths?\n>> Current add_path() immediately rejects lesser paths if its cost is\n>> even a little more expensive than the compared one. One the other hands,\n>\n>\n> Hmm.. I don't think so. 
Currently add_path() uses fuzzy comparisons on\n> costs of two paths, although the fuzz factor (1%) is hard coded and not\n> user-controllable.\n>\nAh, sorry, I overlooked this logic...\n\n>> I understand it is not an essential re-design of path-construction logic, and\n>> may have limitation. However, amount of works are reasonable and no side-\n>> effect. (current behavior = 0% threshold).\n>> How about your opinions?\n>>\n>\n> How's about Tom's suggestion on adding another dimension in add_path()\n> to be considered, just like how it considers paths of better sort order\n> or parallel-safe?\n>\nRobert also mentioned that it makes the comparison operation more complicated.\nIf we try to have another dimension here, a callback function in the Path node\nmay be able to tell the core optimizer whether a \"dominated path\" shall be\ndropped or not, without further complexity. It is just an idea.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 1 Aug 2019 18:28:08 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 06:28:08PM +0900, Kohei KaiGai wrote:\n>2019年8月1日(木) 16:19 Richard Guo <riguo@pivotal.io>:\n>>\n>> On Thu, Aug 1, 2019 at 2:12 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>>>\n>>> 2019年8月1日(木) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n>>> >\n>>> > Robert Haas <robertmhaas@gmail.com> writes:\n>>> > > Yeah, but I have to admit that this whole design makes me kinda\n>>> > > uncomfortable. Every time somebody comes up with a new figure of\n>>> > > merit, it increases not only the number of paths retained but also the\n>>> > > cost of comparing two paths to possibly reject one of them. A few\n>>> > > years ago, you came up with the (good) idea of rejecting some join\n>>> > > paths before actually creating the paths, and I wonder if we ought to\n>>> > > try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n>>> > > been saying, we ought to think about planning top-down with\n>>> > > memoization instead of bottom up (yeah, I know that's a huge change).\n>>> > > It just feels like the whole idea of a list of paths ordered by cost\n>>> > > breaks down when there are so many ways that a not-cheapest path can\n>>> > > still be worth keeping. Not sure exactly what would be better, though.\n>>> >\n>>> > Yeah, I agree that add_path is starting to feel creaky. I don't\n>>> > know what to do instead though. Changing to a top-down design\n>>> > sounds like it would solve some problems while introducing others\n>>> > (not to mention the amount of work and breakage involved).\n>>> >\n>>> Hmm... It looks the problem we ought to revise about path construction\n>>> is much larger than my expectation, and uncertain for me how much works\n>>> are needed.\n>>>\n>>> Although it might be a workaround until fundamental reconstruction,\n>>> how about to have a margin of estimated cost to reject paths?\n>>> Current add_path() immediately rejects lesser paths if its cost is\n>>> even a little more expensive than the compared one. 
One the other hands,\n>>\n>>\n>> Hmm.. I don't think so. Currently add_path() uses fuzzy comparisons on\n>> costs of two paths, although the fuzz factor (1%) is hard coded and not\n>> user-controllable.\n>>\n>Ah, sorry, I oversight this logic...\n>\n\nFWIW I doubt that adding a larger \"fuzz factor\" would be a reliable\nsolution, because how would you know what value is the right one? Why would\n10% be the right threshold, for example? In my experience these\nhard-coded coefficients imply behavior that's difficult to predict and\nexplain to users.\n\n>>> I understand it is not an essential re-design of path-construction logic, and\n>>> may have limitation. However, amount of works are reasonable and no side-\n>>> effect. (current behavior = 0% threshold).\n>>> How about your opinions?\n>>>\n>>\n>> How's about Tom's suggestion on adding another dimension in add_path()\n>> to be considered, just like how it considers paths of better sort order\n>> or parallel-safe?\n>>\n>Robert also mentioned it makes comparison operation more complicated.\n>If we try to have another dimension here, a callback function in Path node\n>may be able to tell the core optimizer whether \"dominated path\" shall be\n>dropped or not, without further complexity. It is just an idea.\n>\n\nI think adding a hook to add_path() allowing extensions to override the\ndecision should be OK. The chance of getting that committed in the near future\nseems much higher than for a patch that completely reworks add_path().\n\nThere's one caveat, though - AFAICS various places in the planner use\nthings like cheapest_total_path, cheapest_startup_path and even\nget_cheapest_path_for_pathkeys() which kinda assumes add_path() only\nconsiders startup/total cost. It might happen that even after keeping\nadditional paths, the planner still won't use them :-(\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 1 Aug 2019 12:24:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "2019年8月1日(木) 19:24 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n>\n> On Thu, Aug 01, 2019 at 06:28:08PM +0900, Kohei KaiGai wrote:\n> >2019年8月1日(木) 16:19 Richard Guo <riguo@pivotal.io>:\n> >>\n> >> On Thu, Aug 1, 2019 at 2:12 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> >>>\n> >>> 2019年8月1日(木) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n> >>> >\n> >>> > Robert Haas <robertmhaas@gmail.com> writes:\n> >>> > > Yeah, but I have to admit that this whole design makes me kinda\n> >>> > > uncomfortable. Every time somebody comes up with a new figure of\n> >>> > > merit, it increases not only the number of paths retained but also the\n> >>> > > cost of comparing two paths to possibly reject one of them. A few\n> >>> > > years ago, you came up with the (good) idea of rejecting some join\n> >>> > > paths before actually creating the paths, and I wonder if we ought to\n> >>> > > try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n> >>> > > been saying, we ought to think about planning top-down with\n> >>> > > memoization instead of bottom up (yeah, I know that's a huge change).\n> >>> > > It just feels like the whole idea of a list of paths ordered by cost\n> >>> > > breaks down when there are so many ways that a not-cheapest path can\n> >>> > > still be worth keeping. Not sure exactly what would be better, though.\n> >>> >\n> >>> > Yeah, I agree that add_path is starting to feel creaky. I don't\n> >>> > know what to do instead though. Changing to a top-down design\n> >>> > sounds like it would solve some problems while introducing others\n> >>> > (not to mention the amount of work and breakage involved).\n> >>> >\n> >>> Hmm... 
It looks the problem we ought to revise about path construction\n> >>> is much larger than my expectation, and uncertain for me how much works\n> >>> are needed.\n> >>>\n> >>> Although it might be a workaround until fundamental reconstruction,\n> >>> how about to have a margin of estimated cost to reject paths?\n> >>> Current add_path() immediately rejects lesser paths if its cost is\n> >>> even a little more expensive than the compared one. One the other hands,\n> >>\n> >>\n> >> Hmm.. I don't think so. Currently add_path() uses fuzzy comparisons on\n> >> costs of two paths, although the fuzz factor (1%) is hard coded and not\n> >> user-controllable.\n> >>\n> >Ah, sorry, I oversight this logic...\n> >\n>\n> FWIW I doubt adding larger \"fuzz factor\" is unlikely to be a reliable\n> solution, because how would you know what value is the right one? Why ould\n> 10% be the right threshold, for example? In my experience these these\n> hard-coded coefficients imply behavior that's difficult to predict and\n> explain to users.\n>\nAh... That's exactly hard task to explain to users.\n\n> >>> I understand it is not an essential re-design of path-construction logic, and\n> >>> may have limitation. However, amount of works are reasonable and no side-\n> >>> effect. (current behavior = 0% threshold).\n> >>> How about your opinions?\n> >>>\n> >>\n> >> How's about Tom's suggestion on adding another dimension in add_path()\n> >> to be considered, just like how it considers paths of better sort order\n> >> or parallel-safe?\n> >>\n> >Robert also mentioned it makes comparison operation more complicated.\n> >If we try to have another dimension here, a callback function in Path node\n> >may be able to tell the core optimizer whether \"dominated path\" shall be\n> >dropped or not, without further complexity. It is just an idea.\n> >\n>\n> I think adding a hook to add_path() allowing to override the decidion\n> should be OK. 
The chance of getting that committed in the near future\n> seems much higher than for a patch that completely reworks add_path().\n>\n> There's one caveat, though - AFAICS various places in the planner use\n> things like cheapest_total_path, cheapest_startup_path and even\n> get_cheapest_path_for_pathkeys() which kinda assumes add_path() only\n> considers startup/total cost. It might happen that even after keeping\n> additional paths, the planner still won't use them :-(\n>\nEven if existing code looks only at cheapest_xxx_path, I don't think that\nis problematic behavior, because those paths are still exactly the cheapest\nat their level, while upper levels may nevertheless build on more expensive\npaths from deeper levels.\nIf a hook can prevent dropping a non-cheapest path at a particular level,\nthat path will not appear as cheapest_xxx_path, but the upper-level\npath construction logic can still pick it up as a candidate input.\nIf a special discount factor applies there, the startup/total cost of the\nupper-level path may turn out cheaper in spite of the expensive input cost.\n\nIn this scenario, the hook only decides whether a dominated path node\nshall be dropped or not, so the core optimizer still compares path nodes\nusing their estimated cost values.\n\nIn my scenario, even if GpuJoin is expensive at the top level of JOIN/SCAN path\nconstruction, GpuPreAgg + GpuJoin may be cheaper than the alternatives because\nthey can exchange data in GPU device memory.\nAs long as GpuJoin is not removed from the pathlist, the extension can build its\ncustom path with a cheaper cost.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 1 Aug 2019 21:14:03 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Hello,\n\nFor implementation of the concept, this patch puts a hook on add_path\n/ add_partial_path\nto override the path removal decision by extensions, according to its\nown viewpoint.\nWe don't add any metrics other than path's cost and path keys, so\nset_cheapest() still picks\nup paths based on its cost for each depth.\nAs we are currently doing, extensions (FDW / CSP) are responsible to\nconstruct and add\npaths with reasonable cost values, then PostgreSQL optimizer chooses\nthe \"best\" path\naccording to the (self-reported) cost. On the other hands, an\nexpensive path at a particular\nlevel is not always expensive from upper viewpoint, if combination of\npath-A and path-B\nhas special optimization, like a reduction of DMA transfer between\nhost and device, or omit\nof network transfer between local and remote.\nIn these cases, extension can get a control to override a decision\nwhether old_path that\nis dominated by new_path shall be removed, or not. If old_path\nsurvived, extension can\nre-use the path at the upper level to construct a special path.\n\nBest regards,\n\n2019年8月1日(木) 21:14 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> 2019年8月1日(木) 19:24 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n> >\n> > On Thu, Aug 01, 2019 at 06:28:08PM +0900, Kohei KaiGai wrote:\n> > >2019年8月1日(木) 16:19 Richard Guo <riguo@pivotal.io>:\n> > >>\n> > >> On Thu, Aug 1, 2019 at 2:12 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > >>>\n> > >>> 2019年8月1日(木) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n> > >>> >\n> > >>> > Robert Haas <robertmhaas@gmail.com> writes:\n> > >>> > > Yeah, but I have to admit that this whole design makes me kinda\n> > >>> > > uncomfortable. Every time somebody comes up with a new figure of\n> > >>> > > merit, it increases not only the number of paths retained but also the\n> > >>> > > cost of comparing two paths to possibly reject one of them. 
A few\n> > >>> > > years ago, you came up with the (good) idea of rejecting some join\n> > >>> > > paths before actually creating the paths, and I wonder if we ought to\n> > >>> > > try to go further with that somehow. Or maybe, as Peter Geoghegan, has\n> > >>> > > been saying, we ought to think about planning top-down with\n> > >>> > > memoization instead of bottom up (yeah, I know that's a huge change).\n> > >>> > > It just feels like the whole idea of a list of paths ordered by cost\n> > >>> > > breaks down when there are so many ways that a not-cheapest path can\n> > >>> > > still be worth keeping. Not sure exactly what would be better, though.\n> > >>> >\n> > >>> > Yeah, I agree that add_path is starting to feel creaky. I don't\n> > >>> > know what to do instead though. Changing to a top-down design\n> > >>> > sounds like it would solve some problems while introducing others\n> > >>> > (not to mention the amount of work and breakage involved).\n> > >>> >\n> > >>> Hmm... It looks the problem we ought to revise about path construction\n> > >>> is much larger than my expectation, and uncertain for me how much works\n> > >>> are needed.\n> > >>>\n> > >>> Although it might be a workaround until fundamental reconstruction,\n> > >>> how about to have a margin of estimated cost to reject paths?\n> > >>> Current add_path() immediately rejects lesser paths if its cost is\n> > >>> even a little more expensive than the compared one. One the other hands,\n> > >>\n> > >>\n> > >> Hmm.. I don't think so. Currently add_path() uses fuzzy comparisons on\n> > >> costs of two paths, although the fuzz factor (1%) is hard coded and not\n> > >> user-controllable.\n> > >>\n> > >Ah, sorry, I oversight this logic...\n> > >\n> >\n> > FWIW I doubt adding larger \"fuzz factor\" is unlikely to be a reliable\n> > solution, because how would you know what value is the right one? Why ould\n> > 10% be the right threshold, for example? 
In my experience these these\n> > hard-coded coefficients imply behavior that's difficult to predict and\n> > explain to users.\n> >\n> Ah... That's exactly hard task to explain to users.\n>\n> > >>> I understand it is not an essential re-design of path-construction logic, and\n> > >>> may have limitation. However, amount of works are reasonable and no side-\n> > >>> effect. (current behavior = 0% threshold).\n> > >>> How about your opinions?\n> > >>>\n> > >>\n> > >> How's about Tom's suggestion on adding another dimension in add_path()\n> > >> to be considered, just like how it considers paths of better sort order\n> > >> or parallel-safe?\n> > >>\n> > >Robert also mentioned it makes comparison operation more complicated.\n> > >If we try to have another dimension here, a callback function in Path node\n> > >may be able to tell the core optimizer whether \"dominated path\" shall be\n> > >dropped or not, without further complexity. It is just an idea.\n> > >\n> >\n> > I think adding a hook to add_path() allowing to override the decidion\n> > should be OK. The chance of getting that committed in the near future\n> > seems much higher than for a patch that completely reworks add_path().\n> >\n> > There's one caveat, though - AFAICS various places in the planner use\n> > things like cheapest_total_path, cheapest_startup_path and even\n> > get_cheapest_path_for_pathkeys() which kinda assumes add_path() only\n> > considers startup/total cost. 
It might happen that even after keeping\n> > additional paths, the planner still won't use them :-(\n> >\n> Even if existing code looks at only cheapest_xxx_path, I don't think it is\n> a problematic behavior because these paths are exactly cheapest at a level,\n> but they may use more expensive paths in the deeper level.\n> If a hook can prevent dropping a path, not cheapest, in a particular level,\n> this path shall not appear on the cheapest_xxx_path, however, upper level\n> path construction logic can pick up these paths as a candidate of input.\n> If it has special discount factor here, the startup/total cost of the\n> upper level\n> path may have cheaper cost in spite of expensive input cost.\n>\n> In this scenario, this hook gives a decision whether dominated path-node\n> shall be dropped or not. So, core optimizer still compares path-node using\n> estimated cost value.\n>\n> In my scenario, even if GpuJoin is expensive at top level of JOIN/SCAN path\n> construction, GpuPreAgg + GpuJoin may be cheaper than others because of\n> data exchange on GPU device memory.\n> As long as GpuJoin is not removed from the pathlist, extension can build its\n> custom-path with cheaper cost.\n>\n> Best regards,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Mon, 12 Aug 2019 13:28:03 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 12:28 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> For implementation of the concept, this patch puts a hook on add_path\n> / add_partial_path\n> to override the path removal decision by extensions, according to its\n> own viewpoint.\n\nI don't think this hook is a very useful approach to this problem, and\nI'm concerned that it might have a measurable performance cost.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Oct 2019 12:19:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 12:19:06PM -0400, Robert Haas wrote:\n>On Mon, Aug 12, 2019 at 12:28 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>> For implementation of the concept, this patch puts a hook on add_path\n>> / add_partial_path\n>> to override the path removal decision by extensions, according to its\n>> own viewpoint.\n>\n>I don't think this hook is a very useful approach to this problem, and\n>I'm concerned that it might have a measurable performance cost.\n>\n\nCan you be more specific why you don't think this approach is not\nuseful? I'm not sure whether you consider all hooks to have this issue\nor just this proposed one.\n\nAs for the performance impact, I think that's not difficult to measure.\nI'd be surprised if it has measurable impact on cases with no hook\ninstalled (there's plenty more expensive stuff going on). Of course, it\nmay have some impact for cases when the hook retains many more paths\nand/or does something expensive, but that's kinda up to whoever writes\nthat particular hook. I think the assumption is that the savings from\nbuilding better plans far outweight that extra cost.\n\nThat does not necessarily mean the proposed hook is correct - I only\nbriefly looked at it, and it's not clear to me why would it be OK to\ncall the hook for remove_old=true but not also for accept_new=false? How\ndo we know whether the \"better\" path arrives first?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 6 Oct 2019 21:23:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Sun, Oct 6, 2019 at 3:23 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> >I don't think this hook is a very useful approach to this problem, and\n> >I'm concerned that it might have a measurable performance cost.\n>\n> Can you be more specific why you don't think this approach is not\n> useful? I'm not sure whether you consider all hooks to have this issue\n> or just this proposed one.\n\nI'll start by admitting that that remark was rather off-the-cuff. On\nfurther reflection, add_path() is not really a crazy place to try to\nadd a new dimension of merit, which is really what KaiGai wants to do\nhere. On the other hand, as Tom and I noted upthread, that system is\ncreaking under its weight as it is, and making it extensible seems\nlike it might therefore be a bad idea - specifically, because it might\nslow down planning quite a bit on large join problems, either because\nof the additional cost testing the additional dimension of merit or\nbecause of the additional cost of dealing with the extra paths that\nget kept.\n\nIt is maybe worth noting that join/aggregate pushdown for FDWs has a\nsomewhat similar problem, and we didn't solve it this way. Should we\nhave? Maybe it would have worked better and been less buggy. But\nslowing down all planning for the benefit of that feature also sounds\nbad. I think any changes in this area need careful though.\n\n> As for the performance impact, I think that's not difficult to measure.\n> I'd be surprised if it has measurable impact on cases with no hook\n> installed (there's plenty more expensive stuff going on). Of course, it\n> may have some impact for cases when the hook retains many more paths\n> and/or does something expensive, but that's kinda up to whoever writes\n> that particular hook. I think the assumption is that the savings from\n> building better plans far outweight that extra cost.\n\nYou might be right, but add_path() is a pretty darn hot path in the\nplanner. 
You probably wouldn't see a significant overhead on a\nsingle-table query, but on a query with many tables I would not be\nsurprised if even the overhead of an empty function call was\nmeasurable. But I could be wrong, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 6 Oct 2019 22:00:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Oct 6, 2019 at 3:23 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> Can you be more specific why you don't think this approach is not\n>> useful? I'm not sure whether you consider all hooks to have this issue\n>> or just this proposed one.\n\n> I'll start by admitting that that remark was rather off-the-cuff. On\n> further reflection, add_path() is not really a crazy place to try to\n> add a new dimension of merit, which is really what KaiGai wants to do\n> here. On the other hand, as Tom and I noted upthread, that system is\n> creaking under its weight as it is, and making it extensible seems\n> like it might therefore be a bad idea - specifically, because it might\n> slow down planning quite a bit on large join problems, either because\n> of the additional cost testing the additional dimension of merit or\n> because of the additional cost of dealing with the extra paths that\n> get kept.\n\nFWIW, I think that the patch as proposed would most certainly have\nnasty performance problems. To make intelligent decisions, the\nhook function would basically have to re-do a large fraction of\nthe calculations that add_path itself does. It's also fairly\nunclear (or at least undocumented) how things would work if multiple\npath providers wanted to make use of the hook; except that that\nperformance issue would get worse, as each of them redoes that work.\n\nAlso, as Robert says, the real goal here is to allow some additional\ncomparison dimension to be considered. Simply preventing pre-existing\npaths from being removed isn't sufficient for that --- you have to be\nable to change the accept_new decisions as well, as Tomas already\nworried about. 
But if we phrase that as an additional hook that\nonly concerns itself with accept_new, then the duplicate-calculation\nproblem gets really substantially worse: I think actual use of a hook\nlike that would require reconsidering the entire existing path list.\n\nI'm not very sure what a good design for adding new comparison dimensions\nwould look like, but I think this isn't it.\n\nWe could imagine, maybe, that a hook for the purpose of allowing an\nadditional dimension to be considered would be essentially a path\ncomparison function, returning -1, +1, or 0 depending on whether\npath A is dominated by path B (on this new dimension), dominates\npath B, or neither. However, I do not see how multiple extensions\ncould usefully share use of such a hook.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Oct 2019 09:56:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Mon, Oct 7, 2019 at 9:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We could imagine, maybe, that a hook for the purpose of allowing an\n> additional dimension to be considered would be essentially a path\n> comparison function, returning -1, +1, or 0 depending on whether\n> path A is dominated by path B (on this new dimension), dominates\n> path B, or neither. However, I do not see how multiple extensions\n> could usefully share use of such a hook.\n\nTypically, we support hook-sharing mostly by ad-hoc methods: when\ninstalling a hook, you remember the previous value of the function\npointer, and arrange to call that function yourself. That's not a\ngreat method. One problem with it is that you can't reasonably\nuninstall a hook function, which would be a nice thing to be able to\ndo. We could do better by reusing the technique from on_shmem_exit or\nRegisterXactCallbacks: keep an array of pointers, and call them in\norder. I wish we'd retrofit all of our hooks to work more like that;\nbeing able to unload shared libraries would be a good feature.\n\nBut if we want to stick with the ad-hoc method, we could also just\nhave four possible return values: dominates, dominated-by, both, or\nneither.\n\nStill, this doesn't feel like very scalable paradigm, because this\ncode gets called a lot. Unless both calling the hook functions and\nthe hook functions themselves are dirt-cheap, it's going to hurt, and\nTBH, I wonder if even the cost of detecting that the hook is unused\nmight be material.\n\nI wonder whether we might get a nicer solution to this problem if our\nmethod of managing paths looked less something invented by a LISP\nhacker, but I don't have a specific proposal in mind.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Oct 2019 10:44:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 7, 2019 at 9:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We could imagine, maybe, that a hook for the purpose of allowing an\n>> additional dimension to be considered would be essentially a path\n>> comparison function, returning -1, +1, or 0 depending on whether\n>> path A is dominated by path B (on this new dimension), dominates\n>> path B, or neither. However, I do not see how multiple extensions\n>> could usefully share use of such a hook.\n\n> ... if we want to stick with the ad-hoc method, we could also just\n> have four possible return values: dominates, dominated-by, both, or\n> neither.\n\nRight, and then *each* user of the hook would have to be prepared\nto merge its result with the result from the previous user(s),\nwhich is a complicated bit of logic that somebody would surely\nget wrong, especially if (a) there's no prototype to copy from\nand (b) testing only their own extension would not exercise it.\n\n[ thinks a bit... ] Maybe that could be improved if we can express\nthe result as a bitmask, defined in such a way that OR'ing (or maybe\nAND'ing? haven't worked it out) the results from different comparisons\ndoes the right thing.\n\n> Still, this doesn't feel like very scalable paradigm, because this\n> code gets called a lot. Unless both calling the hook functions and\n> the hook functions themselves are dirt-cheap, it's going to hurt, and\n> TBH, I wonder if even the cost of detecting that the hook is unused\n> might be material.\n\nYeah, I'm worried about that too. This is quite a hot code path,\nand so I don't think we can just assume that changes are free.\nStill, if we could come up with a cleaner paradigm, maybe we could\nbuy back a few cycles in the core-code comparison logic, and thus\nnot come out behind from adding a hook.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Oct 2019 11:57:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "2019年10月7日(月) 23:44 Robert Haas <robertmhaas@gmail.com>:\n>\n> On Mon, Oct 7, 2019 at 9:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > We could imagine, maybe, that a hook for the purpose of allowing an\n> > additional dimension to be considered would be essentially a path\n> > comparison function, returning -1, +1, or 0 depending on whether\n> > path A is dominated by path B (on this new dimension), dominates\n> > path B, or neither. However, I do not see how multiple extensions\n> > could usefully share use of such a hook.\n>\n> Typically, we support hook-sharing mostly by ad-hoc methods: when\n> installing a hook, you remember the previous value of the function\n> pointer, and arrange to call that function yourself. That's not a\n> great method. One problem with it is that you can't reasonably\n> uninstall a hook function, which would be a nice thing to be able to\n> do. We could do better by reusing the technique from on_shmem_exit or\n> RegisterXactCallbacks: keep an array of pointers, and call them in\n> order. I wish we'd retrofit all of our hooks to work more like that;\n> being able to unload shared libraries would be a good feature.\n>\n> But if we want to stick with the ad-hoc method, we could also just\n> have four possible return values: dominates, dominated-by, both, or\n> neither.\n>\nIt seems to me this is a bit different from the purpose of this hook.\nI never intend to overwrite existing cost-based decision by this hook.\nThe cheapest path at a particular level is the cheapest one regardless\nof the result of this hook. However, this hook enables to prevent\nimmediate elimination of a particular path that we (extension) want to\nuse it later and may have potentially cheaper cost (e.g; a pair of\ncustom GpuAgg + GpuJoin by reduction of DMA cost).\n\nSo, I think expected behavior when multiple extensions would use\nthis hook is clear. If any of call-chain on the hook wanted to preserve\nthe path, it should be kept on the pathlist. 
(Of couse, it is not a cheapest\none)\n\n> Still, this doesn't feel like very scalable paradigm, because this\n> code gets called a lot. Unless both calling the hook functions and\n> the hook functions themselves are dirt-cheap, it's going to hurt, and\n> TBH, I wonder if even the cost of detecting that the hook is unused\n> might be material.\n>\n> I wonder whether we might get a nicer solution to this problem if our\n> method of managing paths looked less something invented by a LISP\n> hacker, but I don't have a specific proposal in mind.\n>\nOne other design in my mind is, add a callback function pointer on Path\nstructure. Only if Path structure has valid pointer (not-NULL), add_path()\ncalls extension's own logic to determine whether the Path can be\neliminated now.\nThis design may minimize the number of callback invocation.\n\nOne potential downside of this approach is, function pointer makes\nhard to implement readfuncs of Path nodes, even though we have\nno read handler of them, right now.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Tue, 8 Oct 2019 01:04:45 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> 2019年10月7日(月) 23:44 Robert Haas <robertmhaas@gmail.com>:\n>> But if we want to stick with the ad-hoc method, we could also just\n>> have four possible return values: dominates, dominated-by, both, or\n>> neither.\n\n> It seems to me this is a bit different from the purpose of this hook.\n> I never intend to overwrite existing cost-based decision by this hook.\n> The cheapest path at a particular level is the cheapest one regardless\n> of the result of this hook. However, this hook enables to prevent\n> immediate elimination of a particular path that we (extension) want to\n> use it later and may have potentially cheaper cost (e.g; a pair of\n> custom GpuAgg + GpuJoin by reduction of DMA cost).\n\nI do not think this will work for the purpose you wish, for the reason\nTomas already pointed out: if you don't also modify the accept_new\ndecision, there's no guarantee that the path you want will get into\nthe relation's path list in the first place.\n\nAnother problem with trying to manage this only by preventing removals\nis that it is likely to lead to keeping extra paths and thereby wasting\nplanning effort. What if there's more than one path having the property\nyou want to keep?\n\nGiven the way that add_path works, you really have to think about the\nproblem as adding an additional figure-of-merit or comparison dimension.\nAnything else is going to have some unpleasant behaviors.\n\n> One other design in my mind is, add a callback function pointer on Path\n> structure. Only if Path structure has valid pointer (not-NULL), add_path()\n> calls extension's own logic to determine whether the Path can be\n> eliminated now.\n\nWhile I'm not necessarily against having a per-path callback, I don't\nsee how it helps us solve this problem, especially not in the presence\nof multiple extensions trying to add different paths. 
I do not wish\nto see things ending up with extensions saying \"don't delete my path\nno matter what\", because that's going to be costly in terms of later\nplanning effort. But I'm not seeing how this wouldn't degenerate to\npretty much that behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Oct 2019 12:56:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "2019年10月8日(火) 1:56 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Kohei KaiGai <kaigai@heterodb.com> writes:\n> > 2019年10月7日(月) 23:44 Robert Haas <robertmhaas@gmail.com>:\n> >> But if we want to stick with the ad-hoc method, we could also just\n> >> have four possible return values: dominates, dominated-by, both, or\n> >> neither.\n>\n> > It seems to me this is a bit different from the purpose of this hook.\n> > I never intend to overwrite existing cost-based decision by this hook.\n> > The cheapest path at a particular level is the cheapest one regardless\n> > of the result of this hook. However, this hook enables to prevent\n> > immediate elimination of a particular path that we (extension) want to\n> > use it later and may have potentially cheaper cost (e.g; a pair of\n> > custom GpuAgg + GpuJoin by reduction of DMA cost).\n>\n> I do not think this will work for the purpose you wish, for the reason\n> Tomas already pointed out: if you don't also modify the accept_new\n> decision, there's no guarantee that the path you want will get into\n> the relation's path list in the first place.\n>\nAh, it is right, indeed. We may need to have a variation of add_path()\nthat guarantee to preserve a path newly added.\nWe may be utilize the callback to ask extension whether it allows the\nnew path to be dominated by the existing cheapest path also.\n\n> Another problem with trying to manage this only by preventing removals\n> is that it is likely to lead to keeping extra paths and thereby wasting\n> planning effort. What if there's more than one path having the property\n> you want to keep?\n>\nMy assumption is that upper path tries to walk on the path-list, not only\ncheapest one, to construct upper paths with lesser paths if they are capable\nfor special optimization.\nOf course, it is not a cheap way, however, majority of path-nodes are not\ninterested in the lesser paths, as its sub-path. 
Only limited number of\nextension will walk on the lesser path also?\nA separated list is probably an idea, to keep the lesser paths. It is not\nreferenced at the usual path, however, extension can walk on the list\nto lookup another opportunity more than the cheapest path.\nIn this case, length of the path_list is not changed.\n\nBest regards,\n--\nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 10 Oct 2019 00:16:03 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Hi,\n\nI wonder what is the status of this patch/thread. There was quite a bit\nof discussion about possible approaches, but we currently don't have any\npatch for review, AFAICS. Not sure what's the plan?\n\nSo \"needs review\" status seems wrong, and considering we haven't seen\nany patch since August (so in the past two CFs) I propose marking it as\nreturned with feedback. Any objections?\n\nFWIW I think we may well need a more elaborate logic which paths to\nkeep, but I'd prefer re-adding it back to the CF when we actually have a\nnew patch.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 11 Jan 2020 03:01:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Hi,\n\nThe proposition I posted at 10th-Oct proposed to have a separate list to retain\nlesser paths not to expand the path_list length, but here are no comments by\nothers at that time.\nIndeed, the latest patch has not been updated yet.\nPlease wait for a few days. I'll refresh the patch again.\n\nThanks,\n\n2020年1月11日(土) 11:01 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n>\n> Hi,\n>\n> I wonder what is the status of this patch/thread. There was quite a bit\n> of discussion about possible approaches, but we currently don't have any\n> patch for review, AFAICS. Not sure what's the plan?\n>\n> So \"needs review\" status seems wrong, and considering we haven't seen\n> any patch since August (so in the past two CFs) I propose marking it as\n> returned with feedback. Any objections?\n>\n> FWIW I think we may well need a more elaborate logic which paths to\n> keep, but I'd prefer re-adding it back to the CF when we actually have a\n> new patch.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Sat, 11 Jan 2020 17:07:11 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 05:07:11PM +0900, Kohei KaiGai wrote:\n>Hi,\n>\n>The proposition I posted at 10th-Oct proposed to have a separate list to retain\n>lesser paths not to expand the path_list length, but here are no comments by\n>others at that time.\n>Indeed, the latest patch has not been updated yet.\n>Please wait for a few days. I'll refresh the patch again.\n>\n\nOK, thanks for the update. I've marked the patch as \"waiting on author\".\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 11 Jan 2020 13:27:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "The v2 patch is attached.\n\nThis adds two dedicated lists on the RelOptInfo to preserve lesser paths\nif extension required to retain the path-node to be removed in usual manner.\nThese lesser paths are kept in the separated list, so it never expand the length\nof pathlist and partial_pathlist. That was the arguable point in the discussion\nat the last October.\n\nThe new hook is called just before the path-node removal operation, and\ngives extension a chance for extra decision.\nIf extension considers the path-node to be removed can be used in the upper\npath construction stage, they can return 'true' as a signal to preserve this\nlesser path-node.\nIn case when same kind of path-node already exists in the preserved_pathlist\nand the supplied lesser path-node is cheaper than the old one, extension can\nremove the worse one arbitrarily to keep the length of preserved_pathlist.\n(E.g, PG-Strom may need one GpuJoin path-node either pathlist or preserved-\npathlist for further opportunity of combined usage with GpuPreAgg path-node.\nIt just needs \"the best GpuJoin path-node\" somewhere, not two or more.)\n\nBecause PostgreSQL core has no information which preserved path-node can\nbe removed, extensions that uses path_removal_decision_hook() has responsibility\nto keep the length of preserved_(partial_)pathlist reasonable.\n\n\nBTW, add_path() now removes the lesser path-node by pfree(), not only detach\nfrom the path-list. (IndexPath is an exception)\nDoes it really make sense? It only releases the path-node itself, so may not\nrelease entire objects. So, efficiency of memory usage is limited. And\nForeignScan\n/ CustomScan may references the path-node to be removed. 
It seems to me here\nis no guarantee lesser path-nodes except for IndexPath nodes are safe\nto release.\n\nBest regards,\n\n2020年1月11日(土) 21:27 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n>\n> On Sat, Jan 11, 2020 at 05:07:11PM +0900, Kohei KaiGai wrote:\n> >Hi,\n> >\n> >The proposition I posted at 10th-Oct proposed to have a separate list to retain\n> >lesser paths not to expand the path_list length, but here are no comments by\n> >others at that time.\n> >Indeed, the latest patch has not been updated yet.\n> >Please wait for a few days. I'll refresh the patch again.\n> >\n>\n> OK, thanks for the update. I've marked the patch as \"waiting on author\".\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Tue, 14 Jan 2020 00:46:02 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "> On Tue, Jan 14, 2020 at 12:46:02AM +0900, Kohei KaiGai wrote:\n> The v2 patch is attached.\n>\n> This adds two dedicated lists on the RelOptInfo to preserve lesser paths\n> if extension required to retain the path-node to be removed in usual manner.\n> These lesser paths are kept in the separated list, so it never expand the length\n> of pathlist and partial_pathlist. That was the arguable point in the discussion\n> at the last October.\n>\n> The new hook is called just before the path-node removal operation, and\n> gives extension a chance for extra decision.\n> If extension considers the path-node to be removed can be used in the upper\n> path construction stage, they can return 'true' as a signal to preserve this\n> lesser path-node.\n> In case when same kind of path-node already exists in the preserved_pathlist\n> and the supplied lesser path-node is cheaper than the old one, extension can\n> remove the worse one arbitrarily to keep the length of preserved_pathlist.\n> (E.g, PG-Strom may need one GpuJoin path-node either pathlist or preserved-\n> pathlist for further opportunity of combined usage with GpuPreAgg path-node.\n> It just needs \"the best GpuJoin path-node\" somewhere, not two or more.)\n>\n> Because PostgreSQL core has no information which preserved path-node can\n> be removed, extensions that uses path_removal_decision_hook() has responsibility\n> to keep the length of preserved_(partial_)pathlist reasonable.\n\nHi,\n\nThanks for the patch! I had a quick look at it and have a few questions:\n\n* What would be the exact point/hook at which an extension can use\n preserved pathlists? I guess it's important, since I can imagine it's\n important for one of the issues mentioned in the thread about such an\n extension have to re-do significant part of the calculations from\n add_path.\n\n* Do you have any benchmark results with some extension using this\n hook? 
The idea with another pathlist of \"discarded\" paths sounds like\n a lightweight solution, and indeed I've performed few tests with two\n workloads (simple queries, queries with joins of 10 tables) and the\n difference between master and patched versions is rather small (no\n stable difference for the former, couple of percent for the latter).\n But it's of course with an empty hook, so it would be good to see\n other benchmarks as well.\n\n* Does it make sense to something similar with add_path_precheck,\n which also in some situations excluding paths?\n\n* This part sounds dangerous for me:\n\n> Because PostgreSQL core has no information which preserved path-node can\n> be removed, extensions that uses path_removal_decision_hook() has responsibility\n> to keep the length of preserved_(partial_)pathlist reasonable.\n\n since an extension can keep limited number of paths in the list, but\n then the same hook could be reused by another extension which will\n also try to limit such paths, but together they'll explode.\n\n\n",
"msg_date": "Thu, 5 Nov 2020 16:41:31 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On 11/5/20 10:41 AM, Dmitry Dolgov wrote:\n>> On Tue, Jan 14, 2020 at 12:46:02AM +0900, Kohei KaiGai wrote:\n>> The v2 patch is attached.\n> \n> Thanks for the patch! I had a quick look at it and have a few questions:\nKaiGai, any thoughts on Dmitry's questions?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 5 Mar 2021 11:33:18 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "2020年11月6日(金) 0:40 Dmitry Dolgov <9erthalion6@gmail.com>:\n>\n> > On Tue, Jan 14, 2020 at 12:46:02AM +0900, Kohei KaiGai wrote:\n> > The v2 patch is attached.\n> >\n> > This adds two dedicated lists on the RelOptInfo to preserve lesser paths\n> > if extension required to retain the path-node to be removed in usual manner.\n> > These lesser paths are kept in the separated list, so it never expand the length\n> > of pathlist and partial_pathlist. That was the arguable point in the discussion\n> > at the last October.\n> >\n> > The new hook is called just before the path-node removal operation, and\n> > gives extension a chance for extra decision.\n> > If extension considers the path-node to be removed can be used in the upper\n> > path construction stage, they can return 'true' as a signal to preserve this\n> > lesser path-node.\n> > In case when same kind of path-node already exists in the preserved_pathlist\n> > and the supplied lesser path-node is cheaper than the old one, extension can\n> > remove the worse one arbitrarily to keep the length of preserved_pathlist.\n> > (E.g, PG-Strom may need one GpuJoin path-node either pathlist or preserved-\n> > pathlist for further opportunity of combined usage with GpuPreAgg path-node.\n> > It just needs \"the best GpuJoin path-node\" somewhere, not two or more.)\n> >\n> > Because PostgreSQL core has no information which preserved path-node can\n> > be removed, extensions that uses path_removal_decision_hook() has responsibility\n> > to keep the length of preserved_(partial_)pathlist reasonable.\n>\n> Hi,\n>\n> Thanks for the patch! I had a quick look at it and have a few questions:\n>\nSorry for the very late response. It's my oversight.\n\n> * What would be the exact point/hook at which an extension can use\n> preserved pathlists? 
I guess it's important, since I can imagine it's\n> important for one of the issues mentioned in the thread about such an\n> extension have to re-do significant part of the calculations from\n> add_path.\n>\nset_join_pathlist_hook and create_upper_paths_hook\n\nFor example, even if GpuPreAgg may be able to generate cheaper path\nwith GpuJoin result, make_one_rel() may drop GpuJoin results due to\nits own cost estimation. In this case, if lesser GpuJoin path would be\npreserved somewhere, the extension invoked by create_upper_paths_hook\ncan make GpuPreAgg path with GpuJoin sub-path; that can reduce\ndata transfer between CPU and GPU.\n\n> * Do you have any benchmark results with some extension using this\n> hook? The idea with another pathlist of \"discarded\" paths sounds like\n> a lightweight solution, and indeed I've performed few tests with two\n> workloads (simple queries, queries with joins of 10 tables) and the\n> difference between master and patched versions is rather small (no\n> stable difference for the former, couple of percent for the latter).\n> But it's of course with an empty hook, so it would be good to see\n> other benchmarks as well.\n>\nNot yet. And, an empty hook will not affect so much.\n\nEven if the extension uses the hook, it shall be more lightweight than\nits own alternative implementation. In case of PG-Strom, it also saves\nGpu-related paths in its own hash-table, then we look at the hash-table\nalso to find out the opportunity to merge multiple GPU invocations into\nsingle invocation.\n\n> * Does it make sense to something similar with add_path_precheck,\n> which also in some situations excluding paths?\n>\nThis logic allows to skip the paths creation that will obviously have\nexpensive cost. Its decision is based on the cost estimation.\n\nThe path_removal_decision_hook also gives extensions a chance to\npreserve pre-built paths that can be used later even if cost is not best.\nThis decision is not only based on the cost. 
In my expectations, it allows\nto preserve the best path in the gpu related ones.\n\n> * This part sounds dangerous for me:\n>\n> > Because PostgreSQL core has no information which preserved path-node can\n> > be removed, extensions that uses path_removal_decision_hook() has responsibility\n> > to keep the length of preserved_(partial_)pathlist reasonable.\n>\n> since an extension can keep limited number of paths in the list, but\n> then the same hook could be reused by another extension which will\n> also try to limit such paths, but together they'll explode.\n>\nIf Path node has a flag to indicate whether it is referenced by any other\nupper node, we can simplify the check whether it is safe to release.\nIn the current implementation, the lesser paths except for IndexPath are\nreleased at the end of add_path.\n\nOn the other hand, I'm uncertain whether the pfree(new_path) at the tail\nof add_path makes sense on the modern hardware, because they allow to\nrecycle just small amount of memory, then entire memory consumption\nby the optimizer shall be released by MemoryContext mechanism.\nIf add_path does not release path-node, the above portion is much simpler.\n\nBest regards,\n--\nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Sat, 6 Mar 2021 18:50:13 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 06:50:13PM +0900, Kohei KaiGai wrote:\n> \n> On the other hand, I'm uncertain whether the pfree(new_path) at the tail\n> of add_path makes sense on the modern hardware, because they allow to\n> recycle just small amount of memory, then entire memory consumption\n> by the optimizer shall be released by MemoryContext mechanism.\n> If add_path does not release path-node, the above portion is much simpler.\n> \n\nHi Kaigai-san,\n\nDo you have an updated patch? Please feel free to resubmit to next CF.\nCurrent CF entry has been marked as RwF.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:24:47 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "On Wed, 31 Jul 2019 at 11:45, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 31, 2019 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What you'd want to do for something like the above, I think, is to\n> > have some kind of figure of merit or other special marking for paths\n> > that will have some possible special advantage in later planning\n> > steps. Then you can teach add_path that that's another dimension it\n> > should consider, in the same way that paths with different sort orders\n> > or parallizability attributes don't dominate each other.\n>\n> Yeah, but I have to admit that this whole design makes me kinda\n> uncomfortable. Every time somebody comes up with a new figure of\n> merit, it increases not only the number of paths retained but also the\n> cost of comparing two paths to possibly reject one of them.\n\nBut this is a fundamental problem with having lots of possible reasons\na path might be good. Not a problem with the algorithm.\n\nI'm imagining that you're both right. If we had some sort of way to\nlook at the shape of the query and make decisions early on about what\nfigures of merit might be relevant then we might be able to pick just\na few. Sort of like how we currently only check paths that match some\njoin or other query feature.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Sat, 23 Oct 2021 16:02:40 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Hi,\n\nWhile researching the viability to change Citus' planner in a way to\ntake more advantage of the postgres planner I ran into the same\nlimitations in add_path as described on this thread. For Citus we are\ninvestigating a possibility to introduce Path nodes that describe the\noperation of transer tuples over the network. Due to the pruning rules\nin add_path we currently lose many paths that have great opportunities\nof future optimizations.\n\nHaving looked at the patches provided I don't think that adding a new\nList to the Path structure where we retain 'lesser' paths is the right\napproach. First of all, this would require extension developers to\ninterpret all these paths, where many would dominate others on cost,\nsorting, parameterization etc.\n\nInstead I like the approach suggested by both Robert Haas and Tom Lane.\n> have some kind of figure of merit or other special marking for paths\n> that will have some possible special advantage in later planning\n> steps.\n\nThis is well in line with how the current logic works in add_path\nwhere cost, sorting and parameterization, rowcount and parallel safety\nare dimensions on which paths are compared. IMHO extensions should be\nable to add dimensions of their interest to this list of current\ndimensions used.\n\nThe thoughts Tom Lane expressed earlier struc a chord with me.\n> [ thinks a bit... ] Maybe that could be improved if we can express\n> the result as a bitmask, defined in such a way that OR'ing (or maybe\n> AND'ing? haven't worked it out) the results from different comparisons\n> does the right thing.\n\nAttached you will find 3 patches that implement a way for extensions\nto introduce 'a figure of merit' by the means of path comparisons.\n - The first patch refactors the decision logic out of the forloop\ninto their own function as to make it easier to reason about what is\nbeing compared. 
This refactor also changes the direction of some\nif-statements as to provide clear early decision points.\n - The second patch rewrites the original if/else/switch tree into 5\nlogical or operations of a bitmask expressing the comparison between\npaths, together with early escapes once we know the paths are\ndifferent. To keep the original logic there is a slight deviation from\nsimply or-ing 5 comparisons. After comparing cost, sorting and\nparameterization we only continue or-ing rows and parallelism if the\npaths are leaning either to path1 (new_path) or path2 (old_path). If\nthe first 3 comparisons result in the paths being equal the original\ncode prefers parallel paths over paths with less rows.\n - The third and last path builds on the logical or operations\nintroduced in the middle patch. After the first three dimensions\npostgres compares paths on we allow extensions to compare paths in the\nsame manner. Their result gets merged into the compounded comparison.\n\nTo come to the three patches above I have decomposed the orignal\nbehaviour into 3 possible decisions add_path can take per iteration in\nthe loop. It either REJECTS the new path, DOMINATES the old path or\nCONTINUES with the next path in the list. The decision for any of the\n3 actions is based on 6 different input parameters:\n - cost std fuzz factor\n - sort order / pathkeys\n - parameterizations / bitmap subset of required outer relid's\n - row count\n - parallel safety\n - cost smaller fuzz factor\n\nTo verify the correct decisions being made in the refactor of the\nsecond patch I modelled both implementations in a scripting language\nand passed in all possible comparisons for the six dimensions above.\nWith the change of behaviour after the first 3 dimensions I came to an\nexact one-to-one mapping of the decisions being made before and after\npatch 2.\n\nThroughout this thread an emphasis on performance has been expressed\nby many participants. I want to acknowledge their stance. 
Due to the\nnature of the current planner the code in add_path gets invoked on an\nexponential scale when the number of joins increases and extra paths\nare retained. Especially with my proposed patch being called inside\nthe loop where paths are being compared makes that the hook gets\ncalled ever more often compared to the earlier proposed patches on\nthis thread.\n\nTo reason about the performance impact of a patch in this area I think\nwe need to get to a mutual understanding and agreement on how to\nevaluate the performance.\n\nGiven many if not most installations will run without any hook\ninstalled in this area we should aim for minimal to no measurable\noverhead of the code without a hook installed. This is also why I made\nsure, via modelling of the decision logic in a scripting language the\nbehaviour of which paths are retained is not changed with these\npatches when no hook is installed.\n\nWhen an extension needs to add a dimension to the comparison of paths\nthe impact of this patch is twofold:\n - A dynamic function gets invoked to compare every path to every\nother path. Both the dynamic function call and the logics to compare\nthe paths will introduce extra time spent in add_path\n - A hook in this area will by definition always retain more paths.\nThis is just the nature of how the comparison of paths in this area\nworks.\n\nBoth of the dimensions above will make that extensions requiring this\nhook to retain more paths will have to think carefully about the work\nthey do, and on how many dimensions paths are compared in the end. As\nTomas Vondra pointed out earlier in this thread:\n> I think the assumption is that the savings from\n> building better plans far outweight that extra cost.\n\nFor Citus this translates into less network traffic by having more\noptimization possibilities of pushing down expensive and row reducing\noperations like joins and groupings, etc to the nodes hosting the\ndata. 
For PG-Strom this translates into amortising the DMA transfer\ncost between host and GPU. Due to the gravity of data we always prefer\nextra planning time for complex queries over transferring many tuples.\nFrankly, our planner is already introducing much overhead currently\nand we expect to reduce this overhead by having the possibility to\nhook into postgres in areas like this.\n\nTo understand the impact these patches have when no hook is installed\nI performed the following two comparisons.\n - Run a benchmark on a patched and a non-patched version of postgres\n14.1. I used HammerDB for this as we have tooling around to quickly\nrun these. Given the execution time dominates such a benchmark I don't\nthink it gives good insights. At Least it shows that both versions\nperform in the same range of performance.\n - Analysed the machine code on Ubuntu 20.04 x86_64. The machine code\nis very much alike. The patched version has slightly less jumps and\nslightly less instructions. My fear was that the excessive breaking\ninto small functions and switches to translate enums into each other\nwould cause many layers of indirection to be introduced. Instead all\ncode gets inlined when compiled with the same configuration as the\n14.1 release.\n\nI don't think the above two are enough to conclude this patch doesn't\nintroduce overhead when the hook is empty. I invite everyone with an\ninterest in this area to perform their own measurements and report\ntheir findings.\n\nBest,\n-- Nils\nCitus Data / Microsoft",
"msg_date": "Fri, 24 Dec 2021 15:14:28 +0100",
"msg_from": "Nils Dijk <me@thanod.nl>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Nils Dijk <me@thanod.nl> writes:\n> Attached you will find 3 patches that implement a way for extensions\n> to introduce 'a figure of merit' by the means of path comparisons.\n\nI took a brief look through this. I quite like your idea of expressing\nPathComparison merging as an OR of suitably-chosen values. I do have\nsome minor criticisms of the patch, which potentially make for small\nperformance improvements so I've not bothered yet to try to measure\nperformance.\n\n* I think you could do without PATH_COMPARISON_MASK. Use of the enum\nalready implies that we only allow valid values of the enum, and given\nthat the inputs of path_comparison_combine are valid, so is the output\nof the \"|\". There's no need to expend cycles masking it, and if there\nwere it would be dubious whether the masking restores correctness.\nWhat *is* needed, though, is a comment pointing out that the values of\nPathComparison are chosen with malice aforethought to cause ORing of\nthem to give semantically-correct results.\n\n* I'd also drop enum AddPathDecision and add_path_decision(),\nand just let add_path() do what it must based on the PathComparison\nresult. I don't think the extra layer of mapping adds anything,\nand it's probably costing some cycles.\n\n* Perhaps it's worth explicitly marking the new small functions\nas \"static inline\"? Probably modern compilers will do that without\nbeing prompted, but we might as well be clear about what we want.\n\n* Some more attention to comments is needed, eg the header comment\nfor compare_path_costs_fuzzily still refers to COSTS_DIFFERENT.\n(However, on the whole I'm not sure s/COSTS_DIFFERENT/PATHS_DIFFERENT/\netc is an improvement. Somebody looking at this code for the first\ntime would probably think the answer should always be that the two\npaths are \"different\", because one would hope we're not generating\nredundant identical paths. 
What we want to say is that the paths'\nfigures of merit are different; but \"PATH_FIGURES_OF_MERIT_DIFFERENT\"\nis way too verbose. Unless somebody has got a good proposal for\na short name I'd lean to sticking with the COSTS_XXX names.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Jul 2022 15:10:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "... BTW, a question I think we're going to be forced to face up to\nif we put a hook here is: is path cost/FOM comparison guaranteed\ntransitive? That is, if we find that path A dominates path B\nand that path B dominates path C, is it guaranteed that comparing\nA directly to C would conclude that A dominates C? add_path()\ncurrently assumes that such transitivity holds, because if the\nnew_path dominates an old_path, we immediately discard old_path.\nThis is unjustifiable if new_path later gets rejected because it\nis dominated by some later list element: we just lost a path and\nreplaced it with nothing. (Presumably, that can only happen if\nneither existing list entry dominates the other.)\n\nTBH, I'm not entirely convinced that transitivity is guaranteed\neven today, now that we've got things like parallel safety in\nthe mix. For sure I doubt that we should assume that injecting\nmultiple hook functions each with their own agendas will result\nin transitivity-preserving comparisons.\n\nThe most honest way to deal with that would be to convert\nadd_path to a two-pass implementation. In the first pass,\nsee if new_path is dominated by any existing list entry;\nif so, stop immediately, discarding new_path. If we get\nthrough that, we will add new_path, so now identify which\nold paths it dominates and remove them. We could avoid\nrunning path_compare() twice by retaining state from the\nfirst pass to the second, but that's hardly free. On the\nother hand, if you assume that most add_path calls end in\nrejecting new_path, having a streamlined route to determining\nthat could be a win.\n\nA possibly-cheaper answer could be to say that if new_path is\nfound to dominate any old_path, add it, even if it's later found\nto be dominated. This'd only require some rejiggering of the way\nthe accept_new flag is tracked, I think. 
On the other hand this\nway might be penny wise and pound foolish, if it ends in keeping\nmore paths than we really need.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Jul 2022 16:05:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-31 16:05:20 -0400, Tom Lane wrote:\n> Thoughts?\n\nAs the patch got some feedback ~2 months ago, I'm updating the status to\nwaiting-for-author.\n\nMinor note: cfbot complains about a cpluspluscheck violation:\n\n[12:24:50.124] time make -s cpluspluscheck EXTRAFLAGS='-fmax-errors=10'\n[12:25:17.757] In file included from /tmp/cpluspluscheck.AoEDdi/test.cpp:3:\n[12:25:17.757] /tmp/cirrus-ci-build/src/include/optimizer/pathnode.h: In function ‘PathComparison path_comparison_combine(PathComparison, PathComparison)’:\n[12:25:17.757] /tmp/cirrus-ci-build/src/include/optimizer/pathnode.h:39:19: error: invalid conversion from ‘int’ to ‘PathComparison’ [-fpermissive]\n[12:25:17.757] 39 | return (c1 | c2) & PATH_COMPARISON_MASK;\n[12:25:17.757] | ^\n[12:25:17.757] | |\n[12:25:17.757] | int\n[12:25:33.857] make: *** [GNUmakefile:141: cpluspluscheck] Error 1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 09:59:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: How to retain lesser paths at add_path()?"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that there are many header files being\nincluded which need not be included.\nI have tried this in a few files and found the\ncompilation and regression to be working.\nI have attached the patch for the files that\nI tried.\nI tried this in CentOS, I did not find the header\nfiles to be platform specific.\nShould we pursue this further and cleanup in\nall the files?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 31 Jul 2019 11:19:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unused header file inclusion"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:19:08AM +0530, vignesh C wrote:\n> I noticed that there are many header files being included which need\n> not be included. I have tried this in a few files and found the\n> compilation and regression to be working. I have attached the patch\n> for the files that I tried. I tried this in CentOS, I did not find\n> the header files to be platform specific.\n> Should we pursue this further and cleanup in all the files?\n\nDo you use a particular method here or just manual deduction after\nlooking at each file individually? If this can be cleaned up a bit, I\nthink that's welcome. The removal of headers is easily forgotten when\nmoving code from one file to another...\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 14:56:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 31, 2019 at 11:19:08AM +0530, vignesh C wrote:\n> > I noticed that there are many header files being included which need\n> > not be included. I have tried this in a few files and found the\n> > compilation and regression to be working. I have attached the patch\n> > for the files that I tried. I tried this in CentOS, I did not find\n> > the header files to be platform specific.\n> > Should we pursue this further and cleanup in all the files?\n>\n> Do you use a particular method here or just manual deduction after\n> looking at each file individually? If this can be cleaned up a bit, I\n> think that's welcome. The removal of headers is easily forgotten when\n> moving code from one file to another...\n>\nThanks Michael.\nI'm writing some perl scripts to identify this.\nThe script will scan through all the files, make changes,\nand verify.\nFinally it will give the changed files.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:31:15 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jul 31, 2019 at 11:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Jul 31, 2019 at 11:19:08AM +0530, vignesh C wrote:\n> > > I noticed that there are many header files being included which need\n> > > not be included. I have tried this in a few files and found the\n> > > compilation and regression to be working. I have attached the patch\n> > > for the files that I tried. I tried this in CentOS, I did not find\n> > > the header files to be platform specific.\n> > > Should we pursue this further and cleanup in all the files?\n> >\n> > Do you use a particular method here or just manual deduction after\n> > looking at each file individually? If this can be cleaned up a bit, I\n> > think that's welcome. The removal of headers is easily forgotten when\n> > moving code from one file to another...\n> >\n> Thanks Michael.\n> I'm writing some perl scripts to identify this.\n> The script will scan through all the files, make changes,\n> and verify.\n\nIf we can come up with some such tool, we might be able to integrate\nit with Thomas's patch tester [1] wherein it can apply the patch,\nverify if there are unnecessary includes in the patch and report the\nsame.\n\n[1] - http://commitfest.cputube.org/\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:55:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 31, 2019 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Jul 31, 2019 at 11:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Wed, Jul 31, 2019 at 11:19:08AM +0530, vignesh C wrote:\n> > > > I noticed that there are many header files being included which need\n> > > > not be included. I have tried this in a few files and found the\n> > > > compilation and regression to be working. I have attached the patch\n> > > > for the files that I tried. I tried this in CentOS, I did not find\n> > > > the header files to be platform specific.\n> > > > Should we pursue this further and cleanup in all the files?\n> > >\n> > > Do you use a particular method here or just manual deduction after\n> > > looking at each file individually? If this can be cleaned up a bit, I\n> > > think that's welcome. The removal of headers is easily forgotten when\n> > > moving code from one file to another...\n> > >\n> > Thanks Michael.\n> > I'm writing some perl scripts to identify this.\n> > The script will scan through all the files, make changes,\n> > and verify.\n>\n> If we can come up with some such tool, we might be able to integrate\n> it with Thomas's patch tester [1] wherein it can apply the patch,\n> verify if there are unnecessary includes in the patch and report the\n> same.\n>\n> [1] - http://commitfest.cputube.org/\n>\nThanks Amit.\nI will post the tool along with the patch once I complete this activity. We\ncan enhance the tool further based on feedback and take it forward.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2019 12:07:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:55:37AM +0530, Amit Kapila wrote:\n> If we can come up with some such tool, we might be able to integrate\n> it with Thomas's patch tester [1] wherein it can apply the patch,\n> verify if there are unnecessary includes in the patch and report the\n> same.\n> \n> [1] - http://commitfest.cputube.org/\n\nOr even get something into src/tools/? If the produced result is\nclean enough, that could be interesting.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 15:37:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 11:19:08 +0530, vignesh C wrote:\n> I noticed that there are many header files being\n> included which need not be included.\n> I have tried this in a few files and found the\n> compilation and regression to be working.\n> I have attached the patch for the files that\n> I tried.\n> I tried this in CentOS, I did not find the header\n> files to be platform specific.\n> Should we pursue this further and cleanup in\n> all the files?\n\nThese type of changes imo have a good chance of making things more\nfragile. A lot of the includes in header files are purely due to needing\none or two definitions (which often could even be avoided by forward\ndeclarations). If you remove all the includes directly from the c files\nthat are also included from some .h file, you increase the reliance on\nthe indirect includes - making it harder to clean up.\n\nIf anything, we should *increase* the number of includes, so we don't\nrely on indirect includes. But that's also not necessarily the right\nchoice, because it adds unnecessary dependencies.\n\n\n> --- a/src/backend/utils/mmgr/freepage.c\n> +++ b/src/backend/utils/mmgr/freepage.c\n> @@ -56,7 +56,6 @@\n> #include \"miscadmin.h\"\n> \n> #include \"utils/freepage.h\"\n> -#include \"utils/relptr.h\"\n\nI don't think it's a good idea to remove this header, for example. A\n*lot* of code in freepage.c relies on it. The fact that freepage.h also\nincludes it here is just due to needing other parts of it\n\n\n> /* Magic numbers to identify various page types */\n> diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c\n> index 334e35b..67268fd 100644\n> --- a/src/backend/utils/mmgr/portalmem.c\n> +++ b/src/backend/utils/mmgr/portalmem.c\n> @@ -26,7 +26,6 @@\n> #include \"utils/builtins.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/snapmgr.h\"\n> -#include \"utils/timestamp.h\"\n\nSimilarly, this file uses timestamp.h functions directly. 
The fact that\ntimestamp.h already is included is just due to implementation details of\nxact.h that this file shouldn't depend on.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 23:56:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jul 31, 2019 at 11:55:37AM +0530, Amit Kapila wrote:\n>> If we can come up with some such tool, we might be able to integrate\n>> it with Thomas's patch tester [1] wherein it can apply the patch,\n>> verify if there are unnecessary includes in the patch and report the\n>> same.\n\n> Or even get something into src/tools/? If the produced result is\n> clean enough, that could be interesting.\n\nI take it nobody has actually bothered to *look* in src/tools.\n\nsrc/tools/pginclude/README\n\nNote that our experience with this sort of thing has not been very good.\nSee particularly 1609797c2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 09:42:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On 2019-Jul-31, vignesh C wrote:\n\n> I noticed that there are many header files being\n> included which need not be included.\n\nYeah, we have tooling for this already in src/tools/pginclude. It's\nbeen used before, and it has wreaked considerable havoc; see \"git log\n--grep pgrminclude\".\n\nI think doing this sort of cleanup is useful to a point -- as Andres\nmentions, some includes are somewhat more \"important\" than others, so\njudgement is needed in each case.\n\nI think removing unnecessary include lines from header files is much\nmore useful than from .c files. However, nowadays even I am not very\nconvinced that that is a very fruitful use of time, since many/most\ndevelopers use ccache which will reduce the compile times anyway in many\ncases; and development machines are typically much faster than ten years\nago.\n\nAlso, I think addition of new include lines to existing .c files should\nbe a point worth specific attention in patch review, to avoid breaking\nreasonable modularity boundaries unnecessarily.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 31 Jul 2019 11:23:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 11:23:22 -0400, Alvaro Herrera wrote:\n> I think removing unnecessary include lines from header files is much\n> more useful than from .c files. However, nowadays even I am not very\n> convinced that that is a very fruitful use of time, since many/most\n> developers use ccache which will reduce the compile times anyway in many\n> cases; and development machines are typically much faster than ten years\n> ago.\n\nIDK, I find the compilation times annoying. And it's gotten quite\nnoticably worse with all the speculative execution mitigations. Although\nto some degree that's not really the fault of individual compilations,\nbut our buildsystem being pretty slow.\n\nI think there's also just modularity reasons for removing includes from\nheaders. We've some pretty oddly interlinked systems, often without good\nreason (*). Cleaning those up imo is well worth the time - but hardly\ncan be automated.\n\nIf one really wanted to automate removal of header files, it'd need to\nbe a lot smarter than just checking whether a file fails to compile if\none header is removed. In the general case we'd have to test if the .c\nfile itself uses any of the symbols from the possibly-to-be-removed\nheader. That's hard to do without using something like llvm's\nlibclang. The one case that's perhaps a bit easier to automate, and\npossibly worthwhile: If a header is not indirectly included (possibly\ntestable via #ifndef HEADER_NAME_H #error 'already included' etc), and\ncompilation doesn't fail with it removed, *then* it's actually likely\nuseless (except for portability cases).\n\n* I think a lot of the interlinking stems from the bad idea to use\ntypedef's everywhere. In contrast to structs they cannot be forward\ndeclared portably in our version of C. We should use a lot more struct\nforward declarations, and just not use the typedef.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 08:44:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On 2019-Jul-31, Andres Freund wrote:\n\n> IDK, I find the compilation times annoying. And it's gotten quite\n> noticably worse with all the speculative execution mitigations. Although\n> to some degree that's not really the fault of individual compilations,\n> but our buildsystem being pretty slow.\n\nWe're in a much better position now than a decade ago, in terms of clock\ntime. Back then I would resort to many tricks to avoid spurious\ncompiles, even manually touching some files to dates in the past to\navoid them. Nowadays I never bother with such things. But yes,\nreducing the build time even more would be welcome for sure.\n\n> * I think a lot of the interlinking stems from the bad idea to use\n> typedef's everywhere. In contrast to structs they cannot be forward\n> declared portably in our version of C. We should use a lot more struct\n> forward declarations, and just not use the typedef.\n\nI don't know about that ... I think the problem is that we both declare\nthe typedef *and* define the struct in the same place. If we were to\nsplit those things to separate files, the required rebuilds would be\nmuch less, I think, because changing a struct would no longer require\nrecompiles of files that merely pass those structs around (that's very\ncommon for Node-derived structs). Forward-declaring structs in\nunrelated header files just because they need them, feels a bit like\ncheating to me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 31 Jul 2019 16:36:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-31, Andres Freund wrote:\n>> * I think a lot of the interlinking stems from the bad idea to use\n>> typedef's everywhere. In contrast to structs they cannot be forward\n>> declared portably in our version of C. We should use a lot more struct\n>> forward declarations, and just not use the typedef.\n\n> I don't know about that ... I think the problem is that we both declare\n> the typedef *and* define the struct in the same place. If we were to\n> split those things to separate files, the required rebuilds would be\n> much less, I think, because changing a struct would no longer require\n> recompiles of files that merely pass those structs around (that's very\n> common for Node-derived structs). Forward-declaring structs in\n> unrelated header files just because they need them, feels a bit like\n> cheating to me.\n\nYeah. I seem to recall a proposal that nodes.h should contain\n\n\ttypedef struct Foo Foo;\n\nfor every node type Foo, and then the other headers would just\nfill in the structs, and we could get rid of a lot of ad-hoc\nforward struct declarations and other hackery.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 16:55:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 16:55:31 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Jul-31, Andres Freund wrote:\n> >> * I think a lot of the interlinking stems from the bad idea to use\n> >> typedef's everywhere. In contrast to structs they cannot be forward\n> >> declared portably in our version of C. We should use a lot more struct\n> >> forward declarations, and just not use the typedef.\n> \n> > I don't know about that ... I think the problem is that we both declare\n> > the typedef *and* define the struct in the same place. If we were to\n> > split those things to separate files, the required rebuilds would be\n> > much less, I think, because changing a struct would no longer require\n> > recompiles of files that merely pass those structs around (that's very\n> > common for Node-derived structs). Forward-declaring structs in\n> > unrelated header files just because they need them, feels a bit like\n> > cheating to me.\n> \n> Yeah. I seem to recall a proposal that nodes.h should contain\n> \n> \ttypedef struct Foo Foo;\n> \n> for every node type Foo, and then the other headers would just\n> fill in the structs, and we could get rid of a lot of ad-hoc\n> forward struct declarations and other hackery.\n\nThat to me just seems objectively worse. Now adding a new struct as a\nminor implementation detail of some subsystem doesn't just require\nrecompiling the relevant files, but just about all of pg. And just about\nevery header would acquire a nodes.h include - there's still a lot of them\nthat today don't.\n\nI don't understand why you guys consider forward declaring structs\nugly. It's what just about every other C project does. The only reason\nit's sometimes problematic in postgres is that we \"hide the pointer\"\nwithin some typedefs, making it not as obvious which type we're\nreferring to (because the struct usage will be struct FooData*, instead\nof just Foo). 
But we also have confusion due to that in a lot of other\nplaces, so I don't really buy that this is a significant issue.\n\n\nRight now we really have weird dependencies between largely independent\nsubsystems. Some are partially because functions aren't always in the\nright file, but it's also not often clear what the right one would\nbe. E.g. snapmgr.h depending on relcache.h (for\nTransactionIdLimitedForOldSnapshots() having a Relation parameter), on\nresowner.h (for RegisterSnapshotOnOwner()) is imo not good. For one\nthey lead to a number of .c files that actually use functionality from\nresowner.h to not have the necessary includes. There's a lot of things\nlike that.\n\n.oO(the fmgr.h include in snapmgr.h has been unnecessary since 352a24a1f9)\n\n\nWe could of course be super rigorous and have an accompanying\nfoo_structs.h or something for every foo.h. But that seems to add no\nactual advantages, and makes things more complicated.\n\n\nThe only reason the explicit forward declaration is needed in the common\ncase of a 'struct foo*' parameter is that C has weird visibility rules\nabout the scope of forward declarations in parameters. If you instead\nfirst have e.g. a function *return* type of that struct type, the\nexplicit forward declaration isn't even needed - it's visible\nafterwards. But for parameters it's basically a *different* struct, that\ncannot be referenced again. Note that in C++ the visibility rules are\nmore consistent, and you don't need an explicit forward declaration in\neither case (I'd also be OK with requiring it in both cases, it's just\nweird to only need them in one).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 14:53:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-31 16:55:31 -0400, Tom Lane wrote:\n>> Yeah. I seem to recall a proposal that nodes.h should contain\n>> \n>> typedef struct Foo Foo;\n>> \n>> for every node type Foo, and then the other headers would just\n>> fill in the structs, and we could get rid of a lot of ad-hoc\n>> forward struct declarations and other hackery.\n\n> That to me just seems objectively worse. Now adding a new struct as a\n> minor implementation detail of some subsystem doesn't just require\n> recompiling the relevant files, but just about all of pg.\n\nEr, what? This list of typedefs would change exactly when enum NodeTag\nchanges, so AFAICS your objection is bogus.\n\nIt's true that this proposal doesn't help for structs that aren't Nodes,\nbut my sense is that > 90% of our ad-hoc struct references are for Nodes.\n\n> Right now we really have weird dependencies between largely independent\n> subsystem.\n\nAgreed, but I think fixing that will take some actually serious design\nwork. It's not going to mechanically fall out of changing typedef rules.\n\n> The only reason the explicit forward declaration is needed in the common\n> case of a 'struct foo*' parameter is that C has weird visibility rules\n> about the scope of forward declarations in paramters.\n\nYeah, but there's not much we can do about that, nor is getting rid\nof typedefs in favor of \"struct\" going to make it even a little bit\nbetter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 19:25:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 19:25:01 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-07-31 16:55:31 -0400, Tom Lane wrote:\n> >> Yeah. I seem to recall a proposal that nodes.h should contain\n> >> \n> >> typedef struct Foo Foo;\n> >> \n> >> for every node type Foo, and then the other headers would just\n> >> fill in the structs, and we could get rid of a lot of ad-hoc\n> >> forward struct declarations and other hackery.\n> \n> > That to me just seems objectively worse. Now adding a new struct as a\n> > minor implementation detail of some subsystem doesn't just require\n> > recompiling the relevant files, but just about all of pg.\n> \n> Er, what? This list of typedefs would change exactly when enum NodeTag\n> changes, so AFAICS your objection is bogus.\n\n> It's true that this proposal doesn't help for structs that aren't Nodes,\n> but my sense is that > 90% of our ad-hoc struct references are for Nodes.\n\nAh, well, I somehow assumed you were talking about all nodes. I don't\nthink I agree with the 90% figure. In headers I feel like most the\nreferences are to things like Relation, Snapshot, HeapTuple, etc.\n\n\n> > Right now we really have weird dependencies between largely independent\n> > subsystem.\n> \n> Agreed, but I think fixing that will take some actually serious design\n> work. 
It's not going to mechanically fall out of changing typedef rules.\n\nNo, but without finding a more workable approach than what we're\noften doing now wrt includes and forward declares, we'll have a lot\nharder time to separate subsystems out more.\n\n\n> > The only reason the explicit forward declaration is needed in the common\n> > case of a 'struct foo*' parameter is that C has weird visibility rules\n> > about the scope of forward declarations in paramters.\n> \n> Yeah, but there's not much we can do about that, nor is getting rid\n> of typedefs in favor of \"struct\" going to make it even a little bit\n> better.\n\nIt imo pretty fundamentally does. You cannot redefine typedefs, but you\ncan forward declare structs.\n\n\nE.g. in the attached series of patches, I'm removing a good portion of\nunnecessary dependencies to fmgr.h. But to actually make a difference\nthat requires referencing two structs without including the header - and\nI don't think restructuring fmgr.h into two headers is a particularly\nattractive alternative (would make it a lot more work and a lot more\ninvasive).\n\nI think the first three are pretty clearly a good idea; I'm a bit less\nsanguine about the fourth:\nHeaders like utils/timestamp.h are often included just because we need a\nTimestampTz type somewhere, or call GetCurrentTimestamp(). Approximately\nnone of these need the PG_GETARG_* macros, which are the only reason for\nincluding fmgr.h in these headers. As they're macros that's not\nactually needed, although I think normally good style. But I think here\navoiding exposing fmgr.h to more headers is a bigger win.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 3 Aug 2019 12:37:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 12:26 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-07-31 11:19:08 +0530, vignesh C wrote:\n> > I noticed that there are many header files being\n> > included which need not be included.\n> > I have tried this in a few files and found the\n> > compilation and regression to be working.\n> > I have attached the patch for the files that\n> > I tried.\n> > I tried this in CentOS, I did not find the header\n> > files to be platform specific.\n> > Should we pursue this further and cleanup in\n> > all the files?\n>\n> These type of changes imo have a good chance of making things more\n> fragile. A lot of the includes in header files are purely due to needing\n> one or two definitions (which often could even be avoided by forward\n> declarations). If you remove all the includes directly from the c files\n> that are also included from some .h file, you increase the reliance on\n> the indirect includes - making it harder to clean up.\n>\n> If anything, we should *increase* the number of includes, so we don't\n> rely on indirect includes. But that's also not necessarily the right\n> choice, because it adds unnecessary dependencies.\n>\n>\n> > --- a/src/backend/utils/mmgr/freepage.c\n> > +++ b/src/backend/utils/mmgr/freepage.c\n> > @@ -56,7 +56,6 @@\n> > #include \"miscadmin.h\"\n> >\n> > #include \"utils/freepage.h\"\n> > -#include \"utils/relptr.h\"\n>\n> I don't think it's a good idea to remove this header, for example. A\n> *lot* of code in freepage.c relies on it. 
The fact that freepage.h also\n> includes it here is just due to needing other parts of it\n>\nFixed this.\n>\n> > /* Magic numbers to identify various page types */\n> > diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c\n> > index 334e35b..67268fd 100644\n> > --- a/src/backend/utils/mmgr/portalmem.c\n> > +++ b/src/backend/utils/mmgr/portalmem.c\n> > @@ -26,7 +26,6 @@\n> > #include \"utils/builtins.h\"\n> > #include \"utils/memutils.h\"\n> > #include \"utils/snapmgr.h\"\n> > -#include \"utils/timestamp.h\"\n>\n> Similarly, this file uses timestamp.h functions directly. The fact that\n> timestamp.h already is included is just due to implementation details of\n> xact.h that this file shouldn't depend on.\n>\nFixed this.\n\nI have made the fixes based on your comments; the updated patch includes\nthe changes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 4 Aug 2019 15:01:33 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On 2019-Aug-04, vignesh C wrote:\n\n> Made the fixes based on your comments, updated patch has the changes\n> for the same.\n\nWell, you fixed the two things that seem to me quoted as examples of\nproblems, but you didn't fix other occurrences of the same issues\nelsewhere. For example, you remove lwlock.h from dsa.c but there are\nstructs there that have LWLocks as members. That's just the first hunk\nof the patch; didn't look at the others but it wouldn't surprise that\nthey have similar issues. I suggest this patch should be rejected.\n\nThen there's the <limits.h> removal, which is in tuplesort.c because of\nINT_MAX as added by commit d26559dbf356 and still present ...\n\nFWIW sharedtuplestore.c, a very young file, also includes <limits.h> but\nthat appears to be copy-paste of includes from some other file (and also\nin an inappropriate place), so I have no objections to obliterating that\none. But other than that one line, this patch needs more \"adult\nsupervision\".\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:06:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Then there's the <limits.h> removal, which is in tuplesort.c because of\n> INT_MAX as added by commit d26559dbf356 and still present ...\n\nOne has to be especially wary of removing system-header inclusions;\nthe fact that they don't seem to be needed on your own machine doesn't\nprove they aren't needed elsewhere.\n\n> ... I suggest this patch should be rejected.\n\nYeah. If we do anything along this line it should be based on\npgrminclude results, and even then I think it'd require manual\nreview, especially for changes in header files.\n\nThe big picture here is that removing #includes is seldom worth\nthe effort it takes :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 14:25:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On 2019-Aug-05, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Then there's the <limits.h> removal, which is in tuplesort.c because of\n> > INT_MAX as added by commit d26559dbf356 and still present ...\n> \n> One has to be especially wary of removing system-header inclusions;\n> the fact that they don't seem to be needed on your own machine doesn't\n> prove they aren't needed elsewhere.\n\nAs far as I can see, this line was just carried over from tuplestore.c,\nbut that file uses INT_MAX and this one doesn't use any of those macros\nas far as I can tell.\n\nI pushed this change and hereby consider this to be over.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:00:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 9:00 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Aug-05, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > Then there's the <limits.h> removal, which is in tuplesort.c because of\n> > > INT_MAX as added by commit d26559dbf356 and still present ...\n> >\n> > One has to be especially wary of removing system-header inclusions;\n> > the fact that they don't seem to be needed on your own machine doesn't\n> > prove they aren't needed elsewhere.\n>\n> As far as I can see, this line was just carried over from tuplestore.c,\n> but that file uses INT_MAX and this one doesn't use any of those macros\n> as far as I can tell.\n\nYeah, probably, or maybe I used one of those sorts of macros in an\nearlier version of the patch.\n\nBy the way, I see now that I had put the offending #include <limits.h>\n*after* project headers, which is a convention I picked up from other\nprojects, mentors and authors, but not what PostgreSQL usually does.\nIn my own code I do that to maximise the chances that project headers\nwill fail to compile if they themselves forget to include the system\nheaders they depend on. Of course an attempt to compile every header\nin the project individually as a transaction unit also achieves that.\n\n/me runs away\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2019 09:47:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 9:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> transaction unit\n\n*translation unit\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2019 09:49:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-03 12:37:33 -0700, Andres Freund wrote:\n> Think the first three are pretty clearly a good idea, I'm a bit less\n> sanguine about the fourth:\n> Headers like utils/timestamp.h are often included just because we need a\n> TimestampTz type somewhere, or call GetCurrentTimestamp(). Approximately\n> none of these need the PG_GETARG_* macros, which are the only reason for\n> including fmgr.h in these headers. As they're macros that's not\n> actually needed, although I think normally good style. But I' think here\n> avoiding exposing fmgr.h to more headers is a bigger win.\n\nI still think the fourth is probably worthwhile, but I don't feel\nconfident enough to do it without somebody else +0.5'ing it...\n\nI've pushed the other ones.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Aug 2019 16:07:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've pushed the other ones.\n\nChecking whether header files compile standalone shows you were overly\naggressive about removing fmgr.h includes:\n\nIn file included from /tmp/headerscheck.Ss8bVx/test.c:3:\n./src/include/utils/selfuncs.h:143: error: expected declaration specifiers or '...' before 'FmgrInfo'\n./src/include/utils/selfuncs.h:146: error: expected declaration specifiers or '...' before 'FmgrInfo'\n./src/include/utils/selfuncs.h:152: error: expected declaration specifiers or '...' before 'FmgrInfo'\n\nThat's with a script I use that's like cpluspluscheck except it tests\nwith plain gcc not g++. I attached it for the archives' sake.\n\nOddly, cpluspluscheck does not complain about those cases, but it\ndoes complain about\n\nIn file included from /tmp/cpluspluscheck.FgX2SW/test.cpp:4:\n./src/bin/scripts/scripts_parallel.h:18: error: ISO C++ forbids declaration of 'PGconn' with no type\n./src/bin/scripts/scripts_parallel.h:18: error: expected ';' before '*' token\n./src/bin/scripts/scripts_parallel.h:29: error: 'PGconn' has not been declared\n\n(My headerscheck script is missing that header; I need to update it to\nmatch the latest version of cpluspluscheck.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 18 Aug 2019 14:37:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "I wrote:\n> (My headerscheck script is missing that header; I need to update it to\n> match the latest version of cpluspluscheck.)\n\nI did that, and ended up with the attached. I'm rather tempted to stick\nthis into src/tools/ alongside cpluspluscheck, because it seems to find\nrather different trouble spots than cpluspluscheck does. Thoughts?\n\nAs of HEAD (927f34ce8), I get clean passes from both this and\ncpluspluscheck, except for the FmgrInfo references in selfuncs.h.\nI tried it on RHEL/Fedora, FreeBSD 12, and current macOS.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 18 Aug 2019 20:50:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On 2019-Aug-18, Tom Lane wrote:\n\n> I wrote:\n> > (My headerscheck script is missing that header; I need to update it to\n> > match the latest version of cpluspluscheck.)\n> \n> I did that, and ended up with the attached. I'm rather tempted to stick\n> this into src/tools/ alongside cpluspluscheck, because it seems to find\n> rather different trouble spots than cpluspluscheck does. Thoughts?\n\nYeah, let's include this. I've written its equivalent a couple of times\nalready. (My strategy is just to compile the .h file directly though,\nwhich creates a .gch file, rather than writing a temp .c file.)\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 19 Aug 2019 10:22:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-18, Tom Lane wrote:\n>> I did that, and ended up with the attached. I'm rather tempted to stick\n>> this into src/tools/ alongside cpluspluscheck, because it seems to find\n>> rather different trouble spots than cpluspluscheck does. Thoughts?\n\n> Yeah, let's include this.\n\nDone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Aug 2019 14:23:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-18 14:37:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I've pushed the other ones.\n> \n> Checking whether header files compile standalone shows you were overly\n> aggressive about removing fmgr.h includes:\n> \n> In file included from /tmp/headerscheck.Ss8bVx/test.c:3:\n> ./src/include/utils/selfuncs.h:143: error: expected declaration specifiers or '...' before 'FmgrInfo'\n> ./src/include/utils/selfuncs.h:146: error: expected declaration specifiers or '...' before 'FmgrInfo'\n> ./src/include/utils/selfuncs.h:152: error: expected declaration specifiers or '...' before 'FmgrInfo'\n\nDarn. Pushed the obvious fix of adding a direct fmgr.h include, rather\nthan the preivous indirect include.\n\n\n> That's with a script I use that's like cpluspluscheck except it tests\n> with plain gcc not g++. I attached it for the archives' sake.\n> \n> Oddly, cpluspluscheck does not complain about those cases, but it\n> does complain about\n\nHm. I don't understand why it's not complaining. Wonder if it's a\nquestion of the flags or such.\n\n\n> In file included from /tmp/cpluspluscheck.FgX2SW/test.cpp:4:\n> ./src/bin/scripts/scripts_parallel.h:18: error: ISO C++ forbids declaration of 'PGconn' with no type\n> ./src/bin/scripts/scripts_parallel.h:18: error: expected ';' before '*' token\n> ./src/bin/scripts/scripts_parallel.h:29: error: 'PGconn' has not been declared\n\nI noticed that \"manually\" earlier, when looking at the openbsd issue.\n\n\n> (My headerscheck script is missing that header; I need to update it to\n> match the latest version of cpluspluscheck.)\n\nI wonder if we should just add a #ifndef HEADERCHECK or such to the\nheaders that we don't want to process as standalone headers (or #ifndef\nNOT_STANDALONE or whatever). 
That way multiple tools can rely on these markers,\nrather than copying knowledge about that kind of information into\nmultiple places.\n\nI wish we could move the whole logic of those scripts into makefiles, so\nwe could employ parallelism.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Aug 2019 13:07:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-19 13:07:50 -0700, Andres Freund wrote:\n> On 2019-08-18 14:37:34 -0400, Tom Lane wrote:\n> > That's with a script I use that's like cpluspluscheck except it tests\n> > with plain gcc not g++. I attached it for the archives' sake.\n> > \n> > Oddly, cpluspluscheck does not complain about those cases, but it\n> > does complain about\n> \n> Hm. I don't understand why it's not complaining. Wonder if it's a\n> question of the flags or such.\n\nI'ts caused by -fsyntax-only\n\n# These switches are g++ specific, you may override if necessary.\nCXXFLAGS=${CXXFLAGS:- -fsyntax-only -Wall}\n\nwhich also explains why your headercheck doesn't have that problem, it\ndoesn't use -fsyntax-only.\n\n\n> > (My headerscheck script is missing that header; I need to update it to\n> > match the latest version of cpluspluscheck.)\n> \n> I wonder if we should just add a #ifndef HEADERCHECK or such to the\n> headers that we don't want to process as standalone headers (or #ifndef\n> NOT_STANDALONE or whatever). That way multiple tools can rely on these markers,\n> rather than copying knowledge about that kind of information into\n> multiple places.\n> \n> I wish we could move the whole logic of those scripts into makefiles, so\n> we could employ parallelism.\n\nHm. Perhaps the way to do that would be to use gcc's -include to include\npostgres.h, and use -Wc++-compat to detect c++ issues, rather than using\ng++. Without tempfiles it ought to be a lot easier to just do all of the\nrelevant work in make, without a separate shell script. The\npython/perl/ecpg logic would b e abit annoying, but probably not too\nbad? Think we could just always add all of them?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Aug 2019 13:46:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "On 2019-Aug-19, Andres Freund wrote:\n\n> > I wish we could move the whole logic of those scripts into makefiles, so\n> > we could employ parallelism.\n> \n> Hm. Perhaps the way to do that would be to use gcc's -include to include\n> postgres.h, and use -Wc++-compat to detect c++ issues, rather than using\n> g++. Without tempfiles it ought to be a lot easier to just do all of the\n> relevant work in make, without a separate shell script.\n\nI used to have this:\nhttps://postgr.es/m/1293469595-sup-1462@alvh.no-ip.org\nNot sure how much this helps, since it's a shell line in make, so not\nvery paralellizable. And you still have to build the exclusions\nsomehow.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 19 Aug 2019 17:18:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-08-19 13:07:50 -0700, Andres Freund wrote:\n>> On 2019-08-18 14:37:34 -0400, Tom Lane wrote:\n>>> Oddly, cpluspluscheck does not complain about those cases, but it\n>>> does complain about\n\n>> Hm. I don't understand why it's not complaining. Wonder if it's a\n>> question of the flags or such.\n\n> I'ts caused by -fsyntax-only\n\nAh-hah. Should we change that to something else? That's probably\na hangover from thinking that all we had to do was check for C++\nkeywords.\n\n>> I wish we could move the whole logic of those scripts into makefiles, so\n>> we could employ parallelism.\n\nI can't really get excited about expending a whole bunch of additional\nwork on these scripts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Aug 2019 18:09:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused header file inclusion"
}
] |
[
{
"msg_contents": "Current Postgres implementation of temporary table causes number of \nproblems:\n\n1. Catalog bloating: if client creates and deletes too many temporary \ntables, then autovacuum get stuck on catalog.\n2. Parallel queries: right now usage of temporary tables in query \ndisables parallel plan.\n3. It is not possible to use temporary tables at replica. Hot standby \nconfiguration is frequently used to run OLAP queries on replica\nand results of such queries are used to be saved in temporary tables. \nRight now it is not possible (except \"hackers\" solution with storing \nresults in file_fdw).\n4. Temporary tables can not be used in prepared transactions.\n5. Inefficient memory usage and possible memory overflow: each backend \nmaintains its own local buffers for work with temporary tables.\nDefault size of temporary buffers is 8Mb. It seems to be too small for \nmodern servers having hundreds of gigabytes of RAM, causing extra \ncopying of data\nbetween OS cache and local buffers. But if there are thousands of \nbackends, each executing queries with temporary tables, then total \namount of\nmemory used for temporary buffers can exceed several tens of gigabytes.\n6. 
Connection pooler can not reschedule session which has created \ntemporary tables to some other backend\nbecause it's data is stored in local buffers.\n\nThere were several attempts to address this problems.\nFor example Alexandr Alekseev has implemented patch which allows to \ncreate fast temporary tables without accessing system catalog:\nhttps://www.postgresql.org/message-id/flat/20160301182500.2c81c3dc%40fujitsu\nUnfortunately this patch was too invasive and rejected by community.\n\nThere was also attempt to allow under some condition use temporary \ntables in 2PC transactions:\nhttps://www.postgresql.org/message-id/flat/m2d0pllvqy.fsf%40dimitris-macbook-pro.home\nhttps://www.postgresql.org/message-id/flat/3a4b3c88-4fa5-1edb-a878-1ed76fa1c82b%40postgrespro.ru#d8a8342d07317d12e3405b903d3b15e4\nThem were also rejected.\n\nI try to make yet another attempt to address this problems, first of all \n1), 2), 5) and 6)\nTo solve this problems I propose notion of \"global temporary\" tables, \nsimilar with ones in Oracle.\nDefinition of this table (metadata) is shared by all backends but data \nis private to the backend. After session termination data is obviously lost.\n\nSuggested syntax for creation of global temporary tables:\n\n create global temp table\nor\n create session table\n\nOnce been created it can be used by all backends.\nGlobal temporary tables are accessed though shared buffers (to solve \nproblem 2).\nCleanup of temporary tables data (release of shared buffer and deletion \nof relation files) is performed on backend termination.\nIn case of abnormal server termination, files of global temporary tables \nare cleaned-up in the same way as of local temporary tables.\n\nCertainly there are cases were global temporary tables can not be used, \ni.e. 
when application is dynamically constructed name and columns of \ntemporary table.\nAlso access to local buffers is more efficient than access to shared \nbuffers because it doesn't require any synchronization.\nBut please notice that it is always possible to create old (local) \ntemporary tables which preserves current behavior.\n\nThe problem with replica is still not solved. But shared metadata is \nstep in this direction.\nI am thinking about reimplementation of temporary tables using new table \naccess method API.\nThe drawback of such approach is that it will be necessary to \nreimplement large bulk of heapam code.\nBut this approach allows to eliminate visibility check for temporary \ntable tuples and decrease size of tuple header.\nI still not sure if implementing special table access method for \ntemporary tables is good idea.\n\nPatch for global temporary tables is attached to this mail.\nThe known limitation is that now it supports only B-Tree indexes.\nAny feedback is welcome.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 31 Jul 2019 18:05:19 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Global temporary tables"
},
{
"msg_contents": "On Wed, 31 Jul 2019 at 23:05, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n> Current Postgres implementation of temporary table causes number of\n> problems:\n>\n> 1. Catalog bloating: if client creates and deletes too many temporary\n> tables, then autovacuum get stuck on catalog.\n>\n\nThis also upsets logical decoding a little - AFAICS it still has to treat\ntransactions that use temporary tables as catalog-modifying transactions,\ntracking them in its historic catalog snapshots and doing extra cache\nflushes etc when decoding them.\n\nThis will become even more important as we work to support eager/optimistic\noutput plugin processing of in-progress transactions. We'd have to switch\nsnapshots more, and that can get quite expensive so using temp tables could\nreally hurt performance. Or we'd have to serialize on catalog-changing\ntransactions, in which case using temp tables would negate the benefits of\noptimistic streaming of in-progress transactions.\n\n\n> 3. It is not possible to use temporary tables at replica.\n\n\nFor physical replicas, yes.\n\n\n> Hot standby\n> configuration is frequently used to run OLAP queries on replica\n> and results of such queries are used to be saved in temporary tables.\n> Right now it is not possible (except \"hackers\" solution with storing\n> results in file_fdw).\n>\n\nRight. Because we cannot modify pg_class, pg_attribute etc, even though we\ncould reasonably enough write to local-only relfilenodes on a replica if we\ndidn't have to change WAL-logged catalog tables.\n\nI've seen some hacks suggested around this where we have an unlogged fork\nof each of the needed catalog tables, allowing replicas to write temp table\ninfo to them. We'd scan both the logged and unlogged forks when doing\nrelcache management etc. 
But there are plenty of ugly issues with this.\nWe'd have to reserve oid ranges for them which is ugly; to make it BC\nfriendly those reservations would probably have to take the form of some\nkind of placeholder entry in the real pg_class. And it gets ickier from\nthere. It hardly seems worth it when we should probably just implement\nglobal temp tables instead.\n\n\n> 5. Inefficient memory usage and possible memory overflow: each backend\n> maintains its own local buffers for work with temporary tables.\n>\n\nIs there any reason that would change with global temp tables? We'd still\nbe creating a backend-local relfilenode for each backend that actually\nwrites to the temp table, and I don't see how it'd be useful or practical\nto keep those in shared_buffers.\n\nUsing local buffers has big advantages too. It saves shared_buffers space\nfor data where there's actually some possibility of getting cache hits, or\nfor where we can benefit from lazy/async writeback and write combining. I\nwouldn't want to keep temp data there if I had the option.\n\nIf you're concerned about the memory use of backend local temp buffers, or\nabout how we account for and limit those, that's worth looking into. But I\ndon't think it'd be something that should be affected by global-temp vs\nbackend-local-temp tables.\n\n\n> Default size of temporary buffers is 8Mb. It seems to be too small for\n> modern servers having hundreds of gigabytes of RAM, causing extra\n> copying of data between OS cache and local buffers. But if there are\n\nthousands of backends, each executing queries with temporary tables,\n\nthen total amount of memory used for temporary buffers can exceed\n\nseveral tens of gigabytes.\n>\n\nRight. But what solution do you propose for this? Putting that in\nshared_buffers will do nothing except deprive shared_buffers of space that\ncan be used for other more useful things. A server-wide temp buffer would\nadd IPC and locking overheads and AFAICS little benefit. 
One of the big\nappeals of temp tables is that we don't need any of that.\n\nIf you want to improve server-wide temp buffer memory accounting and\nmanagement that makes sense. I can see it being useful to have things like\na server-wide DSM/DSA pool of temp buffers that backends borrow from and\nreturn to based on memory pressure on a LRU-ish basis, maybe. But I can\nalso see how that'd be complex and hard to get right. It'd also be prone to\npriority inversion problems where an idle/inactive backend must be woken up\nto release memory or release locks, depriving an actively executing backend\nof runtime. And it'd be as likely to create inefficiencies with copying and\neviction as solve them since backends could easily land up taking turns\nkicking each other out of memory and re-reading their own data.\n\nI don't think this is something that should be tackled as part of work on\nglobal temp tables personally.\n\n\n\n> 6. Connection pooler can not reschedule session which has created\n> temporary tables to some other backend because it's data is stored in local\n> buffers.\n>\n\nYeah, if you're using transaction-associative pooling. That's just part of\na more general problem though, there are piles of related issues with temp\ntables, session GUCs, session advisory locks and more.\n\nI don't see how global temp tables will do you the slightest bit of good\nhere as the data in them will still be backend-local. If it isn't then you\nshould just be using unlogged tables.\n\n\n> Definition of this table (metadata) is shared by all backends but data\n> is private to the backend. After session termination data is obviously\n> lost.\n>\n\n+1 that's what a global temp table should be, and it's IIRC pretty much how\nthe SQL standard specifies temp tables.\n\nI suspect I'm overlooking some complexities here, because to me it seems\nlike we could implement these fairly simply. A new relkind would identify\nit as a global temp table and the relfilenode would be 0. 
Same for indexes\non temp tables. We'd extend the relfilenode mapper to support a\nbackend-local non-persistent relfilenode map that's used to track temp\ntable and index relfilenodes. If no relfilenode is defined for the table,\nthe mapper would allocate one. We already happily create missing\nrelfilenodes on write so we don't even have to pre-create the actual file.\nWe'd register the relfilenode as a tempfile and use existing tempfile\ncleanup mechanisms, and we'd use the temp tablespace to store it.\n\nI must be missing something important because it doesn't seem hard.\n\nGlobal temporary tables are accessed though shared buffers (to solve\n> problem 2).\n>\n\nI'm far from convinced of the wisdom or necessity of that, but I haven't\nspent as much time digging into this problem as you have.\n\n\n> The drawback of such approach is that it will be necessary to\n> reimplement large bulk of heapam code.\n> But this approach allows to eliminate visibility check for temporary\n> table tuples and decrease size of tuple header.\n>\n\nThat sounds potentially cool, but perhaps a \"next step\" thing? Allow the\ncreation of global temp tables to specify reloptions, and you can add it as\na reloption later. You can't actually eliminate visibility checks anyway\nbecause they're still MVCC heaps. Savepoints can create invisible tuples\neven if you're using temp tables that are cleared on commit, and of course\nso can DELETEs or UPDATEs. So I'm not sure how much use it'd really be in\npractice.\n\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Wed, 31 Jul 2019 at 23:05, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:Current Postgres implementation of temporary table causes number of \nproblems:\n\n1. 
Catalog bloating: if client creates and deletes too many temporary \ntables, then autovacuum get stuck on catalog.This also upsets logical decoding a little - AFAICS it still has to treat transactions that use temporary tables as catalog-modifying transactions, tracking them in its historic catalog snapshots and doing extra cache flushes etc when decoding them.This will become even more important as we work to support eager/optimistic output plugin processing of in-progress transactions. We'd have to switch snapshots more, and that can get quite expensive so using temp tables could really hurt performance. Or we'd have to serialize on catalog-changing transactions, in which case using temp tables would negate the benefits of optimistic streaming of in-progress transactions. 3. It is not possible to use temporary tables at replica.For physical replicas, yes. Hot standby \nconfiguration is frequently used to run OLAP queries on replica\nand results of such queries are used to be saved in temporary tables. \nRight now it is not possible (except \"hackers\" solution with storing \nresults in file_fdw).Right. Because we cannot modify pg_class, pg_attribute etc, even though we could reasonably enough write to local-only relfilenodes on a replica if we didn't have to change WAL-logged catalog tables.I've seen some hacks suggested around this where we have an unlogged fork of each of the needed catalog tables, allowing replicas to write temp table info to them. We'd scan both the logged and unlogged forks when doing relcache management etc. But there are plenty of ugly issues with this. We'd have to reserve oid ranges for them which is ugly; to make it BC friendly those reservations would probably have to take the form of some kind of placeholder entry in the real pg_class. And it gets ickier from there. It hardly seems worth it when we should probably just implement global temp tables instead. 5. 
Inefficient memory usage and possible memory overflow: each backend \nmaintains its own local buffers for work with temporary tables.Is there any reason that would change with global temp tables? We'd still be creating a backend-local relfilenode for each backend that actually writes to the temp table, and I don't see how it'd be useful or practical to keep those in shared_buffers.Using local buffers has big advantages too. It saves shared_buffers space for data where there's actually some possibility of getting cache hits, or for where we can benefit from lazy/async writeback and write combining. I wouldn't want to keep temp data there if I had the option.If you're concerned about the memory use of backend local temp buffers, or about how we account for and limit those, that's worth looking into. But I don't think it'd be something that should be affected by global-temp vs backend-local-temp tables. \nDefault size of temporary buffers is 8Mb. It seems to be too small for \nmodern servers having hundreds of gigabytes of RAM, causing extra \ncopying of data between OS cache and local buffers. But if there arethousands of backends, each executing queries with temporary tables,then total amount of memory used for temporary buffers can exceedseveral tens of gigabytes.Right. But what solution do you propose for this? Putting that in shared_buffers will do nothing except deprive shared_buffers of space that can be used for other more useful things. A server-wide temp buffer would add IPC and locking overheads and AFAICS little benefit. One of the big appeals of temp tables is that we don't need any of that.If you want to improve server-wide temp buffer memory accounting and management that makes sense. I can see it being useful to have things like a server-wide DSM/DSA pool of temp buffers that backends borrow from and return to based on memory pressure on a LRU-ish basis, maybe. But I can also see how that'd be complex and hard to get right. 
It'd also be prone to priority inversion problems where an idle/inactive backend must be woken up to release memory or release locks, depriving an actively executing backend of runtime. And it'd be as likely to create inefficiencies with copying and eviction as solve them since backends could easily land up taking turns kicking each other out of memory and re-reading their own data. I don't think this is something that should be tackled as part of work on global temp tables personally. \n6. Connection pooler can not reschedule session which has created temporary tables to some other backend because its data is stored in local buffers. Yeah, if you're using transaction-associative pooling. That's just part of a more general problem though, there are piles of related issues with temp tables, session GUCs, session advisory locks and more. I don't see how global temp tables will do you the slightest bit of good here as the data in them will still be backend-local. If it isn't then you should just be using unlogged tables. Definition of this table (metadata) is shared by all backends but data \nis private to the backend. After session termination data is obviously lost. +1 that's what a global temp table should be, and it's IIRC pretty much how the SQL standard specifies temp tables. I suspect I'm overlooking some complexities here, because to me it seems like we could implement these fairly simply. A new relkind would identify it as a global temp table and the relfilenode would be 0. Same for indexes on temp tables. We'd extend the relfilenode mapper to support a backend-local non-persistent relfilenode map that's used to track temp table and index relfilenodes. If no relfilenode is defined for the table, the mapper would allocate one. We already happily create missing relfilenodes on write so we don't even have to pre-create the actual file. 
We'd register the relfilenode as a tempfile and use existing tempfile cleanup mechanisms, and we'd use the temp tablespace to store it. I must be missing something important because it doesn't seem hard. Global temporary tables are accessed through shared buffers (to solve \nproblem 2). I'm far from convinced of the wisdom or necessity of that, but I haven't spent as much time digging into this problem as you have. The drawback of such approach is that it will be necessary to \nreimplement large bulk of heapam code.\nBut this approach allows to eliminate visibility check for temporary \ntable tuples and decrease size of tuple header. That sounds potentially cool, but perhaps a \"next step\" thing? Allow the creation of global temp tables to specify reloptions, and you can add it as a reloption later. You can't actually eliminate visibility checks anyway because they're still MVCC heaps. Savepoints can create invisible tuples even if you're using temp tables that are cleared on commit, and of course so can DELETEs or UPDATEs. So I'm not sure how much use it'd really be in practice.\n\n-- \nCraig Ringer\nhttp://www.2ndQuadrant.com/\n2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 1 Aug 2019 11:10:40 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 01.08.2019 6:10, Craig Ringer wrote:\n>\n> 3. It is not possible to use temporary tables at replica.\n>\n>\n> For physical replicas, yes.\n\nYes, definitely logical replicas (for example our PgPro-EE multimaster \nbased on logical replication) do not suffer from this problem.\nBut in case of multimaster we have another problem related with \ntemporary tables: we have to use 2PC for each transaction and using \ntemporary tables in prepared transaction is now prohibited.\nThis was the motivation of the patch proposed by Stas Kelvich which \nallows to use temporary tables in prepared transactions under some \nconditions.\n> 5. Inefficient memory usage and possible memory overflow: each backend\n>\n> maintains its own local buffers for work with temporary tables.\n>\n>\n> Is there any reason that would change with global temp tables? We'd \n> still be creating a backend-local relfilenode for each backend that \n> actually writes to the temp table, and I don't see how it'd be useful \n> or practical to keep those in shared_buffers.\n>\nYes, my implementation of global temp tables is using shared buffers.\nIt was not strictly needed as far as data is local. It is possible to \nhave shared metadata and private data accessed through local buffers.\nBut I have done it for three reasons:\n1. Make it possible to use parallel plans for temp tables.\n2. Eliminate memory overflow problem.\n3. Make it possible to reschedule sessions to other backends (connection \npooler).\n\n> Using local buffers has big advantages too. It saves shared_buffers \n> space for data where there's actually some possibility of getting \n> cache hits, or for where we can benefit from lazy/async writeback and \n> write combining. 
I wouldn't want to keep temp data there if I had the \n> option.\n\nDefinitely local buffers have some advantages:\n- do not require synchronization\n- avoid flushing data from shared buffers\n\nBut global temp tables are not excluding use of original (local) temp \ntables.\nSo you will have a choice: either to use local temp tables which can be \neasily created on demand and accessed through local buffers,\nor create global temp tables, which eliminate catalog bloating, \nallow parallel queries and whose data is controlled by the same cache \nreplacement discipline as for normal tables...\n\n>\n>\n> Default size of temporary buffers is 8Mb. It seems to be too small\n> for\n> modern servers having hundreds of gigabytes of RAM, causing extra\n> copying of data between OS cache and local buffers. But if there are\n>\n> thousands of backends, each executing queries with temporary tables,\n>\n> then total amount of memory used for temporary buffers can exceed\n>\n> several tens of gigabytes.\n>\n>\n> Right. But what solution do you propose for this? Putting that in \n> shared_buffers will do nothing except deprive shared_buffers of space \n> that can be used for other more useful things. A server-wide temp \n> buffer would add IPC and locking overheads and AFAICS little benefit. \n> One of the big appeals of temp tables is that we don't need any of that.\n\nI do not think that parallel execution and efficient connection pooling \nare \"little benefit\".\n>\n> If you want to improve server-wide temp buffer memory accounting and \n> management that makes sense. I can see it being useful to have things \n> like a server-wide DSM/DSA pool of temp buffers that backends borrow \n> from and return to based on memory pressure on a LRU-ish basis, maybe. \n> But I can also see how that'd be complex and hard to get right. 
It'd \n> also be prone to priority inversion problems where an idle/inactive \n> backend must be woken up to release memory or release locks, depriving \n> an actively executing backend of runtime. And it'd be as likely to \n> create inefficiencies with copying and eviction as solve them since \n> backends could easily land up taking turns kicking each other out of \n> memory and re-reading their own data.\n>\n> I don't think this is something that should be tackled as part of work \n> on global temp tables personally.\n\nMy assumptions are the following: temporary tables are mostly used in \nOLAP queries. And OLAP workload means that there are few concurrent \nqueries which are working with large datasets.\nSo size of produced temporary tables can be quite big. For OLAP it seems \nto be very important to be able to use parallel query execution and use \nthe same cache eviction rule both for persistent and temp tables\n(otherwise you either cause swapping, either extra copying of data \nbetween OS and Postgres caches).\n\n>\n> 6. Connection pooler can not reschedule session which has created\n> temporary tables to some other backend because it's data is stored\n> in local buffers.\n>\n>\n> Yeah, if you're using transaction-associative pooling. That's just \n> part of a more general problem though, there are piles of related \n> issues with temp tables, session GUCs, session advisory locks and more.\n>\n> I don't see how global temp tables will do you the slightest bit of \n> good here as the data in them will still be backend-local. If it isn't \n> then you should just be using unlogged tables.\n\nYou can not use the same unlogged table to save intermediate query \nresults in two parallel sessions.\n\n> Definition of this table (metadata) is shared by all backends but\n> data\n> is private to the backend. 
After session termination data is\n> obviously lost.\n>\n>\n> +1 that's what a global temp table should be, and it's IIRC pretty \n> much how the SQL standard specifies temp tables.\n>\n> I suspect I'm overlooking some complexities here, because to me it \n> seems like we could implement these fairly simply. A new relkind would \n> identify it as a global temp table and the relfilenode would be 0. \n> Same for indexes on temp tables. We'd extend the relfilenode mapper to \n> support a backend-local non-persistent relfilenode map that's used to \n> track temp table and index relfilenodes. If no relfilenode is defined \n> for the table, the mapper would allocate one. We already happily \n> create missing relfilenodes on write so we don't even have to \n> pre-create the actual file. We'd register the relfilenode as a \n> tempfile and use existing tempfile cleanup mechanisms, and we'd use \n> the temp tablespace to store it.\n>\n> I must be missing something important because it doesn't seem hard.\n>\nAs I already wrote, I tried to kill two birds with one stone: eliminate \ncatalog bloating and allow access to temp tables from multiple backends \n(to be able to perform parallel queries and connection pooling).\nThis is why I have to use shared buffers for global temp tables.\nMaybe it was not such a good idea. But one of my primary intentions \nin publishing this patch was to get other people's opinions.\nIn PG-Pro some of my colleagues think that the most critical problem is \ninability to use temporary tables at replica.\nOthers think that it is not a problem at all if you are using logical \nreplication.\n From my point of view the most critical problem is inability to use \nparallel plans for temporary tables.\nBut looks like you don't think so.\n\nI see three different activities related with temporary tables:\n1. Shared metadata\n2. Shared buffers\n3. 
Alternative concurrency control & reducing tuple header size \n(specialized table access method for temporary tables)\n\nIn my proposal I combined 1 and 2, leaving 3 for next step.\nI will be interested to know other suggestions.\n\nOne more thing - 1 and 2 are really independent: you can share metadata \nwithout sharing buffers.\nBut introducing yet another kind of temporary tables seems to be really \noverkill:\n- local temp tables (private namespace and local buffers)\n- tables with shared metadata but local buffers\n- tables with shared metadata and buffers\n\n\n> The drawback of such approach is that it will be necessary to\n> reimplement large bulk of heapam code.\n> But this approach allows to eliminate visibility check for temporary\n> table tuples and decrease size of tuple header.\n>\n>\n> That sounds potentially cool, but perhaps a \"next step\" thing? Allow \n> the creation of global temp tables to specify reloptions, and you can \n> add it as a reloption later. You can't actually eliminate visibility \n> checks anyway because they're still MVCC heaps.\n\nSorry?\nI mean elimination of MVCC overhead (visibility checks) for temp tables \nonly.\nI am not sure that we can really fully eliminate it if we support use of \ntemp tables in prepared transactions and autonomous transactions (yet \nanother awful feature we have in PgPro-EE).\nAlso looks like we need to have some analogue of CID to be able to \ncorrectly execute queries like \"insert into T (select from T ...)\" \nwhere T is global temp table.\nI didn't think much about it, but I am really considering a new table access \nmethod API for reducing per-tuple storage overhead for temporary and \nappend-only tables.\n\n> Savepoints can create invisible tuples even if you're using temp \n> tables that are cleared on commit, and of course so can DELETEs or \n> UPDATEs. 
So I'm not sure how much use it'd really be in practice.\n>\nYehh, subtransactions can be also a problem for eliminating xmin/xmax \nfor temp tables. Thanks for noticing it.\n\n\nI noticed that I have not patched some extension - fixed and rebased \nversion of the patch is attached.\nAlso you can find this version in our github repository: \nhttps://github.com/postgrespro/postgresql.builtin_pool.git\nbranch global_temp.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 1 Aug 2019 11:13:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "New version of the patch with several fixes is attached.\nMany thanks to Roman Zharkov for testing.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 6 Aug 2019 11:31:58 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Tue, 6 Aug 2019 at 16:32, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n> New version of the patch with several fixes is attached.\n> Many thanks to Roman Zharkov for testing.\n>\n\nFWIW I still don't understand your argument with regards to using\nshared_buffers for temp tables having connection pooling benefits. Are you\nassuming the presence of some other extension in your extended version of\nPostgreSQL ? In community PostgreSQL a temp table's contents in one backend\nwill not be visible in another backend. So if your connection pooler in\ntransaction pooling mode runs txn 1 on backend 42 and it populates temp\ntable X, then the pooler runs the same app session's txn 2 on backend 45,\nthe contents of temp table X are not visible anymore.\n\nCan you explain? Because AFAICS so long as temp table contents are\nbackend-private there's absolutely no point ever using shared buffers for\ntheir contents.\n\nPerhaps you mean that in a connection pooling case, each backend may land\nup filling up temp buffers with contents from *multiple different temp\ntables*? If so, sure, I get that, but the answer there seems to be to\nimprove eviction and memory accounting, not make backends waste precious\nshared_buffers space on non-shareable data.\n\nAnyhow, I strongly suggest you simplify the feature to add the basic global\ntemp table feature so the need to change pg_class, pg_attribute etc to use\ntemp tables is removed, but separate changes to temp table memory handling\netc into a follow-up patch. That'll make it smaller and easier to review\nand merge too. The two changes are IMO logically quite separate anyway.\n\nCome to think of it, I think connection poolers might benefit from an\nextension to the DISCARD command, say \"DISCARD TEMP_BUFFERS\", which evicts\ntemp table buffers from memory *without* dropping the temp tables. 
If\nthey're currently in-memory tuplestores they'd be written out and evicted.\nThat way a connection pooler could \"clean\" the backend, at the cost of some\nprobably pretty cheap buffered writes to the system buffer cache. The\nkernel might not even bother to write out the buffercache and it won't be\nforced to do so by fsync, checkpoints, etc, nor will the writes go via WAL\nso such evictions could be pretty cheap - and if not under lots of memory\npressure the backend could often read the temp table back in from system\nbuffer cache without disk I/O.\n\nThat's my suggestion for how to solve your pooler problem, assuming I've\nunderstood it correctly.\n\nAlong these lines I suggest adding the following to DISCARD at some point,\nobviously not as part of your patch:\n\n* DISCARD TEMP_BUFFERS\n* DISCARD SHARED_BUFFERS\n* DISCARD TEMP_FILES\n* DISCARD CATALOG_CACHE\n* DISCARD HOLD_CURSORS\n* DISCARD ADVISORY_LOCKS\n\nwhere obviously DISCARD SHARED_BUFFERS would be superuser-only and evict\nonly clean buffers.\n\n(Also, if we extend DISCARD lets also it to be written as DISCARD (LIST,\nOF, THINGS, TO, DISCARD) so that we can make the syntax extensible for\nplugins in future).\n\nThoughts?\n\nWould DISCARD TEMP_BUFFERS meet your needs?",
"msg_date": "Thu, 8 Aug 2019 10:40:29 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 08.08.2019 5:40, Craig Ringer wrote:\n> On Tue, 6 Aug 2019 at 16:32, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n> New version of the patch with several fixes is attached.\n> Many thanks to Roman Zharkov for testing.\n>\n>\n> FWIW I still don't understand your argument with regards to using \n> shared_buffers for temp tables having connection pooling benefits. Are \n> you assuming the presence of some other extension in your extended \n> version of PostgreSQL ? In community PostgreSQL a temp table's \n> contents in one backend will not be visible in another backend. So if \n> your connection pooler in transaction pooling mode runs txn 1 on \n> backend 42 and it populates temp table X, then the pooler runs the \n> same app session's txn 2 on backend 45, the contents of temp table X \n> are not visible anymore.\n\nCertainly here I mean built-in connection pooler which is not currently \npresent in Postgres,\nbut it is part of PgPRO-EE and there is my patch for vanilla at commitfest:\nhttps://commitfest.postgresql.org/24/2067\n\n>\n> Can you explain? Because AFAICS so long as temp table contents are \n> backend-private there's absolutely no point ever using shared buffers \n> for their contents.\n>\nSure, there is no such problem with temporary tables now.\nThere is another problem: you can not use temporary table with any \nexisting connection poolers (pgbouncer,...) with pooling level other than \nsession unless temporary table is used inside one transaction.\nOne of the advantages of built-in connection pooler is that it can \nprovide session semantics (GUCs, prepared statement, temporary \ntables,...) with limited number of backends (smaller than number of \nsessions).\n\nIn PgPRO-EE this problem was solved by binding session to backend. I.e. \none backend can manage multiple sessions,\nbut session can not migrate to another backend. 
The drawback of such \nsolution is obvious: one long living transaction can block transactions \nof all other sessions scheduled to this backend.\nPossibility to migrate session to another backend is one of the obvious \nsolutions of the problem. But the main show stopper for it is temporary \ntables.\nThis is why I consider moving temporary tables to shared buffers as \nvery important step.\n\nIn vanilla version of built-in connection pooler situation is slightly \ndifferent.\nRight now if client is using temporary tables without \"ON COMMIT DROP\" \nclause, backend is marked as \"tainted\" and is pinned for this session.\nSo it is actually excluded from connection pool and serves only this \nsession. Once again - if I will be able to access temporary table data \nfrom other backend, there will be no need to mark backend as tainted in \nthis case.\nCertainly it also requires shared metadata. And here we come to the \nconcept of global temp tables (if we forget for a moment that global \ntemp tables were \"invented\" long time ago by Oracle and many other DBMSes:)\n\n> Perhaps you mean that in a connection pooling case, each backend may \n> land up filling up temp buffers with contents from *multiple different \n> temp tables*? If so, sure, I get that, but the answer there seems to \n> be to improve eviction and memory accounting, not make backends waste \n> precious shared_buffers space on non-shareable data.\n>\n> Anyhow, I strongly suggest you simplify the feature to add the basic \n> global temp table feature so the need to change pg_class, pg_attribute \n> etc to use temp tables is removed, but separate changes to temp table \n> memory handling etc into a follow-up patch. That'll make it smaller \n> and easier to review and merge too. 
The two changes are IMO logically \n> quite separate anyway.\n\nI agree that they are separate.\nBut even if we forget about built-in connection pooler, don't you think \nthat possibility to use parallel query plans for temporary tables is \nitself strong enough motivation to access global temp table through \nshared buffers\n(while still supporting private page pool for local temp tables)? So \nboth approaches (shared vs. private buffers) have their pros and \ncons. This is why it seems to be reasonable to support both of them \nand let user to make choice most suitable for concrete application. \nCertainly it is possible to provide \"global shared temp tables\" and \n\"global private temp tables\". But IMHO it is overkill.\n>\n> Come to think of it, I think connection poolers might benefit from an \n> extension to the DISCARD command, say \"DISCARD TEMP_BUFFERS\", which \n> evicts temp table buffers from memory *without* dropping the temp \n> tables. If they're currently in-memory tuplestores they'd be written \n> out and evicted. That way a connection pooler could \"clean\" the \n> backend, at the cost of some probably pretty cheap buffered writes to \n> the system buffer cache. The kernel might not even bother to write out \n> the buffercache and it won't be forced to do so by fsync, checkpoints, \n> etc, nor will the writes go via WAL so such evictions could be pretty \n> cheap - and if not under lots of memory pressure the backend could \n> often read the temp table back in from system buffer cache without \n> disk I/O.\n\nYes, this is one of the possible solutions for session migration. But \nfrankly speaking flushing local buffers on each session reschedule seems \nto be not so good solution. 
Even if OS file cache is large enough and \nflushed buffers are still present in memory (but they will be written \nto the disk in this case even if data of temp table is not intended to \nbe persisted).\n>\n> That's my suggestion for how to solve your pooler problem, assuming \n> I've understood it correctly.\n>\n> Along these lines I suggest adding the following to DISCARD at some \n> point, obviously not as part of your patch:\n>\n> * DISCARD TEMP_BUFFERS\n> * DISCARD SHARED_BUFFERS\n> * DISCARD TEMP_FILES\n> * DISCARD CATALOG_CACHE\n> * DISCARD HOLD_CURSORS\n> * DISCARD ADVISORY_LOCKS\n>\n> where obviously DISCARD SHARED_BUFFERS would be superuser-only and \n> evict only clean buffers.\n>\n> (Also, if we extend DISCARD lets also it to be written as DISCARD \n> (LIST, OF, THINGS, TO, DISCARD) so that we can make the syntax \n> extensible for plugins in future).\n>\n> Thoughts?\n>\n> Would DISCARD TEMP_BUFFERS meet your needs?\n>\nActually I have already implemented DropLocalBuffers function (three \nlines of code):\n\nvoid\nDropLocalBuffers(void)\n{\n    RelFileNode rnode;\n    rnode.relNode = InvalidOid; /* drop all local buffers */\n    DropRelFileNodeAllLocalBuffers(rnode);\n}\n\n\nfor yet another Postgres extension which is not yet included even in \nPgPRO-EE - SnapFS: support of database snapshots.\nI do not think that we need such command at user level (i.e. have \na corresponding SQL command).\nBut, as I already wrote above, I do not consider flushing all buffers on \nsession reschedule as acceptable solution.\nAnd moreover, just flushing buffers is not enough. 
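(As an aside, the sentinel convention in the DropLocalBuffers snippet above - InvalidOid standing for "match every relation" - can be sketched standalone. Everything below uses hypothetical names, not PostgreSQL source:)

```python
# Standalone sketch of the drop-all sentinel: passing INVALID_OID as the
# relation id invalidates every in-use local buffer, while a real OID
# invalidates only that relation's buffers. Hypothetical stand-ins only.

INVALID_OID = 0  # stand-in for PostgreSQL's InvalidOid

class LocalBuffer:
    def __init__(self, rel_node):
        # owning relation; INVALID_OID marks a free slot
        self.rel_node = rel_node

def drop_local_buffers(buffers, rel_node=INVALID_OID):
    # Invalidate buffers owned by rel_node; with the INVALID_OID
    # sentinel, invalidate every in-use buffer. Returns count dropped.
    dropped = 0
    for buf in buffers:
        if buf.rel_node == INVALID_OID:
            continue  # slot already free
        if rel_node == INVALID_OID or buf.rel_node == rel_node:
            buf.rel_node = INVALID_OID
            dropped += 1
    return dropped

pool = [LocalBuffer(16384), LocalBuffer(16385), LocalBuffer(16384)]
assert drop_local_buffers(pool, 16384) == 2  # targeted drop
assert drop_local_buffers(pool) == 1         # sentinel: drop the rest
```

The same convention lets one entry point serve both the per-relation and the drop-everything paths without a separate flag argument.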
There is still some \nsmgr stuff associated with this relation which is local to the backend.\nWe in any case has to make some changes to be able to access temporary \ndata from other backend even if data is flushed to the file system.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 08.08.2019 5:40, Craig Ringer wrote:\n\n\n\n\n\nOn Tue, 6 Aug 2019 at 16:32,\n Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n wrote:\n\nNew version of the patch\n with several fixes is attached.\n Many thanks to Roman Zharkov for testing.\n\n\n\nFWIW I still don't understand your argument with regards\n to using shared_buffers for temp tables having connection\n pooling benefits. Are you assuming the presence of some\n other extension in your extended version of PostgreSQL ? In\n community PostgreSQL a temp table's contents in one backend\n will not be visible in another backend. So if your\n connection pooler in transaction pooling mode runs txn 1 on\n backend 42 and it populates temp table X, then the pooler\n runs the same app session's txn 2 on backend 45, the\n contents of temp table X are not visible anymore.\n\n\n\n\n Certainly here I mean built-in connection pooler which is not\n currently present in Postgres,\n but it is part of PgPRO-EE and there is my patch for vanilla at\n commitfest:\nhttps://commitfest.postgresql.org/24/2067\n\n\n\n\n\n\nCan you explain? Because AFAICS so long as temp table\n contents are backend-private there's absolutely no point\n ever using shared buffers for their contents.\n\n\n\n\n\n Sure, there is no such problem with temporary tables now.\n There is another problem: you can not use temporary table with any\n existed connection poolers (pgbouncer,...) 
with pooling level other\n than session unless temporary table is used inside one transaction.\n One of the advantages of built-in connection pooler is that it can\n provide session semantic (GUCs, prepared statement, temporary\n tables,...) with limited number of backends (smaller than number of\n sessions).\n\n In PgPRO-EE this problem was solved by binding session to backend.\n I.e. one backend can manage multiple sessions, \n but session can not migrate to another backend. The drawback of such\n solution is obvious: one long living transaction can block\n transactions of all other sessions scheduled to this backend.\n Possibility to migrate session to another backend is one of the\n obvious solutions of the problem. But the main show stopper for it\n is temporary tables.\n This is why I consider moving temporary tables to shared buffers as\n very important step.\n\n In vanilla version of built-in connection pooler situation is\n slightly different.\n Right now if client is using temporary tables without \"ON COMMIT\n DROP\" clause, backend is marked as \"tainted\" and is pinned for this\n session.\n So it is actually excluded from connection pool and servers only\n this session. Once again - if I will be able to access temporary\n table data from other backend, there will be no need to mark backend\n as tainted in this case.\n Certainly it also requires shared metadata. And here we come to the\n concept of global temp tables (if we forget for a moment that global\n temp tables were \"invented\" long time ago by Oracle and many other\n DBMSes:)\n\n\n\n\nPerhaps you mean that in a connection pooling case, each\n backend may land up filling up temp buffers with contents\n from *multiple different temp tables*? 
If so, sure, I get\n that, but the answer there seems to be to improve eviction\n and memory accounting, not make backends waste precious\n shared_buffers space on non-shareable data.\n\n\nAnyhow, I strongly suggest you simplify the feature to\n add the basic global temp table feature so the need to\n change pg_class, pg_attribute etc to use temp tables is\n removed, but separate changes to temp table memory handling\n etc into a follow-up patch. That'll make it smaller and\n easier to review and merge too. The two changes are IMO\n logically quite separate anyway.\n\n\n\n\n I agree that them are separate.\n But even if we forget about built-in connection pooler, don't you\n think that possibility to use parallel query plans for temporary\n tables is itself strong enough motivation to access global temp\n table through shared buffers\n (while still supporting private page pool for local temp tables). So\n both approaches (shared vs. private buffers) have their pros and\n contras. This is why it seems to be reasonable to support both of\n them and let user to make choice most suitable for concrete\n application. Certainly it is possible to provide \"global shared temp\n tables\" and \"global private temp tables\". But IMHO it is overkill.\n\n\n\n\n\nCome to think of it, I think connection poolers might\n benefit from an extension to the DISCARD command, say\n \"DISCARD TEMP_BUFFERS\", which evicts temp table buffers from\n memory *without* dropping the temp tables. If they're\n currently in-memory tuplestores they'd be written out and\n evicted. That way a connection pooler could \"clean\" the\n backend, at the cost of some probably pretty cheap buffered\n writes to the system buffer cache. 
The kernel might not even\n bother to write out the buffercache and it won't be forced\n to do so by fsync, checkpoints, etc, nor will the writes go\n via WAL so such evictions could be pretty cheap - and if not\n under lots of memory pressure the backend could often read\n the temp table back in from the system buffer cache without disk\n I/O.\n\n\n\n\n Yes, this is one of the possible solutions for session migration. \n But frankly speaking, flushing local buffers on each session\n reschedule does not seem to be such a good solution. Even if the OS file cache\n is large enough and flushed buffers are still present in memory\n (but they will be written to the disk in this case even if the data of\n the temp table is not intended to be persisted).\n\n\n\n\n\nThat's my suggestion for how to solve your pooler\n problem, assuming I've understood it correctly.\n\n\nAlong these lines I suggest adding the following to\n DISCARD at some point, obviously not as part of your patch:\n\n\n* DISCARD TEMP_BUFFERS\n* DISCARD SHARED_BUFFERS\n* DISCARD TEMP_FILES\n* DISCARD CATALOG_CACHE\n* DISCARD HOLD_CURSORS\n* DISCARD ADVISORY_LOCKS\n\n\nwhere obviously DISCARD SHARED_BUFFERS would be\n superuser-only and evict only clean buffers.\n\n\n(Also, if we extend DISCARD let's also allow it to be written as\n DISCARD (LIST, OF, THINGS, TO, DISCARD) so that we can make\n the syntax extensible for plugins in future).\n\n\nThoughts?\n\n\nWould DISCARD TEMP_BUFFERS meet your needs?\n\n\n\n\n\n Actually I have already implemented a DropLocalBuffers function (three\n lines of code):\n\n void\n DropLocalBuffers(void)\n {\n RelFileNode rnode;\n rnode.relNode = InvalidOid; /* drop all local buffers */\n DropRelFileNodeAllLocalBuffers(rnode);\n }\n\n\n for yet another Postgres extension which is not yet included even in\n PgPRO-EE - SnapFS: support of database snapshots.\n I do not think that we need such a command at user level (i.e. 
have\n a corresponding SQL command).\n But, as I already wrote above, I do not consider flushing all\n buffers on session reschedule as an acceptable solution.\n And moreover, just flushing buffers is not enough. There is still\n some smgr stuff associated with this relation which is local to the\n backend.\n We in any case have to make some changes to be able to access\n temporary data from another backend even if the data is flushed to the\n file system.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 8 Aug 2019 10:03:34 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Thu, 8 Aug 2019 at 15:03, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n>\n>\n> On 08.08.2019 5:40, Craig Ringer wrote:\n>\n> On Tue, 6 Aug 2019 at 16:32, Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> wrote:\n>\n>> New version of the patch with several fixes is attached.\n>> Many thanks to Roman Zharkov for testing.\n>>\n>\n> FWIW I still don't understand your argument with regards to using\n> shared_buffers for temp tables having connection pooling benefits. Are you\n> assuming the presence of some other extension in your extended version of\n> PostgreSQL ? In community PostgreSQL a temp table's contents in one backend\n> will not be visible in another backend. So if your connection pooler in\n> transaction pooling mode runs txn 1 on backend 42 and it populates temp\n> table X, then the pooler runs the same app session's txn 2 on backend 45,\n> the contents of temp table X are not visible anymore.\n>\n>\n> Certainly here I mean built-in connection pooler which is not currently\n> present in Postgres,\n> but it is part of PgPRO-EE and there is my patch for vanilla at commitfest:\n> https://commitfest.postgresql.org/24/2067\n>\n\nOK, that's what I assumed.\n\nYou're trying to treat this change as if it's a given that the other\nfunctionality you want/propose is present in core or will be present in\ncore. That's far from given. My suggestion is to split it up so that the\nparts can be reviewed and committed separately.\n\n\n> In PgPRO-EE this problem was solved by binding session to backend. I.e.\n> one backend can manage multiple sessions,\n> but session can not migrate to another backend. The drawback of such\n> solution is obvious: one long living transaction can block transactions of\n> all other sessions scheduled to this backend.\n> Possibility to migrate session to another backend is one of the obvious\n> solutions of the problem. 
But the main show stopper for it is temporary\n> tables.\n> This is why I consider moving temporary tables to shared buffers as very\n> important step.\n>\n\nI can see why it's important for your use case.\n\nI am not disagreeing.\n\nI am however strongly suggesting that your patch has two fairly distinct\nfunctional changes in it, and you should separate them out.\n\n* Introduce global temp tables, a new relkind that works like a temp table\nbut doesn't require catalog changes. Uses per-backend relfilenode and\ncleanup like existing temp tables. You could extend the relmapper to handle\nthe mapping of relation oid to per-backend relfilenode.\n\n* Associate global temp tables with session state and manage them in\nshared_buffers so they can work with the in-core connection pooler (if\ncommitted)\n\nHistorically we've had a few efforts to get in-core connection pooling that\nhaven't gone anywhere. Without your pooler patch the changes you make to\nuse shared_buffers etc are going to be unhelpful at best, if not actively\nharmful to performance, and will add unnecessary complexity. So I think\nthere's a logical series of patches here:\n\n* global temp table relkind and support for it\n* session state separation\n* connection pooling\n* pooler-friendly temp tables in shared_buffers\n\nMake sense?\n\n\n> But even if we forget about built-in connection pooler, don't you think\n> that possibility to use parallel query plans for temporary tables is itself\n> strong enough motivation to access global temp table through shared buffers?\n>\n\nI can see a way to share temp tables across parallel query backends being\nvery useful for DW/OLAP workloads, yes. But I don't know if putting them in\nshared_buffers is the right answer for that. 
We have DSM/DSA, we have\nshm_mq, various options for making temp buffers share-able with parallel\nworker backends.\n\nI'm suggesting that you not tie the whole (very useful) global temp tables\nfeature to this, but instead split it up into logical units that can be\nunderstood, reviewed and committed separately.\n\nI would gladly participate in review.\n\nWould DISCARD TEMP_BUFFERS meet your needs?\n>\n>\n> Actually I have already implemented DropLocalBuffers function (three line\n> of code:)\n>\n> [...]\n>\n> I do not think that we need such command at user level (i.e. have\n> correspondent SQL command).\n>\n\nI'd be very happy to have it personally, but don't think it needs to be\ntied in with your patch set here. Maybe I can cook up a patch soon.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 9 Aug 2019 13:34:21 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 09.08.2019 8:34, Craig Ringer wrote:\n> On Thu, 8 Aug 2019 at 15:03, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>\n>\n> On 08.08.2019 5:40, Craig Ringer wrote:\n>> On Tue, 6 Aug 2019 at 16:32, Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>>\n>> New version of the patch with several fixes is attached.\n>> Many thanks to Roman Zharkov for testing.\n>>\n>>\n>> FWIW I still don't understand your argument with regards to using\n>> shared_buffers for temp tables having connection pooling\n>> benefits. Are you assuming the presence of some other extension\n>> in your extended version of PostgreSQL ? In community PostgreSQL\n>> a temp table's contents in one backend will not be visible in\n>> another backend. So if your connection pooler in transaction\n>> pooling mode runs txn 1 on backend 42 and it populates temp table\n>> X, then the pooler runs the same app session's txn 2 on backend\n>> 45, the contents of temp table X are not visible anymore.\n>\n> Certainly here I mean built-in connection pooler which is not\n> currently present in Postgres,\n> but it is part of PgPRO-EE and there is my patch for vanilla at\n> commitfest:\n> https://commitfest.postgresql.org/24/2067\n>\n>\n> OK, that's what I assumed.\n>\n> You're trying to treat this change as if it's a given that the other \n> functionality you want/propose is present in core or will be present \n> in core. That's far from given. My suggestion is to split it up so \n> that the parts can be reviewed and committed separately.\n>\n> In PgPRO-EE this problem was solved by binding session to backend.\n> I.e. one backend can manage multiple sessions,\n> but session can not migrate to another backend. 
The drawback of\n> such solution is obvious: one long living transaction can block\n> transactions of all other sessions scheduled to this backend.\n> Possibility to migrate session to another backend is one of the\n> obvious solutions of the problem. But the main show stopper for it\n> is temporary tables.\n> This is why I consider moving temporary tables to shared buffers\n> as very important step.\n>\n>\n> I can see why it's important for your use case.\n>\n> I am not disagreeing.\n>\n> I am however strongly suggesting that your patch has two fairly \n> distinct functional changes in it, and you should separate them out.\n>\n> * Introduce global temp tables, a new relkind that works like a temp \n> table but doesn't require catalog changes. Uses per-backend \n> relfilenode and cleanup like existing temp tables. You could extend \n> the relmapper to handle the mapping of relation oid to per-backend \n> relfilenode.\n>\n> * Associate global temp tables with session state and manage them in \n> shared_buffers so they can work with the in-core connection pooler (if \n> committed)\n>\n> Historically we've had a few efforts to get in-core connection pooling \n> that haven't gone anywhere. Without your pooler patch the changes you \n> make to use shared_buffers etc are going to be unhelpful at best, if \n> not actively harmful to performance, and will add unnecessary \n> complexity. So I think there's a logical series of patches here:\n>\n> * global temp table relkind and support for it\n> * session state separation\n> * connection pooling\n> * pooler-friendly temp tables in shared_buffers\n>\n> Make sense?\n>\n> But even if we forget about built-in connection pooler, don't you\n> think that possibility to use parallel query plans for temporary\n> tables is itself strong enough motivation to access global temp\n> table through shared buffers?\n>\n>\n> I can see a way to share temp tables across parallel query backends \n> being very useful for DW/OLAP workloads, yes. 
But I don't know if \n> putting them in shared_buffers is the right answer for that. We have \n> DSM/DSA, we have shm_mq, various options for making temp buffers \n> share-able with parallel worker backends.\n>\n> I'm suggesting that you not tie the whole (very useful) global temp \n> tables feature to this, but instead split it up into logical units \n> that can be understood, reviewed and committed separately.\n>\n> I would gladly participate in review.\n\nOk, here it is: global_private_temp-1.patch\nAlso I have attached an updated version of the global temp tables with \nshared buffers - global_shared_temp-1.patch\nIt is certainly larger (~2k lines vs. 1.5k lines) because it is changing \nBufferTag and related functions.\nBut I do not think that this difference is so critical.\nI still have a wish to kill two birds with one stone:)\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 9 Aug 2019 17:07:00 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Fri, 9 Aug 2019 at 22:07, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n>\n>\n> Ok, here it is: global_private_temp-1.patch\n>\n\nFantastic.\n\nI'll put that high on my queue.\n\nI'd love to see something like this get in.\n\nDoubly so if it brings us closer to being able to use temp tables on\nphysical read replicas, though I know there are plenty of other barriers\nthere (not least of which being temp tables using persistent txns not\nvtxids)\n\nDoes it have a CF entry?\n\n\n> Also I have attached updated version of the global temp tables with shared\n> buffers - global_shared_temp-1.patch\n>\n\nNice to see that split out. In addition to giving the first patch more hope\nof being committed this time around, it'll help with readability and\ntestability too.\n\nTo be clear, I have long wanted to see PostgreSQL have the \"session\" state\nabstraction you have implemented. I think it's really important for high\nclient count OLTP workloads, working with the endless collection of ORMs\nout there, etc. So I'm all in favour of it in principle so long as it can\nbe made to work reliably with limited performance impact on existing\nworkloads and without making life lots harder when adding new core\nfunctionality, for extension authors etc. The same goes for built-in\npooling. I think PostgreSQL has needed some sort of separation of\n\"connection\", \"backend\", \"session\" and \"executor\" for a long time and I'm\nglad to see you working on it.\n\nWith that said: How do you intend to address the likelihood that this will\ncause performance regressions for existing workloads that use temp tables\n*without* relying on your session state and connection pooler? Consider\nworkloads that use temp tables for mid-long txns where txn pooling is\nunimportant, where they also do plenty of read and write activity on\npersistent tables. Classic OLAP/DW stuff. 
e.g.:\n\n* four clients, four backends, four connections, session-level connections\nthat stay busy with minimal client sleeps\n* All sessions run the same bench code\n* transactions all read plenty of data from a medium to large persistent\ntable (think fact tables, etc)\n* transactions store a filtered, joined dataset with some pre-computed\nwindow results or something in temp tables\n* benchmark workload makes big-ish temp tables to store intermediate data\nfor its medium-length transactions\n* transactions also write to some persistent relations, say to record their\nsummarised results\n\nHow does it perform with and without your patch? I'm concerned that:\n\n* the extra buffer locking and various IPC may degrade performance of temp\ntables\n* the temp table data in shared_buffers may put pressure on shared_buffers\nspace, cached pages for persistent tables all sessions are sharing;\n* the temp table data in shared_buffers may put pressure on shared_buffers\nspace for dirty buffers, forcing writes of persistent tables out earlier\ntherefore reducing write-combining opportunities;\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Sat, 10 Aug 2019 10:12:03 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 10.08.2019 5:12, Craig Ringer wrote:\n> On Fri, 9 Aug 2019 at 22:07, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>\n>\n> Ok, here it is: global_private_temp-1.patch\n>\n>\n> Fantastic.\n>\n> I'll put that high on my queue.\n>\n> I'd love to see something like this get in.\n>\n> Doubly so if it brings us closer to being able to use temp tables on \n> physical read replicas, though I know there are plenty of other \n> barriers there (not least of which being temp tables using persistent \n> txns not vtxids)\n>\n> Does it have a CF entry?\n\nhttps://commitfest.postgresql.org/24/2233/\n\n> Also I have attached updated version of the global temp tables\n> with shared buffers - global_shared_temp-1.patch\n>\n>\n> Nice to see that split out. In addition to giving the first patch more \n> hope of being committed this time around, it'll help with readability \n> and testability too.\n>\n> To be clear, I have long wanted to see PostgreSQL have the \"session\" \n> state abstraction you have implemented. I think it's really important \n> for high client count OLTP workloads, working with the endless \n> collection of ORMs out there, etc. So I'm all in favour of it in \n> principle so long as it can be made to work reliably with limited \n> performance impact on existing workloads and without making life lots \n> harder when adding new core functionality, for extension authors etc. \n> The same goes for built-in pooling. I think PostgreSQL has needed some \n> sort of separation of \"connection\", \"backend\", \"session\" and \n> \"executor\" for a long time and I'm glad to see you working on it.\n>\n> With that said: How do you intend to address the likelihood that this \n> will cause performance regressions for existing workloads that use \n> temp tables *without* relying on your session state and connection \n> pooler? 
Consider workloads that use temp tables for mid-long txns \n> where txn pooling is unimportant, where they also do plenty of read \n> and write activity on persistent tables. Classic OLAP/DW stuff. e.g.:\n>\n> * four clients, four backends, four connections, session-level \n> connections that stay busy with minimal client sleeps\n> * All sessions run the same bench code\n> * transactions all read plenty of data from a medium to large \n> persistent table (think fact tables, etc)\n> * transactions store a filtered, joined dataset with some pre-computed \n> window results or something in temp tables\n> * benchmark workload makes big-ish temp tables to store intermediate \n> data for its medium-length transactions\n> * transactions also write to some persistent relations, say to record \n> their summarised results\n>\n> How does it perform with and without your patch? I'm concerned that:\n>\n> * the extra buffer locking and various IPC may degrade performance of \n> temp tables\n> * the temp table data in shared_buffers may put pressure on \n> shared_buffers space, cached pages for persistent tables all sessions \n> are sharing;\n> * the temp table data in shared_buffers may put pressure on \n> shared_buffers space for dirty buffers, forcing writes of persistent \n> tables out earlier therefore reducing write-combining opportunities;\n>\nI agree that access to local buffers is cheaper than to shared buffers \nbecause there is no lock overhead.\nAnd the fact that access to local tables can not affect cached data of \npersistent tables is also important.\nBut most Postgres tables are still normal (persistent) tables accessed \nthrough shared buffers.\nAnd a huge amount of effort was made to make this access as efficient as \npossible (a clock algorithm which doesn't require a global lock,\natomic operations,...). 
Also using the same replacement discipline for \nall tables at some workloads may be also preferable.\nSo it is not so obvious to me that in the described scenario local \nbuffer cache for temporary table really will provide significant advantages.\nIt will be interesting to perform some benchmarking - I am going to do it.\n\nWhat I have observed right now is that in type scenario: dumping results \nof huge query to temporary table with subsequent traverse of this table\nold (local) temporary tables provide better performance (may be because \nof small size of local buffer cache and different eviction policy).\nBut subsequent accesses to global shared table are faster (because it \ncompletely fits in large shared buffer cache).\n\nThere is one more problem with global temporary tables for which I do \nnot know good solution now: collecting statistic.\nAs far as each backend has its own data, generally them may need \ndifferent query plans.\nRight now if you perform \"analyze table\" in one backend, then it will \naffect plans in all backends.\nIt can be considered not as bug, but as feature if we assume that \ndistribution if data in all backens is similar.\nBut if this assumption is not true, then it can be a problem.\n\n\n\n\n\n\n\n\n\n\n\n\nOn 10.08.2019 5:12, Craig Ringer wrote:\n\n\n\n\nOn Fri, 9 Aug 2019 at 22:07, Konstantin Knizhnik\n <k.knizhnik@postgrespro.ru>\n wrote:\n\n\n\n \n\n Ok, here it is: global_private_temp-1.patch\n\n\n\n\nFantastic.\n\n\nI'll put that high on my queue.\n\n\nI'd love to see something like this get in.\n\n\nDoubly so if it brings us closer to being able to use\n temp tables on physical read replicas, though I know there\n are plenty of other barriers there (not least of which being\n temp tables using persistent txns not vtxids)\n\n\nDoes it have a CF entry?\n \n\n\n\n\nhttps://commitfest.postgresql.org/24/2233/\n\n\n\n\n\n Also I have attached updated version\n of the global temp tables with shared buffers -\n 
global_shared_temp-1.patch \n\n\n\n\nNice to see that split out. In addition to giving the\n first patch more hope of being committed this time around,\n it'll help with readability and testability too.\n\n\nTo be clear, I have long wanted to see PostgreSQL have\n the \"session\" state abstraction you have implemented. I\n think it's really important for high client count OLTP\n workloads, working with the endless collection of ORMs out\n there, etc. So I'm all in favour of it in principle so long\n as it can be made to work reliably with limited performance\n impact on existing workloads and without making life lots\n harder when adding new core functionality, for extension\n authors etc. The same goes for built-in pooling. I think\n PostgreSQL has needed some sort of separation of\n \"connection\", \"backend\", \"session\" and \"executor\" for a long\n time and I'm glad to see you working on it.\n\n\nWith that said: How do you intend to address the\n likelihood that this will cause performance regressions for\n existing workloads that use temp tables *without* relying on\n your session state and connection pooler? Consider workloads\n that use temp tables for mid-long txns where txn pooling is\n unimportant, where they also do plenty of read and write\n activity on persistent tables. Classic OLAP/DW stuff. e.g.:\n\n\n* four clients, four backends, four connections,\n session-level connections that stay busy with minimal client\n sleeps\n* All sessions run the same bench code\n* transactions all read plenty of data from a medium to\n large persistent table (think fact tables, etc)\n\n* transactions store a filtered, joined dataset with some\n pre-computed window results or something in temp tables\n* benchmark workload makes big-ish temp tables to store\n intermediate data for its medium-length transactions\n* transactions also write to some persistent relations,\n say to record their summarised results \n\n\nHow does it perform with and without your patch? 
I'm\n concerned that:\n\n\n* the extra buffer locking and various IPC may degrade\n performance of temp tables\n* the temp table data in shared_buffers may put pressure\n on shared_buffers space, cached pages for persistent tables\n all sessions are sharing;\n\n* the temp table data in shared_buffers may put pressure\n on shared_buffers space for dirty buffers, forcing writes of\n persistent tables out earlier therefore reducing\n write-combining opportunities;\n\n\n\n\n\n I agree that access to local buffers is cheaper than to shared\n buffers because there is no lock overhead.\n And the fact that access to local tables can not affect cached data\n of persistent tables is also important.\n But most of Postgres tables are still normal (persistent) tables\n access through shared buffers.\n And huge amount of efforts were made to make this access as\n efficient as possible (use clock algorithm which doesn't require\n global lock, \n atomic operations,...). Also using the same replacement discipline\n for all tables at some workloads may be also preferable.\n So it is not so obvious to me that in the described scenario local\n buffer cache for temporary table really will provide significant\n advantages.\n It will be interesting to perform some benchmarking - I am going to\n do it.\n\n What I have observed right now is that in type scenario: dumping\n results of huge query to temporary table with subsequent traverse of\n this table\n old (local) temporary tables provide better performance (may be\n because of small size of local buffer cache and different eviction\n policy).\n But subsequent accesses to global shared table are faster (because\n it completely fits in large shared buffer cache).\n\n There is one more problem with global temporary tables for which I\n do not know good solution now: collecting statistic.\n As far as each backend has its own data, generally them may need\n different query plans.\n Right now if you perform \"analyze table\" in one backend, 
then it\n will affect plans in all backends.\n It can be considered not a bug but a feature if we assume that the\n distribution of data in all backends is similar.\n But if this assumption is not true, then it can be a problem.",
"msg_date": "Sun, 11 Aug 2019 09:52:50 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
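The statistics problem described in the message above — each backend holds its own rows, but plans are built from catalog-wide statistics — can be made concrete with a toy model (illustrative Python with invented names and thresholds; this is not PostgreSQL code):

```python
# Toy model: pg_statistic / pg_class.reltuples are shared by all backends,
# but each backend's instance of a global temp table holds different data,
# so one backend's ANALYZE dictates everyone's plan.

shared_stats = {}  # models the shared catalog entry for the table

def analyze(table, local_row_count):
    """ANALYZE run in one backend publishes its row count for everybody."""
    shared_stats[table] = local_row_count

def choose_plan(table):
    """Toy planner rule: big tables get a seq scan, small ones an index scan."""
    return "seqscan" if shared_stats.get(table, 0) > 10_000 else "indexscan"

analyze("gtt", 1_000_000)         # backend A: a million rows, runs ANALYZE

backend_b_rows = 10               # backend B: ten rows, never analyzes
plan_b_gets = choose_plan("gtt")  # planned with A's statistics
plan_b_wants = "seqscan" if backend_b_rows > 10_000 else "indexscan"

print(plan_b_gets, plan_b_wants)  # seqscan indexscan - the mismatch
```

If the data distribution really is similar across backends, the shared statistics are fine; the mismatch only bites when backends differ widely, exactly as the message says.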
{
"msg_contents": "Hi\n\n\n> There is one more problem with global temporary tables for which I do not\n> know good solution now: collecting statistic.\n> As far as each backend has its own data, generally them may need different\n> query plans.\n> Right now if you perform \"analyze table\" in one backend, then it will\n> affect plans in all backends.\n> It can be considered not as bug, but as feature if we assume that\n> distribution if data in all backens is similar.\n> But if this assumption is not true, then it can be a problem.\n>\n\nLast point is probably the most difficult issue and I think about it years.\n\nI have a experience with my customers so 99% of usage temp tables is\nwithout statistics - just with good information only about rows. Only few\ncustomers know so manual ANALYZE is necessary for temp tables (when it is\nreally necessary).\n\nSharing meta data about global temporary tables can real problem - probably\nnot about statistics, but surely about number of pages and number of rows.\n\nThere are two requirements:\n\na) we need some special meta data for any instance (per session) of global\ntemporary table (row, pages, statistics, maybe multicolumn statistics, ...)\n\nb) we would not to use persistent global catalogue (against catalogue\nbloating)\n\nI see two possible solution:\n\n1. hold these data only in memory in special buffers\n\n2. hold these data in global temporary tables - it is similar to normal\ntables - we can use global temp tables for metadata like classic persistent\ntables are used for metadata of classic persistent tables. Next syscache\ncan be enhanced to work with union of two system tables.\n\nI prefer @2 because changes can be implemented on deeper level.\n\nSharing metadata for global temp tables (current state if I understand\nwell) is good enough for develop stage, but It is hard to expect so it can\nwork generally in production environment.\n\nRegards\n\np.s. I am very happy so you are working on this topic. 
It is an interesting\nand important problem.\n\nPavel",
"msg_date": "Sun, 11 Aug 2019 09:14:11 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
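Pavel's option 2 above — storing per-session metadata in a global temp table and teaching the syscache to read the "union of two system tables" — can be sketched as a lookup that prefers a session-local overlay over the shared catalog (illustrative Python, not the actual syscache code; all names here are invented):

```python
from collections import ChainMap

# Shared, persistent catalog row (no per-session numbers in it).
shared_pg_class = {"gtt": {"reltuples": 0, "relpages": 0}}

# Per-session metadata, itself conceptually stored in a global temp table.
session_overlay = {}

# "Union of two system tables": the session copy wins when present.
catalog = ChainMap(session_overlay, shared_pg_class)

def local_analyze(rel, reltuples, relpages):
    """A per-session ANALYZE writes only into the session overlay."""
    session_overlay[rel] = {"reltuples": reltuples, "relpages": relpages}

local_analyze("gtt", 42, 1)
print(catalog["gtt"]["reltuples"])          # 42 - this session's numbers
print(shared_pg_class["gtt"]["reltuples"])  # 0  - other sessions unaffected
```

The appeal of this shape is that sessions that never run ANALYZE on a global temp table pay nothing: their overlay is empty and every lookup falls through to the shared row.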
{
"msg_contents": "Hi,\n\nOn 11.08.2019 10:14, Pavel Stehule wrote:\n>\n> Hi\n>\n>\n> There is one more problem with global temporary tables for which I\n> do not know good solution now: collecting statistic.\n> As far as each backend has its own data, generally them may need\n> different query plans.\n> Right now if you perform \"analyze table\" in one backend, then it\n> will affect plans in all backends.\n> It can be considered not as bug, but as feature if we assume that\n> distribution if data in all backens is similar.\n> But if this assumption is not true, then it can be a problem.\n>\n>\n> Last point is probably the most difficult issue and I think about it \n> years.\n>\n> I have a experience with my customers so 99% of usage temp tables is \n> without statistics - just with good information only about rows. Only \n> few customers know so manual ANALYZE is necessary for temp tables \n> (when it is really necessary).\n>\n> Sharing meta data about global temporary tables can real problem - \n> probably not about statistics, but surely about number of pages and \n> number of rows.\n\nBut Postgres is not storing this information now anywhere else except \nstatistic, isn't it?\nThere was proposal to cache relation size, but it is not implemented \nyet. If such cache exists, then we can use it to store local information \nabout global temporary tables.\nSo if 99% of users do not perform analyze for temporary tables, then \nthem will not be faced with this problem, right?\n\n\n>\n> There are two requirements:\n>\n> a) we need some special meta data for any instance (per session) of \n> global temporary table (row, pages, statistics, maybe multicolumn \n> statistics, ...)\n>\n> b) we would not to use persistent global catalogue (against catalogue \n> bloating)\n>\n> I see two possible solution:\n>\n> 1. hold these data only in memory in special buffers\n>\n> 2. 
hold these data in global temporary tables - it is similar to \n> normal tables - we can use global temp tables for metadata like \n> classic persistent tables are used for metadata of classic persistent \n> tables. Next syscache can be enhanced to work with union of two system \n> tables.\n>\n> I prefer @2 because changes can be implemented on deeper level.\n>\n> Sharing metadata for global temp tables (current state if I understand \n> well) is good enough for develop stage, but It is hard to expect so it \n> can work generally in production environment.\n>\n\nI think that it is not possible to assume that temporary data will always \nfit in memory.\nSo 1) does not seem to be an acceptable solution.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 12 Aug 2019 19:19:40 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "po 12. 8. 2019 v 18:19 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n> Hi,\n>\n> On 11.08.2019 10:14, Pavel Stehule wrote:\n>\n>\n> Hi\n>\n>\n>> There is one more problem with global temporary tables for which I do not\n>> know good solution now: collecting statistic.\n>> As far as each backend has its own data, generally them may need\n>> different query plans.\n>> Right now if you perform \"analyze table\" in one backend, then it will\n>> affect plans in all backends.\n>> It can be considered not as bug, but as feature if we assume that\n>> distribution if data in all backens is similar.\n>> But if this assumption is not true, then it can be a problem.\n>>\n>\n> Last point is probably the most difficult issue and I think about it\n> years.\n>\n> I have a experience with my customers so 99% of usage temp tables is\n> without statistics - just with good information only about rows. Only few\n> customers know so manual ANALYZE is necessary for temp tables (when it is\n> really necessary).\n>\n> Sharing meta data about global temporary tables can real problem -\n> probably not about statistics, but surely about number of pages and number\n> of rows.\n>\n>\n> But Postgres is not storing this information now anywhere else except\n> statistic, isn't it?\n>\n\nnot only - critical numbers are reltuples, relpages from pg_class\n\nThere was proposal to cache relation size, but it is not implemented yet.\n> If such cache exists, then we can use it to store local information about\n> global temporary tables.\n> So if 99% of users do not perform analyze for temporary tables, then them\n> will not be faced with this problem, right?\n>\n\nthey use default statistics based on relpages. 
But for 1% of applications\nstatistics are critical - almost always for OLAP applications.\n\n\n>\n>\n> There are two requirements:\n>\n> a) we need some special meta data for any instance (per session) of global\n> temporary table (row, pages, statistics, maybe multicolumn statistics, ...)\n>\n> b) we would not to use persistent global catalogue (against catalogue\n> bloating)\n>\n> I see two possible solution:\n>\n> 1. hold these data only in memory in special buffers\n>\n> 2. hold these data in global temporary tables - it is similar to normal\n> tables - we can use global temp tables for metadata like classic persistent\n> tables are used for metadata of classic persistent tables. Next syscache\n> can be enhanced to work with union of two system tables.\n>\n> I prefer @2 because changes can be implemented on deeper level.\n>\n> Sharing metadata for global temp tables (current state if I understand\n> well) is good enough for develop stage, but It is hard to expect so it can\n> work generally in production environment.\n>\n>\n> I think that it not possible to assume that temporary data will aways fir\n> in memory.\n> So 1) seems to be not acceptable solution.\n>\n\nI spoke only about metadata. Data should be stored in temp buffers (and\npossibly in temp files).\n\nPavel\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Mon, 12 Aug 2019 18:47:09 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
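"Default statistics based on relpages" above refers to the planner's fallback of estimating a row count from the relation's size on disk when reltuples is unknown. A rough sketch of that idea (the overhead constants are assumptions for illustration, not the planner's actual arithmetic in estimate_rel_size):

```python
BLCKSZ = 8192  # PostgreSQL's default block size

def estimate_reltuples(relpages, avg_tuple_width):
    """Fallback row estimate from page count and average tuple width.
    The overhead values below are invented for the sketch."""
    usable_per_page = BLCKSZ - 100      # assumed page header/special space
    per_tuple = avg_tuple_width + 28    # assumed per-tuple header overhead
    return relpages * max(usable_per_page // per_tuple, 1)

# 10 pages of ~100-byte tuples
print(estimate_reltuples(10, 100))  # → 630
```

This is why relpages matters even for the 99% who never ANALYZE a temp table: a reasonably fresh page count alone already steers the seq-scan vs index-scan choice.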
{
"msg_contents": "On Tue, 13 Aug 2019 at 00:47, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n\n> But Postgres is not storing this information now anywhere else except\n>> statistic, isn't it?\n>>\n>\n> not only - critical numbers are reltuples, relpages from pg_class\n>\n\nThat's a very good point. relallvisible too. How's the global temp table\nimpl handling that right now, since you won't be changing the pg_class row?\nAFAICS relpages doesn't need to be up to date (and reltuples certainly\ndoesn't) so presumably you're just leaving them as zero?\n\nWhat happens right now if you ANALYZE or VACUUM ANALYZE a global temp\ntable? Is it just disallowed?\n\nI'll need to check, but I wonder if periodically updating those fields in\npg_class impacts logical decoding too. Logical decoding must treat\ntransactions with catalog changes as special cases where it creates custom\nsnapshots and does other expensive additional work.\n(See ReorderBufferXidSetCatalogChanges in reorderbuffer.c and its\ncallsites). We don't actually need to know relpages and reltuples during\nlogical decoding. It can probably ignore relfrozenxid and relminmxid\nchanges too, maybe even pg_statistic changes though I'd be less confident\nabout that one.\n\nAt some point I need to patch in a bunch of extra tracepoints and do some\nperf tracing to see how often we do potentially unnecessary snapshot\nrelated work in logical decoding.\n\n\n> There was proposal to cache relation size, but it is not implemented yet.\n>> If such cache exists, then we can use it to store local information about\n>> global temporary tables.\n>> So if 99% of users do not perform analyze for temporary tables, then them\n>> will not be faced with this problem, right?\n>>\n>\n> they use default statistics based on relpages. But for 1% of applications\n> statistics are critical - almost always for OLAP applications.\n>\n\nAgreed. 
It's actually quite a common solution to user problem reports /\nsupport queries about temp table performance: \"Run ANALYZE. Consider\ncreating indexes too.\"\n\nWhich reminds me - if global temp tables do get added, it'll further\nincrease the desirability of exposing a feature to let users\ndisable+invalidate and then later reindex+enable indexes without icky\ncatalog hacking. So they can disable indexes for their local copy, load\ndata, re-enable indexes. That'd be \"interesting\" to implement for global\ntemp tables given that index state is part of the pg_index row associated\nwith an index rel though.\n\n\n1. hold these data only in memory in special buffers\n>>\n>>\nI don't see that working well for pg_statistic or anything else that holds\nnontrivial user data though.\n\n> 2. hold these data in global temporary tables - it is similar to normal\n>> tables - we can use global temp tables for metadata like classic persistent\n>> tables are used for metadata of classic persistent tables. Next syscache\n>> can be enhanced to work with union of two system tables.\n>>\n>>\nVery meta. 
Syscache and relcache are extremely performance critical but\ncould probably skip scans entirely on backends that haven't used any global\ntemp tables.\n\nI don't know the relevant caches well enough to have a useful opinion here.\n\n> I think that it not possible to assume that temporary data will aways fir\n>> in memory.\n>> So 1) seems to be not acceptable solution.\n>>\n>\nIt'd only be the metadata, but if it includes things like column histograms\nand most frequent value data that'd still be undesirable to have pinned in\nbackend memory.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 13 Aug 2019 13:34:37 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
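Craig's point about logical decoding — that pg_class updates touching only planner fields (relpages, reltuples, relallvisible) need not force the expensive "transaction changed the catalog" path — amounts to a filter like the following (a hypothetical sketch; no such filter exists in reorderbuffer.c, which currently flags any catalog-touching transaction):

```python
# Fields only the planner cares about; decoding could in principle
# ignore updates that touch nothing else.
PLANNER_ONLY_FIELDS = {"relpages", "reltuples", "relallvisible"}

def forces_catalog_snapshot(changed_fields):
    """Would this pg_class update require snapshot rebuild work
    during logical decoding under the proposed relaxation?"""
    return bool(set(changed_fields) - PLANNER_ONLY_FIELDS)

print(forces_catalog_snapshot(["relpages", "reltuples"]))  # False
print(forces_catalog_snapshot(["relname"]))                # True
```

As Craig notes, extending this to relfrozenxid/relminmxid or pg_statistic changes is plausible but would need more careful justification.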
{
"msg_contents": "On 13.08.2019 8:34, Craig Ringer wrote:\n> On Tue, 13 Aug 2019 at 00:47, Pavel Stehule <pavel.stehule@gmail.com \n> <mailto:pavel.stehule@gmail.com>> wrote:\n>\n> But Postgres is not storing this information now anywhere else\n> except statistic, isn't it?\n>\n>\n> not only - critical numbers are reltuples, relpages from pg_class\n>\n>\n> That's a very good point. relallvisible too. How's the global temp \n> table impl handling that right now, since you won't be changing the \n> pg_class row? AFAICS relpages doesn't need to be up to date (and \n> reltuples certainly doesn't) so presumably you're just leaving them as \n> zero?\nAs far as I understand relpages and reltuples are set only when you \nperform \"analyze\" of the table.\n\n>\n> What happens right now if you ANALYZE or VACUUM ANALYZE a global temp \n> table? Is it just disallowed?\n\nNo, it is not disallowed now.\nIt updates the statistic and also fields in pg_class which are shared by \nall backends.\nSo all backends will now build plans according to this statistic. \nCertainly it may lead to not so efficient plans if there are large \ndifferences in number of tuples stored in this table in different backends.\nBut seems to me critical mostly in case of presence of indexes for \ntemporary table. And it seems to me that users are created indexes for \ntemporary tables even rarely than doing analyze for them.\n>\n> I'll need to check, but I wonder if periodically updating those fields \n> in pg_class impacts logical decoding too. Logical decoding must treat \n> transactions with catalog changes as special cases where it creates \n> custom snapshots and does other expensive additional work. \n> (See ReorderBufferXidSetCatalogChanges in reorderbuffer.c and its \n> callsites). We don't actually need to know relpages and reltuples \n> during logical decoding. 
It can probably ignore relfrozenxid \n> and relminmxid changes too, maybe even pg_statistic changes though I'd \n> be less confident about that one.\n>\n> At some point I need to patch in a bunch of extra tracepoints and do \n> some perf tracing to see how often we do potentially unnecessary \n> snapshot related work in logical decoding.\n\nTemporary tables (both local and global) as well as unlogged tables are \nnot subject to logical replication, are they?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 13 Aug 2019 11:19:19 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Fri, 9 Aug 2019 at 22:07, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n>\n>\n> Ok, here it is: global_private_temp-1.patch\n>\n\n\nInitial pass review follows.\n\nRelation name \"SESSION\" is odd. I guess you're avoiding \"global\" because\nthe data is session-scoped, not globally temporary. But I'm not sure\n\"session\" fits either - after all regular temp tables are also session\ntemporary tables. I won't focus on naming further beyond asking that it be\nconsistent though, right now there's a mix of \"global\" in some places and\n\"session\" in others.\n\n\nWhy do you need to do all this indirection with changing RelFileNode to\nRelFileNodeBackend in the bufmgr, changing BufferGetTag etc? Similarly,\nyour changes of RelFileNodeBackendIsTemp to RelFileNodeBackendIsLocalTemp .\nDid you look into my suggestion of extending the relmapper so that global\ntemp tables would have a relfilenode of 0 like pg_class etc, and use a\nbackend-local map of oid-to-relfilenode mappings? I'm guessing you did it\nthe way you did instead to lay the groundwork for cross-backend sharing,\nbut if so it should IMO be in that patch. Maybe my understanding of the\nexisting temp table mechanics is just insufficient as I\nsee RelFileNodeBackendIsTemp is already used in some aspects of existing\ntemp relation handling.\n\nSimilarly, TruncateSessionRelations probably shouldn't need to exist in\nthis patch in its current form; there's no shared_buffers use to clean and\nthe same file cleanup mechanism should handle both session-temp and\nlocal-temp relfilenodes.\n\n\nA number of places make a change like this:\n\nrel->rd_rel->relpersistence == RELPERSISTENCE_TEMP ||\n+ rel->rd_rel->relpersistence == RELPERSISTENCE_SESSION\n\nand I'd like to see a test macro or inline static for it since it's\nrepeated so much. 
Mostly to make the intent clear: \"is this a relation with\ntemporary backend-scoped data?\"\n\n\nThis test:\n\n+ if (blkno == BTREE_METAPAGE && PageIsNew(BufferGetPage(buf)) &&\nIsSessionRelationBackendId(rel->rd_backend))\n+ _bt_initmetapage(BufferGetPage(buf), P_NONE, 0);\n\nseems sensible but I'm wondering if there's a way to channel initialization\nof global-temp objects through a bit more of a common path, so it reads\nmore obviously as a common test applying to all global-temp tables. \"Global\ntemp table not initialized in session yet? Initialize it.\" So we don't have\nto have every object type do an object type specific test for \"am I\nactually uninitialized?\" in all paths it might hit. I guess I expected to\nsee something more like a\n\nif (RelGlobalTempUninitialized(rel))\n\nbut maybe I've been doing too much Java ;)\n\nA similar test reappears here for sequences:\n\n+ if (rel->rd_rel->relpersistence == RELPERSISTENCE_SESSION &&\nPageIsNew(page))\n\nWhy is this written differently?\n\n\nSequence initialization ignores sequence startval/firstval settings. Why?\n\n\n- else if (newrelpersistence == RELPERSISTENCE_PERMANENT)\n+ else if (newrelpersistence != RELPERSISTENCE_TEMP)\n\nDoesn't this change the test outcome for RELPERSISTENCE_UNLOGGED?\n\n\nIn PreCommit_on_commit_actions, in the the ONCOMMIT_DELETE_ROWS case, is\nthere any way to still respect the XACT_FLAGS_ACCESSEDTEMPNAMESPACE flag if\nthe oid is for a backend-temp table not a global-temp table?\n\n\n+ bool isLocalBuf = SmgrIsTemp(smgr) && relpersistence ==\nRELPERSISTENCE_TEMP;\n\nWon't session-temp tables have local buffers too? 
Until your next patch\nthat adds shared_buffers storage for them anyway?\n\n\n+ * These is no need to separate them at file system level, so just\nsubtract SessionRelFirstBackendId\n+ * to avoid too long file names.\n\nI agree that there's no reason to be able to differentiate between\nlocal-temp and session-temp relfilenodes at the filesystem level.\n\n\n> Also I have attached updated version of the global temp tables with shared\n> buffers - global_shared_temp-1.patch\n> It is certainly larger (~2k lines vs. 1.5k lines) because it is changing\n> BufferTag and related functions.\n> But I do not think that this different is so critical.\n> Still have a wish to kill two birds with one stone:)\n>\n> --\n>\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 13 Aug 2019 16:21:58 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Tue, 13 Aug 2019 at 16:19, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n>\n>\n> On 13.08.2019 8:34, Craig Ringer wrote:\n>\n> On Tue, 13 Aug 2019 at 00:47, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>\n>> But Postgres is not storing this information now anywhere else except\n>>> statistic, isn't it?\n>>>\n>>\n>> not only - critical numbers are reltuples, relpages from pg_class\n>>\n>\n> That's a very good point. relallvisible too. How's the global temp table\n> impl handling that right now, since you won't be changing the pg_class row?\n> AFAICS relpages doesn't need to be up to date (and reltuples certainly\n> doesn't) so presumably you're just leaving them as zero?\n>\n> As far as I understand relpages and reltuples are set only when you\n> perform \"analyze\" of the table.\n>\n\nAlso autovacuum's autoanalyze.\n\nWhat happens right now if you ANALYZE or VACUUM ANALYZE a global temp\n> table? Is it just disallowed?\n>\n>\n> No, it is not disallowed now.\n> It updates the statistic and also fields in pg_class which are shared by\n> all backends.\n> So all backends will now build plans according to this statistic.\n> Certainly it may lead to not so efficient plans if there are large\n> differences in number of tuples stored in this table in different backends.\n> But seems to me critical mostly in case of presence of indexes for\n> temporary table. And it seems to me that users are created indexes for\n> temporary tables even rarely than doing analyze for them.\n>\n\nThat doesn't seem too bad TBH. Hacky but it doesn't seem dangerously wrong\nand as likely to be helpful as not if clearly documented.\n\n\n> Temporary tables (both local and global) as well as unlogged tables are\n> not subject of logical replication, aren't them?\n>\n>\nRight. 
But in the same way that they're still present in the catalogs, I\nthink they still affect catalog snapshots maintained by logical decoding's\nhistoric snapshot manager as temp table creation/drop will still AFAIK\ncause catalog invalidations to be written on commit. I need to double check\nthat.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 13 Aug 2019 16:27:10 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 13.08.2019 11:21, Craig Ringer wrote:\n> On Fri, 9 Aug 2019 at 22:07, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>\n>\n> Ok, here it is: global_private_temp-1.patch\n>\n>\n>\n> Initial pass review follows.\n>\n> Relation name \"SESSION\" is odd. I guess you're avoiding \"global\" \n> because the data is session-scoped, not globally temporary. But I'm \n> not sure \"session\" fits either - after all regular temp tables are \n> also session temporary tables. I won't focus on naming further beyond \n> asking that it be consistent though, right now there's a mix of \n> \"global\" in some places and \"session\" in others.\n>\nI have supported both forms \"create session table\" and \"create global temp\".\nBoth \"session\" and \"global\" are existing PostgreSQL keywords, so we do \nnot need to introduce a new one (unlike \"public\" or \"shared\").\nThe form \"global temp\" is used in Oracle, so it seems natural to \nprovide similar syntax.\n\nI am not insisting on this syntax and will consider all other suggestions.\nBut IMHO almost any SQL keyword is overloaded and has different meanings.\nTemporary tables have the session as their living area (or the transaction \nif created with the \"ON COMMIT DROP\" clause).\nSo calling them \"session tables\" is actually clearer than just \n\"temporary tables\".\nBut \"local temp tables\" can also be considered session tables. So maybe \nit is really not such a good idea\nto use the \"session\" keyword for them.\n\n>\n> Why do you need to do all this indirection with changing RelFileNode \n> to RelFileNodeBackend in the bufmgr, changing BufferGetTag etc? \n> Similarly, your changes of RelFileNodeBackendIsTemp \n> to RelFileNodeBackendIsLocalTemp . Did you look into my suggestion of \n> extending the relmapper so that global temp tables would have a \n> relfilenode of 0 like pg_class etc, and use a backend-local map of \n> oid-to-relfilenode mappings? 
I'm guessing you did it the way you did \n> instead to lay the groundwork for cross-backend sharing, but if so it \n> should IMO be in that patch. Maybe my understanding of the existing \n> temp table mechanics is just insufficient as I \n> see RelFileNodeBackendIsTemp is already used in some aspects of \n> existing temp relation handling.\n\nSorry, are you really speaking about global_private_temp-1.patch?\nThis patch doesn't change the bufmgr file at all.\nMaybe you looked at the other patch - global_shared_temp-1.patch -\nwhich accesses shared tables through shared buffers and so has to \nchange the buffer tag to include the backend ID in it.\n\n>\n> Similarly, TruncateSessionRelations probably shouldn't need to exist \n> in this patch in its current form; there's no shared_buffers use to \n> clean and the same file cleanup mechanism should handle both \n> session-temp and local-temp relfilenodes.\n>\nIn global_private_temp-1.patch TruncateSessionRelations does nothing \nwith shared buffers, it just deletes relation files.\n\n>\n> A number of places make a change like this:\n> rel->rd_rel->relpersistence == RELPERSISTENCE_TEMP ||\n> + rel->rd_rel->relpersistence == RELPERSISTENCE_SESSION\n>\n> and I'd like to see a test macro or inline static for it since it's \n> repeated so much. Mostly to make the intent clear: \"is this a relation \n> with temporary backend-scoped data?\"\n>\nI considered calling such a macro IsSessionRelation() or something like that, \nbut you do not like the notion \"session\".\nIs IsBackendScopedRelation() a better choice?\n\n>\n> This test:\n>\n> + if (blkno == BTREE_METAPAGE && PageIsNew(BufferGetPage(buf)) && \n> IsSessionRelationBackendId(rel->rd_backend))\n> + _bt_initmetapage(BufferGetPage(buf), P_NONE, 0);\n>\n> seems sensible but I'm wondering if there's a way to channel \n> initialization of global-temp objects through a bit more of a common \n> path, so it reads more obviously as a common test applying to all \n> global-temp tables. 
\"Global temp table not initialized in session yet? \n> Initialize it.\" So we don't have to have every object type do an \n> object type specific test for \"am I actually uninitialized?\" in all \n> paths it might hit. I guess I expected to see something more like a\n>\n> if (RelGlobalTempUninitialized(rel))\n>\n> but maybe I've been doing too much Java ;)\n>\n> A similar test reappears here for sequences:\n>\n> + if (rel->rd_rel->relpersistence == RELPERSISTENCE_SESSION && \n> PageIsNew(page))\n>\n> Why is this written differently?\n>\n>\nJust because I wrote them at different moments in time :)\nI think that adding RelGlobalTempUninitialized is a really good idea.\n\n\n> Sequence initialization ignores sequence startval/firstval settings. Why?\n>\n>\nI am handling only the case of implicitly created sequences for \nSERIAL/BIGSERIAL columns.\nIs it possible to explicitly specify the initial value and step for them?\nIf so, this place should definitely be rewritten.\n\n\n> - else if (newrelpersistence == RELPERSISTENCE_PERMANENT)\n> + else if (newrelpersistence != RELPERSISTENCE_TEMP)\n>\n> Doesn't this change the test outcome for RELPERSISTENCE_UNLOGGED?\n>\nThe RELPERSISTENCE_UNLOGGED case is handled in the previous IF branch.\n\n>\n> In PreCommit_on_commit_actions, in the ONCOMMIT_DELETE_ROWS case, \n> is there any way to still respect the XACT_FLAGS_ACCESSEDTEMPNAMESPACE \n> flag if the oid is for a backend-temp table not a global-temp table?\n>\nIf it is a local temp table, then XACT_FLAGS_ACCESSEDTEMPNAMESPACE is set \nand so there is no need to check this flag.\nAnd since XACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set now for \nglobal temp tables, I have to remove this check.\nSo for local temp tables the original behavior is preserved.\n\nThe question is why I do not set XACT_FLAGS_ACCESSEDTEMPNAMESPACE for \nglobal temp tables?\nI wanted to avoid the current limitation on using temp tables in prepared \ntransactions.\nGlobal metadata allows us to eliminate some problems related to 
using \ntemp tables in 2PC.\nBut I am not sure that it eliminates ALL problems and there are no \nstrange effects related to\nprepared transactions & global temp tables.\n\n\n>\n> + bool isLocalBuf = SmgrIsTemp(smgr) && relpersistence == \n> RELPERSISTENCE_TEMP;\n>\n> Won't session-temp tables have local buffers too? Until your next \n> patch that adds shared_buffers storage for them anyway?\n>\nOnce again, that is a change from global_shared_temp-1.patch, not from \nglobal_private_temp-1.patch\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 13 Aug 2019 12:20:07 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 13.08.2019 11:27, Craig Ringer wrote:\n>\n>\n> On Tue, 13 Aug 2019 at 16:19, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>\n>\n> On 13.08.2019 8:34, Craig Ringer wrote:\n>> On Tue, 13 Aug 2019 at 00:47, Pavel Stehule\n>> <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> wrote:\n>>\n>> But Postgres is not storing this information now anywhere\n>> else except statistic, isn't it?\n>>\n>>\n>> not only - critical numbers are reltuples, relpages from pg_class\n>>\n>>\n>> That's a very good point. relallvisible too. How's the global\n>> temp table impl handling that right now, since you won't be\n>> changing the pg_class row? AFAICS relpages doesn't need to be up\n>> to date (and reltuples certainly doesn't) so presumably you're\n>> just leaving them as zero?\n> As far as I understand relpages and reltuples are set only when\n> you perform \"analyze\" of the table.\n>\n>\n> Also autovacuum's autoanalyze.\n\nWhen does that happen?\nI have created a normal table, populated it with some data and then waited \nseveral hours, but pg_class was not updated for this table.\n\n\nI attach to this mail slightly refactored versions of these patches with \nfixes for the issues reported in your review.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 13 Aug 2019 16:50:17 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Tue, 13 Aug 2019 at 21:50, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n> As far as I understand relpages and reltuples are set only when you\n>> perform \"analyze\" of the table.\n>>\n>\n> Also autovacuum's autoanalyze.\n>\n>\n> When it happen?\n> I have created normal table, populated it with some data and then wait\n> several hours but pg_class was not updated for this table.\n>\n>\n\nheap_vacuum_rel() in src/backend/access/heap/vacuumlazy.c below\n\n * Update statistics in pg_class.\n\nwhich I'm pretty sure is common to explicit vacuum and autovacuum. I\nhaven't run up a test to verify 100% but most DBs would never have relpages\netc set if autovac didn't do it since most aren't explicitly VACUUMed at\nall.\n\nI thought it was done when autovac ran an analyze, but it looks like it's\nall autovac. Try setting very aggressive autovac thresholds and inserting +\ndeleting a bunch of tuples maybe.\n\nI attach to this mail slightly refactored versions of this patches with\n> fixes of issues reported in your review.\n>\n\nThanks.\n\nDid you have a chance to consider my questions too? I see a couple of\nthings where there's no patch change, which is fine, but I'd be interested\nin your thoughts on the question/issue in those cases.\n\n>\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 16 Aug 2019 14:25:03 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 16.08.2019 9:25, Craig Ringer wrote:\n> On Tue, 13 Aug 2019 at 21:50, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>> As far as I understand relpages and reltuples are set only\n>> when you perform \"analyze\" of the table.\n>>\n>>\n>> Also autovacuum's autoanalyze.\n>\n> When it happen?\n> I have created normal table, populated it with some data and then\n> wait several hours but pg_class was not updated for this table.\n>\n>\n>\n> heap_vacuum_rel() in src/backend/access/heap/vacuumlazy.c below\n>\n> * Update statistics in pg_class.\n>\n> which I'm pretty sure is common to explicit vacuum and autovacuum. I \n> haven't run up a test to verify 100% but most DBs would never have \n> relpages etc set if autovac didn't do it since most aren't explicitly \n> VACUUMed at all.\n\nSorry, I already understood it myself.\nBut to make vacuum process the table it is necessary to remove or update \nsome rows in it.\nIt seems to be yet another Postgres problem, which was noticed by \nDarafei Praliaskouski some time ago: append-only tables are never \nprocessed by autovacuum.\n\n\n>\n> I thought it was done when autovac ran an analyze, but it looks like \n> it's all autovac. Try setting very aggressive autovac thresholds and \n> inserting + deleting a bunch of tuples maybe.\n>\n> I attach to this mail slightly refactored versions of this patches\n> with fixes of issues reported in your review.\n>\n>\n> Thanks.\n>\n> Did you have a chance to consider my questions too? I see a couple of \n> things where there's no patch change, which is fine, but I'd be \n> interested in your thoughts on the question/issue in those cases.\n>\n>\nSorry, maybe I didn't notice some of your questions. I have a feeling that \nI have replied to all your comments/questions.\nRight now I reread all this thread and see two open issues:\n\n1. 
Statistics for global temporary tables (including the number of tuples, \npages and the all-visible flag).\nMy position is the following: while in most cases it should not be a \nproblem, because users rarely create indexes or run analyze on temporary \ntables,\nthere can be situations when differences in the data sets of global \ntemporary tables in different backends really are a problem.\nUnfortunately I can not propose a good solution for this problem. It is \ncertainly possible to create some private (per-backend) cache for this \nmetadata.\nBut it seems to require changes in many places.\n\n2. Your concerns about the performance penalty of global temp tables \naccessed through shared buffers compared with local temp tables accessed \nthrough local buffers.\nI think that this concern is no longer relevant because there is an \nimplementation of global temp tables using local buffers.\nBut my experiments don't show a significant difference in access speed \nbetween shared and local buffers. Since shared buffers are usually \nmuch larger than local buffers,\nthere are more chances to hold the whole temp relation in memory without \nspilling it to the disk. In this case access to a global temp table will \nbe much faster compared with access to\nlocal temp tables. But the fact is that right now in the most frequent \nscenario of temp table usage:\n\n SELECT ... 
FROM PersistentTable INTO TempTable WHERE ...;\n    SELECT * FROM TempTable;\n\nlocal temp table are more efficient than global temp table access \nthrough shared buffer.\nI think it is explained by caching and eviction policies.\nIn case of pulling all content of temp table in memory (pg_prewarm) \nglobal temp table with shared buffers becomes faster.\n\n\nI forget or do not notice some of your questions, would you be so kind \nas to repeat them?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 16 Aug 2019 10:30:20 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Fri, 16 Aug 2019 at 15:30, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n\n>\n> 1. Statistic for global temporary tables (including number of tuples,\n> pages and all visible flag).\n> My position is the following: while in most cases it should not be a\n> problem, because users rarely create indexes or do analyze for temporary\n> tables,\n> there can be situations when differences in data sets of global temporary\n> tables in different backends can really be a problem.\n> Unfortunately I can not propose good solution for this problem. It is\n> certainly possible to create some private (per-backend) cache for this\n> metadata.\n> But it seems to requires changes in many places.\n>\n\nYeah. I don't really like just sharing them but it's not that bad either.\n\n\n> 2. Your concerns about performance penalty of global temp tables accessed\n> through shared buffers comparing with local temp tables access through\n> local buffers.\n> I think that this concern is not actual any more because there is\n> implementation of global temp tables using local buffers.\n> But my experiments doesn't show significant difference in access speed of\n> shared and local buffers. As far as shared buffers are used to be much\n> larger than local buffers,\n> there are more chances to hold all temp relation in memory without\n> spilling it to the disk. In this case access to global temp table will be\n> much faster comparing with access to\n> local temp tables.\n>\n\nYou ignore the costs of evicting non-temporary data from shared_buffers,\ni.e. contention for space. 
Also increased chance of backends being forced\nto do direct write-out due to lack of s_b space for dirty buffers.\n\n> In case of pulling all content of temp table in memory (pg_prewarm)\n> global temp table with shared buffers becomes faster.\n\nWho would ever do that?\n\n> I forget or do not notice some of your questions, would you be so kind as\n> to repeat them?\n>\n\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 16 Aug 2019 16:32:52 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": ">\n>\n> On Fri, 16 Aug 2019 at 15:30, Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> wrote:\n>\n>\n>> I forget or do not notice some of your questions, would you be so kind as\n>> to repeat them?\n>>\n>\n>\n\nSent early by accident.\n\nRepeating questions:\n\n\nWhy do you need to do all this indirection with changing RelFileNode to\nRelFileNodeBackend in the bufmgr, changing BufferGetTag etc? Similarly,\nyour changes of RelFileNodeBackendIsTemp to RelFileNodeBackendIsLocalTemp .\nI'm guessing you did it the way you did instead to lay the groundwork for\ncross-backend sharing, but if so it should IMO be in your second patch that\nadds support for using shared_buffers for temp tables, not in the first\npatch that adds a minimal global temp tables implementation. Maybe my\nunderstanding of the existing temp table mechanics is just insufficient as\nI see RelFileNodeBackendIsTemp is already used in some aspects of existing\ntemp relation handling.\n\nDid you look into my suggestion of extending the relmapper so that global\ntemp tables would have a relfilenode of 0 like pg_class etc, and use a\nbackend-local map of oid-to-relfilenode mappings?\n\nSimilarly, TruncateSessionRelations probably shouldn't need to exist in\nthis patch in its current form; there's no shared_buffers use to clean and\nthe same file cleanup mechanism should handle both session-temp and\nlocal-temp relfilenodes.\n\nSequence initialization ignores sequence startval/firstval settings. 
Why?\n+ value[SEQ_COL_LASTVAL-1] = Int64GetDatumFast(1); /* start\nsequence with 1 */\n\n\n\nDoesn't this change the test outcome for RELPERSISTENCE_UNLOGGED?:\n- else if (newrelpersistence == RELPERSISTENCE_PERMANENT)\n+ else if (newrelpersistence != RELPERSISTENCE_TEMP)\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 16 Aug 2019 16:37:41 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 16.08.2019 11:37, Craig Ringer wrote:\n>\n>\n> On Fri, 16 Aug 2019 at 15:30, Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n> I forget or do not notice some of your questions, would you be\n> so kind as to repeat them?\n>\n>\n> Sent early by accident.\n>\n> Repeating questions:\n\nSorry, but I have answered them (my e-mail from 13.08)!\nLooks like you have looed at wrong version of the patch:\nglobal_shared_temp-1.patch instead of global_private_temp-1.patch which \nimplements global tables accessed through local buffers.\n\n>\n>\n> Why do you need to do all this indirection with changing RelFileNode \n> to RelFileNodeBackend in the bufmgr, changing BufferGetTag etc? \n> Similarly, your changes of RelFileNodeBackendIsTemp \n> to RelFileNodeBackendIsLocalTemp . I'm guessing you did it the way you \n> did instead to lay the groundwork for cross-backend sharing, but if so \n> it should IMO be in your second patch that adds support for using \n> shared_buffers for temp tables, not in the first patch that adds a \n> minimal global temp tables implementation. 
Maybe my understanding of \n> the existing temp table mechanics is just insufficient as I \n> see RelFileNodeBackendIsTemp is already used in some aspects of \n> existing temp relation handling.\n>\n\nSorry, are you really speaking about global_private_temp-1.patch?\nThis patch doesn't change the bufmgr file at all.\nMaybe you looked at another patch - global_shared_temp-1.patch,\nwhich is accessing shared tables through shared buffers and so has to \nchange the buffer tag to include the backend ID in it.\n\n\n> Did you look into my suggestion of extending the relmapper so that \n> global temp tables would have a relfilenode of 0 like pg_class etc, \n> and use a backend-local map of oid-to-relfilenode mappings?\n>\n> Similarly, TruncateSessionRelations probably shouldn't need to exist \n> in this patch in its current form; there's no shared_buffers use to \n> clean and the same file cleanup mechanism should handle both \n> session-temp and local-temp relfilenodes.\n\nIn global_private_temp-1.patch TruncateSessionRelations does nothing \nwith shared buffers, it just deletes relation files.\n\n\n\n>\n> Sequence initialization ignores sequence startval/firstval settings. 
Why?\n> + value[SEQ_COL_LASTVAL-1] = Int64GetDatumFast(1); /* \n> start sequence with 1 */\n>\n>\n>\n\nI am handling only the case of implicitly created sequences for \nSERIAL/BIGSERIAL columns.\nIs it possible to explicitly specify initial value and step for them?\nIf so, this place should definitely be rewritten.\n\n\n> Doesn't this change the test outcome for RELPERSISTENCE_UNLOGGED?:\n> - else if (newrelpersistence == RELPERSISTENCE_PERMANENT)\n> + else if (newrelpersistence != RELPERSISTENCE_TEMP)\n>\n\nThe RELPERSISTENCE_UNLOGGED case is handled in the previous IF branch.\n-\n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 16 Aug 2019 14:21:36 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 16.08.2019 11:32, Craig Ringer wrote:\n>\n> You ignore the costs of evicting non-temporary data from \n> shared_buffers, i.e. contention for space. Also increased chance of \n> backends being forced to do direct write-out due to lack of s_b space \n> for dirty buffers.\n> > In case of pulling all content of temp table in memory (pg_prewarm) \n> global temp table with shared buffers becomes faster.\n>\n> Who would ever do that?\n>\n>\n\nI decided to redo my experiments and now get different results which \nillustrates advantages of global temp tables with shared buffer.\nI performed the following test at my desktop with SSD and 16GB of RAM \nand Postgres with default configuration except shared-buffers increased \nto 1Gb.\n\n\npostgres=# create table big(pk bigint primary key, val bigint);\nCREATE TABLE\npostgres=# insert into big values \n(generate_series(1,100000000),generate_series(1,100000000)/100);\nINSERT 0 100000000\npostgres=# select * from buffer_usage limit 3;\n relname | buffered | buffer_percent | percent_of_relation\n----------------+------------+----------------+---------------------\n big | 678 MB | 66.2 | 16.1\n big_pkey | 344 MB | 33.6 | 16.1\n pg_am | 8192 bytes | 0.0 | 20.0\n\npostgres=# create temp table lt(key bigint, count bigint);\npostgres=# \\timing\nTiming is on.\npostgres=# insert into lt (select count(*),val as key from big group by \nval);\nINSERT 0 1000001\nTime: 43265.491 ms (00:43.265)\npostgres=# select sum(count) from lt;\n sum\n--------------\n 500000500000\n(1 row)\n\nTime: 94.194 ms\npostgres=# insert into gt (select count(*),val as key from big group by \nval);\nINSERT 0 1000001\nTime: 42952.671 ms (00:42.953)\npostgres=# select sum(count) from gt;\n sum\n--------------\n 500000500000\n(1 row)\n\nTime: 35.906 ms\npostgres=# select * from buffer_usage limit 3;\n relname | buffered | buffer_percent | percent_of_relation\n----------+----------+----------------+---------------------\n big | 679 MB | 66.3 | 16.1\n 
big_pkey | 300 MB   |           29.3 |                14.0\n gt       | 42 MB    |            4.1 |               100.0\n\n\nSo time of storing result in global temp table is slightly smaller than \ntime of storing it in local temp table and time of scanning global temp \ntable is twice smaller!\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 16 Aug 2019 14:41:36 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "I did more investigations of performance of global temp tables with \nshared buffers vs. vanilla (local) temp tables.\n\n1. Combination of persistent and temporary tables in the same query.\n\nPreparation:\ncreate table big(pk bigint primary key, val bigint);\ninsert into big values \n(generate_series(1,100000000),generate_series(1,100000000));\ncreate temp table lt(key bigint, count bigint);\ncreate global temp table gt(key bigint, count bigint);\n\nSize of table is about 6Gb, I run this test on desktop with 16GB of RAM \nand postgres with 1Gb shared buffers.\nI run two queries:\n\ninsert into T (select count(*),pk/P as key from big group by key);\nselect sum(count) from T;\n\nwhere P is (100,10,1) and T is name of temp table (lt or gt).\nThe table below contains times of both queries in msec:\n\nPercent of selected data\n\t1%\n\t10%\n\t100%\nLocal temp table\n\t44610\n90\n\t47920\n891\n\t63414\n21612\nGlobal temp table\n\t44669\n35\n\t47939\n298\n\t59159\n26015\n\n\nAs you can see, time of insertion in temporary table is almost the same\nand time of traversal of temporary table is about twice smaller for \nglobal temp table\nwhen it fits in RAM together with persistent table and slightly worser \nwhen it doesn't fit.\n\n\n\n2. 
Temporary table only access.\nThe same system, but Postgres is configured with shared_buffers=10GB, \nmax_parallel_workers = 4, max_parallel_workers_per_gather = 4\n\nLocal temp tables:\ncreate temp table local_temp(x1 bigint, x2 bigint, x3 bigint, x4 bigint, \nx5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\ninsert into local_temp values \n(generate_series(1,100000000),0,0,0,0,0,0,0,0);\nselect sum(x1) from local_temp;\n\nGlobal temp tables:\ncreate global temporary table global_temp(x1 bigint, x2 bigint, x3 \nbigint, x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\ninsert into global_temp values \n(generate_series(1,100000000),0,0,0,0,0,0,0,0);\nselect sum(x1) from global_temp;\n\nResults (msec):\n\n                     Insert    Select\nLocal temp table     37489     48322\nGlobal temp table    44358     3003\n\nSo insertion into the local temp table is slightly faster, but select \nis 16 times slower!\n\nConclusion:\nAssuming the temp table fits in memory, global temp tables \nwith shared buffers provide better performance than local temp tables.\nI didn't consider global temp tables with local buffers here because for \nthem the results should be similar to local temp tables.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 16 Aug 2019 17:12:02 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n> I did more investigations of performance of global temp tables with shared\n> buffers vs. vanilla (local) temp tables.\n>\n> 1. Combination of persistent and temporary tables in the same query.\n>\n> Preparation:\n> create table big(pk bigint primary key, val bigint);\n> insert into big values\n> (generate_series(1,100000000),generate_series(1,100000000));\n> create temp table lt(key bigint, count bigint);\n> create global temp table gt(key bigint, count bigint);\n>\n> Size of table is about 6Gb, I run this test on desktop with 16GB of RAM\n> and postgres with 1Gb shared buffers.\n> I run two queries:\n>\n> insert into T (select count(*),pk/P as key from big group by key);\n> select sum(count) from T;\n>\n> where P is (100,10,1) and T is name of temp table (lt or gt).\n> The table below contains times of both queries in msec:\n>\n> Percent of selected data\n> 1%\n> 10%\n> 100%\n> Local temp table\n> 44610\n> 90\n> 47920\n> 891\n> 63414\n> 21612\n> Global temp table\n> 44669\n> 35\n> 47939\n> 298\n> 59159\n> 26015\n>\n> As you can see, time of insertion in temporary table is almost the same\n> and time of traversal of temporary table is about twice smaller for global\n> temp table\n> when it fits in RAM together with persistent table and slightly worser\n> when it doesn't fit.\n>\n>\n>\n> 2. 
Temporary table only access.\n> The same system, but Postgres is configured with shared_buffers=10GB,\n> max_parallel_workers = 4, max_parallel_workers_per_gather = 4\n>\n> Local temp tables:\n> create temp table local_temp(x1 bigint, x2 bigint, x3 bigint, x4 bigint,\n> x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n> insert into local_temp values\n> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n> select sum(x1) from local_temp;\n>\n> Global temp tables:\n> create global temporary table global_temp(x1 bigint, x2 bigint, x3 bigint,\n> x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n> insert into global_temp values\n> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n> select sum(x1) from global_temp;\n>\n> Results (msec):\n>\n> Insert\n> Select\n> Local temp table 37489\n> 48322\n> Global temp table 44358\n> 3003\n>\n> So insertion in local temp table is performed slightly faster but select\n> is 16 times slower!\n>\n> Conclusion:\n> In the assumption then temp table fits in memory, global temp tables with\n> shared buffers provides better performance than local temp table.\n> I didn't consider here global temp tables with local buffers because for\n> them results should be similar with local temp tables.\n>\n\nProbably there is not a reason why shared buffers should be slower than\nlocal buffers when system is under low load.\n\naccess to shared memory is protected by spin locks (are cheap for few\nprocesses), so tests in one or few process are not too important (or it is\njust one side of space)\n\nanother topic can be performance on MS Sys - there are stories about not\nperfect performance of shared memory there.\n\nRegards\n\nPavel\n\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Fri, 16 Aug 2019 19:17:45 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 16.08.2019 20:17, Pavel Stehule wrote:\n>\n>\n> pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n> I did more investigations of performance of global temp tables\n> with shared buffers vs. vanilla (local) temp tables.\n>\n> 1. Combination of persistent and temporary tables in the same query.\n>\n> Preparation:\n> create table big(pk bigint primary key, val bigint);\n> insert into big values\n> (generate_series(1,100000000),generate_series(1,100000000));\n> create temp table lt(key bigint, count bigint);\n> create global temp table gt(key bigint, count bigint);\n>\n> Size of the table is about 6Gb; I ran this test on a desktop with 16GB\n> of RAM and postgres with 1Gb shared buffers.\n> I ran two queries:\n>\n> insert into T (select count(*),pk/P as key from big group by key);\n> select sum(count) from T;\n>\n> where P is (100,10,1) and T is the name of the temp table (lt or gt).\n> The table below contains times of both queries in msec\n> (insert time / select time):\n>\n> Percent of selected data    1%           10%           100%\n> Local temp table            44610 / 90   47920 / 891   63414 / 21612\n> Global temp table           44669 / 35   47939 / 298   59159 / 26015\n>\n> As you can see, insertion time into the temporary table is almost the\n> same, and traversal of the temporary table is about twice as fast for\n> the global temp table when it fits in RAM together with the persistent\n> table, and slightly worse when it doesn't fit.\n>\n>\n>\n> 2. Temporary table only access.\n> The same system, but Postgres is configured with\n> shared_buffers=10GB, max_parallel_workers = 4,\n> max_parallel_workers_per_gather = 4\n>\n> Local temp tables:\n> create temp table local_temp(x1 bigint, x2 bigint, x3 bigint, x4\n> bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n> insert into local_temp values\n> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n> select sum(x1) from local_temp;\n>\n> Global temp tables:\n> create global temporary table global_temp(x1 bigint, x2 bigint, x3\n> bigint, x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9\n> bigint);\n> insert into global_temp values\n> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n> select sum(x1) from global_temp;\n>\n> Results (msec):\n>\n>                      Insert   Select\n> Local temp table      37489    48322\n> Global temp table     44358     3003\n>\n> So insertion into the local temp table is slightly faster, but\n> select is 16 times slower!\n>\n> Conclusion:\n> Assuming the temp table fits in memory, global temp tables\n> with shared buffers provide better performance than local temp\n> tables.\n> I didn't consider global temp tables with local buffers here\n> because their results should be similar to local temp tables.\n>\n>\n> There is probably no reason why shared buffers should be slower \n> than local buffers when the system is under low load.\n>\n> Access to shared memory is protected by spinlocks (which are cheap\n> for a few processes), so tests with one or a few processes are not\n> too important (they show just one side of the picture).\n>\n> Another topic can be performance on MS systems - there are stories\n> about imperfect shared memory performance there.\n>\n> Regards\n>\n> Pavel\n>\nOne more test, which is used to simulate access to temp tables under \nhigh load.\nI am using \"upsert\" into a temp table in multiple connections.\n\ncreate global temp table gtemp (x integer primary key, y bigint);\n\nupsert.sql:\ninsert into gtemp values (random() * 1000000, 0) on conflict(x) do\nupdate set y=gtemp.y+1;\n\npgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql postgres\n\n\nI failed to find a standard way in pgbench to perform per-session \ninitialization to create the local temp table,\nso I just inserted this code into pgbench:\n\ndiff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\nindex 570cf33..af6a431 100644\n--- a/src/bin/pgbench/pgbench.c\n+++ b/src/bin/pgbench/pgbench.c\n@@ -5994,6 +5994,7 @@ threadRun(void *arg)\n {\n     if ((state[i].con = doConnect()) == NULL)\n         goto done;\n+    executeStatement(state[i].con, \"create temp table ltemp(x integer primary key, y bigint)\");\n }\n }\n\n\nResults are the following:\nGlobal temp table: 117526 TPS\nLocal temp table: 107802 TPS\n\n\nSo even for this workload global temp tables with shared buffers are a \nlittle bit faster.\nI will be pleased if you can propose some other testing scenario.",
"msg_date": "Sun, 18 Aug 2019 10:01:58 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "ne 18. 8. 2019 v 9:02 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 16.08.2019 20:17, Pavel Stehule wrote:\n>\n>\n>\n> pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>> I did more investigations of performance of global temp tables with\n>> shared buffers vs. vanilla (local) temp tables.\n>>\n>> 1. Combination of persistent and temporary tables in the same query.\n>>\n>> Preparation:\n>> create table big(pk bigint primary key, val bigint);\n>> insert into big values\n>> (generate_series(1,100000000),generate_series(1,100000000));\n>> create temp table lt(key bigint, count bigint);\n>> create global temp table gt(key bigint, count bigint);\n>>\n>> Size of the table is about 6Gb; I ran this test on a desktop with 16GB\n>> of RAM and postgres with 1Gb shared buffers.\n>> I ran two queries:\n>>\n>> insert into T (select count(*),pk/P as key from big group by key);\n>> select sum(count) from T;\n>>\n>> where P is (100,10,1) and T is the name of the temp table (lt or gt).\n>> The table below contains times of both queries in msec\n>> (insert time / select time):\n>>\n>> Percent of selected data    1%           10%           100%\n>> Local temp table            44610 / 90   47920 / 891   63414 / 21612\n>> Global temp table           44669 / 35   47939 / 298   59159 / 26015\n>>\n>> As you can see, insertion time into the temporary table is almost the\n>> same, and traversal of the temporary table is about twice as fast for\n>> the global temp table when it fits in RAM together with the persistent\n>> table, and slightly worse when it doesn't fit.\n>>\n>>\n>>\n>> 2. Temporary table only access.\n>> The same system, but Postgres is configured with shared_buffers=10GB,\n>> max_parallel_workers = 4, max_parallel_workers_per_gather = 4\n>>\n>> Local temp tables:\n>> create temp table local_temp(x1 bigint, x2 bigint, x3 bigint, x4 bigint,\n>> x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n>> insert into local_temp values\n>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>> select sum(x1) from local_temp;\n>>\n>> Global temp tables:\n>> create global temporary table global_temp(x1 bigint, x2 bigint, x3\n>> bigint, x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n>> insert into global_temp values\n>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>> select sum(x1) from global_temp;\n>>\n>> Results (msec):\n>>\n>>                      Insert   Select\n>> Local temp table      37489    48322\n>> Global temp table     44358     3003\n>>\n>> So insertion into the local temp table is slightly faster, but select\n>> is 16 times slower!\n>>\n>> Conclusion:\n>> Assuming the temp table fits in memory, global temp tables with\n>> shared buffers provide better performance than local temp tables.\n>> I didn't consider global temp tables with local buffers here because\n>> their results should be similar to local temp tables.\n>>\n>\n> There is probably no reason why shared buffers should be slower than\n> local buffers when the system is under low load.\n>\n> Access to shared memory is protected by spinlocks (which are cheap for a\n> few processes), so tests with one or a few processes are not too\n> important (they show just one side of the picture).\n>\n> Another topic can be performance on MS systems - there are stories about\n> imperfect shared memory performance there.\n>\n> Regards\n>\n> Pavel\n>\n> One more test, which is used to simulate access to temp tables under high\n> load.\n> I am using \"upsert\" into a temp table in multiple connections.\n>\n> create global temp table gtemp (x integer primary key, y bigint);\n>\n> upsert.sql:\n> insert into gtemp values (random() * 1000000, 0) on conflict(x) do update\n> set y=gtemp.y+1;\n>\n> pgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql postgres\n>\n>\n> I failed to find a standard way in pgbench to perform per-session\n> initialization to create the local temp table,\n> so I just inserted this code into pgbench:\n>\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index 570cf33..af6a431 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> @@ -5994,6 +5994,7 @@ threadRun(void *arg)\n> {\n>     if ((state[i].con = doConnect()) == NULL)\n>         goto done;\n> +    executeStatement(state[i].con, \"create temp table\n> ltemp(x integer primary key, y bigint)\");\n> }\n> }\n>\n>\n> Results are the following:\n> Global temp table: 117526 TPS\n> Local temp table: 107802 TPS\n>\n>\n> So even for this workload global temp tables with shared buffers are a\n> little bit faster.\n> I will be pleased if you can propose some other testing scenario.\n>\n\nPlease try to increase the number of connections.\n\nRegards\n\nPavel",
"msg_date": "Sun, 18 Aug 2019 10:28:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 18.08.2019 11:28, Pavel Stehule wrote:\n>\n>\n> ne 18. 8. 2019 v 9:02 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 16.08.2019 20:17, Pavel Stehule wrote:\n>>\n>>\n>> pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n>> napsal:\n>>\n>> I did more investigations of performance of global temp\n>> tables with shared buffers vs. vanilla (local) temp tables.\n>>\n>> 1. Combination of persistent and temporary tables in the\n>> same query.\n>>\n>> Preparation:\n>> create table big(pk bigint primary key, val bigint);\n>> insert into big values\n>> (generate_series(1,100000000),generate_series(1,100000000));\n>> create temp table lt(key bigint, count bigint);\n>> create global temp table gt(key bigint, count bigint);\n>>\n>> Size of the table is about 6Gb; I ran this test on a desktop\n>> with 16GB of RAM and postgres with 1Gb shared buffers.\n>> I ran two queries:\n>>\n>> insert into T (select count(*),pk/P as key from big group by\n>> key);\n>> select sum(count) from T;\n>>\n>> where P is (100,10,1) and T is the name of the temp table (lt\n>> or gt).\n>> The table below contains times of both queries in msec\n>> (insert time / select time):\n>>\n>> Percent of selected data    1%           10%           100%\n>> Local temp table            44610 / 90   47920 / 891   63414 / 21612\n>> Global temp table           44669 / 35   47939 / 298   59159 / 26015\n>>\n>> As you can see, insertion time into the temporary table is\n>> almost the same, and traversal of the temporary table is about\n>> twice as fast for the global temp table when it fits in RAM\n>> together with the persistent table, and slightly worse when it\n>> doesn't fit.\n>>\n>>\n>>\n>> 2. Temporary table only access.\n>> The same system, but Postgres is configured with\n>> shared_buffers=10GB, max_parallel_workers = 4,\n>> max_parallel_workers_per_gather = 4\n>>\n>> Local temp tables:\n>> create temp table local_temp(x1 bigint, x2 bigint, x3 bigint,\n>> x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9\n>> bigint);\n>> insert into local_temp values\n>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>> select sum(x1) from local_temp;\n>>\n>> Global temp tables:\n>> create global temporary table global_temp(x1 bigint, x2\n>> bigint, x3 bigint, x4 bigint, x5 bigint, x6 bigint, x7\n>> bigint, x8 bigint, x9 bigint);\n>> insert into global_temp values\n>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>> select sum(x1) from global_temp;\n>>\n>> Results (msec):\n>>\n>>                      Insert   Select\n>> Local temp table      37489    48322\n>> Global temp table     44358     3003\n>>\n>> So insertion into the local temp table is slightly faster,\n>> but select is 16 times slower!\n>>\n>> Conclusion:\n>> Assuming the temp table fits in memory, global temp tables\n>> with shared buffers provide better performance than local\n>> temp tables.\n>> I didn't consider global temp tables with local buffers here\n>> because their results should be similar to local temp tables.\n>>\n>>\n>> There is probably no reason why shared buffers should be\n>> slower than local buffers when the system is under low load.\n>>\n>> Access to shared memory is protected by spinlocks (which are\n>> cheap for a few processes), so tests with one or a few\n>> processes are not too important (they show just one side of\n>> the picture).\n>>\n>> Another topic can be performance on MS systems - there are\n>> stories about imperfect shared memory performance there.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n> One more test, which is used to simulate access to temp tables\n> under high load.\n> I am using \"upsert\" into a temp table in multiple connections.\n>\n> create global temp table gtemp (x integer primary key, y bigint);\n>\n> upsert.sql:\n> insert into gtemp values (random() * 1000000, 0) on conflict(x) do\n> update set y=gtemp.y+1;\n>\n> pgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql postgres\n>\n>\n> I failed to find a standard way in pgbench to perform\n> per-session initialization to create the local temp table,\n> so I just inserted this code into pgbench:\n>\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index 570cf33..af6a431 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> @@ -5994,6 +5994,7 @@ threadRun(void *arg)\n> {\n>     if ((state[i].con = doConnect()) == NULL)\n>         goto done;\n> +    executeStatement(state[i].con, \"create temp\n> table ltemp(x integer primary key, y bigint)\");\n> }\n> }\n>\n>\n> Results are the following:\n> Global temp table: 117526 TPS\n> Local temp table: 107802 TPS\n>\n>\n> So even for this workload global temp tables with shared buffers\n> are a little bit faster.\n> I will be pleased if you can propose some other testing scenario.\n>\n>\n> Please try to increase the number of connections.\n\nWith 20 connections and 4 pgbench threads results are similar: 119k TPS \nfor global temp tables and 115k TPS for local temp tables.\n\nI have tried yet another scenario: read-only access to temp tables:\n\n\\set id random(1,10000000)\nselect sum(y) from ltemp where x=:id;\n\nTables are created and initialized at pgbench session startup:\n\nknizhnik@knizhnik:~/postgresql$ git diff\ndiff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\nindex 570cf33..95295b0 100644\n--- a/src/bin/pgbench/pgbench.c\n+++ b/src/bin/pgbench/pgbench.c\n@@ -5994,6 +5994,8 @@ threadRun(void *arg)\n {\n     if ((state[i].con = doConnect()) == NULL)\n         goto done;\n+    executeStatement(state[i].con, \"create temp table ltemp(x integer primary key, y bigint)\");\n+    executeStatement(state[i].con, \"insert into ltemp values (generate_series(1,1000000), generate_series(1,1000000))\");\n }\n }\n\n\nResults for 10 connections with 10 million inserted records per table \nand 100 connections with 1 million inserted records per table:\n\n#connections:                     10     100\nlocal temp                        68k    90k\nglobal temp, shared_buffers=1G    63k    61k\nglobal temp, shared_buffers=10G   150k   150k\n\n\nSo temporary tables with local buffers are slightly faster when data \ndoesn't fit in shared buffers, but significantly slower when it fits.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Aug 2019 11:51:47 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 19.08.2019 11:51, Konstantin Knizhnik wrote:\n>\n>\n> On 18.08.2019 11:28, Pavel Stehule wrote:\n>>\n>>\n>> ne 18. 8. 2019 v 9:02 odesílatel Konstantin Knizhnik \n>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>>\n>>\n>>\n>> On 16.08.2019 20:17, Pavel Stehule wrote:\n>>>\n>>>\n>>> pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik\n>>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n>>> napsal:\n>>>\n>>> I did more investigations of performance of global temp\n>>> tables with shared buffers vs. vanilla (local) temp tables.\n>>>\n>>> 1. Combination of persistent and temporary tables in the\n>>> same query.\n>>>\n>>> Preparation:\n>>> create table big(pk bigint primary key, val bigint);\n>>> insert into big values\n>>> (generate_series(1,100000000),generate_series(1,100000000));\n>>> create temp table lt(key bigint, count bigint);\n>>> create global temp table gt(key bigint, count bigint);\n>>>\n>>> Size of table is about 6Gb, I run this test on desktop with\n>>> 16GB of RAM and postgres with 1Gb shared buffers.\n>>> I run two queries:\n>>>\n>>> insert into T (select count(*),pk/P as key from big group by\n>>> key);\n>>> select sum(count) from T;\n>>>\n>>> where P is (100,10,1) and T is name of temp table (lt or gt).\n>>> The table below contains times of both queries in msec:\n>>>\n>>> Percent of selected data\n>>> \t1%\n>>> \t10%\n>>> \t100%\n>>> Local temp table\n>>> \t44610\n>>> 90\n>>> \t47920\n>>> 891\n>>> \t63414\n>>> 21612\n>>> Global temp table\n>>> \t44669\n>>> 35\n>>> \t47939\n>>> 298\n>>> \t59159\n>>> 26015\n>>>\n>>>\n>>> As you can see, time of insertion in temporary table is\n>>> almost the same\n>>> and time of traversal of temporary table is about twice\n>>> smaller for global temp table\n>>> when it fits in RAM together with persistent table and\n>>> slightly worser when it doesn't fit.\n>>>\n>>>\n>>>\n>>> 2. 
Temporary table only access.\n>>> The same system, but Postgres is configured with\n>>> shared_buffers=10GB, max_parallel_workers = 4,\n>>> max_parallel_workers_per_gather = 4\n>>>\n>>> Local temp tables:\n>>> create temp table local_temp(x1 bigint, x2 bigint, x3\n>>> bigint, x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8\n>>> bigint, x9 bigint);\n>>> insert into local_temp values\n>>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>>> select sum(x1) from local_temp;\n>>>\n>>> Global temp tables:\n>>> create global temporary table global_temp(x1 bigint, x2\n>>> bigint, x3 bigint, x4 bigint, x5 bigint, x6 bigint, x7\n>>> bigint, x8 bigint, x9 bigint);\n>>> insert into global_temp values\n>>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>>> select sum(x1) from global_temp;\n>>>\n>>> Results (msec):\n>>>\n>>> \tInsert\n>>> \tSelect\n>>> Local temp table \t37489\n>>> \t48322\n>>> Global temp table \t44358\n>>> \t3003\n>>>\n>>>\n>>> So insertion in local temp table is performed slightly\n>>> faster but select is 16 times slower!\n>>>\n>>> Conclusion:\n>>> In the assumption then temp table fits in memory, global\n>>> temp tables with shared buffers provides better performance\n>>> than local temp table.\n>>> I didn't consider here global temp tables with local buffers\n>>> because for them results should be similar with local temp\n>>> tables.\n>>>\n>>>\n>>> Probably there is not a reason why shared buffers should be\n>>> slower than local buffers when system is under low load.\n>>>\n>>> access to shared memory is protected by spin locks (are cheap\n>>> for few processes), so tests in one or few process are not too\n>>> important (or it is just one side of space)\n>>>\n>>> another topic can be performance on MS Sys - there are stories\n>>> about not perfect performance of shared memory there.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>> One more test which is used to simulate access to temp tables\n>> under high load.\n>> I am using \"upsert\" into temp table in 
multiple connections.\n>>\n>> create global temp table gtemp (x integer primary key, y bigint);\n>>\n>> upsert.sql:\n>> insert into gtemp values (random() * 1000000, 0) on conflict(x)\n>> do update set y=gtemp.y+1;\n>>\n>> pgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql postgres\n>>\n>>\n>> I failed to find some standard way in pgbech to perform\n>> per-session initialization to create local temp table,\n>> so I just insert this code in pgbench code:\n>>\n>> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n>> index 570cf33..af6a431 100644\n>> --- a/src/bin/pgbench/pgbench.c\n>> +++ b/src/bin/pgbench/pgbench.c\n>> @@ -5994,6 +5994,7 @@ threadRun(void *arg)\n>> {\n>> if ((state[i].con = doConnect()) == NULL)\n>> goto done;\n>> + executeStatement(state[i].con, \"create\n>> temp table ltemp(x integer primary key, y bigint)\");\n>> }\n>> }\n>>\n>>\n>> Results are the following:\n>> Global temp table: 117526 TPS\n>> Local temp table: 107802 TPS\n>>\n>>\n>> So even for this workload global temp table with shared buffers\n>> are a little bit faster.\n>> I will be pleased if you can propose some other testing scenario.\n>>\n>>\n>> please, try to increase number of connections.\n>\n> With 20 connections and 4 pgbench threads results are similar: 119k \n> TPS for global temp tables and 115k TPS for local temp tables.\n>\n> I have tried yet another scenario: read-only access to temp tables:\n>\n> \\set id random(1,10000000)\n> select sum(y) from ltemp where x=:id;\n>\n> Tables are created and initialized in pgbench session startup:\n>\n> knizhnik@knizhnik:~/postgresql$ git diff\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index 570cf33..95295b0 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> @@ -5994,6 +5994,8 @@ threadRun(void *arg)\n> {\n> if ((state[i].con = doConnect()) == NULL)\n> goto done;\n> + executeStatement(state[i].con, \"create temp \n> table ltemp(x integer primary key, y 
bigint)\");\n> + executeStatement(state[i].con, \"insert into \n> ltemp values (generate_series(1,1000000), generate_series(1,1000000))\");\n> }\n> }\n>\n>\n> Results for 10 connections with 10 million inserted records per table \n> and 100 connections with 1 million inserted record per table :\n>\n> #connections:\n> \t10\n> \t100\n> local temp\n> \t68k\n> \t90k\n> global temp, shared_buffers=1G\n> \t63k\n> \t61k\n> global temp, shared_buffers=10G \t150k\n> \t150k\n>\n>\n>\n> So temporary tables with local buffers are slightly faster when data \n> doesn't fit in shared buffers, but significantly slower when it fits.\n>\n>\n\nAll previously reported results were produced at my desktop.\nI also run this read-only test on huge IBM server (POWER9, 2 NUMA nodes, \n176 CPU, 1Tb RAM).\n\nHere the difference between local and global tables is not so large:\n\nLocal temp: 739k TPS\nGlobal temp: 924k TPS\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 19.08.2019 11:51, Konstantin\n Knizhnik wrote:\n\n\n\n\n\nOn 18.08.2019 11:28, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\n\nne 18. 8. 2019 v 9:02\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn\n 16.08.2019 20:17, Pavel Stehule wrote:\n\n\n\n\n\n\n\npá 16. 8. 2019\n v 16:12 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n I did more\n investigations of performance of global temp\n tables with shared buffers vs. vanilla (local)\n temp tables.\n\n 1. 
Combination of persistent and temporary\n tables in the same query.\n\n Preparation:\n create table big(pk bigint primary key, val\n bigint);\n insert into big values\n (generate_series(1,100000000),generate_series(1,100000000));\n create temp table lt(key bigint, count\n bigint);\n create global temp table gt(key bigint, count\n bigint);\n\n Size of table is about 6Gb, I run this test on\n desktop with 16GB of RAM and postgres with 1Gb\n shared buffers.\n I run two queries:\n\n insert into T (select count(*),pk/P as key\n from big group by key);\n select sum(count) from T;\n\n where P is (100,10,1) and T is name of temp\n table (lt or gt).\n The table below contains times of both queries\n in msec:\n\n\n\n\nPercent of selected\n data\n\n1%\n\n10%\n\n100%\n\n\n\nLocal temp table\n\n44610\n 90\n\n47920\n 891\n\n63414\n 21612\n\n\n\nGlobal temp table\n\n44669\n 35\n\n47939\n 298\n\n59159\n 26015\n\n\n\n\n\n As you can see, time of insertion in temporary\n table is almost the same\n and time of traversal of temporary table is\n about twice smaller for global temp table\n when it fits in RAM together with persistent\n table and slightly worser when it doesn't fit.\n\n\n\n 2. 
Temporary table only access.\n The same system, but Postgres is configured\n with shared_buffers=10GB, max_parallel_workers\n = 4, max_parallel_workers_per_gather = 4\n\n Local temp tables:\n create temp table local_temp(x1 bigint, x2\n bigint, x3 bigint, x4 bigint, x5 bigint, x6\n bigint, x7 bigint, x8 bigint, x9 bigint);\n insert into local_temp values\n (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n select sum(x1) from local_temp;\n\n Global temp tables:\n create global temporary table global_temp(x1\n bigint, x2 bigint, x3 bigint, x4 bigint, x5\n bigint, x6 bigint, x7 bigint, x8 bigint, x9\n bigint);\n insert into global_temp values\n (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n select sum(x1) from global_temp;\n\n Results (msec):\n\n\n\n\n\nInsert\n\nSelect\n\n\n\nLocal temp table\n37489\n\n48322\n\n\nGlobal temp table\n44358\n\n3003\n\n\n\n\n\n So insertion in local temp table is performed\n slightly faster but select is 16 times slower!\n\n Conclusion:\n In the assumption then temp table fits in\n memory, global temp tables with shared buffers\n provides better performance than local temp\n table.\n I didn't consider here global temp tables with\n local buffers because for them results should\n be similar with local temp tables.\n\n\n\n\nProbably there is not a reason why shared\n buffers should be slower than local buffers when\n system is under low load. 
\n\n\n\naccess to shared memory is protected by spin\n locks (are cheap for few processes), so tests in\n one or few process are not too important (or it\n is just one side of space)\n\n\n\nanother topic can be performance on MS Sys -\n there are stories about not perfect performance\n of shared memory there.\n\n\nRegards\n\n\nPavel\n\n\n\n\n\n\n One more test which is used to simulate access to temp\n tables under high load.\n I am using \"upsert\" into temp table in multiple\n connections.\n\n create global temp table gtemp (x integer primary key, y\n bigint);\n\n upsert.sql:\n insert into gtemp values (random() * 1000000, 0) on\n conflict(x) do update set y=gtemp.y+1;\n\n pgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql\n postgres\n\n\n I failed to find some standard way in pgbech to perform\n per-session initialization to create local temp table,\n so I just insert this code in pgbench code:\n\n diff --git a/src/bin/pgbench/pgbench.c\n b/src/bin/pgbench/pgbench.c\n index 570cf33..af6a431 100644\n --- a/src/bin/pgbench/pgbench.c\n +++ b/src/bin/pgbench/pgbench.c\n @@ -5994,6 +5994,7 @@ threadRun(void *arg)\n {\n if ((state[i].con = doConnect())\n == NULL)\n goto done;\n + executeStatement(state[i].con,\n \"create temp table ltemp(x integer primary key, y\n bigint)\");\n }\n }\n \n\n Results are the following:\n Global temp table: 117526 TPS\n Local temp table: 107802 TPS\n\n\n So even for this workload global temp table with shared\n buffers are a little bit faster.\n I will be pleased if you can propose some other testing\n scenario.\n\n\n\n\nplease, try to increase number of connections.\n\n\n\n\n With 20 connections and 4 pgbench threads results are similar:\n 119k TPS for global temp tables and 115k TPS for local temp\n tables.\n\n I have tried yet another scenario: read-only access to temp\n tables:\n\n \\set id random(1,10000000)\n select sum(y) from ltemp where x=:id;\n\n Tables are created and initialized in pgbench session startup:\n\n 
knizhnik@knizhnik:~/postgresql$ git diff\n diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n index 570cf33..95295b0 100644\n --- a/src/bin/pgbench/pgbench.c\n +++ b/src/bin/pgbench/pgbench.c\n @@ -5994,6 +5994,8 @@ threadRun(void *arg)\n {\n if ((state[i].con = doConnect()) == NULL)\n goto done;\n + executeStatement(state[i].con, \"create\n temp table ltemp(x integer primary key, y bigint)\");\n + executeStatement(state[i].con, \"insert\n into ltemp values (generate_series(1,1000000),\n generate_series(1,1000000))\");\n }\n }\n\n\n Results for 10 connections with 10 million inserted records per\n table and 100 connections with 1 million inserted record per table\n :\n\n\n\n\n#connections:\n\n10 \n\n100\n\n\n\nlocal temp\n\n68k\n\n90k\n\n\n\nglobal temp, shared_buffers=1G\n\n63k\n\n61k\n\n\n\nglobal temp, shared_buffers=10G\n150k\n\n150k\n\n\n\n\n\n\n So temporary tables with local buffers are slightly faster when\n data doesn't fit in shared buffers, but significantly slower when\n it fits.\n\n\n\n\n All previously reported results were produced at my desktop.\n I also run this read-only test on huge IBM server (POWER9, 2 NUMA\n nodes, 176 CPU, 1Tb RAM).\n\n Here the difference between local and global tables is not so large:\n\n Local temp: 739k TPS\n Global temp: 924k TPS\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Aug 2019 14:16:56 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "po 19. 8. 2019 v 13:16 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.08.2019 11:51, Konstantin Knizhnik wrote:\n>\n>\n>\n> On 18.08.2019 11:28, Pavel Stehule wrote:\n>\n>\n>\n> ne 18. 8. 2019 v 9:02 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 16.08.2019 20:17, Pavel Stehule wrote:\n>>\n>>\n>>\n>> pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> napsal:\n>>\n>>> I did more investigations of performance of global temp tables with\n>>> shared buffers vs. vanilla (local) temp tables.\n>>>\n>>> 1. Combination of persistent and temporary tables in the same query.\n>>>\n>>> Preparation:\n>>> create table big(pk bigint primary key, val bigint);\n>>> insert into big values\n>>> (generate_series(1,100000000),generate_series(1,100000000));\n>>> create temp table lt(key bigint, count bigint);\n>>> create global temp table gt(key bigint, count bigint);\n>>>\n>>> Size of table is about 6Gb, I run this test on desktop with 16GB of RAM\n>>> and postgres with 1Gb shared buffers.\n>>> I run two queries:\n>>>\n>>> insert into T (select count(*),pk/P as key from big group by key);\n>>> select sum(count) from T;\n>>>\n>>> where P is (100,10,1) and T is name of temp table (lt or gt).\n>>> The table below contains times of both queries in msec:\n>>>\n>>> Percent of selected data\n>>> 1%\n>>> 10%\n>>> 100%\n>>> Local temp table\n>>> 44610\n>>> 90\n>>> 47920\n>>> 891\n>>> 63414\n>>> 21612\n>>> Global temp table\n>>> 44669\n>>> 35\n>>> 47939\n>>> 298\n>>> 59159\n>>> 26015\n>>>\n>>> As you can see, time of insertion in temporary table is almost the same\n>>> and time of traversal of temporary table is about twice smaller for\n>>> global temp table\n>>> when it fits in RAM together with persistent table and slightly worser\n>>> when it doesn't fit.\n>>>\n>>>\n>>>\n>>> 2. 
Temporary table only access.\n>>> The same system, but Postgres is configured with shared_buffers=10GB,\n>>> max_parallel_workers = 4, max_parallel_workers_per_gather = 4\n>>>\n>>> Local temp tables:\n>>> create temp table local_temp(x1 bigint, x2 bigint, x3 bigint, x4 bigint,\n>>> x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n>>> insert into local_temp values\n>>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>>> select sum(x1) from local_temp;\n>>>\n>>> Global temp tables:\n>>> create global temporary table global_temp(x1 bigint, x2 bigint, x3\n>>> bigint, x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8 bigint, x9 bigint);\n>>> insert into global_temp values\n>>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>>> select sum(x1) from global_temp;\n>>>\n>>> Results (msec):\n>>>\n>>> Insert\n>>> Select\n>>> Local temp table 37489\n>>> 48322\n>>> Global temp table 44358\n>>> 3003\n>>>\n>>> So insertion in local temp table is performed slightly faster but select\n>>> is 16 times slower!\n>>>\n>>> Conclusion:\n>>> In the assumption then temp table fits in memory, global temp tables\n>>> with shared buffers provides better performance than local temp table.\n>>> I didn't consider here global temp tables with local buffers because for\n>>> them results should be similar with local temp tables.\n>>>\n>>\n>> Probably there is not a reason why shared buffers should be slower than\n>> local buffers when system is under low load.\n>>\n>> access to shared memory is protected by spin locks (are cheap for few\n>> processes), so tests in one or few process are not too important (or it is\n>> just one side of space)\n>>\n>> another topic can be performance on MS Sys - there are stories about not\n>> perfect performance of shared memory there.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>> One more test which is used to simulate access to temp tables under high\n>> load.\n>> I am using \"upsert\" into temp table in multiple connections.\n>>\n>> create global temp table gtemp 
(x integer primary key, y bigint);\n>>\n>> upsert.sql:\n>> insert into gtemp values (random() * 1000000, 0) on conflict(x) do update\n>> set y=gtemp.y+1;\n>>\n>> pgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql postgres\n>>\n>>\n>> I failed to find some standard way in pgbech to perform per-session\n>> initialization to create local temp table,\n>> so I just insert this code in pgbench code:\n>>\n>> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n>> index 570cf33..af6a431 100644\n>> --- a/src/bin/pgbench/pgbench.c\n>> +++ b/src/bin/pgbench/pgbench.c\n>> @@ -5994,6 +5994,7 @@ threadRun(void *arg)\n>> {\n>> if ((state[i].con = doConnect()) == NULL)\n>> goto done;\n>> + executeStatement(state[i].con, \"create temp table\n>> ltemp(x integer primary key, y bigint)\");\n>> }\n>> }\n>>\n>>\n>> Results are the following:\n>> Global temp table: 117526 TPS\n>> Local temp table: 107802 TPS\n>>\n>>\n>> So even for this workload global temp table with shared buffers are a\n>> little bit faster.\n>> I will be pleased if you can propose some other testing scenario.\n>>\n>\n> please, try to increase number of connections.\n>\n>\n> With 20 connections and 4 pgbench threads results are similar: 119k TPS\n> for global temp tables and 115k TPS for local temp tables.\n>\n> I have tried yet another scenario: read-only access to temp tables:\n>\n> \\set id random(1,10000000)\n> select sum(y) from ltemp where x=:id;\n>\n> Tables are created and initialized in pgbench session startup:\n>\n> knizhnik@knizhnik:~/postgresql$ git diff\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index 570cf33..95295b0 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> @@ -5994,6 +5994,8 @@ threadRun(void *arg)\n> {\n> if ((state[i].con = doConnect()) == NULL)\n> goto done;\n> + executeStatement(state[i].con, \"create temp table\n> ltemp(x integer primary key, y bigint)\");\n> + executeStatement(state[i].con, \"insert into 
ltemp\n> values (generate_series(1,1000000), generate_series(1,1000000))\");\n>                 }\n>         }\n>\n>\n> Results for 10 connections with 10 million inserted records per table and\n> 100 connections with 1 million inserted record per table :\n>\n> #connections:\n> 10\n> 100\n> local temp\n> 68k\n> 90k\n> global temp, shared_buffers=1G\n> 63k\n> 61k\n> global temp, shared_buffers=10G 150k\n> 150k\n>\n>\n> So temporary tables with local buffers are slightly faster when data\n> doesn't fit in shared buffers, but significantly slower when it fits.\n>\n>\n>\n> All previously reported results were produced at my desktop.\n> I also run this read-only test on huge IBM server (POWER9, 2 NUMA nodes,\n> 176 CPU, 1Tb RAM).\n>\n> Here the difference between local and global tables is not so large:\n>\n> Local temp: 739k TPS\n> Global temp: 924k TPS\n>\n\nis not difference between local temp buffers and global temp buffers by too\nlow value of TEMP_BUFFERS?\n\nPavel\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Mon, 19 Aug 2019 13:25:37 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 19.08.2019 14:25, Pavel Stehule wrote:\n>\n>\n> po 19. 8. 2019 v 13:16 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 19.08.2019 11:51, Konstantin Knizhnik wrote:\n>>\n>>\n>> On 18.08.2019 11:28, Pavel Stehule wrote:\n>>>\n>>>\n>>> ne 18. 8. 2019 v 9:02 odesílatel Konstantin Knizhnik\n>>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n>>> napsal:\n>>>\n>>>\n>>>\n>>> On 16.08.2019 20:17, Pavel Stehule wrote:\n>>>>\n>>>>\n>>>> pá 16. 8. 2019 v 16:12 odesílatel Konstantin Knizhnik\n>>>> <k.knizhnik@postgrespro.ru\n>>>> <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>>>>\n>>>> I did more investigations of performance of global temp\n>>>> tables with shared buffers vs. vanilla (local) temp tables.\n>>>>\n>>>> 1. Combination of persistent and temporary tables in\n>>>> the same query.\n>>>>\n>>>> Preparation:\n>>>> create table big(pk bigint primary key, val bigint);\n>>>> insert into big values\n>>>> (generate_series(1,100000000),generate_series(1,100000000));\n>>>> create temp table lt(key bigint, count bigint);\n>>>> create global temp table gt(key bigint, count bigint);\n>>>>\n>>>> Size of table is about 6Gb, I run this test on desktop\n>>>> with 16GB of RAM and postgres with 1Gb shared buffers.\n>>>> I run two queries:\n>>>>\n>>>> insert into T (select count(*),pk/P as key from big\n>>>> group by key);\n>>>> select sum(count) from T;\n>>>>\n>>>> where P is (100,10,1) and T is name of temp table (lt\n>>>> or gt).\n>>>> The table below contains times of both queries in msec:\n>>>>\n>>>> Percent of selected data\n>>>> \t1%\n>>>> \t10%\n>>>> \t100%\n>>>> Local temp table\n>>>> \t44610\n>>>> 90\n>>>> \t47920\n>>>> 891\n>>>> \t63414\n>>>> 21612\n>>>> Global temp table\n>>>> \t44669\n>>>> 35\n>>>> \t47939\n>>>> 298\n>>>> \t59159\n>>>> 26015\n>>>>\n>>>>\n>>>> As you can see, time of insertion in temporary table is\n>>>> almost the same\n>>>> and time of traversal of 
temporary table is about twice\n>>>> smaller for global temp table\n>>>> when it fits in RAM together with persistent table and\n>>>> slightly worser when it doesn't fit.\n>>>>\n>>>>\n>>>>\n>>>> 2. Temporary table only access.\n>>>> The same system, but Postgres is configured with\n>>>> shared_buffers=10GB, max_parallel_workers = 4,\n>>>> max_parallel_workers_per_gather = 4\n>>>>\n>>>> Local temp tables:\n>>>> create temp table local_temp(x1 bigint, x2 bigint, x3\n>>>> bigint, x4 bigint, x5 bigint, x6 bigint, x7 bigint, x8\n>>>> bigint, x9 bigint);\n>>>> insert into local_temp values\n>>>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>>>> select sum(x1) from local_temp;\n>>>>\n>>>> Global temp tables:\n>>>> create global temporary table global_temp(x1 bigint, x2\n>>>> bigint, x3 bigint, x4 bigint, x5 bigint, x6 bigint, x7\n>>>> bigint, x8 bigint, x9 bigint);\n>>>> insert into global_temp values\n>>>> (generate_series(1,100000000),0,0,0,0,0,0,0,0);\n>>>> select sum(x1) from global_temp;\n>>>>\n>>>> Results (msec):\n>>>>\n>>>> \tInsert\n>>>> \tSelect\n>>>> Local temp table \t37489\n>>>> \t48322\n>>>> Global temp table \t44358\n>>>> \t3003\n>>>>\n>>>>\n>>>> So insertion in local temp table is performed slightly\n>>>> faster but select is 16 times slower!\n>>>>\n>>>> Conclusion:\n>>>> In the assumption then temp table fits in memory,\n>>>> global temp tables with shared buffers provides better\n>>>> performance than local temp table.\n>>>> I didn't consider here global temp tables with local\n>>>> buffers because for them results should be similar with\n>>>> local temp tables.\n>>>>\n>>>>\n>>>> Probably there is not a reason why shared buffers should be\n>>>> slower than local buffers when system is under low load.\n>>>>\n>>>> access to shared memory is protected by spin locks (are\n>>>> cheap for few processes), so tests in one or few process\n>>>> are not too important (or it is just one side of space)\n>>>>\n>>>> another topic can be performance on MS Sys - 
there are\n>>>> stories about not perfect performance of shared memory there.\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>> One more test which is used to simulate access to temp\n>>> tables under high load.\n>>> I am using \"upsert\" into temp table in multiple connections.\n>>>\n>>> create global temp table gtemp (x integer primary key, y\n>>> bigint);\n>>>\n>>> upsert.sql:\n>>> insert into gtemp values (random() * 1000000, 0) on\n>>> conflict(x) do update set y=gtemp.y+1;\n>>>\n>>> pgbench -c 10 -M prepared -T 100 -P 1 -n -f upsert.sql postgres\n>>>\n>>>\n>>> I failed to find some standard way in pgbech to perform\n>>> per-session initialization to create local temp table,\n>>> so I just insert this code in pgbench code:\n>>>\n>>> diff --git a/src/bin/pgbench/pgbench.c\n>>> b/src/bin/pgbench/pgbench.c\n>>> index 570cf33..af6a431 100644\n>>> --- a/src/bin/pgbench/pgbench.c\n>>> +++ b/src/bin/pgbench/pgbench.c\n>>> @@ -5994,6 +5994,7 @@ threadRun(void *arg)\n>>> {\n>>> if ((state[i].con = doConnect()) ==\n>>> NULL)\n>>> goto done;\n>>> + executeStatement(state[i].con, \"create temp table ltemp(x\n>>> integer primary key, y bigint)\");\n>>> }\n>>> }\n>>>\n>>>\n>>> Results are the following:\n>>> Global temp table: 117526 TPS\n>>> Local temp table: 107802 TPS\n>>>\n>>>\n>>> So even for this workload global temp table with shared\n>>> buffers are a little bit faster.\n>>> I will be pleased if you can propose some other testing\n>>> scenario.\n>>>\n>>>\n>>> please, try to increase number of connections.\n>>\n>> With 20 connections and 4 pgbench threads results are similar:\n>> 119k TPS for global temp tables and 115k TPS for local temp tables.\n>>\n>> I have tried yet another scenario: read-only access to temp tables:\n>>\n>> \\set id random(1,10000000)\n>> select sum(y) from ltemp where x=:id;\n>>\n>> Tables are created and initialized in pgbench session startup:\n>>\n>> knizhnik@knizhnik:~/postgresql$ git diff\n>> diff --git a/src/bin/pgbench/pgbench.c 
b/src/bin/pgbench/pgbench.c\n>> index 570cf33..95295b0 100644\n>> --- a/src/bin/pgbench/pgbench.c\n>> +++ b/src/bin/pgbench/pgbench.c\n>> @@ -5994,6 +5994,8 @@ threadRun(void *arg)\n>> {\n>> if ((state[i].con = doConnect()) == NULL)\n>> goto done;\n>> + executeStatement(state[i].con,\n>> \"create temp table ltemp(x integer primary key, y bigint)\");\n>> + executeStatement(state[i].con,\n>> \"insert into ltemp values (generate_series(1,1000000),\n>> generate_series(1,1000000))\");\n>> }\n>> }\n>>\n>>\n>> Results for 10 connections with 10 million inserted records per\n>> table and 100 connections with 1 million inserted record per table :\n>>\n>> #connections:\n>> \t10\n>> \t100\n>> local temp\n>> \t68k\n>> \t90k\n>> global temp, shared_buffers=1G\n>> \t63k\n>> \t61k\n>> global temp, shared_buffers=10G \t150k\n>> \t150k\n>>\n>>\n>>\n>> So temporary tables with local buffers are slightly faster when\n>> data doesn't fit in shared buffers, but significantly slower when\n>> it fits.\n>>\n>>\n>\n> All previously reported results were produced at my desktop.\n> I also run this read-only test on huge IBM server (POWER9, 2 NUMA\n> nodes, 176 CPU, 1Tb RAM).\n>\n> Here the difference between local and global tables is not so large:\n>\n> Local temp: 739k TPS\n> Global temp: 924k TPS\n>\n>\n> is not difference between local temp buffers and global temp buffers \n> by too low value of TEMP_BUFFERS?\n\n\nCertainly, default (small) temp buffer size plays roles.\nBut it this IPC host this difference is not so important.\nResult with local temp tables and temp_buffers = 1GB: 859k TPS.\n\n> -- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Aug 2019 14:51:59 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "> Certainly, default (small) temp buffer size plays roles.\n> But it this IPC host this difference is not so important.\n> Result with local temp tables and temp_buffers = 1GB: 859k TPS.\n>\n\nIt is little bit unexpected result.I understand so it partially it is\ngeneric problem access to smaller dedicated caches versus access to bigger\nshared cache.\n\nBut it is hard to imagine so access to local cache is 10% slower than\naccess to shared cache. Maybe there is some bottle neck - maybe our\nimplementation of local buffers are suboptimal.\n\nUsing local buffers for global temporary tables can be interesting from\nanother reason - it uses temporary files, and temporary files can be\nforwarded on ephemeral IO on Amazon cloud (with much better performance\nthan persistent IO).\n\n\n\n\n>\n> --\n>\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Mon, 19 Aug 2019 17:53:27 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 19.08.2019 18:53, Pavel Stehule wrote:\n>\n>\n>\n> Certainly, default (small) temp buffer size plays roles.\n> But it this IPC host this difference is not so important.\n> Result with local temp tables and temp_buffers = 1GB: 859k TPS.\n>\n>\n> It is little bit unexpected result.I understand so it partially it is \n> generic problem access to smaller dedicated caches versus access to \n> bigger shared cache.\n>\n> But it is hard to imagine so access to local cache is 10% slower than \n> access to shared cache. Maybe there is some bottle neck - maybe our \n> implementation of local buffers are suboptimal.\n\nIt may be caused by system memory allocator - in case of using shared \nbuffers we do not need to ask OS to allocate more memory.\n\n>\n> Using local buffers for global temporary tables can be interesting \n> from another reason - it uses temporary files, and temporary files can \n> be forwarded on ephemeral IO on Amazon cloud (with much better \n> performance than persistent IO).\n>\n>\n\nMy assumption is that temporary tables almost always fit in memory. So \nin most cases there is on need to write data to file at all.\n\n\nAs I wrote at the beginning of this thread, one of the problems with \ntemporary table sis that it is not possible to use them at replica.\nGlobal temp tables allows to share metadata between master and replica.\nI perform small investigation: how difficult it will be to support \ninserts in temp tables at replica.\nFirst my impression was that it can be done in tricky but simple way.\n\nBy making small changes changing just three places:\n1. Prohibit non-select statements in read-only transactions\n2. Xid assignment (return FrozenTransactionId)\n3. 
Transaction commit/abort\n\nI managed to provide normal work with global temp tables at replica.\nBut there is one problem with this approach: it is not possible to undo \nchanges in temp tables so rollback doesn't work.\n\nI tried another solution, but assigning some dummy Xids to standby \ntransactions.\nBut this approach require much more changes:\n- Initialize page for such transaction in CLOG\n- Mark transaction as committed/aborted in XCLOG\n- Change snapshot check in visibility function\n\nAnd still I didn't find safe way to cleanup CLOG space.\nAlternative solution is to implement \"local CLOG\" for such transactions.\nThe straightforward solution is to use hashtable. But it may cause \nmemory overflow if we have long living backend which performs huge \nnumber of transactions.\nAlso in this case we need to change visibility check functions.\n\nSo I have implemented simplest solution with frozen xid and force \nbackend termination in case of transaction rollback (so user will no see \ninconsistent behavior).\nAttached please find global_private_temp_replica.patch which implements \nthis approach.\nIt will be nice if somebody can suggest better solution for temporary \ntables at replica.\n\n\n\n\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 20 Aug 2019 17:51:40 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "út 20. 8. 2019 v 16:51 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.08.2019 18:53, Pavel Stehule wrote:\n>\n>\n>\n>\n>> Certainly, default (small) temp buffer size plays roles.\n>> But it this IPC host this difference is not so important.\n>> Result with local temp tables and temp_buffers = 1GB: 859k TPS.\n>>\n>\n> It is little bit unexpected result.I understand so it partially it is\n> generic problem access to smaller dedicated caches versus access to bigger\n> shared cache.\n>\n> But it is hard to imagine so access to local cache is 10% slower than\n> access to shared cache. Maybe there is some bottle neck - maybe our\n> implementation of local buffers are suboptimal.\n>\n>\n> It may be caused by system memory allocator - in case of using shared\n> buffers we do not need to ask OS to allocate more memory.\n>\n\nmaybe, but shared buffers you have a overhead with searching free buffers\nand some overhead with synchronization processes.\n\n>\n>\n> Using local buffers for global temporary tables can be interesting from\n> another reason - it uses temporary files, and temporary files can be\n> forwarded on ephemeral IO on Amazon cloud (with much better performance\n> than persistent IO).\n>\n>\n>\n> My assumption is that temporary tables almost always fit in memory. So in\n> most cases there is on need to write data to file at all.\n>\n>\n> As I wrote at the beginning of this thread, one of the problems with\n> temporary table sis that it is not possible to use them at replica.\n> Global temp tables allows to share metadata between master and replica.\n>\n\nI am not sure if I understand to last sentence. Global temp tables should\nbe replicated on replica servers. 
But the content should not be replicated.\nThis should be session specific.\n\n\n> I perform small investigation: how difficult it will be to support inserts\n> in temp tables at replica.\n> First my impression was that it can be done in tricky but simple way.\n>\n> By making small changes changing just three places:\n> 1. Prohibit non-select statements in read-only transactions\n> 2. Xid assignment (return FrozenTransactionId)\n> 3. Transaction commit/abort\n>\n> I managed to provide normal work with global temp tables at replica.\n> But there is one problem with this approach: it is not possible to undo\n> changes in temp tables so rollback doesn't work.\n>\n> I tried another solution, but assigning some dummy Xids to standby\n> transactions.\n> But this approach require much more changes:\n> - Initialize page for such transaction in CLOG\n> - Mark transaction as committed/aborted in XCLOG\n> - Change snapshot check in visibility function\n>\n> And still I didn't find safe way to cleanup CLOG space.\n> Alternative solution is to implement \"local CLOG\" for such transactions.\n> The straightforward solution is to use hashtable. But it may cause memory\n> overflow if we have long living backend which performs huge number of\n> transactions.\n> Also in this case we need to change visibility check functions.\n>\n> So I have implemented simplest solution with frozen xid and force backend\n> termination in case of transaction rollback (so user will no see\n> inconsistent behavior).\n> Attached please find global_private_temp_replica.patch which implements\n> this approach.\n> It will be nice if somebody can suggest better solution for temporary\n> tables at replica.\n>\n\nThis is another hard issue. Probably backend temination should be\nacceptable solution. I don't understand well to this area, but if replica\nallows writing (to global temp tables), then replica have to have local\nCLOG.\n\nCLOG for global temp tables can be more simple then standard CLOG. 
Data are\nnot shared, and life of data (and number of transactions) can be low.\n\nAnother solution is wait on ZHeap storage and replica can to have own UNDO\nlog.\n\n\n\n>\n>\n>\n>\n>\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Tue, 20 Aug 2019 18:06:06 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 20.08.2019 19:06, Pavel Stehule wrote:\n>\n>\n> As I wrote at the beginning of this thread, one of the problems\n> with temporary table sis that it is not possible to use them at\n> replica.\n> Global temp tables allows to share metadata between master and\n> replica.\n>\n>\n> I am not sure if I understand to last sentence. Global temp tables \n> should be replicated on replica servers. But the content should not be \n> replicated. This should be session specific.\n\nObviously.\nWhen we run OLAP queries at replica, it will be great if we can do\n\ninsert into temp_table (select ...);\n\nWith local temp tables it is not possible just because you can not \ncreate temp table at replica.\nBut global temp table can be created at master and populated with data \nat replica.\n\n> I perform small investigation: how difficult it will be to support\n> inserts in temp tables at replica.\n> First my impression was that it can be done in tricky but simple way.\n>\n> By making small changes changing just three places:\n> 1. Prohibit non-select statements in read-only transactions\n> 2. Xid assignment (return FrozenTransactionId)\n> 3. Transaction commit/abort\n>\n> I managed to provide normal work with global temp tables at replica.\n> But there is one problem with this approach: it is not possible to\n> undo changes in temp tables so rollback doesn't work.\n>\n> I tried another solution, but assigning some dummy Xids to standby\n> transactions.\n> But this approach require much more changes:\n> - Initialize page for such transaction in CLOG\n> - Mark transaction as committed/aborted in XCLOG\n> - Change snapshot check in visibility function\n>\n> And still I didn't find safe way to cleanup CLOG space.\n> Alternative solution is to implement \"local CLOG\" for such\n> transactions.\n> The straightforward solution is to use hashtable. 
But it may cause\n> memory overflow if we have long living backend which performs huge\n> number of transactions.\n> Also in this case we need to change visibility check functions.\n>\n> So I have implemented simplest solution with frozen xid and force\n> backend termination in case of transaction rollback (so user will\n> no see inconsistent behavior).\n> Attached please find global_private_temp_replica.patch which\n> implements this approach.\n> It will be nice if somebody can suggest better solution for\n> temporary tables at replica.\n>\n>\n> This is another hard issue. Probably backend temination should be \n> acceptable solution. I don't understand well to this area, but if \n> replica allows writing (to global temp tables), then replica have to \n> have local CLOG.\n\nThere are several problems:\n\n1. How to choose XID for writing transaction at standby. The simplest \nsolution is to just add 0x7fffffff to the current XID.\nIt eliminates possibility of conflict with normal XIDs (received from \nmaster).\nBut requires changes in visibility functions. Visibility check function \ndo not know OID of tuple owner, just XID stored in the tuple header. It \nshould make a decision just based on this XID.\n\n2. How to perform cleanup of not needed XIDs. Right now there is quite \ncomplex logic of how to free CLOG pages.\n\n3. How to implement visibility rules to such XIDs.\n\n>\n> CLOG for global temp tables can be more simple then standard CLOG. 
\n> Data are not shared, and life of data (and number of transactions) can \n> be low.\n>\n> Another solution is wait on ZHeap storage and replica can to have own \n> UNDO log.\n>\nI thought about implementation of special table access method for \ntemporary tables.\nI am trying to understand now if it is the only possible approach or \nthere are simpler solutions.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 20 Aug 2019 19:42:04 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "út 20. 8. 2019 v 18:42 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 20.08.2019 19:06, Pavel Stehule wrote:\n>\n>\n>\n> As I wrote at the beginning of this thread, one of the problems with\n>> temporary table sis that it is not possible to use them at replica.\n>> Global temp tables allows to share metadata between master and replica.\n>>\n>\n> I am not sure if I understand to last sentence. Global temp tables should\n> be replicated on replica servers. But the content should not be replicated.\n> This should be session specific.\n>\n>\n> Obviously.\n> When we run OLAP queries at replica, it will be great if we can do\n>\n> insert into temp_table (select ...);\n>\n> With local temp tables it is not possible just because you can not create\n> temp table at replica.\n> But global temp table can be created at master and populated with data at\n> replica.\n>\n\nyes\n\n\n>\n>\n>> I perform small investigation: how difficult it will be to support\n>> inserts in temp tables at replica.\n>> First my impression was that it can be done in tricky but simple way.\n>>\n>> By making small changes changing just three places:\n>> 1. Prohibit non-select statements in read-only transactions\n>> 2. Xid assignment (return FrozenTransactionId)\n>> 3. 
Transaction commit/abort\n>>\n>> I managed to provide normal work with global temp tables at replica.\n>> But there is one problem with this approach: it is not possible to undo\n>> changes in temp tables so rollback doesn't work.\n>>\n>> I tried another solution, but assigning some dummy Xids to standby\n>> transactions.\n>> But this approach require much more changes:\n>> - Initialize page for such transaction in CLOG\n>> - Mark transaction as committed/aborted in XCLOG\n>> - Change snapshot check in visibility function\n>>\n>> And still I didn't find safe way to cleanup CLOG space.\n>> Alternative solution is to implement \"local CLOG\" for such transactions.\n>> The straightforward solution is to use hashtable. But it may cause memory\n>> overflow if we have long living backend which performs huge number of\n>> transactions.\n>> Also in this case we need to change visibility check functions.\n>>\n>> So I have implemented simplest solution with frozen xid and force backend\n>> termination in case of transaction rollback (so user will no see\n>> inconsistent behavior).\n>> Attached please find global_private_temp_replica.patch which implements\n>> this approach.\n>> It will be nice if somebody can suggest better solution for temporary\n>> tables at replica.\n>>\n>\n> This is another hard issue. Probably backend temination should be\n> acceptable solution. I don't understand well to this area, but if replica\n> allows writing (to global temp tables), then replica have to have local\n> CLOG.\n>\n>\n> There are several problems:\n>\n> 1. How to choose XID for writing transaction at standby. The simplest\n> solution is to just add 0x7fffffff to the current XID.\n> It eliminates possibility of conflict with normal XIDs (received from\n> master).\n> But requires changes in visibility functions. Visibility check function do\n> not know OID of tuple owner, just XID stored in the tuple header. It should\n> make a decision just based on this XID.\n>\n> 2. 
How to perform cleanup of not needed XIDs. Right now there is quite\n> complex logic of how to free CLOG pages.\n>\n\n> 3. How to implement visibility rules to such XIDs.\n>\n\nin theory every session can have own CLOG. When you finish session, you can\ntruncate this file.\n\n>\n>\n> CLOG for global temp tables can be more simple then standard CLOG. Data\n> are not shared, and life of data (and number of transactions) can be low.\n>\n> Another solution is wait on ZHeap storage and replica can to have own UNDO\n> log.\n>\n> I thought about implementation of special table access method for\n> temporary tables.\n>\n\n+1\n\n\n> I am trying to understand now if it is the only possible approach or\n> there are simpler solutions.\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Tue, 20 Aug 2019 19:01:24 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 20.08.2019 20:01, Pavel Stehule wrote:\n> Another solution is wait on ZHeap storage and replica can to have own \n> UNDO log.\n>\n>>\n> I thought about implementation of special table access method for\n> temporary tables.\n>\n>\n> +1\n>\nUnfortunately implementing special table access method for temporary \ntables doesn't solve all problems.\nXID generation is not part of table access methods.\nSo we still need to assign some XID to write transaction at replica \nwhich will not conflict with XIDs received from master.\nActually only global temp tables can be updated at replica and so \nassigned XIDs can be stored only in tuples of such relations.\nBut still I am not sure that we can use arbitrary XID for such \ntransactions at replica.\n\nAlso I upset by amount of functionality which has to be reimplemented \nfor global temp tables if we really want to provide access method for them:\n\n1. CLOG\n2. vacuum\n3. MVCC visibility\n\nAnd still it is not possible to encapsulate all changes need to support \nwrites to temp tables at replica inside table access method.\nXID assignment, transaction commit and abort, subtransactions - all this \nplaces need to be patched.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 21 Aug 2019 11:54:29 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 21.08.2019 11:54, Konstantin Knizhnik wrote:\n>\n>\n> On 20.08.2019 20:01, Pavel Stehule wrote:\n>> Another solution is wait on ZHeap storage and replica can to have own \n>> UNDO log.\n>>\n>>>\n>> I thought about implementation of special table access method for\n>> temporary tables.\n>>\n>>\n>> +1\n>>\n> Unfortunately implementing special table access method for temporary \n> tables doesn't solve all problems.\n> XID generation is not part of table access methods.\n> So we still need to assign some XID to write transaction at replica \n> which will not conflict with XIDs received from master.\n> Actually only global temp tables can be updated at replica and so \n> assigned XIDs can be stored only in tuples of such relations.\n> But still I am not sure that we can use arbitrary XID for such \n> transactions at replica.\n>\n> Also I upset by amount of functionality which has to be reimplemented \n> for global temp tables if we really want to provide access method for \n> them:\n>\n> 1. CLOG\n> 2. vacuum\n> 3. MVCC visibility\n>\n> And still it is not possible to encapsulate all changes need to \n> support writes to temp tables at replica inside table access method.\n> XID assignment, transaction commit and abort, subtransactions - all \n> this places need to be patched.\n>\n\nI was able to fully support work with global temp tables at replica \n(including subtransactions).\nThe patch is attached. Also you can find this version in \nhttps://github.com/postgrespro/postgresql.builtin_pool/tree/global_temp_hot\n\nRight now transactions at replica updating global temp table are \nassigned special kind of GIDs which are not related with XIDs received \nfrom master.\nSo special visibility rules are used for such tables at replica. Also I \nhave to patch TransactionIdIsInProgress, TransactionIdDidCommit, \nTransactionIdGetCurrent\nfunctions to correctly handle such XIDs. 
In principle it is possible to \nimplement global temp tables as special heap access method. But it will \nrequire copying a lot of code (heapam.c)\nso I prefer to add few checks to existed functions.\n\nThere are still some limitations:\n- Number of transactions at replica which update temp tables is limited \nby 2^32 (wraparound problem is not addressed).\n- I have to maintain in-memory analog of CLOG for such transactions \nwhich is also not cropped. It means that for 2^32 transaction size of \nbitmap can grow up to 0.5Gb.\n\nI try to understand what are the following steps in global temp tables \nsupport.\nThis is why I want to perform short survey - what people are expecting \nfrom global temp tables:\n\n1. I do not need them at all.\n2. Eliminate catalog bloating.\n3. Mostly needed for compatibility with Oracle (simplify porting,...).\n4. Parallel query execution.\n5. Can be used at replica.\n6. More efficient use of resources (first of all memory).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 18 Sep 2019 13:04:36 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "I have added support of all indexes (brin, btree, gin, gist, hash, \nspgist) for global temp tables (before only B-Tree index was supported).\nIt will be nice to have some generic mechanism for it, but I do not \nunderstand how it can look like.\nThe problem is that normal relations are initialized at the moment of \ntheir creation.\nBut for global temp relations metadata already exists while data is \nabsent. We should somehow catch such access to not initialized page (but \nnot not all pages, but just first page of relation)\nand perform initialization on demand.\n\nNew patch for global temp tables with shared buffers is attached.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 20 Sep 2019 18:12:46 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "st 18. 9. 2019 v 12:04 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 21.08.2019 11:54, Konstantin Knizhnik wrote:\n>\n>\n>\n> On 20.08.2019 20:01, Pavel Stehule wrote:\n>\n> Another solution is wait on ZHeap storage and replica can to have own UNDO\n> log.\n>\n>>\n>> I thought about implementation of special table access method for\n>> temporary tables.\n>>\n>\n> +1\n>\n>\n> Unfortunately implementing special table access method for temporary\n> tables doesn't solve all problems.\n> XID generation is not part of table access methods.\n> So we still need to assign some XID to write transaction at replica which\n> will not conflict with XIDs received from master.\n> Actually only global temp tables can be updated at replica and so assigned\n> XIDs can be stored only in tuples of such relations.\n> But still I am not sure that we can use arbitrary XID for such\n> transactions at replica.\n>\n> Also I upset by amount of functionality which has to be reimplemented for\n> global temp tables if we really want to provide access method for them:\n>\n> 1. CLOG\n> 2. vacuum\n> 3. MVCC visibility\n>\n> And still it is not possible to encapsulate all changes need to support\n> writes to temp tables at replica inside table access method.\n> XID assignment, transaction commit and abort, subtransactions - all this\n> places need to be patched.\n>\n>\n> I was able to fully support work with global temp tables at replica\n> (including subtransactions).\n> The patch is attached. Also you can find this version in\n> https://github.com/postgrespro/postgresql.builtin_pool/tree/global_temp_hot\n>\n> Right now transactions at replica updating global temp table are assigned\n> special kind of GIDs which are not related with XIDs received from master.\n> So special visibility rules are used for such tables at replica. 
Also I\n> have to patch TransactionIdIsInProgress, TransactionIdDidCommit,\n> TransactionIdGetCurrent\n> functions to correctly handle such XIDs. In principle it is possible to\n> implement global temp tables as special heap access method. But it will\n> require copying a lot of code (heapam.c)\n> so I prefer to add few checks to existed functions.\n>\n> There are still some limitations:\n> - Number of transactions at replica which update temp tables is limited by\n> 2^32 (wraparound problem is not addressed).\n> - I have to maintain in-memory analog of CLOG for such transactions which\n> is also not cropped. It means that for 2^32 transaction size of bitmap can\n> grow up to 0.5Gb.\n>\n> I try to understand what are the following steps in global temp tables\n> support.\n> This is why I want to perform short survey - what people are expecting\n> from global temp tables:\n>\n> 1. I do not need them at all.\n> 2. Eliminate catalog bloating.\n> 3. Mostly needed for compatibility with Oracle (simplify porting,...).\n> 4. Parallel query execution.\n> 5. Can be used at replica.\n> 6. More efficient use of resources (first of all memory).\n>\n\nThere can be other point important for cloud. Inside some cloud usually\nthere are two types of discs - persistent (slow) and ephemeral (fast). We\neffectively used temp tables there because we moved temp tablespace to\nephemeral discs.\n\nI missing one point in your list - developer's comfort - using temp tables\nis just much more comfortable - you don't need create it again, again, ..\nDue this behave is possible to reduce @2 and @3 can be nice side effect. If\nyou reduce @2 to zero, then @5 should be possible without any other.\n\nPavel\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Fri, 20 Sep 2019 18:43:42 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 20.09.2019 19:43, Pavel Stehule wrote:\n>\n> 1. I do not need them at all.\n> 2. Eliminate catalog bloating.\n> 3. Mostly needed for compatibility with Oracle (simplify porting,...).\n> 4. Parallel query execution.\n> 5. Can be used at replica.\n> 6. More efficient use of resources (first of all memory).\n>\n>\n> There can be other point important for cloud. Inside some cloud \n> usually there are two types of discs - persistent (slow) and ephemeral \n> (fast). We effectively used temp tables there because we moved temp \n> tablespace to ephemeral discs.\n\nYes, I already heard this argument and agree with it.\nI just want to notice two things:\n1. My assumption is that in most cases data of temporary table can fit \nin memory (certainly if we are not limiting them by temp_buffers = 8MB, \nbut store in shared buffers) and so there is on need to write them to \nthe persistent media at all.\n2. Global temp tables do not substitute local temp tables, accessed \nthrough local buffers. So if you want to use temporary storage, you will \nalways have a way to do it.\nThe question is whether we need to support two kinds of global temp \ntables (with shared or private buffers) or just implement one of them.\n\n>\n> I missing one point in your list - developer's comfort - using temp \n> tables is just much more comfortable - you don't need create it again, \n> again, .. Due this behave is possible to reduce @2 and @3 can be nice \n> side effect. If you reduce @2 to zero, then @5 should be possible \n> without any other.\n>\nSorry, I do not completely understand your point here\nYou can use normal (permanent) table and you will not have to create \nthem again and again. It is also possible to use them for storing \ntemporary data - just need to truncate table when data is not needed any \nmore.\nCertainly you can not use the same table in more than one backend. 
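For illustration, the contrast just described might be sketched in SQL like this; the GLOBAL TEMPORARY syntax below is the one proposed in this thread (not accepted by stock PostgreSQL), and the table and column names are invented:

```sql
-- A permanent table reused as scratch space: works, but the name is
-- shared by all backends, so only one session can safely use it at a time.
CREATE TABLE scratch(id int, payload text);
TRUNCATE scratch;  -- clear leftovers from the previous use
INSERT INTO scratch SELECT g, g::text FROM generate_series(1, 1000) g;

-- Proposed global temp table: the definition is created once and shared,
-- but each session sees only its own rows, so there are no name conflicts.
CREATE GLOBAL TEMPORARY TABLE scratch_gtt(id int, payload text);
INSERT INTO scratch_gtt SELECT g, g::text FROM generate_series(1, 1000) g;
```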
Here \nis the main advantage of temp tables - you can have storage of \nper-session data and do not worry about possible name conflicts.\n\n From the other side: there are many cases where format of temporary \ndata is not statically known: it is determined dynamically during \nprogram execution.\nIn this case local temp table provides the most convenient mechanism for \nworking with such data.\n\nThis is why I think that ewe need to have both local and global temp tables.\n\nAlso I do not agree with your statement \"If you reduce @2 to zero, then \n@5 should be possible without any other\".\nIn the solution implemented by Aleksander Alekseev metadata of temporary \ntables is kept in memory and not affecting catalog at all.\nBut them still can not be used at replica.\nThere are still some serious problems which need to be fixed to able it:\nallow insert/update/delete statements for read-only transactions, \nsomehow assign XIDs for them, implement savepoints and rollback of such \ntransactions.\nAll this was done in the last version of my patch.\nYes, it doesn't depend on whether we are using shared or private buffers \nfor temporary tables. The same approach can be implemented for both of them.\nThe question is whether we are really need temp tables at replica and if \nso, do we need full transaction support for them, including rollbacks, \nsubtransactions.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 23 Sep 2019 10:57:08 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "po 23. 9. 2019 v 9:57 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 20.09.2019 19:43, Pavel Stehule wrote:\n>\n>\n> 1. I do not need them at all.\n>> 2. Eliminate catalog bloating.\n>> 3. Mostly needed for compatibility with Oracle (simplify porting,...).\n>> 4. Parallel query execution.\n>> 5. Can be used at replica.\n>> 6. More efficient use of resources (first of all memory).\n>>\n>\n> There can be other point important for cloud. Inside some cloud usually\n> there are two types of discs - persistent (slow) and ephemeral (fast). We\n> effectively used temp tables there because we moved temp tablespace to\n> ephemeral discs.\n>\n>\n> Yes, I already heard this argument and agree with it.\n> I just want to notice two things:\n> 1. My assumption is that in most cases data of temporary table can fit in\n> memory (certainly if we are not limiting them by temp_buffers = 8MB, but\n> store in shared buffers) and so there is on need to write them to the\n> persistent media at all.\n> 2. Global temp tables do not substitute local temp tables, accessed\n> through local buffers. So if you want to use temporary storage, you will\n> always have a way to do it.\n> The question is whether we need to support two kinds of global temp tables\n> (with shared or private buffers) or just implement one of them.\n>\n\nIt's valid only for OLTP. OLAP world is totally different. 
More if all\nusers used temporary tables, and you should to calculate with it - it is\none reason for global temp tables, then you need multiply size by\nmax_connection.\n\nhard to say what is best from implementation perspective, but it can be\nunhappy if global temporary tables has different performance\ncharacteristics and configuration than local temporary tables.\n\n>\n>\n> I missing one point in your list - developer's comfort - using temp tables\n> is just much more comfortable - you don't need create it again, again, ..\n> Due this behave is possible to reduce @2 and @3 can be nice side effect. If\n> you reduce @2 to zero, then @5 should be possible without any other.\n>\n> Sorry, I do not completely understand your point here\n> You can use normal (permanent) table and you will not have to create them\n> again and again. It is also possible to use them for storing temporary data\n> - just need to truncate table when data is not needed any more.\n> Certainly you can not use the same table in more than one backend. Here is\n> the main advantage of temp tables - you can have storage of per-session\n> data and do not worry about possible name conflicts.\n>\n\nYou use temporary tables because you know so you share data between session\nnever. I don't remember any situation when I designed temp tables with\ndifferent schema for different sessions.\n\nUsing global temp table is not effective - you are work with large tables,\nyou need to use delete, .. 
so you cannot to use classic table like temp\ntables effectively.\n\n\n> From the other side: there are many cases where format of temporary data\n> is not statically known: it is determined dynamically during program\n> execution.\n> In this case local temp table provides the most convenient mechanism for\n> working with such data.\n>\n> This is why I think that ewe need to have both local and global temp\n> tables.\n>\n> Also I do not agree with your statement \"If you reduce @2 to zero, then @5\n> should be possible without any other\".\n> In the solution implemented by Aleksander Alekseev metadata of temporary\n> tables is kept in memory and not affecting catalog at all.\n> But them still can not be used at replica.\n> There are still some serious problems which need to be fixed to able it:\n> allow insert/update/delete statements for read-only transactions, somehow\n> assign XIDs for them, implement savepoints and rollback of such\n> transactions.\n> All this was done in the last version of my patch.\n> Yes, it doesn't depend on whether we are using shared or private buffers\n> for temporary tables. The same approach can be implemented for both of them.\n> The question is whether we are really need temp tables at replica and if\n> so, do we need full transaction support for them, including rollbacks,\n> subtransactions.\n>\n\ntemporary tables (of any type) on replica is interesting feature that opens\nsome possibilities. Some queries cannot be optimized and should be divided\nand some results should be stored to temporary tables, analysed (to get\ncorrect statistics), maybe indexed, and after that the calculation can\ncontinue. Now you can do this just only on master. More - on HotStandBy the\ndata are read only, and without direct impact on master (production), so\nyou can do some harder calculation there. 
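The standby workflow described here could look roughly like this with the proposed global temp tables; the syntax is hypothetical and the table names (orders, customers, step1) are invented, since today this pattern only works on the primary with local temp tables:

```sql
-- On the primary: create the definition once; the metadata is replicated
-- to standbys, while the contents stay session-private.
CREATE GLOBAL TEMPORARY TABLE step1(cust_id int, total numeric);

-- On a hot standby, inside an otherwise read-only session:
INSERT INTO step1
    SELECT cust_id, sum(amount) FROM orders GROUP BY cust_id;
ANALYZE step1;  -- collect real statistics to fix estimation errors

-- Continue the heavy computation against the materialized intermediate
-- result, now with accurate row counts, without loading the primary.
SELECT *
FROM step1
JOIN customers USING (cust_id)
WHERE total > 1000;
```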
And temporary tables is used\ntechnique how to fix estimation errors.\n\nI don't think so subtransaction, transaction, rollbacks are necessary for\nthese tables. On second hand with out it, it is half cooked features, and\ncan looks pretty strange in pg environment.\n\nI am very happy, how much work you do in this area, I had not a courage to\nstart this job, but I don't think so this work can be reduced just to some\nsupported scenarios - and I hope so correct implementation is possible -\nalthough it is not simply work.\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Mon, 23 Sep 2019 18:50:19 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "This broke recently. Can you please rebase?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:28:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 25.09.2019 23:28, Alvaro Herrera wrote:\n> This broke recently. Can you please rebase?\n>\nRebased version of the patch is attached.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 26 Sep 2019 10:05:23 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "As both Robert and Pavel think that aspects of using GTT in \nparallel queries and at a replica should be considered separately,\nI have prepared the simplest version of the patch for GTT, which introduces \nminimal differences from current (local) temporary tables.\nSo GTT are stored in private buffers and can not be accessed at a replica, in \nprepared transactions or in parallel queries.\nBut it supports all existing built-in indexes (hash, nbtree, brin, gin, \ngist, spgist) and per-backend statistics.\nThere are no DDL limitations for GTT.\n\nAlso I have not yet introduced the pg_statistic view (as proposed by Pavel). \nI am afraid that it may break compatibility with some existing extensions \nand applications.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 8 Nov 2019 15:43:12 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Yet another version of my GTT patch addressing issues reported by \n曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n* Bug in TRUNCATE is fixed,\n* ON COMMIT DELETE ROWS option is supported\n* ALTER TABLE is correctly handled\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 11 Nov 2019 17:54:42 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Now pg_gtt_statistic view is provided for global temp tables.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 20 Nov 2019 19:32:14 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 07:32:14PM +0300, Konstantin Knizhnik wrote:\n> Now pg_gtt_statistic view is provided for global temp tables.\n\nLatest patch fails to apply, per Mr Robot's report. Could you please\nrebase and send an updated version? For now I have moved the patch to\nnext CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 10:56:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 01.12.2019 4:56, Michael Paquier wrote:\n> On Wed, Nov 20, 2019 at 07:32:14PM +0300, Konstantin Knizhnik wrote:\n>> Now pg_gtt_statistic view is provided for global temp tables.\n> Latest patch fails to apply, per Mr Robot's report. Could you please\n> rebase and send an updated version? For now I have moved the patch to\n> next CF, waiting on author.\n> --\n> Michael\nRebased version of the patch is attached.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 2 Dec 2019 12:55:57 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Hi all,\r\n\r\nI am not aware enough of the Postgres internals to give advice about the implementation.\r\n\r\nBut my feeling is that there is another big interest for this feature: simplify the Oracle to PostgreSQL migration of applications that use global temporary tables. And this is quite common when stored procedures are used. In such a case, we currently need to modify the logic of the code, always implementing an ugly solution (either add CREATE TEMP TABLE statements in the code everywhere it is needed, or use a regular table with additional TRUNCATE statements if we can ensure that only a single connection uses the table at a time).\r\n\r\nSo, Konstantin and all, Thanks in advance for all that could be done on this feature :-)\r\n\r\nBest regards.",
"msg_date": "Sun, 22 Dec 2019 17:04:26 +0000",
"msg_from": "Philippe BEAUDOIN <phb07@apra.asso.fr>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Hi,\n\nthis patch was marked as waiting on author since the beginning of the\nCF, most likely because it no longer applies (not sure). As there has\nbeen very little activity since then, I've marked it as returned with\nfeedback. Feel free to re-submit an updated patch for 2020-03.\n\nThis definitely does not mean the feature is not desirable, but my\nfeeling is most of the discussion happens on the other thread dealing\nwith global temp tables [1] so maybe we should keep just that one and\ncombine the efforts.\n\n[1] https://commitfest.postgresql.org/26/2349/\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Feb 2020 12:49:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "On 01.02.2020 14:49, Tomas Vondra wrote:\n> Hi,\n>\n> this patch was marked as waiting on author since the beginning of the\n> CF, most likely because it no longer applies (not sure). As there has\n> been very little activity since then, I've marked it as returned with\n> feedback. Feel free to re-submit an updated patch for 2020-03.\n>\n> This definitely does not mean the feature is not desirable, but my\n> feeling is most of the discussion happens on the other thread dealing\n> with global temp tables [1] so maybe we should keep just that one and\n> combine the efforts.\n>\n> [1] https://commitfest.postgresql.org/26/2349/\n>\n\nA new version of the patch with a new method of GTT index construction is \nattached.\nNow GTT indexes are checked before query execution and are initialized \nusing the AM build method.\nSo GTT is now supported for all indexes, including custom indexes.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 3 Feb 2020 23:56:11 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Fix GTT index initialization.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 7 Feb 2020 20:31:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Sorry, small typo in the last patch.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 10 Feb 2020 19:48:29 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Global temporary tables"
},
{
"msg_contents": "Hi,\n\n\n\nI am very interested in this feature that will conform to the SQL standard and I read that :\n\n\n\nSession 1:\n\ncreate global temp table gtt(x integer);\n\ninsert into gtt values (generate_series(1,100000));\n\n\n\nSession 2:\n\ninsert into gtt values (generate_series(1,200000));\n\n\n\nSession1:\n\ncreate index on gtt(x);\n\nexplain select * from gtt where x = 1;\n\n\n\nSession2:\n\nexplain select * from gtt where x = 1;\n\n??? Should we use index here?\n\n\n\nMy answer is - yes.\n\nJust because:\n\n- Such behavior is compatible with regular tables. So it will not\n\nconfuse users and doesn't require some complex explanations.\n\n- It is compatible with Oracle.\n\n\n\nThere is a confusion. Sadly it does not work like that at all with Oracle. Their implementation is buggy in my opinion.\n\nHere is a very simple test case to prove it with the latest version (January 2020) :\n\n\n\nConnected to:\n\nOracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production\n\nVersion 19.6.0.0.0\n\n\n\n-- session 1\n\ncreate global temporary table gtt(x integer);\n\nTable created.\n\n\n\n-- session 2\n\ninsert into gtt SELECT level FROM dual CONNECT BY LEVEL <= 100000;\n\n100000 rows created.\n\n\n\n-- session 1\n\ncreate index igtt on gtt(x);\n\nIndex created.\n\n\n\n-- session 2\n\nselect * from gtt where x = 9;\n\n\n\nno rows selected\n\n\n\nselect /*+ FULL(gtt) */ * from gtt where x = 9;\n\n\n\n X\n\n----------\n\n 9\n\n\n\nWhat happened ? The optimizer (planner) knows the new index igtt can be efficient via dynamic sampling. Hence, igtt is used at execution time...but it is NOT populated. By default I obtained no line. If I force a full scan of the table with a hint /*+ FULL */ you can see that I obtain my line 9. Different results with different exec plans it's a WRONG RESULT bug, the worst kind of bugs.\n\nPlease don't consider Oracle as a reference for your implementation. 
I am 100% sure you can implement and document that better than Oracle. E.g index is populated and considered only for transactions that started after the index creation or something like that. It would be far better than this misleading behaviour.\n\nRegards,\n\nPhil\n\n________________________________\nFrom: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nSent: Monday, February 10, 2020 5:48:29 PM\nTo: Tomas Vondra <tomas.vondra@2ndquadrant.com>; Philippe BEAUDOIN <phb07@apra.asso.fr>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Konstantin Knizhnik <knizhnik@garret.ru>\nSubject: Re: Global temporary tables\n\nSorry, small typo in the last patch.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 12 Feb 2020 17:28:58 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Global temporary tables"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nb654714 has reworked the way we handle removal of CRLF for several\ncode paths, and has repeated the same code patterns to do that in 8\ndifferent places. Could it make sense to refactor things as per the\nattached with a new routine in common/string.c?\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 12:18:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Refactoring code stripping trailing \\n and \\r from strings"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 12:18:20PM +0900, Michael Paquier wrote:\n> Hi Tom,\n> \n> b654714 has reworked the way we handle removal of CRLF for several\n> code paths, and has repeated the same code patterns to do that in 8\n> different places. Could it make sense to refactor things as per the\n> attached with a new routine in common/string.c?\n\nYes, I think this is a good idea.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 6 Aug 2019 15:10:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring code stripping trailing \\n and \\r from strings"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 03:10:33PM -0400, Bruce Momjian wrote:\n> On Thu, Aug 1, 2019 at 12:18:20PM +0900, Michael Paquier wrote:\n>> b654714 has reworked the way we handle removal of CRLF for several\n>> code paths, and has repeated the same code patterns to do that in 8\n>> different places. Could it make sense to refactor things as per the\n>> attached with a new routine in common/string.c?\n> \n> Yes, I think this is a good idea.\n\nThanks for the review, Bruce! Tom, do you have any objections?\n--\nMichael",
"msg_date": "Wed, 7 Aug 2019 10:10:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring code stripping trailing \\n and \\r from strings"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 10:10:36AM +0900, Michael Paquier wrote:\n> Thanks for the review, Bruce! Tom, do you have any objections?\n\nHearing nothing but cicadas from outside, applied. There were some\nwarnings I missed with the first version, which are fixed.\n--\nMichael",
"msg_date": "Fri, 9 Aug 2019 11:15:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring code stripping trailing \\n and \\r from strings"
}
] |
[
{
"msg_contents": "Hi all,\n\n7cce1593 has introduced a new routine makeIndexInfo to create\nIndexInfo nodes. It happens that we can do a bit more refactoring as\nper the attached, as BuildIndexInfo can make use of this routine,\nremoving some duplication on the way (filling in IndexInfo was still\nduplicated in a couple of places before 7cce159).\n\nAny thoughts or objections?\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 13:13:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "More refactoring for BuildIndexInfo"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 01:13:22PM +0900, Michael Paquier wrote:\n> Any thoughts or objections?\n\nHearing nothing, done.\n--\nMichael",
"msg_date": "Sun, 4 Aug 2019 11:20:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: More refactoring for BuildIndexInfo"
}
] |
[
{
"msg_contents": "Hi,\n\nIn the undo system, we use full transaction ids for transactions. For\nrollback of prepared transactions, we were planning to build a\nFullTransactionId by combining TransactionId and epoch, but as\nsuggested by multiple people in that email chain [1][2], the better\nidea is to store the FullTransactionId in TwoPhaseFileHeader.\n\nBackward compatibility need not be handled for this scenario as upgrade\ndoes not support having open prepared transactions.\n\nThere is also one more comment which is yet to be concluded. The\ncomment discusses changing the subxids, which are of TransactionId\ntype, to FullTransactionId type when written to the two-phase\ntransaction file. We could not conclude this as the data is similarly\nstored in TransactionStateData.\n\nPlease find attached the patch storing FullTransactionId in\nTwoPhaseFileHeader/GlobalTransactionData.\n\nLet me know your opinion on the patch and the above comment.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BhUKGJ%2BPg2gE9Hdt6fXHn6ezV7xJnS%2Brm-38ksXZGXYcZh3Gg%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAA4eK1L9BhvnQfa_RJCTpKQf9QZ15pyUW7s32BH78iBC3KbV0g%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Aug 2019 15:02:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Store FullTransactionId in TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "Hi Vignesh,\n\nOn Thu, Aug 1, 2019 at 9:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> In the undo system, we use full-transaction-id for transactions. For\n> rollback of prepared transactions, we were planning to use\n> FullTransactionId by combining TransactionId and epoch, but as\n> suggested by multiple people in that email chain [1][2], the better\n> idea is to store Full-transactionid in TwoPhaseFileHeader\n\n+1\n\n> Backward compatibility need not be handled for this scnario as upgrade\n> does not support having open prepared transactions.\n\n+1\n\n> There is also one more comment which is yet to be concluded. The\n> comment discusses about changing subxids which are of TransactionId\n> type to FullTransactionId type being written in two phase transaction\n> file. We could not conclude this as the data is similarly stored in\n> TransactionStateData.\n\nNo comment on that question or the patch yet but could you please add\nthis to the next Commitfest so that cfbot starts testing it?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 00:05:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 5:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> On Thu, Aug 1, 2019 at 9:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> > In the undo system, we use full-transaction-id for transactions. For\n> > rollback of prepared transactions, we were planning to use\n> > FullTransactionId by combining TransactionId and epoch, but as\n> > suggested by multiple people in that email chain [1][2], the better\n> > idea is to store Full-transactionid in TwoPhaseFileHeader\n>\n> +1\n>\n> > Backward compatibility need not be handled for this scnario as upgrade\n> > does not support having open prepared transactions.\n>\n> +1\n>\n> > There is also one more comment which is yet to be concluded. The\n> > comment discusses about changing subxids which are of TransactionId\n> > type to FullTransactionId type being written in two phase transaction\n> > file. We could not conclude this as the data is similarly stored in\n> > TransactionStateData.\n>\n> No comment on that question or the patch yet but could you please add\n> this to the next Commitfest so that cfbot starts testing it?\n>\nI have added it to the commitfest.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 19:08:40 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 1:38 AM vignesh C <vignesh21@gmail.com> wrote:\n> On Thu, Aug 1, 2019 at 5:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Aug 1, 2019 at 9:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > There is also one more comment which is yet to be concluded. The\n> > > comment discusses about changing subxids which are of TransactionId\n> > > type to FullTransactionId type being written in two phase transaction\n> > > file. We could not conclude this as the data is similarly stored in\n> > > TransactionStateData.\n> >\n> > No comment on that question or the patch yet but could you please add\n> > this to the next Commitfest so that cfbot starts testing it?\n> >\n> I have added it to the commitfest.\n\nThanks. This looks pretty reasonable to me, and I don't think we need\nto worry about the subxid list for now. Here's a version I ran through\npgindent to fix some whitespace problems.\n\nThe pg_prepared_xacts view seems like as good a place as any to start\nshowing FullTransactionId values to users. Here's an experimental\npatch on top to do that, introducing a new \"xid8\" type. After\npg_resetwal -e 1000000000 -D pgdata you can make transactions that\nlook like this:\n\npostgres=# select transaction, gid from pg_prepared_xacts ;\n transaction | gid\n---------------------+-----\n 4294967296000000496 | tx1\n(1 row)\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Fri, 2 Aug 2019 22:36:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 6:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. This looks pretty reasonable to me, and I don't think we need\n> to worry about the subxid list for now.\n\nWhy not just do them all at once?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 2 Aug 2019 08:06:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Sat, Aug 3, 2019 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Aug 2, 2019 at 6:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Thanks. This looks pretty reasonable to me, and I don't think we need\n> > to worry about the subxid list for now.\n>\n> Why not just do them all at once?\n\n[tries for a couple of hours and abandons for now]\n\nIt's a bit of a can of worms. To do it properly, I think\nTransactionStateData::childXids needs to become a pointer to a\nFullTransactionId array called childFxids, so that\nxactGetCommittedChildren() can return it, and that causes knock on\neffects all over the tree, at least xactdesc.c, clog.c, commit_ts.c,\ntransam.c, twophase.c, xact.c need adjusting and you finish up writing\nthe subxact array into various places in the WAL in 64 bit format (but\nnot yet the main xid). Alternatively you need to convert the array of\nFullTransactionId into an array of TransactionId in various places, or\nconvert TransactionId into FullTransactionId just for the 2PC stuff,\nbut both of those are cop outs and require allocating extra copies.\nOf course I am in favour of moving more things to 64 bit format, but I\ndon't want to do them all at once, and there are a number of policy\ndecisions hiding in there, and it's not strictly needed for the change\nthat Vignesh proposes. Vignesh's patch achieves something important\non its own: it avoids the needs for zheap to do a 32->64 conversion.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Aug 2019 13:15:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 13:15:19 +1200, Thomas Munro wrote:\n> On Sat, Aug 3, 2019 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Fri, Aug 2, 2019 at 6:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Thanks. This looks pretty reasonable to me, and I don't think we need\n> > > to worry about the subxid list for now.\n> >\n> > Why not just do them all at once?\n> \n> [tries for a couple of hours and abandons for now]\n> \n> It's a bit of a can of worms. To do it properly, I think\n> TransactionStateData::childXids needs to become a pointer to a\n> FullTransactionId array called childFxids, so that\n> xactGetCommittedChildren() can return it, and that causes knock on\n> effects all over the tree, at least xactdesc.c, clog.c, commit_ts.c,\n> transam.c, twophase.c, xact.c need adjusting and you finish up writing\n> the subxact array into various places in the WAL in 64 bit format (but\n> not yet the main xid). Alternatively you need to convert the array of\n> FullTransactionId into an array of TransactionId in various places, or\n> convert TransactionId into FullTransactionId just for the 2PC stuff,\n> but both of those are cop outs and require allocating extra copies.\n> Of course I am in favour of moving more things to 64 bit format, but I\n> don't want to do them all at once, and there are a number of policy\n> decisions hiding in there, and it's not strictly needed for the change\n> that Vignesh proposes.\n\nHm. Maybe I'm missing something, but what's the point of changing this?\nWe're not anytime soon going to allow transactions that are old enough\nthat 32bit isn't enough to reference them. Nor do I think it's likely\nwe're going to convert the procarray to 64bit xids - keeping the size\ndown is too important for cache efficiency. And as most of the data\nwe're talking about here references *live* transactions, rather than\ninformation that's needed for longer (as e.g. 
a row's xmin/xmax would),\nI don't see why it'd be useful to convert to 64bit xids?\n\nI do see a point in converting these 32bit xids to 64bit xids when\nviewing them, i.e. have pg_prepared_xacts.transaction,\npg_stat_activity.backend_{xid,xmin}, ... return a 64bit xid.\n\n\n> Vignesh's patch achieves something important on its own: it avoids the\n> needs for zheap to do a 32->64 conversion.\n\nHm, is that an actual problem?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 4 Aug 2019 18:44:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 1:44 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-08-05 13:15:19 +1200, Thomas Munro wrote:\n> > On Sat, Aug 3, 2019 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Fri, Aug 2, 2019 at 6:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > Thanks. This looks pretty reasonable to me, and I don't think we need\n> > > > to worry about the subxid list for now.\n> > >\n> > > Why not just do them all at once?\n> >\n> > [tries for a couple of hours and abandons for now]\n> >\n> > It's a bit of a can of worms. To do it properly, I think\n> > TransactionStateData::childXids needs to become a pointer to a\n> > FullTransactionId array called childFxids, so that\n> > xactGetCommittedChildren() can return it, and that causes knock on\n> > effects all over the tree, at least xactdesc.c, clog.c, commit_ts.c,\n> > transam.c, twophase.c, xact.c need adjusting and you finish up writing\n> > the subxact array into various places in the WAL in 64 bit format (but\n> > not yet the main xid). Alternatively you need to convert the array of\n> > FullTransactionId into an array of TransactionId in various places, or\n> > convert TransactionId into FullTransactionId just for the 2PC stuff,\n> > but both of those are cop outs and require allocating extra copies.\n> > Of course I am in favour of moving more things to 64 bit format, but I\n> > don't want to do them all at once, and there are a number of policy\n> > decisions hiding in there, and it's not strictly needed for the change\n> > that Vignesh proposes.\n>\n> Hm. Maybe I'm missing something, but what's the point of changing this?\n> We're not anytime soon going to allow transactions that are old enough\n> that 32bit isn't enough to reference them. Nor do I think it's likely\n> we're going to convert the procarray to 64bit xids - keeping the size\n> down is too important for cache efficiency. 
And as most of the data\n> we're talking about here references *live* transactions, rather than\n> information that's needed for longer (as e.g. a row's xmin/xmax would),\n> I don't see why it'd be useful to convert to 64bit xids?\n\nYeah. I think we're agreed for now that we don't want to change\nprocarray (though we still need to figure out how to compute the 64\nbit horizons correctly and efficiently), and we probably don't want to\nchange any high volume WAL contents, so maybe I was starting down the\nwrong path there (I was thinking of the subxid list for 2pc as\ninfrequent, but obviously it isn't for some people). I don't have\ntime to look into that any more right now, but today's experiment made\nme feel more certain about my earlier statement, that we shouldn't\nworry about the subxid list for now if we don't actually have to.\n\n> I do see a point in converting these 32bit xids to 64bit xids when\n> viewing them, i.e. have pg_prepared_xacts.transaction,\n> pg_stat_activity.backend_{xid,xmin}, ... return a 64bit xid.\n\nYep, I agree, hence xid8.\n\n> > Vignesh's patch achieves something important on its own: it avoids the\n> > needs for zheap to do a 32->64 conversion.\n>\n> Hm, is that an actual problem?\n\nIt creates a place in the undo worker patch set that wants to do an\nxid -> fxid translation, as discussed here:\n\nhttps://www.postgresql.org/message-id/CAA4eK1L9BhvnQfa_RJCTpKQf9QZ15pyUW7s32BH78iBC3KbV0g%40mail.gmail.com\n\nI'm trying to stop people from supplying a general purpose footgun\nthat looks like \"GuessFullTransactionId(xid)\". 
I suspect that any\ntime you think you want to do that, there is probably a better way\nthat doesn't involve having to convince everyone that we didn't mess\nup the epoch part in some unlikely race, which probably involves\nholding onto an fxid that you had somewhere earlier that came\nultimately from the next fxid generator, or deriving it with reference\nto the next fxid or a known older-but-still-running fxid with the\nright interlocking, or something like that. I and others said, well,\nwhy don't we just put the fxid in the 2pc file. That's what Vignesh\nhas proposed, and AFAIK it solves that immediate problem.\n\nThat caused people to ask -- entirely reasonably -- why we don't\nchange ALL the xids in there to fxids, which brings us here. I think\nit includes lots of tricky decisions that I don't want to make right\nnow, hence inclination to defer that question for now.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:44:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 14:44:37 +1200, Thomas Munro wrote:\n> Yeah. I think we're agreed for now that we don't want to change\n> procarray (though we still need to figure out how to compute the 64\n> bit horizons correctly and efficiently)\n\nHm. Is that actually hard? Can't we just use the current logic to\ncompute the horizons in 32bit, and then extend that to 64bit by\ncomparing to nextFullXid or something like that? That shouldn't be more\nthan a few instructions, outside of the contended locks?\n\n\n> and we probably don't want to change any high volume WAL contents, so\n> maybe I was starting down the wrong path there (I was thinking of the\n> subxid list for 2pc as infrequent, but obviously it isn't for some\n> people). I don't have time to look into that any more right now, but\n> today's experiment made me feel more certain about my earlier\n> statement, that we shouldn't worry about the subxid list for now if we\n> don't actually have to.\n\nI don't think we should start to change existing wal contents to 64bit\nxids before there's a benefit from doing so (that obviously doesn't mean\nthat records for a hypothetical AM employing 64bit xids shouldn't\ncontain them, if they are actually long-term values, rather than just\nabout the current transaction). 
I think that's just going to be\nconfusing, without providing much in the way of benefits.\n\n\n> > > Vignesh's patch achieves something important on its own: it avoids the\n> > > needs for zheap to do a 32->64 conversion.\n> >\n> > Hm, is that an actual problem?\n> \n> It creates a place in the undo worker patch set that wants to do an\n> xid -> fxid translation, as discussed here:\n> \n> https://www.postgresql.org/message-id/CAA4eK1L9BhvnQfa_RJCTpKQf9QZ15pyUW7s32BH78iBC3KbV0g%40mail.gmail.com\n\nRight, but I think that can just be done the suggestion from above.\n\n\n> I'm trying to stop people from supplying a general purpose footgun\n> that looks like \"GuessFullTransactionId(xid)\".\n\nI think you're right - but I think we should be able to provide\nfunctions that ensure safety for most if not all of these. For WAL\nreplay routines we can reference xlogrecord, for GetOldestXmin() we can\njust expand internally, by referencing a 64bit xid in ShmemVariableCache\nor such.\n\n\n> I suspect that any time you think you want to do that, there is\n> probably a better way that doesn't involve having to convince everyone\n> that we didn't mess up the epoch part in some unlikely race, which\n> probably involves holding onto an fxid that you had somewhere earlier\n> that came ultimately from the next fxid generator, or deriving it with\n> reference to the next fxid or a known older-but-still-running fxid\n> with the right interlocking, or something like that. I and others\n> said, well, why don't we just put the fxid in the 2pc file. That's\n> what Vignesh has proposed, and AFAIK it solves that immediate problem.\n> \n> That caused people to ask -- entirely reasonably -- why we don't\n> change ALL the xids in there to fxids, which brings us here. I think\n> it includes lots of tricky decisions that I don't want to make right\n> now, hence inclination to defer that question for now.\n\nI'm against doing this just for one part of the record. That seems\nsupremely confusing. 
And I don't buy that it meaningfully helps us in\nthe first place, given the multitude of other records containing 32bit\nxids.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 4 Aug 2019 20:00:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 8:31 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-08-05 14:44:37 +1200, Thomas Munro wrote:\n> > Yeah. I think we're agreed for now that we don't want to change\n> > procarray (though we still need to figure out how to compute the 64\n> > bit horizons correctly and efficiently)\n>\n> Hm. Is that actually hard? Can't we just use the current logic to\n> compute the horizons in 32bit, and then extend that to 64bit by\n> comparing to nextFullXid or something like that? That shouldn't be more\n> than a few instructions, outside of the contended locks?\n>\n>\n> > and we probably don't want to change any high volume WAL contents, so\n> > maybe I was starting down the wrong path there (I was thinking of the\n> > subxid list for 2pc as infrequent, but obviously it isn't for some\n> > people). I don't have time to look into that any more right now, but\n> > today's experiment made me feel more certain about my earlier\n> > statement, that we shouldn't worry about the subxid list for now if we\n> > don't actually have to.\n>\n> I don't think we should start to change existing wal contents to 64bit\n> xids before there's a benefit from doing so (that obviously doesn't mean\n> that records for a hypothetical AM employing 64bit xids shouldn't\n> contain them, if they are actually long-term values, rather than just\n> about the current transaction). 
I think that's just going to be\n> confusing, without providing much in the way of benefits.\n>\n>\n> > > > Vignesh's patch achieves something important on its own: it avoids the\n> > > > needs for zheap to do a 32->64 conversion.\n> > >\n> > > Hm, is that an actual problem?\n> >\n> > It creates a place in the undo worker patch set that wants to do an\n> > xid -> fxid translation, as discussed here:\n> >\n> > https://www.postgresql.org/message-id/CAA4eK1L9BhvnQfa_RJCTpKQf9QZ15pyUW7s32BH78iBC3KbV0g%40mail.gmail.com\n>\n> Right, but I think that can just be done the suggestion from above.\n>\n>\n> > I'm trying to stop people from supplying a general purpose footgun\n> > that looks like \"GuessFullTransactionId(xid)\".\n>\n> I think you're right - but I think we should be able to provide\n> functions that ensure safety for most if not all of these. For WAL\n> replay routines we can reference xlogrecord, for GetOldestXmin() we can\n> just expand internally, by referencing a 64bit xid in ShmemVariableCache\n> or such.\n>\n>\n> > I suspect that any time you think you want to do that, there is\n> > probably a better way that doesn't involve having to convince everyone\n> > that we didn't mess up the epoch part in some unlikely race, which\n> > probably involves holding onto an fxid that you had somewhere earlier\n> > that came ultimately from the next fxid generator, or deriving it with\n> > reference to the next fxid or a known older-but-still-running fxid\n> > with the right interlocking, or something like that. I and others\n> > said, well, why don't we just put the fxid in the 2pc file. That's\n> > what Vignesh has proposed, and AFAIK it solves that immediate problem.\n> >\n> > That caused people to ask -- entirely reasonably -- why we don't\n> > change ALL the xids in there to fxids, which brings us here. 
I think\n> > it includes lots of tricky decisions that I don't want to make right\n> > now, hence inclination to defer that question for now.\n>\n> I'm against doing this just for one part of the record. That seems\n> supremely confusing. And I don't buy that it meaningfully helps us in\n> the first place, given the multitude of other records containing 32bit\n> xids.\n>\nGoing by the discussion shall we conclude that we don't need to\nconvert the subxids into fxid's as part of this fix.\nLet me know if any further changes need to be done.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:25:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 6:56 AM vignesh C <vignesh21@gmail.com> wrote:\n> Going by the discussion shall we conclude that we don't need to\n> convert the subxids into fxid's as part of this fix.\n> Let me know if any further changes need to be done.\n\nI'm not sure, but I think the prior question is whether we want this\npatch at all, and I'm not sure we've achieved consensus on that.\nThomas's point, at least as I understand it, is that if we start doing\n32-bit => 64-bit XID conversions all over the place, it's not going to\nbe long before some incautious developer inserts one that is not\nactually safe. On the other hand, Andres's point, at least as I\nunderstand it, is that putting information that we don't really need\ninto the twophase state file because of some overly rigid coding rule\nis not smart. Both of those arguments sound right to me, but they\nlead to opposite conclusions.\n\nI am somewhat inclined to Andres's conclusion on balance. I think\nthat we can probably define a set of policies about 32 => 64 bit XID\nconversions both in terms of when you can do them and what comments\nyou have to include justifying them and how the API actually works\nthat makes it safe. It might help to think about defining the API in\nterms of a reference FullTransactionId that must be OLDER than the XID\nyou're promoting to an FXID. For instance, we know that all of the\nrelfrozenxid and datfrozenxids are from the current era because we've\ngot freezing machinery to enforce that. So if you got a tuple from the\nheap, it's XID has got to be new, at least modulo bugs. In other\ncases, we may be able to say, hey, look, this XID can't be from before\nthe CLOG cutoff. 
I'm not sure of all the details here, but I'm\ntentatively inclined to think that trying to lay down policies for\nwhen promotion can be done safely is more promising than holding our\nbreath and saying we're never going to promote.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Aug 2019 09:32:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Store FullTransactionId in\n TwoPhaseFileHeader/GlobalTransactionData"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached fixes some typos for \"serialise\" => \"serialize\" and \"materialise\"\n=> \"materialize\".\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Thu, 1 Aug 2019 08:24:17 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": true,
"msg_subject": "Fix typos"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 08:24:17AM -0400, Sehrope Sarkuni wrote:\n> Attached fixes some typos for \"serialise\" => \"serialize\" and \"materialise\"\n> => \"materialize\".\n\nThese don't seem to be typos:\nhttps://en.wiktionary.org/wiki/materialise\nhttps://en.wiktionary.org/wiki/serialise\n--\nMichael",
"msg_date": "Fri, 2 Aug 2019 10:06:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Aug 01, 2019 at 08:24:17AM -0400, Sehrope Sarkuni wrote:\n>> Attached fixes some typos for \"serialise\" => \"serialize\" and \"materialise\"\n>> => \"materialize\".\n\n> These don't seem to be typos:\n> https://en.wiktionary.org/wiki/materialise\n> https://en.wiktionary.org/wiki/serialise\n\nIt's British vs. American spelling. For the most part, Postgres\nfollows American spelling, but there's the odd Briticism here and\nthere. I'm not sure whether it's worth trying to standardize.\nI think the most recent opinion on this was Munro's:\n\nhttps://www.postgresql.org/message-id/CA+hUKGJz-pdMgWXroiwvN-aeG4-AjdWj3gWdQKOSa8g65spdVw@mail.gmail.com\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 22:18:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos"
},
{
"msg_contents": "On 2019-Aug-01, Tom Lane wrote:\n\n> It's British vs. American spelling. For the most part, Postgres\n> follows American spelling, but there's the odd Briticism here and\n> there. I'm not sure whether it's worth trying to standardize.\n> I think the most recent opinion on this was Munro's:\n> \n> https://www.postgresql.org/message-id/CA+hUKGJz-pdMgWXroiwvN-aeG4-AjdWj3gWdQKOSa8g65spdVw@mail.gmail.com\n\nI think slight variations don't really detract from the value of the\nproduct, and consider the odd variation a reminder of the diversity of\nthe project. I don't suggest that we purposefully introduce spelling\nvariations, or that we refrain from fixing ones that appear in code\nwe're changing, but I don't see the point in changing a line for the\nsole reason of standardising the spelling of a word.\n\nThat said, I'm not a native English speaker.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 1 Aug 2019 23:01:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 11:01:59PM -0400, Alvaro Herrera wrote:\n> I think slight variations don't really detract from the value of the\n> product, and consider the odd variation a reminder of the diversity of\n> the project. I don't suggest that we purposefully introduce spelling\n> variations, or that we refrain from fixing ones that appear in code\n> we're changing, but I don't see the point in changing a line for the\n> sole reason of standardising the spelling of a word.\n\nAgreed. This always reminds me of ANALYZE vs. ANALYSE where we don't\nactually document the latter :)\n\n> That said, I'm not a native English speaker.\n\nNeither am I.\n--\nMichael",
"msg_date": "Fri, 2 Aug 2019 13:11:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 10:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> It's British vs. American spelling. For the most part, Postgres\n> follows American spelling, but there's the odd Briticism here and\n> there.\n\n\nThanks for the explanation. I thought that might be the case but didn't\nfind any other usages of \"serialise\" so was not sure.\n\n\n> I'm not sure whether it's worth trying to standardize.\n> I think the most recent opinion on this was Munro's:\n>\n>\n> https://www.postgresql.org/message-id/CA+hUKGJz-pdMgWXroiwvN-aeG4-AjdWj3gWdQKOSa8g65spdVw@mail.gmail.com\n\n\nEither reads fine to me and the best rationale I can think of for going\nwith one spelling is to not have the same \"fix\" come up again.\n\nIf there is a desire to change this, attached is updated to include one\nmore instance of \"materialise\" and a change to the commit message to match\nsome similar ones I found in the past.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Fri, 2 Aug 2019 06:37:44 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 12:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Aug 01, 2019 at 11:01:59PM -0400, Alvaro Herrera wrote:\n> > I think slight variations don't really detract from the value of the\n> > product, and consider the odd variation a reminder of the diversity of\n> > the project. I don't suggest that we purposefully introduce spelling\n> > variations, or that we refrain from fixing ones that appear in code\n> > we're changing, but I don't see the point in changing a line for the\n> > sole reason of standardising the spelling of a word.\n>\n> Agreed. This always reminds me of ANALYZE vs. ANALYSE where we don't\n> actually document the latter :)\n>\n\nI didn't know about that. That's a fun one!\n\n>\n> > That said, I'm not a native English speaker.\n>\n> Neither am I.\n>\n\nI am. Consistency is nice but either reads fine to me. Only brought it up\nas I didn't see many other usages so seemed out of place.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Fri, 2 Aug 2019 06:45:08 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15938\nLogged by: Alexander Kukushkin\nEmail address: cyberdemn@gmail.com\nPostgreSQL version: 10.9\nOperating system: Ubuntu 18.04.2 LTS\nDescription: \n\nOn one of our cluster one of the postgres backend processes was killed by\nkernel oom.\r\nIn the postgres log it is visible as:\r\n2019-08-01 12:40:58.550 UTC,,,56,,5d3096d1.38,26,,2019-07-18 15:57:05\nUTC,,0,LOG,00000,\"server process (PID 6637) was terminated by signal 9:\nKilled\",\"Failed process was running: select \"\"code\"\", \"\"purchase_order\"\",\n\"\"po_event_id\"\", \"\"po_delivery_event_id\"\", \"\"created\"\", \"\"last_modified\"\"\nfrom \"\"zpofrog_data\"\".\"\"purchase_order\"\" where\n(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((\",,,,,,,,\"\"\r\n\r\nAs expected postmaster started terminating other processes:\r\n2019-08-01 12:40:58.550 UTC,,,56,,5d3096d1.38,27,,2019-07-18 15:57:05\nUTC,,0,LOG,00000,\"terminating any other active server\nprocesses\",,,,,,,,,\"\"\r\n2019-08-01 
12:40:58.550\nUTC,\"read_write_user\",\"purchase_orders_frontend_gateway\",19603,\"10.2.26.20:35944\",5d42dbf7.4c93,3,\"idle\",2019-08-01\n12:32:55 UTC,106/0,0,WARNING,57P02,\"terminating connection because of crash\nof another server process\",\"The postmaster has commanded this server process\nto roll back the current transaction and exit, because another server\nprocess exited abnormally and possibly corrupted shared memory.\",\"In a\nmoment you should be able to reconnect to the database and repeat your\ncommand.\",,,,,,,\"PostgreSQL JDBC Driver\"\r\n... a bunch of similar lines ...\r\n2019-08-01 12:40:59.165 UTC,,,5439,,5d30e37b.153f,1,,2019-07-18 21:24:11\nUTC,,0,FATAL,XX000,\"archive command was terminated by signal 3: Quit\",\"The\nfailed archive command was: envdir \"\"/home/postgres/etc/wal-e.d/env\"\" wal-e\nwal-push \"\"pg_wal/0000005000000B8D00000044\"\"\",,,,,,,,\"\"\r\n2019-08-01 12:40:59.179 UTC,,,56,,5d3096d1.38,28,,2019-07-18 15:57:05\nUTC,,0,LOG,00000,\"archiver process (PID 5439) exited with exit code\n1\",,,,,,,,,\"\"\r\n\r\nAnd crash recovery:\r\n2019-08-01 12:40:59.179 UTC,,,56,,5d3096d1.38,29,,2019-07-18 15:57:05\nUTC,,0,LOG,00000,\"all server processes terminated;\nreinitializing\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.423\nUTC,,,22698,\"10.2.73.6:47474\",5d42dddb.58aa,1,\"\",2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"connection received: host=10.2.73.6\nport=47474\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.423 UTC,,,22697,,5d42dddb.58a9,1,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"database system was interrupted; last known up at\n2019-08-01 12:37:59 UTC\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.423\nUTC,,,22699,\"10.2.72.9:33244\",5d42dddb.58ab,1,\"\",2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"connection received: host=10.2.72.9\nport=33244\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.470\nUTC,\"standby\",\"\",22698,\"10.2.73.6:47474\",5d42dddb.58aa,2,\"\",2019-08-01\n12:40:59 UTC,,0,FATAL,57P03,\"the database system is in 
recovery\nmode\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.470\nUTC,\"standby\",\"\",22699,\"10.2.72.9:33244\",5d42dddb.58ab,2,\"\",2019-08-01\n12:40:59 UTC,,0,FATAL,57P03,\"the database system is in recovery\nmode\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.473\nUTC,,,22700,\"10.2.73.6:47478\",5d42dddb.58ac,1,\"\",2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"connection received: host=10.2.73.6\nport=47478\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.473\nUTC,\"standby\",\"\",22700,\"10.2.73.6:47478\",5d42dddb.58ac,2,\"\",2019-08-01\n12:40:59 UTC,,0,FATAL,57P03,\"the database system is in recovery\nmode\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.473\nUTC,,,22701,\"10.2.72.9:33246\",5d42dddb.58ad,1,\"\",2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"connection received: host=10.2.72.9\nport=33246\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.473\nUTC,\"standby\",\"\",22701,\"10.2.72.9:33246\",5d42dddb.58ad,2,\"\",2019-08-01\n12:40:59 UTC,,0,FATAL,57P03,\"the database system is in recovery\nmode\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.493 UTC,,,22697,,5d42dddb.58a9,2,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"database system was not properly shut down; automatic\nrecovery in progress\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.497 UTC,,,22697,,5d42dddb.58a9,3,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"redo starts at B8D/193A7850\",,,,,,,,,\"\"\r\n... a bunch of connection attempts ...\r\n2019-08-01 12:41:06.376 UTC,,,22697,,5d42dddb.58a9,4,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"redo done at B8D/44FFE570\",,,,,,,,,\"\"\r\n2019-08-01 12:41:06.376 UTC,,,22697,,5d42dddb.58a9,5,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"last completed transaction was at log time 2019-08-01\n12:40:54.782747+00\",,,,,,,,,\"\"\r\n2019-08-01 12:41:06.381 UTC,,,22697,,5d42dddb.58a9,6,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"checkpoint starting: end-of-recovery\nimmediate\",,,,,,,,,\"\"\r\n... 
a bunch of connection attempts ...\r\n2019-08-01 12:41:15.780 UTC,,,22697,,5d42dddb.58a9,7,,2019-08-01 12:40:59\nUTC,,0,LOG,00000,\"checkpoint complete: wrote 94580 buffers (36.1%); 0 WAL\nfile(s) added, 6 removed, 20 recycled; write=9.349 s, sync=0.007 s,\ntotal=9.400 s; sync files=45, longest=0.007 s, average=0.000 s;\ndistance=717151 kB, estimate=717151 kB\",,,,,,,,,\"\"\r\n2019-08-01 12:41:15.824 UTC,,,56,,5d3096d1.38,30,,2019-07-18 15:57:05\nUTC,,0,LOG,00000,\"database system is ready to accept\nconnections\",,,,,,,,,\"\"\r\n\r\nAt 12:41:17 UTC (after recovery was done), the WAL segment\n0000005000000B8D00000044 was archived.\r\n\r\nAfter that both replicas started failing to replay WAL's (logs from one of\nthe replicas):\r\n2019-08-01 12:37:58.436 UTC,,,285,,5d30e6c4.11d,15294,,2019-07-18 21:38:12\nUTC,,0,LOG,00000,\"recovery restart point at B8C/FF1FB760\",\"last completed\ntransaction was at log time 2019-08-01 12:37:58.161596+00\",,,,,,,,\"\"\r\n2019-08-01 12:38:01.971 UTC,,,285,,5d30e6c4.11d,15295,,2019-07-18 21:38:12\nUTC,,0,LOG,00000,\"restartpoint starting: xlog\",,,,,,,,,\"\"\r\n2019-08-01 12:40:58.589 UTC,,,2369,,5d30e6ca.941,2,,2019-07-18 21:38:18\nUTC,,0,FATAL,XX000,\"could not receive data from WAL stream: SSL SYSCALL\nerror: EOF detected\",,,,,,,,,\"\"\r\n2019-08-01 12:40:59.473 UTC,,,7967,,5d42dddb.1f1f,1,,2019-08-01 12:40:59\nUTC,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the\ndatabase system is in recovery mode\r\nFATAL: the database system is in recovery mode\",,,,,,,,,\"\"\r\n2019-08-01 12:41:04.311 UTC,,,8134,,5d42dde0.1fc6,1,,2019-08-01 12:41:04\nUTC,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the\ndatabase system is in recovery mode\r\nFATAL: the database system is in recovery mode\",,,,,,,,,\"\"\r\n2019-08-01 12:41:09.292 UTC,,,8301,,5d42dde5.206d,1,,2019-08-01 12:41:09\nUTC,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the\ndatabase system is in recovery mode\r\nFATAL: the 
database system is in recovery mode\",,,,,,,,,\"\"\r\n2019-08-01 12:41:14.296 UTC,,,8468,,5d42ddea.2114,1,,2019-08-01 12:41:14\nUTC,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the\ndatabase system is in recovery mode\r\nFATAL: the database system is in recovery mode\",,,,,,,,,\"\"\r\n2019-08-01 12:41:19.309 UTC,,,8637,,5d42ddef.21bd,1,,2019-08-01 12:41:19\nUTC,,0,LOG,00000,\"started streaming WAL from primary at B8D/45000000 on\ntimeline 80\",,,,,,,,,\"\"\r\n2019-08-01 12:41:19.351 UTC,,,57,,5d30e6c3.39,37,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"invalid contrecord length 3641 at\nB8D/44FFF6D0\",,,,,,,,,\"\"\r\n2019-08-01 12:41:19.359 UTC,,,8637,,5d42ddef.21bd,2,,2019-08-01 12:41:19\nUTC,,0,FATAL,57P01,\"terminating walreceiver process due to administrator\ncommand\",,,,,,,,,\"\"\r\n2019-08-01 12:41:19.837 UTC,,,57,,5d30e6c3.39,38,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"restored log file \"\"0000005000000B8D00000044\"\" from\narchive\",,,,,,,,,\"\"\r\n2019-08-01 12:41:19.866 UTC,,,57,,5d30e6c3.39,39,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"invalid record length at B8D/44FFF740: wanted 24, got\n0\",,,,,,,,,\"\"\r\n2019-08-01 12:41:19.866 UTC,,,57,,5d30e6c3.39,40,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"invalid record length at B8D/44FFF740: wanted 24, got\n0\",,,,,,,,,\"\"\r\n2019-08-01 12:41:24.841 UTC,,,57,,5d30e6c3.39,41,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"restored log file \"\"0000005000000B8D00000044\"\" from\narchive\",,,,,,,,,\"\"\r\n2019-08-01 12:41:24.989 UTC,,,57,,5d30e6c3.39,42,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"invalid record length at B8D/44FFF740: wanted 24, got\n0\",,,,,,,,,\"\"\r\n2019-08-01 12:41:24.989 UTC,,,57,,5d30e6c3.39,43,,2019-07-18 21:38:11\nUTC,1/0,0,LOG,00000,\"invalid record length at B8D/44FFF740: wanted 24, got\n0\",,,,,,,,,\"\"\r\n\r\nand so on.\r\n\r\nUnfortunately I can't compare files in the archive and on the primary,\nbecause it was recycled despite usage of replication 
slots.\npg_replication_slots view reports restart_lsn as B8D/451EB540 (the next\nsegment).\r\n\r\nBut I can run pg_waldump, and the last record in this file is\nCHECKPOINT_SHUTDOWN:\r\nrmgr: XLOG len (rec/tot): 106/ 106, tx: 0, lsn:\nB8D/44FFF6D0, prev B8D/44FFE570, desc: CHECKPOINT_SHUTDOWN redo\nB8D/44FFF6D0; tli 80; prev tli 80; fpw true; xid 0:42293547; oid 10741249;\nmulti 1; offset 0; oldest xid 549 in DB 1; oldest multi 1 in DB 1;\noldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown\r\npg_waldump: FATAL: error in WAL record at B8D/44FFF6D0: invalid record\nlength at B8D/44FFF740: wanted 24, got 0",
"msg_date": "Thu, 01 Aug 2019 13:52:52 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15938: Corrupted WAL segment after crash recovery"
},
{
"msg_contents": "Hello.\n\nAt Thu, 01 Aug 2019 13:52:52 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in <15938-8591df7e95064538@postgresql.org>\n> The following bug has been logged on the website:\n> \n> Bug reference: 15938\n> Logged by: Alexander Kukushkin\n> Email address: cyberdemn@gmail.com\n> PostgreSQL version: 10.9\n> Operating system: Ubuntu 18.04.2 LTS\n> Description: \n> \n> On one of our cluster one of the postgres backend processes was killed by\n> kernel oom.\n\nAlthough I don't think replication reconnection to\ncrash-recovered master is generally guaranteed, but this seems\ndifferent.\n\n> 2019-08-01 12:41:06.376 UTC,,,22697,,5d42dddb.58a9,4,,2019-08-01 12:40:59\n> UTC,,0,LOG,00000,\"redo done at B8D/44FFE570\",,,,,,,,,\"\"\n> 2019-08-01 12:41:06.376 UTC,,,22697,,5d42dddb.58a9,5,,2019-08-01 12:40:59\n> UTC,,0,LOG,00000,\"last completed transaction was at log time 2019-08-01\n> 12:40:54.782747+00\",,,,,,,,,\"\"\n..\n> Unfortunately I can't compare files in the archive and on the primary,\n> because it was recycled despite usage of replication slots.\n> pg_replication_slots view reports restart_lsn as B8D/451EB540 (the next\n> segment).\n\nWAL records since 44ffe6d0 (maybe) till 451eb540 are somehow\nignored during crash recovery of the master, or lost despite it\nshould have been fsynced out. The only thing I came up for this\nis the fsync problem but it is fixed in 10.7. 
But the following\nwiki description:\n\nhttps://wiki.postgresql.org/wiki/Fsync_Errors\n\n> Linux 4.13 and 4.15: fsync() only reports writeback errors that\n> occurred after you called open() so our schemes for closing and\n> opening files LRU-style and handing fsync() work off to the\n> checkpointer process can hide write-back errors; also buffers are\n> marked clean after errors so even if you opened the file before\n> the failure, retrying fsync() can falsely report success and the\n> modified buffer can be thrown away at any time due to memory\n> pressure.\n\nIf I read this correctly, after checkpointer failed to fsync\nwhile creating of a preallocate file (this is an ERROR, not a\nPANIC), other processes will never receive fsync error about the\nfile. This scenario is consistent with the fact that it seems\nthat the data loss starts from new segment's beginning (assuming\nthat the original 44ffe6d0 continues to the next segment).\n\nThoughts?\n\n\n> But I can run pg_waldump, and the last record in this file is\n> CHECKPOINT_SHUTDOWN:\n> rmgr: XLOG len (rec/tot): 106/ 106, tx: 0, lsn:\n> B8D/44FFF6D0, prev B8D/44FFE570, desc: CHECKPOINT_SHUTDOWN redo\n> B8D/44FFF6D0; tli 80; prev tli 80; fpw true; xid 0:42293547; oid 10741249;\n> multi 1; offset 0; oldest xid 549 in DB 1; oldest multi 1 in DB 1;\n> oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown\n> pg_waldump: FATAL: error in WAL record at B8D/44FFF6D0: invalid record\n> length at B8D/44FFF740: wanted 24, got 0\n\nThe shutdown record is written at the end of crash recovery of\nthe master, then replicated. As the result the subsequent bytes\nare inconsistent with the record.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 06 Aug 2019 17:11:40 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15938: Corrupted WAL segment after crash recovery"
}
] |
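The log excerpts in the thread above mix raw LSNs (e.g. B8D/44FFF740) with WAL segment file names (e.g. 0000005000000B8D00000044). As a rough illustration of how the two relate, here is a Python sketch of the mapping, assuming the default 16MB segment size; the function name is ours, not part of PostgreSQL.

```python
def lsn_to_walfile(lsn, timeline, seg_size=16 * 1024 * 1024):
    """Map an LSN string like 'B8D/44FFF740' to the 24-hex-digit name of
    the WAL segment file containing it (default 16MB segments)."""
    hi, lo = (int(part, 16) for part in lsn.split("/"))
    segno = lo // seg_size  # which segment within the 4GB 'hi' block
    return f"{timeline:08X}{hi:08X}{segno:08X}"
```

For the failing record at B8D/44FFF740 on timeline 80 this yields 0000005000000B8D00000044, matching the segment named in the log, while the slot's restart_lsn B8D/451EB540 falls in the next segment, as the report says.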
[
{
"msg_contents": "I pushed a commit that required a new pg_proc entry today. Had I not\nbeen involved with the work that became commit a6417078, I would\ndefinitely not have used an OID from the range reserved for devel\nsystem catalogs (8000 - 8999). As I understand it, this is now\nstandard practice.\n\nPerhaps unsurprisingly, other committers didn't get the memo, and\nhaven't been using the special reserved range since its introduction\nin March. I think that this could be avoided by simply making\nunused_oids print a reminder about the new practice.\n\nIs it within the discretion of committers to not use the reserved\nrange? It seems preferable for everybody to consistently use the\nreserved OID range.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 1 Aug 2019 13:36:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "The unused_oids script should have a reminder to use the 8000-8999\n OID range"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 10:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I pushed a commit that required a new pg_proc entry today. Had I not\n> been involved with the work that became commit a6417078, I would\n> definitely not have used an OID from the range reserved for devel\n> system catalogs (8000 - 8999). As I understand it, this is now\n> standard practice.\n>\n> Perhaps unsurprisingly, other committers didn't get the memo, and\n> haven't been using the special reserved range since its introduction\n> in March. I think that this could be avoided by simply making\n> unused_oids print a reminder about the new practice.\n\n\nHuge +1. Last time I had to pick a new oid it took me ages to find\nthe correct range for that. The script could even suggest a random\nfree oid in the range, for extra laziness as you also suggested in the\nalmost exact same mail at\nCAH2-WzmCzNMebiN4-8p=ON92m0Rz0ybxNEKrO_2J+9DqWfWP=A@mail.gmail.com :)\n\n\n",
"msg_date": "Thu, 1 Aug 2019 22:57:36 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 1:57 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Huge +1. Last time I had to pick a new oid it took me ages to find\n> the correct range for that. The script could even suggest a random\n> free oid in the range, for extra laziness as you also suggested in the\n> almost exact same mail at\n> CAH2-WzmCzNMebiN4-8p=ON92m0Rz0ybxNEKrO_2J+9DqWfWP=A@mail.gmail.com :)\n\nSeems like I should propose a patch this time around. I don't do Perl,\nbut I suppose I could manage something as trivial as this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 1 Aug 2019 18:59:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Is it within the discretion of committers to not use the reserved\n> range? It seems preferable for everybody to consistently use the\n> reserved OID range.\n\nI think it's up to the committer in the end. But if someone submits\na patch using high OIDs, I for one would not change that (unless it\nhad a collision through bad luck).\n\nI agree that adjusting the unused_oids script would be an appropriate\nthing to do now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 22:21:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 06:59:06PM -0700, Peter Geoghegan wrote:\n> Seems like I should propose a patch this time around. I don't do Perl,\n> but I suppose I could manage something as trivial as this.\n\nWell, that new project policy is not that well-advertised then, see\nfor example the recent 5925e55, c085e1c and 313f87a. So having some\nkind of safety net would be nice.\n--\nMichael",
"msg_date": "Fri, 2 Aug 2019 13:20:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 6:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Aug 01, 2019 at 06:59:06PM -0700, Peter Geoghegan wrote:\n> > Seems like I should propose a patch this time around. I don't do Perl,\n> > but I suppose I could manage something as trivial as this.\n>\n> Well, that new project policy is not that well-advertised then, see\n> for example the recent 5925e55, c085e1c and 313f87a. So having some\n> kind of safety net would be nice.\n\nTrivial patch for that attached. The output is now like:\n\n[...]\nUsing an oid in the 8000-9999 range is recommended.\nFor instance: 9427\n\n(checking that the suggested random oid is not used yet.)",
"msg_date": "Fri, 2 Aug 2019 10:42:42 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 1:42 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Trivial patch for that attached.\n\nThanks!\n\n> The output is now like:\n>\n> [...]\n> Using an oid in the 8000-9999 range is recommended.\n> For instance: 9427\n>\n> (checking that the suggested random oid is not used yet.)\n\nI've taken your patch, and changed the wording a bit. I think that\nit's worth being a bit more explicit. The attached revision produces\noutput that looks like this:\n\nPatches should use a more-or-less consecutive range of OIDs.\nBest practice is to make a random choice in the range 8000-9999.\nSuggested random unused OID: 9099\n\nI would like to push this patch shortly. How do people feel about this\nwording? (It's based on the documentation added by commit a6417078.)\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 2 Aug 2019 11:12:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "Le ven. 2 août 2019 à 20:12, Peter Geoghegan <pg@bowt.ie> a écrit :\n\n> On Fri, Aug 2, 2019 at 1:42 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Trivial patch for that attached.\n>\n> Thanks!\n>\n> > The output is now like:\n> >\n> > [...]\n> > Using an oid in the 8000-9999 range is recommended.\n> > For instance: 9427\n> >\n> > (checking that the suggested random oid is not used yet.)\n>\n> I've taken your patch, and changed the wording a bit. I think that\n> it's worth being a bit more explicit. The attached revision produces\n> output that looks like this:\n>\n> Patches should use a more-or-less consecutive range of OIDs.\n> Best practice is to make a random choice in the range 8000-9999.\n> Suggested random unused OID: 9099\n>\n> I would like to push this patch shortly. How do people feel about this\n> wording? (It's based on the documentation added by commit a6417078.)\n>\n\nI'm fine with it!",
"msg_date": "Fri, 2 Aug 2019 21:51:12 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I've taken your patch, and changed the wording a bit. I think that\n> it's worth being a bit more explicit. The attached revision produces\n> output that looks like this:\n\n> Patches should use a more-or-less consecutive range of OIDs.\n> Best practice is to make a random choice in the range 8000-9999.\n> Suggested random unused OID: 9099\n\nMaybe s/make a/start with/ ?\n\nAlso, once people start doing this, it'd be unfriendly to suggest\n9099 if 9100 is already committed. There should be some attention\nto *how many* consecutive free OIDs will be available if one starts\nat the suggestion. You could perhaps print \"9099 (42 OIDs available\nstarting here)\", and if the user doesn't like the amount of headroom\nin that, they could just run it again for a different suggestion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 16:49:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 1:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe s/make a/start with/ ?\n\n> Also, once people start doing this, it'd be unfriendly to suggest\n> 9099 if 9100 is already committed. There should be some attention\n> to *how many* consecutive free OIDs will be available if one starts\n> at the suggestion.\n\nHow about this wording?:\n\nPatches should use a more-or-less consecutive range of OIDs.\nBest practice is to start with a random choice in the range 8000-9999.\nSuggested random unused OID: 9591 (409 consecutive OID(s) available\nstarting here)\n\nAttached is v3, which implements your suggestion, generating output\nlike the above. I haven't written a line of Perl in my life prior to\ntoday, so basic code review would be helpful.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 2 Aug 2019 14:50:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, 2 Aug 2019 at 16:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > I've taken your patch, and changed the wording a bit. I think that\n> > it's worth being a bit more explicit. The attached revision produces\n> > output that looks like this:\n>\n> > Patches should use a more-or-less consecutive range of OIDs.\n> > Best practice is to make a random choice in the range 8000-9999.\n> > Suggested random unused OID: 9099\n>\n\nNoob question here: why not start with the next unused OID in the range,\nand on the other hand reserve the range for sequentially-assigned values?",
"msg_date": "Fri, 2 Aug 2019 17:52:31 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 2:52 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n> Noob question here: why not start with the next unused OID in the range, and on the other hand reserve the range for sequentially-assigned values?\n\nThe general idea is to avoid OID collisions while a patch is under\ndevelopment. Choosing a value that aligns nicely with\nalready-allocated OIDs makes these collisions much more likely, which\ncommit a6417078 addressed back in March. We want a random choice among\npatches, but OIDs used within a patch should be consecutive.\n\n(There is still some chance of a collision, but you have to be fairly\nunlucky to have that happen under the system introduced by commit\na6417078.)\n\nIt's probably the case that most patches that create a new pg_proc\nentry only create one. The question of consecutive OIDs only comes up\nwith a fairly small number of patches.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Aug 2019 15:00:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
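Peter's point above about minimizing collisions among concurrently developed patches can be made quantitative with a birthday-style estimate. The sketch below is ours and simplifies heavily: it assumes each patch picks a single uniformly random OID from the 2000-slot range, ignoring consecutive blocks.

```python
def collision_probability(num_patches, range_size=2000):
    """Probability that at least two patches, each picking one uniform
    random OID out of range_size slots, pick the same OID."""
    p_all_distinct = 1.0
    for i in range(num_patches):
        p_all_distinct *= (range_size - i) / range_size
    return 1.0 - p_all_distinct
```

Under these assumptions, for the couple-dozen OID-consuming patches per cycle that Tom estimates later in the thread, the chance of any collision across the whole cycle is on the order of ten percent, and the chance for any given pair of patches is tiny; a collision that does happen is simply caught and renumbered at commit time.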
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Attached is v3, which implements your suggestion, generating output\n> like the above. I haven't written a line of Perl in my life prior to\n> today, so basic code review would be helpful.\n\nThe \"if ($oid > $prev_oid + 2)\" test seems unnecessary.\nIt's certainly wrong to keep iterating beyond the first\noid that's > $suggestion.\n\nPerhaps you meant to go back and try a different suggestion\nif there's not at least 2 free OIDs? But then there needs\nto be an outer loop around both of these loops.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 18:19:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 3:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The \"if ($oid > $prev_oid + 2)\" test seems unnecessary.\n> It's certainly wrong to keep iterating beyond the first\n> oid that's > $suggestion.\n\nSorry. That was just carelessness on my part. (Being the world's worst\nPerl programmer is no excuse.)\n\nHow about the attached? I've simply removed the \"if ($oid > $prev_oid\n+ 2)\" test.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 2 Aug 2019 15:42:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> How about the attached? I've simply removed the \"if ($oid > $prev_oid\n> + 2)\" test.\n\nBetter ... but I'm the world's second worst Perl programmer,\nso I have little to say about whether it's idiomatic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 18:52:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 3:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Better ... but I'm the world's second worst Perl programmer,\n> so I have little to say about whether it's idiomatic.\n\nPerhaps Michael can weigh in here? I'd rather hear a second opinion on\nv4 of the patch before proceeding.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Aug 2019 17:40:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Sat, Aug 3, 2019 at 2:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Aug 2, 2019 at 3:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Better ... but I'm the world's second worst Perl programmer,\n> > so I have little to say about whether it's idiomatic.\n>\n> Perhaps Michael can weigh in here? I'd rather hear a second opinion on\n> v4 of the patch before proceeding.\n\nI probably write less perl than Michael, but it looks just fine to me.\n\n\n",
"msg_date": "Sat, 3 Aug 2019 11:40:24 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Sat, Aug 03, 2019 at 11:40:24AM +0200, Julien Rouhaud wrote:\n> I probably write less perl than Michael, but it looks just fine to me.\n\npgperltidy complains about the indentation; the attached diff (applied\non top of v4) fixes it.\n\n+printf \"Patches should use a more-or-less consecutive range of OIDs.\\n\";\n\"Patches should try to use a consecutive range of OIDs\"?\n\nWhy choose a random position within [8000,9999]? This leads to the\nfollowing messages for example with multiple runs, which is confusing:\nSuggested random unused OID: 9473 (527 consecutive OID(s) available\nSuggested random unused OID: 8159 (31 consecutive OID(s) available\nSuggested random unused OID: 9491 (509 consecutive OID(s) available\n\nWouldn't it be better to choose the lowest position in the development\nrange, and then adapt the suggestion based on that? We could\nrecommend the range if there are at least 10 OIDs available in the\nrange from the lowest position, and there are few patches eating more\nthan 5-10 OIDs at once.\n--\nMichael",
"msg_date": "Sun, 4 Aug 2019 11:48:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Sat, Aug 3, 2019 at 7:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Why choosing a random position within [8000,9999]? This leads to the\n> following messages for example with multiple runs, which is confusing:\n> Suggested random unused OID: 9473 (527 consecutive OID(s) available\n> Suggested random unused OID: 8159 (31 consecutive OID(s) available\n> Suggested random unused OID: 9491 (509 consecutive OID(s) available\n>\n> Wouldn't it be better to choose the lowest position in the development\n> range, and then adapt the suggestion based on that?\n\nNo, it wouldn't. The entire point of suggesting a totally random OID\nis that it minimizes the probability of a collision among concurrently\ndeveloped patches, per the policy established by commit a6417078 --\nwhat you suggest would defeat the very purpose of this patch. In fact,\nhaving everybody see the same suggestion from unused_oids would\n*maximize* the number of OID collisions.\n\n> We could\n> recommend the range if there are at least 10 OIDs available in the\n> range from the lowest position, and there are few patches eating more\n> than 5-10 OIDs at once.\n\nThat sounds like an over-engineered solution to a problem that doesn't exist.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 3 Aug 2019 20:25:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 1:28 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I'm fine with it!\n\nPushed a version with similar wording just now.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Aug 2019 11:51:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 8:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Pushed a version with similar wording just now.\n\nThanks!\n\n\n",
"msg_date": "Mon, 5 Aug 2019 21:00:26 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 09:00:26PM +0200, Julien Rouhaud wrote:\n> Thanks!\n\nWhat you have committed does this:\n+do\n+{\n+ $suggestion = int(8000 + rand(2000));\n+} while (grep(/^$suggestion$/, @{$oids}));\nSo it would be possible to get 9998-9999 as a suggestion. In that\ncase, one can basically finish with this message:\nSuggested random unused OID: 9999 (1 consecutive OID(s) available\nstarting here)\n\nWouldn't it be better to keep some room at the end of the allowed\narray? Or at least avoid suggesting positions with fewer than\n3-5 consecutive OIDs available.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 12:47:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 8:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> So it would be possible to get 9998-9999 as suggestion. In which\n> case, one can basically finish with this message:\n> Suggested random unused OID: 9999 (1 consecutive OID(s) available\n> starting here)\n\nI strongly doubt that this will ever be a real problem. Just try again.\n\n> Wouldn't it be better to keep some room at the end of the allowed\n> array? Or at least avoid suggesting ranges where there is less than\n> 3-5 OIDs available consecutively.\n\nNot in my view. There is value in having simple, predictable behavior.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Aug 2019 21:09:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Aug 5, 2019 at 8:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Wouldn't it be better to keep some room at the end of the allowed\n>> array? Or at least avoid suggesting ranges where there is less than\n>> 3-5 OIDs available consecutively.\n\n> Not in my view. There is value in having simple, predictable behavior.\n\nThere was some discussion of that upthread, and Peter argued that many\npatches only need one OID anyway so why try harder. I'm not totally\nsure I buy that --- my sense is that even simple patches tend to add\nseveral related functions not just one. But as long as the script\ntells you how many OIDs are available, what's the problem? Just run\nit again if you want a different suggestion, or make your own choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 01:41:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 10:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There was some discussion of that upthread, and Peter argued that many\n> patches only need one OID anyway so why try harder. I'm not totally\n> sure I buy that --- my sense is that even simple patches tend to add\n> several related functions not just one.\n\nThat has been my experience, but it turns out that that was colored by\nthe areas that I work in. I reviewed the history of pg_proc.dat today,\nand found that adding multiple entries at a time is more common that I\nthought it was.\n\n> But as long as the script\n> tells you how many OIDs are available, what's the problem? Just run\n> it again if you want a different suggestion, or make your own choice.\n\nRight. Besides, adding something along the lines Michael described\nnecessitates fixing the problems that it creates. We'll run out of\nblocks of 5 contiguous OIDs (or whatever) far sooner than we'll run\nout of single OIDs. Now we have to worry about doing a second\n(actually a third) pass over the OIDs as a fallback when that happens.\nAnd so on.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Aug 2019 22:58:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Aug 5, 2019 at 10:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> But as long as the script\n>> tells you how many OIDs are available, what's the problem? Just run\n>> it again if you want a different suggestion, or make your own choice.\n\n> Right. Besides, adding something along the lines Michael described\n> necessitates fixing the problems that it creates. We'll run out of\n> blocks of 5 contiguous OIDs (or whatever) far sooner than we'll run\n> out of single OIDs.\n\nWell, if we ever get even close to that situation, this whole approach\nisn't really gonna work. My estimate is that in any one development\ncycle we'll commit order-of-a-couple-dozen patches that consume new OIDs.\nIn that context you'd be just unlucky to get an OID suggestion that\ndoesn't have dozens to hundreds of free OIDs after it. (If the rate\nof manually-assigned-OID consumption were any faster than that, we'd\nhave filled up the 1-10K space long since.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 02:13:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 11:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Right. Besides, adding something along the lines Michael described\n> > necessitates fixing the problems that it creates. We'll run out of\n> > blocks of 5 contiguous OIDs (or whatever) far sooner than we'll run\n> > out of single OIDs.\n>\n> Well, if we ever get even close to that situation, this whole approach\n> isn't really gonna work.\n\nMy point was that I don't see any reason to draw the line at or after\nwhat Michael suggested, but before handling the exhaustion of\navailable blocks of 5 contiguous OIDs in the range 8000-9999. It's\njust busy work.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Aug 2019 23:39:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
},
{
"msg_contents": "On 2019-Aug-06, Tom Lane wrote:\n\n> My estimate is that in any one development\n> cycle we'll commit order-of-a-couple-dozen patches that consume new OIDs.\n> In that context you'd be just unlucky to get an OID suggestion that\n> doesn't have dozens to hundreds of free OIDs after it. (If the rate\n> of manually-assigned-OID consumption were any faster than that, we'd\n> have filled up the 1-10K space long since.)\n\nIf we ever get to a point where this is a real problem in one cycle, we\ncan just run the renumber_oids script before the end of the cycle.\n\nSo IMO what we have now is more than sufficient for the time being.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 6 Aug 2019 12:35:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: The unused_oids script should have a reminder to use the\n 8000-8999 OID range"
}
] |
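The behavior the committed unused_oids change converged on in the thread above (a random unused starting OID in 8000-9999, plus Tom's count of how many consecutive OIDs are free from there) can be sketched as follows. This is a Python paraphrase of the Perl logic, not the script itself; the function name and the seed parameter are illustrative only.

```python
import random

def suggest_oid(used_oids, lo=8000, hi=9999, seed=None):
    """Pick a random unused OID in [lo, hi] and report how many
    consecutive OIDs are free starting at the suggestion."""
    rng = random.Random(seed)
    used = set(used_oids)
    suggestion = lo + rng.randrange(hi - lo + 1)
    while suggestion in used:  # redraw until we hit an unused OID
        suggestion = lo + rng.randrange(hi - lo + 1)
    free = 0
    oid = suggestion
    while oid <= hi and oid not in used:
        free += 1
        oid += 1
    return suggestion, free
```

Running it twice gives different suggestions by design, which is exactly the property Michael found confusing and Peter defended: randomizing the starting point across patches is what keeps concurrently developed patches from colliding.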
[
{
"msg_contents": "Hi,\n\nThe Release Management Team is pleased to announce that the release\ndate for PostgreSQL 12 Beta 3 is set to be 2019-08-08 (wrapping [1]\nthe release 2019-08-05), together with the next set of planned minor\nreleases. \n\nWe’re excited to make the third beta for this latest major release of\nPostgreSQL available for testing, and we welcome all feedback.\n\nPlease let us know if you have any questions.\n\n[1]: https://wiki.postgresql.org/wiki/Release_process\n\nRegards,\n--\nMichael,\non behalf of the PG 12 RMT",
"msg_date": "Fri, 2 Aug 2019 13:03:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 Beta 3 Release: 2019-08-08"
}
] |
[
{
"msg_contents": "Hello,\n\nI currently have the problem that a simple prepared statement for a \nquery like\n\n select * from vw_report_invoice where id = $1\n\nresults in 33MB memory consumption on my side. It is a query that does \nabout >20 joins over partially wide tables, but only a very small subset \nof columns is really needed. I already argued if PreparedStatements \ncontain all the metadata about all used tables and it turns out it is \neven worse (because that metadata is even copied into mem multiple times).\n\nThe reason for the absurd memory consumption are RangeTableEntrys which \nare created for every touched table and for every join set done. I \nprinted the query lists created after preparation and found multiple \nRangeTableEntrys containing >4000 columns, including the names of the \ncolumns as well as a list of all columns types, even when only a very \nsmall subset is really required.\n\nThe minimum memory consumption is 46 bytes + the name of the column, \nguessing it will be 64+32? bytes when palloced, resulting in 400k memory \nfor just one of the many RTEs in the large query where maybe 50 columns \nare really used.\n\nOne can test this easily. 
When I create two simple tables:\n\nCREATE TABLE invoice (\n id serial PRIMARY KEY,\n number varchar(10),\n amount float8,\n customer_id int4\n);\n\nCREATE TABLE invoiceitem (\n id serial PRIMARY KEY,\n invoice_id int4,\n position int4,\n quantity float8,\n priceperunit float8,\n amount float8,\n description text\n);\n\nALTER TABLE invoiceitem ADD CONSTRAINT fk_invoiceitem_invoice_id\n FOREIGN KEY (invoice_id) REFERENCES invoice(id);\n\nAnd now I preparey a simple join over these tables:\n\nPREPARE invoicequerytest AS\nSELECT inv.id, inv.number, item.id, item.description\n FROM invoice inv\n LEFT JOIN invoiceitem item ON item.invoice_id = inv.id\n WHERE inv.id = $1;\n\nThe pprint-ed RTE for the join alone is this:\n\n{RTE\n :alias <>\n :eref\n {ALIAS\n :aliasname unnamed_join\n :colnames (\"id\" \"number\" \"amount\" \"customer_id\" \"id\" \n\"invoice_id\" \"po\n sition\" \"quantity\" \"priceperunit\" \"amount\" \"description\")\n }\n :rtekind 2\n :jointype 1\n :joinaliasvars (\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n :location -1\n }\n {VAR\n :varno 1\n :varattno 2\n :vartype 1043\n :vartypmod 14\n :varcollid 100\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n :location -1\n }\n {VAR\n :varno 1\n :varattno 3\n :vartype 701\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n :location -1\n }\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n :location -1\n }\n {VAR\n :varno 2\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 2\n :varoattno 1\n :location -1\n }\n {VAR\n :varno 2\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 2\n :varoattno 2\n :location -1\n }\n {VAR\n :varno 2\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 2\n :varoattno 3\n :location -1\n }\n 
{VAR\n :varno 2\n :varattno 4\n :vartype 701\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 2\n :varoattno 4\n :location -1\n }\n {VAR\n :varno 2\n :varattno 5\n :vartype 701\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 2\n :varoattno 5\n :location -1\n }\n {VAR\n :varno 2\n :varattno 6\n :vartype 701\n :vartypmod -1\n :varcollid 0\n :varlevelsup 0\n :varnoold 2\n :varoattno 6\n :location -1\n }\n {VAR\n :varno 2\n :varattno 7\n :vartype 25\n :vartypmod -1\n :varcollid 100\n :varlevelsup 0\n :varnoold 2\n :varoattno 7\n :location -1\n }\n )\n :lateral false\n :inh false\n :inFromCl true\n :requiredPerms 0\n :checkAsUser 0\n :selectedCols (b)\n :insertedCols (b)\n :updatedCols (b)\n :securityQuals <>\n }\n\nIt contains every col from both tables.\n\nUseful cols: \"id\" \"number\" \"id\" \"invoice_id\" \"description\"\n\nUseless because completely unreferenced cols: \"amount\" \"customer_id\" \n\"position\" \"quantity\" \"priceperunit\" \"amount\"\n\nI believe one could easily drop the unreferenced names from the RTE as \nwell as the Var nodes, which would cut mem usage drastically.\n\n*Final questions:* Is there a reason we don't just null the unused \nvalues from the RTEs? I would love to implement such a cleanup step. Or \nif null is not possible, just replace stuff with a simpler NOVAR node \nand replace names with empty strings?\n\nI believe this would reduce mem usage for PreparedStatements by >90% at \nleast here.\n\nRegards,\nDaniel Migowski",
"msg_date": "Fri, 2 Aug 2019 09:01:26 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Proposal: Clean up RangeTblEntry nodes after query preparation"
}
] |
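The back-of-the-envelope numbers in the thread above can be checked with a short sketch. The per-column byte figure below is the author's own guess from the email (46 bytes plus the name, roughly 64+32 bytes once palloc'd), not a measured value, so this is only a plausibility check of the "400k per RTE" and ">90% reduction" claims:

```python
# Rough model of join-RTE column metadata cost, using the constants
# guessed in the email above (assumptions, not measured values).
BYTES_PER_COLUMN = 64 + 32  # ~palloc'd Var node + column name chunk

def estimated_rte_bytes(n_columns: int) -> int:
    """Estimated memory one join RTE spends on column metadata."""
    return n_columns * BYTES_PER_COLUMN

full = estimated_rte_bytes(4000)   # an RTE with >4000 columns
trimmed = estimated_rte_bytes(50)  # only the ~50 referenced columns
savings = 1 - trimmed / full

print(full, trimmed, savings)  # 384000 4800 0.9875
```

Under this model a >4000-column RTE costs about 384 kB — in the ballpark of the "400k memory for just one of the many RTEs" figure in the email — and keeping only the referenced columns would indeed save well over 90%.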
[
{
"msg_contents": "Hello,\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=elver&dt=2019-07-24%2003%3A22%3A17\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-08-02%2007%3A17%3A25\n\nI wondered if this might be like the recently fixed problem with slapd\nnot being ready to handle requests yet, since we start up krb5kdc\nfirst and then don't do anything explicit to wait for it, but it\ndoesn't look like an obvious failure to reach it. It looks like test\n3 on elver connected successfully but didn't like the answer it got\nfor this query:\n\nSELECT gss_authenticated AND encrypted from pg_stat_gssapi where pid =\npg_backend_pid();\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 21:32:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "A couple of random BF failures in kerberosCheck"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=elver&dt=2019-07-24%2003%3A22%3A17\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-08-02%2007%3A17%3A25\n\n> I wondered if this might be like the recently fixed problem with slapd\n> not being ready to handle requests yet, since we start up krb5kdc\n> first and then don't do anything explicit to wait for it, but it\n> doesn't look like an obvious failure to reach it.\n\nI spent a bit of time trying to reproduce these failures, using a\nFedora 29 install that should pretty nearly match what crake is\nrunning. I didn't yet match the shown failures, but I did get this\nafter a bunch of attempts:\n\n# Running: /usr/sbin/krb5kdc -P /home/tgl/pgsql/src/test/kerberos/tmp_check/krb5kdc.pid\nBail out! system /usr/sbin/krb5kdc failed\n\nLooking into the tmp_check/krb5kdc.log file finds\n\nAug 03 16:04:01 mini12.sss.pgh.pa.us krb5kdc[14340](info): setting up network...\nkrb5kdc: Address already in use - Cannot bind server socket on 127.0.0.1.55324\nAug 03 16:04:01 mini12.sss.pgh.pa.us krb5kdc[14340](Error): Failed setting up a TCP socket (for 127.0.0.1.55324)\nkrb5kdc: Address already in use - Error setting up network\n\nSo this leads to two points:\n\n* kerberos/t/001_auth.pl just blithely assumes that it can pick\nany random port above 48K and that's guaranteed to be free.\nMaybe we should split out the code in get_new_node for finding\na free TCP port, so we can call it here?\n\n* AFAICS, the only provision for shutting down krb5kdc at the end of\nthe test run is\n\nEND\n{\n\tkill 'INT', `cat $kdc_pidfile` if -f $kdc_pidfile;\n}\n\nI wonder how reliable that is, especially in contexts where the calling\nscript might do \"rm -rf tmp_check\" shortly afterwards. Maybe it'd be\nbetter to try to shut down krb5kdc explicitly before we exit the test\nscript. 
I'd suggest waiting for krb5kdc to remove its pidfile, except\nit seems not to do so :-(\n\nDespite my suspicions about the shutdown provisions, I found that this is\nsomewhat reproducible and the problematic port is *not* the one assigned\nin the previous iteration of 001_auth.pl. However, I notice that after\nrunning this test in a loop for awhile, there are an awful lot of local\nloopback connections in TIME_WAIT state. I hypothesize that the failures\ncorrespond to cases where we try to re-use a port number that some\nprevious test iteration used, possibly on the client side not the server\nside of GSS. I wonder whether we are doing something that keeps those\nGSS query connections from being closed more cleanly/rapidly. (The\nsockets do go away after a minute or so, but why are they in TIME_WAIT\nat all?)\n\nNone of these points seem to explain the buildfarm failures, though,\nespecially not elver's where only one connection attempt failed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2019 17:04:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A couple of random BF failures in kerberosCheck"
},
{
"msg_contents": "I wrote:\n> * kerberos/t/001_auth.pl just blithely assumes that it can pick\n> any random port above 48K and that's guaranteed to be free.\n> Maybe we should split out the code in get_new_node for finding\n> a free TCP port, so we can call it here?\n\nI've confirmed that the reason it's failing on my machine is exactly\nthat krb5kdc tries to bind to a socket that is still in TIME_WAIT state.\nAlso, it looks like the socket is typically one that was used by the\nGSSAPI client side (no surprise, the test leaves a lot more of those\nthan the one server socket), so we'd have no record of it even if we\nwere somehow saving state from prior runs.\n\nSo I propose the attached patch, which seems to fix this for me.\n\nThe particular case I'm looking at (running these tests in a tight\nloop) is of course not that interesting, but I argue that it's just\nincreasing the odds of failure enough that I can isolate the cause.\nA buildfarm animal running both kerberos and ldap tests is almost\ncertainly at risk of such a failure with low probability.\n\n(Still don't know what actually happened in those two buildfarm\nfailures, though.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 03 Aug 2019 18:42:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A couple of random BF failures in kerberosCheck"
},
{
"msg_contents": "\nOn 8/3/19 6:42 PM, Tom Lane wrote:\n> I wrote:\n>> * kerberos/t/001_auth.pl just blithely assumes that it can pick\n>> any random port above 48K and that's guaranteed to be free.\n>> Maybe we should split out the code in get_new_node for finding\n>> a free TCP port, so we can call it here?\n> I've confirmed that the reason it's failing on my machine is exactly\n> that krb5kdc tries to bind to a socket that is still in TIME_WAIT state.\n> Also, it looks like the socket is typically one that was used by the\n> GSSAPI client side (no surprise, the test leaves a lot more of those\n> than the one server socket), so we'd have no record of it even if we\n> were somehow saving state from prior runs.\n>\n> So I propose the attached patch, which seems to fix this for me.\n>\n> The particular case I'm looking at (running these tests in a tight\n> loop) is of course not that interesting, but I argue that it's just\n> increasing the odds of failure enough that I can isolate the cause.\n> A buildfarm animal running both kerberos and ldap tests is almost\n> certainly at risk of such a failure with low probability.\n>\n> (Still don't know what actually happened in those two buildfarm\n> failures, though.)\n>\n> \t\t\t\n\n\nLooks good. A couple of minor nits:\n\n\n. since we're exporting the name there's no need to document it as a\nclass method. I'd remove the \"PostgresNode->\" from the couple of places\nyou have it in the docco. You're not actually calling it that way\nanywhere, and indeed doing so ends up passing 'PostgresNode' as a\nuseless parameter to the subroutine. This is different from calling it\nwith a qualified name (PostgresNode::get_free_port()).\n\n. in the inner loop we should probably exit the loop if we set found to\n0. There's no point testing other addresses in that case. 
Something like\n\"last unless found;\" would do the trick.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 4 Aug 2019 08:24:25 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: A couple of random BF failures in kerberosCheck"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 8/3/19 6:42 PM, Tom Lane wrote:\n>> So I propose the attached patch, which seems to fix this for me.\n\n> Looks good. A couple of minor nits:\n\nWill fix, thanks for the review!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2019 11:59:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A couple of random BF failures in kerberosCheck"
}
] |
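The fix discussed above replaces kerberos/t/001_auth.pl's "pick a random port above 48K" approach with actually probing candidate ports for bindability (the logic split out of PostgresNode). A rough Python sketch of that idea, assuming a single loopback address and the range 49152-65535 — the real patch is Perl and probes several addresses:

```python
import socket

def get_free_port(low=49152, high=65535, addresses=("127.0.0.1",)):
    """Return the first port that bind() accepts on every given address.

    Binding without SO_REUSEADDR means a port still in TIME_WAIT fails
    the probe -- exactly the state that made krb5kdc's startup fail.
    Stop probing the remaining addresses as soon as one fails
    (Andrew's "last unless found" nit).
    """
    for port in range(low, high + 1):
        found = True
        for addr in addresses:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.bind((addr, port))
            except OSError:
                found = False
            finally:
                s.close()
            if not found:
                break  # no point testing the other addresses
        if found:
            return port
    raise RuntimeError("no free port found in range")

port = get_free_port()
```

As with the real patch, a race remains: the probe releases the port again, so another process could grab it before krb5kdc binds it. The probe only removes the TIME_WAIT failure mode.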
[
{
"msg_contents": "Hi,\n\nThere have been five failures on three animals like this, over the\npast couple of months:\n\n step s6a7: LOCK TABLE a7; <waiting ...>\n step s7a8: LOCK TABLE a8; <waiting ...>\n step s8a1: LOCK TABLE a1; <waiting ...>\n-step s8a1: <... completed>\n step s7a8: <... completed>\n-error in steps s8a1 s7a8: ERROR: deadlock detected\n+step s8a1: <... completed>\n+ERROR: deadlock detected\n step s8c: COMMIT;\n step s7c: COMMIT;\n step s6a7: <... completed>\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-07-18%2021:57:59\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-07-10%2005:59:16\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2019-07-08%2015:02:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-06-23%2004:17:09\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-06-12%2021:46:24\n\nBefore that there were some like that a couple of years back:\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-09%2021:58:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-08%2021:58:04\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-08%2005:19:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-07%2000:23:39\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-05%2018:58:04\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 3 Aug 2019 00:25:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Recent failures in IsolationCheck deadlock-hard"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> There have been five failures on three animals like this, over the\n> past couple of months:\n\nAlso worth noting is that anole failed its first try at the new\ndeadlock-parallel isolation test:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-01%2015%3A48%3A16\n\nWhat that looks like is the queries got stuck and eventually\nisolationtester gave up and canceled the test. So I'm suspicious\nthat there's a second bug in the parallel deadlock detection code.\n\nPossibly relevant factoids: all three of the animals in question\nrun HEAD with force_parallel_mode = regress, and there's reason\nto think that their timing behavior could be different from other\nanimals (anole and gharial run on HPUX, while hyrax uses\nCLOBBER_CACHE_ALWAYS).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 10:11:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recent failures in IsolationCheck deadlock-hard"
},
{
"msg_contents": "On Sat, Aug 3, 2019 at 2:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > There have been five failures on three animals like this, over the\n> > past couple of months:\n>\n> Also worth noting is that anole failed its first try at the new\n> deadlock-parallel isolation test:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-01%2015%3A48%3A16\n\nAnd friarbird (also CLOBBER_CACHE_ALWAYS) fails every time.\n\n animal | snapshot | branch | commit | result |\nfail_stage | fail_tests\n-----------+---------------------+--------+---------+---------+----------------+---------------------\n lousyjack | 2019-08-05 11:33:02 | HEAD | a76cfba | FAILURE |\nIsolationCheck | {deadlock-parallel}\n gharial | 2019-08-05 10:30:37 | HEAD | a76cfba | FAILURE |\nIsolationCheck | {deadlock-parallel}\n friarbird | 2019-08-05 05:20:01 | HEAD | 8548ddc | FAILURE |\nIsolationCheck | {deadlock-parallel}\n friarbird | 2019-08-04 05:20:02 | HEAD | 69edf4f | FAILURE |\nIsolationCheck | {deadlock-parallel}\n hyrax | 2019-08-03 12:20:57 | HEAD | 2abd7ae | FAILURE |\nIsolationCheck | {deadlock-parallel}\n friarbird | 2019-08-03 05:20:01 | HEAD | 2abd7ae | FAILURE |\nIsolationCheck | {deadlock-parallel}\n friarbird | 2019-08-02 05:20:00 | HEAD | a9f301d | FAILURE |\nIsolationCheck | {deadlock-parallel}\n anole | 2019-08-01 15:48:16 | HEAD | da9456d | FAILURE |\nIsolationCheck | {deadlock-parallel}\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2019-08-05%2011:33:02\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-08-05%2010:30:37\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2019-08-05%2005:20:01\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2019-08-04%2005:20:02\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2019-08-03%2012:20:57\n 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2019-08-03%2005:20:01\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2019-08-02%2005:20:00\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-01%2015:48:16\n\n 1\n step d2a1: <... completed>\n-sum\n-\n-10000\n+error in steps d1c e1l d2a1: ERROR: canceling statement due to user request\n step e1c: COMMIT;\n-step d2c: COMMIT;\n step e2l: <... completed>\n lock_excl\n\n 1\n+step d2c: COMMIT;\n step e2c: COMMIT;\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Aug 2019 18:06:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Recent failures in IsolationCheck deadlock-hard"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Aug 3, 2019 at 2:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also worth noting is that anole failed its first try at the new\n>> deadlock-parallel isolation test:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-01%2015%3A48%3A16\n\n> And friarbird (also CLOBBER_CACHE_ALWAYS) fails every time.\n\nYeah, there have been half a dozen failures since deadlock-parallel\nwent in, mostly on critters that are slowed by CLOBBER_CACHE_ALWAYS\nor valgrind. I've tried repeatedly to reproduce that here, without\nsuccess :-(. It's unclear whether the failures represent a real\ncode bug or just a problem in the test case, so I don't really want\nto speculate about fixes till I can reproduce it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 02:18:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recent failures in IsolationCheck deadlock-hard"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 6:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Sat, Aug 3, 2019 at 2:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Also worth noting is that anole failed its first try at the new\n> >> deadlock-parallel isolation test:\n> >> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-01%2015%3A48%3A16\n>\n> > And friarbird (also CLOBBER_CACHE_ALWAYS) fails every time.\n>\n> Yeah, there have been half a dozen failures since deadlock-parallel\n> went in, mostly on critters that are slowed by CLOBBER_CACHE_ALWAYS\n> or valgrind. I've tried repeatedly to reproduce that here, without\n> success :-(. It's unclear whether the failures represent a real\n> code bug or just a problem in the test case, so I don't really want\n> to speculate about fixes till I can reproduce it.\n\nI managed to reproduce a failure that looks a lot like lousyjack's\n(note that there are two slightly different failure modes):\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2019-08-05%2011:33:02\n\nI did that by changing the deadlock_timeout values for sessions d1 and\nd2 to just a few milliseconds on my slowest computer, guessing that\nthis might be a race involving the deadlock timeout and the time it\ntakes for workers to fork and join a lock queue. While normally\ndeadlock.c with DEBUG_DEADLOCK defined prints out something like this\nduring this test:\n\nDeadLockCheck: lock 0x80a2812d0 queue 33087 33088 33089 33090 33091\nrearranged to: lock 0x80a2812d0 queue 33091 33090 33089 33088 33087\n\n... when it failed like lousyjack my run printed out:\n\nDeadLockCheck: lock 0x80a2721f8 queue 33108 33114\nrearranged to: lock 0x80a2721f8 queue 33114 33108\n\n... 
and then it hung for a while, so I could inspect the lock table\nand see that PID 33108 was e1l (not granted), and PID 33114 was gone\nbut was almost certainly the first worker for d2a1 (I can tell because\n33110-33113 are the workers for d1a2 and they're still waiting and\nd2a1's first worker should have had the next sequential PID, on my\nOS).\n\nAnother thing I noticed is that all 4 times I managed to reproduce\nthis, the \"rearranged to\" queue had only two entries; I can understand\nthat d1's workers might not feature yet due to bad timing, but it's\nnot clear to me why there should always be only one d2a1 worker and\nnot more. I don't have time to study this further today and I might\nbe way off, but my first guess is that in theory we need a way to make\nsure that the d1-e2 edge exists before d2's deadlock timer expires,\nno? That's pretty tricky though, so maybe we just need to crank the\ntimes up.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Aug 2019 14:34:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Recent failures in IsolationCheck deadlock-hard"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Aug 6, 2019 at 6:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, there have been half a dozen failures since deadlock-parallel\n>> went in, mostly on critters that are slowed by CLOBBER_CACHE_ALWAYS\n>> or valgrind. I've tried repeatedly to reproduce that here, without\n>> success :-(. It's unclear whether the failures represent a real\n>> code bug or just a problem in the test case, so I don't really want\n>> to speculate about fixes till I can reproduce it.\n\n> I managed to reproduce a failure that looks a lot like lousyjack's\n> (note that there are two slightly different failure modes):\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2019-08-05%2011:33:02\n\n> I did that by changing the deadlock_timeout values for sessions d1 and\n> d2 to just a few milliseconds on my slowest computer, guessing that\n> this might be a race involving the deadlock timeout and the time it\n> takes for workers to fork and join a lock queue.\n\nYeah, I eventually managed to reproduce it (not too reliably) by\nintroducing a randomized delay into parallel worker startup.\n\nThe scenario seems to be: some d1a2 worker arrives so late that it's not\naccounted for in the initial DeadLockCheck performed by some d2a1 worker.\nThe other d1a2 workers are released, and run and finish, but the late one\ngoes to sleep, with a long deadlock_timeout. If the next DeadLockCheck is\nrun by e1l's worker, that prefers to release d2a1 workers, which then all\nrun to completion. When the late d1a2 worker finally wakes up and runs\nDeadLockCheck, *there is no deadlock to resolve*: the d2 session is idle,\nnot waiting for any lock. So the worker goes back to sleep, and we sit\ntill isolationtester times out.\n\nAnother way to look at it is that there is a deadlock condition, but\none of the waits-for constraints is on the client side where DeadLockCheck\ncan't see it. 
isolationtester is waiting for d1a2 to complete before it\nwill execute d1c which would release session d2, so that d2 is effectively\nwaiting for d1, but DeadLockCheck doesn't know that and thinks that it's\nequally good to unblock either d1 or d2.\n\nThe attached proposed patch resolves this by introducing another lock\nthat is held by d1 and then d2 tries to take it, ensuring that the\ndeadlock detector will recognize that d1 must be released.\n\nI've run several thousand iterations of the test this way without a\nproblem, where before the MTBF was maybe a hundred or two iterations\nwith the variable startup delay active. So I think this fix is good,\nbut I could be wrong. One notable thing is that every so often the\ntest takes ~10s to complete instead of a couple hundred msec. I think\nthat what's happening there is that the last deadlock condition doesn't\nform until after all of session d2's DeadLockChecks have run, meaning\nthat we don't spot the deadlock until some other session runs it. The\ntest still passes, though. This is probably fine given that it would\nnever happen except with platforms that are horridly slow anyway.\nPossibly we could shorten the 10s values to make that case complete\nquicker, but I'm afraid of maybe breaking things on slow machines.\n\n> Another thing I noticed is that all 4 times I managed to reproduce\n> this, the \"rearranged to\" queue had only two entries; I can understand\n> that d1's workers might not feature yet due to bad timing, but it's\n> not clear to me why there should always be only one d2a1 worker and\n> not more.\n\nI noticed that too, and eventually realized that it's a\nmax_worker_processes constraint: we have two parallel workers waiting\nin e1l and e2l, so if d1a2 takes four, there are only two slots left for\nd2a1; and for reasons that aren't totally clear, we don't get to use the\nlast slot. 
(Not sure if that's a bug in itself.)\n\nThe attached patch therefore also knocks max_parallel_workers_per_gather\ndown to 3 in this test, so that we have room for at least 2 d2a1 workers.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 17 Aug 2019 17:28:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recent failures in IsolationCheck deadlock-hard"
}
] |
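Tom's diagnosis above hinges on one waits-for edge being invisible to the server: isolationtester will not run d1's COMMIT until d1a2 completes, so d2 effectively waits for d1 on the client side, where DeadLockCheck cannot see it. A toy sketch of cycle detection over an explicit waits-for graph makes the point — this is an illustration only, not PostgreSQL's actual deadlock detector:

```python
def has_cycle(edges):
    """Detect a cycle in a waits-for graph given as {waiter: {blockers}}."""
    visited, on_stack = set(), set()

    def visit(node):
        if node in on_stack:      # back edge: a deadlock cycle
            return True
        if node in visited:
            return False
        visited.add(node)
        on_stack.add(node)
        if any(visit(nxt) for nxt in edges.get(node, ())):
            return True
        on_stack.discard(node)
        return False

    return any(visit(n) for n in list(edges))

# Edges the server can see: the late d1a2 worker waits for d2's lock,
# e1l waits for d1.  No cycle, so DeadLockCheck finds nothing to break.
server_view = {"d1": {"d2"}, "e1l": {"d1"}}
assert not has_cycle(server_view)

# Adding the client-side constraint (d2 is only released once d1 makes
# progress) closes the cycle that actually exists.
full_view = {"d1": {"d2"}, "e1l": {"d1"}, "d2": {"d1"}}
assert has_cycle(full_view)
```

The committed fix works along these lines: an extra server-side lock held by d1 and taken by d2 turns the client-side dependency into an edge the deadlock detector can see.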
[
{
"msg_contents": "Hi hackers,\n\n\nWhile I was reviewing some code in another patch, I stumbled upon a possible optimization in the btree index code in nbtsearch.c for queries using 'LIMIT 1'. I have written a small patch that implements this optimization, but I'd like some feedback on the design of the patch, whether it is correct at all to use this optimization, and whether the performance tradeoffs are deemed worth it by the community.\n\n\nBasically, an example of the case I'd like to optimize is the following. Given a table 'tbl' with an index on columns (k,ts DESC):\n\n\nSELECT * FROM tbl WHERE k=:val AND ts<=:timestamp ORDER BY k, ts DESC LIMIT 1;\n\n\nAnd, even more importantly, when this query gets called in an inner loop like:\n\n\nSELECT * FROM generate_series(:start_ts, :end_ts, :interval) ts -- perhaps thousands of iterations, could also be a loop over values of 'k' rather than timestamps. this is just an example\n\nCROSS JOIN LATERAL (\n\n SELECT * FROM tbl WHERE k=:val AND ts<=:timestamp ORDER BY k, ts DESC LIMIT 1\n\n) _;\n\n\nWith time-series data, this case often arises as you have a certain natural key for which you store updates as they occur. Getting the state of k at a specific time then boils down to the given query, which is almost always the fastest way to get this information, since the index scan with LIMIT 1 is very fast already. However, there seems to be a possibility to make this even faster (up to nearly 3x faster in test cases that use this nested loop of index scans).\n\nEvery time the index scan is done, all tuples from the leaf page are read in nbtsearch.c:_bt_readpage. The idea of this patch is to make an exception for this *only* the first time amgettuple gets called. This calls _bt_first in nbtsearch.c, which will, if there are scankeys, descend the tree to a leaf page and read just the first (or possibly two) tuples. It won't touch the rest of the page yet. 
If indeed just one tuple was required, there won't be a call to _bt_next and we're done. If we do need more than one tuple, _bt_next will resume reading tuples from the index page at the point where we left off.\n\n\nThere are a few caveats:\n\n- Possible performance decrease for queries that need a small number of tuples (but more than one), because they now need to lock the same page twice. This can happen in several cases, for example: LIMIT 3; LIMIT 1 but the first tuple returned does not match other scan conditions; LIMIT 1 but the tuple returned is not visible; no LIMIT at all but there are just only a few matching rows.\n\n- We need to take into account page splits, insertions and vacuums while we do not have the read-lock in between _bt_first and the first call to _bt_next. This made my patch quite a bit more complicated than my initial implementation.\n\n\nI did performance tests for some best case and worst case test scenarios. TPS results were stable and reproducible in re-runs on my, otherwise idle, server. Attached are the full results and how to reproduce. I picked test cases that show best performance as well as worst performance compared to master. Summary: the greatest performance improvement can be seen for the cases with the subquery in a nested loop. In a nested loop of 100 times, the performance is roughly two times better, for 10000 times the performance is roughly three times better. For most test cases that don't use LIMIT 1, I couldn't find a noticeable difference, except for the nested loop with a LIMIT 3 (or similarly, a nested loop without any LIMIT-clause that returns just three tuples). This is also theoretically the worst-case test case, because it has to lock the page again and then read it, just for one tuple. 
In this case, I saw TPS decrease by 2-3% in a few cases (details in the attached file), due to it having to lock/unlock the same page in both _bt_first and _bt_next.\n\n\nA few concrete questions to the community:\n\n- Does the community also see this as a useful optimization?\n\n- Is the way it is currently implemented safe? I struggled quite a bit to get everything working with respect to page splits and insertions. In particular, I don't know why, in my patch, _bt_find_offset_for_tid needs to consider searching for items with an offset *before* the passed offset. As far as my understanding goes, this could only happen when the index gets vacuumed in the meantime. However, we hold a pin on the buffer the whole time (we even assert this), so vacuum should not have been possible. Still, this code gets triggered sometimes, so it seems to be necessary. Perhaps someone in the community who's more of an expert on this can shed some light on it.\n\n- What are considered acceptable performance tradeoffs in this case? Is a performance degradation in any part generally not acceptable at all?\n\n\nI'd also welcome any feedback on the process - this is my first patch and while I tried to follow the guidelines, I may have missed something along the way.\n\n\nAttachments:\n\n- out_limit.txt: pgbench results for the patched branch\n\n- out_master.txt: pgbench results for the master branch (can be diffed with out_limit.txt to efficiently see the difference)\n\n- init_test.sql: creates a simple table for the test cases and fills it with data\n\n- test_fwd.sql: the nested loop example; parameters :nlimit and :nitems specify how many rows per inner loop to limit to and how many iterations of the loop need to be done. We're returning a sum() over a column to make sure transferring result data over the socket is not the bottleneck - this is to show the worst-case behavior of this patch. 
When selecting the full row instead of the sum, the performance difference is negligible for the 'bad' cases, while still providing great (up to 2.5x) improvements for the 'good' cases. This can be tested by changing the sum() to select a column per row instead.\n\n- test_fwd_eq.sql: nested loop with simple unique equality select\n\n- test_fwd_single.sql: single query with LIMIT without nested loop\n\n\n-Floris",
"msg_date": "Fri, 2 Aug 2019 15:23:23 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Optimize single tuple fetch from nbtree index"
},
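The "state of k at a specific time" lookup in the message above can be sketched independently of PostgreSQL. Below is an illustrative Python analogue (not the patch's C code; `state_at` and its arguments are invented for the sketch): one binary search over a timestamp-sorted sequence plays the role of the btree descent that the `LIMIT 1` index scan performs.

```python
import bisect

def state_at(timestamps, values, ts):
    """Return the latest (timestamp, value) update at or before `ts`.

    `timestamps` is sorted ascending, mimicking the (k, ts) index for a
    single key; one bisect call plays the role of the btree descent that
    the LIMIT 1 scan turns into a single-tuple fetch.
    """
    i = bisect.bisect_right(timestamps, ts)
    return (timestamps[i - 1], values[i - 1]) if i else None

ts_list = [1, 3, 7]          # update times for one key k
vals = ["a", "b", "c"]       # the state recorded at each update
print(state_at(ts_list, vals, 5))   # -> (3, 'b'): latest update at or before 5
```

The patch's point is that the real scan additionally reads every other matching item on the leaf page, even though only this one result is needed.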
{
"msg_contents": "Floris Van Nee <florisvannee@Optiver.com> writes:\n> Every time the index scan is done, all tuples from the leaf page are\n> read in nbtsearch.c:_bt_readpage. The idea of this patch is to make an\n> exception for this *only* the first time amgettuple gets called.\n\nRegardless of whether there's actually a LIMIT 1? That seems disastrous\nfor every other case than the narrow one where the optimization wins.\nBecause every other case is now going to have to touch the index page\ntwice. That's more CPU and about double the contention --- if you could\nnot measure any degradation from that, you're not measuring the right\nthing.\n\nIn principle, you could pass down knowledge of whether there's a LIMIT,\nusing the same mechanism used to enable top-N sorting. But it'd have to\nalso go through the AM interface layer, so I'm not sure how messy this\nwould be.\n\n> This calls _bt_first in nbtsearch.c, which will, if there are scankeys, descend the tree to a leaf page and read just the first (or possibly two) tuples. It won't touch the rest of the page yet. If indeed just one tuple was required, there won't be a call to _bt_next and we're done. If we do need more than one tuple, _bt_next will resume reading tuples from the index page at the point where we left off.\n\nHow do you know how many index entries you have to fetch to get a tuple\nthat's live/visible to the query?\n\n> - We need to take into account page splits, insertions and vacuums while we do not have the read-lock in between _bt_first and the first call to _bt_next. This made my patch quite a bit more complicated than my initial implementation.\n\nMeh. I think the odds that you got this 100% right are small, and the\nodds that it would be maintainable are smaller. 
There's too much that\ncan happen if you're not holding any lock --- and there's a lot of active\nwork on btree indexes, which could break whatever assumptions you might\nmake today.\n\nI'm not unalterably opposed to doing something like this, but my sense\nis that the complexity and probable negative performance impact on other\ncases are not going to look like a good trade-off for optimizing the\ncase at hand.\n\nBTW, you haven't even really made the case that optimizing a query that\nbehaves this way is the right thing to be doing ... maybe some other\nplan shape that isn't a nestloop around a LIMIT query would be a better\nsolution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 16:43:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
{
"msg_contents": "Hi Tom,\n\nThanks for your quick reply!\n\n> Regardless of whether there's actually a LIMIT 1? That seems disastrous\n> for every other case than the narrow one where the optimization wins.\n> Because every other case is now going to have to touch the index page\n> twice. That's more CPU and about double the contention --- if you could\n> not measure any degradation from that, you're not measuring the right\n> thing.\n\nI thought the same as well at first. Note that I did measure degradation of 2-3% as mentioned on some cases, but initially I also expected worse. Do you have any ideas on cases that would suffer the most? I thought the tight inner nested loop that I posted in my performance tests would have this index lookup as bottleneck. I know they are the bottleneck for the LIMIT 1 query (because these improve by a factor 2-3 with the patch). And my theory is that for a LIMIT 3, the price paid for this optimization is highest, because it would touch the page twice and read all items from it, while only returning three of them.\n\n> In principle, you could pass down knowledge of whether there's a LIMIT,\n> using the same mechanism used to enable top-N sorting. But it'd have to\n> also go through the AM interface layer, so I'm not sure how messy this\n> would be.\n\nThis was an idea I had as well and I would be willing to implement such a thing if this is deemed interesting enough by the community. However, I didn't want to do this for the first version of this patch, as it would be quite some extra work, which would be useless if the idea of the patch itself gets rejected already. :-) I'd appreciate any pointers in the right direction - I can take a look at how top-N sorting pushes the LIMIT down. Given enough interest for the basic idea of this patch, I will implement it.\n\n>> This calls _bt_first in nbtsearch.c, which will, if there are scankeys, descend the tree to a leaf page and read just the first (or possibly two) tuples. 
It won't touch the rest of the page yet. If indeed just one tuple was required, there won't be a call to _bt_next and we're done. If we do need more than one tuple, _bt_next will resume reading tuples from the index page at the point where we left off.\n\n> How do you know how many index entries you have to fetch to get a tuple\nthat's live/visible to the query?\n\nIndeed we don't know that - that's why this initial patch does not make any assumptions about this and just assumes the good-weather scenario that everything is visible. I'm not sure if it's possible to give an estimation of this and whether or not that would be useful. Currently, if it turns out that the tuple is not visible, there'll just be another call to _bt_next again which will resume reading the page as normal. I'm open to implement any suggestions that may improve this.\n\n>> - We need to take into account page splits, insertions and vacuums while we do not have the read-lock in between _bt_first and the first call to _bt_next. This made my patch quite a bit more complicated than my initial implementation.\n\n> Meh. I think the odds that you got this 100% right are small, and the\n> odds that it would be maintainable are smaller. There's too much that\n> can happen if you're not holding any lock --- and there's a lot of active\n> work on btree indexes, which could break whatever assumptions you might\n> make today.\n\nAgreed, which is also why I posted this initial version of the patch here already, to get some input from the experts on this topic what assumptions can be made now and in the future. If it turns out that it's completely not feasible to do an optimization like this, because of other constraints in the btree implementation, then we're done pretty quickly here. 
:-) For what it's worth: the patch at least passes make check consistently - I caught a lot of these edge cases related to page splits and insertions while running the regression tests, which runs the modified bits of code quite often and in parallel. There may be plenty of edge cases left however...\n\n> I'm not unalterably opposed to doing something like this, but my sense\n> is that the complexity and probable negative performance impact on other\n> cases are not going to look like a good trade-off for optimizing the\n> case at hand.\n\nI do think it could be a big win if we could get something like this working. Cases with a LIMIT seem common enough to me to make it possible to add some extra optimizations, especially if that could lead to 2-3x the TPS for these kind of queries. However, it indeed needs to be within a reasonable complexity. If it turns out that in order for us to optimize this, we need to add a lot of extra complexity, it may not be worth it to add it.\n\n> BTW, you haven't even really made the case that optimizing a query that\n> behaves this way is the right thing to be doing ... maybe some other\n> plan shape that isn't a nestloop around a LIMIT query would be a better\n> solution.\n\nIt is pretty difficult to come up with any faster plans than this unfortunately. We have a large database with many tables with timeseries data, and when tables get large and when there's an efficient multi-column index and when you want to do these kind of time-base or item-based lookups, the nested loop is generally the fastest option. I can elaborate more on this, but I'm not sure if this thread is the best place for that.\n\nI appreciate you taking the time to take a look at this. I'd be happy to look more into any suggestions you come up with. Working on this has taught me a lot about the internals of Postgres already - I find it really interesting!\n\n-Floris\n\n\n",
"msg_date": "Fri, 2 Aug 2019 22:18:49 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
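Tom's question above (how many entries must be fetched before a visible tuple turns up) and Floris's answer (fetch one, then fall back to reading the rest) can be sketched as a lazy fetch loop. This is hypothetical Python only, not the patch's C code; `is_visible` stands in for the heap visibility check, and the `reads` counter models the extra lock/unlock cycle Tom points out for non-LIMIT-1 cases.

```python
def first_visible(entries, is_visible):
    """Lazily look for the first visible entry: read one item first (as
    _bt_first does in the patch), then fall back to scanning the rest.

    Returns (entry, reads), where `reads` counts page lock cycles.
    """
    pos, batch, reads = 0, 1, 0
    while pos < len(entries):
        chunk = entries[pos:pos + batch]
        reads += 1                  # each chunk costs one page lock cycle
        for e in chunk:
            if is_visible(e):
                return e, reads
        pos += batch
        batch = len(entries)        # the follow-up read takes the whole page
    return None, reads

print(first_visible([1, 2, 3, 4], lambda e: True))    # -> (1, 1): good weather
print(first_visible([1, 2, 3, 4], lambda e: e >= 3))  # -> (3, 2): second read needed
```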
{
"msg_contents": "On Fri, Aug 2, 2019 at 1:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh. I think the odds that you got this 100% right are small, and the\n> odds that it would be maintainable are smaller. There's too much that\n> can happen if you're not holding any lock --- and there's a lot of active\n> work on btree indexes, which could break whatever assumptions you might\n> make today.\n\nI agree that this sounds very scary.\n\n> BTW, you haven't even really made the case that optimizing a query that\n> behaves this way is the right thing to be doing ... maybe some other\n> plan shape that isn't a nestloop around a LIMIT query would be a better\n> solution.\n\nI wonder if some variety of block nested loop join would be helpful\nhere. I'm not aware of any specific design that would help with\nFloris' case, but the idea of reducing the number of scans required on\nthe inner side by buffering outer side tuples (say based on the\n\"k=:val\" constant) seems like it might generalize well enough. I\nsuggest Floris look into that possibility. This paper might be worth a\nread:\n\nhttps://dl.acm.org/citation.cfm?id=582278\n\n(Though it also might not be worth a read -- I haven't actually read it myself.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Aug 2019 17:34:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
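The block-nested-loop idea Peter refers to — buffering outer-side tuples so that one inner-side pass serves a whole block of them — can be sketched as follows. Illustrative Python only; representing the inner side as a flat list of (key, value) pairs is an assumption of the sketch, not how an index scan works.

```python
def block_nested_loop(outer_keys, inner_pairs, block):
    """Buffer `block` outer keys at a time and make one pass over the
    inner side per block, instead of one inner scan per outer row."""
    results, inner_scans = [], 0
    for i in range(0, len(outer_keys), block):
        keys = set(outer_keys[i:i + block])
        inner_scans += 1            # one inner-side pass per buffered block
        results.extend(v for k, v in inner_pairs if k in keys)
    return results, inner_scans

inner = [(1, "a"), (4, "d"), (9, "z")]
print(block_nested_loop([1, 2, 3, 4, 5, 6], inner, block=3))  # -> (['a', 'd'], 2)
print(block_nested_loop([1, 2, 3, 4, 5, 6], inner, block=1))  # 6 scans: plain nestloop
```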
{
"msg_contents": "On Fri, Aug 2, 2019 at 5:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I wonder if some variety of block nested loop join would be helpful\n> here. I'm not aware of any specific design that would help with\n> Floris' case, but the idea of reducing the number of scans required on\n> the inner side by buffering outer side tuples (say based on the\n> \"k=:val\" constant) seems like it might generalize well enough. I\n> suggest Floris look into that possibility. This paper might be worth a\n> read:\n>\n> https://dl.acm.org/citation.cfm?id=582278\n\nActually, having looked at the test case in more detail, that now\nseems less likely. The test case seems designed to reward making it\ncheaper to access one specific tuple among a fairly large group of\nrelated tuples -- reducing the number of inner scans is not going to\nbe possible there.\n\nIf this really is totally representative of the case that Floris cares\nabout, I suppose that the approach taken more or less makes sense.\nUnfortunately, it doesn't seem like an optimization that many other\nusers would find compelling, partly because it's only concerned with\nfixed overheads, and partly because most queries don't actually look\nlike this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Aug 2019 18:00:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
{
"msg_contents": "Hello everyone.\n\nI am also was looking into possibility of such optimisation few days ago\n(attempt to reduce memcpy overhead on IndexOnlyScan).\n\nOne thing I noticed here - whole page is scanned only if index quals are\n\"opened\" at some side.\n\nSo, in case of\n SELECT* FROM tbl WHERE k=:val AND ts<=:timestamp ORDER BY k, ts DESC\nLIMIT 1;\nwhole index page will be read.\n\nBut\n SELECT* FROM tbl WHERE k=:val AND ts<=:timestamp AND ts<:=timestamp -\n:interval ORDER BY k, ts DESC LIMIT 1;\nis semantically the same, but only few :interval records will be processed.\n\nSo, you could try to compare such query in your benchmarks.\n\nAlso, some info about current design is contained in\nsrc\\backend\\access\\nbtree\\README (\"To minimize lock/unlock traffic, an\nindex scan always searches a leaf page\nto identify all the matching items at once\").\n\nThanks,\n Michail.\n\nHello everyone.I am also was looking into possibility of such optimisation few days ago (attempt to reduce memcpy overhead on IndexOnlyScan).One thing I noticed here - whole page is scanned only if index quals are \"opened\" at some side.So, in case of SELECT* FROM tbl WHERE k=:val AND ts<=:timestamp ORDER BY k, ts DESC LIMIT 1;whole index page will be read.But SELECT* FROM tbl WHERE k=:val AND ts<=:timestamp AND ts<:=timestamp - :interval ORDER BY k, ts DESC LIMIT 1;is semantically the same, but only few :interval records will be processed.So, you could try to compare such query in your benchmarks.Also, some info about current design is contained in src\\backend\\access\\nbtree\\README (\"To minimize lock/unlock traffic, an index scan always searches a leaf pageto identify all the matching items at once\").Thanks, Michail.",
"msg_date": "Sun, 4 Aug 2019 21:13:46 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
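Michail's observation — that adding a lower bound turns an "opened" qual into a bounded range, so far fewer leaf-page items match — can be illustrated with a toy count of candidate entries on a page. Python sketch only; `candidate_count` is an invented name, not a PostgreSQL function.

```python
import bisect

def candidate_count(page_ts, hi, lo=None):
    """Number of entries on a (sorted) leaf page matching ts <= hi,
    optionally also bounded below by lo - Michail's extra qual."""
    end = bisect.bisect_right(page_ts, hi)
    start = 0 if lo is None else bisect.bisect_left(page_ts, lo)
    return end - start

page = list(range(100))                    # 100 timestamps on one leaf page
print(candidate_count(page, 90))           # -> 91 items with the one-sided qual
print(candidate_count(page, 90, lo=88))    # -> 3 items with both bounds
```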
{
"msg_contents": "Hi Peter,\n\n> Actually, having looked at the test case in more detail, that now\n> seems less likely. The test case seems designed to reward making it\n> cheaper to access one specific tuple among a fairly large group of\n> related tuples -- reducing the number of inner scans is not going to\n> be possible there.\n\n> If this really is totally representative of the case that Floris cares\n> about, I suppose that the approach taken more or less makes sense.\n> Unfortunately, it doesn't seem like an optimization that many other\n> users would find compelling, partly because it's only concerned with\n> fixed overheads, and partly because most queries don't actually look\n> like this.\n\nThanks for taking a look. Unfortunately this is exactly the case I care about. I'm a bit puzzled as to why this case wouldn't come up more often by other users though. We have many large tables with timeseries data and it seems to me that with timeseries data, two of the most common queries are:\n(1) What is the state of { a1,a2, a3 ...} at timepoint t (but you don't know that there's an update *exactly* at timepoint t - so you're left with trying to find the latest update smaller than t)\n(2) What is the state of { a } at timepoints { t1, t2, t3 ... }\nGiven that a1,a2,a3... are indepedently updating, but similar time series (eg. 
sensor a1 and sensor a2, but both provide a temperature value and update independently from each other).\nBoth of these can also be done with some kind of DISTINCT ON clause, but this is often already much slower than just doing a nested loop of fast index lookups with LIMIT 1 (this depends on the frequency of the timeseries data itself versus the sampling rate of your query though, for high frequency time series and/or low frequency sampling the LIMIT 1 approach is much faster).\n\nNote that there is actually some related work to this - in the Index Skip Scan thread [1] a function called _bt_read_closest was developed which also partially reads the page. A Skip Scan has a very similar access pattern to the use case I describe here, because it's also very likely to just require one tuple from the page. Even though the implementation in that patch is currently incorrect, performance of the Skip Scan would likely also be quite a bit faster if it had a correct implementation of this partial page-read and it wouldn't have to read the full page every time.\n\nI have one further question about these index offsets. There are several comments in master that indicate that it's impossible that an item moves 'left' on a page, if we continuously hold a pin on the page. For example, _bt_killitems has a comment like this:\n \n* Note that if we hold a pin on the target page continuously from initially\n * reading the items until applying this function, VACUUM cannot have deleted\n * any items from the page, and so there is no need to search left from the\n * recorded offset. (This observation also guarantees that the item is still\n * the right one to delete, which might otherwise be questionable since heap\n * TIDs can get recycled.)\tThis holds true even if the page has been modified\n * by inserts and page splits, so there is no need to consult the LSN.\n \nStill, exactly this case happens in practice. 
In my tests I was able to get behavior like:\n1) pin + lock a page in _bt_first\n2) read a tuple, record indexOffset (for example offset=100) and heap tid\n3) unlock page, but *keep* the pin (end of _bt_first of my patch)\n4) lock page again in _bt_next (we still hold the pin, so vacuum shouldn't have occurred)\n5) look inside the current page for the heap Tid that we registered earlier\n6) we find that we can now find this tuple at indexOffset=98, eg. it moved left. This should not be possible.\nThis case sometimes randomly happens when running 'make check', which is why I added code in my patch to also look left on the page from the previous indexOffset.\n\nHowever, this is in contradiction with the comments (and code) of _bt_killitems.\nIs the comment incorrect/outdated or is there a bug in vacuum or any other part of Postgres that might move index items left even though there are others holding a pin?\n\n-Floris\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKW4dXTP9G%2BWBskjT09tzD%2B9aMWEm%3DFpeb6RS5SXfPyKw%40mail.gmail.com#21abe755d5cf36aabaaa048c8a282169\n\n\n",
"msg_date": "Mon, 5 Aug 2019 11:34:26 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
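The failure mode Floris describes in steps 1-6 — an item reappearing at a *smaller* offset after relocking — and the patch's response (also searching left of the recorded offset) can be modelled with a toy page. Illustrative Python only; the real code compares heap TIDs within a locked nbtree buffer, and `relocate` is an invented name.

```python
def relocate(page, tid, last_off):
    """Re-find a remembered heap TID after relocking a page whose items
    may have been compacted (e.g. by _bt_vacuum_one_page), shifting the
    item to a smaller offset - hence the leftward search first."""
    for off in range(min(last_off, len(page) - 1), -1, -1):   # look left
        if page[off] == tid:
            return off
    for off in range(last_off + 1, len(page)):                # then right
        if page[off] == tid:
            return off
    return None

page = ["t0", "t1", "t2", "t3", "t4"]   # TIDs in index order
off = page.index("t3")                  # remembered at offset 3
page.remove("t1")                       # concurrent cleanup removes a dead item
print(relocate(page, "t3", off))        # -> 2: the item moved left
```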
{
"msg_contents": "05.08.2019 14:34, Floris Van Nee wrote:\n> I have one further question about these index offsets. There are several comments in master that indicate that it's impossible that an item moves 'left' on a page, if we continuously hold a pin on the page. For example, _bt_killitems has a comment like this:\n> \n> * Note that if we hold a pin on the target page continuously from initially\n> * reading the items until applying this function, VACUUM cannot have deleted\n> * any items from the page, and so there is no need to search left from the\n> * recorded offset. (This observation also guarantees that the item is still\n> * the right one to delete, which might otherwise be questionable since heap\n> * TIDs can get recycled.)\tThis holds true even if the page has been modified\n> * by inserts and page splits, so there is no need to consult the LSN.\n> \n> Still, exactly this case happens in practice. In my tests I was able to get behavior like:\n> 1) pin + lock a page in _bt_first\n> 2) read a tuple, record indexOffset (for example offset=100) and heap tid\n> 3) unlock page, but*keep* the pin (end of _bt_first of my patch)\n> 4) lock page again in _bt_next (we still hold the pin, so vacuum shouldn't have occurred)\n> 5) look inside the current page for the heap Tid that we registered earlier\n> 6) we find that we can now find this tuple at indexOffset=98, eg. it moved left. 
This should not be possible.\n> This case sometimes randomly happens when running 'make check', which is why I added code in my patch to also look left on the page from the previous indexOffset.\n>\n> However, this is in contradiction with the comments (and code) of _bt_killitems.\n> Is the comment incorrect/outdated or is there a bug in vacuum or any other part of Postgres that might move index items left even though there are others holding a pin?\n\nHello,\nwelcome to hackers with your first patch)\n\nAs far as I understood from the thread above, the design of this \noptimization is under discussion, so I didn't review the proposed patch \nitself.\nThough, I got interested in the comment inconsistency you have found.\nI added debug message into this code branch of the patch and was able to \nsee it in regression.diffs after 'make check':\nSpeaking of your patch, it seems that the buffer was unpinned and pinned \nagain between two reads,\nand the condition of holding it continuously has not been met.\n\nI didn't dig into the code, but this line looks suspicious (see my \nfindings about BTScanPosIsPinned below):\n\n /* bump pin on current buffer for assignment to mark buffer */\n if (BTScanPosIsPinned(so->currPos))\n IncrBufferRefCount(so->currPos.buf);\n\n\nWhile reading the code to answer your question, I noticed that \nBTScanPosIsPinned macro name is misleading.\nIt calls BufferIsValid(), not BufferIsPinned() as one could expect.\nAnd BufferIsValid in bufmgr.h comment explicitly states that it \nshouldn't be confused with BufferIsPinned.\nThe same goes for BTScanPosUnpinIfPinned().\n\nI propose that we update BTScanPosIsPinned macro. 
Or, at least write a \ncomment, why its current behavior is fine.\nThere are a few existing callers, that are definitely expecting that \nthis macro checks a pin, which it doesn't do.\nI don't quite understand if that already causes any subtle bug, or the \ncurrent algorithm is fine.\n\nPeter, Tom, what do you think?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 23 Aug 2019 20:14:26 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
{
"msg_contents": "On Fri, Aug 23, 2019 at 10:14 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> Though, I got interested in the comment inconsistency you have found.\n> I added debug message into this code branch of the patch and was able to\n> see it in regression.diffs after 'make check':\n> Speaking of your patch, it seems that the buffer was unpinned and pinned\n> again between two reads,\n> and the condition of holding it continuously has not been met.\n\nSee commit 2ed5b87f. The code is supposed to do that, but it might do\nit more often than is truly necessary. We don't want to block VACUUM\nby holding a buffer pin for a very long time, which is theoretically\npossible here. Do you think that it is actually unnecessary here? In\nother words, do you think that we can fix this without breaking cases\nthat commit 2ed5b87f cares about?\n\nI have been suspicious of this commit all along. For example, I\nnoticed that it can cause the kill_prior_tuple mechanism to be\nineffective in a way that didn't happen prior to Postgres 9.5:\n\nhttps://postgr.es/m/CAH2-Wz=SfAKVMv1x9Jh19EJ8am8TZn9f-yECipS9HrrRqSswnA@mail.gmail.com\n\nThat particular complaint was never addressed. I meant to do more on\ncommit 2ed5b87f.\n\n> I didn't dig into the code, but this line looks suspicious (see my\n> findings about BTScanPosIsPinned below):\n>\n> /* bump pin on current buffer for assignment to mark buffer */\n> if (BTScanPosIsPinned(so->currPos))\n> IncrBufferRefCount(so->currPos.buf);\n>\n>\n> While reading the code to answer your question, I noticed that\n> BTScanPosIsPinned macro name is misleading.\n> It calls BufferIsValid(), not BufferIsPinned() as one could expect.\n> And BufferIsValid in bufmgr.h comment explicitly states that it\n> shouldn't be confused with BufferIsPinned.\n> The same goes for BTScanPosUnpinIfPinned().\n\nI have always hated this macro. 
I think that code like the specific\ncode you quoted might be correct, kind of, but it looks like the\nauthor was trying to change as little as possible about the code as it\nexisted in 2015, rather than changing things so that everything made\nsense. It looks like a messy accretion.\n\nLet me see if I can get it straight:\n\nWe're incrementing the ref count on the buffer if and only if it is\npinned (by which we mean valid), though only when the scan is valid\n(which is not the same as pinned). Whether or not we increment the\ncount of a valid scan doesn't affect anything else we do (i.e. we\nstill restore a marked position either way).\n\nThis is just awful.\n\n> I propose that we update BTScanPosIsPinned macro. Or, at least write a\n> comment, why its current behavior is fine.\n> There are a few existing callers, that are definitely expecting that\n> this macro checks a pin, which it doesn't do.\n> I don't quite understand if that already causes any subtle bug, or the\n> current algorithm is fine.\n\nI think that you're right -- at a minimum, this requires more\ndocumentation. This code is a few years old, but I still wouldn't be\nsurprised if it turned out to be slightly wrong in a way that was\nimportant. We still have no way of detecting if a buffer is accessed\nwithout a pin. There have been numerous bugs like that before. (We\nhave talked about teaching Valgrind to detect the case, but that never\nactually happened.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 23 Aug 2019 14:58:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
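The validity-versus-pin distinction Anastasia and Peter discuss can be modelled in miniature: the "is pinned" macro only checks validity, and only tracks pin state correctly because the unpin helper also invalidates the buffer. This is a toy Python model of that invariant, not the actual nbtree macros.

```python
INVALID_BUFFER = -1

class ScanPos:
    """Toy model: BTScanPosIsPinned really checks buffer *validity*,
    and stays consistent with the pin count only because the unpin
    helper also invalidates the buffer."""
    def __init__(self):
        self.buf = INVALID_BUFFER
        self.pin_count = 0

    def pin(self, buf):
        self.buf = buf
        self.pin_count += 1

    def is_pinned(self):                   # BTScanPosIsPinned analogue
        return self.buf != INVALID_BUFFER  # a validity check, not a pin check

    def unpin_if_pinned(self):             # BTScanPosUnpinIfPinned analogue
        if self.is_pinned():
            self.pin_count -= 1
            self.buf = INVALID_BUFFER      # invalidation keeps is_pinned() honest

pos = ScanPos()
pos.pin(42)
print(pos.is_pinned())                   # -> True
pos.unpin_if_pinned()
print(pos.is_pinned(), pos.pin_count)    # -> False 0
```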
{
"msg_contents": "\n> Hello,\n> welcome to hackers with your first patch)\n\nThank you.\n\n> Though, I got interested in the comment inconsistency you have found.\n> I added debug message into this code branch of the patch and was able to\n> see it in regression.diffs after 'make check':\n> Speaking of your patch, it seems that the buffer was unpinned and pinned\n> again between two reads,\n> and the condition of holding it continuously has not been met.\n\nMay I ask what makes you conclude that the condition of holding the pin continuously has not been met?\nYour reply encouraged me to dig a little bit more into this today. First, I wanted to check if indeed the pin was continuously held by the backend or not. I added some debug info to ReleaseBuffer for this: it turned out that the pin on the buffer was definitely never released by the backend between the calls to _bt_first and _bt_next. So the buffer got compacted while the backend held a pin on it.\nAfter some more searching I found the following code: _bt_vacuum_one_page in nbtinsert.c\nThis function compacts one single page without taking a super-exclusive lock. It is used during inserts to make room on a page. I verified that if I comment out the calls to this function, the compacting never happens while I have a pin on the buffer.\nSo I guess that answers my own question: cleaning up garbage during inserts is one of the cases where compacting may happen even while other backends hold a pin to the buffer. Perhaps this should also be more clearly phrased in the comments in eg. _bt_killitems? 
Because currently those comments make it look like this case never occurs.\n\n> While reading the code to answer your question, I noticed that\n> BTScanPosIsPinned macro name is misleading.\n> It calls BufferIsValid(), not BufferIsPinned() as one could expect.\n> And BufferIsValid in bufmgr.h comment explicitly states that it\n> shouldn't be confused with BufferIsPinned.\n> The same goes for BTScanPosUnpinIfPinned().\n\nI agree the name is misleading. It clearly does something else than how it's named. However, I don't believe this introduces problems in these particular pieces of code, as long as the macro's are always used. BTScanPosIsPinned actually checks whether it's valid and not necessarily whether it's pinned, as you mentioned. However, any time the buffer gets unpinned using the macro BTScanPosUnpin, the buffer gets set to Invalid by the macro as well. Therefore, any consecutive call to BTScanPosIsPinned should indeed return false. It'd definitely be nice if this gets clarified in comments though.\n\n-Floris\n\n\n",
"msg_date": "Sat, 24 Aug 2019 21:59:31 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
{
"msg_contents": "25.08.2019 0:59, Floris Van Nee wrote:\n>> Though, I got interested in the comment inconsistency you have found.\n>> I added debug message into this code branch of the patch and was able to\n>> see it in regression.diffs after 'make check':\n>> Speaking of your patch, it seems that the buffer was unpinned and pinned\n>> again between two reads,\n>> and the condition of holding it continuously has not been met.\n> May I ask what makes you conclude that the condition of holding the pin continuously has not been met?\n> Your reply encouraged me to dig a little bit more into this today. First, I wanted to check if indeed the pin was continuously held by the backend or not. I added some debug info to ReleaseBuffer for this: it turned out that the pin on the buffer was definitely never released by the backend between the calls to _bt_first and _bt_next. So the buffer got compacted while the backend held a pin on it.\n> After some more searching I found the following code: _bt_vacuum_one_page in nbtinsert.c\n> This function compacts one single page without taking a super-exclusive lock. It is used during inserts to make room on a page. I verified that if I comment out the calls to this function, the compacting never happens while I have a pin on the buffer.\n> So I guess that answers my own question: cleaning up garbage during inserts is one of the cases where compacting may happen even while other backends hold a pin to the buffer. Perhaps this should also be more clearly phrased in the comments in eg. _bt_killitems? Because currently those comments make it look like this case never occurs.\n\nYou're right, the pin was not released between page reads.\nI also added debug to UnpinBuffer, but now I see that I had interpreted \nit wrongly.\n\nAs far as I understand, the issue with your patch is that it breaks the \n*scan stops \"between\" pages* assumption\nand thus it unsafely interacts with _bt_vacuum_one_page() cleanup.\n\nSee README:\n >Page read locks are held only for as long as a scan is examining a page.\nTo minimize lock/unlock traffic, an index scan always searches a leaf page\nto identify all the matching items at once, copying their heap tuple IDs\ninto backend-local storage. The heap tuple IDs are then processed while\nnot holding any page lock within the index. We do continue to hold a pin\non the leaf page in some circumstances, to protect against concurrent\ndeletions (see below). In this state the scan is effectively stopped\n\"between\" pages, either before or after the page it has pinned. This is\nsafe in the presence of concurrent insertions and even page splits, because\nitems are never moved across pre-existing page boundaries --- so the scan\ncannot miss any items it should have seen, nor accidentally return the same\nitem twice.\n\nand\n\n >Once an index tuple has been marked LP_DEAD it can actually be removed\nfrom the index immediately; since index scans only stop \"between\" pages,\nno scan can lose its place from such a deletion.\n\nIt seems that it contradicts the very idea of your patch, so probably we \nshould look for other ways to optimize this use-case.\nMaybe this restriction can be relaxed for write only tables, that never \nhave to reread the page because of visibility, or something like that.\nAlso we probably can add to IndexScanDescData info about expected number \nof tuples, to allow index work more optimal\nand avoid the overhead for other loads.\n\n>> While reading the code to answer your question, I noticed that\n>> BTScanPosIsPinned macro name is misleading.\n>> It calls BufferIsValid(), not BufferIsPinned() as one could expect.\n>> And BufferIsValid in bufmgr.h comment explicitly states that it\n>> shouldn't be confused with BufferIsPinned.\n>> The same goes for BTScanPosUnpinIfPinned().\n> I agree the name is misleading. It clearly does something else than how it's named. However, I don't believe this introduces problems in these particular pieces of code, as long as the macros are always used. BTScanPosIsPinned actually checks whether it's valid and not necessarily whether it's pinned, as you mentioned. However, any time the buffer gets unpinned using the macro BTScanPosUnpin, the buffer gets set to Invalid by the macro as well. Therefore, any consecutive call to BTScanPosIsPinned should indeed return false. It'd definitely be nice if this gets clarified in comments though.\n\nThat's true. It took me quite some time to understand that existing code \nis correct.\nThere is a comment for the structure's field that claims that \nBufferIsValid is the same as BufferIsPinned in ScanPos context.\nAttached patch contains some comments' updates. Any suggestions on how \nto improve them are welcome.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 26 Aug 2019 19:10:03 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
{
"msg_contents": "\n> It seems that it contradicts the very idea of your patch, so probably we\n> should look for other ways to optimize this use-case.\n> Maybe this restriction can be relaxed for write only tables, that never\n> have to reread the page because of visibility, or something like that.\n> Also we probably can add to IndexScanDescData info about expected number\n> of tuples, to allow index work more optimal\n> and avoid the overhead for other loads.\n\nThe idea of the patch is exactly to relax this limitation. I forgot to update that README file though. The current implementation of the patch should be correct like this - that's why I added the look-back code on the page if the tuple couldn't be found anymore on the same location on the page. Similarly, it'll look on the page to the right if it detected a page split. These two measures combined should give a correct implementation of the 'it's possible that a scan stops in the middle of a page' relaxation. However, as Peter and Tom pointed out earlier, they feel that the performance advantage that this approach gives, does not outweigh the extra complexity at this time. I'd be open to other suggestions though.\n\n> That's true. It took me quite some time to understand that existing code\n> is correct.\n> There is a comment for the structure's field that claims that\n> BufferIsValid is the same that BufferIsPinned in ScanPos context.\n> Attached patch contains some comments' updates. Any suggestions on how\n> to improve them are welcome.\n\nI'll have a look tomorrow. Thanks a lot for writing this up!\n\n-Floris\n\n\n",
"msg_date": "Mon, 26 Aug 2019 20:22:35 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
},
{
"msg_contents": "\n>> It seems that it contradicts the very idea of your patch, so probably we\n>> should look for other ways to optimize this use-case.\n>> Maybe this restriction can be relaxed for write only tables, that never\n>> have to reread the page because of visibility, or something like that.\n>> Also we probably can add to IndexScanDescData info about expected number\n>> of tuples, to allow index work more optimal\n>> and avoid the overhead for other loads.\n\n> The idea of the patch is exactly to relax this limitation. I forgot to update that README file though. The current implementation of the patch should be correct like this - that's why I added the look-back code on the page if the tuple couldn't be found anymore on the same location on the page. Similarly, it'll look on the page to the right if it detected a page split. These two measures combined should give a correct implementation of the 'it's possible that a scan stops in the middle of a page' relaxation. However, as Peter and Tom pointed out earlier, they feel that the performance advantage that this approach gives, does not outweigh the extra complexity at this time. I'd be open to other suggestions though.\n\nAlthough now that I think of it - do you mean the case where the tuple that we returned to the caller after _bt_first actually gets deleted (not moved) from the page? I guess that can theoretically happen if _bt_first returns a non-visible tuple (but not DEAD yet in the index at the time of _bt_first). For my understanding, would a situation like the following lead to this (in my patch)?\n1) Backend 1 does an index scan and returns the first tuple on _bt_first - this tuple is actually deleted in the heap already, however it's not marked dead yet in the index.\n2) Backend 1 does a heap fetch to check actual visibility and determines the tuple is actually dead\n3) While backend 1 is busy doing the heap fetch (so in between _bt_first and _bt_next) backend 2 comes in and manages to somehow do 1) a _bt_killitems on the page to mark tuples dead as well as 2) compact items on the page, thereby actually removing this item from the page.\n4) Now backend 1 tries to find the next tuple in _bt_next - it first tries to locate the tuple where it left off, but cannot find it anymore because it got removed completely by backend 2.\n\nIf this is indeed possible then it's a bad issue unfortunately, and quite hard to try to reproduce, as a lot of things need to happen concurrently while doing a visibility check.\n\nAs for your patch, I've had some time to take a look at it. For the two TODOs:\n\n+\t\t/* TODO Is it possible that currPage is not valid anymore? */\n+\t\tAssert(BTScanPosIsValid(so->currPos))\n\nThis Assert exists already a couple of lines earlier at the start of this function.\n\n+ * TODO It is not clear to me\n+ * why to check scanpos validity based on currPage value.\n+ * I wonder, if we need currPage at all? Is there any codepath that\n+ * assumes that currPage is not the same as BufferGetBlockNumber(buf)?\n+ */\n\nThe comments in the source mention the following about this:\n\t\t * We note the buffer's block number so that we can release the pin later.\n\t\t * This allows us to re-read the buffer if it is needed again for hinting.\n\t\t */\n\t\tso->currPos.currPage = BufferGetBlockNumber(so->currPos.buf);\n\t\t\nAs we figured out earlier, so->currPos.buf gets set to invalid when we release the pin by the unpin macro. So, if we don't store currPage number somewhere else, we cannot obtain the pin again if we need it during killitems. I think that's the reason that currPage is stored.\n\nOther than the two TODOs in the code, I think the comments really help clarifying what's going on in the code - I'd be happy if this gets added.\n\n-Floris\n\n\n",
"msg_date": "Tue, 27 Aug 2019 07:23:18 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize single tuple fetch from nbtree index"
}
]
[
{
"msg_contents": "Hoi hackers,\n\nHere is a reworked version of the previous patches.\n\nThe original three patches have been collapsed into one as given the\nchanges discussed it didn't make sense to keep them separate. There\nare now two patches (the third is just to help with testing):\n\nPatch 1: Tracks the listening backends in a list so non-listening\nbackends can be quickly skipped over. This is separate because it's\northogonal to the rest of the changes and there are other ways to do\nthis.\n\nPatch 2: This is the meat of the change. It implements all the\nsuggestions discussed:\n\n- The queue tail is now only updated lazily, whenever the notify queue\nmoves to a new page. This did require a new global to track this state\nthrough the transaction commit, but it seems worth it.\n\n- Only backends for the current database are signalled when a\nnotification is made\n\n- Slow backends are woken up one at a time rather than all at once\n\n- A backend is allowed to lag up to 4 SLRU pages behind before being\nsignalled. This is a tradeoff between how often to get woken up versus\nhow much work to do once woken up.\n\n- All the relevant comments have been updated to describe the new\nalgorithm. Locking should also be correct now.\n\nThis means in the normal case where listening backends get a\nnotification occasionally, no-one will ever be considered slow. An\nexclusive lock for cleanup will happen about once per SLRU page.\nThere's still the exclusive locks on adding notifications but that's\nunavoidable.\n\nOne minor issue is that pg_notification_queue_usage() will now return\na small but non-zero number (about 3e-6) even when nothing is really\ngoing on. This could be fixed by having it take an exclusive lock\ninstead and updating to the latest values but that barely seems worth\nit.\n\nPerformance-wise it's even better than my original patches, with about\n20-25% reduction in CPU usage in my test setup (using the test script\nsent previously).\n\nHere is the log output from my postgres, where you see the signalling in action:\n\n------\n16:42:48.673 [10188] martijn@test_131 DEBUG: PreCommit_Notify\n16:42:48.673 [10188] martijn@test_131 DEBUG: NOTIFY QUEUE = (74,896)...(79,0)\n16:42:48.673 [10188] martijn@test_131 DEBUG: backendTryAdvanceTail -> true\n16:42:48.673 [10188] martijn@test_131 DEBUG: AtCommit_Notify\n16:42:48.673 [10188] martijn@test_131 DEBUG: ProcessCompletedNotifies\n16:42:48.673 [10188] martijn@test_131 DEBUG: backendTryAdvanceTail -> false\n16:42:48.673 [10188] martijn@test_131 DEBUG: asyncQueueAdvanceTail\n16:42:48.673 [10188] martijn@test_131 DEBUG: waking backend 137 (pid 10055)\n16:42:48.673 [10055] martijn@test_067 DEBUG: ProcessIncomingNotify\n16:42:48.673 [10187] martijn@test_131 DEBUG: ProcessIncomingNotify\n16:42:48.673 [10055] martijn@test_067 DEBUG: asyncQueueAdvanceTail\n16:42:48.673 [10055] martijn@test_067 DEBUG: waking backend 138 (pid 10056)\n16:42:48.673 [10187] martijn@test_131 DEBUG: ProcessIncomingNotify: done\n16:42:48.673 [10055] martijn@test_067 DEBUG: ProcessIncomingNotify: done\n16:42:48.673 [10056] martijn@test_067 DEBUG: ProcessIncomingNotify\n16:42:48.673 [10056] martijn@test_067 DEBUG: asyncQueueAdvanceTail\n16:42:48.673 [10056] martijn@test_067 DEBUG: ProcessIncomingNotify: done\n16:42:48.683 [9991] martijn@test_042 DEBUG: Async_Notify(changes)\n16:42:48.683 [9991] martijn@test_042 DEBUG: PreCommit_Notify\n16:42:48.683 [9991] martijn@test_042 DEBUG: NOTIFY QUEUE = (75,7744)...(79,32)\n16:42:48.683 [9991] martijn@test_042 DEBUG: AtCommit_Notify\n-----\n\nHave a nice weekend.\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/",
"msg_date": "Fri, 2 Aug 2019 17:40:17 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> The original three patches have been collapsed into one as given the\n> changes discussed it didn't make sense to keep them separate. There\n> are now two patches (the third is just to help with testing):\n\n> Patch 1: Tracks the listening backends in a list so non-listening\n> backends can be quickly skipped over. This is separate because it's\n> orthogonal to the rest of the changes and there are other ways to do\n> this.\n\n> Patch 2: This is the meat of the change. It implements all the\n> suggestions discussed:\n\nI pushed 0001 after doing some hacking on it --- it was sloppy about\ndatatypes, and about whether the invalid-entry value is 0 or -1,\nand it was just wrong about keeping the list in backendid order.\n(You can't conditionally skip looking for where to put the new\nentry, if you want to maintain the order. I thought about just\ndefining the list as unordered, which would simplify joining the\nlist initially, but that could get pretty cache-unfriendly when\nthere are lots of entries.)\n\n0002 is now going to need a rebase, so please do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2019 18:18:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Hoi Tom,\n\n\nOn Wed, 11 Sep 2019 at 00:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> I pushed 0001 after doing some hacking on it --- it was sloppy about\n> datatypes, and about whether the invalid-entry value is 0 or -1,\n> and it was just wrong about keeping the list in backendid order.\n> (You can't conditionally skip looking for where to put the new\n> entry, if you want to maintain the order. I thought about just\n> defining the list as unordered, which would simplify joining the\n> list initially, but that could get pretty cache-unfriendly when\n> there are lots of entries.)\n>\n> 0002 is now going to need a rebase, so please do that.\n>\n>\nThanks for this, and good catch. Looks like I didn't test the first patch\nby itself very well.\n\nHere is the rebased second patch.\n\nThanks in advance,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/",
"msg_date": "Wed, 11 Sep 2019 16:07:17 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> Here is the rebased second patch.\n\nThis throws multiple compiler warnings for me:\n\nasync.c: In function 'asyncQueueUnregister':\nasync.c:1293: warning: unused variable 'advanceTail'\nasync.c: In function 'asyncQueueAdvanceTail':\nasync.c:2153: warning: 'slowbackendpid' may be used uninitialized in this function\n\nAlso, I don't exactly believe this bit:\n\n+ /* If we are advancing to a new page, remember this so after the\n+ * transaction commits we can attempt to advance the tail\n+ * pointer, see ProcessCompletedNotifies() */\n+ if (QUEUE_POS_OFFSET(QUEUE_HEAD) == 0)\n+ backendTryAdvanceTail = true;\n\nIt seems unlikely that insertion would stop exactly at a page boundary,\nbut that seems to be what this is looking for.\n\nBut, really ... do we need the backendTryAdvanceTail flag at all?\nI'm dubious, because it seems like asyncQueueReadAllNotifications\nwould have already covered the case if we're listening. If we're\nnot listening, but we signalled some other listeners, it falls\nto them to kick us if we're the slowest backend. If we're not the\nslowest backend then doing asyncQueueAdvanceTail isn't useful.\n\nI agree with getting rid of the asyncQueueAdvanceTail call in\nasyncQueueUnregister; on reflection doing that there seems pretty unsafe,\nbecause we're not necessarily in a transaction and hence anything that\ncould possibly error is a bad idea. However, it'd be good to add a\ncomment explaining that we're not doing that and why it's ok not to.\n\nI'm fairly unimpressed with the \"kick a random slow backend\" logic.\nThere can be no point in kicking any but the slowest backend, ie\none whose pointer is exactly the oldest. Since we're already computing\nthe min pointer in that loop, it would actually take *less* logic inside\nthe loop to remember the/a backend that had that pointer value, and then\ndecide afterwards whether it's slow enough to merit a kick.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Sep 2019 16:04:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Hoi Tom,\n\n\nOn Fri, 13 Sep 2019 at 22:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> This throws multiple compiler warnings for me:\n\nFixed.\n\n> Also, I don't exactly believe this bit:\n[snip]\n> It seems unlikely that insertion would stop exactly at a page boundary,\n> but that seems to be what this is looking for.\n\nThis is how asyncQueueAddEntries() works. Entries are never split over\npages. If there is not enough room, then it advances to the beginning\nof the next page and returns. Hence here the offset is zero. I could\nset the global inside asyncQueueAddEntries() but that seems icky.\nAnother alternative is to have asyncQueueAddEntries() return a boolean\n\"moved to new page\", but that's just a long-winded way of doing what\nit is now.\n\n> But, really ... do we need the backendTryAdvanceTail flag at all?\n> I'm dubious, because it seems like asyncQueueReadAllNotifications\n> would have already covered the case if we're listening. If we're\n> not listening, but we signalled some other listeners, it falls\n> to them to kick us if we're the slowest backend. If we're not the\n> slowest backend then doing asyncQueueAdvanceTail isn't useful.\n\nThere are multiple issues here. asyncQueueReadAllNotifications() is\ngoing to be called by each listener simultaneously, so each listener\nis going to come to the same conclusion. On the other side, there is\nno guarantee we wake up anyone as a result of the NOTIFY, e.g. if\nthere are no listeners in the current database. To be sure you try to\nadvance the tail, you have to trigger on the sending side. The global\nis there because at the point we are inserting entries we are still in\na user transaction, potentially holding many table locks (the issue we\nwere running into in the first place). By setting\nbackendTryAdvanceTail we can move the work to\nProcessCompletedNotifies() which is after the transaction has\ncommitted and the locks released.\n\n> I agree with getting rid of the asyncQueueAdvanceTail call in\n> asyncQueueUnregister; on reflection doing that there seems pretty unsafe,\n> because we're not necessarily in a transaction and hence anything that\n> could possibly error is a bad idea. However, it'd be good to add a\n> comment explaining that we're not doing that and why it's ok not to.\n\nComment added.\n\n> I'm fairly unimpressed with the \"kick a random slow backend\" logic.\n> There can be no point in kicking any but the slowest backend, ie\n> one whose pointer is exactly the oldest. Since we're already computing\n> the min pointer in that loop, it would actually take *less* logic inside\n> the loop to remember the/a backend that had that pointer value, and then\n> decide afterwards whether it's slow enough to merit a kick.\n\nAdjusted this. I'm not sure it's actually clearer this way, but it is\nless work inside the loop. A small change is that now it won't signal\nanyone if this backend is the slowest, which is more correct.\n\nThanks for the feedback. Attached is version 3.\n\nHave a nice weekend,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/",
"msg_date": "Sat, 14 Sep 2019 14:04:25 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> On Fri, 13 Sep 2019 at 22:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> But, really ... do we need the backendTryAdvanceTail flag at all?\n\n> There are multiple issues here. asyncQueueReadAllNotifications() is\n> going to be called by each listener simultaneously, so each listener\n> is going to come to the same conclusion. On the other side, there is\n> no guarantee we wake up anyone as a result of the NOTIFY, e.g. if\n> there are no listeners in the current database. To be sure you try to\n> advance the tail, you have to trigger on the sending side. The global\n> is there because at the point we are inserting entries we are still in\n> a user transaction, potentially holding many table locks (the issue we\n> were running into in the first place). By setting\n> backendTryAdvanceTail we can move the work to\n> ProcessCompletedNotifies() which is after the transaction has\n> committed and the locks released.\n\nNone of this seems to respond to my point: it looks to me like it would\nwork fine if you simply dropped the patch's additions in PreCommit_Notify\nand ProcessCompletedNotifies, because there is already enough logic to\ndecide when to call asyncQueueAdvanceTail. In particular, the result from\nSignal[MyDB]Backends tells us whether anyone else was awakened, and\nProcessCompletedNotifies already does asyncQueueAdvanceTail if not.\nAs long as we did awaken someone, the ball's now in their court to\nmake sure asyncQueueAdvanceTail happens eventually.\n\nThere are corner cases where someone else might get signaled but never\ndo asyncQueueAdvanceTail -- for example, if they're in process of exiting\n--- but I think the whole point of this patch is that we don't care too\nmuch if that occasionally fails to happen. If there's a continuing\nstream of NOTIFY activity, asyncQueueAdvanceTail will happen often\nenough to ensure that the queue storage doesn't bloat unreasonably.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 14 Sep 2019 11:08:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "On Sat, 14 Sep 2019 at 17:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Martijn van Oosterhout <kleptog@gmail.com> writes:\n> > On Fri, 13 Sep 2019 at 22:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> But, really ... do we need the backendTryAdvanceTail flag at all?\n\n> None of this seems to respond to my point: it looks to me like it would\n> work fine if you simply dropped the patch's additions in PreCommit_Notify\n> and ProcessCompletedNotifies, because there is already enough logic to\n> decide when to call asyncQueueAdvanceTail. In particular, the result from\n> Signal[MyDB]Backends tells us whether anyone else was awakened, and\n> ProcessCompletedNotifies already does asyncQueueAdvanceTail if not.\n> As long as we did awaken someone, the ball's now in their court to\n> make sure asyncQueueAdvanceTail happens eventually.\n\nAh, I think I see what you're getting at. As written,\nasyncQueueReadAllNotifications() only calls asyncQueueAdvanceTail() if\n*it* was a slow backend (advanceTail =\nQUEUE_SLOW_BACKEND(MyBackendId)). In a situation where some databases\nare regularly using NOTIFY and a few others never (but still\nlistening) it will lead to the situation where the tail never gets\nadvanced.\n\nHowever, I guess you're thinking of asyncQueueReadAllNotifications()\ntriggering if the queue as a whole was too long. This could in\nprinciple work but it does mean that at some point all backends\nsending NOTIFY are going to start calling asyncQueueAdvanceTail()\nevery time, until the tail gets advanced, and if there are many idle\nlistening backends behind this could take a while. The slowest backend\nmight receive more signals while it is processing and so end up\nrunning asyncQueueAdvanceTail() twice. The fact that signals coalesce\nstops the process getting completely out of hand but it does feel a\nlittle uncontrolled.\n\nThe whole point of this patch is to ensure that at any time only one\nbackend is being woken up and calling asyncQueueAdvanceTail() at a\ntime.\n\nBut you do point out that the return value of\nSignalMyDBBackends() is used wrongly. The fact that no-one got\nsignalled only meant there were no other listeners on this database\nwhich means nothing in terms of global queue cleanup. What you want to\nknow is if you're the only listener in the whole system and you can\ntest for that directly (QUEUE_FIRST_BACKEND == MyBackendId &&\nQUEUE_NEXT_BACKEND(MyBackendId) == InvalidBackendId). I can adjust\nthis in the next version if necessary, it's fairly harmless as is as\nit only triggers in the case where a database is only notifying\nitself, which probably isn't that common.\n\nI hope I have correctly understood this time.\n\nHave a nice weekend.\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Sun, 15 Sep 2019 12:21:05 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> On Sat, 14 Sep 2019 at 17:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> None of this seems to respond to my point: it looks to me like it would\n>> work fine if you simply dropped the patch's additions in PreCommit_Notify\n>> and ProcessCompletedNotifies, because there is already enough logic to\n>> decide when to call asyncQueueAdvanceTail.\n\n> ...\n> However, I guess you're thinking of asyncQueueReadAllNotifications()\n> triggering if the queue as a whole was too long. This could in\n> principle work but it does mean that at some point all backends\n> sending NOTIFY are going to start calling asyncQueueAdvanceTail()\n> every time, until the tail gets advanced, and if there are many idle\n> listening backends behind this could take a while. The slowest backend\n> might receive more signals while it is processing and so end up\n> running asyncQueueAdvanceTail() twice. The fact that signals coalesce\n> stops the process getting completely out of hand but it does feel a\n> little uncontrolled.\n> The whole point of this patch is to ensure that at any time only one\n> backend is being woken up and calling asyncQueueAdvanceTail() at a\n> time.\n\nI spent some more time thinking about this, and I'm still not too\nsatisfied with this patch's approach. It seems to me the key insights\nwe're trying to make use of are:\n\n1. We don't really need to keep the global tail pointer exactly\nup to date. It's bad if it falls way behind, but a few pages back\nis fine.\n\n2. When sending notifies, only listening backends connected to our\nown database need be awakened immediately. Backends connected to\nother DBs will need to advance their queue pointer sometime, but\nagain it doesn't need to be right away.\n\n3. It's bad for multiple processes to all be trying to do\nasyncQueueAdvanceTail concurrently: they'll contend for exclusive\naccess to the AsyncQueueLock. Therefore, having the listeners\ndo it is really the wrong thing, and instead we should do it on\nthe sending side.\n\nHowever, the patch as presented doesn't go all the way on point 3,\ninstead having listeners maybe-or-maybe-not do asyncQueueAdvanceTail\nin asyncQueueReadAllNotifications. I propose that we should go all\nthe way and just define tail-advancing as something that happens on\nthe sending side, and only once every few pages. I also think we\ncan simplify the handling of other-database listeners by including\nthem in the set signaled by SignalBackends, but only if they're\nseveral pages behind. So that leads me to the attached patch;\nwhat do you think?\n\nBTW, in my hands it seems like point 2 (skip wakening other-database\nlisteners) is the only really significant win here, and of course\nthat only wins when the notify traffic is spread across a fair number\nof databases. Which I fear is not the typical use-case. In single-DB\nuse-cases, point 2 helps not at all. I had a really hard time measuring\nany benefit from point 3 --- I eventually saw a noticeable savings\nwhen I tried having one notifier and 100 listen-only backends, but\nagain that doesn't seem like a typical use-case. I could not replicate\nyour report of lots of time spent in asyncQueueAdvanceTail's lock\nacquisition. I wonder whether you're using a very large max_connections\nsetting and we already fixed most of the problem with that in bca6e6435.\nStill, this patch doesn't seem to make any cases worse, so I don't mind\nif it's just improving unusual use-cases.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 15 Sep 2019 18:14:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Hoi Tom,\n\nOn Mon, 16 Sep 2019 at 00:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I spent some more time thinking about this, and I'm still not too\n> satisfied with this patch's approach. It seems to me the key insights\n> we're trying to make use of are:\n>\n> 1. We don't really need to keep the global tail pointer exactly\n> up to date. It's bad if it falls way behind, but a few pages back\n> is fine.\n\nAgreed.\n\n> 2. When sending notifies, only listening backends connected to our\n> own database need be awakened immediately. Backends connected to\n> other DBs will need to advance their queue pointer sometime, but\n> again it doesn't need to be right away.\n\nAgreed.\n\n> 3. It's bad for multiple processes to all be trying to do\n> asyncQueueAdvanceTail concurrently: they'll contend for exclusive\n> access to the AsyncQueueLock. Therefore, having the listeners\n> do it is really the wrong thing, and instead we should do it on\n> the sending side.\n\nAgreed, but I'd add that for listeners in databases that are largely idle\nthere may never be a sender, and thus they need to be advanced up some\nother way.\n\n> However, the patch as presented doesn't go all the way on point 3,\n> instead having listeners maybe-or-maybe-not do asyncQueueAdvanceTail\n> in asyncQueueReadAllNotifications. I propose that we should go all\n> the way and just define tail-advancing as something that happens on\n> the sending side, and only once every few pages. I also think we\n> can simplify the handling of other-database listeners by including\n> them in the set signaled by SignalBackends, but only if they're\n> several pages behind. So that leads me to the attached patch;\n> what do you think?\n\nI think I like the idea of having SignalBackend do the waking up a\nslow backend but I'm not enthused by the \"lets wake up (at once)\neveryone that is behind\". That's one of the issues I was explicitly\ntrying to solve. If there are any significant number of \"slow\"\nbackends then we get the \"thundering herd\" again. If the number of\nslow backends exceeds the number of cores then commits across the\nsystem could be held up quite a while (which is what caused me to make\nthis patch, multiple seconds was not unusual).\n\nThe maybe/maybe not in asyncQueueReadAllNotifications is that \"if I\nwas behind, then I probably got woken up, hence I need to wake up\nsomeone else\", thus ensuring the cleanup proceeds in an orderly\nfashion, leaving gaps where the lock isn't held allowing COMMITs to\nproceed.\n\n> BTW, in my hands it seems like point 2 (skip wakening other-database\n> listeners) is the only really significant win here, and of course\n> that only wins when the notify traffic is spread across a fair number\n> of databases. Which I fear is not the typical use-case. In single-DB\n> use-cases, point 2 helps not at all. I had a really hard time measuring\n> any benefit from point 3 --- I eventually saw a noticeable savings\n> when I tried having one notifier and 100 listen-only backends, but\n> again that doesn't seem like a typical use-case. I could not replicate\n> your report of lots of time spent in asyncQueueAdvanceTail's lock\n> acquisition. I wonder whether you're using a very large max_connections\n> setting and we already fixed most of the problem with that in bca6e6435.\n> Still, this patch doesn't seem to make any cases worse, so I don't mind\n> if it's just improving unusual use-cases.\n\nI'm not sure if it's an unusual use-case, but it is my use-case :).\nSpecifically, there are 100+ instances of the same application running\non the same cluster with wildly different usage patterns. Some will be\nidle because no-one is logged in, some will be quite busy. Although\nthere are only 2 listeners per database, that's still a lot of\nlisteners that can be behind. Though I agree that bca6e6435 will have\nmitigated quite a lot (yes, max_connections is quite high). Another\nmitigation would be to spread across more smaller database clusters,\nwhich we need to do anyway.\n\nThat said, your approach is conceptually simpler which is also worth\nsomething and it gets essentially all the same benefits for more\nnormal use cases. If the QUEUE_CLEANUP_DELAY were raised a bit then we\ncould do mitigation of the rest on the client side by having idle\ndatabases send dummy notifies every now and then to trigger clean up\nfor their database. The flip-side is that slow backends will then have\nfurther to catch up, thus holding the lock longer. It's not worth\nmaking it configurable so we have to guess, but 16 is perhaps a good\ncompromise.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Mon, 16 Sep 2019 13:07:49 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> On Mon, 16 Sep 2019 at 00:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... I also think we\n>> can simplify the handling of other-database listeners by including\n>> them in the set signaled by SignalBackends, but only if they're\n>> several pages behind. So that leads me to the attached patch;\n>> what do you think?\n\n> I think I like the idea of having SignalBackend do the waking up a\n> slow backend but I'm not enthused by the \"lets wake up (at once)\n> everyone that is behind\". That's one of the issues I was explicitly\n> trying to solve. If there are any significant number of \"slow\"\n> backends then we get the \"thundering herd\" again.\n\nBut do we care? With asyncQueueAdvanceTail gone from the listeners,\nthere's no longer an exclusive lock for them to contend on. And,\nagain, I failed to see any significant contention even in HEAD as it\nstands; so I'm unconvinced that you're solving a live problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Sep 2019 09:33:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Hoi Tom,\n\nOn Mon, 16 Sep 2019 at 15:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Martijn van Oosterhout <kleptog@gmail.com> writes:\n> > I think I like the idea of having SignalBackend do the waking up a\n> > slow backend but I'm not enthused by the \"lets wake up (at once)\n> > everyone that is behind\". That's one of the issues I was explicitly\n> > trying to solve. If there are any significant number of \"slow\"\n> > backends then we get the \"thundering herd\" again.\n>\n> But do we care? With asyncQueueAdvanceTail gone from the listeners,\n> there's no longer an exclusive lock for them to contend on. And,\n> again, I failed to see any significant contention even in HEAD as it\n> stands; so I'm unconvinced that you're solving a live problem.\n\nYou're right, they only acquire a shared lock which is much less of a\nproblem. And I forgot that we're still reducing the load from a few\nhundred signals and exclusive locks per NOTIFY to perhaps a dozen\nshared locks every thousand messages. You'd be hard pressed to\ndemonstrate there's a real problem here.\n\nSo I think your patch is fine as is.\n\nLooking at the release cycle it looks like the earliest either of\nthese patches will appear in a release is PG13, right?\n\nThanks again.\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Tue, 17 Sep 2019 09:39:22 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> On Mon, 16 Sep 2019 at 15:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> But do we care? With asyncQueueAdvanceTail gone from the listeners,\n>> there's no longer an exclusive lock for them to contend on. And,\n>> again, I failed to see any significant contention even in HEAD as it\n>> stands; so I'm unconvinced that you're solving a live problem.\n\n> You're right, they only acquire a shared lock which is much less of a\n> problem. And I forgot that we're still reducing the load from a few\n> hundred signals and exclusive locks per NOTIFY to perhaps a dozen\n> shared locks every thousand messages. You'd be hard pressed to\n> demonstrate there's a real problem here.\n\n> So I think your patch is fine as is.\n\nOK, pushed.\n\n> Looking at the release cycle it looks like the earliest either of\n> these patches will appear in a release is PG13, right?\n\nRight.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Sep 2019 11:48:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (v2)"
}
] |
[
{
"msg_contents": "See\nhttps://git.postgresql.org/pg/commitdiff/082c9f5f761ced18a6f014f2638096f6a8228164\n\nPlease send comments/corrections before Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 16:21:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "First draft of back-branch release notes is done"
},
{
"msg_contents": "On 8/2/19 3:21 PM, Tom Lane wrote:\n> See\n> https://git.postgresql.org/pg/commitdiff/082c9f5f761ced18a6f014f2638096f6a8228164\n> \n> Please send comments/corrections before Sunday.\n\nWhile working on the PR, I noticed this line:\n\n\"This fixes a regression introduced in June's minor releases...\"\n\nPerhaps instead of \"June\" it could be the specific version number (which\ncould cause some pain with the back branching?) or the \"2019-06-20\" release?\n\nThanks,\n\nJonathan",
"msg_date": "Sun, 4 Aug 2019 10:16:29 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Perhaps instead of \"June\" it could be the specific version number (which\n> could cause some pain with the back branching?) or the \"2019-06-20\" release?\n\nPutting in all the version numbers seems like a mess, but specifying\n2019-06-20 would work --- or we could say \"the most recent\" minor\nreleases?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2019 11:52:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 8/4/19 10:52 AM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> Perhaps instead of \"June\" it could be the specific version number (which\n>> could cause some pain with the back branching?) or the \"2019-06-20\" release?\n> \n> Putting in all the version numbers seems like a mess, but specifying\n> 2019-06-20 would work --- or we could say \"the most recent\" minor\n> releases?\n\nThat or \"previous minor release\" would seem to work.\n\n(In the PR I'm putting in the versions it was introduced but we have the\nluxury of only having one PR.)\n\nJonathan",
"msg_date": "Sun, 4 Aug 2019 12:08:38 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 8/4/19 11:08 AM, Jonathan S. Katz wrote:\n> On 8/4/19 10:52 AM, Tom Lane wrote:\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>> Perhaps instead of \"June\" it could be the specific version number (which\n>>> could cause some pain with the back branching?) or the \"2019-06-20\" release?\n>>\n>> Putting in all the version numbers seems like a mess, but specifying\n>> 2019-06-20 would work --- or we could say \"the most recent\" minor\n>> releases?\n> \n> That or \"previous minor release\" would seem to work.\n> \n> (In the PR I'm putting in the versions it was introduced but we have the\n> luxury of only having one PR.)\n\nAttached is the first draft of the PR.\n\nJonathan",
"msg_date": "Sun, 4 Aug 2019 12:39:19 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "I realize that this has now been sent, but I wanted to comment on one\nitem:\n\nOn 2019-Aug-04, Jonathan S. Katz wrote:\n\n> * Ensure that partition key columns will not be dropped as the result of an\n> \"indirect drop,\" such as from a cascade from dropping the key column's data\n> type (e.g. a custom data type). This fix is applied only to newly created\n> partitioned tables: if you believe you have an affected partition table (e.g.\n> one where the partition key uses a custom data type), you will need to\n> create a new table and move your data into it.\n\nHmm, if I have this problem, I can pg_upgrade and the new database will\nhave correct dependencies, right? For some people, doing that might be\neasier than creating and reloading large tables.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Aug 2019 13:15:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-04, Jonathan S. Katz wrote:\n>> * Ensure that partition key columns will not be dropped as the result of an\n>> \"indirect drop,\" such as from a cascade from dropping the key column's data\n>> type (e.g. a custom data type). This fix is applied only to newly created\n>> partitioned tables: if you believe you have an affected partition table (e.g.\n>> one where the partition key uses a custom data type), you will need to\n>> create a new table and move your data into it.\n\n> Hmm, if I have this problem, I can pg_upgrade and the new database will\n> have correct dependencies, right? For some people, doing that might be\n> easier than creating and reloading large tables.\n\nYeah, that should work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2019 14:15:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 8/8/19 2:15 PM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2019-Aug-04, Jonathan S. Katz wrote:\n>>> * Ensure that partition key columns will not be dropped as the result of an\n>>> \"indirect drop,\" such as from a cascade from dropping the key column's data\n>>> type (e.g. a custom data type). This fix is applied only to newly created\n>>> partitioned tables: if you believe you have an affected partition table (e.g.\n>>> one where the partition key uses a custom data type), you will need to\n>>> create a new table and move your data into it.\n> \n>> Hmm, if I have this problem, I can pg_upgrade and the new database will\n>> have correct dependencies, right? For some people, doing that might be\n>> easier than creating and reloading large tables.\n> \n> Yeah, that should work.\n\nI modified the copy of the announcement on the website to include the\npg_upgrade option.\n\nhttps://www.postgresql.org/about/news/1960/\n\nThanks!\n\nJonathan",
"msg_date": "Thu, 8 Aug 2019 14:20:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 2019-Aug-08, Jonathan S. Katz wrote:\n\n> I modified the copy of the announcement on the website to include the\n> pg_upgrade option.\n> \n> https://www.postgresql.org/about/news/1960/\n\nOoh, had I thought you were going to do that, I would have told you\nabout the item ending in a comma :-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Aug 2019 14:40:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 8/8/19 2:40 PM, Alvaro Herrera wrote:\n> On 2019-Aug-08, Jonathan S. Katz wrote:\n> \n>> I modified the copy of the announcement on the website to include the\n>> pg_upgrade option.\n>>\n>> https://www.postgresql.org/about/news/1960/\n> \n> Ooh, had I thought you were going to do that, I would have told you\n> about the item ending in a comma :-)\n\n:) I made a quick modification and opted for an \"either\" at the\nbeginning of that clause and a capitalized \"OR\" towards the end.\n\nJonathan",
"msg_date": "Thu, 8 Aug 2019 14:42:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 2019-Aug-08, Jonathan S. Katz wrote:\n\n> On 8/8/19 2:40 PM, Alvaro Herrera wrote:\n> > On 2019-Aug-08, Jonathan S. Katz wrote:\n> > \n> >> I modified the copy of the announcement on the website to include the\n> >> pg_upgrade option.\n> >>\n> >> https://www.postgresql.org/about/news/1960/\n> > \n> > Ooh, had I thought you were going to do that, I would have told you\n> > about the item ending in a comma :-)\n> \n> :) I made a quick modification and opted for an \"either\" at the\n> beginning of that clause and a capitalized \"OR\" towards the end.\n\nOh, heh ... I was thinking of this line:\n\n Fix for multi-column foreign keys when rebuilding a foreign key constraint,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Aug 2019 14:45:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
},
{
"msg_contents": "On 8/8/19 2:45 PM, Alvaro Herrera wrote:\n> On 2019-Aug-08, Jonathan S. Katz wrote:\n> \n>> On 8/8/19 2:40 PM, Alvaro Herrera wrote:\n>>> On 2019-Aug-08, Jonathan S. Katz wrote:\n>>>\n>>>> I modified the copy of the announcement on the website to include the\n>>>> pg_upgrade option.\n>>>>\n>>>> https://www.postgresql.org/about/news/1960/\n>>>\n>>> Ooh, had I thought you were going to do that, I would have told you\n>>> about the item ending in a comma :-)\n>>\n>> :) I made a quick modification and opted for an \"either\" at the\n>> beginning of that clause and a capitalized \"OR\" towards the end.\n> \n> Oh, heh ... I was thinking of this line:\n> \n> Fix for multi-column foreign keys when rebuilding a foreign key constraint,\n\nOh oops. Fixed :) Thanks,\n\nJonathan",
"msg_date": "Thu, 8 Aug 2019 14:47:21 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of back-branch release notes is done"
}
] |
[
{
"msg_contents": "Hi, \n\nI have found a bug in jsonb_plperl extension. A possible fix is proposed below.\n\njsonb_plperl is the contrib module which defines TRANSFORM functions for the jsonb data type and the PL/Perl procedural language.\n\nThe bug can be reproduced as follows:\n\nCREATE EXTENSION plperl;\nCREATE EXTENSION jsonb_plperl;\n\nCREATE OR REPLACE FUNCTION text2jsonb (text) RETURNS jsonb \n LANGUAGE plperl TRANSFORM FOR TYPE jsonb AS \n$$ \n my $x = shift; \n my $ret = {a=>$x};\n return $ret;\n$$;\nSELECT text2jsonb(NULL);\nSELECT text2jsonb('11');\nSELECT text2jsonb(NULL);\n\nThe last SELECT produces a strange error.\n\nERROR: cannot transform this Perl type to jsonb\n\nA brief investigation has shown that the problem is incomplete logic inside the transform function. The reason can be illustrated by the following Perl one-liner:\n\n\nperl -MDevel::Peek -e 'sub x { my $x = shift; Dump $x; warn \"----\\n\\n\"; }; x(undef); x(\"a\"); x(undef); '\n\nIt outputs:\nSV = NULL(0x0) at 0x73a1b8\n REFCNT = 1\n FLAGS = (PADMY)\n----\n\nSV = PV(0x71da50) at 0x73a1b8\n REFCNT = 1\n FLAGS = (PADMY,POK,pPOK)\n PV = 0x7409a0 \"a\"\\0\n CUR = 1\n LEN = 16\n----\n\nSV = PV(0x71da50) at 0x73a1b8\n REFCNT = 1\n FLAGS = (PADMY)\n PV = 0x7409a0 \"a\"\\0\n CUR = 1\n LEN = 16\n----\n\nThis shows that the internal representation of the same undef in Perl is different in the first and third function calls. \nIt is the way Perl reuses the lexical variable, probably for optimization reasons.\n\nThe current jsonb_plperl implementation works well for the first (most evident) case, but does not work at all for the third, which results in the abovementioned error.\n\nThe attached patch solves this issue and defines corresponding tests.\n\nRegards,\nIvan",
"msg_date": "Sat, 03 Aug 2019 01:05:33 +0300",
"msg_from": "Ivan Panchenko <wao@mail.ru>",
"msg_from_op": true,
"msg_subject": "jsonb_plperl bug"
},
{
"msg_contents": "Ivan Panchenko <wao@mail.ru> writes:\n> I have found a bug in jsonb_plperl extension. A possible fix is proposed below.\n> ...\n> +\t\t\t\t/* SVt_PV without POK flag is also NULL */\n> +\t\t\t\tif(SvTYPE(in) == SVt_PV) \n\nUgh. Doesn't Perl provide some saner way to determine the type of a SV?\n\nThe core code seems to think that SvOK() is a sufficient test for an\nundef. Should we be doing that before the switch, perhaps?\n\n(My underlying concern here is mostly about whether we have other\nsimilar bugs. There are a lot of places checking SvTYPE.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 18:39:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb_plperl bug"
},
{
"msg_contents": "> Tom Lane <tgl@sss.pgh.pa.us>:\n>\n>Ivan Panchenko < wao@mail.ru > writes:\n>> I have found a bug in jsonb_plperl extension. A possible fix is proposed below.\n>> ...\n>> +\t\t\t\t/* SVt_PV without POK flag is also NULL */\n>> +\t\t\t\tif(SvTYPE(in) == SVt_PV) \n>\n>Ugh. Doesn't Perl provide some saner way to determine the type of a SV?\n>\n>The core code seems to think that SvOK() is a sufficient test for an\n>undef. Should we be doing that before the switch, perhaps? \nThank you, Tom. Yes, there is a solution with SvOK(), please see the attached patch.\n\nThe SvOK() check before the switch seems too early, because in that case we would lose hashes and arrays which are not SvOK. So I put it inside the switch. Maybe it's better to remove the switch entirely and rewrite the code with ifs?\n\n>\n>(My underlying concern here is mostly about whether we have other\n>similar bugs. There are a lot of places checking SvTYPE.) \nI looked through plperl.c, but found no similar cases of checking SvTYPE.\n\n>regards, tom lane\nRegards, Ivan\n\n>",
"msg_date": "Sat, 03 Aug 2019 10:03:32 +0300",
"msg_from": "Ivan Panchenko <wao@mail.ru>",
"msg_from_op": true,
"msg_subject": "Re[2]: jsonb_plperl bug"
},
{
"msg_contents": "Ivan Panchenko <wao@mail.ru> writes:\n> Tom Lane <tgl@sss.pgh.pa.us>:\n>> The core code seems to think that SvOK() is a sufficient test for an\n>> undef. Should we be doing that before the switch, perhaps? \n\n> Thank you, Tom. Yes, there is a solution with SvOK(), please see the attached patch.\n\nYeah, that looks cleaner. I suppose we could get rid of the switch()\nbut it would result in a bigger diff for not much reason.\n\n>> (My underlying concern here is mostly about whether we have other\n>> similar bugs. There are a lot of places checking SvTYPE.) \n\n> I looked through plperl.c, but found no similar cases of checking SvTYPE.\n\nYeah, at least there are no other places explicitly checking for\nSVt_NULL.\n\nPushed with minor fiddling with the test case. Thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2019 14:08:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: jsonb_plperl bug"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile examining the reasons for excessive memory usage in prepared \nstatements I noticed that RTE_JOIN-kind RTEs contain a bunch of \ncolumnNames and joinaliasvars, that are irrelevant after the Query \nhas been rewritten. I have some queries that join about 20 tables and \nselect only a few values, mainly names of objects from those tables.\n\nThe attached patch adds a small cleanup function that iterates through \nthe query and cleans stuff up. I may have missed some places that could \nalso be cleaned up but for now the memory requirements for my largest \nstatements have dropped from 31.2MB to 10.4MB with this patch.\n\nAfter the statement has been executed seven times a generic plan is stored \nin the statement, resulting in an extra 8.8MB memory usage, but still \nthis makes a difference of more than 50% total.\n\nBut the most interesting thing was that this patch reduced query \nexecution time by 50% (~110ms vs. 55ms) when no generic plan was created yet, \nand by 35% (7.5ms vs. 5.1ms) when the generic plan had been created.\n\nAll tests still pass with my cleanup command, but I am afraid the tests \nmight not contain queries that still need that info after statement \npreparation.\n\nCould anyone have a look at it and point me to a situation where this \nmight crash later on? Also, would it be possible for someone to run a \nbenchmark after applying this patch to ensure my findings are not totally \noff? I tested on an Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz with SSDs, \nbut everything should have been in memory when I ran the test.\n\nRegards,\nDaniel Migowski",
"msg_date": "Sat, 3 Aug 2019 17:39:33 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Patch to clean Query after rewrite-and-analyze - reduces memusage up\n to 50% - increases TPS by up to 50%"
},
{
"msg_contents": "Daniel Migowski <dmigowski@ikoffice.de> writes:\n> While examining the reasons for excessive memory usage in prepared \n> statements I noticed that RTE_JOIN-kind RTEs contain a bunch of \n> columnNames and joinaliasvars, that are irrelevant after the Query after \n> has been rewritten.\n\nUh, they're not irrelevant to planning, nor to EXPLAIN. I don't know how\nthoroughly you tested this patch, but it seems certain to break things.\n\nAs far as the final plan goes, setrefs.c's add_rte_to_flat_rtable already\ndrops RTE infrastructure that isn't needed by either the executor or\nEXPLAIN. But we need it up to that point.\n\n(After thinking a bit, I'm guessing that it seemed not to break because\nyour tests never actually exercised the generic-plan path, or perhaps\nthere was always a plancache invalidation before we tried to use the\nquery_list submitted by PrepareQuery. I wonder if this is telling us\nsomething about the value of having PrepareQuery do that at all,\nrather than just caching the raw parse tree and calling it a day.)\n\nA few tips on submitting patches:\n\n* Providing concrete test cases to back up improvement claims is a\ngood idea.\n\n* Please try to make your code follow established PG style. Ignoring\nproject conventions about whitespace and brace layout just makes your\ncode harder to read. (A lot of us would just summarily run the code\nthrough pgindent before trying to review it.)\n\n* Please don't include random cosmetic changes (eg renaming of unrelated\nvariables) in a patch that's meant to illustrate some specific functional\nchange. It just confuses matters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2019 12:38:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to clean Query after rewrite-and-analyze - reduces memusage\n up to 50% - increases TPS by up to 50%"
},
{
"msg_contents": "Am 03.08.2019 um 18:38 schrieb Tom Lane:\n> Daniel Migowski <dmigowski@ikoffice.de> writes:\n>> While examining the reasons for excessive memory usage in prepared\n>> statements I noticed that RTE_JOIN-kind RTEs contain a bunch of\n>> columnNames and joinaliasvars, that are irrelevant after the Query after\n>> has been rewritten.\n> Uh, they're not irrelevant to planning, nor to EXPLAIN. I don't know how\n> thoroughly you tested this patch, but it seems certain to break things.\n>\n> As far as the final plan goes, setrefs.c's add_rte_to_flat_rtable already\n> drops RTE infrastructure that isn't needed by either the executor or\n> EXPLAIN. But we need it up to that point.\nOK, I will investigate.\n> (After thinking a bit, I'm guessing that it seemed not to break because\n> your tests never actually exercised the generic-plan path, or perhaps\n> there was always a plancache invalidation before we tried to use the\n> query_list submitted by PrepareQuery. I wonder if this is telling us\n> something about the value of having PrepareQuery do that at all,\n> rather than just caching the raw parse tree and calling it a day.)\n\nHaving PrepareQuery do _what_ exactly? Sorry, I am still learning how \neverything works here.\n\nIt seems like the patch crashes the postmaster when I use JOINs \ndirectly in the PreparedStatement, not when I just place all the Joins \nin views. I will also look into this further.\n\n> A few tips on submitting patches:\n>\n> * Providing concrete test cases to back up improvement claims is a\n> good idea.\nOK, I will provide.\n> * Please try to make your code follow established PG style. Ignoring\n> project conventions about whitespace and brace layout just makes your\n> code harder to read. (A lot of us would just summarily run the code\n> through pgindent before trying to review it.)\nOK. 
I just tried to have git diff stop marking my indentations red, but \nI am also new to git, so I will use pgindent now.\n> * Please don't include random cosmetic changes (eg renaming of unrelated\n> variables) in a patch that's meant to illustrate some specific functional\n> change. It just confuses matters.\n\nIt was useful here because I had to declare a ListCell anyway, and \nListCell's in other places were named 'lc' not 'l', and 'l' was usually \nused for lists, so I thought reuse was nice, but OK.\n\n\n\n\n",
"msg_date": "Sun, 4 Aug 2019 07:54:15 +0200",
"msg_from": "Daniel Migowski <dmigowski@ikoffice.de>",
"msg_from_op": true,
"msg_subject": "Re: Patch to clean Query after rewrite-and-analyze - reduces memusage\n up to 50% - increases TPS by up to 50%"
},
{
"msg_contents": "Daniel Migowski <dmigowski@ikoffice.de> writes:\n> Am 03.08.2019 um 18:38 schrieb Tom Lane:\n>> (After thinking a bit, I'm guessing that it seemed not to break because\n>> your tests never actually exercised the generic-plan path, or perhaps\n>> there was always a plancache invalidation before we tried to use the\n>> query_list submitted by PrepareQuery. I wonder if this is telling us\n>> something about the value of having PrepareQuery do that at all,\n>> rather than just caching the raw parse tree and calling it a day.)\n\n> Having PrepareQuery do _what_ exactly? Sorry, I am still learning how \n> everything works here.\n\nA plancache entry stores a raw parsetree (which is, at least\ntheoretically, an immutable representation of the parsed string),\nand an analyzed-and-rewritten parsetree, and optionally a generic\nplan tree. PrepareQuery is setting up the first two of these,\nbut only the raw parsetree is really essential ... or for that\nmatter, it might be possible to store just the source string\nrepresentation and re-parse that. It's all about space versus\nspeed tradeoffs. Our current philosophy is that if you bothered\nto prepare a query it's because you want speed, but perhaps that\nassumption needs rethinking.\n\n> It seems like the patch crashes the postmaster when I use JOINs \n> directly in the PreparedStatement, not when I just place all the Joins \n> in views. I will also look into this further.\n\nUm. Now that I think about it, the regression tests probably don't\ntry to PREPARE any complex queries, so it's not impossible that\nthey just missed the fact that you were storing broken parse trees\nfor joins.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2019 15:14:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to clean Query after rewrite-and-analyze - reduces memusage\n up to 50% - increases TPS by up to 50%"
}
] |
[
{
"msg_contents": "Improve pruning of a default partition\n\nWhen querying a partitioned table containing a default partition, we\nwere wrongly deciding to include it in the scan too early in the\nprocess, failing to exclude it in some cases. If we reinterpret the\nPruneStepResult.scan_default flag slightly, we can do a better job at\ndetecting that it can be excluded. The change is that we avoid setting\nthe flag for that pruning step unless the step absolutely requires the\ndefault partition to be scanned (in contrast with the previous\narrangement, which was to set it unless the step was able to prune it).\nSo get_matching_partitions() must explicitly check the partition that\neach returned bound value corresponds to in order to determine whether\nthe default one needs to be included, rather than relying on the flag\nfrom the final step result.\n\nAuthor: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>\nReviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>\nDiscussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp\n\nBranch\n------\nREL_12_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/86544071484a48c753e719e0c7c9cf816a59a65e\n\nModified Files\n--------------\nsrc/backend/partitioning/partprune.c | 219 ++++++++++++--------------\nsrc/include/partitioning/partbounds.h | 1 -\nsrc/test/regress/expected/partition_prune.out | 20 ++-\nsrc/test/regress/sql/partition_prune.sql | 1 +\n4 files changed, 111 insertions(+), 130 deletions(-)\n\n",
"msg_date": "Sun, 04 Aug 2019 15:22:12 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Improve pruning of a default partition"
},
{
"msg_contents": "On 2019-Aug-04, Alvaro Herrera wrote:\n\n> Improve pruning of a default partition\n\nI just noticed that I failed to credit Shawn Wang, Thibaut Madeleine,\nYoshikazu Imai, Kyotaro Horiguchi as reviewers of this patch.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 4 Aug 2019 15:00:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Improve pruning of a default partition"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the next truss of typos and inconsistencies in\nthe tree:\n\n9.1. NAMESPACE_SQLXML -> remove (not used since the introduction in\n355e05ab)\n9.2. NBXLOG_H -> NBTXLOG_H\n9.3. NEWPAGE -> XLOG_FPI (orphaned since 54685338)\n9.4. newXlogId, newXlogSeg -> newXlogSegNo (orphaned since dfda6eba)\n9.5. nextblno -> nextblkno\n9.6. noCatalogs -> arg\n9.7. nodeWindowFunc.c -> nodeWindowAgg.c\n9.8. NOINHERITS -> NOINHERIT\n9.9. NO_RESPONSE -> PQPING_NO_RESPONSE\n9.10. Normalizationdata.txt- > NormalizationTest.txt\n9.11. NOT_AVAIL -> LOCKACQUIRE_NOT_AVAIL\n9.12. not_point -> is_point (a contradiction with the check and a\ncomment, appeared in 4c1383ef)\n9.13. nptrs, ptrs -> nipd, ipd\n9.14. nbuffers, nrdatas -> max_block_id, ndatas\n9.15. nrow -> nrows\n9.16. nsubxcnt -> subxcnt\n9.17. ntzones, zp -> zonecount, zpfirst\n9.18. nvmblocks -> vm_nblocks\n9.19. objAddress -> objAddr\n9.20. ObjectWithArg -> ObjectWithArgs\n9.21. objOid, relOid -> objectId, classId, subId\n9.22. ObjType -> ObjectType\n9.23. OffsetNumberMask -> remove (not used since PG95-1_01)\n9.24. oid_in_function -> remove (orphaned since 578b2297)\n9.25. oidzero, oidge -> remove (not used since the introduction in 8dc42a3a)\n9.26. oidle -> remove (orphaned since 9f0ae0c8)\n9.27. oideq -> remove (orphaned since 005a1217)\n9.28. OldestMemberMXactID -> OldestMemberMXactId\n9.29. oldestXidDb -> oldestXactDb\n9.30. oldest-Xmin -> oldestXmin\n9.31. old_key_tup -> old_key_tuple\n9.32. on_dsm_callback -> on_dsm_detach callback\n9.33. ONE_PAGE -> TBM_ONE_PAGE\n9.34. opt_boolean -> opt_boolean_or_string (renamed in 5c84fe46)\n9.35. OptimizableStmt -> PreparableStmt (orphaned since aa83bc04)\n9.36. optType -> remove (not used since the introduction in 500b62b0)\n9.37. organizationUnitName -> organizationalUnitName (see\nhttps://github.com/openssl/openssl/issues/1843)\n9.38. origTupDesc -> origTupdesc\n9.39. outputvariables -> output variables\n9.40. 
ovflpgs -> ovflpages\n9.41. OWNER_TO -> OWNER TO\n9.42. PackedPostingList -> GinPostingList (an inconsistency since 36a35c55)\n9.43. PageClearPrunable -> remove (not used since 6f10eb21)\n9.44. pageLSN -> curPageLSN\n9.45. paramListInfo -> ParamListInfo\n9.46. PARSER_FUNC_H -> PARSE_FUNC_H\n9.47. parse_hba -> parse_hba_line (renamed in 98723810)\n9.48. parse_json -> pg_parse_json\n9.49. pathkeys_contain_in -> pathkeys_contained_in\n9.50. pattern1 -> pattern\n9.51. patters -> patterns\n9.52. PerAggData -> WindowStatePerAggData\n9.53. PerFuncData -> WindowStatePerFuncData\n9.54. pgaddtup -> this function\n9.55. pg_atomic_test_and_set_flag -> pg_atomic_test_set_flag\n9.56. pg_binary_read_file -> pg_read_binary_file\n9.57. pgcommonfiles -> remove (orphaned since a7301839)\n9.58. pgcontrolvalue, xlrecvalue -> oldvalue, newvalue\n9.59. pg_dlsym -> dlsym (orphaned since 842cb9fa)\n9.60. pg_encconv_tbl -> pg_enc2name_tbl (orphaned since eb335a03)\n9.61. PGgetline -> PQgetline\n9.62. pghackers -> pgsql-hackers\n9.63. _pg_keysequal -> remove (not used since d26e1eba)\n9.64. pg_language_metadata -> pg_largeobject_metadata\n9.65. pgp_armor -> pg_armor\n9.66. PGresTuple -> PGresult tuple\n9.67. pgstat_recv_resetshared -> pgstat_recv_resetsharedcounter\n9.68. pgtypes_date_months_short -> months (abbreviations)\n9.69. PgXact -> MyPgXact\n9.70. PlaceholderVars -> PlaceHolderVars\n9.71. placeToPage -> beginPlaceToPage\n9.72. PLpgSQL_dynfors -> PLpgSQL_stmt_dynfors\n9.73. plpgsql_init -> _PG_init (orphaned since b09bfcaa)\n9.74. PLy_munge_source -> PLy_procedure_munge_source\n9.75. Point-in-Time -> Point-In-Time\n9.76. PO_FILES, ALL_PO_FILES -> remove (orphaned since 4e1c7207, 5dd41f35)\n9.77. pop_size -> pool_size\n9.78. posid -> ip_posid, blkid -> ip_blkid\n9.79. pq_close, pq_comm_reset -> socket_close, socket_comm_reset\n(renamed in 2bd9e412)\n9.80. pqcomprim.c -> pqformat.c (orphaned since 95cc41b)\n9.81. PQescapeByte()a -> PQescapeBytea()\n9.82. PQSetenv -> pqSetenv\n9.83. 
PreCommit_CheckForSerializableConflicts ->\nPreCommit_CheckForSerializationFailure\n9.84. PREFER -> remove (not used since the introduction in 7bcc6d98)\n9.85. prepared_transactions -> prepared-transactions\n9.86. pressurel -> pressure\n9.87. prettyprint -> pretty-print\n9.88. PRETTY_SCHEMA -> remove (not used since the introduction in 3d2aed66)\n9.89. printAttName, terseOutput, width -> PrintAttNames, TerseOutput,\ncolWidth\n9.90. priorWALFileName -> remove (not used since the introduction in\n51be14e9)\n9.91. PROC_H -> _PROC_H_\n9.92. ProcLWLockTranche -> remove (orphaned since 3761fe3c and\nnon-informational)\n9.93. PUTBYTE -> remove (not used since the introduction in 0c0dde61)\n9.94. px_hmac_block_size, px_hmac_reset -> remove (not used since the\nintroduction in df24cb73)\n9.95. PX_MAX_NAMELEN -> remove (not used since 3cc86612)\n\nAlso, please fix my typo (from the previous set), that was somehow\nsilted through a double check.\n\nBest regards,\nAlexander",
"msg_date": "Mon, 5 Aug 2019 00:33:34 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typos and inconsistencies for HEAD (take 9)"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 12:33:34AM +0300, Alexander Lakhin wrote:\n> 9.1. NAMESPACE_SQLXML -> remove (not used since the introduction in\n> 355e05ab)\n\nLooks important to keep as a matter of documentation.\n\n> 9.12. not_point -> is_point (a contradiction with the check and a\n> comment, appeared in 4c1383ef)\n\nThat would mean a breakage for anybody relying on it, so I would not\nchange that.\n\n> 9.17. ntzones, zp -> zonecount, zpfirst\n\nNope, this is upstream code.\n\n> 9.41. OWNER_TO -> OWNER TO\n\nThis one needs a back-patch. I'll fix that separately.\n\n> 9.43. PageClearPrunable -> remove (not used since 6f10eb21)\n\nBetter to keep it. Some extensions may rely on it, and it costs\nnothing to keep around.\n\n> 9.75. Point-in-Time -> Point-In-Time\n\nWhy? The original looks correct to me.\n\n> 9.76. PO_FILES, ALL_PO_FILES -> remove (orphaned since 4e1c7207, 5dd41f35)\n\nI am wondering if there are external things relying on that. So I\nhave let them.\n\n> 9.88. PRETTY_SCHEMA -> remove (not used since the introduction in 3d2aed66)\n\nThis should be kept for consistent IMO. Even if not used yet, it\ncould be used for future patches.\n\n> 9.94. px_hmac_block_size, px_hmac_reset -> remove (not used since the\n> introduction in df24cb73)\n\nHm. I think that it makes sense to keep them for consistency with the\nother structures and callback reference lookups, and these macros may\nprove to be useful for future patches.\n\n> Also, please fix my typo (from the previous set), that was somehow\n> silted through a double check.\n\nWhich one is that, please? It is likely possible that some obvious\nstuff has been forgotten.\n\nCommitted a large bunch of this stuff. Please note that the\nindentation was incorrect in a couple of places, including the change\nin src/tools/msvc/.\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 12:15:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 9)"
},
{
"msg_contents": "Hello Michael,\n\n05.08.2019 6:15, Michael Paquier wrote:\n>> 9.41. OWNER_TO -> OWNER TO\n> This one needs a back-patch. I'll fix that separately.\n>\nI believe that all the fixes in doc/ should be back-patched too. If it's\nnot too late, I can produce such patches for the nearest releases.\n>> 9.75. Point-in-Time -> Point-In-Time\n> Why? The original looks correct to me.\nacronyms.sgml contains such spelling:\n��� <term><acronym>PITR</acronym></term>\n��� <listitem>\n���� <para>\n����� <link linkend=\"continuous-archiving\">Point-In-Time\n����� Recovery</link> (Continuous Archiving)\n���� </para>\n>> Also, please fix my typo (from the previous set), that was somehow\n>> silted through a double check.\n> Which one is that, please? It is likely possible that some obvious\n> stuff has been forgotten.\n\"It the username successfully retrieved,...\" (fix_for_fix_8.15.patch)\n> Committed a large bunch of this stuff. Please note that the\n> indentation was incorrect in a couple of places, including the change\n> in src/tools/msvc/.\nThank you. I will check my Tab settings next time.\n\nBest regards,\nAlexander",
"msg_date": "Mon, 5 Aug 2019 06:44:46 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 9)"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 12:15:21PM +0900, Michael Paquier wrote:\n> On Mon, Aug 05, 2019 at 12:33:34AM +0300, Alexander Lakhin wrote:\n>> 9.41. OWNER_TO -> OWNER TO\n> \n> This one needs a back-patch. I'll fix that separately.\n\nDone separately as of 05ba837, and back-patched down to 9.6 as this\nhas been introduced by d37b816.\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 14:33:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 9)"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 06:44:46AM +0300, Alexander Lakhin wrote:\n> I believe that all the fixes in doc/ should be back-patched too. If it's\n> not too late, I can produce such patches for the nearest releases.\n\nI think that's unfortunately a bit too late for this release. Those\nthings have been around for years, so three months to make them appear\non the website is not a big deal IMO. If you have something which\ncould be used as a base for back-branches with the most obvious typos,\nthat would be welcome (if you could group everything into a single\npatch file, that would be even better...).\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 14:40:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 9)"
},
{
"msg_contents": "05.08.2019 8:40, Michael Paquier wrote:\n> On Mon, Aug 05, 2019 at 06:44:46AM +0300, Alexander Lakhin wrote:\n>> I believe that all the fixes in doc/ should be back-patched too. If it's\n>> not too late, I can produce such patches for the nearest releases.\n> I think that's unfortunately a bit too late for this release. Those\n> things have been around for years, so three months to make them appear\n> on the website is not a big deal IMO. If you have something which\n> could be used as a base for back-branches with the most obvious typos,\n> that would be welcome (if you could group everything into a single\n> patch file, that would be even better...).\nOK, I will make the desired patches for all the supported branches upon\nreaching the end of my unique journey.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 5 Aug 2019 09:30:06 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 9)"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have noticed today that the two functions in $subject are part of\nlibpq and remain around undocumented since they are around (see\n6ef5846 and 2b84cbb). Isn't it past time to get rid of them? We have\nPQprint as well, which was used in the past by psql but not today,\nstill that's documented. Note that the coverage in this area is a\nperfect 0%:\nhttps://coverage.postgresql.org/src/interfaces/libpq/fe-print.c.gcov.html\n\nI am also wondering how an update to exports.txt should be handled in\nthis case. Just by commenting out the numbers which are not used\nanymore?\n\nThanks,\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 12:27:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Undocumented PQdisplayTuples and PQprintTuples in libpq"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n> I have noticed today that the two functions in $subject are part of\n> libpq and remain around undocumented since they are around (see\n> 6ef5846 and 2b84cbb). Isn't it past time to get rid of them? We have\n> PQprint as well, which was used in the past by psql but not today,\n> still that's documented. Note that the coverage in this area is a\n> perfect 0%:\n> https://coverage.postgresql.org/src/interfaces/libpq/fe-print.c.gcov.html\n\n<sigh>\n\n> I am also wondering how an update to exports.txt should be handled in\n> this case. Just by commenting out the numbers which are not used\n> anymore?\n\nHow do you know that they are not used by anyone in the wild?\nIf they are broken, it would be a clue. If not, possibly someone somewhere \ncould be using it, eg for debug (what does this result look like?).\n\n-- \nFabien.",
"msg_date": "Mon, 5 Aug 2019 06:54:32 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented PQdisplayTuples and PQprintTuples in libpq"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 06:54:32AM +0200, Fabien COELHO wrote:\n> How do you know that they are not used by anyone in the wild?\n> If they are broken, it would be a clue. If not, possibly someone somewhere\n> could be using it, eg for debug (what does this result look like?).\n\nThey have been around for more than 19 years, and they have been\nundocumented for this much amount of time. github does not report any\nreference to any of them. Of course I cannot say that nobody has code\nusing them, but the odds that this number is close to zero are really\nhigh, and that they ought to use something a bit newer if need be.\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 14:46:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Undocumented PQdisplayTuples and PQprintTuples in libpq"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Aug 05, 2019 at 06:54:32AM +0200, Fabien COELHO wrote:\n>> How do you know that they are not used by anyone in the wild?\n>> If they are broken, it would be a clue. If not, possibly someone somewhere\n>> could be using it, eg for debug (what does this result look like?).\n\n> They have been around for more than 19 years, and they have been\n> undocumented for this much amount of time. github does not report any\n> reference to any of them. Of course I cannot say that nobody has code\n> using them, but the odds that this number is close to zero are really\n> high, and that they ought to use something a bit newer if need be.\n\nI'm afraid that we will get pushback from vendors who say that removing an\nexported function is an ABI break. At minimum, certain people will insist\nthat this requires an increment in the shlib major version for libpq.so.\nAnd that will cause a lot of pain, because it'd mean that v13 libpq.so\nis no longer usable by applications built against older releases.\n\nOn the whole, I think benign neglect is the best policy here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Aug 2019 15:12:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented PQdisplayTuples and PQprintTuples in libpq"
}
] |
[
{
"msg_contents": "Hi ,\n\nWhile testing SSL version 1.1.1c , I only enabled TLSv1.2 and rest \nincluding TLSv1.3 has been disabled , like this -\n\npostgres=# show ssl_ciphers ;\n ssl_ciphers\n----------------------------------------------\n TLSv1.2:!aNULL:!SSLv2:!SSLv3:!TLSv1:!TLSv1.3\n\nTo cofirm the same, there is a tool called - sslyze ( SSLyze is a \nPython library and a CLI tool that can analyze the SSL configuration of \na server by connecting to it)\n(https://github.com/nabla-c0d3/sslyze) which i configured on my machine .\n\nRun this command -\n\n[root@localhost Downloads]# python -m sslyze --sslv2 --sslv3 --tlsv1 \n--tlsv1_1 --tlsv1_2 --tlsv1_3 localhost:5432 --starttls=postgres \n--hide_rejected_ciphers\n\n AVAILABLE PLUGINS\n -----------------\n\n CompressionPlugin\n HttpHeadersPlugin\n OpenSslCcsInjectionPlugin\n OpenSslCipherSuitesPlugin\n SessionResumptionPlugin\n FallbackScsvPlugin\n CertificateInfoPlugin\n RobotPlugin\n HeartbleedPlugin\n SessionRenegotiationPlugin\n\n\n\n CHECKING HOST(S) AVAILABILITY\n -----------------------------\n\n localhost:5432 => 127.0.0.1\n\n\n\n\n SCAN RESULTS FOR LOCALHOST:5432 - 127.0.0.1\n -------------------------------------------\n\n * SSLV2 Cipher Suites:\n Server rejected all cipher suites.\n\n** TLSV1_3 Cipher Suites:**\n** Server rejected all cipher suites.**\n*\n * SSLV3 Cipher Suites:\n Server rejected all cipher suites.\n\n * TLSV1_1 Cipher Suites:\n Server rejected all cipher suites.\n\n * TLSV1_2 Cipher Suites:\n Forward Secrecy OK - Supported\n RC4 OK - Not Supported\n\n Preferred:\n TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ECDH-256 \nbits 256 bits\n Accepted:\n TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 DH-2048 \nbits 256 bits\n RSA_WITH_AES_256_CCM_8 - 256 bits\n TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 - 256 \nbits\n TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 - 256 bits\n TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 - 256 bits\n RSA_WITH_AES_256_CCM - 256 bits\n TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 - 256 bits\n 
ARIA256-GCM-SHA384 - 256 bits\n TLS_RSA_WITH_AES_256_CBC_SHA256 - 256 bits\n TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 - 256 bits\n TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 ECDH-256 \nbits 256 bits\n DHE_RSA_WITH_AES_256_CCM_8 - 256 bits\n ECDHE-ARIA256-GCM-SHA384 - 256 bits\n TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ECDH-256 \nbits 256 bits\n TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 DH-2048 \nbits 256 bits\n TLS_RSA_WITH_AES_256_GCM_SHA384 - 256 bits\n TLS_DHE_RSA_WITH_AES_256_CCM - 256 bits\n DHE-RSA-ARIA256-GCM-SHA384 - 256 bits\n TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 - 128 bits\n RSA_WITH_AES_128_CCM_8 - 128 bits\n RSA_WITH_AES_128_CCM - 128 bits\n DHE_RSA_WITH_AES_128_CCM - 128 bits\n DHE_RSA_WITH_AES_128_CCM_8 - 128 bits\n ARIA128-GCM-SHA256 - 128 bits\n TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 - 128 bits\n TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 DH-2048 \nbits 128 bits\n TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 ECDH-256 \nbits 128 bits\n TLS_RSA_WITH_AES_128_CBC_SHA256 - 128 bits\n ECDHE-ARIA128-GCM-SHA256 - 128 bits\n TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 DH-2048 \nbits 128 bits\n TLS_RSA_WITH_AES_128_GCM_SHA256 - 128 bits\n TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 - 128 bits\n DHE-RSA-ARIA128-GCM-SHA256 - 128 bits\n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 ECDH-256 \nbits 128 bits\n\n * TLSV1 Cipher Suites:\n Server rejected all cipher suites.\n\n\n SCAN COMPLETED IN 0.84 S\n ------------------------\n\n\nThese are the ones which got rejected for TLSV1_3\n\n* TLSV1_3 Cipher Suites:\n Rejected:\n TLS_CHACHA20_POLY1305_SHA256 TLS / Alert: \nprotocol version\n*TLS_AES_256_GCM_SHA384* TLS / Alert: protocol \nversion\n TLS_AES_128_GCM_SHA256 TLS / Alert: \nprotocol version\n TLS_AES_128_CCM_SHA256 TLS / Alert: \nprotocol version\n TLS_AES_128_CCM_8_SHA256 TLS / Alert: \nprotocol version\n\nwhen i connect to psql terminal -\n\npsql.bin (10.9)\nSSL connection (protocol: TLSv1.3, cipher: *TLS_AES_256_GCM_SHA384*, \nbits: 256, compression: off)\nType \"help\" for help.\n\npostgres=# 
show ssl_ciphers ;\n ssl_ciphers\n----------------------------------------------\n TLSv1.2:!aNULL:!SSLv2:!SSLv3:!TLSv1:!TLSv1.3\n(1 row)\n\npostgres=#\n\nCipher which has been rejected - should not display in the message.\n\nIs this expected ?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 5 Aug 2019 12:59:29 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "SSL Connection still showing TLSv1.3 even it is disabled in\n ssl_ciphers"
},
{
"msg_contents": "tushar <tushar.ahuja@enterprisedb.com> writes:\n> when i connect to psql terminal -\n\n> psql.bin (10.9)\n> SSL connection (protocol: TLSv1.3, cipher: *TLS_AES_256_GCM_SHA384*, \n> bits: 256, compression: off)\n> Type \"help\" for help.\n\n> postgres=# show ssl_ciphers ;\n> ssl_ciphers\n> ----------------------------------------------\n> TLSv1.2:!aNULL:!SSLv2:!SSLv3:!TLSv1:!TLSv1.3\n> (1 row)\n\nMy guess is that OpenSSL ignored your ssl_ciphers setting on the\ngrounds that it's stupid to reject all possible ciphers.\nIn any case, this would be something to raise with them not us.\nPG does nothing with that value except pass it to SSL_CTX_set_cipher_list.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 10:11:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SSL Connection still showing TLSv1.3 even it is disabled in\n ssl_ciphers"
}
] |
[
{
"msg_contents": "Hi,\n\nSorry if this report is duplicate but there is no column relhasoids in\npg_catalog.pg_class required by the underlying query of the \\d meta\ncommand of psql.\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:38:40 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql's meta command \\d is broken as of 12 beta2."
},
{
"msg_contents": "Hi\n\n\npo 5. 8. 2019 v 13:35 odesílatel Dmitry Igrishin <dmitigr@gmail.com> napsal:\n\n> Hi,\n>\n> Sorry if this report is duplicate but there is no column relhasoids in\n> pg_catalog.pg_class required by the underlying query of the \\d meta\n> command of psql.\n>\n\ndo you use psql from this release?\n\nThe psql client should be higher or same like server.\n\nPavel\n\nHipo 5. 8. 2019 v 13:35 odesílatel Dmitry Igrishin <dmitigr@gmail.com> napsal:Hi,\n\nSorry if this report is duplicate but there is no column relhasoids in\npg_catalog.pg_class required by the underlying query of the \\d meta\ncommand of psql.do you use psql from this release? The psql client should be higher or same like server.Pavel",
"msg_date": "Mon, 5 Aug 2019 13:41:31 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql's meta command \\d is broken as of 12 beta2."
},
{
"msg_contents": "po 5. 8. 2019 v 13:46 odesílatel Dmitry Igrishin <dmitigr@gmail.com> napsal:\n\n> пн, 5 авг. 2019 г. в 14:42, Pavel Stehule <pavel.stehule@gmail.com>:\n> >\n> > Hi\n> >\n> >\n> > po 5. 8. 2019 v 13:35 odesílatel Dmitry Igrishin <dmitigr@gmail.com>\n> napsal:\n> >>\n> >> Hi,\n> >>\n> >> Sorry if this report is duplicate but there is no column relhasoids in\n> >> pg_catalog.pg_class required by the underlying query of the \\d meta\n> >> command of psql.\n> >\n> >\n> > do you use psql from this release?\n> >\n> > The psql client should be higher or same like server.\n> >\n> > Pavel\n> Oops, I'm wrong. When I looked at describe.c I understood my mistake.\n> Sorry for noise and thank you!\n>\n\nno problem\n\nPavel\n\npo 5. 8. 2019 v 13:46 odesílatel Dmitry Igrishin <dmitigr@gmail.com> napsal:пн, 5 авг. 2019 г. в 14:42, Pavel Stehule <pavel.stehule@gmail.com>:\n>\n> Hi\n>\n>\n> po 5. 8. 2019 v 13:35 odesílatel Dmitry Igrishin <dmitigr@gmail.com> napsal:\n>>\n>> Hi,\n>>\n>> Sorry if this report is duplicate but there is no column relhasoids in\n>> pg_catalog.pg_class required by the underlying query of the \\d meta\n>> command of psql.\n>\n>\n> do you use psql from this release?\n>\n> The psql client should be higher or same like server.\n>\n> Pavel\nOops, I'm wrong. When I looked at describe.c I understood my mistake.\nSorry for noise and thank you!no problemPavel",
"msg_date": "Mon, 5 Aug 2019 13:50:08 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql's meta command \\d is broken as of 12 beta2."
},
{
"msg_contents": "пн, 5 авг. 2019 г. в 14:42, Pavel Stehule <pavel.stehule@gmail.com>:\n>\n> Hi\n>\n>\n> po 5. 8. 2019 v 13:35 odesílatel Dmitry Igrishin <dmitigr@gmail.com> napsal:\n>>\n>> Hi,\n>>\n>> Sorry if this report is duplicate but there is no column relhasoids in\n>> pg_catalog.pg_class required by the underlying query of the \\d meta\n>> command of psql.\n>\n>\n> do you use psql from this release?\n>\n> The psql client should be higher or same like server.\n>\n> Pavel\nOops, I'm wrong. When I looked at describe.c I understood my mistake.\nSorry for noise and thank you!\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:50:27 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql's meta command \\d is broken as of 12 beta2."
}
] |
[
{
"msg_contents": "-hackers,\n\nI went through and made some readability and modernization of the \nintro.sgml today. Patch attached.\n\nJD\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\nPostgres centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Get help: https://commandprompt.com/\n***** Unless otherwise stated, opinions are my own. *****",
"msg_date": "Mon, 5 Aug 2019 12:20:18 -0700",
"msg_from": "\"Joshua D. Drake\" <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Cleanup of intro.sgml"
},
{
"msg_contents": "On 8/5/19 3:20 PM, Joshua D. Drake wrote:\n> intro.sgml today. Patch attached.\n\nThings I noticed quickly:\n\nbroken up in to categories s/in to/into/\n\nUnstructured data via JSON (or XML ?)\n\ns/Partioniing/Partitioning/\n\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 5 Aug 2019 16:13:52 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Cleanup of intro.sgml"
},
{
"msg_contents": "On 8/5/19 1:13 PM, Chapman Flack wrote:\n> On 8/5/19 3:20 PM, Joshua D. Drake wrote:\n>> intro.sgml today. Patch attached.\n> Things I noticed quickly:\n>\n> broken up in to categories s/in to/into/\n\nGot it, I can make that change.\n\n\n> Unstructured data via JSON (or XML ?)\n\nOn this one, there is a lot of argument about whether XML is structured \nor not. I do agree that adding XML support would be good though as many \npeople think that JSON rules the world but the old money companies are \nstill using XML.\n\n\n> s/Partioniing/Partitioning/\n\nThanks for the catch. I will make the change.\n\nJD\n\n\n>\n> Regards,\n> -Chap\n>\n>\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\nPostgres centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Get help: https://commandprompt.com/\n***** Unless otherwise stated, opinions are my own. *****\n\n\n\n",
"msg_date": "Tue, 6 Aug 2019 10:49:44 -0700",
"msg_from": "\"Joshua D. Drake\" <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup of intro.sgml"
},
{
"msg_contents": "Rev 2 attached.\n\n\nAdded:\n\nSQL/JSON\n\nSQL/XML\n\nFixed spelling mistakes\n\nFixed a missing closing tag.\n\n\n\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\nPostgres centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Get help: https://commandprompt.com/\n***** Unless otherwise stated, opinions are my own. *****",
"msg_date": "Tue, 6 Aug 2019 14:09:43 -0700",
"msg_from": "\"Joshua D. Drake\" <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleanup of intro.sgml"
}
] |
[
{
"msg_contents": "I got frustrated just now because this:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-08-05%2021%3A18%3A23\n\nis essentially undebuggable, thanks to the buildfarm's failure to\ncapture any error output from slapd. That's not the buildfarm\nscript's fault: it's willing to capture everything placed in the\nagreed-on log directory. But the TAP test script randomly places\nthe daemon's log file somewhere else, one level up. The kerberos\ntest script has the same problem.\n\nHence, I propose the attached. This just moves the actual log\nfiles ... we could possibly move the daemons' .conf files as well,\nbut I think they're probably not variable enough to be interesting.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 05 Aug 2019 19:26:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Putting kerberos/ldap logs somewhere useful"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 07:26:45PM -0400, Tom Lane wrote:\n> Hence, I propose the attached. This just moves the actual log\n> files ...\n\n+1 for this. The patch looks good.\n\n> we could possibly move the daemons' .conf files as well,\n> but I think they're probably not variable enough to be interesting.\n\nNot sure that it is actually necessary. If that proves to be needed,\nthis could always be done later on.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 14:17:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Putting kerberos/ldap logs somewhere useful"
}
] |
[
{
"msg_contents": "postgres=# create table t (a int, b int);\nCREATE TABLE\npostgres=# create index m on t(a);\nCREATE INDEX\npostgres=# create index m2 on t(a);\nCREATE INDEX\npostgres=# \\d t\n Table \"demo.t\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nIndexes:\n \"m\" btree (a)\n \"m2\" btree (a)\n\n\nis this by design?",
"msg_date": "Tue, 6 Aug 2019 10:34:19 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg can create duplicated index without any errors even warnning"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 7:34 PM Alex <zhihui.fan1213@gmail.com> wrote:\n> is this by design?\n\nYes. Being able to do this is useful for several reasons. For example,\nit's useful to be able to create a new, equivalent index before\ndropping the original when the original is bloated. (You could use\nREINDEX instead, but that has some disadvantages that you might want\nto avoid.)\n\nQuestions like this are better suited to the pgsql-general list.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Aug 2019 20:16:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg can create duplicated index without any errors even warnning"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 08:16:11PM -0700, Peter Geoghegan wrote:\n> Yes. Being able to do this is useful for several reasons. For example,\n> it's useful to be able to create a new, equivalent index before\n> dropping the original when the original is bloated. (You could use\n> REINDEX instead, but that has some disadvantages that you might want\n> to avoid.)\n\nREINDEX CONCURRENTLY recently added to v12 relies on that heavily\nactually, so as you can finish with the same index definition twice in\nthe state of swapping both index definitions.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 12:50:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg can create duplicated index without any errors even warnning"
},
{
"msg_contents": "Alex <zhihui.fan1213@gmail.com> writes:\n> postgres=# create table t (a int, b int);\n> CREATE TABLE\n> postgres=# create index m on t(a);\n> CREATE INDEX\n> postgres=# create index m2 on t(a);\n> CREATE INDEX\n\n> is this by design?\n\nYes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 01:32:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg can create duplicated index without any errors even warnning"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached patch for:\n\ns/incompable/incompatible/g\n\nThanks,\nAmit",
"msg_date": "Tue, 6 Aug 2019 17:34:06 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a typo in add_partial_path"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 05:34:06PM +0900, Amit Langote wrote:\n> Attached patch for:\n> \n> s/incompable/incompatible/g\n\nThanks, applied.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 18:12:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in add_partial_path"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 6:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Aug 06, 2019 at 05:34:06PM +0900, Amit Langote wrote:\n> > Attached patch for:\n> >\n> > s/incompable/incompatible/g\n>\n> Thanks, applied.\n\nThank you Michael.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Tue, 6 Aug 2019 18:17:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix a typo in add_partial_path"
}
] |
[
{
"msg_contents": "I propose to apply the attached patch (to master) to update the DocBook\nversion to 4.5 (from 4.2). This basically just gets us off some random\nintermediate minor version to the latest within that major version.\n\nMost packagings put all 4.* versions into one package, so you probably\ndon't need to change anything in your tools. The exception is MacPorts,\nas seen in the patch. (DocBook 4.5 was released in 2006, so it should\nbe available everywhere.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 6 Aug 2019 11:49:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Update to DocBook 4.5"
}
] |
[
{
"msg_contents": "I'm getting the below, and am unaware of how to fix it....\n\n11.4 on FreeBSD 12.\n\n\n\nler=# reindex (verbose) table dns_query ;\nINFO: index \"dns_query_pkey\" was reindexed\nDETAIL: CPU: user: 114.29 s, system: 207.94 s, elapsed: 698.87 s\nERROR: index \"pg_toast_17760_index\" contains unexpected zero page at \nblock 23686\nHINT: Please REINDEX it.\nCONTEXT: parallel worker\nler=# reindex index pg_toast_17760_index;\nERROR: relation \"pg_toast_17760_index\" does not exist\nler=# reindex (verbose) database ler;\nINFO: index \"pg_class_oid_index\" was reindexed\nDETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: index \"pg_class_relname_nsp_index\" was reindexed\nDETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: index \"pg_class_tblspc_relfilenode_index\" was reindexed\nDETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: table \"pg_catalog.pg_class\" was reindexed\nload: 14.53 cmd: psql 2675 [select] 2765.27r 0.01u 0.01s 0% 8292k\nINFO: index \"dns_query_pkey\" was reindexed\nDETAIL: CPU: user: 112.91 s, system: 205.51 s, elapsed: 688.28 s\nERROR: index \"pg_toast_17760_index\" contains unexpected zero page at \nblock 23686\nHINT: Please REINDEX it.\nler=#\n\nler=# select version();\n \nversion\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 11.4 on amd64-portbld-freebsd12.0, compiled by FreeBSD clang \nversion 8.0.0 (tags/RELEASE_800/final 356365) (based on LLVM 8.0.0), \n64-bit\n(1 row)\n\nler=#\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n",
"msg_date": "Tue, 06 Aug 2019 12:06:45 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "How am I supposed to fix this?"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 12:06:45PM -0500, Larry Rosenman wrote:\n>I'm getting the below, and am unaware of how to fix it....\n>\n>11.4 on FreeBSD 12.\n>\n>\n>\n>ler=# reindex (verbose) table dns_query ;\n>INFO: index \"dns_query_pkey\" was reindexed\n>DETAIL: CPU: user: 114.29 s, system: 207.94 s, elapsed: 698.87 s\n>ERROR: index \"pg_toast_17760_index\" contains unexpected zero page at \n>block 23686\n>HINT: Please REINDEX it.\n>CONTEXT: parallel worker\n>ler=# reindex index pg_toast_17760_index;\n>ERROR: relation \"pg_toast_17760_index\" does not exist\n\nYou probably need to explicitly say pg_toast.pg_toast_17760_index here,\nbecause pg_toast schema is not part of the search_path by default.\n\n>ler=# reindex (verbose) database ler;\n>INFO: index \"pg_class_oid_index\" was reindexed\n>DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>INFO: index \"pg_class_relname_nsp_index\" was reindexed\n>DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>INFO: index \"pg_class_tblspc_relfilenode_index\" was reindexed\n>DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>INFO: table \"pg_catalog.pg_class\" was reindexed\n>load: 14.53 cmd: psql 2675 [select] 2765.27r 0.01u 0.01s 0% 8292k\n>INFO: index \"dns_query_pkey\" was reindexed\n>DETAIL: CPU: user: 112.91 s, system: 205.51 s, elapsed: 688.28 s\n>ERROR: index \"pg_toast_17760_index\" contains unexpected zero page at \n>block 23686\n>HINT: Please REINDEX it.\n>ler=#\n>\n\nAssuming the toast index is corrupted, this is kinda expected (when trying\nto reindex an index on the toasted data).\n\nThe question is how much other data corruption is there ...\n\n\nregards\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 6 Aug 2019 19:19:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On 2019-Aug-06, Larry Rosenman wrote:\n\n> ler=# reindex index pg_toast_17760_index;\n> ERROR: relation \"pg_toast_17760_index\" does not exist\n\nMaybe try \"reindex index pg_toast.pg_toast_17760_index\"\n\n> ler=# reindex (verbose) database ler;\n[...]\n> ERROR: index \"pg_toast_17760_index\" contains unexpected zero page at block\n> 23686\n> HINT: Please REINDEX it.\n\nI suspect REINDEX is trying to access that index for some other reason\nthan reindexing it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 6 Aug 2019 13:20:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 10:19 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> The question is how much other data corruption is there ...\n\nLarry could try running amcheck on the other indexes. Just the basic\nbt_check_index() checks should be enough to detect problems like this.\nThey can be run fairly non-disruptively. Something like this should do\nit:\n\nSELECT bt_index_check(index => c.oid),\n c.relname,\n c.relpages\nFROM pg_index i\nJOIN pg_opclass op ON i.indclass[0] = op.oid\nJOIN pg_am am ON op.opcmethod = am.oid\nJOIN pg_class c ON i.indexrelid = c.oid\nJOIN pg_namespace n ON c.relnamespace = n.oid\nWHERE am.amname = 'btree'\n-- Don't check temp tables, which may be from another session:\nAND c.relpersistence != 't'\n-- Function may throw an error when this is omitted:\nAND c.relkind = 'i' AND i.indisready AND i.indisvalid\nORDER BY c.relpages DESC;\n\nIf this takes too long, you can always adjust the query to only verify\nsystem indexes or TOAST indexes.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 6 Aug 2019 10:30:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On 08/06/2019 12:30 pm, Peter Geoghegan wrote:\n> On Tue, Aug 6, 2019 at 10:19 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> The question is how much other data corruption is there ...\n> \n> Larry could try running amcheck on the other indexes. Just the basic\n> bt_check_index() checks should be enough to detect problems like this.\n> They can be run fairly non-disruptively. Something like this should do\n> it:\n> \n> SELECT bt_index_check(index => c.oid),\n> c.relname,\n> c.relpages\n> FROM pg_index i\n> JOIN pg_opclass op ON i.indclass[0] = op.oid\n> JOIN pg_am am ON op.opcmethod = am.oid\n> JOIN pg_class c ON i.indexrelid = c.oid\n> JOIN pg_namespace n ON c.relnamespace = n.oid\n> WHERE am.amname = 'btree'\n> -- Don't check temp tables, which may be from another session:\n> AND c.relpersistence != 't'\n> -- Function may throw an error when this is omitted:\n> AND c.relkind = 'i' AND i.indisready AND i.indisvalid\n> ORDER BY c.relpages DESC;\n> \n> If this takes too long, you can always adjust the query to only verify\n> system indexes or TOAST indexes.\nler=# SELECT bt_index_check(index => c.oid),\nler-# c.relname,\nler-# c.relpages\nler-# FROM pg_index i\nler-# JOIN pg_opclass op ON i.indclass[0] = op.oid\nler-# JOIN pg_am am ON op.opcmethod = am.oid\nler-# JOIN pg_class c ON i.indexrelid = c.oid\nler-# JOIN pg_namespace n ON c.relnamespace = n.oid\nler-# WHERE am.amname = 'btree'\nler-# -- Don't check temp tables, which may be from another session:\nler-# AND c.relpersistence != 't'\nler-# -- Function may throw an error when this is omitted:\nler-# AND c.relkind = 'i' AND i.indisready AND i.indisvalid\nler-# ORDER BY c.relpages DESC;\nERROR: function bt_index_check(index => oid) does not exist\nLINE 1: SELECT bt_index_check(index => c.oid),\n ^\nHINT: No function matches the given name and argument types. You might \nneed to add explicit type casts.\nler=#\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n",
"msg_date": "Tue, 06 Aug 2019 12:34:26 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 10:34 AM Larry Rosenman <ler@lerctr.org> wrote:\n> ERROR: function bt_index_check(index => oid) does not exist\n> LINE 1: SELECT bt_index_check(index => c.oid),\n> ^\n> HINT: No function matches the given name and argument types. You might\n> need to add explicit type casts.\n\nIt's a contrib extension, so you have to \"create extension amcheck\" first.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 6 Aug 2019 10:35:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On 08/06/2019 12:35 pm, Peter Geoghegan wrote:\n> On Tue, Aug 6, 2019 at 10:34 AM Larry Rosenman <ler@lerctr.org> wrote:\n>> ERROR: function bt_index_check(index => oid) does not exist\n>> LINE 1: SELECT bt_index_check(index => c.oid),\n>> ^\n>> HINT: No function matches the given name and argument types. You \n>> might\n>> need to add explicit type casts.\n> \n> It's a contrib extension, so you have to \"create extension amcheck\" \n> first.\n\n\nthe check is running (this is a HUGE table).\n\nFor the initial error, it would be nice if:\n1) the pg_toast schema was mentioned\nor\n2) reindex searched pg_toast as well.\n\nI did do the reindex pg_toast. index.\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n",
"msg_date": "Tue, 06 Aug 2019 12:45:58 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On 08/06/2019 12:45 pm, Larry Rosenman wrote:\n> On 08/06/2019 12:35 pm, Peter Geoghegan wrote:\n>> On Tue, Aug 6, 2019 at 10:34 AM Larry Rosenman <ler@lerctr.org> wrote:\n>>> ERROR: function bt_index_check(index => oid) does not exist\n>>> LINE 1: SELECT bt_index_check(index => c.oid),\n>>> ^\n>>> HINT: No function matches the given name and argument types. You \n>>> might\n>>> need to add explicit type casts.\n>> \n>> It's a contrib extension, so you have to \"create extension amcheck\" \n>> first.\n> \n> \n> the check is running (this is a HUGE table).\n> \n> For the initial error, it would be nice if:\n> 1) the pg_toast schema was mentioned\n> or\n> 2) reindex searched pg_toast as well.\n> \n> I did do the reindex pg_toast. index.\n\nAs a followup, btcheck found another index that had issues, and a toast \ntable was missing a chunk.\n\nI have ALL the data I used to create this table still around so I just \ndropped it and am reloading the data.\n\nI still think that the error message should mention the fully qualified \nindex name.\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n",
"msg_date": "Tue, 06 Aug 2019 13:11:14 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 11:11 AM Larry Rosenman <ler@lerctr.org> wrote:\n> As a followup, btcheck found another index that had issues, and a toast\n> table was missing a chunk.\n>\n> I have ALL the data I used to create this table still around so I just\n> dropped it and am reloading the data.\n\nIt sounds like there is a generic storage issue at play here. Often\nTOAST data is the apparent first thing that gets corrupted, because\nthat's only because the inconsistencies are relatively obvious.\n\nI suggest that you rerun amcheck using the same query, though this\ntime specify \"heapallindexed=true\" to bt_check_index(). Increase\nmaintenance_work_mem if it's set to a low value first (ideally you can\ncrank it up to 600MB). This type of verification will take a lot\nlonger, but will find more subtle inconsistencies that could easily be\nmissed.\n\nPlease let us know how this goes. I am always keen to hear about how\nmuch the tooling helps in the real world.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 6 Aug 2019 11:16:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: How am I supposed to fix this?"
},
{
"msg_contents": "On 08/06/2019 1:16 pm, Peter Geoghegan wrote:\n> On Tue, Aug 6, 2019 at 11:11 AM Larry Rosenman <ler@lerctr.org> wrote:\n>> As a followup, btcheck found another index that had issues, and a \n>> toast\n>> table was missing a chunk.\n>> \n>> I have ALL the data I used to create this table still around so I just\n>> dropped it and am reloading the data.\n> \n> It sounds like there is a generic storage issue at play here. Often\n> TOAST data is the apparent first thing that gets corrupted, because\n> that's only because the inconsistencies are relatively obvious.\n> \n> I suggest that you rerun amcheck using the same query, though this\n> time specify \"heapallindexed=true\" to bt_check_index(). Increase\n> maintenance_work_mem if it's set to a low value first (ideally you can\n> crank it up to 600MB). This type of verification will take a lot\n> longer, but will find more subtle inconsistencies that could easily be\n> missed.\n> \n> Please let us know how this goes. I am always keen to hear about how\n> much the tooling helps in the real world.\n\nI've already dropped and re-created the table involved (a metric crapton\nof DNS queries). I know why and how this happened as well. I had to\nfully restore my system, and bacula didn't catch all the data etc since \nit\nwas being modified, and I didn't do the smart thing then and restore \nfrom a pg_dump.\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n",
"msg_date": "Tue, 06 Aug 2019 13:19:34 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: How am I supposed to fix this?"
}
] |
[
{
"msg_contents": "While looking at the pending patch for faster GIN index searches\non no-key queries, I was motivated to improve contrib/intarray's\nregression test to exercise the GIN_SEARCH_MODE_ALL case, because\nit didn't. And then I thought well, let's try to bring the code\ncoverage of _int_gin.c up to something respectable, which led me\nto the regression test additions shown in the attached. And I\nwas astonished to observe that the GiST index cases mostly got\nthe wrong answer for the <@ query. Sometimes they got the right\nanswer, but mostly not. After some digging I saw that the problem\nwas that there are a number of empty arrays ('{}') in the data,\nand those should surely all match the WHERE a <@ '{73,23,20}'\ncondition, but the GiST opclasses were not reliably finding them.\n\nThe reason appears to be that the condition for descending through a\nnon-leaf index key for the RTContainedBy case is incorrectly optimistic:\nit supposes that we only need to descend into subtrees whose union key\noverlaps the query array. But this does not guarantee to find subtrees\nthat contain empty-array entries. Worse, such entries could be anywhere\nin the tree, and because of the way that the insertion penalty is\ncalculated, they probably are. (We will compute a zero penalty to add\nan empty array item to any subtree.) The reason it sometimes works\nseems to be that GiST randomizes its insertion decisions when there are\nequal penalties (cf gistchoose()), and sometimes by luck it puts all\nof the empty-array entries into subtrees that the existing rule will\nsearch.\n\nSo as far as I can see, we have little choice but to lobotomize the\nRTContainedBy case and force a whole-index search. This applies to\nboth the gist__int_ops and gist__intbig_ops opclasses. This is\npretty awful for any applications that are depending on such queries\nto be fast, but it's hard to argue with \"it gets the wrong answer,\nand not even reproducibly so\".\n\nIn the future we might think about removing <@ from these opclasses,\nor making a non-backward-compatible change to segregate empty arrays\nfrom everything else in the index. But neither answer seems very\nback-patchable, and I'm not really sure I want to put so much work\ninto a second-class-citizen contrib module anyway.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 06 Aug 2019 13:55:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "intarray GiST index gets wrong answers for '{}' <@ anything"
},
{
"msg_contents": "Hi!\n\nOn Tue, Aug 6, 2019 at 8:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The reason appears to be that the condition for descending through a\n> non-leaf index key for the RTContainedBy case is incorrectly optimistic:\n> it supposes that we only need to descend into subtrees whose union key\n> overlaps the query array. But this does not guarantee to find subtrees\n> that contain empty-array entries. Worse, such entries could be anywhere\n> in the tree, and because of the way that the insertion penalty is\n> calculated, they probably are. (We will compute a zero penalty to add\n> an empty array item to any subtree.) The reason it sometimes works\n> seems to be that GiST randomizes its insertion decisions when there are\n> equal penalties (cf gistchoose()), and sometimes by luck it puts all\n> of the empty-array entries into subtrees that the existing rule will\n> search.\n\nRight, existing logic could work correctly, when dataset contains no\nempty arrays. But it clearly doesn't handle empty arrays.\n\n> So as far as I can see, we have little choice but to lobotomize the\n> RTContainedBy case and force a whole-index search. This applies to\n> both the gist__int_ops and gist__intbig_ops opclasses. This is\n> pretty awful for any applications that are depending on such queries\n> to be fast, but it's hard to argue with \"it gets the wrong answer,\nand not even reproducibly so\".\n\n+1 for pushing this\n\n> In the future we might think about removing <@ from these opclasses,\n> or making a non-backward-compatible change to segregate empty arrays\n> from everything else in the index. But neither answer seems very\n> back-patchable, and I'm not really sure I want to put so much work\n> into a second-class-citizen contrib module anyway.\n\n+1 for removing <@ from opclasses. Trying to segregate empty arrays\nlooks like invention of new opclass rather than bugfix for current\none. One, who is interested in this piece of work, can implement this\nnew opclass.\n\nUsers, who likes existing behavior of handling <@ operator in intarray\nopclasses, may be advised to rewrite their queries as following.\n\n\"col <@ const\" => \"col <@ const AND col && const\"\n\nNew queries would have opclass support and handle non-empty arrays in\nthe same way. It will be slightly slower because of evaluation of two\noperators instead of one. But this doesn't seem critical.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 6 Aug 2019 21:18:41 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: intarray GiST index gets wrong answers for '{}' <@ anything"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> Users, who likes existing behavior of handling <@ operator in intarray\n> opclasses, may be advised to rewrite their queries as following.\n\n> \"col <@ const\" => \"col <@ const AND col && const\"\n\nOh, that's a good suggestion --- it will work, and work reasonably\nwell, with either unpatched or patched intarray code; and also with\nsome future version that doesn't consider <@ indexable at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 14:42:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: intarray GiST index gets wrong answers for '{}' <@ anything"
}
] |
[
{
"msg_contents": "Hi,\n\nThe attached self-documented patch fixes build on Windows in case when\npath to Python has embedded spaces.",
"msg_date": "Tue, 6 Aug 2019 22:50:14 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small patch to fix build on Windows"
},
{
"msg_contents": "Hi,\n\nAt Tue, 6 Aug 2019 22:50:14 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KO4Nt-kDUKAcEKFND+1LeZ6nH_hjPGamonfTeZLRKz0bg@mail.gmail.com>\n> The attached self-documented patch fixes build on Windows in case when\n> path to Python has embedded spaces.\n\n- $solution->{options}->{python} . \"\\\\python -c \\\"$pythonprog\\\"\";\n+ \"\\\"$solution->{options}->{python}\\\\python\\\" -c \\\"$pythonprog\\\"\";\n\nSolution.pm has the following line:\n\n>\tmy $opensslcmd =\n>\t $self->{options}->{openssl} . \"\\\\bin\\\\openssl.exe version 2>&1\";\n\nAFAICS that's all.\n\n\n- if ($lib =~ m/\\s/)\n- {\n- $lib = '"' . $lib . \""\";\n- }\n+ # Since VC automatically quotes paths specified as the data of\n+ # <AdditionalDependencies> in VC project file, it's mistakably\n+ # to quote them here. Thus, it's okay if $lib contains spaces.\n\nI'm not sure, but it's not likely that someone adds it without\nactually stumbling on space-containing paths with the ealier\nversion. Anyway if we shouldn't touch this unless the existing\ncode makes actual problem.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Aug 2019 17:28:51 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "ср, 7 авг. 2019 г. в 11:29, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n>\n> Hi,\n>\n> At Tue, 6 Aug 2019 22:50:14 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KO4Nt-kDUKAcEKFND+1LeZ6nH_hjPGamonfTeZLRKz0bg@mail.gmail.com>\n> > The attached self-documented patch fixes build on Windows in case when\n> > path to Python has embedded spaces.\n>\n> - $solution->{options}->{python} . \"\\\\python -c \\\"$pythonprog\\\"\";\n> + \"\\\"$solution->{options}->{python}\\\\python\\\" -c \\\"$pythonprog\\\"\";\n>\n> Solution.pm has the following line:\n>\n> > my $opensslcmd =\n> > $self->{options}->{openssl} . \"\\\\bin\\\\openssl.exe version 2>&1\";\n>\n> AFAICS that's all.\nThank you! The attached 2nd version of the patch fixes this too.\n\n>\n>\n> - if ($lib =~ m/\\s/)\n> - {\n> - $lib = '"' . $lib . \""\";\n> - }\n> + # Since VC automatically quotes paths specified as the data of\n> + # <AdditionalDependencies> in VC project file, it's mistakably\n> + # to quote them here. Thus, it's okay if $lib contains spaces.\n>\n> I'm not sure, but it's not likely that someone adds it without\n> actually stumbling on space-containing paths with the ealier\n> version. Anyway if we shouldn't touch this unless the existing\n> code makes actual problem.\nSo, do you think a comment is not needed here?",
"msg_date": "Wed, 7 Aug 2019 12:14:48 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 11:11 AM Dmitry Igrishin <dmitigr@gmail.com> wrote:\n>\n> ср, 7 авг. 2019 г. в 11:29, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n> >\n> > Solution.pm has the following line:\n> >\n> > > my $opensslcmd =\n> > > $self->{options}->{openssl} . \"\\\\bin\\\\openssl.exe version 2>&1\";\n> >\n> > AFAICS that's all.\n> Thank you! The attached 2nd version of the patch fixes this too.\n>\n\nAt some point the propossed patch for opensslcmd was like:\n\n+ my $opensslprog = '\\bin\\openssl.exe version 2>&1';\n+ my $opensslcmd = '\"' . $self->{options}->{openssl} . '\"' . $opensslprog;\n\nIt can be a question of taste, but I think the dot is easier to read.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n\n",
"msg_date": "Wed, 7 Aug 2019 14:33:28 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "ср, 7 авг. 2019 г. в 15:33, Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com>:\n>\n> On Wed, Aug 7, 2019 at 11:11 AM Dmitry Igrishin <dmitigr@gmail.com> wrote:\n> >\n> > ср, 7 авг. 2019 г. в 11:29, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n> > >\n> > > Solution.pm has the following line:\n> > >\n> > > > my $opensslcmd =\n> > > > $self->{options}->{openssl} . \"\\\\bin\\\\openssl.exe version 2>&1\";\n> > >\n> > > AFAICS that's all.\n> > Thank you! The attached 2nd version of the patch fixes this too.\n> >\n>\n> At some point the propossed patch for opensslcmd was like:\n>\n> + my $opensslprog = '\\bin\\openssl.exe version 2>&1';\n> + my $opensslcmd = '\"' . $self->{options}->{openssl} . '\"' . $opensslprog;\n>\n> It can be a question of taste, but I think the dot is easier to read.\nWell, the style inconsistent anyway, for example, in file Project.pm\n\n$self->{def} = \"./__CFGNAME__/$self->{name}/$self->{name}.def\";\n$self->{implib} = \"__CFGNAME__/$self->{name}/$libname\";\n\nSo, I don't know what style is preferable. Personally, I don't care.\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:37:58 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "Hello.\n\nAt Wed, 7 Aug 2019 12:14:48 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KPVff92Np51DvvCDvqvxVchiuuvJCzz56qtM=N0SUnG8A@mail.gmail.com>\n> > - if ($lib =~ m/\\s/)\n> > - {\n> > - $lib = '"' . $lib . \""\";\n> > - }\n> > + # Since VC automatically quotes paths specified as the data of\n> > + # <AdditionalDependencies> in VC project file, it's mistakably\n> > + # to quote them here. Thus, it's okay if $lib contains spaces.\n> >\n> > I'm not sure, but it's not likely that someone adds it without\n> > actually stumbling on space-containing paths with the ealier\n> > version. Anyway if we shouldn't touch this unless the existing\n> > code makes actual problem.\n> So, do you think a comment is not needed here?\n\n# Sorry the last phrase above is broken.\n\nI meant \"if it ain't broke don't fix it\". \n\nI doubt that some older versions of VC might need it. I confirmed\nthat the extra " actually harms at least VC2019 and the code\nremoval in the patch works. As for the replace comment, I'm not\nsure it is needed since I think quoting is not the task for\nAddLibrary/AddIncludeDir in the first place (and AddIncludeDir\ndoesn't have the same comment).\n\nNow I'm trying to install VS2015 into my alomost-filled-up disk..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 08 Aug 2019 12:15:38 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "чт, 8 авг. 2019 г. в 06:18, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n>\n> Hello.\n>\n> At Wed, 7 Aug 2019 12:14:48 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KPVff92Np51DvvCDvqvxVchiuuvJCzz56qtM=N0SUnG8A@mail.gmail.com>\n> > > - if ($lib =~ m/\\s/)\n> > > - {\n> > > - $lib = '"' . $lib . \""\";\n> > > - }\n> > > + # Since VC automatically quotes paths specified as the data of\n> > > + # <AdditionalDependencies> in VC project file, it's mistakably\n> > > + # to quote them here. Thus, it's okay if $lib contains spaces.\n> > >\n> > > I'm not sure, but it's not likely that someone adds it without\n> > > actually stumbling on space-containing paths with the ealier\n> > > version. Anyway if we shouldn't touch this unless the existing\n> > > code makes actual problem.\n> > So, do you think a comment is not needed here?\n>\n> # Sorry the last phrase above is broken.\n>\n> I meant \"if it ain't broke don't fix it\".\n>\n> I doubt that some older versions of VC might need it. I confirmed\n> that the extra " actually harms at least VC2019 and the code\n> removal in the patch works.\nThe code removal is required also to build on VC2017.\n\n> As for the replace comment, I'm not\n> sure it is needed since I think quoting is not the task for\n> AddLibrary/AddIncludeDir in the first place (and AddIncludeDir\n> doesn't have the same comment).\nThe attached 3rd version of the patch contains no comment in AddLibrary().\n\n>\n> Now I'm trying to install VS2015 into my alomost-filled-up disk..\nThank you!\n\n\n",
"msg_date": "Thu, 8 Aug 2019 10:38:11 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "> > As for the replace comment, I'm not\n> > sure it is needed since I think quoting is not the task for\n> > AddLibrary/AddIncludeDir in the first place (and AddIncludeDir\n> > doesn't have the same comment).\nThe attached 3rd version of the patch contains no comment in AddLibrary().\n\nSorry, forgot to attach the patch to the previous mail.",
"msg_date": "Thu, 8 Aug 2019 10:40:29 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "Hello.\n\nAt Thu, 08 Aug 2019 12:15:38 +0900 (Tokyo Standard Time), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in <20190808.121538.87367461.horikyota.ntt@gmail.com>\n> At Wed, 7 Aug 2019 12:14:48 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KPVff92Np51DvvCDvqvxVchiuuvJCzz56qtM=N0SUnG8A@mail.gmail.com>\n> > > - if ($lib =~ m/\\s/)\n> > > - {\n> > > - $lib = '"' . $lib . \""\";\n> > > - }\n> > > + # Since VC automatically quotes paths specified as the data of\n> > > + # <AdditionalDependencies> in VC project file, it's mistakably\n> > > + # to quote them here. Thus, it's okay if $lib contains spaces.\n..\n> I doubt that some older versions of VC might need it. I confirmed\n> that the extra " actually harms at least VC2019 and the code\n> removal in the patch works. As for the replace comment, I'm not\n> sure it is needed since I think quoting is not the task for\n> AddLibrary/AddIncludeDir in the first place (and AddIncludeDir\n> doesn't have the same comment).\n> \n> Now I'm trying to install VS2015 into my alomost-filled-up disk..\n\nI confirmed that VC2015 works with the above diff. So the patch\nis overall fine with me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 08 Aug 2019 17:43:11 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "On 2019-Aug-08, Dmitry Igrishin wrote:\n\n> \t\tmy $prefixcmd =\n> -\t\t $solution->{options}->{python} . \"\\\\python -c \\\"$pythonprog\\\"\";\n> +\t\t \"\\\"$solution->{options}->{python}\\\\python\\\" -c \\\"$pythonprog\\\"\";\n\nI think you can make this prettier like this:\n\n my $prefixcmd = qq{\"$solution->{options}->{python}\\\\python\" -c \"$pythonprog\"};\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Aug 2019 13:07:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "чт, 8 авг. 2019 г. в 20:07, Alvaro Herrera <alvherre@2ndquadrant.com>:\n>\n> On 2019-Aug-08, Dmitry Igrishin wrote:\n>\n> > my $prefixcmd =\n> > - $solution->{options}->{python} . \"\\\\python -c \\\"$pythonprog\\\"\";\n> > + \"\\\"$solution->{options}->{python}\\\\python\\\" -c \\\"$pythonprog\\\"\";\n>\n> I think you can make this prettier like this:\n>\n> my $prefixcmd = qq{\"$solution->{options}->{python}\\\\python\" -c \"$pythonprog\"};\nThis looks nice for a Perl hacker :-). As for me, it looks unusual and\na bit confusing. I never\nprogrammed in Perl, but I was able to quickly understand where the\nproblem lies due to the\n style adopted in other languages, when the contents are enclosed in\nquotation marks, and\nthe quotation marks are escaped if they are part of the contents.\nSo, should I fix it? Any thoughts?\n\n\n",
"msg_date": "Thu, 8 Aug 2019 22:46:07 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "On Thu, Aug 08, 2019 at 10:46:07PM +0300, Dmitry Igrishin wrote:\n> This looks nice for a Perl hacker :-). As for me, it looks unusual and\n> a bit confusing. I never\n> programmed in Perl, but I was able to quickly understand where the\n> problem lies due to the\n> style adopted in other languages, when the contents are enclosed in\n> quotation marks, and\n> the quotation marks are escaped if they are part of the contents.\n> So, should I fix it? Any thoughts?\n\nFWIW, I like Alvaro's suggestion about qq{} in this case, as it makes\nsure that double-quotes are correctly applied where they should.\n--\nMichael",
"msg_date": "Fri, 9 Aug 2019 11:45:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "пт, 9 авг. 2019 г. в 05:45, Michael Paquier <michael@paquier.xyz>:\n>\n> On Thu, Aug 08, 2019 at 10:46:07PM +0300, Dmitry Igrishin wrote:\n> > This looks nice for a Perl hacker :-). As for me, it looks unusual and\n> > a bit confusing. I never\n> > programmed in Perl, but I was able to quickly understand where the\n> > problem lies due to the\n> > style adopted in other languages, when the contents are enclosed in\n> > quotation marks, and\n> > the quotation marks are escaped if they are part of the contents.\n> > So, should I fix it? Any thoughts?\n>\n> FWIW, I like Alvaro's suggestion about qq{} in this case, as it makes\n> sure that double-quotes are correctly applied where they should.\nThe attached 4rd version of the patch uses qq||. I used qq|| instead\nof qq{} for consistency because qq|| is already used in Solution.pm:\n\n return qq|VisualStudioVersion = $self->{VisualStudioVersion}\n MinimumVisualStudioVersion = $self->{MinimumVisualStudioVersion}\n |;",
"msg_date": "Fri, 9 Aug 2019 09:56:27 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "At Fri, 9 Aug 2019 09:56:27 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KPZbPjoWTqOb5moi_YWvdbSjAMZsrVBW0cBw33Q560CLw@mail.gmail.com>\n> пт, 9 авг. 2019 г. в 05:45, Michael Paquier <michael@paquier.xyz>:\n> >\n> > On Thu, Aug 08, 2019 at 10:46:07PM +0300, Dmitry Igrishin wrote:\n> > > This looks nice for a Perl hacker :-). As for me, it looks unusual and\n> > > a bit confusing. I never\n> > > programmed in Perl, but I was able to quickly understand where the\n> > > problem lies due to the\n> > > style adopted in other languages, when the contents are enclosed in\n> > > quotation marks, and\n> > > the quotation marks are escaped if they are part of the contents.\n> > > So, should I fix it? Any thoughts?\n> >\n> > FWIW, I like Alvaro's suggestion about qq{} in this case, as it makes\n> > sure that double-quotes are correctly applied where they should.\n> The attached 4rd version of the patch uses qq||. I used qq|| instead\n> of qq{} for consistency because qq|| is already used in Solution.pm:\n> \n> return qq|VisualStudioVersion = $self->{VisualStudioVersion}\n> MinimumVisualStudioVersion = $self->{MinimumVisualStudioVersion}\n> |;\n\nHmm. qq is nice but '|' make my eyes twitch (a bit). Couldn't we\nuse other delimites like (), ##, or // ? (I like {} for use in\nthis patch.)\n\nAny opinions?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Aug 2019 16:22:58 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "пт, 9 авг. 2019 г. в 10:23, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n>\n> At Fri, 9 Aug 2019 09:56:27 +0300, Dmitry Igrishin <dmitigr@gmail.com> wrote in <CAAfz9KPZbPjoWTqOb5moi_YWvdbSjAMZsrVBW0cBw33Q560CLw@mail.gmail.com>\n> > пт, 9 авг. 2019 г. в 05:45, Michael Paquier <michael@paquier.xyz>:\n> > >\n> > > On Thu, Aug 08, 2019 at 10:46:07PM +0300, Dmitry Igrishin wrote:\n> > > > This looks nice for a Perl hacker :-). As for me, it looks unusual and\n> > > > a bit confusing. I never\n> > > > programmed in Perl, but I was able to quickly understand where the\n> > > > problem lies due to the\n> > > > style adopted in other languages, when the contents are enclosed in\n> > > > quotation marks, and\n> > > > the quotation marks are escaped if they are part of the contents.\n> > > > So, should I fix it? Any thoughts?\n> > >\n> > > FWIW, I like Alvaro's suggestion about qq{} in this case, as it makes\n> > > sure that double-quotes are correctly applied where they should.\n> > The attached 4rd version of the patch uses qq||. I used qq|| instead\n> > of qq{} for consistency because qq|| is already used in Solution.pm:\n> >\n> > return qq|VisualStudioVersion = $self->{VisualStudioVersion}\n> > MinimumVisualStudioVersion = $self->{MinimumVisualStudioVersion}\n> > |;\n>\n> Hmm. qq is nice but '|' make my eyes twitch (a bit). Couldn't we\n> use other delimites like (), ##, or // ? (I like {} for use in\n> this patch.)\n>\n> Any opinions?\nPersonally I don't care. I used || notation only in order to be\nconsistent, since this notation is already used in Solution.pm. If\nthis consistency is not required let me provide a patch with {}\nnotation. What do you think?\n\n\n",
"msg_date": "Fri, 9 Aug 2019 11:21:52 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "On Fri, Aug 09, 2019 at 11:21:52AM +0300, Dmitry Igrishin wrote:\n> Personally I don't care. I used || notation only in order to be\n> consistent, since this notation is already used in Solution.pm. If\n> this consistency is not required let me provide a patch with {}\n> notation. What do you think?\n\nWe are talking about one place in src/tools/msvc/ using qq on HEAD.\nSo one or the other is fine by me as long as we remain in the\nacceptable ASCII ranks.\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 12:19:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small patch to fix build on Windows"
},
{
"msg_contents": "вт, 13 авг. 2019 г. в 06:19, Michael Paquier <michael@paquier.xyz>:\n>\n> On Fri, Aug 09, 2019 at 11:21:52AM +0300, Dmitry Igrishin wrote:\n> > Personally I don't care. I used || notation only in order to be\n> > consistent, since this notation is already used in Solution.pm. If\n> > this consistency is not required let me provide a patch with {}\n> > notation. What do you think?\n>\n> We are talking about one place in src/tools/msvc/ using qq on HEAD.\n> So one or the other is fine by me as long as we remain in the\n> acceptable ASCII ranks.\nOkay. 5th version of patch is attached.",
"msg_date": "Tue, 13 Aug 2019 23:22:30 +0300",
"msg_from": "Dmitry Igrishin <dmitigr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small patch to fix build on Windows"
}
] |
[
{
"msg_contents": "Given the discussion starting at\nhttps://postgr.es/m/CAFjFpRdBiQjZm8sG9+s0x8Re-afHds6MFLgGuw0wVUNLGrVOQg@mail.gmail.com\nwe don't have default-partition support with the hash partitioning\nscheme. That seems a reasonable outcome, but I think we should have a\ncomment about it (I had to search the reason for this restriction in the\nhash-partitioning patch set). How about the attached? Does anyone see\na reason to make this more verbose, and if so to what?\n\n... unless somebody wants to argue that we should have the feature; if\nso please share your patch.\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 6 Aug 2019 18:27:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "no default hash partition"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Given the discussion starting at\n> https://postgr.es/m/CAFjFpRdBiQjZm8sG9+s0x8Re-afHds6MFLgGuw0wVUNLGrVOQg@mail.gmail.com\n> we don't have default-partition support with the hash partitioning\n> scheme. That seems a reasonable outcome, but I think we should have a\n> comment about it (I had to search the reason for this restriction in the\n> hash-partitioning patch set). How about the attached? Does anyone see\n> a reason to make this more verbose, and if so to what?\n\nSeems like \"it's likely to cause trouble for users\" is just going to\nbeg the question \"why?\". Can we explain the hazard succinctly?\nOr point to a comment somewhere else that explains it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 18:36:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On 2019-Aug-06, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Given the discussion starting at\n> > https://postgr.es/m/CAFjFpRdBiQjZm8sG9+s0x8Re-afHds6MFLgGuw0wVUNLGrVOQg@mail.gmail.com\n> > we don't have default-partition support with the hash partitioning\n> > scheme. That seems a reasonable outcome, but I think we should have a\n> > comment about it (I had to search the reason for this restriction in the\n> > hash-partitioning patch set). How about the attached? Does anyone see\n> > a reason to make this more verbose, and if so to what?\n> \n> Seems like \"it's likely to cause trouble for users\" is just going to\n> beg the question \"why?\". Can we explain the hazard succinctly?\n> Or point to a comment somewhere else that explains it?\n\nRight ... the \"trouble\" is just that if the user later wants to add the\nmissing partitions, they'll need to acquire some strong lock (IIRC it's AEL)\nin the partitioned table, so it effectively means an outage. With\nlist/range partitioning, there's the slight advantage that you don't\nhave to guess all your partitions in advance, or cover data values that\nare required for a very small number of rows. In hash partitioning you\ncan't really predict which values are those going to be, and the set of\nmissing partitions is perfectly known.\n\nNot enlightened enough ATM for a succint enough explanation, but I'll\ntake suggestions.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 6 Aug 2019 18:53:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-06, Tom Lane wrote:\n>> Seems like \"it's likely to cause trouble for users\" is just going to\n>> beg the question \"why?\". Can we explain the hazard succinctly?\n>> Or point to a comment somewhere else that explains it?\n\n> Right ... the \"trouble\" is just that if the user later wants to add the\n> missing partitions, they'll need to acquire some strong lock (IIRC it's AEL)\n> in the partitioned table, so it effectively means an outage. With\n> list/range partitioning, there's the slight advantage that you don't\n> have to guess all your partitions in advance, or cover data values that\n> are required for a very small number of rows. In hash partitioning you\n> can't really predict which values are those going to be, and the set of\n> missing partitions is perfectly known.\n\nHmm. So given the point about it being hard to predict which hash\npartitions would receive what values ... under what circumstances\nwould it be sensible to not create a full set of partitions? Should\nwe just enforce that there is a full set, somehow?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 18:58:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Aug-06, Tom Lane wrote:\n> >> Seems like \"it's likely to cause trouble for users\" is just going to\n> >> beg the question \"why?\". Can we explain the hazard succinctly?\n> >> Or point to a comment somewhere else that explains it?\n> \n> > Right ... the \"trouble\" is just that if the user later wants to add the\n> > missing partitions, they'll need to acquire some strong lock (IIRC it's AEL)\n> > in the partitioned table, so it effectively means an outage. With\n> > list/range partitioning, there's the slight advantage that you don't\n> > have to guess all your partitions in advance, or cover data values that\n> > are required for a very small number of rows. In hash partitioning you\n> > can't really predict which values are those going to be, and the set of\n> > missing partitions is perfectly known.\n> \n> Hmm. So given the point about it being hard to predict which hash\n> partitions would receive what values ... under what circumstances\n> would it be sensible to not create a full set of partitions? Should\n> we just enforce that there is a full set, somehow?\n\nI imagine there's good reasons this wasn't just done (for this or\nvarious other things), but couldn't we enforce it by just creating them\nall..? Sure would simplify a lot of things for users. Similairly for\nlist partitions, I would think. Again, I feel like there's probably a\nreason why it doesn't just work(tm) like that, but it sure would be\nnice.\n\nOf course, there's the other side of things where it'd sure be nice to\nautomatically have partitions created for time-based partitions when\nappropriate (yes, basically doing what pg_partman already does, but in\ncore somehow..), but for hash partitions we don't need to deal with\nthat.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 6 Aug 2019 19:02:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Hmm. So given the point about it being hard to predict which hash\n>> partitions would receive what values ... under what circumstances\n>> would it be sensible to not create a full set of partitions? Should\n>> we just enforce that there is a full set, somehow?\n\n> I imagine there's good reasons this wasn't just done (for this or\n> various other things), but couldn't we enforce it by just creating them\n> all..? Sure would simplify a lot of things for users. Similairly for\n> list partitions, I would think.\n\nWell, with lists Alvaro's point holds: you might know a priori that\nsome of the values are infrequent and don't deserve their own partition.\nThe thing about hash is that the entries should (in theory) get spread\nout to all partitions pretty evenly, so it's hard to see why a user\nwould want to treat any partition differently from any other.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 19:35:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Hmm. So given the point about it being hard to predict which hash\n> >> partitions would receive what values ... under what circumstances\n> >> would it be sensible to not create a full set of partitions? Should\n> >> we just enforce that there is a full set, somehow?\n> \n> > I imagine there's good reasons this wasn't just done (for this or\n> > various other things), but couldn't we enforce it by just creating them\n> > all..? Sure would simplify a lot of things for users. Similairly for\n> > list partitions, I would think.\n> \n> Well, with lists Alvaro's point holds: you might know a priori that\n> some of the values are infrequent and don't deserve their own partition.\n> The thing about hash is that the entries should (in theory) get spread\n> out to all partitions pretty evenly, so it's hard to see why a user\n> would want to treat any partition differently from any other.\n\nYeah, that's a fair argument, but giving the user a way to say that\nwould address it. As in, \"create me a list-partitioned table for these\nvalues, plus a default.\" Anyhow, I'm sure that I'm taking this beyond\nwhat we need to do right now, just sharing where I think it'd be good\nfor things to go.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 6 Aug 2019 19:42:08 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Wed, Aug 7, 2019 at 7:27 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Given the discussion starting at\n> https://postgr.es/m/CAFjFpRdBiQjZm8sG9+s0x8Re-afHds6MFLgGuw0wVUNLGrVOQg@mail.gmail.com\n> we don't have default-partition support with the hash partitioning\n> scheme. That seems a reasonable outcome, but I think we should have a\n> comment about it (I had to search the reason for this restriction in the\n> hash-partitioning patch set).\n\nThat hash-partitioned tables can't have default partition is mentioned\nin the CREATE TABLE page:\n\n\"If DEFAULT is specified, the table will be created as a default\npartition of the parent table. The parent can either be a list or\nrange partitioned table. A partition key value not fitting into any\nother partition of the given parent will be routed to the default\npartition. There can be only one default partition for a given parent\ntable.\"\n\n> How about the attached? Does anyone see\n> a reason to make this more verbose, and if so to what?\n\nIf the outcome of this discussion is that we expand our internal\ndocumentation of why there's no default hash partition, then should we\nalso expand the user documentation somehow?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:27:26 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Hi,\n\nOn Wed, Aug 7, 2019 at 8:02 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > On 2019-Aug-06, Tom Lane wrote:\n> > >> Seems like \"it's likely to cause trouble for users\" is just going to\n> > >> beg the question \"why?\". Can we explain the hazard succinctly?\n> > >> Or point to a comment somewhere else that explains it?\n> >\n> > > Right ... the \"trouble\" is just that if the user later wants to add the\n> > > missing partitions, they'll need to acquire some strong lock (IIRC it's AEL)\n> > > in the partitioned table, so it effectively means an outage. With\n> > > list/range partitioning, there's the slight advantage that you don't\n> > > have to guess all your partitions in advance, or cover data values that\n> > > are required for a very small number of rows. In hash partitioning you\n> > > can't really predict which values are those going to be, and the set of\n> > > missing partitions is perfectly known.\n> >\n> > Hmm. So given the point about it being hard to predict which hash\n> > partitions would receive what values ... under what circumstances\n> > would it be sensible to not create a full set of partitions? Should\n> > we just enforce that there is a full set, somehow?\n>\n> I imagine there's good reasons this wasn't just done (for this or\n> various other things), but couldn't we enforce it by just creating them\n> all..? Sure would simplify a lot of things for users. Similairly for\n> list partitions, I would think. Again, I feel like there's probably a\n> reason why it doesn't just work(tm) like that, but it sure would be\n> nice.\n\nMaybe the reason that we don't create all partitions automatically is\nthat hash-partitioning developers thought that such a feature could be\nbuilt later [1]. 
Maybe you know, but I think it's just that we\nimplemented the syntax needed to get things like pg_dump/upgrade\nworking sanely, that is, a command to define each partition\nseparately, and... stopped there. There're no other intrinsic reasons\nthat I know of for this implementation order. pg_partman helps with\nthe automation, with features that users want in most or all cases --\ndefine all needed partitions for a given modulus, define time series\npartitions for a given window, etc. Maybe not everyone likes to rely\non an external tool, so the core at some point will have features to\nperform some if not all of the tasks that pg_partman does, with the\nadded benefit that the new feature might allow the core to optimize\npartitioning better.\n\nBtw, there was even a discussion started recently to discuss the\nuser-level feature:\n\nSubject: Creating partitions automatically at least on HASH?\nhttps://www.postgresql.org/message-id/alpine.DEB.2.21.1907150711080.22273%40lancre\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmobGH4zK27y42gGbtvfWFPnATHcocMZ%3DHkJF51KLkKY_xw%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:32:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm. So given the point about it being hard to predict which hash\n> partitions would receive what values ... under what circumstances\n> would it be sensible to not create a full set of partitions? Should\n> we just enforce that there is a full set, somehow?\n\nI think it would only be sensible as a temporary state. The system\nallows more than one modulus so that you can do partition split\nincrementally. For example if you have 8 partitions all with modulus\n8 and with remainders 0..7, you could:\n\n- detach the partition with (modulus 8, remainder 0)\n- attach two new partitions with (modulus 16, remainder 0) and\n(modulus 16, remainder 8)\n- move the data from the old partition to the new ones\n\nThen you'd have 9 partitions, and you'd only have taken the amount of\ndowntime needed to repartition 1/8th of your data. You could then\nrepeat this process one partition at a time during additional\nmaintenance windows, and end up with 16 partitions in the end.\nWithout the ability to have more than one modulus, or if you had\nchosen not to double the modulus but to change it to some other value\nlike 13, you would've needed to repartition all the data at once,\nwhich would have required one much longer outage. You can argue about\nwhether the ability to do this kind of thing is useful, but it seemed\nto me that it was.\n\nI think, as Amit says, that having an automatic partition creation\nfeature for hash partitions (and maybe other kinds, but certainly for\nhash) would be a useful thing to add to the system. I also think that\nit might be useful to add some commands to automate partition\nsplitting (and maybe combining) although I think there's some design\nwork to be done there to figure out exactly what we should build. 
I\ndon't think it's ever useful to have a hash-partitioned table with an\nincomplete set of partitions long term, but it makes things simpler to\nallow that temporarily, for example during dump restoration.\nTherefore, I see no reason why we would want to go to the trouble of\nallowing hash-partitioned tables to have default partitions; it would\njust encourage people to do things that don't really make any sense.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 6 Aug 2019 23:26:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 06:58:44PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Aug-06, Tom Lane wrote:\n> >> Seems like \"it's likely to cause trouble for users\" is just going to\n> >> beg the question \"why?\". Can we explain the hazard succinctly?\n> >> Or point to a comment somewhere else that explains it?\n> \n> > Right ... the \"trouble\" is just that if the user later wants to add the\n> > missing partitions, they'll need to acquire some strong lock (IIRC it's AEL)\n> > in the partitioned table, so it effectively means an outage. With\n> > list/range partitioning, there's the slight advantage that you don't\n> > have to guess all your partitions in advance, or cover data values that\n> > are required for a very small number of rows. In hash partitioning you\n> > can't really predict which values are those going to be, and the set of\n> > missing partitions is perfectly known.\n> \n> Hmm. So given the point about it being hard to predict which hash\n> partitions would receive what values ... under what circumstances\n> would it be sensible to not create a full set of partitions? Should\n> we just enforce that there is a full set, somehow?\n\n+1 for requiring that hash partitions not have gaps, ideally by making\none call create all the partitions.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 7 Aug 2019 05:46:58 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "At Tue, 6 Aug 2019 23:26:19 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmoZpAsYY+naYpuw+fG=J1wYTXrhk=3uEYYa_Nz=Jwck+eg@mail.gmail.com>\n> On Tue, Aug 6, 2019 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think, as Amit says, that having an automatic partition creation\n> feature for hash partitions (and maybe other kinds, but certainly for\n> hash) would be a useful thing to add to the system. I also think that\n> it might be useful to add some commands to automate partition\n> splitting (and maybe combining) although I think there's some design\n> work to be done there to figure out exactly what we should build. I\n> don't think it's ever useful to have a hash-partitioned table with an\n> incomplete set of partitions long term, but it makes things simpler to\n> allow that temporarily, for example during dump restoration.\n> Therefore, I see no reason why we would want to go to the trouble of\n> allowing hash-partitioned tables to have default partitions; it would\n> just encourage people to do things that don't really make any sense.\n\n+1.\n\nBy the way, couldn't we offer a means to check for gaps in a hash\npartition? For example, the output of current \\d+ <parent>\ncontains the Partitoins section that shows a list of\npartitions. I think that we can show all gaps there.\n\n=# \\d+ p\n Partitioned table \"public.p\"\n...\nPartition key: HASH (a)\nPartitions: c1 FOR VALUES WITH (modulus 4, remainder 0),\n c3 FOR VALUES WITH (modulus 4, remainder 3),\n GAP (modulus 4, remainder 1),\n GAP (modulus 4, remainder 2)\n\nOr\n\nPartitions: c1 FOR VALUES WITH (modulus 4, remainder 0),\n c3 FOR VALUES WITH (modulus 4, remainder 3),\nGaps: (modulus 4, remainder 1), (modulus 4, remainder 2)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Aug 2019 13:58:34 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Horiguchi-san,\n\nOn Wed, Aug 7, 2019 at 1:59 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Tue, 6 Aug 2019 23:26:19 -0400, Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Aug 6, 2019 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think, as Amit says, that having an automatic partition creation\n> > feature for hash partitions (and maybe other kinds, but certainly for\n> > hash) would be a useful thing to add to the system. I also think that\n> > it might be useful to add some commands to automate partition\n> > splitting (and maybe combining) although I think there's some design\n> > work to be done there to figure out exactly what we should build. I\n> > don't think it's ever useful to have a hash-partitioned table with an\n> > incomplete set of partitions long term, but it makes things simpler to\n> > allow that temporarily, for example during dump restoration.\n> > Therefore, I see no reason why we would want to go to the trouble of\n> > allowing hash-partitioned tables to have default partitions; it would\n> > just encourage people to do things that don't really make any sense.\n>\n> +1.\n>\n> By the way, couldn't we offer a means to check for gaps in a hash\n> partition? For example, the output of current \\d+ <parent>\n> contains the Partitoins section that shows a list of\n> partitions. I think that we can show all gaps there.\n>\n> =# \\d+ p\n> Partitioned table \"public.p\"\n> ...\n> Partition key: HASH (a)\n> Partitions: c1 FOR VALUES WITH (modulus 4, remainder 0),\n> c3 FOR VALUES WITH (modulus 4, remainder 3),\n> GAP (modulus 4, remainder 1),\n> GAP (modulus 4, remainder 2)\n>\n> Or\n>\n> Partitions: c1 FOR VALUES WITH (modulus 4, remainder 0),\n> c3 FOR VALUES WITH (modulus 4, remainder 3),\n> Gaps: (modulus 4, remainder 1), (modulus 4, remainder 2)\n\nI imagine showing this output would require some non-trivial code on\nthe client side (?) to figure out the gaps. 
If our intention in the\nlong run is to make sure that such gaps only ever appear temporarily,\nthat is, when running a command to increase the number of hash\npartitions (as detailed in Robert's email), then a user would never\nsee those gaps. So, maybe writing such code wouldn't be worthwhile in\nthe long run?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 14:51:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 5:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Aug 6, 2019 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Hmm. So given the point about it being hard to predict which hash\n> > partitions would receive what values ... under what circumstances\n> > would it be sensible to not create a full set of partitions? Should\n> > we just enforce that there is a full set, somehow?\n>\n> I think it would only be sensible as a temporary state. The system\n> allows more than one modulus so that you can do partition split\n> incrementally. For example if you have 8 partitions all with modulus\n> 8 and with remainders 0..7, you could:\n>\n> - detach the partition with (modulus 8, remainder 0)\n> - attach two new partitions with (modulus 16, remainder 0) and\n> (modulus 16, remainder 8)\n> - move the data from the old partition to the new ones\n>\n> Then you'd have 9 partitions, and you'd only have taken the amount of\n> downtime needed to repartition 1/8th of your data. You could then\n> repeat this process one partition at a time during additional\n> maintenance windows, and end up with 16 partitions in the end.\n> Without the ability to have more than one modulus, or if you had\n> chosen not to double the modulus but to change it to some other value\n> like 13, you would've needed to repartition all the data at once,\n> which would have required one much longer outage. You can argue about\n> whether the ability to do this kind of thing is useful, but it seemed\n> to me that it was.\n>\n> I think, as Amit says, that having an automatic partition creation\n> feature for hash partitions (and maybe other kinds, but certainly for\n> hash) would be a useful thing to add to the system. I also think that\n> it might be useful to add some commands to automate partition\n> splitting (and maybe combining) although I think there's some design\n> work to be done there to figure out exactly what we should build. 
I\n> don't think it's ever useful to have a hash-partitioned table with an\n> incomplete set of partitions long term, but it makes things simpler to\n> allow that temporarily, for example during dump restoration.\n> Therefore, I see no reason why we would want to go to the trouble of\n> allowing hash-partitioned tables to have default partitions; it would\n> just encourage people to do things that don't really make any sense.\n>\n\nAnother usecase for not having all partitions temporarily is if some of\nthem should be different enough that you don't want them auto-created. A\ncommon one would be that they should be on different tablespaces, but that\ncan of course be solved by moving the partition after it had been\nauto-created (and should be fast since at this point it would be empty).\nBut imagine you wanted one partition to be a FOREIGN one for example, you\ncan't ALTER a partition to become foreign, you'd have to drop it and\nrecreate it, in which case not having created it in the first place\nwould've been better. That's pretty weird for hash partitioning, but one\ncould certainly imagine having *all* partitions of a hash partitioned table\nbe FOREIGN...\n\nNone of that is solved by having a default partition for it though, since\nit would only be a temporary state. It only goes to that if we do want to\nauto-create the hash partitions (which I think would be really useful for\nthe most common usecase), we should have a way not to do it. Either by only\nautocreating them if a specific keyword is given, or by having a keyword\nthat would prevent it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Aug 7, 2019 at 5:26 AM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Aug 6, 2019 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm. So given the point about it being hard to predict which hash\n> partitions would receive what values ... 
under what circumstances\n> would it be sensible to not create a full set of partitions? Should\n> we just enforce that there is a full set, somehow?\n\nI think it would only be sensible as a temporary state. The system\nallows more than one modulus so that you can do partition split\nincrementally. For example if you have 8 partitions all with modulus\n8 and with remainders 0..7, you could:\n\n- detach the partition with (modulus 8, remainder 0)\n- attach two new partitions with (modulus 16, remainder 0) and\n(modulus 16, remainder 8)\n- move the data from the old partition to the new ones\n\nThen you'd have 9 partitions, and you'd only have taken the amount of\ndowntime needed to repartition 1/8th of your data. You could then\nrepeat this process one partition at a time during additional\nmaintenance windows, and end up with 16 partitions in the end.\nWithout the ability to have more than one modulus, or if you had\nchosen not to double the modulus but to change it to some other value\nlike 13, you would've needed to repartition all the data at once,\nwhich would have required one much longer outage. You can argue about\nwhether the ability to do this kind of thing is useful, but it seemed\nto me that it was.\n\nI think, as Amit says, that having an automatic partition creation\nfeature for hash partitions (and maybe other kinds, but certainly for\nhash) would be a useful thing to add to the system. I also think that\nit might be useful to add some commands to automate partition\nsplitting (and maybe combining) although I think there's some design\nwork to be done there to figure out exactly what we should build. 
I\ndon't think it's ever useful to have a hash-partitioned table with an\nincomplete set of partitions long term, but it makes things simpler to\nallow that temporarily, for example during dump restoration.\nTherefore, I see no reason why we would want to go to the trouble of\nallowing hash-partitioned tables to have default partitions; it would\njust encourage people to do things that don't really make any sense.Another usecase for not having all partitions temporarily is if some of them should be different enough that you don't want them auto-created. A common one would be that they should be on different tablespaces, but that can of course be solved by moving the partition after it had been auto-created (and should be fast since at this point it would be empty). But imagine you wanted one partition to be a FOREIGN one for example, you can't ALTER a partition to become foreign, you'd have to drop it and recreate it, in which case not having created it in the first place would've been better. That's pretty weird for hash partitioning, but one could certainly imagine having *all* partitions of a hash partitioned table be FOREIGN...None of that is solved by having a default partition for it though, since it would only be a temporary state. It only goes to that if we do want to auto-create the hash partitions (which I think would be really useful for the most common usecase), we should have a way not to do it. Either by only autocreating them if a specific keyword is given, or by having a keyword that would prevent it.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 7 Aug 2019 11:01:36 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Greetings,\n\n* Amit Langote (amitlangote09@gmail.com) wrote:\n> On Wed, Aug 7, 2019 at 1:59 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Tue, 6 Aug 2019 23:26:19 -0400, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Tue, Aug 6, 2019 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I think, as Amit says, that having an automatic partition creation\n> > > feature for hash partitions (and maybe other kinds, but certainly for\n> > > hash) would be a useful thing to add to the system. I also think that\n> > > it might be useful to add some commands to automate partition\n> > > splitting (and maybe combining) although I think there's some design\n> > > work to be done there to figure out exactly what we should build. I\n> > > don't think it's ever useful to have a hash-partitioned table with an\n> > > incomplete set of partitions long term, but it makes things simpler to\n> > > allow that temporarily, for example during dump restoration.\n> > > Therefore, I see no reason why we would want to go to the trouble of\n> > > allowing hash-partitioned tables to have default partitions; it would\n> > > just encourage people to do things that don't really make any sense.\n> >\n> > +1.\n> >\n> > By the way, couldn't we offer a means to check for gaps in a hash\n> > partition? For example, the output of current \\d+ <parent>\n> > contains the Partitoins section that shows a list of\n> > partitions. 
I think that we can show all gaps there.\n> >\n> > =# \\d+ p\n> > Partitioned table \"public.p\"\n> > ...\n> > Partition key: HASH (a)\n> > Partitions: c1 FOR VALUES WITH (modulus 4, remainder 0),\n> > c3 FOR VALUES WITH (modulus 4, remainder 3),\n> > GAP (modulus 4, remainder 1),\n> > GAP (modulus 4, remainder 2)\n> >\n> > Or\n> >\n> > Partitions: c1 FOR VALUES WITH (modulus 4, remainder 0),\n> > c3 FOR VALUES WITH (modulus 4, remainder 3),\n> > Gaps: (modulus 4, remainder 1), (modulus 4, remainder 2)\n> \n> I imagine showing this output would require some non-trivial code on\n> the client side (?) to figure out the gaps. If our intention in the\n> long run is to make sure that such gaps only ever appear temporarily,\n> that is, when running a command to increase the number of hash\n> partitions (as detailed in Robert's email), then a user would never\n> see those gaps. So, maybe writing such code wouldn't be worthwhile in\n> the long run?\n\nI tend to agree that it might not be useful to have this code,\nparticularly not on the client side, but we've dealt with the issue of\n\"the client would need non-trivial code for this\" in the past by having\na server-side function for the client to call (eg: pg_get_expr(),\npg_get_ruledef()). If we really think this would be valuable to show\nand we don't want the client to have to have a bunch of code for it,\ndoing something similar here could address that.\n\nOne thing I've wished for is a function that would give me a range type\nback for a partition (would be neat to be able to use a range type to\nspecify a partition's range when creating it too).\n\nThanks,\n\nStephen",
"msg_date": "Wed, 7 Aug 2019 10:04:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On 2019-Aug-07, Amit Langote wrote:\n\n> That hash-partitioned tables can't have default partition is mentioned\n> in the CREATE TABLE page:\n> \n> \"If DEFAULT is specified, the table will be created as a default\n> partition of the parent table. The parent can either be a list or\n> range partitioned table. A partition key value not fitting into any\n> other partition of the given parent will be routed to the default\n> partition. There can be only one default partition for a given parent\n> table.\"\n\nThis approach of documenting by omission seems unhelpful. Yes, I'd like\nto expand that too.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 10:59:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On 2019-Aug-06, Stephen Frost wrote:\n\n> Yeah, that's a fair argument, but giving the user a way to say that\n> would address it. As in, \"create me a list-partitioned table for these\n> values, plus a default.\" Anyhow, I'm sure that I'm taking this beyond\n> what we need to do right now, just sharing where I think it'd be good\n> for things to go.\n\nFabien Coelho already submitted a patch for this IIRC.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 12:25:02 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On 2019-Aug-07, Amit Langote wrote:\n\n> That hash-partitioned tables can't have default partition is mentioned\n> in the CREATE TABLE page:\n> \n> \"If DEFAULT is specified, the table will be created as a default\n> partition of the parent table. The parent can either be a list or\n> range partitioned table. A partition key value not fitting into any\n> other partition of the given parent will be routed to the default\n> partition. There can be only one default partition for a given parent\n> table.\"\n\nActually, it also says this (in the blurb for the PARTITION OF clause):\n\n Creates the table as a <firstterm>partition</firstterm> of the specified\n parent table. The table can be created either as a partition for specific\n values using <literal>FOR VALUES</literal> or as a default partition\n using <literal>DEFAULT</literal>. This option is not available for\n hash-partitioned tables.\n\nwhich I think is sufficient.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 12:29:45 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Actually, it also says this (in the blurb for the PARTITION OF clause):\n\n> Creates the table as a <firstterm>partition</firstterm> of the specified\n> parent table. The table can be created either as a partition for specific\n> values using <literal>FOR VALUES</literal> or as a default partition\n> using <literal>DEFAULT</literal>. This option is not available for\n> hash-partitioned tables.\n\n> which I think is sufficient.\n\nHm, that's rather confusingly worded IMO. Is the antecedent of \"this\noption\" just DEFAULT, or does it mean that you can't use FOR VALUES,\nor perchance it means that you can't use a PARTITION OF clause\nat all?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 12:44:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On 2019-Aug-07, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Actually, it also says this (in the blurb for the PARTITION OF clause):\n> \n> > Creates the table as a <firstterm>partition</firstterm> of the specified\n> > parent table. The table can be created either as a partition for specific\n> > values using <literal>FOR VALUES</literal> or as a default partition\n> > using <literal>DEFAULT</literal>. This option is not available for\n> > hash-partitioned tables.\n> \n> > which I think is sufficient.\n> \n> Hm, that's rather confusingly worded IMO. Is the antecedent of \"this\n> option\" just DEFAULT, or does it mean that you can't use FOR VALUES,\n> or perchance it means that you can't use a PARTITION OF clause\n> at all?\n\nUh, you're right, I hadn't noticed that. Not my text. I think this can\nbe fixed easily as in the attached. There are other options, but I like\nthis one the best.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 7 Aug 2019 12:55:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-07, Tom Lane wrote:\n>> Hm, that's rather confusingly worded IMO. Is the antecedent of \"this\n>> option\" just DEFAULT, or does it mean that you can't use FOR VALUES,\n>> or perchance it means that you can't use a PARTITION OF clause\n>> at all?\n\n> Uh, you're right, I hadn't noticed that. Not my text. I think this can\n> be fixed easily as in the attached. There are other options, but I like\n> this one the best.\n\nOK, but maybe also s/created as a default partition/created as the default\npartition/ ? Writing \"a\" carries the pretty clear implication that there\ncan be more than one, and contradicting that a sentence later doesn't\nimprove it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 17:22:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 6:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Aug-07, Tom Lane wrote:\n> >> Hm, that's rather confusingly worded IMO. Is the antecedent of \"this\n> >> option\" just DEFAULT, or does it mean that you can't use FOR VALUES,\n> >> or perchance it means that you can't use a PARTITION OF clause\n> >> at all?\n>\n> > Uh, you're right, I hadn't noticed that. Not my text. I think this can\n> > be fixed easily as in the attached. There are other options, but I like\n> > this one the best.\n>\n> OK, but maybe also s/created as a default partition/created as the default\n> partition/ ? Writing \"a\" carries the pretty clear implication that there\n> can be more than one, and contradicting that a sentence later doesn't\n> improve it.\n\n+1. Maybe also remove the last sentence of the 2nd paragraph, that\nis, this one:\n\nThere can be only one default partition for a given parent table.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Thu, 8 Aug 2019 10:01:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: no default hash partition"
},
{
"msg_contents": "On 2019-Aug-08, Amit Langote wrote:\n\n> On Thu, Aug 8, 2019 at 6:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > OK, but maybe also s/created as a default partition/created as the default\n> > partition/ ? Writing \"a\" carries the pretty clear implication that there\n> > can be more than one, and contradicting that a sentence later doesn't\n> > improve it.\n> \n> +1. Maybe also remove the last sentence of the 2nd paragraph, that\n> is, this one:\n> \n> There can be only one default partition for a given parent table.\n\nThanks! I pushed with these two changes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Aug 2019 16:08:51 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: no default hash partition"
}
] |
[
{
"msg_contents": "Hello, here's a pretty trivial cleanup.\n\nCurrently, you have to pass the errmsg text to convert_tuples_by_name\nand convert_tuples_by_position that's going to be raised if the tuple\ndescriptors don't match. In the latter's case that makes sense, as each\ncase is pretty specific and tailored messages can be offered, so this is\nuseful.\n\nHowever, in the case of convert_tuples_by_name, it seems we don't have\nenough control over what is being called, so there's no way to\nproduce tailored messages -- all the callers are using the same generic\nwording: \"could not convert row type\".\n\nThis code was introduced by dcb2bda9b704; I think back then we were\nthinking that it would be possible to give different error messages for\ndifferent cases (as convert_tuples_by_position was already doing then),\nhowever it seems clear now that that'll never happen.\n\nI propose we get rid of it by having convert_tuples_by_name supply the\nerror message by itself, as in the attached patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 6 Aug 2019 18:47:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove \"msg\" parameter from convert_tuples_by_name"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 7:47 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Hello, here's a pretty trivial cleanup.\n>\n> Currently, you have to pass the errmsg text to convert_tuples_by_name\n> and convert_tuples_by_position that's going to be raised if the tuple\n> descriptors don't match. In the latter's case that makes sense, as each\n> case is pretty specific and tailored messages can be offered, so this is\n> useful.\n>\n> However, in the case of convert_tuples_by_name, it seems we don't have\n> enough control over what is being called, so there's no way to\n> produce tailored messages -- all the callers are using the same generic\n> wording: \"could not convert row type\".\n>\n> This code was introduced by dcb2bda9b704; I think back then we were\n> thinking that it would be possible to give different error messages for\n> different cases (as convert_tuples_by_position was already doing then),\n> however it seems clear now that that'll never happen.\n>\n> I propose we get rid of it by having convert_tuples_by_name supply the\n> error message by itself, as in the attached patch.\n\n+1. I always wondered when writing partitioning patches why I have to\npass the same string.\n\nIf we're reducing the message string to occur only once in the source\ncode, can we maybe write it to be more informative? I wonder if users\naren't normally supposed to see this message?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:57:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove \"msg\" parameter from convert_tuples_by_name"
},
{
"msg_contents": "On 2019-Aug-07, Amit Langote wrote:\n\n> If we're reducing the message string to occur only once in the source\n> code, can we maybe write it to be more informative? I wonder if users\n> aren't normally supposed to see this message?\n\nGrepping for the messages given to convert_tuples_by_position yields\nquite a few matches in regression test output, but none for the one in\nconvert_tuples_by_name. This makes me think that it isn't user-visible,\nunless things go very wrong.\n\nPushed the patch, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 14:52:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove \"msg\" parameter from convert_tuples_by_name"
},
{
"msg_contents": "On Wed, Sep 4, 2019 at 3:52 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Aug-07, Amit Langote wrote:\n>\n> > If we're reducing the message string to occur only once in the source\n> > code, can we maybe write it to be more informative? I wonder if users\n> > aren't normally supposed to see this message?\n>\n> Grepping for the messages given to convert_tuples_by_position yields\n> quite a few matches in regression test output, but none for the one in\n> convert_tuples_by_name. This makes me think that it isn't user-visible,\n> unless things go very wrong.\n>\n> Pushed the patch, thanks.\n\nThanks. I thought you'd change the ereport to elog while at it.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 4 Sep 2019 10:38:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove \"msg\" parameter from convert_tuples_by_name"
},
{
"msg_contents": "On 2019-Sep-04, Amit Langote wrote:\n\n> On Wed, Sep 4, 2019 at 3:52 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Aug-07, Amit Langote wrote:\n> >\n> > > If we're reducing the message string to occur only once in the source\n> > > code, can we maybe write it to be more informative? I wonder if users\n> > > aren't normally supposed to see this message?\n> >\n> > Grepping for the messages given to convert_tuples_by_position yields\n> > quite a few matches in regression test output, but none for the one in\n> > convert_tuples_by_name. This makes me think that it isn't user-visible,\n> > unless things go very wrong.\n> >\n> > Pushed the patch, thanks.\n> \n> Thanks. I thought you'd change the ereport to elog while at it.\n\nOh, that didn't occur to me, but because it has errdetail and an errcode\nit'd be more controversial. I don't see any reason to do it, frankly ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 21:42:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove \"msg\" parameter from convert_tuples_by_name"
}
] |
[
{
"msg_contents": "Hi\n\nI should to use a cache accessed via fn_extra. There will be stored data\nabout function parameters (types). If I understand correctly, these data\nshould be stable in query, and then recheck is not necessary. Is it true?\n\nRegards\n\nPavel\n\nHiI should to use a cache accessed via fn_extra. There will be stored data about function parameters (types). If I understand correctly, these data should be stable in query, and then recheck is not necessary. Is it true?RegardsPavel",
"msg_date": "Wed, 7 Aug 2019 07:32:04 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "is necessary to recheck cached data in fn_extra?"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I should to use a cache accessed via fn_extra. There will be stored data\n> about function parameters (types). If I understand correctly, these data\n> should be stable in query, and then recheck is not necessary. Is it true?\n\nI wouldn't trust that. You don't really know what the lifespan of\na fn_extra cache is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 11:39:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: is necessary to recheck cached data in fn_extra?"
},
{
"msg_contents": "On 08/07/19 11:39, Tom Lane wrote:\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> I should to use a cache accessed via fn_extra. There will be stored data\n>> about function parameters (types). If I understand correctly, these data\n>> should be stable in query, and then recheck is not necessary. Is it true?\n> \n> I wouldn't trust that. You don't really know what the lifespan of\n> a fn_extra cache is.\n\nIt is going to be either the last thing I put there, or NULL, right?\nSo a null check is sufficient?\n\nOther than when the SRF_* api has commandeered it for other purposes?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:59:49 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: is necessary to recheck cached data in fn_extra?"
},
{
"msg_contents": "st 7. 8. 2019 v 17:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I should to use a cache accessed via fn_extra. There will be stored data\n> > about function parameters (types). If I understand correctly, these data\n> > should be stable in query, and then recheck is not necessary. Is it true?\n>\n> I wouldn't trust that. You don't really know what the lifespan of\n> a fn_extra cache is.\n>\n\nfn_extra cache cannot be longer than query. And if I understand well, then\nis not possible to change parameter types inside query?\n\nPavel\n\n\n> regards, tom lane\n>\n\nst 7. 8. 2019 v 17:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I should to use a cache accessed via fn_extra. There will be stored data\n> about function parameters (types). If I understand correctly, these data\n> should be stable in query, and then recheck is not necessary. Is it true?\n\nI wouldn't trust that. You don't really know what the lifespan of\na fn_extra cache is.fn_extra cache cannot be longer than query. And if I understand well, then is not possible to change parameter types inside query?Pavel\n\n regards, tom lane",
"msg_date": "Wed, 7 Aug 2019 18:18:40 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: is necessary to recheck cached data in fn_extra?"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 7. 8. 2019 v 17:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> I wouldn't trust that. You don't really know what the lifespan of\n>> a fn_extra cache is.\n\n> fn_extra cache cannot be longer than query.\n\nThere are fn_extra caches that are not tied to queries. Admittedly\nthey're for special purposes like I/O functions and index support\nfunctions, and maybe you can assume that your function can't be\nused in such ways. I don't think it's a great programming model\nthough.\n\n> And if I understand well, then\n> is not possible to change parameter types inside query?\n\nMost places dealing with composite types assume that the rowtype *could*\nchange intraquery. I believe this was a live possibility in the past,\nthough it might not be today. (The issue was inheritance queries, but\nI think we now force tuples from child tables to be converted to the\nparent rowtype. Whether that's 100% bulletproof is unclear.) If you're\nnot dealing with composites then it's an okay assumption. I think.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 12:39:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: is necessary to recheck cached data in fn_extra?"
},
{
"msg_contents": "On Wed, 7 Aug 2019 at 18:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > On Wed, 7 Aug 2019 at 17:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I wouldn't trust that. You don't really know what the lifespan of\n> >> a fn_extra cache is.\n>\n> > fn_extra cache cannot be longer than query.\n>\n> There are fn_extra caches that are not tied to queries. Admittedly\n> they're for special purposes like I/O functions and index support\n> functions, and maybe you can assume that your function can't be\n> used in such ways. I don't think it's a great programming model\n> though.\n>\n> > And if I understand well, then\n> > is not possible to change parameter types inside query?\n>\n> Most places dealing with composite types assume that the rowtype *could*\n> change intraquery. I believe this was a live possibility in the past,\n> though it might not be today. (The issue was inheritance queries, but\n> I think we now force tuples from child tables to be converted to the\n> parent rowtype. Whether that's 100% bulletproof is unclear.) If you're\n> not dealing with composites then it's an okay assumption. I think.\n>\n\nok, thank you for your reply.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>",
"msg_date": "Wed, 7 Aug 2019 19:04:20 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: is necessary to recheck cached data in fn_extra?"
}
] |
[
{
"msg_contents": "The pool_passwd option [1] is specified relative to config file. But for\ngreater flexibility absolute path should be accepted as well.\n\nIf pool_passwd option starts with /, let's treat it as absolute path.\nOtherwise, it is treated as relative path.\n\nPatch attached. Original author - Derek Kulinski [2]. In NixOS,\nconfiguration files often end up in world readable store, which is not the\nbest place for storing password files.\n\n[1]\nhttp://www.pgpool.net/docs/latest/en/html/runtime-config-connection.html#GUC-POOL-PASSWD\n[2] https://github.com/NixOS/nixpkgs/pull/66224",
"msg_date": "Wed, 7 Aug 2019 09:05:28 +0300",
"msg_from": "Danylo Hlynskyi <abcz2.uprola@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fwd: [PATCH] Absolute passwordfile path"
},
{
"msg_contents": "Hello,\n\nOn Wed, Aug 7, 2019 at 3:05 PM Danylo Hlynskyi <abcz2.uprola@gmail.com> wrote:\n>\n> The pool_passwd option [1] is specified relative to config file. But for greater flexibility absolute path should be accepted as well.\n>\n> If pool_passwd option starts with /, let's treat it as absolute path. Otherwise, it is treated as relative path.\n>\n> Patch attached. Original author - Derek Kulinski [2]. In NixOS, configuration files often end up in world readable store, which is not the best place for storing password files.\n>\n> [1] http://www.pgpool.net/docs/latest/en/html/runtime-config-connection.html#GUC-POOL-PASSWD\n> [2] https://github.com/NixOS/nixpkgs/pull/66224\n\nDid you mean to send this email to pgpool-hackers@pgpool.net or\nsomewhere else like a NixOS mailing list, not\npgsql-hackers@lists.postgresql.org? This list is used to discuss the\ntopics related to PostgreSQL development.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 15:17:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Absolute passwordfile path"
},
{
"msg_contents": "Yes, I've resent it to pgpool-hackers@pgpool.net\nSorry for the noise\n\nOn Wed, 7 Aug 2019 at 09:18, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hello,\n>\n> On Wed, Aug 7, 2019 at 3:05 PM Danylo Hlynskyi <abcz2.uprola@gmail.com>\n> wrote:\n> >\n> > The pool_passwd option [1] is specified relative to config file. But for\n> greater flexibility absolute path should be accepted as well.\n> >\n> > If pool_passwd option starts with /, let's treat it as absolute path.\n> Otherwise, it is treated as relative path.\n> >\n> > Patch attached. Original author - Derek Kulinski [2]. In NixOS,\n> configuration files often end up in world readable store, which is not the\n> best place for storing password files.\n> >\n> > [1]\n> http://www.pgpool.net/docs/latest/en/html/runtime-config-connection.html#GUC-POOL-PASSWD\n> > [2] https://github.com/NixOS/nixpkgs/pull/66224\n>\n> Did you mean to send this email to pgpool-hackers@pgpool.net or\n> somewhere else like a NixOS mailing list, not\n> pgsql-hackers@lists.postgresql.org? This list is used to discuss the\n> topics related to PostgreSQL development.\n>\n> Regards,\n> Amit\n>",
"msg_date": "Wed, 7 Aug 2019 09:28:38 +0300",
"msg_from": "Danylo Hlynskyi <abcz2.uprola@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Absolute passwordfile path"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI wonder if there is some particular reason for not handling \nT_RestrictInfo node tag in expression_tree_walker?\nThere are many data structure in Postgres which contains lists of \nRestrictInfo or expression with RestrictInfo as parameter (for example \norclause in RestrictInfo).\nTo handle such cases now it is needed to write code performing list \niteration and calling expression_tree_walker for each list element and \nhandling RrestrictInfo in callback function:\n\nstatic bool\nchange_varno_walker(Node *node, ChangeVarnoContext *context)\n{\n if (node == NULL)\n return false;\n\n if (IsA(node, Var) && ((Var *) node)->varno == context->oldRelid)\n {\n ((Var *) node)->varno = context->newRelid;\n ((Var *) node)->varnoold = context->newRelid;\n return false;\n }\n if (IsA(node, RestrictInfo))\n {\n change_rinfo((RestrictInfo*)node, context->oldRelid, \ncontext->newRelid);\n return false;\n }\n return expression_tree_walker(node, change_varno_walker, context);\n}\n\nAre there any complaints against handling RestrictInfo in \nexpression_tree_walker?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 7 Aug 2019 10:24:17 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Handling RestrictInfo in expression_tree_walker"
},
{
"msg_contents": "Hi Konstantin,\n\nOn Wed, Aug 7, 2019 at 4:24 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n>\n> Hi hackers,\n>\n> I wonder if there is some particular reason for not handling\n> T_RestrictInfo node tag in expression_tree_walker?\n> There are many data structure in Postgres which contains lists of\n> RestrictInfo or expression with RestrictInfo as parameter (for example\n> orclause in RestrictInfo).\n> To handle such cases now it is needed to write code performing list\n> iteration and calling expression_tree_walker for each list element and\n> handling RrestrictInfo in callback function:\n>\n> static bool\n> change_varno_walker(Node *node, ChangeVarnoContext *context)\n> {\n> if (node == NULL)\n> return false;\n>\n> if (IsA(node, Var) && ((Var *) node)->varno == context->oldRelid)\n> {\n> ((Var *) node)->varno = context->newRelid;\n> ((Var *) node)->varnoold = context->newRelid;\n> return false;\n> }\n> if (IsA(node, RestrictInfo))\n> {\n> change_rinfo((RestrictInfo*)node, context->oldRelid,\n> context->newRelid);\n> return false;\n> }\n> return expression_tree_walker(node, change_varno_walker, context);\n> }\n>\n> Are there any complaints against handling RestrictInfo in\n> expression_tree_walker?\n\nAs I understand it, RestrictInfo is not something that appears in\nquery trees or plan trees, but only in the planner data structures as\nmeans of caching some information about the clauses that they wrap. I\nsee this comment describing what expression_tree_walker() is supposed\nto handle:\n\n * The node types handled by expression_tree_walker include all those\n * normally found in target lists and qualifier clauses during the planning\n * stage.\n\nYou may also want to read this discussion:\n\nhttps://www.postgresql.org/message-id/553FC9BC.5060402%402ndquadrant.com\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:42:22 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Handling RestrictInfo in expression_tree_walker"
},
{
"msg_contents": "\n\nOn 07.08.2019 10:42, Amit Langote wrote:\n> Hi Konstantin,\n>\n> On Wed, Aug 7, 2019 at 4:24 PM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> Hi hackers,\n>>\n>> I wonder if there is some particular reason for not handling\n>> T_RestrictInfo node tag in expression_tree_walker?\n>> There are many data structure in Postgres which contains lists of\n>> RestrictInfo or expression with RestrictInfo as parameter (for example\n>> orclause in RestrictInfo).\n>> To handle such cases now it is needed to write code performing list\n>> iteration and calling expression_tree_walker for each list element and\n>> handling RrestrictInfo in callback function:\n>>\n>> static bool\n>> change_varno_walker(Node *node, ChangeVarnoContext *context)\n>> {\n>> if (node == NULL)\n>> return false;\n>>\n>> if (IsA(node, Var) && ((Var *) node)->varno == context->oldRelid)\n>> {\n>> ((Var *) node)->varno = context->newRelid;\n>> ((Var *) node)->varnoold = context->newRelid;\n>> return false;\n>> }\n>> if (IsA(node, RestrictInfo))\n>> {\n>> change_rinfo((RestrictInfo*)node, context->oldRelid,\n>> context->newRelid);\n>> return false;\n>> }\n>> return expression_tree_walker(node, change_varno_walker, context);\n>> }\n>>\n>> Are there any complaints against handling RestrictInfo in\n>> expression_tree_walker?\n> As I understand it, RestrictInfo is not something that appears in\n> query trees or plan trees, but only in the planner data structures as\n> means of caching some information about the clauses that they wrap. 
I\n> see this comment describing what expression_tree_walker() is supposed\n> to handle:\n>\n> * The node types handled by expression_tree_walker include all those\n> * normally found in target lists and qualifier clauses during the planning\n> * stage.\n>\n> You may also want to read this discussion:\n>\n> https://www.postgresql.org/message-id/553FC9BC.5060402%402ndquadrant.com\n>\n> Thanks,\n> Amit\nThank you very much for response and pointing me to this thread.\nUnfortunately I do not understand from this thread how the problem was \nsolved with pullvars - right now pull_varnos_walker and \npull_varattnos_walker\nare not handling RestrictInfo.\n\nAlso I do not completely understand the argument \"RestrictInfo is not a \ngeneral expression node and support for it has\nbeen deliberately omitted from expression_tree_walker()\". If there is \nBoolOp expression which contains RestrictInfo expression as it \narguments, then either this expression is not\ncorrect, either RestrictInfo should be considered as \"expression node\".\n\nFrankly speaking I do not see some good reasons for not handling \nRestrictInfo in expression_tree_worker. It can really simplify writing \nof mutators/walkers.\nAnd I do not think that reporting error instead of handling this tag \nadds some extra safety or error protection.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:07:16 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Handling RestrictInfo in expression_tree_walker"
},
{
"msg_contents": "Hi,\n\nOn Wed, Aug 7, 2019 at 5:07 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> On 07.08.2019 10:42, Amit Langote wrote:\n> > You may also want to read this discussion:\n> >\n> > https://www.postgresql.org/message-id/553FC9BC.5060402%402ndquadrant.com\n> >\n> Thank you very much for response and pointing me to this thread.\n> Unfortunately I do not understand from this thread how the problem was\n> solved with pullvars - right now pull_varnos_walker and\n> pull_varattnos_walker\n> are not handling RestrictInfo.\n>\n> Also I do not completely understand the argument \"RestrictInfo is not a\n> general expression node and support for it has\n> been deliberately omitted from expression_tree_walker()\". If there is\n> BoolOp expression which contains RestrictInfo expression as it\n> arguments, then either this expression is not\n> correct, either RestrictInfo should be considered as \"expression node\".\n>\n> Frankly speaking I do not see some good reasons for not handling\n> RestrictInfo in expression_tree_worker. It can really simplify writing\n> of mutators/walkers.\n> And I do not think that reporting error instead of handling this tag\n> adds some extra safety or error protection.\n\nWell, Tom has expressed in various words in that thread that expecting\nto successfully run expression_tree_walker() on something containing\nRestrictInfos may be a sign of bad design somewhere in the code that\nyou're trying to add. I have recollections of submitting such code,\nbut later realizing that there's some other way to do things\ndifferently that doesn't require walking expressions containing\nRestrictInfos.\n\nBtw, looking at the example walker function you've shown in the first\nemail, maybe you want to use a mutator, not a walker. The latter\nclass of functions is only supposed to inspect the input tree, not\nmodify it.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:26:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Handling RestrictInfo in expression_tree_walker"
},
{
"msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> Frankly speaking I do not see some good reasons for not handling \n> RestrictInfo in expression_tree_worker. It can really simplify writing \n> of mutators/walkers.\n\nI don't buy this; what seems more likely is that you're trying to apply\nan expression tree mutator to something you shouldn't. The caching\naspects of RestrictInfo, and the fact that the planner often assumes\nthat RestrictInfos don't get copied (so that pointer equality is a\nuseful test), are both good reasons to be wary of applying general\nmutations to those nodes.\n\nOr in other words, if you want a walker/mutator to descend through\nthose nodes, you almost certainly need special logic at those nodes\nanyway. Omitting them from the nodeFuncs support guarantees you\ndon't forget that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 11:47:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Handling RestrictInfo in expression_tree_walker"
}
] |
[
{
"msg_contents": "Hello,\n\nThe word \"rewinded\" appears in our manual and in a comment. That\nsounds strange to my ears. Isn't it a mistake? Oxford lists the form\nas \"poetic\" and \"rare\", and then says it was used by one specific\nVictorian poet. Perhaps I'll send them a pull request: it's now G. M.\nHopkins and PostgreSQL? Or maybe it's in common usage in another part\nof the world?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 20:48:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "s/rewinded/rewound/?"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 10:49 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hello,\n>\n> The word \"rewinded\" appears in our manual and in a comment. That\n> sounds strange to my ears. Isn't it a mistake? Oxford lists the form\n> as \"poetic\" and \"rare\", and then says it was used by one specific\n> Victorian poet. Perhaps I'll send them a pull request: it's now G. M.\n> Hopkins and PostgreSQL? Or maybe it's in common usage in another part\n> of the world?\n>\n\nTo me this sounds like a classic non-English-native-speaker-mistake. But\nit seems at least the one in the docs come from Bruce, who definitely is...\nSo perhaps it's intentional to refer to \"what pg_rewind does\", and not\nnecessarily to the regular word for it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 7 Aug 2019 10:53:45 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 10:53:45AM +0200, Magnus Hagander wrote:\n> To me this sounds like a classic non-English-native-speaker-mistake. But\n> it seems at least the one in the docs come from Bruce, who definitely is...\n> So perhaps it's intentional to refer to \"what pg_rewind does\", and not\n> necessarily to the regular word for it?\n\nI am not sure :)\n\"rewound\" sounds much more natural.\n--\nMichael",
"msg_date": "Wed, 7 Aug 2019 18:00:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "On 8/7/19 12:00 PM, Michael Paquier wrote:\n> On Wed, Aug 07, 2019 at 10:53:45AM +0200, Magnus Hagander wrote:\n>> To me this sounds like a classic non-English-native-speaker-mistake. But\n>> it seems at least the one in the docs come from Bruce, who definitely is...\n>> So perhaps it's intentional to refer to \"what pg_rewind does\", and not\n>> necessarily to the regular word for it?\n> I am not sure :)\n> \"rewound\" sounds much more natural.\n> --\n> Michael\n\n+1 for rewound from a non-English-native-speaker. The use of \"rewound\" \nin the same file also supports Michael's view.\n\nIf we decide to fix this, we should probably revise and back-patch the \nwhole paragraph where it appears as it seems to mix up scanning target \ncluster\nWALs and applying source cluster WALs. A small patch is attached for \nyour consideration (originally proposed on pgsql-docs [1]).\n\n[1] \nhttps://www.postgresql.org/message-id/ad6ac5bb-6689-ddb0-dc60-c5fc197d728e%40postgrespro.ru \n\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 7 Aug 2019 12:48:29 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "On 08/07/19 04:48, Thomas Munro wrote:\n\n> as \"poetic\" and \"rare\", and then says it was used by one specific\n> Victorian poet. Perhaps I'll send them a pull request: it's now G. M.\n> Hopkins and PostgreSQL?\n\nIt does seem counter, original, spare, strange.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 7 Aug 2019 08:09:24 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Wed, Aug 7, 2019 at 10:49 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> The word \"rewinded\" appears in our manual and in a comment. That\n>> sounds strange to my ears. Isn't it a mistake?\n\nCertainly.\n\n> To me this sounds like a classic non-English-native-speaker-mistake. But\n> it seems at least the one in the docs come from Bruce, who definitely is...\n\nHe might've just been committing somebody else's words without having\nreviewed carefully.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 11:49:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "On 2019-Aug-07, Tom Lane wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n\n> > To me this sounds like a classic non-English-native-speaker-mistake. But\n> > it seems at least the one in the docs come from Bruce, who definitely is...\n> \n> He might've just been committing somebody else's words without having\n> reviewed carefully.\n\nThe commit message for 878bd9accb55 doesn't mention that. He didn't\nadd a mailing list reference, but this is easy to find at\nhttps://postgr.es/m/20160720180706.GF24559@momjian.us\nI lean towards the view that he was using the literal program name as a\nverb, rather than trying to decline a verb normally. Note that the word\n\"rewound\" did not appear in that SGML source when he committed that;\nthat was only introduced in bfc80683ce51 three years later.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:59:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "On Wed, 7 Aug 2019 at 16:59, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> He didn't\n> add a mailing list reference, but this is easy to find at\n> https://postgr.es/m/20160720180706.GF24559@momjian.us\n> I lean towards the view that he was using the literal program name as a\n> verb, rather than trying to decline a verb normally.\n\nI go with that, although I think it's confusing to not use the full\napp name. If I were discussing a block of data that had been passed to\na \"rewind\" function, I might well put \"this data has been rewind()ed\"\n(or just rewinded). But if I were discussing the concept itself, I\nwould say rewound.\n\neg In the example given, I would accept \"and then\n<application>pg_rewind</application>ed to become a standby\".\n\nAlthough I would probably have reworded it to use \"and then\n<application>pg_rewind</application> run again to set it to standby\"\nor something similar, because the \"ed\" form really does look odd in\ndocumentation.\n\nI don't think using \"rewound\" instead is explicit enough in this instance.\n\nBut that's just me. Feel free to ignore.\n\nGeoff\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:13:23 +0100",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 12:48:29PM +0300, Liudmila Mantrova wrote:\n> If we decide to fix this, we should probably revise and back-patch the whole\n> paragraph where it appears as it seems to mix up scanning target cluster\n> WALs and applying source cluster WALs. A small patch is attached for your\n> consideration (originally proposed on pgsql-docs [1]).\n\nOkay, I can see the confusion, and your proposed rewording looks fine\nto me. Any objections?\n--\nMichael",
"msg_date": "Thu, 8 Aug 2019 18:29:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: s/rewinded/rewound/?"
}
] |
[
{
"msg_contents": "Hi!\n\nI was playing around with JSON path quite a bit and might have found one case where the current implementation doesn’t follow the standard.\n\nThe functionality in question are the comparison operators except ==. They use the database default collation rather then the standard-mandated \"Unicode codepoint collation” (SQL-2:2016 9.39 General Rule 12 c iii 2 D, last sentence in first paragraph).\n\nI guess this is the relevant part of the code: src/backend/utils/adt/jsonpath_exec.c (compareItems)\n\n case jbvString:\n if (op == jpiEqual)\n return jb1->val.string.len != jb2->val.string.len ||\n memcmp(jb1->val.string.val,\n jb2->val.string.val,\n jb1->val.string.len) ? jpbFalse : jpbTrue;\n\n cmp = varstr_cmp(jb1->val.string.val, jb1->val.string.len,\n jb2->val.string.val, jb2->val.string.len,\n DEFAULT_COLLATION_OID);\n break;\n\nTestcase:\n\n postgres 12beta3=# select * from jsonb_path_query('\"dummy\"', '$ ? (\"a\" < \"A\")');\n jsonb_path_query\n ------------------\n \"dummy\"\n (1 row)\n\nIn code points, lower case ‘a' is not less than upper case ‘A’—the result should be empty.\n\nTo convince myself:\n\n postgres 12beta3=# select datcollate, 'a' < 'A', 'a' <'A' COLLATE ucs_basic from pg_database where datname=current_database();\n datcollate | ?column? | ?column?\n -------------+----------+----------\n en_US.UTF-8 | t | f\n (1 row)\n\nI also found two minor typos in the docs. Patch attached.\n\n-markus\nps.: I’ve created 230 test cases. Besides the WIP topic .datetime(), the collation issue is the only one I found. Excellent work. Down to the SQLSTATEs. For sure the most complete and correct SQL/JSON path implementation I've seen.",
"msg_date": "Wed, 7 Aug 2019 13:25:36 +0200",
"msg_from": "Markus Winand <markus.winand@winand.at>",
"msg_from_op": true,
"msg_subject": "SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "Hi!\n\nOn Wed, Aug 7, 2019 at 2:25 PM Markus Winand <markus.winand@winand.at> wrote:\n> I was playing around with JSON path quite a bit and might have found one case where the current implementation doesn’t follow the standard.\n>\n> The functionality in question are the comparison operators except ==. They use the database default collation rather then the standard-mandated \"Unicode codepoint collation” (SQL-2:2016 9.39 General Rule 12 c iii 2 D, last sentence in first paragraph).\n\nThank you for pointing! Nikita is about to write a patch fixing that.\n\n> I also found two minor typos in the docs. Patch attached.\n\nPushed, thanks.\n\n> -markus\n> ps.: I’ve created 230 test cases. Besides the WIP topic .datetime(), the collation issue is the only one I found. Excellent work. Down to the SQLSTATEs. For sure the most complete and correct SQL/JSON path implementation I've seen.\n\nThank you!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:11:44 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 4:11 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Wed, Aug 7, 2019 at 2:25 PM Markus Winand <markus.winand@winand.at> wrote:\n> > I was playing around with JSON path quite a bit and might have found one case where the current implementation doesn’t follow the standard.\n> >\n> > The functionality in question are the comparison operators except ==. They use the database default collation rather then the standard-mandated \"Unicode codepoint collation” (SQL-2:2016 9.39 General Rule 12 c iii 2 D, last sentence in first paragraph).\n>\n> Thank you for pointing! Nikita is about to write a patch fixing that.\n\nPlease, see the attached patch.\n\nOur idea is to not sacrifice \"==\" operator performance for standard\nconformance. So, \"==\" remains per-byte comparison. For consistency\nin other operators we compare code points first, then do per-byte\ncomparison. In some edge cases, when same Unicode codepoints have\ndifferent binary representations in database encoding, this behavior\ndiverges standard. In future we can implement strict standard\nconformance by normalization of input JSON strings.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 8 Aug 2019 00:55:55 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 12:55 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Wed, Aug 7, 2019 at 4:11 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Wed, Aug 7, 2019 at 2:25 PM Markus Winand <markus.winand@winand.at> wrote:\n> > > I was playing around with JSON path quite a bit and might have found one case where the current implementation doesn’t follow the standard.\n> > >\n> > > The functionality in question are the comparison operators except ==. They use the database default collation rather then the standard-mandated \"Unicode codepoint collation” (SQL-2:2016 9.39 General Rule 12 c iii 2 D, last sentence in first paragraph).\n> >\n> > Thank you for pointing! Nikita is about to write a patch fixing that.\n>\n> Please, see the attached patch.\n>\n> Our idea is to not sacrifice \"==\" operator performance for standard\n> conformance. So, \"==\" remains per-byte comparison. For consistency\n> in other operators we compare code points first, then do per-byte\n> comparison. In some edge cases, when same Unicode codepoints have\n> different binary representations in database encoding, this behavior\n> diverges standard. In future we can implement strict standard\n> conformance by normalization of input JSON strings.\n\nPrevious version of patch has buggy implementation of\ncompareStrings(). Revised version is attached.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 8 Aug 2019 03:05:08 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 3:05 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Thu, Aug 8, 2019 at 12:55 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Wed, Aug 7, 2019 at 4:11 PM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > On Wed, Aug 7, 2019 at 2:25 PM Markus Winand <markus.winand@winand.at> wrote:\n> > > > I was playing around with JSON path quite a bit and might have found one case where the current implementation doesn’t follow the standard.\n> > > >\n> > > > The functionality in question are the comparison operators except ==. They use the database default collation rather then the standard-mandated \"Unicode codepoint collation” (SQL-2:2016 9.39 General Rule 12 c iii 2 D, last sentence in first paragraph).\n> > >\n> > > Thank you for pointing! Nikita is about to write a patch fixing that.\n> >\n> > Please, see the attached patch.\n> >\n> > Our idea is to not sacrifice \"==\" operator performance for standard\n> > conformance. So, \"==\" remains per-byte comparison. For consistency\n> > in other operators we compare code points first, then do per-byte\n> > comparison. In some edge cases, when same Unicode codepoints have\n> > different binary representations in database encoding, this behavior\n> > diverges standard. In future we can implement strict standard\n> > conformance by normalization of input JSON strings.\n>\n> Previous version of patch has buggy implementation of\n> compareStrings(). Revised version is attached.\n\nNikita pointed me that for UTF-8 strings per-byte comparison result\nmatches codepoints comparison result. That allows simplify patch a\nlot.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 8 Aug 2019 03:27:38 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "Hi!\n\nThe patch makes my tests pass.\n\nI wonder about a few things:\n\n- Isn’t there any code that could be re-used for that (the one triggered by ‘a’ < ‘A’ COLLATE ucs_basic)?\n\n- For object key members, the standard also refers to unicode code point collation (SQL-2:2016 4.46.3, last paragraph).\n- I guess it also applies to the “starts with” predicate, but I cannot find this explicitly stated in the standard.\n\nMy tests check whether those cases do case-sensitive comparisons. With my default collation \"en_US.UTF-8” I cannot discover potential issues there. I haven’t played around with nondeterministic ICU collations yet :(\n\n-markus\nps.: for me, testing the regular expression dialect of like_regex is out of scope\n\n\n> On 8 Aug 2019, at 02:27, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> \n> On Thu, Aug 8, 2019 at 3:05 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru <mailto:a.korotkov@postgrespro.ru>> wrote:\n>> On Thu, Aug 8, 2019 at 12:55 AM Alexander Korotkov\n>> <a.korotkov@postgrespro.ru> wrote:\n>>> On Wed, Aug 7, 2019 at 4:11 PM Alexander Korotkov\n>>> <a.korotkov@postgrespro.ru> wrote:\n>>>> On Wed, Aug 7, 2019 at 2:25 PM Markus Winand <markus.winand@winand.at> wrote:\n>>>>> I was playing around with JSON path quite a bit and might have found one case where the current implementation doesn’t follow the standard.\n>>>>> \n>>>>> The functionality in question are the comparison operators except ==. They use the database default collation rather then the standard-mandated \"Unicode codepoint collation” (SQL-2:2016 9.39 General Rule 12 c iii 2 D, last sentence in first paragraph).\n>>>> \n>>>> Thank you for pointing! Nikita is about to write a patch fixing that.\n>>> \n>>> Please, see the attached patch.\n>>> \n>>> Our idea is to not sacrifice \"==\" operator performance for standard\n>>> conformance. So, \"==\" remains per-byte comparison. 
For consistency\n>>> in other operators we compare code points first, then do per-byte\n>>> comparison. In some edge cases, when same Unicode codepoints have\n>>> different binary representations in database encoding, this behavior\n>>> diverges standard. In future we can implement strict standard\n>>> conformance by normalization of input JSON strings.\n>> \n>> Previous version of patch has buggy implementation of\n>> compareStrings(). Revised version is attached.\n> \n> Nikita pointed me that for UTF-8 strings per-byte comparison result\n> matches codepoints comparison result. That allows simplify patch a\n> lot.\n> \n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>\n> The Russian Postgres Company\n> <0001-Use-Unicode-codepoint-collation-in-jsonpath-4.patch>",
"msg_date": "Thu, 8 Aug 2019 10:53:20 +0200",
"msg_from": "Markus Winand <markus.winand@winand.at>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "Hi, Markus!\n\nOn Thu, Aug 8, 2019 at 11:53 AM Markus Winand <markus.winand@winand.at> wrote:\n> The patch makes my tests pass.\n\nCool.\n\n> I wonder about a few things:\n>\n> - Isn’t there any code that could be re-used for that (the one triggered by ‘a’ < ‘A’ COLLATE ucs_basic)?\n\nPostgreSQL supports ucs_basic, but it's alias to C collation and works\nonly for utf-8. Jsonpath code may work in different encodings. New\nstring comparison code can work in different encodings.\n\n> - For object key members, the standard also refers to unicode code point collation (SQL-2:2016 4.46.3, last paragraph).\n> - I guess it also applies to the “starts with” predicate, but I cannot find this explicitly stated in the standard.\n\nFor object keys we don't actually care about whether strings are less\nor greater. We only search for equal keys. So, per-byte comparison\nwe currently use should be fine. The same states for \"starts with\"\npredicate.\n\n> My tests check whether those cases do case-sensitive comparisons. With my default collation \"en_US.UTF-8” I cannot discover potential issues there. I haven’t played around with nondeterministic ICU collations yet :(\n\nThat's OK. There should be other beta testers around :)\n\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 8 Aug 2019 23:30:04 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 11:30 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Thu, Aug 8, 2019 at 11:53 AM Markus Winand <markus.winand@winand.at> wrote:\n> > The patch makes my tests pass.\n>\n> Cool.\n>\n> > I wonder about a few things:\n> >\n> > - Isn’t there any code that could be re-used for that (the one triggered by ‘a’ < ‘A’ COLLATE ucs_basic)?\n>\n> PostgreSQL supports ucs_basic, but it's alias to C collation and works\n> only for utf-8. Jsonpath code may work in different encodings. New\n> string comparison code can work in different encodings.\n>\n> > - For object key members, the standard also refers to unicode code point collation (SQL-2:2016 4.46.3, last paragraph).\n> > - I guess it also applies to the “starts with” predicate, but I cannot find this explicitly stated in the standard.\n>\n> For object keys we don't actually care about whether strings are less\n> or greater. We only search for equal keys. So, per-byte comparison\n> we currently use should be fine. The same states for \"starts with\"\n> predicate.\n>\n> > My tests check whether those cases do case-sensitive comparisons. With my default collation \"en_US.UTF-8” I cannot discover potential issues there. I haven’t played around with nondeterministic ICU collations yet :(\n>\n> That's OK. There should be other beta testers around :)\n\nSo, I'm going to push this if no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 9 Aug 2019 17:27:45 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 5:27 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Thu, Aug 8, 2019 at 11:30 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Thu, Aug 8, 2019 at 11:53 AM Markus Winand <markus.winand@winand.at> wrote:\n> > > The patch makes my tests pass.\n> >\n> > Cool.\n> >\n> > > I wonder about a few things:\n> > >\n> > > - Isn’t there any code that could be re-used for that (the one triggered by ‘a’ < ‘A’ COLLATE ucs_basic)?\n> >\n> > PostgreSQL supports ucs_basic, but it's alias to C collation and works\n> > only for utf-8. Jsonpath code may work in different encodings. New\n> > string comparison code can work in different encodings.\n> >\n> > > - For object key members, the standard also refers to unicode code point collation (SQL-2:2016 4.46.3, last paragraph).\n> > > - I guess it also applies to the “starts with” predicate, but I cannot find this explicitly stated in the standard.\n> >\n> > For object keys we don't actually care about whether strings are less\n> > or greater. We only search for equal keys. So, per-byte comparison\n> > we currently use should be fine. The same states for \"starts with\"\n> > predicate.\n> >\n> > > My tests check whether those cases do case-sensitive comparisons. With my default collation \"en_US.UTF-8” I cannot discover potential issues there. I haven’t played around with nondeterministic ICU collations yet :(\n> >\n> > That's OK. There should be other beta testers around :)\n>\n> So, I'm going to push this if no objections.\n\nSo, pushed.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 11 Aug 2019 23:11:36 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path: collation for comparisons, minor typos in docs"
}
] |
[
{
"msg_contents": "The list of tests in src/test/isolation/isolation_schedule has grown \nover the years. Originally, they were all related to Serializable \nSnapshot Isolation, but there are different kinds of concurrency tests \nthere now. More tests is good, but the schedule file has grown into a \nbig inscrutable list with zero comments.\n\nI propose to categorize the tests and add some divider comments to the \nfile, see attached.\n\n- Heikki",
"msg_date": "Wed, 7 Aug 2019 14:28:06 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 11:28 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> The list of tests in src/test/isolation/isolation_schedule has grown\n> over the years. Originally, they were all related to Serializable\n> Snapshot Isolation, but there are different kinds of concurrency tests\n> there now. More tests is good, but the schedule file has grown into a\n> big inscrutable list with zero comments.\n\n+1\n\n> I propose to categorize the tests and add some divider comments to the\n> file, see attached.\n\nI think I'd put nowait and skip locked under a separate category \"FOR\nUPDATE\" or \"row locking\" or something, but maybe that's just me... can\nyou call that stuff DML?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 23:42:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On 07/08/2019 14:42, Thomas Munro wrote:\n> On Wed, Aug 7, 2019 at 11:28 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> The list of tests in src/test/isolation/isolation_schedule has grown\n>> over the years. Originally, they were all related to Serializable\n>> Snapshot Isolation, but there are different kinds of concurrency tests\n>> there now. More tests is good, but the schedule file has grown into a\n>> big inscrutable list with zero comments.\n> \n> +1\n> \n>> I propose to categorize the tests and add some divider comments to the\n>> file, see attached.\n> \n> I think I'd put nowait and skip locked under a separate category \"FOR\n> UPDATE\" or \"row locking\" or something, but maybe that's just me... can\n> you call that stuff DML?\n\nYeah, I guess SELECT FOR UPDATE isn't really DML. Separate \"Row locking\" \ncategory works for me. Or maybe \"Concurrent DML and row locking\". There \nis also DML in some of those tests.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Aug 2019 15:17:02 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> The list of tests in src/test/isolation/isolation_schedule has grown \n> over the years. Originally, they were all related to Serializable \n> Snapshot Isolation, but there are different kinds of concurrency tests \n> there now. More tests is good, but the schedule file has grown into a \n> big inscrutable list with zero comments.\n\n> I propose to categorize the tests and add some divider comments to the \n> file, see attached.\n\n+1 for concept, didn't review your divisions.\n\nSomething related I've been wondering about is whether we could\nparallelize the isolation tests. A difficulty here is that the\nslowest ones tend to also be timing-sensitive, such that running\nthem in parallel would increase the risk of failure. But we\ncould likely get at least some improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 11:08:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On 2019-Aug-07, Tom Lane wrote:\n\n> Something related I've been wondering about is whether we could\n> parallelize the isolation tests. A difficulty here is that the\n> slowest ones tend to also be timing-sensitive, such that running\n> them in parallel would increase the risk of failure. But we\n> could likely get at least some improvement.\n\nYeah, there's some improvement to be had there. We've discussed it\npreviously:\nhttps://postgr.es/m/20180124231006.z7spaz5gkzbdvob5@alvherre.pgsql\n\nI'm not really happy about this grouping if we mean we're restricted in\nhow we can make tests run in parallel.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:52:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On 07/08/2019 18:52, Alvaro Herrera wrote:\n> On 2019-Aug-07, Tom Lane wrote:\n> \n>> Something related I've been wondering about is whether we could\n>> parallelize the isolation tests. A difficulty here is that the\n>> slowest ones tend to also be timing-sensitive, such that running\n>> them in parallel would increase the risk of failure. But we\n>> could likely get at least some improvement.\n> \n> Yeah, there's some improvement to be had there. We've discussed it\n> previously:\n> https://postgr.es/m/20180124231006.z7spaz5gkzbdvob5@alvherre.pgsql\n> \n> I'm not really happy about this grouping if we mean we're restricted in\n> how we can make tests run in parallel.\n\nThe elephant in the room is the 'timeouts' test, which takes about 40 \nseconds, out of a total runtime of 90 seconds. So we'd really want to \nrun that in parallel with everything else. Or split 'timeouts' into \nmultiple tests that could run in parallel. I don't think grouping the \nrest of the tests differently will make much difference to how easy or \nhard that is.\n\nIn any case, we can scramble the list again later, if that's needed for \nrunning the tests in parallel, and we think it's worth it. Until then, a \nmore logical grouping and some comments would be nice.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Aug 2019 19:33:08 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On 2019-Aug-07, Heikki Linnakangas wrote:\n\n> The elephant in the room is the 'timeouts' test, which takes about 40\n> seconds, out of a total runtime of 90 seconds. So we'd really want to run\n> that in parallel with everything else. Or split 'timeouts' into multiple\n> tests that could run in parallel.\n\nHmm, that test has 8 permutations, five second each ... if we split it\nin 3 and run those in parallel, we reduce the total isolation runtime\nby 25 seconds even if we do *nothing else at all*. If we tweak the\nother things, I think we could make the whole set run in about 30\nseconds total in a normal machine.\n\nSplitting the test never crossed my mind.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 12:43:50 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> The elephant in the room is the 'timeouts' test, which takes about 40 \n> seconds, out of a total runtime of 90 seconds. So we'd really want to \n> run that in parallel with everything else. Or split 'timeouts' into \n> multiple tests that could run in parallel. I don't think grouping the \n> rest of the tests differently will make much difference to how easy or \n> hard that is.\n\nThe problem in \"timeouts\" is that it has to use drearily long timeouts\nto be sure that the behavior will be stable even on really slow machines\n(think CLOBBER_CACHE_ALWAYS or valgrind --- it can take seconds for them\nto reach a waiting state that other machines reach quickly). If we run\nsuch tests in parallel with anything else, that risks re-introducing the\ninstability. I'm not very sure what we can do about that. But you might\nbe right that unless we can solve that, there's not going to be much to be\ngained from parallelizing the rest.\n\nI wonder if there's some way to scale the timeout values based on\nmachine speed? But how would the test get that info?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 12:52:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On 2019-Aug-07, Tom Lane wrote:\n\n> The problem in \"timeouts\" is that it has to use drearily long timeouts\n> to be sure that the behavior will be stable even on really slow machines\n> (think CLOBBER_CACHE_ALWAYS or valgrind --- it can take seconds for them\n> to reach a waiting state that other machines reach quickly). If we run\n> such tests in parallel with anything else, that risks re-introducing the\n> instability. I'm not very sure what we can do about that. But you might\n> be right that unless we can solve that, there's not going to be much to be\n> gained from parallelizing the rest.\n\nIt runs 8 different permutations serially. If we run the same\npermutations in parallel, it would finish much quicker, and we wouldn't\nrun it in parallel with anything that would take up CPU time, since\nthey're all just sleeping.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 12:58:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-07, Tom Lane wrote:\n>> The problem in \"timeouts\" is that it has to use drearily long timeouts\n>> to be sure that the behavior will be stable even on really slow machines\n>> (think CLOBBER_CACHE_ALWAYS or valgrind --- it can take seconds for them\n>> to reach a waiting state that other machines reach quickly). If we run\n>> such tests in parallel with anything else, that risks re-introducing the\n>> instability. I'm not very sure what we can do about that. But you might\n>> be right that unless we can solve that, there's not going to be much to be\n>> gained from parallelizing the rest.\n\n> It runs 8 different permutations serially. If we run the same\n> permutations in parallel, it would finish much quicker, and we wouldn't\n> run it in parallel with anything that would take up CPU time, since\n> they're all just sleeping.\n\nWrong ... they're *not* just sleeping, in the problem cases. They're\neating cycles due to CLOBBER_CACHE_ALWAYS or valgrind. They're on their\nway to sleeping; but they have to get there before the timeout elapses,\nor the test shows unexpected results.\n\nAdmittedly, as long as you've got more CPUs than tests, it should still\nbe OK. But if you don't, boom.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 17:03:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 03:17:02PM +0300, Heikki Linnakangas wrote:\n> On 07/08/2019 14:42, Thomas Munro wrote:\n>> I think I'd put nowait and skip locked under a separate category \"FOR\n>> UPDATE\" or \"row locking\" or something, but maybe that's just me... can\n>> you call that stuff DML?\n> \n> Yeah, I guess SELECT FOR UPDATE isn't really DML. Separate \"Row locking\"\n> category works for me. Or maybe \"Concurrent DML and row locking\". There is\n> also DML in some of those tests.\n\nOr would it make sense to group the nowait and skip-locked portion\nwith the multixact group, then keep the DML-specific stuff together?\nThere is a test called update-locked-tuple which could enter into the\n\"row locking\" group, and the skip-locked tests have references to\nmultixact locks. So I think that I would group all that into a single\ngroup: \"multixact and row locking\".\n--\nMichael",
"msg_date": "Thu, 22 Aug 2019 12:02:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Grouping isolationtester tests in the schedule"
}
] |
[
{
"msg_contents": "Small patch to simplify some no longer necessary complicated code, using\nvarargs macros.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 7 Aug 2019 14:35:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "initdb: Use varargs macro for PG_CMD_PRINTF"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Small patch to simplify some no longer necessary complicated code, using\n> varargs macros.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 10:04:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb: Use varargs macro for PG_CMD_PRINTF"
},
{
"msg_contents": "Hi,\nPatch does not apply, rebased patch on (\n68343b4ad75305391b38f4b42734dc07f2fe7ee2) attached\n\nOn Wed, Aug 7, 2019 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > Small patch to simplify some no longer necessary complicated code, using\n> > varargs macros.\n>\n> +1\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 8 Aug 2019 00:57:02 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb: Use varargs macro for PG_CMD_PRINTF"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile browsing the buildfarm failures, I have found this problem on\nanole for the test temp:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-07%2006%3A39%3A35\n select relname from pg_class where relname like 'temp_parted_oncommit_test%';\n relname\n ----------------------------\n- temp_parted_oncommit_test\n temp_parted_oncommit_test1\n (2 rows)\n\n drop table temp_parted_oncommit_test;\n --- 276,283 ----\n select relname from pg_class where relname like 'temp_parted_oncommit_test%';\n relname\n ----------------------------\n temp_parted_oncommit_test1\n+ temp_parted_oncommit_test\n (2 rows)\n\t\t \nThis could be solved just with an ORDER BY as per the attached. Any\nobjections?\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 7 Aug 2019 22:24:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Regression test failure in regression test temp.sql"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> While browsing the buildfarm failures, I have found this problem on\n> anole for the test temp:\n> ...\n> This could be solved just with an ORDER BY as per the attached. Any\n> objections?\n\nThere's no reason to expect stability of row order in pg_class, so\nin principle this is a reasonable fix, but I kind of wonder why it's\nnecessary. The plan I get for this query is\n\nregression=# explain select relname from pg_class where relname like 'temp_parted_oncommit_test%';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------\n Index Only Scan using pg_class_relname_nsp_index on pg_class (cost=0.28..4.30 rows=1 width=64)\n Index Cond: ((relname >= 'temp'::text) AND (relname < 'temq'::text))\n Filter: (relname ~~ 'temp_parted_oncommit_test%'::text)\n(3 rows)\n\nwhich ought to deliver sorted rows natively. Adding ORDER BY doesn't\nchange this plan one bit. So what actually happened on anole to cause\na non-sorted result?\n\nNot objecting to the patch, exactly, just feeling like there's\nmore here than meets the eye. Not quite sure if it's worth\ninvestigating closer, or what we'd even need to do to do so.\n\nBTW, I realize from looking at the plan that LIKE is interpreting the\nunderscores as wildcards. Maybe it's worth s/_/\\_/ while you're\nat it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 10:17:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 10:17:25AM -0400, Tom Lane wrote:\n> Not objecting to the patch, exactly, just feeling like there's\n> more here than meets the eye. Not quite sure if it's worth\n> investigating closer, or what we'd even need to do to do so.\n\nYes, something's weird here. I'd think that the index only scan\nensures a proper ordering in this case, so it could be possible that a\ndifferent plan got selected here? That would mean that the plan\nselected would not be an index-only scan or an index scan. So perhaps\nthat was a bitmap scan?\n\n> BTW, I realize from looking at the plan that LIKE is interpreting the\n> underscores as wildcards. Maybe it's worth s/_/\\_/ while you're\n\nRight. Looking around there are much more tests which have the same\nproblem. This could become a problem if other tests running in\nparallel use relation names with the same pattern, which is not a\nissue as of HEAD, so I'd rather just back-patch the ORDER BY part of\nit (temp.sql is the only test missing that). What do you think about\nthe attached?\n--\nMichael",
"msg_date": "Fri, 9 Aug 2019 13:34:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Aug 07, 2019 at 10:17:25AM -0400, Tom Lane wrote:\n>> Not objecting to the patch, exactly, just feeling like there's\n>> more here than meets the eye. Not quite sure if it's worth\n>> investigating closer, or what we'd even need to do to do so.\n\n> Yes, something's weird here. I'd think that the index only scan\n> ensures a proper ordering in this case, so it could be possible that a\n> different plan got selected here? That would mean that the plan\n> selected would not be an index-only scan or an index scan. So perhaps\n> that was a bitmap scan?\n\nI hacked temp.sql to print a couple different plans (doing it that way,\nrather than manually, just to ensure that I was getting plans matching\nwhat would actually happen right there). And what I see, as attached,\nis that IOS and plain index and bitmap scans all have pretty much the\nsame total cost. The planner then ought to prefer IOS or plain on the\nsecondary grounds of cheaper startup cost. However, it's not so hard\nto believe that it might switch to bitmap if something caused the cost\nestimates to change by a few percent. So probably we should write this\noff as \"something affected the plan choice\" and just add the ORDER BY\nas you suggest.\n\n>> BTW, I realize from looking at the plan that LIKE is interpreting the\n>> underscores as wildcards. Maybe it's worth s/_/\\_/ while you're\n\n> Right. Looking around there are much more tests which have the same\n> problem. This could become a problem if other tests running in\n> parallel use relation names with the same pattern, which is not a\n> issue as of HEAD, so I'd rather just back-patch the ORDER BY part of\n> it (temp.sql is the only test missing that). What do you think about\n> the attached?\n\nHmm, I wasn't thinking of changing anything more than this one query.\nI'm not sure that a wide-ranging patch is going to be worth the\npotential back-patching land mines it'd introduce. 
However, if you\nwant to do it anyway, please at least patch v12 as well --- that\nshould still be a pretty painless back-patch, even if it's not so\neasy to go further.\n\nBTW, most of the problem here seems to be that the SQL committee\nmade an infelicitous choice of wildcard characters for LIKE.\nI wonder if it'd be saner to fix this by switching to regexes?\n\nregression=# explain select relname from pg_class where relname like 'temp_parted_oncommit_test%';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------\n Index Only Scan using pg_class_relname_nsp_index on pg_class (cost=0.28..4.30 rows=1 width=64)\n Index Cond: ((relname >= 'temp'::text) AND (relname < 'temq'::text))\n Filter: (relname ~~ 'temp_parted_oncommit_test%'::text)\n(3 rows)\n\nregression=# explain select relname from pg_class where relname ~ '^temp_parted_oncommit_test';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Index Only Scan using pg_class_relname_nsp_index on pg_class (cost=0.28..4.30 rows=1 width=64)\n Index Cond: ((relname >= 'temp_parted_oncommit_test'::text) AND (relname < 'temp_parted_oncommit_tesu'::text))\n Filter: (relname ~ '^temp_parted_oncommit_test'::text)\n(3 rows)\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 11 Aug 2019 15:59:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
},
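Tom's observation above, that LIKE treats `_` as a single-character wildcard so `temp_parted_oncommit_test%` also matches names with arbitrary characters in the underscore positions, can be sketched with a small, hypothetical translator from LIKE patterns to anchored Python regexes (the function name and the helper itself are illustrative, not PostgreSQL code):

```python
import re

def like_to_regex(pattern: str) -> str:
    """Translate a SQL LIKE pattern into an anchored Python regex.

    '%' matches any sequence of characters, '_' matches exactly one
    character, and a backslash escapes the following character.
    """
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "\\" and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # escaped char is literal
            i += 2
            continue
        if ch == "%":
            out.append(".*")
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
        i += 1
    return "^" + "".join(out) + "$"

# Unescaped underscores are wildcards: this LIKE pattern also matches
# names where the '_' positions hold arbitrary characters.
rx = re.compile(like_to_regex("temp_parted_oncommit_test%"))
assert rx.match("temp_parted_oncommit_test")
assert rx.match("tempXpartedYoncommitZtest123")  # '_' matched X, Y, Z

# Escaping the underscores (the suggested s/_/\_/) pins them down.
rx_escaped = re.compile(like_to_regex(r"temp\_parted\_oncommit\_test%"))
assert rx_escaped.match("temp_parted_oncommit_test")
assert not rx_escaped.match("tempXpartedYoncommitZtest123")
```

This is also why the `typname LIKE 'int_'` test later in the thread matched both `int4` and `int8`: the trailing `_` stands for any single character.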
{
"msg_contents": "On Sun, Aug 11, 2019 at 03:59:06PM -0400, Tom Lane wrote:\n> I hacked temp.sql to print a couple different plans (doing it that way,\n> rather than manually, just to ensure that I was getting plans matching\n> what would actually happen right there). And what I see, as attached,\n> is that IOS and plain index and bitmap scans all have pretty much the\n> same total cost. The planner then ought to prefer IOS or plain on the\n> secondary grounds of cheaper startup cost. However, it's not so hard\n> to believe that it might switch to bitmap if something caused the cost\n> estimates to change by a few percent. So probably we should write this\n> off as \"something affected the plan choice\" and just add the ORDER BY\n> as you suggest.\n\nThat matches what I was seeing, except that I have done those tests\nmanually. Still my plans matched with yours.\n\n> Hmm, I wasn't thinking of changing anything more than this one query.\n> I'm not sure that a wide-ranging patch is going to be worth the\n> potential back-patching land mines it'd introduce. However, if you\n> want to do it anyway, please at least patch v12 as well --- that\n> should still be a pretty painless back-patch, even if it's not so\n> easy to go further.\n\nOkay, I have gone with a minimal fix of only changing some of the\nquals in temp.sql as it could become a problem if other tests begin to\nuse relations beginning with \"temp\". If it proves that we have other\nproblems in this area later on, let's address it at this time.\n\n> BTW, most of the problem here seems to be that the SQL committee\n> made an infelicitous choice of wildcard characters for LIKE.\n> I wonder if it'd be saner to fix this by switching to regexes?\n\nSo that enforces the start of the string to match. This has the merit\nto make the relation name cleaner to grab. I have gone with your\nsuggestion, thanks for the advice!\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 10:58:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 1:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, Aug 11, 2019 at 03:59:06PM -0400, Tom Lane wrote:\n> > I hacked temp.sql to print a couple different plans (doing it that way,\n> > rather than manually, just to ensure that I was getting plans matching\n> > what would actually happen right there). And what I see, as attached,\n> > is that IOS and plain index and bitmap scans all have pretty much the\n> > same total cost. The planner then ought to prefer IOS or plain on the\n> > secondary grounds of cheaper startup cost. However, it's not so hard\n> > to believe that it might switch to bitmap if something caused the cost\n> > estimates to change by a few percent. So probably we should write this\n> > off as \"something affected the plan choice\" and just add the ORDER BY\n> > as you suggest.\n>\n> That matches what I was seeing, except that I have done those tests\n> manually. Still my plans matched with yours.\n\nHere's another one that seems to fit that pattern.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2019-08-11%2007%3A33%3A39\n\n+++ /home/andres/build/buildfarm/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/collate.icu.utf8.out\n2019-08-11 08:29:11.792695714 +0000\n@@ -1622,15 +1622,15 @@\n SELECT typname FROM pg_type WHERE typname LIKE 'int_' AND typname <>\n'INT2'::text COLLATE case_insensitive;\n typname\n ---------\n- int4\n int8\n+ int4\n (2 rows)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Aug 2019 14:51:03 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 02:51:03PM +1200, Thomas Munro wrote:\n> Here's another one that seems to fit that pattern.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2019-08-11%2007%3A33%3A39\n\nIndeed. Good catch! Perhaps you would like to fix it? There are two\nqueries in need of an ORDER BY, and the second query even uses two\nsemicolons (spoiler warning: that's a nit).\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 12:15:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 12:15:26PM +0900, Michael Paquier wrote:\n> Indeed. Good catch! Perhaps you would like to fix it? There are two\n> queries in need of an ORDER BY, and the second query even uses two\n> semicolons (spoiler warning: that's a nit).\n\nAnd fixed. The test case was new as of v12.\n--\nMichael",
"msg_date": "Wed, 14 Aug 2019 13:39:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Regression test failure in regression test temp.sql"
}
]
[
{
"msg_contents": "It works!\n\n(apparently as of Windows 10 version 1803)\n\nHere are some patches to get a discussion rolling.\n\nBasically, it just works, but you need to define your own struct\nsockaddr_un. (This is what configure currently uses as a proxy for\nHAVE_UNIX_SOCKETS, so (a) that needs a bit of tweaking, and (b) that is\nthe reason why builds haven't blown up already.)\n\nBut we'll now need to make things work so that binaries with Unix-domain\nsocket support work on systems without run-time support. We already did\nthat exercise with IPv6 support, so some of the framework is already in\nplace.\n\nDepending on your Windows environment, there might not be a suitable\n/tmp directory, so you'll need to specify a directory explicitly using\npostgres -k or similar. This leads to the question what the default for\nDEFAULT_PGSOCKET_DIR should be on Windows. I think it's probably best,\nat least for now, to set it so that by default, neither server nor libpq\nuse Unix sockets unless explicitly selected. This can be done easily on\nthe server side by defining DEFAULT_PGSOCKET_DIR as \"\". But in libpq, I\ndon't think the code would handle that correctly everywhere, so it would\nneed some more analysis and restructuring possibly.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 7 Aug 2019 15:56:03 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Unix-domain socket support on Windows"
},
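The "no suitable /tmp directory, so specify one with postgres -k" point above can be illustrated with a minimal, hypothetical sketch that binds a listening Unix-domain socket at an explicit path. It assumes the platform exposes `AF_UNIX` (all Unixes; on Windows only runtimes with AF_UNIX support, available as of Windows 10 version 1803), and the PostgreSQL-style socket file name is used purely for flavor:

```python
import os
import socket
import tempfile

# Pick an explicit socket directory, analogous to "postgres -k <dir>".
sockdir = tempfile.mkdtemp()
sockpath = os.path.join(sockdir, ".s.PGSQL.5432")  # PostgreSQL-style name

# Server side: bind and listen on the Unix-domain socket.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sockpath)  # fails if no suitable directory exists, hence -k
server.listen(1)

# Client side: connect by path and exchange a round trip.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sockpath)
conn, _ = server.accept()
client.sendall(b"ping")
assert conn.recv(4) == b"ping"

# Unlike TCP sockets, the filesystem entry must be cleaned up explicitly.
for s in (client, conn, server):
    s.close()
os.unlink(sockpath)
os.rmdir(sockdir)
```

The explicit cleanup at the end mirrors why the server needs a writable, persistent socket directory: the socket is a filesystem object, not just a port number.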
{
"msg_contents": "On 07/08/2019 16:56, Peter Eisentraut wrote:\n> It works!\n\nCool!\n\nAm I reading the patches correctly, that getpeereid() still doesn't work \non Windows? That means that peer authentication doesn't work, right? \nThat's a bit sad. One of the big advantages of unix domain sockets over \nTCP is peer authentication.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:06:05 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-08-07 16:06, Heikki Linnakangas wrote:\n> Am I reading the patches correctly, that getpeereid() still doesn't work \n> on Windows? That means that peer authentication doesn't work, right? \n> That's a bit sad. One of the big advantages of unix domain sockets over \n> TCP is peer authentication.\n\nCorrect, it's not supported. I think it's plausible that they will add\nthis in the future.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:58:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 4:59 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-08-07 16:06, Heikki Linnakangas wrote:\n> > Am I reading the patches correctly, that getpeereid() still doesn't work\n> > on Windows? That means that peer authentication doesn't work, right?\n> > That's a bit sad. One of the big advantages of unix domain sockets over\n> > TCP is peer authentication.\n>\n> Correct, it's not supported. I think it's plausible that they will add\n> this in the future.\n>\n\nDoes it work well enough that SSPI auth can run over it? SSPI auth with the\nlocal provider gives you more or less the same results as peer, doesn't it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 7 Aug 2019 17:15:59 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-08-07 15:56, Peter Eisentraut wrote:\n> Depending on your Windows environment, there might not be a suitable\n> /tmp directory, so you'll need to specify a directory explicitly using\n> postgres -k or similar. This leads to the question what the default for\n> DEFAULT_PGSOCKET_DIR should be on Windows. I think it's probably best,\n> at least for now, to set it so that by default, neither server nor libpq\n> use Unix sockets unless explicitly selected. This can be done easily on\n> the server side by defining DEFAULT_PGSOCKET_DIR as \"\". But in libpq, I\n> don't think the code would handle that correctly everywhere, so it would\n> need some more analysis and restructuring possibly.\n\nUpdated patches, which now also address that issue: There is no default\nsocket dir on Windows and it's disabled by default on both client and\nserver.\n\nSome comments on the patches:\n\nv2-0001-Enable-Unix-domain-sockets-support-on-Windows.patch\n\nThis is pretty straightforward, apart from maybe some comments, but it\nwould need to be committed last, because it would enable all the Unix\nsocket related code on Windows, which needs to be fixed up by the\nsubsequent patches first.\n\nv2-0002-Sort-out-getpeereid-and-struct-passwd-handling-on.patch\n\nMaybe a more elegant way with fewer #ifdef WIN32 can be found?\n\nv2-0003-psql-Remove-one-use-of-HAVE_UNIX_SOCKETS.patch\n\nThis could be committed independently.\n\nv2-0004-libpq-Remove-unnecessary-uses-of-HAVE_UNIX_SOCKET.patch\n\nThis one as well.\n\nv2-0005-initdb-Detect-Unix-domain-socket-support-dynamica.patch\n\nI think this patch contains some nice improvements in general. 
How much\nof that ends up being useful depends on how the subsequent patches (esp.\n0007) end up, since with Unix-domain sockets disabled by default on\nWindows, we won't need initdb doing any detection.\n\nv2-0006-Fix-handling-of-Unix-domain-sockets-on-Windows-in.patch\n\nThis is a fairly independent and isolated change.\n\nv2-0007-Disable-Unix-sockets-by-default-on-Windows.patch\n\nThis one is a bit complicated. Since there is no good default location\nfor Unix sockets on Windows, and many systems won't support them for\nsome time, the default implemented here is to not use them by default on\nthe server or client. This needs a fair amount of restructuring in the\nto support the case of \"supports Unix sockets but don't use them by\ndefault\", while maintaining the existing cases of \"doesn't support Unix\nsockets\" and \"use Unix sockets by default\". There is some room for\ndiscussion here.\n\n\nThis patch set needs testers with various Windows versions to test\ndifferent configurations, combinations, and versions.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 13 Aug 2019 20:27:24 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 6:27 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This patch set needs testers with various Windows versions to test\n> different configurations, combinations, and versions.\n\nIt's failing to build on cfbot's AppVeyor setup[1]. That's currently\nusing Windows SDK 7.1, so too old for the new AF_UNIX sockets, but\npresumably something is wrong because it shouldn't fail to compile and\nlink.\n\nsrc/interfaces/libpq/fe-connect.c(2682): warning C4101: 'pwdbuf' :\nunreferenced local variable [C:\\projects\\postgresql\\libpq.vcxproj]\nsrc/interfaces/libpq/fe-connect.c(2687): warning C4101: 'passerr' :\nunreferenced local variable [C:\\projects\\postgresql\\libpq.vcxproj]\n\nfe-connect.obj : error LNK2019: unresolved external symbol getpeereid\nreferenced in function PQconnectPoll\n[C:\\projects\\postgresql\\libpq.vcxproj]\n\n[1] https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.55034?fullLog=true\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Sep 2019 11:45:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-09-03 01:45, Thomas Munro wrote:\n> fe-connect.obj : error LNK2019: unresolved external symbol getpeereid\n> referenced in function PQconnectPoll\n> [C:\\projects\\postgresql\\libpq.vcxproj]\n\nThis should be fixed in the attached patches.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 3 Sep 2019 09:24:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-Sep-03, Peter Eisentraut wrote:\n\n> On 2019-09-03 01:45, Thomas Munro wrote:\n> > fe-connect.obj : error LNK2019: unresolved external symbol getpeereid\n> > referenced in function PQconnectPoll\n> > [C:\\projects\\postgresql\\libpq.vcxproj]\n> \n> This should be fixed in the attached patches.\n\nMinor bitrot in MSVC script; probably trivial to resolve.\n\nI think you should get 0001 (+0002?) pushed and see what the buildfarm\nthinks; move forward from there. 0003+0004 sound like they should just\nbe pushed shortly afterwards, while the three remaining ones might need\nsome more careful review.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:02:24 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-Sep-06, Alvaro Herrera from 2ndQuadrant wrote:\n\n> I think you should get 0001 (+0002?) pushed and see what the buildfarm\n> thinks; move forward from there.\n\n... but of course this goes counter to what you said earlier about 0001\nneeding to be pushed last.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:03:38 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "To move this topic along, I'll submit some preparatory patches in a \ncommittable order.\n\nFirst is the patch to deal with getpeereid() that was already included \nin the previous patch series. This is just some refactoring that \nreduces the difference between Windows and other platforms and prepares \nthe Unix-domain socket specific code to compile cleanly on Windows.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 30 Oct 2019 13:02:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On Wed, Oct 30, 2019 at 10:32 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> To move this topic a long, I'll submit some preparatory patches in a\n> committable order.\n>\n> First is the patch to deal with getpeereid() that was already included\n> in the previous patch series. This is just some refactoring that\n> reduces the difference between Windows and other platforms and prepares\n> the Unix-domain socket specific code to compile cleanly on Windows.\n>\n\n\nThis looks fairly sane and straightforward. Let's give it an outing on\nthe buildfarm ASAP so we can keep moving forward on this.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:09:55 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-12-16 05:39, Andrew Dunstan wrote:\n> On Wed, Oct 30, 2019 at 10:32 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> To move this topic a long, I'll submit some preparatory patches in a\n>> committable order.\n>>\n>> First is the patch to deal with getpeereid() that was already included\n>> in the previous patch series. This is just some refactoring that\n>> reduces the difference between Windows and other platforms and prepares\n>> the Unix-domain socket specific code to compile cleanly on Windows.\n>>\n> \n> \n> This looks fairly sane and straightforward. Let's give it an outing on\n> the buildfarm ASAP so we can keep moving forward on this.\n\npushed\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Dec 2019 09:58:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "Next patch: This allows building *with* Unix-domain socket support but \n*without* a default Unix socket path. This is needed because on Windows \nwe don't have a good default location like \"/tmp\" and we probably don't \nwant Unix sockets by default at run time so that older Windows versions \ncontinue to work out of the box with the same binaries.\n\nWe have code paths for Unix socket support and no Unix socket support. \nNow add a third variant: Unix socket support but do not use a Unix \nsocket by default in the client or the server, only if you explicitly\nspecify one.\n\nTo implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR\nto \"\" has the desired effect. This mostly already worked like that;\nonly a few places needed to be adjusted. Notably, the reference to\nDEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all\ncallers already resolve an empty socket directory setting with a\ndefault if appropriate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 18 Dec 2019 14:52:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
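The "third variant" described above, support compiled in but no default socket directory, amounts to a small piece of resolution logic. A hypothetical sketch (the function and constant names are illustrative, not the actual PostgreSQL identifiers apart from DEFAULT_PGSOCKET_DIR itself):

```python
from typing import Optional

# With Unix-socket support compiled in, an empty DEFAULT_PGSOCKET_DIR
# means "supported, but not used unless explicitly configured".
DEFAULT_PGSOCKET_DIR_UNIX = "/tmp"  # traditional Unix default
DEFAULT_PGSOCKET_DIR_WIN = ""       # no default on Windows

def effective_socket_dir(configured: Optional[str], default: str) -> Optional[str]:
    """Return the directory to place the socket in, or None for 'no socket'."""
    if configured:          # an explicit setting (e.g. postgres -k) wins
        return configured
    return default or None  # empty default -> no Unix socket by default

assert effective_socket_dir(None, DEFAULT_PGSOCKET_DIR_UNIX) == "/tmp"
assert effective_socket_dir(None, DEFAULT_PGSOCKET_DIR_WIN) is None
assert effective_socket_dir("C:/sockets", DEFAULT_PGSOCKET_DIR_WIN) == "C:/sockets"
```

This matches the patch description: callers resolve an empty socket directory setting with a default if appropriate, so UNIXSOCK_PATH() no longer needs to know about DEFAULT_PGSOCKET_DIR at all.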
{
"msg_contents": "On Wed, Dec 18, 2019 at 02:52:15PM +0100, Peter Eisentraut wrote:\n> To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR\n> to \"\" has the desired effect. This mostly already worked like that;\n> only a few places needed to be adjusted. Notably, the reference to\n> DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all\n> callers already resolve an empty socket directory setting with a\n> default if appropriate.\n\nWould it make sense to support abstract sockets in PostgreSQL?\n\nI know it's a bit unrelated. I haven't read all the code here; I just was\nthinking about it because of the code checking the leading \\0 byte of the dir.\n\nGarick\n\n",
"msg_date": "Wed, 18 Dec 2019 14:24:20 +0000",
"msg_from": "\"Hamlin, Garick L\" <ghamlin@isc.upenn.edu>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-12-18 15:24, Hamlin, Garick L wrote:\n> On Wed, Dec 18, 2019 at 02:52:15PM +0100, Peter Eisentraut wrote:\n>> To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR\n>> to \"\" has the desired effect. This mostly already worked like that;\n>> only a few places needed to be adjusted. Notably, the reference to\n>> DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all\n>> callers already resolve an empty socket directory setting with a\n>> default if appropriate.\n> \n> Would it make sense to support abstract sockets in PostgreSQL?\n\nMaybe, I'm not sure.\n\n> I know it's bit unrelated. I haven't read all the code here I just was\n> thinking about it because of the code checking the leading \\0 byte of the dir.\n\nWe would probably represent abstract sockets with a leading '@' in the \nuser-facing components and only translate it to the internal format at \nthe last moment, probably in that very same UNIXSOCK_PATH() function. \nSo I think that wouldn't be a problem.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Dec 2019 17:03:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2019-12-18 14:52, Peter Eisentraut wrote:\n> Next patch: This allows building *with* Unix-domain socket support but\n> *without* a default Unix socket path. This is needed because on Windows\n> we don't have a good default location like \"/tmp\" and we probably don't\n> want Unix sockets by default at run time so that older Windows versions\n> continue to work out of the box with the same binaries.\n> \n> We have code paths for Unix socket support and no Unix socket support.\n> Now add a third variant: Unix socket support but do not use a Unix\n> socket by default in the client or the server, only if you explicitly\n> specify one.\n> \n> To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR\n> to \"\" has the desired effect. This mostly already worked like that;\n> only a few places needed to be adjusted. Notably, the reference to\n> DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all\n> callers already resolve an empty socket directory setting with a\n> default if appropriate.\n\nPerhaps this patch is too boring to be reviewed. If there are no \nobjections, I'll commit it soon and then submit the final patches with \nthe real functionality for the next commit fest.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Jan 2020 19:28:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-12-18 14:52, Peter Eisentraut wrote:\n>> We have code paths for Unix socket support and no Unix socket support.\n>> Now add a third variant: Unix socket support but do not use a Unix\n>> socket by default in the client or the server, only if you explicitly\n>> specify one.\n>> \n>> To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR\n>> to \"\" has the desired effect. This mostly already worked like that;\n>> only a few places needed to be adjusted. Notably, the reference to\n>> DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all\n>> callers already resolve an empty socket directory setting with a\n>> default if appropriate.\n\n> Perhaps this patch is too boring to be reviewed. If there are no \n> objections, I'll commit it soon and then submit the final patches with \n> the real functionality for the next commit fest.\n\nSorry, I'd paid no particular attention to this thread because\nI figured it'd take a Windows-competent person to review. But\nthe patch as it stands isn't that.\n\nThe code looks fine (and a big +1 for not having knowledge of\nDEFAULT_PGSOCKET_DIR wired into UNIXSOCK_PATH). I wonder though\nwhether any user-facing documentation needs to be adjusted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 13:41:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On Fri, 31 Jan 2020 at 02:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > On 2019-12-18 14:52, Peter Eisentraut wrote:\n> >> We have code paths for Unix socket support and no Unix socket support.\n> >> Now add a third variant: Unix socket support but do not use a Unix\n> >> socket by default in the client or the server, only if you explicitly\n> >> specify one.\n> >>\n> >> To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR\n> >> to \"\" has the desired effect. This mostly already worked like that;\n> >> only a few places needed to be adjusted. Notably, the reference to\n> >> DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all\n> >> callers already resolve an empty socket directory setting with a\n> >> default if appropriate.\n>\n> > Perhaps this patch is too boring to be reviewed. If there are no\n> > objections, I'll commit it soon and then submit the final patches with\n> > the real functionality for the next commit fest.\n>\n> Sorry, I'd paid no particular attention to this thread because\n> I figured it'd take a Windows-competent person to review. But\n> the patch as it stands isn't that.\n>\n> The code looks fine (and a big +1 for not having knowledge of\n> DEFAULT_PGSOCKET_DIR wired into UNIXSOCK_PATH). I wonder though\n> whether any user-facing documentation needs to be adjusted.\n\nProbably, since it won't work with 'peer' auth from what was said upthread.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Fri, 31 Jan 2020 14:10:41 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2020-01-30 19:41, Tom Lane wrote:\n> The code looks fine (and a big +1 for not having knowledge of\n> DEFAULT_PGSOCKET_DIR wired into UNIXSOCK_PATH). I wonder though\n> whether any user-facing documentation needs to be adjusted.\n\nThere are no user-facing changes in this patch yet. That will come with \nsubsequent patches.\n\nThis patch has now been committed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 31 Jan 2020 17:55:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "Here is another patch set to enable this functionality.\n\n0001 enables Unix-domain sockets on Windows, but leaves them turned off \nby default at run time, using the mechanism introduced by a9cff89f7e. \nThis is relatively straightforward, except perhaps some aesthetic \nquestions about how these different configuration bits are distributed \naround the various files.\n\n0002 deals with pg_upgrade. It preserves the existing behavior of not \nusing Unix-domain sockets on Windows. This could perhaps be enhanced \nlater by either adding a command-line option or a run-time test. It's \ntoo complicated right now.\n\n0003 deals with how initdb should initialize postgresql.conf and \npg_hba.conf. It introduces a run-time test similar to how we detect \npresence of IPv6. After I wrote this patch, I have come to think that \nthis is overkill and we should just always leave the \"local\" line in \npg_hba.conf even if there is no run-time support in the OS. (I think \nthe reason we do the run-time test for IPv6 is that we need it to parse \nthe IPv6 addresses in pg_hba.conf, but there is no analogous requirement \nfor Unix-domain sockets.) This patch is optional in any case.\n\n0004 fixes a bug in the pg_upgrade test.sh script that was exposed by \nthese changes.\n\n0005 fixes up some issues in the test suites. Right now, the TAP tests \nare hardcoded to not use Unix-domain sockets on Windows, whereas \npg_regress keys off HAVE_UNIX_SOCKETS, which is no longer a useful \ndistinguisher. The change is to not use Unix-domain sockets for all the \ntests by default on Windows (the previous behavior) but give an option \nto use them. At the moment, I would consider running the test suites \nwith Unix-domain sockets enabled as experimental, but that's only \nbecause of various issues in the test setups. For instance, there is an \nissue in the comment of pg_regress.c remove_temp() that I'm not sure how \nto address. 
Also, the TAP tests don't seem to work because of some path \nissues. I figured I'd call time on fiddling with this for now and ship \nthe patches.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Feb 2020 09:32:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "\nOn 2/12/20 3:32 AM, Peter Eisentraut wrote:\n> Here is another patch set to enable this functionality.\n>\n> 0001 enables Unix-domain sockets on Windows, but leaves them turned\n> off by default at run time, using the mechanism introduced by\n> a9cff89f7e. This is relatively straightforward, except perhaps some\n> aesthetic questions about how these different configuration bits are\n> distributed around the various files.\n>\n> 0002 deals with pg_upgrade. It preserves the existing behavior of not\n> using Unix-domain sockets on Windows. This could perhaps be enhanced\n> later by either adding a command-line option or a run-time test. It's\n> too complicated right now.\n>\n> 0003 deals with how initdb should initialize postgresql.conf and\n> pg_hba.conf. It introduces a run-time test similar to how we detect\n> presence of IPv6. After I wrote this patch, I have come to think that\n> this is overkill and we should just always leave the \"local\" line in\n> pg_hba.conf even if there is no run-time support in the OS. (I think\n> the reason we do the run-time test for IPv6 is that we need it to\n> parse the IPv6 addresses in pg_hba.conf, but there is no analogous\n> requirement for Unix-domain sockets.) This patch is optional in any\n> case.\n>\n> 0004 fixes a bug in the pg_upgrade test.sh script that was exposed by\n> these changes.\n>\n> 0005 fixes up some issues in the test suites. Right now, the TAP\n> tests are hardcoded to not use Unix-domain sockets on Windows, whereas\n> pg_regress keys off HAVE_UNIX_SOCKETS, which is no longer a useful\n> distinguisher. The change is to not use Unix-domain sockets for all\n> the tests by default on Windows (the previous behavior) but give an\n> option to use them. At the moment, I would consider running the test\n> suites with Unix-domain sockets enabled as experimental, but that's\n> only because of various issues in the test setups. 
For instance,\n> there is an issue in the comment of pg_regress.c remove_temp() that\n> I'm not sure how to address. Also, the TAP tests don't seem to work\n> because of some path issues. I figured I'd call time on fiddling with\n> this for now and ship the patches.\n>\n\nI have tested this on drongo/fairywren and it works fine. The patches\napply cleanly (with a bit of fuzz) and a full buildfarm run is happy in\nboth cases.\n\nUnfortunately I don't have a Windows machine that's young enough to\nsupport git master and old enough not to support Unix Domain sockets, so\nI can't test that with socket-enabled binaries.\n\nOn inspection the patches seem fine.\n\nLet's commit this and keep working on the pg_upgrade and test issues.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 27 Mar 2020 13:52:38 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unix-domain socket support on Windows"
},
{
"msg_contents": "On 2020-03-27 18:52, Andrew Dunstan wrote:\n> I have tested this on drongo/fairywren and it works fine. The patches\n> apply cleanly (with a bit of fuzz) and a full buildfarm run is happy in\n> both cases.\n> \n> Unfortunately I don't have a Windows machine that's young enough to\n> support git master and old enough not to support Unix Domain sockets, so\n> i can't test that with socket-enabled binaries.\n> \n> On inspection the patches seem fine.\n> \n> Let's commit this and keep working on the pg_upgrade and test issues.\n\nI have committed this in chunks over the last couple of days. It's done \nnow.\n\nI didn't commit the initdb auto-detection patch. As I mentioned \nearlier, that one is probably not necessary.\n\nBtw., the default AppVeyor images are too old to support this. You need \nsomething like 'image: Visual Studio 2019' to get a new enough image. \nSo that's one way to test what happens when it's not supported at run \ntime. (I did test it and you get a sensible error message.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Mar 2020 17:43:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix-domain socket support on Windows"
}
] |
[
{
"msg_contents": "Over in [1] we realized that it would be a good idea to remove the <@\noperator from contrib/intarray's GiST opclasses. Unfortunately, doing\nthat isn't a simple matter of generating an extension update script\ncontaining ALTER OPERATOR FAMILY DROP OPERATOR, because that operator\nis marked as internally dependent on its opclass which means that\ndependency.c will object. We could do some direct hacking on\npg_depend to let the DROP be allowed, but ugh.\n\nI started to wonder why GiST opclass operators are ever considered\nas required members of their opclass. The existing rule (cf.\nopclasscmds.c) is that everything mentioned in CREATE OPERATOR CLASS\nwill have an internal dependency on the opclass, but if you add\noperators or functions with ALTER OPERATOR FAMILY ADD, those just\nhave AUTO dependencies on their operator family. So the assumption\nis that opclass creators will only put the bare minimum of required\nstuff into CREATE OPERATOR CLASS and then add optional stuff with\nALTER ... ADD. But none of our contrib modules do it like that,\nand I'd lay long odds against any third party code doing it either.\n\nThis leads to the thought that maybe we could put some intelligence\ninto an index-AM-specific callback instead. For example, for btree\nand hash the appropriate rule is probably that cross-type operators\nand functions should be tied to the opfamily while single-type\nmembers are internally tied to the associated opclass. 
For GiST,\nGIN, and SPGiST it's not clear to me that *any* operator deserves\nan INTERNAL dependency; only the implementation functions do.\n\nFurthermore, if we had an AM callback that were charged with\ndeciding the right dependency links for all the operators/functions,\nwe could also have it do some validity checking on those things,\nthus moving some of the checks made by amvalidate into a more\nuseful place.\n\nIf we went along this line, then a dump/restore or pg_upgrade\nwould be enough to change an opclass's dependencies to the new\nstyle, which would get us to a place where intarray's problem\ncould be fixed with ALTER OPERATOR FAMILY DROP OPERATOR and\nnothing else. Such an upgrade script wouldn't work in older\nreleases, but I think we don't generally care about that.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/458.1565114141@sss.pgh.pa.us\n\n\n",
"msg_date": "Wed, 07 Aug 2019 12:28:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 7:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Over in [1] we realized that it would be a good idea to remove the <@\n> operator from contrib/intarray's GiST opclasses. Unfortunately, doing\n> that isn't a simple matter of generating an extension update script\n> containing ALTER OPERATOR FAMILY DROP OPERATOR, because that operator\n> is marked as internally dependent on its opclass which means that\n> dependency.c will object. We could do some direct hacking on\n> pg_depend to let the DROP be allowed, but ugh.\n>\n> I started to wonder why GiST opclass operators are ever considered\n> as required members of their opclass. The existing rule (cf.\n> opclasscmds.c) is that everything mentioned in CREATE OPERATOR CLASS\n> will have an internal dependency on the opclass, but if you add\n> operators or functions with ALTER OPERATOR FAMILY ADD, those just\n> have AUTO dependencies on their operator family. So the assumption\n> is that opclass creators will only put the bare minimum of required\n> stuff into CREATE OPERATOR CLASS and then add optional stuff with\n> ALTER ... ADD. But none of our contrib modules do it like that,\n> and I'd lay long odds against any third party code doing it either.\n\nThat's really odd. I don't think any extension SQL script really\ncares about the difference between operators defined in CREATE\nOPERATOR CLASS and operators defined in ALTER OPERATOR FAMILY ADD.\nBut if they did care, then all GiST, GIN, SP-GiST and BRIN opclasses\nwould define all their operators using ALTER OPERATOR FAMILY ADD.\n\n> This leads to the thought that maybe we could put some intelligence\n> into an index-AM-specific callback instead. For example, for btree\n> and hash the appropriate rule is probably that cross-type operators\n> and functions should be tied to the opfamily while single-type\n> members are internally tied to the associated opclass. 
For GiST,\n> GIN, and SPGiST it's not clear to me that *any* operator deserves\n> an INTERNAL dependency; only the implementation functions do.\n>\n> Furthermore, if we had an AM callback that were charged with\n> deciding the right dependency links for all the operators/functions,\n> we could also have it do some validity checking on those things,\n> thus moving some of the checks made by amvalidate into a more\n> useful place.\n\n+1, sounds like a plan for me.\n\n> If we went along this line, then a dump/restore or pg_upgrade\n> would be enough to change an opclass's dependencies to the new\n> style, which would get us to a place where intarray's problem\n> could be fixed with ALTER OPERATOR FAMILY DROP OPERATOR and\n> nothing else. Such an upgrade script wouldn't work in older\n> releases, but I think we don't generally care about that.\n\n+1\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 8 Aug 2019 03:17:29 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": ">\n> But none of our contrib modules do it like that, and I'd lay long odds\n> against any third party code doing it either.\n>\n> Thoughts?\n\nPostGIS has some rarely used box operations as part of GiST opclass, like\n\"overabove\".\n\nThese are a source of misunderstanding, as it hinges on the fact that\nnon-square geometry will be coerced into a box on a call, which is not\nobvious when you call it on something like diagonal linestrings.\nIt may happen that we will decide to remove them. In such circumstances, I\nexpect that ALTER OPERATOR CLASS DROP OPERATOR will work.\n\nAnother thing I would expect is that DROP FUNCTION ... CASCADE will\nremove the operator and unregister the operator from the operator class\nwithout dropping the operator class itself.",
"msg_date": "Fri, 9 Aug 2019 17:19:35 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Wed, Aug 7, 2019 at 7:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This leads to the thought that maybe we could put some intelligence\n>> into an index-AM-specific callback instead. For example, for btree\n>> and hash the appropriate rule is probably that cross-type operators\n>> and functions should be tied to the opfamily while single-type\n>> members are internally tied to the associated opclass. For GiST,\n>> GIN, and SPGiST it's not clear to me that *any* operator deserves\n>> an INTERNAL dependency; only the implementation functions do.\n>> \n>> Furthermore, if we had an AM callback that were charged with\n>> deciding the right dependency links for all the operators/functions,\n>> we could also have it do some validity checking on those things,\n>> thus moving some of the checks made by amvalidate into a more\n>> useful place.\n\n> +1, sounds like a plan for me.\n\nHere's a preliminary patch along these lines. It adds an AM callback\nthat can adjust the dependency types before they're entered into\npg_depend. There's a lot of stuff that's open for debate and/or\nremains to be done:\n\n* Is the parameter list of amcheckmembers() sufficient, or should we\npass more info (if so, what)? In principle, the AM can always look\nup anything else it needs to know from the provided OIDs, but that\nwould be cumbersome if many AMs need the same info.\n\n* Do we need any more flexibility in the set of ways that the pg_depend\nentries can be set up than what I've provided here?\n\n* Are the specific ways that the entries are getting set up appropriate?\nNote in particular that I left btree/hash alone, feeling that the default\n(historical) behavior was designed for them and is not unreasonable; but\nmaybe we should switch them to the cross-type-vs-not-cross-type behavior\nproposed above. 
Also I didn't touch BRIN because I don't know enough\nabout it to be sure what would be best, and I didn't touch contrib/bloom\nbecause I don't care too much about it.\n\n* I didn't add any actual error checking to the checkmembers functions.\nI figure that can be done in a followup patch, and it'll just be tedious\nboilerplate anyway.\n\n* I refactored things a little bit in opclasscmds.c, mostly by adding\nan is_func boolean to OpFamilyMember and getting rid of parameters\nequivalent to that. This is based on the thought that AMs might prefer\nto process the structs based on such a flag rather than by keeping them\nin separate lists. We could go further and merge the operator and\nfunction structs into one list, forcing the is_func flag to be used;\nbut I'm not sure whether that'd be an improvement.\n\n* I'm not at all impressed with the name, location, or concept of\nopfam_internal.h. I think we should get rid of that header and put\nthe OpFamilyMember struct somewhere else. Given that this patch\nmakes it part of the AM API, it wouldn't be unreasonable to move it\nto amapi.h. But I've not done that here.\n\nI'll add this to the upcoming commitfest.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 18 Aug 2019 14:59:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "On Sun, Aug 18, 2019 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a preliminary patch along these lines. It adds an AM callback\n> that can adjust the dependency types before they're entered into\n> pg_depend. There's a lot of stuff that's open for debate and/or\n> remains to be done:\n>\n> * Is the parameter list of amcheckmembers() sufficient, or should we\n> pass more info (if so, what)? In principle, the AM can always look\n> up anything else it needs to know from the provided OIDs, but that\n> would be cumbersome if many AMs need the same info.\n\nLooks sufficient to me. I didn't yet imagine something else useful.\n\n> * Do we need any more flexibility in the set of ways that the pg_depend\n> entries can be set up than what I've provided here?\n\nFlexibility also looks sufficient to me.\n\n> * Are the specific ways that the entries are getting set up appropriate?\n> Note in particular that I left btree/hash alone, feeling that the default\n> (historical) behavior was designed for them and is not unreasonable; but\n> maybe we should switch them to the cross-type-vs-not-cross-type behavior\n> proposed above. Also I didn't touch BRIN because I don't know enough\n> about it to be sure what would be best, and I didn't touch contrib/bloom\n> because I don't care too much about it.\n\nI think we need ability to remove GiST fetch proc. Presence of this\nprocedure is used to determine whether GiST index supports index only\nscan (IOS). We need to be able to remove this proc to drop IOS\nsupport.\n\n> * I refactored things a little bit in opclasscmds.c, mostly by adding\n> an is_func boolean to OpFamilyMember and getting rid of parameters\n> equivalent to that. This is based on the thought that AMs might prefer\n> to process the structs based on such a flag rather than by keeping them\n> in separate lists. 
We could go further and merge the operator and\n> function structs into one list, forcing the is_func flag to be used;\n> but I'm not sure whether that'd be an improvement.\n\nI'm also not sure about this. Two lists look OK to me.\n\n> * I'm not at all impressed with the name, location, or concept of\n> opfam_internal.h. I think we should get rid of that header and put\n> the OpFamilyMember struct somewhere else. Given that this patch\n> makes it part of the AM API, it wouldn't be unreasonable to move it\n> to amapi.h. But I've not done that here.\n\n+1\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 10 Sep 2019 00:07:24 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Sun, Aug 18, 2019 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Are the specific ways that the entries are getting set up appropriate?\n>> Note in particular that I left btree/hash alone, feeling that the default\n>> (historical) behavior was designed for them and is not unreasonable; but\n>> maybe we should switch them to the cross-type-vs-not-cross-type behavior\n>> proposed above. Also I didn't touch BRIN because I don't know enough\n>> about it to be sure what would be best, and I didn't touch contrib/bloom\n>> because I don't care too much about it.\n\n> I think we need ability to remove GiST fetch proc. Presence of this\n> procedure is used to determine whether GiST index supports index only\n> scan (IOS). We need to be able to remove this proc to drop IOS\n> support.\n\nOK ... so thinking in more general terms, you're arguing that any optional\nsupport function should have a soft not hard dependency. The attached v2\npatch implements that rule, and also changes btree and hash to use\nthe cross-type-vs-not-cross-type behavior I proposed earlier.\n\nThis change results in a possibly surprising change in the expected output\nfor the 002_pg_dump.pl test: an optional support function that had been\ncreated as part of CREATE OPERATOR CLASS will now be dumped as part of\nALTER OPERATOR FAMILY. Maybe that's too surprising? Another approach\nthat we could use is to give up the premise that soft dependencies are\nalways on the opfamily. If we kept the dependencies pointing to the\nsame objects as before (opclass or opfamily) and only twiddled the\ndependency strength, then pg_dump's output would not change. Now,\nthis would possibly result in dropping a still-useful family member\nif it were incorrectly tied to an opclass that's dropped --- but that\nwould have happened before, too. 
I'm not quite sure if we really want\nto editorialize on the user's decisions about which grouping to tie\nfamily members to.\n\n>> * I'm not at all impressed with the name, location, or concept of\n>> opfam_internal.h. I think we should get rid of that header and put\n>> the OpFamilyMember struct somewhere else. Given that this patch\n>> makes it part of the AM API, it wouldn't be unreasonable to move it\n>> to amapi.h. But I've not done that here.\n\n> +1\n\nDid that in this revision, too.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 14 Sep 2019 19:01:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Hi,\n\nThe latest version of this patch (from 2019/09/14) no longer applies,\nalthough maybe it's some issue with patch format (applying it using\npatch works fine, git am fails with \"Patch format detection failed.\").\nIn any case, this means cputube can't apply/test it.\n\nOn Sat, Sep 14, 2019 at 07:01:33PM -0400, Tom Lane wrote:\n>Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n>> On Sun, Aug 18, 2019 at 10:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> * Are the specific ways that the entries are getting set up appropriate?\n>>> Note in particular that I left btree/hash alone, feeling that the default\n>>> (historical) behavior was designed for them and is not unreasonable; but\n>>> maybe we should switch them to the cross-type-vs-not-cross-type behavior\n>>> proposed above. Also I didn't touch BRIN because I don't know enough\n>>> about it to be sure what would be best, and I didn't touch contrib/bloom\n>>> because I don't care too much about it.\n>\n>> I think we need ability to remove GiST fetch proc. Presence of this\n>> procedure is used to determine whether GiST index supports index only\n>> scan (IOS). We need to be able to remove this proc to drop IOS\n>> support.\n>\n>OK ... so thinking in more general terms, you're arguing that any optional\n>support function should have a soft not hard dependency. The attached v2\n>patch implements that rule, and also changes btree and hash to use\n>the cross-type-vs-not-cross-type behavior I proposed earlier.\n>\n>This change results in a possibly surprising change in the expected output\n>for the 002_pg_dump.pl test: an optional support function that had been\n>created as part of CREATE OPERATOR CLASS will now be dumped as part of\n>ALTER OPERATOR FAMILY. Maybe that's too surprising? Another approach\n>that we could use is to give up the premise that soft dependencies are\n>always on the opfamily. 
If we kept the dependencies pointing to the\n>same objects as before (opclass or opfamily) and only twiddled the\n>dependency strength, then pg_dump's output would not change. Now,\n>this would possibly result in dropping a still-useful family member\n>if it were incorrectly tied to an opclass that's dropped --- but that\n>would have happened before, too. I'm not quite sure if we really want\n>to editorialize on the user's decisions about which grouping to tie\n>family members to.\n>\n\nI agree it's a bit weird to add a dependency on an opfamily and not the\nopclass. Not just because of the pg_dump weirdness, but doesn't it mean\nthat after a DROP OPERATOR CLASS we might still reject a DROP OPERATOR\nbecause of the remaining opfamily dependency? (Haven't tried, so maybe\nthat works fine).\n\n>>> * I'm not at all impressed with the name, location, or concept of\n>>> opfam_internal.h. I think we should get rid of that header and put\n>>> the OpFamilyMember struct somewhere else. Given that this patch\n>>> makes it part of the AM API, it wouldn't be unreasonable to move it\n>>> to amapi.h. But I've not done that here.\n>\n>> +1\n>\n>Did that in this revision, too.\n>\n\nOne minor comment from me is that maybe \"amcheckmembers\" is a bit\nmisleading. In my mind \"check\" implies a plain passive check, not\nsomething that may actually tweak the dependency type.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 4 Jan 2020 23:52:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> The latest version of this patch (from 2019/09/14) no longer applies,\n> although maybe it's some issue with patch format (applying it using\n> patch works fine, git am fails with \"Patch format detection failed.\").\n\nHm, seems to be just a trivial conflict against the copyright-date-update\npatch. Updated version attached.\n\n> On Sat, Sep 14, 2019 at 07:01:33PM -0400, Tom Lane wrote:\n>> This change results in a possibly surprising change in the expected output\n>> for the 002_pg_dump.pl test: an optional support function that had been\n>> created as part of CREATE OPERATOR CLASS will now be dumped as part of\n>> ALTER OPERATOR FAMILY. Maybe that's too surprising? Another approach\n>> that we could use is to give up the premise that soft dependencies are\n>> always on the opfamily. If we kept the dependencies pointing to the\n>> same objects as before (opclass or opfamily) and only twiddled the\n>> dependency strength, then pg_dump's output would not change. Now,\n>> this would possibly result in dropping a still-useful family member\n>> if it were incorrectly tied to an opclass that's dropped --- but that\n>> would have happened before, too. I'm not quite sure if we really want\n>> to editorialize on the user's decisions about which grouping to tie\n>> family members to.\n\n> I agree it's a bit weird to add a dependency on an opfamily and not the\n> opclass. Not just because of the pg_dump weirdness, but doesn't it mean\n> that after a DROP OPERATOR CLASS we might still reject a DROP OPERATOR\n> because of the remaining opfamily dependency? 
(Haven't tried, so maybe\n> that works fine).\n\nI poked at the idea of retaining the user's decisions as to whether\na member object is a member of an individual opclass or an opfamily,\nbut soon realized that there's a big problem with that: we don't have\nany ALTER OPERATOR CLASS ADD/DROP syntax, only ALTER OPERATOR FAMILY.\nSo there's no way to express the concept of \"add this at the opclass\nlevel\", if you forgot to add it during initial opclass creation.\n\nI suppose that some case could be made for adding such syntax, but\nit seems like a significant amount of work, and in the end it seems\nlike it's better to trust the system to get these assignments right.\nLetting the user do it doesn't add much except the opportunity\nto shoot oneself in the foot.\n\n> One minor comment from me is that maybe \"amcheckmembers\" is a bit\n> misleading. In my mind \"check\" implies a plain passive check, not\n> something that may actually tweak the dependency type.\n\nHmm. I'm not wedded to that name, but do you have a better proposal?\nThe end goal (not realized in this patch, of course) is that these\ncallbacks would perform fairly thorough checking of opclass members,\nmissing only the ability to check that all required members are present.\nSo I don't want to name them something like \"amfixdependencies\", even\nif that's all they're doing right now.\n\nI see your point that \"check\" suggests a read-only operation, but\nI'm not sure about a better verb. I thought of \"amvalidatemembers\",\nbut that's not really much better than \"check\" is it?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 05 Jan 2020 12:33:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "On Sun, Jan 05, 2020 at 12:33:10PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> The latest version of this patch (from 2019/09/14) no longer applies,\n>> although maybe it's some issue with patch format (applying it using\n>> patch works fine, git am fails with \"Patch format detection failed.\").\n>\n>Hm, seems to be just a trivial conflict against the copyright-date-update\n>patch. Updated version attached.\n>\n\nInteresting. I still get\n\n $ git am ~/am-check-members-callback-3.patch\n Patch format detection failed.\n\nI'm on git 2.21.1, not sure if that matters. Cputube is happy, though.\n\nMeh.\n\n>> On Sat, Sep 14, 2019 at 07:01:33PM -0400, Tom Lane wrote:\n>>> This change results in a possibly surprising change in the expected output\n>>> for the 002_pg_dump.pl test: an optional support function that had been\n>>> created as part of CREATE OPERATOR CLASS will now be dumped as part of\n>>> ALTER OPERATOR FAMILY. Maybe that's too surprising? Another approach\n>>> that we could use is to give up the premise that soft dependencies are\n>>> always on the opfamily. If we kept the dependencies pointing to the\n>>> same objects as before (opclass or opfamily) and only twiddled the\n>>> dependency strength, then pg_dump's output would not change. Now,\n>>> this would possibly result in dropping a still-useful family member\n>>> if it were incorrectly tied to an opclass that's dropped --- but that\n>>> would have happened before, too. I'm not quite sure if we really want\n>>> to editorialize on the user's decisions about which grouping to tie\n>>> family members to.\n>\n>> I agree it's a bit weird to add a dependency on an opfamily and not the\n>> opclass. Not just because of the pg_dump weirdness, but doesn't it mean\n>> that after a DROP OPERATOR CLASS we might still reject a DROP OPERATOR\n>> because of the remaining opfamily dependency? 
(Haven't tried, so maybe\n>> that works fine).\n>\n>I poked at the idea of retaining the user's decisions as to whether\n>a member object is a member of an individual opclass or an opfamily,\n>but soon realized that there's a big problem with that: we don't have\n>any ALTER OPERATOR CLASS ADD/DROP syntax, only ALTER OPERATOR FAMILY.\n>So there's no way to express the concept of \"add this at the opclass\n>level\", if you forgot to add it during initial opclass creation.\n>\n>I suppose that some case could be made for adding such syntax, but\n>it seems like a significant amount of work, and in the end it seems\n>like it's better to trust the system to get these assignments right.\n>Letting the user do it doesn't add much except the opportunity\n>to shoot oneself in the foot.\n>\n\nOK. So we shall keep the v2 behavior, with opfamily dependencies and\nmodified pg_dump output? Fine with me - I still think it's a bit weird,\nbut I'm willing to commit myself to add the missing syntax. And I doubt\nanyone will notice, probably ...\n\n>> One minor comment from me is that maybe \"amcheckmembers\" is a bit\n>> misleading. In my mind \"check\" implies a plain passive check, not\n>> something that may actually tweak the dependency type.\n>\n>Hmm. I'm not wedded to that name, but do you have a better proposal?\n>The end goal (not realized in this patch, of course) is that these\n>callbacks would perform fairly thorough checking of opclass members,\n>missing only the ability to check that all required members are present.\n>So I don't want to name them something like \"amfixdependencies\", even\n>if that's all they're doing right now.\n>\n\nOK.\n\n>\n>I see your point that \"check\" suggests a read-only operation, but\n>I'm not sure about a better verb. 
I thought of \"amvalidatemembers\",\n>but that's not really much better than \"check\" is it?\n>\n\nI don't :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 20:15:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Jan 05, 2020 at 12:33:10PM -0500, Tom Lane wrote:\n>> I see your point that \"check\" suggests a read-only operation, but\n>> I'm not sure about a better verb. I thought of \"amvalidatemembers\",\n>> but that's not really much better than \"check\" is it?\n\n> I don't :-(\n\nStill haven't got a better naming idea, but in the meantime here's\na rebase to fix a conflict with 612a1ab76.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 27 Feb 2020 18:32:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "I wrote:\n> Still haven't got a better naming idea, but in the meantime here's\n> a rebase to fix a conflict with 612a1ab76.\n\nI see from the cfbot that this needs another rebase, so here 'tis.\nNo changes in the patch itself.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 31 Mar 2020 16:45:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "On 2019-Aug-18, Tom Lane wrote:\n\n> * I'm not at all impressed with the name, location, or concept of\n> opfam_internal.h. I think we should get rid of that header and put\n> the OpFamilyMember struct somewhere else. Given that this patch\n> makes it part of the AM API, it wouldn't be unreasonable to move it\n> to amapi.h. But I've not done that here.\n\nI created that file so that it'd be possible to interpret the struct\nwhen dealing with DDL commands in event triggers (commit b488c580aef4).\nThe struct was previously in a .c file, and we didn't have an\nappropriate .h file to put it in. I think amapi.h is a great place for\nit.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Mar 2020 18:09:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-18, Tom Lane wrote:\n>> * I'm not at all impressed with the name, location, or concept of\n>> opfam_internal.h. I think we should get rid of that header and put\n>> the OpFamilyMember struct somewhere else. Given that this patch\n>> makes it part of the AM API, it wouldn't be unreasonable to move it\n>> to amapi.h. But I've not done that here.\n\n> I created that file so that it'd be possible to interpret the struct\n> when dealing with DDL commands in event triggers (commit b488c580aef4).\n> The struct was previously in a .c file, and we didn't have an\n> appropriate .h file to put it in. I think amapi.h is a great place for\n> it.\n\nYeah, later versions of the patch put it there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Mar 2020 17:19:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "On 31.03.2020 23:45, Tom Lane wrote:\n> I wrote:\n>> Still haven't got a better naming idea, but in the meantime here's\n>> a rebase to fix a conflict with 612a1ab76.\n\nMaybe \"amadjustmembers\" will work?\n\nI've looked through the patch and noticed this comment:\n\n+\t\t\tdefault:\n+\t\t\t\t/* Probably we should throw error here */\n+\t\t\t\tbreak;\n\nI suggest adding an ERROR or maybe Assert, so that future developers\nwouldn't forget about setting dependencies. Other than that, the patch\nlooks good to me.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 14 Jul 2020 23:22:55 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nI've gone through the patch and applied on the master branch, other than a few hunks, and whether as suggested upthread, the default case for \"switch (op->number)\" should throw an error or not, I found that bloom regression is crashing.\r\n-------------\r\ntest bloom ... FAILED (test process exited with exit code 2) 20 ms\r\n\r\n+server closed the connection unexpectedly\r\n+\tThis probably means the server terminated abnormally\r\n+\tbefore or while processing the request.\r\n+connection to server was lost\r\n-------------\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 28 Jul 2020 14:51:15 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> I've gone through the patch and applied on the master branch, other than a few hunks, and whether as suggested upthread, the default case for \"switch (op->number)\" should throw an error or not, I found that bloom regression is crashing.\n> -------------\n> test bloom ... FAILED (test process exited with exit code 2) 20 ms\n\nHmm ... I think you must have done something wrong. For me,\nam-check-members-callback-5.patch still applies cleanly (just a few\nsmall offsets), and it passes that test as well as the rest of\ncheck-world. The cfbot agrees [1].\n\nMaybe you didn't \"make clean\" before rebuilding?\n\n\t\t\tregards, tom lane\n\n[1] https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/712599990\n\n\n",
"msg_date": "Tue, 28 Jul 2020 11:43:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "On Tue, Jul 28, 2020 at 8:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> > I've gone through the patch and applied on the master branch, other than\n> a few hunks, and whether as suggested upthread, the default case for\n> \"switch (op->number)\" should throw an error or not, I found that bloom\n> regression is crashing.\n> > -------------\n> > test bloom ... FAILED (test process exited with\n> exit code 2) 20 ms\n>\n> Hmm ... I think you must have done something wrong. For me,\n> am-check-members-callback-5.patch still applies cleanly (just a few\n> small offsets), and it passes that test as well as the rest of\n> check-world. The cfbot agrees [1].\n>\n> Maybe you didn't \"make clean\" before rebuilding?\n>\n> regards, tom lane\n>\n> [1]\n> https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/712599990\n>\n\nI was pretty sure I did make clean before testing the patch, but perhaps I\ndidn't as re-running it causes all tests to pass.\n\nSorry for the false alarm. All good with the patch.\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 29 Jul 2020 09:13:19 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nLooks good to me. \r\n\r\nCORRECTION:\r\nIn my previous review I had mistakenly mentioned that it was causing a server crash. Tests run perfectly fine without any errors.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 29 Jul 2020 06:56:40 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
},
{
"msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> On 31.03.2020 23:45, Tom Lane wrote:\n>> Still haven't got a better naming idea, but in the meantime here's\n>> a rebase to fix a conflict with 612a1ab76.\n\n> Maybe \"amadjustmembers\" will work?\n\nNot having any better idea, I adopted that one.\n\n> I've looked through the patch and noticed this comment:\n> + /* Probably we should throw error here */\n\n> I suggest adding an ERROR or maybe Assert, so that future developers \n> wouldn't\n> forget about setting dependencies. Other than that, the patch looks good \n> to me.\n\nI'd figured that adding error checks could be left for a second pass,\nbut there's no strong reason not to insert these particular checks\nnow ... and indeed, doing so showed me that the patch hadn't been\nupdated to cover the recent addition of opclass options procs :-(.\nSo I fixed that and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 01 Aug 2020 17:17:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking opclass member checks and dependency strength"
}
] |
[
{
"msg_contents": "A daily report crashed repeatedly this morning running pg11.4.\nI compiled 11.5 and reproduced it there, too, so I'm including backtrace with\n-O0.\n\nI'm trying to dig further into it, but it seems to be crashing under load, but\nnot when I try to narrow down to a single report, which seem to run to\ncompletion when run individually.\n\npostmaster[6750]: segfault at 7f4a527545bc ip 00000000004834ae sp 00007ffd547e2760 error 4 in postgres[400000+6e6000]\npostmaster[29786]: segfault at 7f49a968d000 ip 00007f49b24abb0f sp 00007ffd547e2268 error 4 in libc-2.12.so (deleted)[7f49b2422000+18b000]\n\nCore was generated by `postgres: telsasoft ts 192.168.122.11(35454) SELECT '.\nProgram terminated with signal 11, Segmentation fault.\n\n#0 0x00000000004877df in slot_deform_tuple (slot=0xe204638, natts=3) at heaptuple.c:1465\n#1 0x0000000000487f5b in slot_getsomeattrs (slot=0xe204638, attnum=3) at heaptuple.c:1673\n#2 0x00000000006d5aae in ExecInterpExpr (state=0xe257948, econtext=0xe204578, isnull=0x7ffff5f38077) at execExprInterp.c:443\n#3 0x00000000006ed221 in ExecEvalExprSwitchContext (state=0xe257948, econtext=0xe204578, isNull=0x7ffff5f38077) at ../../../src/include/executor/executor.h:313\n#4 0x00000000006ed30e in ExecQual (state=0xe257948, econtext=0xe204578) at ../../../src/include/executor/executor.h:382\n#5 0x00000000006ed5f4 in ExecScan (node=0x252edc8, accessMtd=0x71c54c <SeqNext>, recheckMtd=0x71c623 <SeqRecheck>) at execScan.c:190\n#6 0x000000000071c670 in ExecSeqScan (pstate=0x252edc8) at nodeSeqscan.c:129\n#7 0x00000000006eb855 in ExecProcNodeInstr (node=0x252edc8) at execProcnode.c:461\n#8 0x00000000006f6555 in ExecProcNode (node=0x252edc8) at ../../../src/include/executor/executor.h:247\n#9 0x00000000006f6a05 in ExecAppend (pstate=0xe20e7b0) at nodeAppend.c:294\n#10 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20e7b0) at execProcnode.c:461\n#11 0x0000000000708a96 in ExecProcNode (node=0xe20e7b0) at 
../../../src/include/executor/executor.h:247\n#12 0x0000000000709b0d in ExecHashJoinOuterGetTuple (outerNode=0xe20e7b0, hjstate=0xe20e4d8, hashvalue=0x7ffff5f3827c) at nodeHashjoin.c:821\n#13 0x0000000000708f8d in ExecHashJoinImpl (pstate=0xe20e4d8, parallel=false) at nodeHashjoin.c:355\n#14 0x00000000007094ca in ExecHashJoin (pstate=0xe20e4d8) at nodeHashjoin.c:565\n#15 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20e4d8) at execProcnode.c:461\n#16 0x000000000071d9da in ExecProcNode (node=0xe20e4d8) at ../../../src/include/executor/executor.h:247\n#17 0x000000000071db26 in ExecSort (pstate=0xe20e3c0) at nodeSort.c:107\n#18 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20e3c0) at execProcnode.c:461\n#19 0x00000000006eb826 in ExecProcNodeFirst (node=0xe20e3c0) at execProcnode.c:445\n#20 0x00000000006f74c5 in ExecProcNode (node=0xe20e3c0) at ../../../src/include/executor/executor.h:247\n#21 0x00000000006f79bb in fetch_input_tuple (aggstate=0xe20df70) at nodeAgg.c:406\n#22 0x00000000006f9d11 in agg_retrieve_direct (aggstate=0xe20df70) at nodeAgg.c:1755\n#23 0x00000000006f98fa in ExecAgg (pstate=0xe20df70) at nodeAgg.c:1570\n#24 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20df70) at execProcnode.c:461\n#25 0x00000000006eb826 in ExecProcNodeFirst (node=0xe20df70) at execProcnode.c:445\n#26 0x0000000000708a96 in ExecProcNode (node=0xe20df70) at ../../../src/include/executor/executor.h:247\n#27 0x0000000000709b0d in ExecHashJoinOuterGetTuple (outerNode=0xe20df70, hjstate=0x2778580, hashvalue=0x7ffff5f3865c) at nodeHashjoin.c:821\n#28 0x0000000000708f8d in ExecHashJoinImpl (pstate=0x2778580, parallel=false) at nodeHashjoin.c:355\n#29 0x00000000007094ca in ExecHashJoin (pstate=0x2778580) at nodeHashjoin.c:565\n#30 0x00000000006eb855 in ExecProcNodeInstr (node=0x2778580) at execProcnode.c:461\n#31 0x00000000006eb826 in ExecProcNodeFirst (node=0x2778580) at execProcnode.c:445\n#32 0x0000000000702a9b in ExecProcNode (node=0x2778580) at 
../../../src/include/executor/executor.h:247\n#33 0x0000000000702d12 in MultiExecPrivateHash (node=0x27783a8) at nodeHash.c:164\n#34 0x0000000000702c8c in MultiExecHash (node=0x27783a8) at nodeHash.c:114\n#35 0x00000000006eb8f7 in MultiExecProcNode (node=0x27783a8) at execProcnode.c:501\n#36 0x0000000000708e2a in ExecHashJoinImpl (pstate=0x2777210, parallel=false) at nodeHashjoin.c:290\n#37 0x00000000007094ca in ExecHashJoin (pstate=0x2777210) at nodeHashjoin.c:565\n#38 0x00000000006eb855 in ExecProcNodeInstr (node=0x2777210) at execProcnode.c:461\n#39 0x00000000006eb826 in ExecProcNodeFirst (node=0x2777210) at execProcnode.c:445\n#40 0x000000000071d9da in ExecProcNode (node=0x2777210) at ../../../src/include/executor/executor.h:247\n#41 0x000000000071db26 in ExecSort (pstate=0x2776ce0) at nodeSort.c:107\n#42 0x00000000006eb855 in ExecProcNodeInstr (node=0x2776ce0) at execProcnode.c:461\n#43 0x00000000006eb826 in ExecProcNodeFirst (node=0x2776ce0) at execProcnode.c:445\n#44 0x00000000006f74c5 in ExecProcNode (node=0x2776ce0) at ../../../src/include/executor/executor.h:247\n#45 0x00000000006f79bb in fetch_input_tuple (aggstate=0x2776ec8) at nodeAgg.c:406\n#46 0x00000000006f9d11 in agg_retrieve_direct (aggstate=0x2776ec8) at nodeAgg.c:1755\n#47 0x00000000006f98fa in ExecAgg (pstate=0x2776ec8) at nodeAgg.c:1570\n#48 0x00000000006eb855 in ExecProcNodeInstr (node=0x2776ec8) at execProcnode.c:461\n#49 0x00000000006eb826 in ExecProcNodeFirst (node=0x2776ec8) at execProcnode.c:445\n#50 0x00000000006e0129 in ExecProcNode (node=0x2776ec8) at ../../../src/include/executor/executor.h:247\n#51 0x00000000006e2abd in ExecutePlan (estate=0x27769d8, planstate=0x2776ec8, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xa66bef0, execute_once=true) at execMain.c:1723\n#52 0x00000000006e0710 in standard_ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at 
execMain.c:364\n#53 0x00007fd37e757633 in pgss_ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at pg_stat_statements.c:892\n#54 0x00007fd37e2d86ef in explain_ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at auto_explain.c:281\n#55 0x00000000006e0511 in ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:305\n#56 0x00000000008cfebe in PortalRunSelect (portal=0x21d4bf8, forward=true, count=0, dest=0xa66bef0) at pquery.c:932\n#57 0x00000000008cfb4c in PortalRun (portal=0x21d4bf8, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0xa66bef0, altdest=0xa66bef0, completionTag=0x7ffff5f39100 \"\") at pquery.c:773\n#58 0x00000000008c9b22 in exec_simple_query (query_string=0xe2fbd80 \"--BEGIN SQL\\nSELECT site_office AS site_gran,\\n\\tsite_location||'.'||sect_num AS bs,\\n\\tgsm_site_name_to_sect_name(site_name, sect_num, sect_name) AS sitename,\\n\\tperiod AS start_time,\\n\\t(maybe_div(sum(rrc_su\"...)\n at postgres.c:1145\n#59 0x00000000008cddf3 in PostgresMain (argc=1, argv=0x218fef8, dbname=0x215da48 \"ts\", username=0x218fed0 \"telsasoft\") at postgres.c:4182\n#60 0x000000000082a098 in BackendRun (port=0x21867d0) at postmaster.c:4358\n#61 0x0000000000829806 in BackendStartup (port=0x21867d0) at postmaster.c:4030\n#62 0x0000000000825cab in ServerLoop () at postmaster.c:1707\n#63 0x00000000008255dd in PostmasterMain (argc=3, argv=0x215b9a0) at postmaster.c:1380\n#64 0x000000000074ba30 in main (argc=3, argv=0x215b9a0) at main.c:228\n(gdb) \n\n#0 0x00000000004877df in slot_deform_tuple (slot=0xe204638, natts=3) at heaptuple.c:1465\n thisatt = 0x5265db0\n tuple = 0xe838d40\n tupleDesc = 0x5265d20\n values = 0xe204698\n isnull = 0xe2057e8\n tup = 0x7fd3eb1b8058\n hasnulls = false\n attnum = 1\n tp = 0x7fd3eb1b80bd <Address 0x7fd3eb1b80bd out of bounds>\n off = 583369983\n bp = 0x7fd3eb1b806f <Address 
0x7fd3eb1b806f out of bounds>\n slow = true\n#1 0x0000000000487f5b in slot_getsomeattrs (slot=0xe204638, attnum=3) at heaptuple.c:1673\n tuple = 0xe838d40\n attno = 3\n __func__ = \"slot_getsomeattrs\"\n#2 0x00000000006d5aae in ExecInterpExpr (state=0xe257948, econtext=0xe204578, isnull=0x7ffff5f38077) at execExprInterp.c:443\n op = 0xe257530\n resultslot = 0x0\n innerslot = 0x0\n outerslot = 0x0\n scanslot = 0xe204638\n dispatch_table = {0x6d5a0d, 0x6d5a32, 0x6d5a61, 0x6d5a93, 0x6d5ac5, 0x6d5b74, 0x6d5c23, 0x6d5cd2, 0x6d5da3, 0x6d5e74, 0x6d5f45, 0x6d5f7c, 0x6d6056, 0x6d6130, 0x6d620a, 0x6d627e, 0x6d6347, 0x6d638f, 0x6d640e, 0x6d64ec, 0x6d6523, 0x6d655a, 0x6d6568, 0x6d65e9, 0x6d6657, 0x6d6665, \n 0x6d66e6, 0x6d6754, 0x6d6790, 0x6d6821, 0x6d6856, 0x6d68b4, 0x6d6915, 0x6d6986, 0x6d69ca, 0x6d6a14, 0x6d6a4b, 0x6d6a82, 0x6d6acb, 0x6d6b3b, 0x6d6bab, 0x6d6bf4, 0x6d6c2b, 0x6d6c62, 0x6d6ca1, 0x6d6dc3, 0x6d6e2e, 0x6d704c, 0x6d7161, 0x6d726f, 0x6d7376, 0x6d73a6, 0x6d73d6, 0x6d7406, \n 0x6d7436, 0x6d746d, 0x6d749d, 0x6d7625, 0x6d7724, 0x6d7754, 0x6d778b, 0x6d77c2, 0x6d77f9, 0x6d7862, 0x6d7892, 0x6d78c2, 0x6d6d32, 0x6d7959, 0x6d7989, 0x6d78f2, 0x6d7929, 0x6d79b9, 0x6d79e9, 0x6d7a98, 0x6d7ac8, 0x6d7b77, 0x6d7bae, 0x6d7be5, 0x6d7c40, 0x6d7cf5, 0x6d7d9a, 0x6d7ead, \n 0x6d7f52, 0x6d80b4, 0x6d8261, 0x6d8298, 0x6d82cf}\n#3 0x00000000006ed221 in ExecEvalExprSwitchContext (state=0xe257948, econtext=0xe204578, isNull=0x7ffff5f38077) at ../../../src/include/executor/executor.h:313\n retDatum = 140737319764160\n oldContext = 0x27768c0\n#4 0x00000000006ed30e in ExecQual (state=0xe257948, econtext=0xe204578) at ../../../src/include/executor/executor.h:382\n ret = 3018996923\n isnull = false\n#5 0x00000000006ed5f4 in ExecScan (node=0x252edc8, accessMtd=0x71c54c <SeqNext>, recheckMtd=0x71c623 <SeqRecheck>) at execScan.c:190\n slot = 0xe204638\n econtext = 0xe204578\n qual = 0xe257948\n projInfo = 0xe257498\n#6 0x000000000071c670 in ExecSeqScan (pstate=0x252edc8) at nodeSeqscan.c:129\n---Type 
<return> to continue, or q <return> to quit---\n node = 0x252edc8\n#7 0x00000000006eb855 in ExecProcNodeInstr (node=0x252edc8) at execProcnode.c:461\n result = 0x702ace\n#8 0x00000000006f6555 in ExecProcNode (node=0x252edc8) at ../../../src/include/executor/executor.h:247\nNo locals.\n#9 0x00000000006f6a05 in ExecAppend (pstate=0xe20e7b0) at nodeAppend.c:294\n subnode = 0x252edc8\n result = 0xe258508\n node = 0xe20e7b0\n#10 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20e7b0) at execProcnode.c:461\n result = 0x20e257058\n#11 0x0000000000708a96 in ExecProcNode (node=0xe20e7b0) at ../../../src/include/executor/executor.h:247\nNo locals.\n#12 0x0000000000709b0d in ExecHashJoinOuterGetTuple (outerNode=0xe20e7b0, hjstate=0xe20e4d8, hashvalue=0x7ffff5f3827c) at nodeHashjoin.c:821\n hashtable = 0xe8387c8\n curbatch = 0\n slot = 0x0\n#13 0x0000000000708f8d in ExecHashJoinImpl (pstate=0xe20e4d8, parallel=false) at nodeHashjoin.c:355\n node = 0xe20e4d8\n outerNode = 0xe20e7b0\n hashNode = 0xe258720\n joinqual = 0x0\n otherqual = 0x0\n econtext = 0xe20e6f0\n hashtable = 0xe8387c8\n outerTupleSlot = 0xe257058\n hashvalue = 3018996923\n batchno = 0\n parallel_state = 0x0\n __func__ = \"ExecHashJoinImpl\"\n#14 0x00000000007094ca in ExecHashJoin (pstate=0xe20e4d8) at nodeHashjoin.c:565\nNo locals.\n#15 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20e4d8) at execProcnode.c:461\n result = 0x20c4\n#16 0x000000000071d9da in ExecProcNode (node=0xe20e4d8) at ../../../src/include/executor/executor.h:247\nNo locals.\n#17 0x000000000071db26 in ExecSort (pstate=0xe20e3c0) at nodeSort.c:107\n plannode = 0xa6309c0\n outerNode = 0xe20e4d8\n---Type <return> to continue, or q <return> to quit---\n tupDesc = 0xe2598a0\n node = 0xe20e3c0\n estate = 0x27769d8\n dir = ForwardScanDirection\n tuplesortstate = 0x2766998\n slot = 0xe25b8b8\n#18 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20e3c0) at execProcnode.c:461\n result = 0x4785d0\n#19 0x00000000006eb826 in ExecProcNodeFirst 
(node=0xe20e3c0) at execProcnode.c:445\nNo locals.\n#20 0x00000000006f74c5 in ExecProcNode (node=0xe20e3c0) at ../../../src/include/executor/executor.h:247\nNo locals.\n#21 0x00000000006f79bb in fetch_input_tuple (aggstate=0xe20df70) at nodeAgg.c:406\n slot = 0xe20e240\n#22 0x00000000006f9d11 in agg_retrieve_direct (aggstate=0xe20df70) at nodeAgg.c:1755\n node = 0xa6315d0\n econtext = 0xe20e300\n tmpcontext = 0xe20e1a8\n peragg = 0x252e198\n pergroups = 0xe7cdd40\n outerslot = 0x7ffff5f38520\n firstSlot = 0xe263e70\n result = 0x0\n hasGroupingSets = false\n numGroupingSets = 1\n currentSet = 222632768\n nextSetSize = 0\n numReset = 1\n i = 1\n#23 0x00000000006f98fa in ExecAgg (pstate=0xe20df70) at nodeAgg.c:1570\n node = 0xe20df70\n result = 0x0\n#24 0x00000000006eb855 in ExecProcNodeInstr (node=0xe20df70) at execProcnode.c:461\n result = 0x4785d0\n#25 0x00000000006eb826 in ExecProcNodeFirst (node=0xe20df70) at execProcnode.c:445\nNo locals.\n#26 0x0000000000708a96 in ExecProcNode (node=0xe20df70) at ../../../src/include/executor/executor.h:247\nNo locals.\n#27 0x0000000000709b0d in ExecHashJoinOuterGetTuple (outerNode=0xe20df70, hjstate=0x2778580, hashvalue=0x7ffff5f3865c) at nodeHashjoin.c:821\n#28 0x0000000000708f8d in ExecHashJoinImpl (pstate=0x2778580, parallel=false) at nodeHashjoin.c:355\n node = 0x2778580\n outerNode = 0xe20df70\n hashNode = 0xe7e2560\n joinqual = 0x0\n otherqual = 0x0\n econtext = 0x2778798\n hashtable = 0xe837df0\n outerTupleSlot = 0x4040\n hashvalue = 0\n batchno = 10926240\n parallel_state = 0x0\n __func__ = \"ExecHashJoinImpl\"\n#29 0x00000000007094ca in ExecHashJoin (pstate=0x2778580) at nodeHashjoin.c:565\nNo locals.\n#30 0x00000000006eb855 in ExecProcNodeInstr (node=0x2778580) at execProcnode.c:461\n result = 0x4785d0\n#31 0x00000000006eb826 in ExecProcNodeFirst (node=0x2778580) at execProcnode.c:445\nNo locals.\n#32 0x0000000000702a9b in ExecProcNode (node=0x2778580) at ../../../src/include/executor/executor.h:247\nNo locals.\n#33 
0x0000000000702d12 in MultiExecPrivateHash (node=0x27783a8) at nodeHash.c:164\n outerNode = 0x2778580\n hashkeys = 0xe7ebf78\n hashtable = 0xe837cd8\n slot = 0x80000000001\n econtext = 0x27784c0\n hashvalue = 0\n#34 0x0000000000702c8c in MultiExecHash (node=0x27783a8) at nodeHash.c:114\nNo locals.\n#35 0x00000000006eb8f7 in MultiExecProcNode (node=0x27783a8) at execProcnode.c:501\n result = 0x4\n __func__ = \"MultiExecProcNode\"\n#36 0x0000000000708e2a in ExecHashJoinImpl (pstate=0x2777210, parallel=false) at nodeHashjoin.c:290\n node = 0x2777210\n outerNode = 0x27774e8\n---Type <return> to continue, or q <return> to quit---\n hashNode = 0x27783a8\n joinqual = 0x0\n otherqual = 0x0\n econtext = 0x2777428\n hashtable = 0xe837cd8\n outerTupleSlot = 0x7ffff5f388ac\n hashvalue = 0\n batchno = 0\n parallel_state = 0x0\n __func__ = \"ExecHashJoinImpl\"\n#37 0x00000000007094ca in ExecHashJoin (pstate=0x2777210) at nodeHashjoin.c:565\nNo locals.\n#38 0x00000000006eb855 in ExecProcNodeInstr (node=0x2777210) at execProcnode.c:461\n result = 0x4785d0\n#39 0x00000000006eb826 in ExecProcNodeFirst (node=0x2777210) at execProcnode.c:445\nNo locals.\n#40 0x000000000071d9da in ExecProcNode (node=0x2777210) at ../../../src/include/executor/executor.h:247\nNo locals.\n#41 0x000000000071db26 in ExecSort (pstate=0x2776ce0) at nodeSort.c:107\n plannode = 0xa632c18\n outerNode = 0x2777210\n tupDesc = 0xe7e6980\n node = 0x2776ce0\n estate = 0x27769d8\n dir = ForwardScanDirection\n tuplesortstate = 0x24feec8\n slot = 0x219bfa0\n#42 0x00000000006eb855 in ExecProcNodeInstr (node=0x2776ce0) at execProcnode.c:461\n result = 0x4785d0\n#43 0x00000000006eb826 in ExecProcNodeFirst (node=0x2776ce0) at execProcnode.c:445\nNo locals.\n#44 0x00000000006f74c5 in ExecProcNode (node=0x2776ce0) at ../../../src/include/executor/executor.h:247\nNo locals.\n#45 0x00000000006f79bb in fetch_input_tuple (aggstate=0x2776ec8) at nodeAgg.c:406\n slot = 0x2776df8\n#46 0x00000000006f9d11 in agg_retrieve_direct 
(aggstate=0x2776ec8) at nodeAgg.c:1755\n node = 0xa633eb0\n econtext = 0x2777150\n tmpcontext = 0x2776bf0\n---Type <return> to continue, or q <return> to quit---\n peragg = 0xe824520\n pergroups = 0xe832e88\n outerslot = 0x7ffff5f38b40\n firstSlot = 0xe81f6b8\n result = 0x7ffff5f38b10\n hasGroupingSets = false\n numGroupingSets = 1\n currentSet = 243398904\n nextSetSize = 0\n numReset = 1\n i = 1\n#47 0x00000000006f98fa in ExecAgg (pstate=0x2776ec8) at nodeAgg.c:1570\n node = 0x2776ec8\n result = 0x0\n#48 0x00000000006eb855 in ExecProcNodeInstr (node=0x2776ec8) at execProcnode.c:461\n result = 0x4785d0\n#49 0x00000000006eb826 in ExecProcNodeFirst (node=0x2776ec8) at execProcnode.c:445\nNo locals.\n#50 0x00000000006e0129 in ExecProcNode (node=0x2776ec8) at ../../../src/include/executor/executor.h:247\nNo locals.\n#51 0x00000000006e2abd in ExecutePlan (estate=0x27769d8, planstate=0x2776ec8, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xa66bef0, execute_once=true) at execMain.c:1723\n slot = 0x150\n current_tuple_count = 0\n#52 0x00000000006e0710 in standard_ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:364\n estate = 0x27769d8\n operation = CMD_SELECT\n dest = 0xa66bef0\n sendTuples = true\n oldcontext = 0xdeb5e20\n __func__ = \"standard_ExecutorRun\"\n#53 0x00007fd37e757633 in pgss_ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at pg_stat_statements.c:892\n save_exception_stack = 0x7ffff5f38dd0\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {233529144, 9176610699338068304, 4687312, 140737319770032, 0, 0, 9176610699377914192, 9151536278965596496}, __mask_was_saved = 0, __saved_mask = {__val = {4294967302, 243497128, 4295032832, 140737319767408, 7255973, 140737319767408, 68726688595, \n 41380312, 174276272, 174278840, 41381576, 0, 41381576, 16, 174278840, 
140737319767696}}}}\n#54 0x00007fd37e2d86ef in explain_ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at auto_explain.c:281\n save_exception_stack = 0x7ffff5f38fa0\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {233529144, 9176610699451314512, 4687312, 140737319770032, 0, 0, 9176610699340165456, 9151535941362266448}, __mask_was_saved = 0, __saved_mask = {__val = {10926707, 24, 41371824, 41371824, 238009728, 41372168, 8600905475, 35241888, 336, 243497664, \n---Type <return> to continue, or q <return> to quit---\n 41380032, 243498000, 233528864, 336, 174505712, 140737319767760}}}}\n#55 0x00000000006e0511 in ExecutorRun (queryDesc=0xdeb5f38, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:305\nNo locals.\n#56 0x00000000008cfebe in PortalRunSelect (portal=0x21d4bf8, forward=true, count=0, dest=0xa66bef0) at pquery.c:932\n queryDesc = 0xdeb5f38\n direction = ForwardScanDirection\n nprocessed = 233528864\n __func__ = \"PortalRunSelect\"\n#57 0x00000000008cfb4c in PortalRun (portal=0x21d4bf8, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0xa66bef0, altdest=0xa66bef0, completionTag=0x7ffff5f39100 \"\") at pquery.c:773\n save_exception_stack = 0x7ffff5f39240\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {174505712, 9176610698715214160, 4687312, 140737319770032, 0, 0, 9176610699432440144, -9176589804060397232}, __mask_was_saved = 0, __saved_mask = {__val = {233529080, 2405784, 21306251184, 128, 13, 616, 174505656, 140737319768192, 10970883, 480, 112, \n 174505712, 34999312, 174505824, 233528864, 112}}}}\n result = false\n nprocessed = 4687312\n saveTopTransactionResourceOwner = 0x2197a20\n saveTopTransactionContext = 0x27748b0\n saveActivePortal = 0x0\n saveResourceOwner = 0x2197a20\n savePortalContext = 0x0\n saveMemoryContext = 0x27748b0\n __func__ = \"PortalRun\"\n#58 0x00000000008c9b22 in exec_simple_query (query_string=0xe2fbd80 \"--BEGIN SQL\\nSELECT 
site_office AS site_gran,\\n\\tsite_location||'.'||sect_num AS bs,\\n\\tgsm_site_name_to_sect_name(site_name, sect_num, sect_name) AS sitename,\\n\\tperiod AS start_time,\\n\\t(maybe_div(sum(rrc_su\"...)\n at postgres.c:1145\n parsetree = 0xd4f1c38\n portal = 0x21d4bf8\n snapshot_set = true\n commandTag = 0xc5ed4d \"SELECT\"\n completionTag = \"\\000\\000\\000\\000\\002\\000\\000\\000\\n\\000\\000\\000\\000\\000\\000\\000P\\221\\363\\365\\377\\177\\000\\000\\017s\\245\\000\\000\\000\\000\\000\\036\\020\\000\\000\\000\\000\\000\\000\\000\\f\\026\\002\\030\\020\\000\\000\\200\\275/\\016\\000\\000\\000\\000(\\r\\026\\002\\006\\000\\000\"\n querytree_list = 0xe230d58\n plantree_list = 0xa66beb8\n receiver = 0xa66bef0\n format = 0\n dest = DestRemote\n oldcontext = 0x27748b0\n parsetree_list = 0xd4f1c98\n parsetree_item = 0xd4f1c70\n save_log_statement_stats = false\n was_logged = false\n---Type <return> to continue, or q <return> to quit---\n use_implicit_block = false\n msec_str = \"\\220\\221\\363\\365\\377\\177\\000\\000&C\\245\\000\\000\\000\\000\\000\\006\\000\\000\\000\\030\\020\\000\\000\\200\\275/\\016\\000\\000\\000\"\n __func__ = \"exec_simple_query\"\n#59 0x00000000008cddf3 in PostgresMain (argc=1, argv=0x218fef8, dbname=0x215da48 \"ts\", username=0x218fed0 \"telsasoft\") at postgres.c:4182\n query_string = 0xe2fbd80 \"--BEGIN SQL\\nSELECT site_office AS site_gran,\\n\\tsite_location||'.'||sect_num AS bs,\\n\\tgsm_site_name_to_sect_name(site_name, sect_num, sect_name) AS sitename,\\n\\tperiod AS start_time,\\n\\t(maybe_div(sum(rrc_su\"...\n firstchar = 81\n input_message = {data = 0xe2fbd80 \"--BEGIN SQL\\nSELECT site_office AS site_gran,\\n\\tsite_location||'.'||sect_num AS bs,\\n\\tgsm_site_name_to_sect_name(site_name, sect_num, sect_name) AS sitename,\\n\\tperiod AS start_time,\\n\\t(maybe_div(sum(rrc_su\"..., len = 4121, maxlen = 8192, \n cursor = 4121}\n local_sigjmp_buf = {{__jmpbuf = {35192568, 9176610698734088528, 4687312, 140737319770032, 0, 0, 
9176610698788614480, -9176589805133483696}, __mask_was_saved = 1, __saved_mask = {__val = {0, 0, 4494597, 0, 0, 0, 10923965, 1024, 35138464, 140737319768848, 10928134, 35138464, 34999312, \n 34999312, 35138440, 8}}}}\n send_ready_for_query = false\n disable_idle_in_transaction_timeout = false\n __func__ = \"PostgresMain\"\n#60 0x000000000082a098 in BackendRun (port=0x21867d0) at postmaster.c:4358\n av = 0x218fef8\n maxac = 2\n ac = 1\n secs = 618513625\n usecs = 637903\n i = 1\n __func__ = \"BackendRun\"\n#61 0x0000000000829806 in BackendStartup (port=0x21867d0) at postmaster.c:4030\n bn = 0x2185ef0\n pid = 0\n __func__ = \"BackendStartup\"\n#62 0x0000000000825cab in ServerLoop () at postmaster.c:1707\n port = 0x21867d0\n i = 0\n rmask = {fds_bits = {8, 0 <repeats 15 times>}}\n selres = 1\n now = 1565198425\n readmask = {fds_bits = {56, 0 <repeats 15 times>}}\n nSockets = 6\n last_lockfile_recheck_time = 1565198368\n last_touch_time = 1565197325\n __func__ = \"ServerLoop\"\n#63 0x00000000008255dd in PostmasterMain (argc=3, argv=0x215b9a0) at postmaster.c:1380\n opt = -1\n status = 0\n---Type <return> to continue, or q <return> to quit---\n userDoption = 0x217d500 \"/var/lib/pgsql/11/data\"\n listen_addr_saved = true\n i = 64\n output_config_variable = 0x0\n __func__ = \"PostmasterMain\"\n#64 0x000000000074ba30 in main (argc=3, argv=0x215b9a0) at main.c:228\n do_check_root = true\n\n\n",
"msg_date": "Wed, 7 Aug 2019 12:30:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "crash 11.5~"
},
{
"msg_contents": "I checked this still happens with max_parallel_workers_per_gather=0.\nNow, I just reproduced using SELECT * FROM that table:\n\n(gdb) p thisatt->attrelid\n$4 = 2015128626\n\nts=# SELECT 2015128626::regclass;\nregclass | child.huawei_umts_ucell_201908\n\n(gdb) p thisatt->attnum \n$1 = 2\n(gdb) p attnum # For earlier crash this is 1....\n$2 = 1\n\nts=# \\dt+ child.huawei_umts_ucell_201908\n child | huawei_umts_ucell_201908 | table | telsasoft | 612 MB | \n\n\\d+ child.huawei_umts_ucell_201908\n bsc6900ucell | text | | not null | | extended | | \n ne_name | text | | not null | | extended | | \n[...]\nPartition of: huawei_umts_ucell_metrics FOR VALUES FROM ('2019-08-01 00:00:00') TO ('2019-09-01 00:00:00')\nPartition constraint: ((start_time IS NOT NULL) AND (start_time >= '2019-08-01 00:00:00'::timestamp without time zone) AND (start_time < '2019-09-01 00:00:00'::timestamp without time zone))\nIndexes:\n \"huawei_umts_ucell_201908_unique_idx\" UNIQUE, btree (start_time, device_id, bsc6900ucell, ne_name, interval_seconds)\n \"huawei_umts_ucell_201908_idx\" brin (start_time) WITH (autosummarize='true')\n \"huawei_umts_ucell_201908_site_idx\" btree (site_id)\nCheck constraints:\n \"huawei_umts_ucell_201908_start_time_check\" CHECK (start_time >= '2019-08-01 00:00:00'::timestamp without time zone AND start_time < '2019-09-01 00:00:00'::timestamp without time zone)\nStatistics objects:\n \"child\".\"huawei_umts_ucell_201908\" (ndistinct) ON bsc6900ucell, ne_name, start_time, interval_seconds, device_id FROM child.huawei_umts_ucell_201908\nOptions: autovacuum_analyze_threshold=2, autovacuum_analyze_scale_factor=0.005\n\nts=# SELECT COUNT(1) FROM pg_attribute WHERE attrelid='child.huawei_umts_ucell_201908'::regclass;\ncount | 560\n\nts=# SELECT * FROM pg_attribute WHERE attrelid='child.huawei_umts_ucell_201908'::regclass AND attnum>=0 ORDER BY attnum;\n\nattrelid | 2015128626\nattname | bsc6900ucell\natttypid | 25\nattstattarget | -1\nattlen | -1\nattnum | 
1\nattndims | 0\nattcacheoff | -1\natttypmod | -1\nattbyval | f\nattstorage | x\nattalign | i\nattnotnull | t\natthasdef | f\natthasmissing | f\nattidentity | \nattisdropped | f\nattislocal | f\nattinhcount | 1\nattcollation | 100\nattacl | \nattoptions | \nattfdwoptions | \nattmissingval | \n\nThe only interesting thing about the parent/siblings is that the previous month\nwas partitioned with \"daily\" granularity. I adjusted our threshold for that,\nso that August is partitioned monthly:\n\n [...]\n child.huawei_umts_ucell_20190730 FOR VALUES FROM ('2019-07-30 00:00:00') TO ('2019-07-31 00:00:00'),\n child.huawei_umts_ucell_20190731 FOR VALUES FROM ('2019-07-31 00:00:00') TO ('2019-08-01 00:00:00'),\n child.huawei_umts_ucell_201908 FOR VALUES FROM ('2019-08-01 00:00:00') TO ('2019-09-01 00:00:00')\n\nProgram terminated with signal 11, Segmentation fault.\n#0 0x00000000004877df in slot_deform_tuple (slot=0x22b5860, natts=554) at heaptuple.c:1465\n1465 off = att_align_pointer(off, thisatt->attalign, -1,\n\n(gdb) bt\n#0 0x00000000004877df in slot_deform_tuple (slot=0x22b5860, natts=554) at heaptuple.c:1465\n#1 0x0000000000487e4b in slot_getallattrs (slot=0x22b5860) at heaptuple.c:1626\n#2 0x000000000048aab9 in printtup (slot=0x22b5860, self=0x2161b28) at printtup.c:392\n#3 0x00000000008d0346 in RunFromStore (portal=0x21d5638, direction=ForwardScanDirection, count=0, dest=0x2161b28) at pquery.c:1106\n#4 0x00000000008cfe88 in PortalRunSelect (portal=0x21d5638, forward=true, count=0, dest=0x2161b28) at pquery.c:928\n#5 0x00000000008cfb4c in PortalRun (portal=0x21d5638, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2161b28, altdest=0x2161b28, completionTag=0x7ffff5f39100 \"\") at pquery.c:773\n#6 0x00000000008c9b22 in exec_simple_query (query_string=0x2160d28 \"FETCH FORWARD 999 FROM _psql_cursor\") at postgres.c:1145\n#7 0x00000000008cddf3 in PostgresMain (argc=1, argv=0x218bfb0, dbname=0x218beb0 \"ts\", username=0x218be90 \"pryzbyj\") at 
postgres.c:4182\n#8 0x000000000082a098 in BackendRun (port=0x2181ac0) at postmaster.c:4358\n#9 0x0000000000829806 in BackendStartup (port=0x2181ac0) at postmaster.c:4030\n#10 0x0000000000825cab in ServerLoop () at postmaster.c:1707\n#11 0x00000000008255dd in PostmasterMain (argc=3, argv=0x215b9a0) at postmaster.c:1380\n#12 0x000000000074ba30 in main (argc=3, argv=0x215b9a0) at main.c:228\n\n#0 0x00000000004877df in slot_deform_tuple (slot=0x22b5860, natts=554) at heaptuple.c:1465\n thisatt = 0x22dec50\n tuple = 0x22b58a0\n tupleDesc = 0x22debc0\n values = 0x22b58c0\n isnull = 0x22b6a10\n tup = 0x263f9a0\n hasnulls = false\n attnum = 1\n tp = 0x263fa05 \"\\374\\023\\026\\213^s#\\347(\\235=\\326\\321\\067\\032\\245\\321B\\026}܋FS\\375\\244\\003\\065\\336\\277;\\252O\\006\\065\\320\\353\\211}F\\237\\373B\\243\\357J~\\270\\\"\\230ƣ\\024xǍ\\334\\377\\202\\277S\\031\\375\\351\\003\\220{\\004\"\n off = 583369983\n bp = 0x263f9b7 \"\\270\\027$U}\\232\\246\\235\\004\\255\\331\\033\\006Qp\\376E\\316h\\376\\363\\247\\366Նgy7\\311E\\224~F\\274\\023ϋ%\\216,\\221\\331@\\024\\363\\233\\070\\275\\004\\254L\\217t\\262X\\227\\352\\346\\347\\371\\070\\321ш\\221\\350fc\\316\\r\\356\\351h\\275\\213\\230\\360\\203\\374\\023\\026\\213^s#\\347(\\235=\\326\\321\\067\\032\\245\\321B\\026}܋FS\\375\\244\\003\\065\\336\\277;\\252O\\006\\065\\320\\353\\211}F\\237\\373B\\243\\357J~\\270\\\"\\230ƣ\\024xǍ\\334\\377\\202\\277S\\031\\375\\351\\003\\220{\\004\"\n slow = true\n#1 0x0000000000487e4b in slot_getallattrs (slot=0x22b5860) at heaptuple.c:1626\n tdesc_natts = 554\n attnum = 554\n tuple = 0x22b58a0\n __func__ = \"slot_getallattrs\"\n\n#2 0x000000000048aab9 in printtup (slot=0x22b5860, self=0x2161b28) at printtup.c:392\n typeinfo = 0x22debc0\n myState = 0x2161b28\n oldcontext = 0xa8e29a\n buf = 0x2161b78\n natts = 554\n i = 0\n---Type <return> to continue, or q <return> to quit---\n#3 0x00000000008d0346 in RunFromStore (portal=0x21d5638, direction=ForwardScanDirection, count=0, 
dest=0x2161b28) at pquery.c:1106\n oldcontext = 0x21931a0\n ok = true\n forward = true\n current_tuple_count = 879\n slot = 0x22b5860\n#4 0x00000000008cfe88 in PortalRunSelect (portal=0x21d5638, forward=true, count=0, dest=0x2161b28) at pquery.c:928\n queryDesc = 0x0\n direction = ForwardScanDirection\n nprocessed = 35205816\n __func__ = \"PortalRunSelect\"\n#5 0x00000000008cfb4c in PortalRun (portal=0x21d5638, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2161b28, altdest=0x2161b28, completionTag=0x7ffff5f39100 \"\") at pquery.c:773\n save_exception_stack = 0x7ffff5f39240\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {35003176, 9176610698715214160, 4687312, 140737319770032, 0, 0, 9176610699432440144, -9176589804060397232}, __mask_was_saved = 0, __saved_mask = {__val = {\n 36636288, 4352, 35002176, 1112, 1176, 0, 35003120, 140737319768192, 10970883, 35488536, 112, 35003176, 34999312, 35003288, 35205536, 112}}}}\n result = false\n nprocessed = 4687312\n saveTopTransactionResourceOwner = 0x2187b88\n saveTopTransactionContext = 0x21f5d90\n saveActivePortal = 0x0\n saveResourceOwner = 0x2187b88\n savePortalContext = 0x0\n saveMemoryContext = 0x21f5d90\n __func__ = \"PortalRun\"\n#6 0x00000000008c9b22 in exec_simple_query (query_string=0x2160d28 \"FETCH FORWARD 999 FROM _psql_cursor\") at postgres.c:1145\n[...]\n\n\n\n\n",
"msg_date": "Wed, 7 Aug 2019 13:27:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: crash 11.5~ (and 11.4)"
},
{
"msg_contents": "Just found this, although I'm not sure what to do about it. If it's corrupt\ntable data, I can restore from backup.\n\nts=# VACUUM FREEZE VERBOSE child.huawei_umts_ucell_201908;\nINFO: 00000: aggressively vacuuming \"child.huawei_umts_ucell_201908\"\nLOCATION: lazy_scan_heap, vacuumlazy.c:502\nERROR: XX001: found xmin 73850277 from before relfrozenxid 111408920\nLOCATION: heap_prepare_freeze_tuple, heapam.c:6853\n\nI confirmed I updated to 11.4 immediately on its release:\n[pryzbyj@database ~]$ ls -ltc /usr/pgsql-11/bin/postgres\n-rwxr-xr-x. 1 root root 7291736 Jun 20 07:09 /usr/pgsql-11/bin/postgres\n\nThat table would've been created shortly after midnight on Aug 1 when we loaded\nfirst data for the month. So it was created and processed only on pg11.4,\nalthough the parent has probably been around since pg_upgrade last October.\n\nHere's usable looking bt\n\n#0 heap_prepare_freeze_tuple (tuple=0x7fd3d8fa0058, relfrozenxid=111408920, \n relminmxid=1846, cutoff_xid=111658731, cutoff_multi=1846, frz=0x223e930, \n totally_frozen_p=0x7ffff5f3848f) at heapam.c:6832\n changed = 211\n xmax_already_frozen = 127\n xmin_frozen = false\n freeze_xmax = false\n xid = 0\n __func__ = \"heap_prepare_freeze_tuple\"\n\n#1 0x00000000006c8e6e in lazy_scan_heap (onerel=0x7fd37e4f8fa0, options=13, \n vacrelstats=0x223e790, Irel=0x223e8d8, nindexes=3, aggressive=true)\n at vacuumlazy.c:1151\n tuple_totally_frozen = false\n itemid = 0x7fd3d8f9e918\n buf = 168313\n page = 0x7fd3d8f9e900 \"\\020:\"\n offnum = 1\n maxoff = 3\n hastup = true\n nfrozen = 0\n freespace = 35907664\n all_frozen = true\n tupgone = false\n prev_dead_count = 0\n all_visible_according_to_vm = true\n all_visible = true\n has_dead_tuples = false\n visibility_cutoff_xid = 73850277\n nblocks = 75494\n blkno = 34915\n tuple = {t_len = 2212, t_self = {ip_blkid = {bi_hi = 0, \n bi_lo = 34915}, ip_posid = 1}, t_tableOid = 2015128626, \n t_data = 0x7fd3d8fa0058}\n relname = 0x7fd37e4f6ac8 
\"huawei_umts_ucell_201908\"\n relfrozenxid = 111408920\n relminmxid = 1846\n empty_pages = 0\n vacuumed_pages = 0\n next_fsm_block_to_vacuum = 0\n num_tuples = 1\n live_tuples = 1\n tups_vacuumed = 0\n nkeep = 0\n nunused = 0\n indstats = 0x223e850\n i = 32512\n ru0 = {tv = {tv_sec = 1565204981, tv_usec = 609217}, ru = {ru_utime = {\n tv_sec = 13, tv_usec = 865892}, ru_stime = {tv_sec = 1, \n tv_usec = 960701}, ru_maxrss = 136988, ru_ixrss = 0, \n ru_idrss = 0, ru_isrss = 0, ru_minflt = 48841, ru_majflt = 1, \n ru_nswap = 0, ru_inblock = 196152, ru_oublock = 292656, \n ru_msgsnd = 0, ru_msgrcv = 0, ru_nsignals = 0, ru_nvcsw = 849, \n ru_nivcsw = 6002}}\n vmbuffer = 4267\n next_unskippable_block = 34916\n skipping_blocks = false\n frozen = 0x223e930\n buf = {data = 0x0, len = -168590068, maxlen = 32767, \n cursor = -168589904}\n initprog_index = {0, 1, 5}\n initprog_val = {1, 75494, 21968754}\n __func__ = \"lazy_scan_heap\"\n#2 0x00000000006c77c7 in lazy_vacuum_rel (onerel=0x7fd37e4f8fa0, options=13, \n params=0x7ffff5f38a70, bstrategy=0x22b8cf8) at vacuumlazy.c:265\n vacrelstats = 0x223e790\n Irel = 0x223e8d8\n nindexes = 3\n ru0 = {tv = {tv_sec = 140733193388036, tv_usec = 281474985219646}, \n ru = {ru_utime = {tv_sec = 12933152, tv_usec = 35123496}, \n ru_stime = {tv_sec = 140737319765996, tv_usec = 4687312}, \n ru_maxrss = 0, ru_ixrss = -1887226286461968708, \n ru_idrss = 35351240, ru_isrss = 35249392, \n ru_minflt = 140737319765776, ru_majflt = 35249392, ru_nswap = 20, \n ru_inblock = 0, ru_oublock = 140737319765936, \n ru_msgsnd = 10990471, ru_msgrcv = 140546333970336, \n ru_nsignals = 35249264, ru_nvcsw = 4, ru_nivcsw = 4687312}}\n starttime = 0\n secs = 72057594037927936\n usecs = 2015128626\n read_rate = 6.9533474784178538e-310\n write_rate = 6.9439115263673584e-310\n aggressive = true\n scanned_all_unfrozen = false\n xidFullScanLimit = 111658735\n mxactFullScanLimit = 1846\n new_rel_pages = 0\n new_rel_allvisible = 36424984\n new_live_tuples = 
6.9533474784194348e-310\n new_frozen_xid = 9080197\n new_min_multi = 0\n __func__ = \"lazy_vacuum_rel\"\n#3 0x00000000006c72b3 in vacuum_rel (relid=2015128626, relation=0x2161820, \n options=13, params=0x7ffff5f38a70) at vacuum.c:1557\n lmode = 4\n onerel = 0x7fd37e4f8fa0\n onerelid = {relId = 2015128626, dbId = 16564}\n toast_relid = 2015128673\n save_userid = 18712\n save_sec_context = 0\n save_nestlevel = 2\n rel_lock = true\n __func__ = \"vacuum_rel\"\n#4 0x00000000006c5939 in vacuum (options=13, relations=0x22b8e70, \n params=0x7ffff5f38a70, bstrategy=0x22b8cf8, isTopLevel=true)\n at vacuum.c:340\n vrel = 0x22b8e10\n cur = 0x22b8e48\n save_exception_stack = 0x7ffff5f38c20\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {35002584, 9176610699604406608, \n 4687312, 140737319770032, 0, 0, 9176610699520520528, \n -9176587885370396336}, __mask_was_saved = 0, __saved_mask = {\n __val = {35906192, 140737319766400, 10964897, 6853, 35905936, \n 208, 35890568, 21474836680, 35890776, 140737319766560, \n 10927612, 208, 35890288, 35890288, 35890504, 35890544}}}}\n in_vacuum = true\n stmttype = 0xc03841 \"VACUUM\"\n in_outer_xact = false\n use_own_xacts = true\n __func__ = \"vacuum\"\n#5 0x00000000006c54e0 in ExecVacuum (vacstmt=0x2161910, isTopLevel=true)\n at vacuum.c:141\n params = {freeze_min_age = 0, freeze_table_age = 0, \n multixact_freeze_min_age = 0, multixact_freeze_table_age = 0, \n is_wraparound = false, log_min_duration = -1}\n __func__ = \"ExecVacuum\"\n\n\nerrfinish one seems to be not useful?\n\n(gdb) bt\n#0 errfinish (dummy=0) at elog.c:415\n#1 0x000000000054aefd in ShowTransactionStateRec (\n str=0xafed12 \"StartTransaction\", s=0xf9efa0) at xact.c:5157\n#2 0x000000000054ad5c in ShowTransactionState (\n str=0xafed12 \"StartTransaction\") at xact.c:5130\n#3 0x00000000005470c8 in StartTransaction () at xact.c:1961\n#4 0x0000000000547db9 in StartTransactionCommand () at xact.c:2734\n#5 0x00000000008cbe65 in start_xact_command () at 
postgres.c:2500\n#6 0x00000000008c9772 in exec_simple_query (\n query_string=0x2160d28 \"VACUUM FREEZE VERBOSE child.huawei_umts_ucell_201908;\") at postgres.c:948\n#7 0x00000000008cddf3 in PostgresMain (argc=1, argv=0x218c768, \n dbname=0x218c5d0 \"ts\", username=0x218c5b0 \"pryzbyj\") at postgres.c:4182\n#8 0x000000000082a098 in BackendRun (port=0x21818d0) at postmaster.c:4358\n#9 0x0000000000829806 in BackendStartup (port=0x21818d0) at postmaster.c:4030\n#10 0x0000000000825cab in ServerLoop () at postmaster.c:1707\n#11 0x00000000008255dd in PostmasterMain (argc=3, argv=0x215b9a0)\n at postmaster.c:1380\n#12 0x000000000074ba30 in main (argc=3, argv=0x215b9a0) at main.c:228\n\n\n\n#0 errfinish (dummy=0) at elog.c:415\n edata = 0xfe7b80\n elevel = 0\n oldcontext = 0x215df90\n econtext = 0xa4d11a\n __func__ = \"errfinish\"\n#1 0x000000000054aefd in ShowTransactionStateRec (\n str=0xafed12 \"StartTransaction\", s=0xf9efa0) at xact.c:5157\n buf = {data = 0x22462d8 \"\", len = 0, maxlen = 1024, cursor = 0}\n __func__ = \"ShowTransactionStateRec\"\n#2 0x000000000054ad5c in ShowTransactionState (\n str=0xafed12 \"StartTransaction\") at xact.c:5130\nNo locals.\n#3 0x00000000005470c8 in StartTransaction () at xact.c:1961\n s = 0xf9efa0\n vxid = {backendId = 5, localTransactionId = 16}\n#4 0x0000000000547db9 in StartTransactionCommand () at xact.c:2734\n s = 0xf9efa0\n __func__ = \"StartTransactionCommand\"\n#5 0x00000000008cbe65 in start_xact_command () at postgres.c:2500\nNo locals.\n\nI looked at this briefly:\n\nts=# SELECT * FROM page_header(get_raw_page('child.huawei_umts_ucell_201908', 0));\nlsn | 3A49/66F44310\nchecksum | 0\nflags | 4\nlower | 36\nupper | 1432\nspecial | 8192\npagesize | 8192\nversion | 4\nprune_xid | 0\n\nIn case someone wants me to look at bits and pieces, I saved a copy of:\nrelfilenode | 2015128626\n\n\n",
"msg_date": "Wed, 7 Aug 2019 14:29:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: crash 11.5~ (and 11.4)"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-07 14:29:28 -0500, Justin Pryzby wrote:\n> Just found this, although I'm not sure what to do about it. If it's corrupt\n> table data, I can restore from backup.\n> \n> ts=# VACUUM FREEZE VERBOSE child.huawei_umts_ucell_201908;\n> INFO: 00000: aggressively vacuuming \"child.huawei_umts_ucell_201908\"\n> LOCATION: lazy_scan_heap, vacuumlazy.c:502\n> ERROR: XX001: found xmin 73850277 from before relfrozenxid 111408920\n> LOCATION: heap_prepare_freeze_tuple, heapam.c:6853\n\nUgh :(\n\nWe really need to add a error context to vacuumlazy that shows which\nblock is being processed.\n\n\n> I confirmed I updated to 11.4 immediately on its release:\n> [pryzbyj@database ~]$ ls -ltc /usr/pgsql-11/bin/postgres\n> -rwxr-xr-x. 1 root root 7291736 Jun 20 07:09 /usr/pgsql-11/bin/postgres\n\nAny chance you might not have restarted postgres at that time?\n\n\n> That table would've been created shortly after midnight on Aug 1 when we loaded\n> first data for the month. So it was created and processed only on pg11.4,\n> although the parent has probably been around since pg_upgrade last October.\n\nI don't think there's a way the parent could play a role.\n\n\n> Here's usable looking bt\n> \n> #0 heap_prepare_freeze_tuple (tuple=0x7fd3d8fa0058, relfrozenxid=111408920, \n> relminmxid=1846, cutoff_xid=111658731, cutoff_multi=1846, frz=0x223e930, \n> totally_frozen_p=0x7ffff5f3848f) at heapam.c:6832\n> changed = 211\n> xmax_already_frozen = 127\n> xmin_frozen = false\n> freeze_xmax = false\n> xid = 0\n> __func__ = \"heap_prepare_freeze_tuple\"\n\nHm. That's a backtrace to what precisely? Are you sure it's the the\nerroring call to heap_prepare_freeze_tuple? 
Because I think that's at\nthe function's start - which is why some of the stack variables have\nbogus contents.\n\nI think you'd need to set the breakpoint to heapam.c:6850 to be sure to\ncatch the error (while the error message heapam.c:6853, that's because\nthe line macro in some compilers expands to the end of the statement).\n\n\n> errfinish one seems to be not useful?\n> \n> (gdb) bt\n> #0 errfinish (dummy=0) at elog.c:415\n> #1 0x000000000054aefd in ShowTransactionStateRec (\n> str=0xafed12 \"StartTransaction\", s=0xf9efa0) at xact.c:5157\n> #2 0x000000000054ad5c in ShowTransactionState (\n> str=0xafed12 \"StartTransaction\") at xact.c:5130\n> #3 0x00000000005470c8 in StartTransaction () at xact.c:1961\n> #4 0x0000000000547db9 in StartTransactionCommand () at xact.c:2734\n> #5 0x00000000008cbe65 in start_xact_command () at postgres.c:2500\n> #6 0x00000000008c9772 in exec_simple_query (\n> query_string=0x2160d28 \"VACUUM FREEZE VERBOSE child.huawei_umts_ucell_201908;\") at postgres.c:948\n> #7 0x00000000008cddf3 in PostgresMain (argc=1, argv=0x218c768, \n> dbname=0x218c5d0 \"ts\", username=0x218c5b0 \"pryzbyj\") at postgres.c:4182\n> #8 0x000000000082a098 in BackendRun (port=0x21818d0) at postmaster.c:4358\n> #9 0x0000000000829806 in BackendStartup (port=0x21818d0) at postmaster.c:4030\n> #10 0x0000000000825cab in ServerLoop () at postmaster.c:1707\n> #11 0x00000000008255dd in PostmasterMain (argc=3, argv=0x215b9a0)\n> at postmaster.c:1380\n> #12 0x000000000074ba30 in main (argc=3, argv=0x215b9a0) at main.c:228\n\nThat's probably just the errfinish for a debug message. Might be easier\nto set a breakpoint to pg_re_throw().\n\n\n> I looked at this briefly:\n> \n> ts=# SELECT * FROM page_header(get_raw_page('child.huawei_umts_ucell_201908', 0));\n> lsn | 3A49/66F44310\n> checksum | 0\n> flags | 4\n> lower | 36\n> upper | 1432\n> special | 8192\n> pagesize | 8192\n> version | 4\n> prune_xid | 0\n\nThat's not necessarily the target block though, right? 
It'd be useful\nto get this for the block with the corruption, and perhaps one\nbefore/after. If the backtrace I commented on at the top is correct,\nthe relevant tuple was at \"blkno = 34915\" (visible a frame or two\nup).\n\nAlso heap_page_items() for those pages would be useful - you might want\nto not include the t_data column, as it has raw tuple data.\n\n\nCould you show\n* pg_controldata\n* SELECT oid, datname, datfrozenxid, datminmxid FROM pg_database;\n\nThat should be easy to redact if necessary. It'd be useful to know which\ndatabase the corrupted table is in.\n\nIt'd also be helpful to get something roughly like\nSELECT\n oid, oid::regclass,\n relkind, reltoastrelid,\n relfrozenxid, age(relfrozenxid),\n relminmxid, mxid_age(relminmxid),\n relpages,\n (SELECT txid_current())\nFROM pg_class\nWHERE relfrozenxid <> 0;\n\nBut I'm not sure if including the tablenames would be a problematic for\nyour case. Possibly a bit hard to infer \"usage\" correlations without,\nbut probably still worthwhile.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:51:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: crash 11.5~ (and 11.4)"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 04:51:54PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-08-07 14:29:28 -0500, Justin Pryzby wrote:\n> > Just found this, although I'm not sure what to do about it. If it's corrupt\n> > table data, I can restore from backup.\n\nIn the meantime, I've renamed+uninherited the table and restored from backup,\nwith the broken table preserved.\n\nHowever, the previous days' backup did this, so I used Monday's backup.\n\n[pryzbyj@database ~]$ time sudo -u postgres pg_restore /srv/cdrperfbackup/ts/2019-08-06/curtables/ -t huawei_umts_ucell_201908 --verbose -d ts --host /tmp\npg_restore: connecting to database for restore\npg_restore: creating TABLE \"child.huawei_umts_ucell_201908\"\npg_restore: INFO: partition constraint for table \"huawei_umts_ucell_201908\" is implied by existing constraints\npg_restore: processing data for table \"child.huawei_umts_ucell_201908\"\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 62710; 0 2015128626 TABLE DATA huawei_umts_ucell_201908 telsasoft\npg_restore: [archiver (db)] COPY failed for table \"huawei_umts_ucell_201908\": ERROR: invalid byte sequence for encoding \"UTF8\": 0xe7 0x28 0x9d\nCONTEXT: COPY huawei_umts_ucell_201908, line 104746\n\nAlso, I found core with this BT from my own manual invocation of ANALYZE.\n\n(gdb) bt\n#0 0x00000039674324f5 in raise () from /lib64/libc.so.6\n#1 0x0000003967433cd5 in abort () from /lib64/libc.so.6\n#2 0x0000000000a3c4b2 in ExceptionalCondition (conditionName=0xaad9e7 \"!(j > attnum)\", errorType=0xaad84c \"FailedAssertion\", fileName=0xaad840 \"heaptuple.c\", lineNumber=582) at assert.c:54\n#3 0x00000000004856a7 in nocachegetattr (tuple=0x1c116b08, attnum=1, tupleDesc=0x23332b0) at heaptuple.c:582\n#4 0x000000000062981d in std_fetch_func (stats=0x22700c8, rownum=54991, isNull=0x7ffff5f3844f) at analyze.c:1718\n#5 0x000000000062aa99 in compute_scalar_stats (stats=0x22700c8, fetchfunc=0x62942b 
<std_fetch_func>, samplerows=120000, totalrows=227970) at analyze.c:2370\n#6 0x00000000006270db in do_analyze_rel (onerel=0x7fd37e2afa50, options=2, params=0x7ffff5f38a70, va_cols=0x0, acquirefunc=0x627e9e <acquire_sample_rows>, relpages=75494, inh=false,\n in_outer_xact=false, elevel=13) at analyze.c:579\n#7 0x0000000000626616 in analyze_rel (relid=2015128626, relation=0x2161820, options=2, params=0x7ffff5f38a70, va_cols=0x0, in_outer_xact=false, bstrategy=0x21f7848) at analyze.c:310\n#8 0x00000000006c59b2 in vacuum (options=2, relations=0x21f79c0, params=0x7ffff5f38a70, bstrategy=0x21f7848, isTopLevel=true) at vacuum.c:357\n#9 0x00000000006c54e0 in ExecVacuum (vacstmt=0x2161910, isTopLevel=true) at vacuum.c:141\n#10 0x00000000008d1d7e in standard_ProcessUtility (pstmt=0x21619d0, queryString=0x2160d28 \"ANALYZE child. huawei_umts_ucell_201908;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n dest=0x2161cc0, completionTag=0x7ffff5f39100 \"\") at utility.c:670\n\n#3 0x00000000004856a7 in nocachegetattr (tuple=0x1c116b08, attnum=1, tupleDesc=0x23332b0) at heaptuple.c:582\n natts = 554\n j = 1\n tup = 0x1c116b20\n tp = 0x1c116b85 \"\\374\\023\\026\\213^s#\\347(\\235=\\326\\321\\067\\032\\245\\321B\\026}܋FS\\375\\244\\003\\065\\336\\277;\\252O\\006\\065\\320\\353\\211}F\\237\\373B\\243\\357J~\\270\\\"\\230ƣ\\024xǍ\\334\\377\\202\\277S\\031\\375\\351\\003\\220{\\004\"\n bp = 0x1c116b37 \"\\270\\027$U}\\232\\246\\235\\004\\255\\331\\033\\006Qp\\376E\\316h\\376\\363\\247\\366Նgy7\\311E\\224~F\\274\\023ϋ%\\216,\\221\\331@\\024\\363\\233\\070\\275\\004\\254L\\217t\\262X\\227\\352\\346\\347\\371\\070\\321ш\\221\\350fc\\316\\r\\356\\351h\\275\\213\\230\\360\\203\\374\\023\\026\\213^s#\\347(\\235=\\326\\321\\067\\032\\245\\321B\\026}܋FS\\375\\244\\003\\065\\336\\277;\\252O\\006\\065\\320\\353\\211}F\\237\\373B\\243\\357J~\\270\\\"\\230ƣ\\024xǍ\\334\\377\\202\\277S\\031\\375\\351\\003\\220{\\004\"\n slow = false\n off = -1\n#4 0x000000000062981d in 
std_fetch_func (stats=0x22700c8, rownum=54991, isNull=0x7ffff5f3844f) at analyze.c:1718\n attnum = 2\n tuple = 0x1c116b08\n tupDesc = 0x23332b0\n#5 0x000000000062aa99 in compute_scalar_stats (stats=0x22700c8, fetchfunc=0x62942b <std_fetch_func>, samplerows=120000, totalrows=227970) at analyze.c:2370\n value = 546120480\n isnull = false\n i = 54991\n null_cnt = 0\n nonnull_cnt = 54991\n toowide_cnt = 0\n---Type <return> to continue, or q <return> to quit---\n total_width = 274955\n is_varlena = true\n is_varwidth = true\n corr_xysum = 3.9525251667299724e-323\n ssup = {ssup_cxt = 0x22ae080, ssup_collation = 100, ssup_reverse = false, ssup_nulls_first = false, ssup_attno = 0, ssup_extra = 0x22ae5b0, comparator = 0xa06720 <varstrfastcmp_locale>,\n abbreviate = false, abbrev_converter = 0, abbrev_abort = 0, abbrev_full_comparator = 0}\n values = 0x204f9970\n values_cnt = 54991\n tupnoLink = 0x206ce5c0\n track = 0x22ae198\n track_cnt = 0\n num_mcv = 100\n num_bins = 100\n mystats = 0x2270490\n\n> > ts=# VACUUM FREEZE VERBOSE child.huawei_umts_ucell_201908;\n> > INFO: 00000: aggressively vacuuming \"child.huawei_umts_ucell_201908\"\n> > LOCATION: lazy_scan_heap, vacuumlazy.c:502\n> > ERROR: XX001: found xmin 73850277 from before relfrozenxid 111408920\n> > LOCATION: heap_prepare_freeze_tuple, heapam.c:6853\n> \n> Ugh :(\n> \n> We really need to add a error context to vacuumlazy that shows which\n> block is being processed.\n\n> > I confirmed I updated to 11.4 immediately on its release:\n> > [pryzbyj@database ~]$ ls -ltc /usr/pgsql-11/bin/postgres\n> > -rwxr-xr-x. 1 root root 7291736 Jun 20 07:09 /usr/pgsql-11/bin/postgres\n> \n> Any chance you might not have restarted postgres at that time?\n\nI don't think so, nagios would've been yelling at me. Also..I found I have a\nlog of pg_settings, which shows server_version updated 2019-06-20\n07:05:01.425645. 
Is there a bug in 11.3 which could explain it ?\n\n> > Here's usable looking bt\n> > \n> > #0 heap_prepare_freeze_tuple (tuple=0x7fd3d8fa0058, relfrozenxid=111408920, \n> > relminmxid=1846, cutoff_xid=111658731, cutoff_multi=1846, frz=0x223e930, \n> > totally_frozen_p=0x7ffff5f3848f) at heapam.c:6832\n> > changed = 211\n> > xmax_already_frozen = 127\n> > xmin_frozen = false\n> > freeze_xmax = false\n> > xid = 0\n> > __func__ = \"heap_prepare_freeze_tuple\"\n> \n> Hm. That's a backtrace to what precisely? Are you sure it's the the\n> erroring call to heap_prepare_freeze_tuple? Because I think that's at\n> the function's start - which is why some of the stack variables have\n> bogus contents.\n\nYes, I had just done this: b heap_prepare_freeze_tuple.\n\n> I think you'd need to set the breakpoint to heapam.c:6850 to be sure to\n> catch the error (while the error message heapam.c:6853, that's because\n> the line macro in some compilers expands to the end of the statement).\n\nI did 6848 and got;\n\n#0 heap_prepare_freeze_tuple (tuple=0x7fd403a02058, relfrozenxid=111408920, relminmxid=1846, cutoff_xid=111665057, cutoff_multi=1846, frz=0x22aa6f8, totally_frozen_p=0x7ffff5f3848f) at heapam.c:6849\n changed = false\n xmax_already_frozen = false\n xmin_frozen = false\n freeze_xmax = false\n xid = 73850277\n __func__ = \"heap_prepare_freeze_tuple\"\n\n> > errfinish one seems to be not useful?\n> > \n> > (gdb) bt\n> > #0 errfinish (dummy=0) at elog.c:415\n> > #1 0x000000000054aefd in ShowTransactionStateRec (\n> > str=0xafed12 \"StartTransaction\", s=0xf9efa0) at xact.c:5157\n> > [...]\n>\n> That's probably just the errfinish for a debug message. 
Might be easier\n> to set a breakpoint to pg_re_throw().\n\nRight, that works.\n\n(gdb) bt\n#0 pg_re_throw () at elog.c:1730\n#1 0x0000000000a3cb1e in errfinish (dummy=0) at elog.c:467\n#2 0x00000000004e72c4 in heap_prepare_freeze_tuple (tuple=0x7fd403a02058, relfrozenxid=111408920, relminmxid=1846, cutoff_xid=111665065, cutoff_multi=1846, frz=0x2260558, totally_frozen_p=0x7ffff5f3848f) at heapam.c:6850\n#3 0x00000000006c8e6e in lazy_scan_heap (onerel=0x7fd37d5f99c8, options=13, vacrelstats=0x22603b8, Irel=0x2260500, nindexes=3, aggressive=true) at vacuumlazy.c:1151\n\n> > I looked at this briefly:\n> > \n> > ts=# SELECT * FROM page_header(get_raw_page('child.huawei_umts_ucell_201908', 0));\n> > lsn | 3A49/66F44310\n> > checksum | 0\n> > flags | 4\n> > lower | 36\n> > upper | 1432\n> > special | 8192\n> > pagesize | 8192\n> > version | 4\n> > prune_xid | 0\n> \n> That's not necessarily the target block though, right? It'd be useful\n> to get this for the block with the corruption, and perhaps one\n> before/after. 
If the backtrace I commented on at the top is correct,\n> the relevant tuple was at \"blkno = 34915\" (visible a frame or two\n> up).\n\nAh, right.\n\nts=# SELECT * FROM page_header(get_raw_page('child.huawei_umts_ucell_201908', 34914));\n lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid \n---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n 3A4A/4F2192B8 | 0 | 0 | 36 | 1520 | 8192 | 8192 | 4 | 0\n\nts=# SELECT * FROM page_header(get_raw_page('child.huawei_umts_ucell_201908', 34915));\n 3A4A/4F21C268 | 0 | 0 | 36 | 1520 | 8192 | 8192 | 4 | 0\n\nts=# SELECT * FROM page_header(get_raw_page('child.huawei_umts_ucell_201908', 34916));\n 3A4A/4F21DC90 | 0 | 0 | 36 | 1496 | 8192 | 8192 | 4 | 0\n\n> Also heap_page_items() for those pages would be useful - you might want\n> to not include the t_data column, as it has raw tuple data.\n\nts=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(get_raw_page('child.huawei_umts_ucell_201908', 34914));\n lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits ... 
| t_oid \n\n 1 | 5952 | 1 | 2236 | 111659903 | 0 | 0 | (34914,1) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000001111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111001111111111111111111111011111111111111111011111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n 2 | 3736 | 1 | 2212 | 111659903 | 0 | 0 | (34914,2) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100000000011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111000111111111111111111111011111111111111111010111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n 3 | 1520 | 1 | 2216 | 111659903 | 0 | 0 | (34914,3) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100000001011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111000111111111111111111111011111111111111111011111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n\nts=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, 
t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(get_raw_page('child.huawei_umts_ucell_201908', 34915));\n 1 | 5976 | 1 | 2212 | 111659903 | 0 | 0 | (34915,1) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100000001011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111000111111111111111111111011111111111111111010111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n 2 | 3744 | 1 | 2232 | 111659903 | 0 | 0 | (34915,2) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100001111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111001111111111111111111111011111111111111111010111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n 3 | 1520 | 1 | 2220 | 111659903 | 0 | 0 | (34915,3) | 554 | 2311 | 96 | 
11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000001011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111000111111111111111111111011111111111111111010111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n\nts=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_field3, t_ctid, t_infomask2, t_infomask, t_hoff, t_bits, t_oid FROM heap_page_items(get_raw_page('child.huawei_umts_ucell_201908', 34916));\n 1 | 5952 | 1 | 2236 | 111659903 | 0 | 0 | (34916,1) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000001111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111001111111111111111111111011111111111111111011111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n 2 | 3736 | 1 | 2212 | 111659903 | 0 | 0 | (34916,2) | 554 | 2311 | 96 | 
11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100000001011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111000111111111111111111111011111111111111111010111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000001111111111111111111111111111111111111111111100111111000000 | \n 3 | 1496 | 1 | 2240 | 111659903 | 0 | 0 | (34916,3) | 554 | 2311 | 96 | 11111111111111111111111111111111111111011100111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000001111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111001111111111111111111111011111111111111111010111111101101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111001100000001111111111111111111111111111111111111111111100111111000000 | \n> Could you show\n> * pg_controldata\n\n[pryzbyj@database ~]$ sudo -u postgres ./src/postgresql.bin/bin/pg_controldata -D /var/lib/pgsql/11/data\n[sudo] password for pryzbyj:\npg_control version number: 1100\nCatalog version number: 201809051\nDatabase system identifier: 6616072377454496350\nDatabase cluster state: in production\npg_control last modified: Wed 07 Aug 2019 06:32:05 PM MDT\nLatest checkpoint location: 3A4C/D1BB5BD8\nLatest checkpoint's REDO location: 3A4C/CCC1DF90\nLatest checkpoint's REDO WAL file: 0000000100003A4C000000CC\nLatest checkpoint's TimeLineID: 1\nLatest checkpoint's PrevTimeLineID: 1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:111664949\nLatest checkpoint's NextOID: 2069503852\nLatest checkpoint's NextMultiXactId: 1846\nLatest 
checkpoint's NextMultiOffset: 3891\nLatest checkpoint's oldestXID: 33682470\nLatest checkpoint's oldestXID's DB: 16564\nLatest checkpoint's oldestActiveXID: 111664945\nLatest checkpoint's oldestMultiXid: 1484\nLatest checkpoint's oldestMulti's DB: 16564\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint: Wed 07 Aug 2019 06:31:35 PM MDT\nFake LSN counter for unlogged rels: 0/1\nMinimum recovery ending location: 0/0\nMin recovery ending loc's timeline: 0\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nwal_level setting: replica\nwal_log_hints setting: off\nmax_connections setting: 200\nmax_worker_processes setting: 8\nmax_prepared_xacts setting: 0\nmax_locks_per_xact setting: 128\ntrack_commit_timestamp setting: off\nMaximum data alignment: 8\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nWAL block size: 8192\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum columns in an index: 32\nMaximum size of a TOAST chunk: 1996\nSize of a large-object chunk: 2048\nDate/time type storage: 64-bit integers\nFloat4 argument passing: by value\nFloat8 argument passing: by value\nData page checksum version: 0\nMock authentication nonce: ff863744b99b849579d561c5117157c78a8ced0563df824f3035f70810fec534\n\n> * SELECT oid, datname, datfrozenxid, datminmxid FROM pg_database;\nts=# SELECT oid, datname, datfrozenxid, datminmxid FROM pg_database;\n oid | datname | datfrozenxid | datminmxid \n-------+-----------+--------------+------------\n 13529 | template0 | 103564559 | 1846\n 16400 | template1 | 52362404 | 1486\n 16443 | postgres | 46060229 | 1485\n 16564 | ts | 33682470 | 1484\n(4 rows)\n\n> That should be easy to redact if necessary. 
It'd be useful to know which\n> database the corrupted table is in.\n> \n> It'd also be helpful to get something roughly like\n> SELECT oid, oid::regclass, relkind, reltoastrelid, relfrozenxid, age(relfrozenxid), relminmxid, mxid_age(relminmxid), relpages, (SELECT txid_current()) FROM pg_class WHERE relfrozenxid <> 0;\n\n> But I'm not sure if including the tablenames would be a problematic for\n> your case. Possibly a bit hard to infer \"usage\" correlations without,\n> but probably still worthwhile.\n\nI'll send to you individually, I don't think it's secret, but it's 60k lines\nlong... Its OID verifies to me that the table was created on 8/1, and not 7/1,\nwhich was possible. Week-old backup also verifies that.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Wed, 7 Aug 2019 20:24:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: crash 11.5~ (and 11.4)"
}
] |
[
{
"msg_contents": "I'm looking at https://www.postgresql.org/docs/current/sql-analyze.html,\nwhere it says “Without a table_and_columns list, ANALYZE processes every\ntable and materialized view in the current database that the current user\nhas permission to analyze.”.\n\nI don’t believe there is a separate “analyze” permission, so which tables\nis this? Tables owned by the user? Ones where it can insert/update/delete?\nOnes where it can select?\n\nIf somebody can tell me, I'll make it a weekend project to propose a\nspecific update to the documentation to make this more clear. Or maybe\nthere should just be a cross-reference to another existing part of the\ndocumentation that explains more about this.",
"msg_date": "Wed, 7 Aug 2019 17:14:04 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": true,
"msg_subject": "Documentation clarification re: ANALYZE"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 2:14 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> I'm looking at https://www.postgresql.org/docs/current/sql-analyze.html,\n> where it says “Without a table_and_columns list, ANALYZE processes every\n> table and materialized view in the current database that the current user\n> has permission to analyze.”.\n>\n> I don’t believe there is a separate “analyze” permission, so which tables\n> is this? Tables owned by the user? Ones where it can insert/update/delete?\n> Ones where it can select?\n>\n\nOwners only - at least in previous releases. I don't recall whether the\naddition of new roles to cover subsets of administrative privileges ever\nwas extended to cover vacuum/analyze but I do not think it has.\n\nDavid J.",
"msg_date": "Wed, 7 Aug 2019 14:31:45 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation clarification re: ANALYZE"
},
{
"msg_contents": "On Wed, 7 Aug 2019 at 17:31, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, Aug 7, 2019 at 2:14 PM Isaac Morland <isaac.morland@gmail.com>\n> wrote:\n>\n>> I'm looking at https://www.postgresql.org/docs/current/sql-analyze.html,\n>> where it says “Without a table_and_columns list, ANALYZE processes every\n>> table and materialized view in the current database that the current user\n>> has permission to analyze.”.\n>>\n>> I don’t believe there is a separate “analyze” permission, so which tables\n>> is this? Tables owned by the user? Ones where it can insert/update/delete?\n>> Ones where it can select?\n>>\n>\n> Owners only - at least in previous releases. I don't recall whether the\n> addition of new roles to cover subsets of administrative privileges ever\n> was extended to cover vacuum/analyze but I do not think it has.\n>\n\nThanks. So presumably I would also have permission if I have SET ROLEd to\nthe owner, or to a role which is an INHERIT member of the owner.",
"msg_date": "Wed, 7 Aug 2019 17:42:23 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation clarification re: ANALYZE"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wed, Aug 7, 2019 at 2:14 PM Isaac Morland <isaac.morland@gmail.com>\n> wrote:\n>> I'm looking at https://www.postgresql.org/docs/current/sql-analyze.html,\n>> where it says “Without a table_and_columns list, ANALYZE processes every\n>> table and materialized view in the current database that the current user\n>> has permission to analyze.”.\n>> I don’t believe there is a separate “analyze” permission, so which tables\n>> is this? Tables owned by the user? Ones where it can insert/update/delete?\n>> Ones where it can select?\n\n> Owners only - at least in previous releases. I don't recall whether the\n> addition of new roles to cover subsets of administrative privileges ever\n> was extended to cover vacuum/analyze but I do not think it has.\n\nActually, looking in the source code finds\n\n * We allow the user to vacuum or analyze a table if he is superuser, the\n * table owner, or the database owner (but in the latter case, only if\n * it's not a shared relation).\n\nIt's definitely a documentation omission that this isn't spelled out in\nthe ANALYZE reference page (VACUUM's page does have text about it).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 17:54:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Documentation clarification re: ANALYZE"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 2:42 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> Thanks. So presumably I would also have permission if I have SET ROLEd to\n> the owner, or to a role which is an INHERIT member of the owner.\n>\n\nYes, the table ownership role check walks up the role membership hierarchy\nif \"inherit\" is on for the current role.\n\nDavid J.",
"msg_date": "Wed, 7 Aug 2019 15:01:31 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation clarification re: ANALYZE"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 05:54:14PM -0400, Tom Lane wrote:\n> Actually, looking in the source code finds\n> \n> * We allow the user to vacuum or analyze a table if he is superuser, the\n> * table owner, or the database owner (but in the latter case, only if\n> * it's not a shared relation).\n> \n> It's definitely a documentation omission that this isn't spelled out in\n> the ANALYZE reference page (VACUUM's page does have text about it).\n\nAs far as I recall we have been doing that for ages, so +1 for the\ndocumentation fix you have just done.\n--\nMichael",
"msg_date": "Thu, 8 Aug 2019 18:22:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Documentation clarification re: ANALYZE"
}
] |
[
{
"msg_contents": "Hello all,\n\nPlease find attached a trivial patch making a few arrays const (in addition\nto the data they point to).",
"msg_date": "Thu, 8 Aug 2019 09:46:06 +0300",
"msg_from": "Mark G <markg735@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small const correctness patch"
},
{
"msg_contents": "+1\n\nPatch successfully applied to the master (\n43211c2a02f39d6568496168413dc00e0399dc2e)\n\nOn Thu, Aug 8, 2019 at 12:30 PM Mark G <markg735@gmail.com> wrote:\n\n> Hello all,\n>\n> Please find attached a trivial patch making a few arrays const (in\n> addition to the data they point to).\n>\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 8 Aug 2019 22:25:15 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small const correctness patch"
},
{
"msg_contents": "On 2019-08-08 08:46, Mark G wrote:\n> Please find attached a trivial patch making a few arrays const (in\n> addition to the data they point to).\n\nHow did you find this? Any special compiler settings?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Aug 2019 19:51:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Small const correctness patch"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 8:51 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n\n> How did you find this? Any special compiler settings?\n>\n\n16 hours stuck in a plane on an international flight. I was just eyeballing\nthe code to kill the boredom.\n\n-Mark",
"msg_date": "Thu, 8 Aug 2019 22:56:02 +0300",
"msg_from": "Mark G <markg735@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small const correctness patch"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 8:25 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n> +1\n>\n> Patch successfully applied to the master (\n> 43211c2a02f39d6568496168413dc00e0399dc2e)\n>\n\nThat change looks like an unrelated patch for initdb. I'm still not seeing\nmy patch there.\n\n-Mark",
"msg_date": "Thu, 8 Aug 2019 23:25:24 +0300",
"msg_from": "Mark G <markg735@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small const correctness patch"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 1:25 AM Mark G <markg735@gmail.com> wrote:\n\n>\n>\n> On Thu, Aug 8, 2019 at 8:25 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>> +1\n>>\n>> Patch successfully applied to the master (\n>> 43211c2a02f39d6568496168413dc00e0399dc2e)\n>>\n>\n> That change looks like an unrelated patch for initdb. I'm still not seeing\n> my patch there.\n>\n\nI said I checked and verified patch against that hash. It applied to that\nwithout any failure. Sorry for the confusion.\n\n\n>\n> -Mark\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 9 Aug 2019 02:12:04 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small const correctness patch"
},
{
"msg_contents": "At Thu, 8 Aug 2019 22:56:02 +0300, Mark G <markg735@gmail.com> wrote in <CAEeOP_Y3SAXe8u++9e-CN_+MgY9_u+vu3a80sw+7gzR4s7KjqQ@mail.gmail.com>\n> On Thu, Aug 8, 2019 at 8:51 PM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> \n> \n> > How did you find this? Any special compiler settings?\n> >\n> \n> 16 hours stuck in a plane on an international flight. I was just eyeballing\n> the code to kill the boredom.\n\nA similar loose typing is seen, for example:p\n\n-const char *\n+const char * const\n\nsrc/backend/access/rmgrdesc/*.c\n relmap_identify(uint8 info)\n seq_identify(uint8 info)\n smgr_identify(uint8 info)\n.... (many)...\n\nsrc/backend/access/transam/xact.c:\n BlockStateAsString(TBlockState blockState)\n\n\nI found them by\n\nfind $(TOP) -type f -exec egrep -nH -e '^(static )?const char \\*' {} +\n\nthen eyeballing the first ones. I don't know an automated way\nto detect such possibly-loose constness of variables or\nfunctions.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Aug 2019 17:23:16 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small const correctness patch"
},
{
"msg_contents": "On 2019-08-08 08:46, Mark G wrote:\n> Please find attached a trivial patch making a few arrays const (in\n> addition to the data they point to).\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 22:06:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Small const correctness patch"
}
] |
[
{
"msg_contents": "Hi,\n\nIn RelationBuildPartitionDesc(), the memory used to gather partitioning\nbound info isn't freed at the end. This might not be a problem, because the\nallocated memory will eventually be recovered when the top-level context is\nfreed, but in the case where a partitioned table has 1000s or more partitions\nand this partitioned relation is opened & closed, with its cached entry\ninvalidated, in a loop, we'll have too many calls to\nRelationBuildPartitionDesc(), which will keep wasting some space with every\niteration.\n\nFor demonstration purposes, I made the following change to\nheap_drop_with_catalog() and tried to drop a partitioned table having 5000\npartitions (attached create script), which hit OOM on a machine in no time:\n\ndiff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\nindex b7bcdd9d0f..6b7bc0d7ae 100644\n--- a/src/backend/catalog/heap.c\n+++ b/src/backend/catalog/heap.c\n@@ -1842,6 +1842,8 @@ heap_drop_with_catalog(Oid relid)\n parentOid = get_partition_parent(relid);\n LockRelationOid(parentOid, AccessExclusiveLock);\n\n+ rel = relation_open(parentOid, NoLock);\n+ relation_close(rel, NoLock);\n /*\n * If this is not the default partition, dropping it will change the\n * default partition's partition constraint, so we must lock it.\n\n\nI think we should do all the partition bound information gathering and\ncalculation in a temporary memory context which can be released at the end of\nRelationBuildPartitionDesc(). Thoughts/comments?\n\nI did the same in the attached patch and the aforesaid OOM issue\ndisappeared.\n\nRegards,\nAmul",
"msg_date": "Thu, 8 Aug 2019 12:44:23 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "Hi Amul,\n\nOn Thu, Aug 8, 2019 at 4:15 PM amul sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> In RelationBuildPartitionDesc(), a memory space that use to gather partitioning\n> bound info wasn't free at the end. This might not a problem because this\n> allocated memory will eventually be recovered when the top-level context is\n> freed, but the case when a partitioned table having 1000s or more partitions and\n> this partitioned relation open & close, and its cached entry invalidated in loop\n> then we'll have too may call to RelationBuildPartitionDesc() which will keep\n> wasting some space with every loop.\n>\n> For a demonstration purpose, I did the following changes to\n> heap_drop_with_catalog() and tried to drop a partitioned table having 5000\n> partitions(attached create script) which hit OOM on a machine in no time:\n>\n> diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n> index b7bcdd9d0f..6b7bc0d7ae 100644\n> --- a/src/backend/catalog/heap.c\n> +++ b/src/backend/catalog/heap.c\n> @@ -1842,6 +1842,8 @@ heap_drop_with_catalog(Oid relid)\n> parentOid = get_partition_parent(relid);\n> LockRelationOid(parentOid, AccessExclusiveLock);\n>\n> + rel = relation_open(parentOid, NoLock);\n> + relation_close(rel, NoLock);\n> /*\n> * If this is not the default partition, dropping it will change the\n> * default partition's partition constraint, so we must lock it.\n>\n>\n> I think we should do all partitioned bound information gathering and\n> calculation in temporary memory context which can be released at the end of\n> RelationBuildPartitionDesc(), thoughts/Comments?\n>\n> I did the same in the attached patch and the aforesaid OOM issue is disappeared.\n\nThanks for the patch. This was discussed recently in the \"hyrax vs.\nRelationBuildPartitionDesc()\" thread [1] and I think Alvaro proposed\nan approach that's similar to yours. Not sure why it wasn't pursued\nthough. 
Maybe the reason is buried somewhere in that discussion.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 8 Aug 2019 16:56:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Amul,\n>\n> On Thu, Aug 8, 2019 at 4:15 PM amul sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In RelationBuildPartitionDesc(), a memory space that use to gather\n> partitioning\n> > bound info wasn't free at the end. This might not a problem because this\n> > allocated memory will eventually be recovered when the top-level context\n> is\n> > freed, but the case when a partitioned table having 1000s or more\n> partitions and\n> > this partitioned relation open & close, and its cached entry invalidated\n> in loop\n> > then we'll have too may call to RelationBuildPartitionDesc() which will\n> keep\n> > wasting some space with every loop.\n> >\n> > For a demonstration purpose, I did the following changes to\n> > heap_drop_with_catalog() and tried to drop a partitioned table having\n> 5000\n> > partitions(attached create script) which hit OOM on a machine in no time:\n> >\n> > diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n> > index b7bcdd9d0f..6b7bc0d7ae 100644\n> > --- a/src/backend/catalog/heap.c\n> > +++ b/src/backend/catalog/heap.c\n> > @@ -1842,6 +1842,8 @@ heap_drop_with_catalog(Oid relid)\n> > parentOid = get_partition_parent(relid);\n> > LockRelationOid(parentOid, AccessExclusiveLock);\n> >\n> > + rel = relation_open(parentOid, NoLock);\n> > + relation_close(rel, NoLock);\n> > /*\n> > * If this is not the default partition, dropping it will change\n> the\n> > * default partition's partition constraint, so we must lock it.\n> >\n> >\n> > I think we should do all partitioned bound information gathering and\n> > calculation in temporary memory context which can be released at the end\n> of\n> > RelationBuildPartitionDesc(), thoughts/Comments?\n> >\n> > I did the same in the attached patch and the aforesaid OOM issue is\n> disappeared.\n>\n> Thanks for the patch. 
This was discussed recently in the \"hyrax vs.\n> RelationBuildPartitionDesc()\" thread [1] and I think Alvaro proposed\n> an approach that's similar to yours. Not sure why it wasn't pursued\n> though. Maybe the reason is buried somewhere in that discussion.\n>\n> Thanks,\n> Amit\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw%40mail.gmail.com\n\n\nOh, quite similar, thanks Amit for pointing that out.\n\nLook like \"hyrax vs.RelationBuildPartitionDesc()\" is in discussion for the\nmaster\nbranch only, not sure though, but we need the similar fix for the back\nbranches as well.\n\nRegards,\nAmul",
"msg_date": "Thu, 8 Aug 2019 14:02:26 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 5:33 PM amul sul <sulamul@gmail.com> wrote:\n> On Thu, Aug 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n>> Thanks for the patch. This was discussed recently in the \"hyrax vs.\n>> RelationBuildPartitionDesc()\" thread [1] and I think Alvaro proposed\n>> an approach that's similar to yours. Not sure why it wasn't pursued\n>> though. Maybe the reason is buried somewhere in that discussion.\n>\n> Oh, quite similar, thanks Amit for pointing that out.\n>\n> Look like \"hyrax vs.RelationBuildPartitionDesc()\" is in discussion for the master\n> branch only, not sure though, but we need the similar fix for the back branches as well.\n\nWell, this is not a bug as such, so it's very unlikely that a fix like\nthis will be back-patched. Also, if this becomes an issue only for\nmore than over 1000 partitions, then it's not very relevant for PG 10\nand PG 11, because we don't recommend using so many partitions with\nthem. Maybe we can consider fixing PG 12 though.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 8 Aug 2019 17:42:21 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "On Thu, Aug 08, 2019 at 05:42:21PM +0900, Amit Langote wrote:\n> On Thu, Aug 8, 2019 at 5:33 PM amul sul <sulamul@gmail.com> wrote:\n> > On Thu, Aug 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> >> Thanks for the patch. This was discussed recently in the \"hyrax vs.\n> >> RelationBuildPartitionDesc()\" thread [1] and I think Alvaro proposed\n> >> an approach that's similar to yours. Not sure why it wasn't pursued\n> >> though. Maybe the reason is buried somewhere in that discussion.\n> >\n> > Oh, quite similar, thanks Amit for pointing that out.\n> >\n> > Look like \"hyrax vs.RelationBuildPartitionDesc()\" is in discussion for the master\n> > branch only, not sure though, but we need the similar fix for the back branches as well.\n> \n> Well, this is not a bug as such, so it's very unlikely that a fix like\n> this will be back-patched. Also, if this becomes an issue only for\n> more than over 1000 partitions, then it's not very relevant for PG 10\n> and PG 11, because we don't recommend using so many partitions with\n> them. Maybe we can consider fixing PG 12 though.\n\nA fix for the thousands-of-partitions case would be very welcome for 12.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Fri, 9 Aug 2019 20:46:09 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "On Sat, Aug 10, 2019 at 12:16 AM David Fetter <david@fetter.org> wrote:\n\n> On Thu, Aug 08, 2019 at 05:42:21PM +0900, Amit Langote wrote:\n> > On Thu, Aug 8, 2019 at 5:33 PM amul sul <sulamul@gmail.com> wrote:\n> > > On Thu, Aug 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >\n> > >> Thanks for the patch. This was discussed recently in the \"hyrax vs.\n> > >> RelationBuildPartitionDesc()\" thread [1] and I think Alvaro proposed\n> > >> an approach that's similar to yours. Not sure why it wasn't pursued\n> > >> though. Maybe the reason is buried somewhere in that discussion.\n> > >\n> > > Oh, quite similar, thanks Amit for pointing that out.\n> > >\n> > > Look like \"hyrax vs.RelationBuildPartitionDesc()\" is in discussion for\n> the master\n> > > branch only, not sure though, but we need the similar fix for the back\n> branches as well.\n> >\n> > Well, this is not a bug as such, so it's very unlikely that a fix like\n> > this will be back-patched. Also, if this becomes an issue only for\n> > more than over 1000 partitions, then it's not very relevant for PG 10\n> > and PG 11, because we don't recommend using so many partitions with\n> > them. Maybe we can consider fixing PG 12 though.\n>\n> A fix for the thousands-of-partitions case would be very welcome for 12.\n>\n>\nLook like commit # d3f48dfae42 added the required fix but is enabled only\nfor\nthe clobber-cache builds :(\n\nRegards,\nAmul",
"msg_date": "Tue, 13 Aug 2019 15:08:34 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 03:08:34PM +0530, amul sul wrote:\n> On Sat, Aug 10, 2019 at 12:16 AM David Fetter <david@fetter.org> wrote:\n> \n> > On Thu, Aug 08, 2019 at 05:42:21PM +0900, Amit Langote wrote:\n> > > On Thu, Aug 8, 2019 at 5:33 PM amul sul <sulamul@gmail.com> wrote:\n> > > > On Thu, Aug 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com>\n> > wrote:\n> > >\n> > > >> Thanks for the patch. This was discussed recently in the \"hyrax vs.\n> > > >> RelationBuildPartitionDesc()\" thread [1] and I think Alvaro proposed\n> > > >> an approach that's similar to yours. Not sure why it wasn't pursued\n> > > >> though. Maybe the reason is buried somewhere in that discussion.\n> > > >\n> > > > Oh, quite similar, thanks Amit for pointing that out.\n> > > >\n> > > > Look like \"hyrax vs.RelationBuildPartitionDesc()\" is in discussion for\n> > the master\n> > > > branch only, not sure though, but we need the similar fix for the back\n> > branches as well.\n> > >\n> > > Well, this is not a bug as such, so it's very unlikely that a fix like\n> > > this will be back-patched. Also, if this becomes an issue only for\n> > > more than over 1000 partitions, then it's not very relevant for PG 10\n> > > and PG 11, because we don't recommend using so many partitions with\n> > > them. Maybe we can consider fixing PG 12 though.\n> >\n> > A fix for the thousands-of-partitions case would be very welcome for 12.\n> >\n> >\n> Look like commit # d3f48dfae42 added the required fix but is enabled only\n> for\n> the clobber-cache builds :(\n\nI've got a real world multi-tenancy case that would really be helped\nby this. Can we enable it for all builds, please?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 13 Aug 2019 16:18:58 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Tue, Aug 13, 2019 at 03:08:34PM +0530, amul sul wrote:\n>> Look like commit # d3f48dfae42 added the required fix but is enabled only\n>> for the clobber-cache builds :(\n\n> I've got a real world multi-tenancy case that would really be helped\n> by this. Can we enable it for all builds, please?\n\nThis sounds like nonsense to me. As pointed out by the commit message,\nthat fix was just taking care of bloat observed with CCA on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Aug 2019 10:30:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Some memory not freed at the exit of RelationBuildPartitionDesc()"
}
]
[
{
"msg_contents": "I am Yonathan Misgan from Ethiopia, want to add some functionalities on PostgreSQL to support Ethiopian locales. I want your advice where I start to hack the PostgresSQL source code. I have attached some synopsis about the existing problems of PostgresSQL related with Ethiopian locale specially related with calendar, date and time format. Please don't mind about date and time written with Amharic because I used only to show the problems.\r\nCalendar: A calendar is a system of organizing days for social, religious, commercial or administrative purposes. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single, specific day within such a system. The Gregorian calendar is the most widely used calendar in the world today specially for database and other computer system. It is the calendar used in the international standard for representation of dates and times: ISO 8601:2004. It is a solar calendar based on a 365 days common year divideinto 12 months of irregular lengths 11 of the months have either 30 or 31 days, while the second month, February, has only 28 days during the common year. However, nearly every four years is a leap year, when one extra or intercalary day is added on February. Making the leap year in the Gregorian calendar 366 days long. The days of the year in the calendar are divided into 7 days weeks. The international standard is to start the week on Monday. However, several countries, including the Ethiopia, US and Canada, count Sunday as the first day of the week .\r\nThe Ethiopian calendar is the principal calendar used in Ethiopia and serves as the liturgical year for Christians in Eritrea and Ethiopia belonging to the Eritrean Orthodox Tewahedo Church, Ethiopian Orthodox Tewahedo Church, Eastern Catholic Churches and Coptic Orthodox Church of Alexandria. The Ethiopian calendar has difference with the Gregorian calendar in terms of day, month and year. 
Like the Coptic calendar, the Ethiopic calendar has 12 months of 30 days plus 5 or 6 epagomenal days, which comprise a thirteenth month. Maybe the clearest example of the different ways cultures view the world around them is demonstrated by our perceptions of the nontangible entity of time. The seven days week is nearly universal but we disagree on what we consider the first day of the week to be. This is the case even under the same calendar system.\r\nDate and Time: Ethiopia shares the 24 hour day convention with the rest of the world but differs on when the day begins. The two part day division around mid-day (AM for anti-meridian and PM for post-meridian) is also a foreign notion taken for universal in localization systems. Where the “<am>” and “<pm>” day divisions of Amharic translation is an approximation that is no more than serviceable:\r\n<am>ጠዋት</am> and <pm>ከሰዓት</pm>\r\nWhile these translations could be understood under the context of the foreign conventions that they map, they are not ideal for Ethiopia. Naturally, Ethiopia will want to apply its own conventions that are already millennium old. ጠዋት, ረፋድ, እኩለ ቀን, እኩለ ሌሊት some of the examples of day division in Ethiopia. Ethiopia does not have a well-established preference for digital time formats in database system and other computer systems, but we want to establish computerized systems into a society. An example digital time format under United States English conventions appears as: Mon 27 Feb 2018 12:00:00 PM EAT\r\nThe equivalent date and time under the Ethiopian Amharic convention as available on Linux systems today appears as:\r\nማክሰ ፌብሩ 26 ቀን 12: 00: 00 ከሰዓት EAT 2018 ዓ/ም\r\nThis represents a loose mapping of some Amharic conventions onto an external reckoning of time. This is only translation and not localization in its truest sense. 
A hypothetical Ethiopic date and time presentation might looks as:\r\nማክሰኞ፣ የካቲት 19 ቀን 6: 00: 00 እኩለ ቀን 2010 ዓ/ም or\r\nማክሰኞ፣ የካቲት 19 ቀን 6: 00: 00 እኩለ ቀን፳ ፻ ፲ዓ/ም\r\nLet see the drawbacks that exist in PostgreSQL with examples\r\nMost of the database systems use time stamp to store data is in Gregorian calendar system for specific time zone.\r\nHowever, most the data in Ethiopia are available with Ethiopian calendar system and users in Ethiopia are not comfortable with the Gregorian calendar as they use Ethiopian calendar in their day-to-day activities. This usually create inconvenience when the user want to have a reference to Ethiopic date as they had Gregorian calendar at database system. An example query to demonstrate this is given below.\r\n Q2: Select current_date;\r\nQ2 returns ‘2019-08-08’ but currently in Ethiopia calendar the year is ‘2011’,\r\n Q3: Select to_char(to_timestamp(to_char(4, '999'), 'MM'), 'Month');\r\nQ3 returns ‘April’ whereas the 4th month is ታህሳስ(December) in Ethiopian calendar system.\r\n Q4: Select to_char(to_timestamp (to_char(13, '999'), 'MM'), 'Month');\r\nQ4 returns an error message since the GC have only ‘12’ month per a year.\r\nWhere Q2, Q3 and Q4 are queries.",
"msg_date": "Thu, 8 Aug 2019 07:30:36 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "Locale support"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 7:31 PM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> I am Yonathan Misgan from Ethiopia, want to add some functionalities on PostgreSQL to support Ethiopian locales. I want your advice where I start to hack the PostgresSQL source code. I have attached some synopsis about the existing problems of PostgresSQL related with Ethiopian locale specially related with calendar, date and time format.\n\nHi Yonatan,\n\nI'm not sure if this requires hacking the PostgreSQL source code. It\nsounds more like an extension. My first impression is that you might\nnot need new types like \"date\". Instead you might be able to develop\na suite of functions that can convert the existing types to and from\nthe display formats (ie strings) and perhaps also components (year,\nmonth, day etc) that you use in your calendar system. For example:\n\nSELECT to_char_ethiopian(CURRENT_DATE, 'YYYY-MM-DD'), or whatever kind\nof format control string would be more appropriate.\n\nHowever, I see from https://en.wikipedia.org/wiki/Time_in_Ethiopia\nthat new days start at 1 o'clock, not midnight, so that makes\nCURRENT_DATE a bit more confusing -- you might need to write a\nfunction current_eth_date() to deal with that small difference. Other\nthan that detail, which is really a detail of CURRENT_DATE and not of\nthe date type, dates are internally represented as a number of days\nsince some arbitrary \"epoch\" day (I think Gregorian 2000-01-01), not\nas the components you see when you look at the output of SELECT\nCURRENT_DATE. That is, the Gregorian calendar concepts exist mostly\nin the display/input functions, and the operators that can add\nintervals etc. You could supply a different set of functions, but use\nthe same types, and I suspect that'd be convenient because then you'll\nbe able to use Gregorian and Ethiopian conventions with the same data,\nwhenever you need to. 
It's much the same for timestamps, but with\nmore complicated details.\n\nI see that there are libraries and bits of example code around to do\nthe various kinds of calendar maths required for Ethiopian dates in\nPerl, Python etc. If I were you I think I'd experiment with a\nprototype implementation using PL/Perl, PL/Python etc (a way to\ndefine new PostgreSQL functions written in those languages), and if\nthat goes well, try writing an extension in C to do it more\nefficiently.\n\nThe end goal of that woudn't need to be part of PostgreSQL itself, but\njust an extension that anyone can download and install to use\nEthiopian dates conveniently.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2019 20:34:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Locale support"
},
{
"msg_contents": "Thank you for your quick response. I am also impressed to develop Ethiopian calendar as an extension on PostgreSQL and I I have already developed the function that convert Gregorian calendar time to Ethiopian calendar time. But the difficulty is on how to use this function on PostgreSQL as well on PostgreSQL month names are key words when I am developing Ethiopian calendar the date data type is doesn't accept Ethiopian month name as a date data type value only the numeric representation of the months are accepted by compiler.\nSo my question is after developing the converter function where I put it for accessing it on PostgreSQL.\n\n\n\n-------- Original message --------\nFrom: Thomas Munro <thomas.munro@gmail.com>\nDate: 8/8/19 11:34 AM (GMT+03:00)\nTo: Yonatan Misgan <yonamis@dtu.edu.et>\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: Locale support\n\nOn Thu, Aug 8, 2019 at 7:31 PM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> I am Yonathan Misgan from Ethiopia, want to add some functionalities on PostgreSQL to support Ethiopian locales. I want your advice where I start to hack the PostgresSQL source code. I have attached some synopsis about the existing problems of PostgresSQL related with Ethiopian locale specially related with calendar, date and time format.\n\nHi Yonatan,\n\nI'm not sure if this requires hacking the PostgreSQL source code. It\nsounds more like an extension. My first impression is that you might\nnot need new types like \"date\". Instead you might be able to develop\na suite of functions that can convert the existing types to and from\nthe display formats (ie strings) and perhaps also components (year,\nmonth, day etc) that you use in your calendar system. 
For example:\n\nSELECT to_char_ethiopian(CURRENT_DATE, 'YYYY-MM-DD'), or whatever kind\nof format control string would be more appropriate.\n\nHowever, I see from https://en.wikipedia.org/wiki/Time_in_Ethiopia\nthat new days start at 1 o'clock, not midnight, so that makes\nCURRENT_DATE a bit more confusing -- you might need to write a\nfunction current_eth_date() to deal with that small difference. Other\nthan that detail, which is really a detail of CURRENT_DATE and not of\nthe date type, dates are internally represented as a number of days\nsince some arbitrary \"epoch\" day (I think Gregorian 2000-01-01), not\nas the components you see when you look at the output of SELECT\nCURRENT_DATE. That is, the Gregorian calendar concepts exist mostly\nin the display/input functions, and the operators that can add\nintervals etc. You could supply a different set of functions, but use\nthe same types, and I suspect that'd be convenient because then you'll\nbe able to use Gregorian and Ethiopian conventions with the same data,\nwhenever you need to. It's much the same for timestamps, but with\nmore complicated details.\n\nI see that there are libraries and bits of example code around to do\nthe various kinds of calendar maths required for Ethiopian dates in\nPerl, Python etc. If I were you I think I'd experiment with a\nprototype implementation using PL/Perl, PL/Python etc (a way to\ndefine new PostgreSQL functions written in those languages), and if\nthat goes well, try writing an extension in C to do it more\nefficiently.\n\nThe end goal of that woudn't need to be part of PostgreSQL itself, but\njust an extension that anyone can download and install to use\nEthiopian dates conveniently.\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2019 13:29:08 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "RE: Locale support"
},
{
"msg_contents": "On 8/8/19 9:29 AM, Yonatan Misgan wrote:\n> From: Thomas Munro <thomas.munro@gmail.com>\n>> Perl, Python etc. If I were you I think I'd experiment with a\n>> prototype implementation using PL/Perl, PL/Python etc (a way to\n\nAs a bit of subtlety that might matter, the internal representation\nin PostgreSQL, as in ISO 8601, applies the Gregorian calendar\n'proleptically', that is, forever into the future, and forever into\nthe past, before it was even invented or in use anywhere.\n\nThat matches the documented behavior of the standard 'datetime'\nclass included with Python (though in Python you need an add-on\nmodule to support time zones).\n\nIn Perl, an add-on module may be required.\n\n(In Java, the classes in the java.time package match the PostgreSQL /\nISO 8601 behavior, while the classes in the java.sql package do not.)\n\nThe effects of any mismatch are most likely to show up in dates\nearlier than 15 October 1582 Gregorian.\n\nThis was freshly on my mind from a recent thread over here[1].\n\nRegards,\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/5D3AF944.6020900%40anastigmatix.net\n\n\n",
"msg_date": "Thu, 8 Aug 2019 10:22:45 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Locale support"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 6:29 AM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> So my question is after developing the converter function where I put it for accessing it on PostgreSQL.\n\nMaybe you can take some inspiration from the postgresql-unit extension:\n\nhttps://github.com/df7cb/postgresql-unit\n\nNote that it is built on top of GNU units, which is itself highly extensible.\n\nI'm not sure if this will be useful, since I am not an expert on\ncalendar systems.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 8 Aug 2019 11:19:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Locale support"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 6:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Aug 8, 2019 at 6:29 AM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> > So my question is after developing the converter function where I put it for accessing it on PostgreSQL.\n>\n> Maybe you can take some inspiration from the postgresql-unit extension:\n>\n> https://github.com/df7cb/postgresql-unit\n\nHere's a 5 minute bare bones extension with place holders functions\nshowing what I had in mind. That is, assuming that \"date\" is a\nreasonable type, and we're just talking about different ways of\nconverting to/from text.\n\nhttps://github.com/macdice/calendars\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Aug 2019 10:16:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Locale support"
},
{
"msg_contents": "Can I implement it as a locale support? When the user want to change the lc _time = am_ET(Amharic Ethiopia ) the date and time representation of the database systems be in Ethiopian calendar.\n\n-------- Original message --------\nFrom: Thomas Munro <thomas.munro@gmail.com>\nDate: 8/9/19 1:17 AM (GMT+03:00)\nTo: Peter Geoghegan <pg@bowt.ie>\nCc: Yonatan Misgan <yonamis@dtu.edu.et>, pgsql-hackers@lists.postgresql.org\nSubject: Re: Locale support\n\nOn Fri, Aug 9, 2019 at 6:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Aug 8, 2019 at 6:29 AM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> > So my question is after developing the converter function where I put it for accessing it on PostgreSQL.\n>\n> Maybe you can take some inspiration from the postgresql-unit extension:\n>\n> https://github.com/df7cb/postgresql-unit\n\nHere's a 5 minute bare bones extension with place holders functions\nshowing what I had in mind. That is, assuming that \"date\" is a\nreasonable type, and we're just talking about different ways of\nconverting to/from text.\n\nhttps://github.com/macdice/calendars\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Aug 2019 06:15:51 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "RE: Locale support"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 6:15 PM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> Can I implement it as a locale support? When the user want to change the lc _time = am_ET(Amharic Ethiopia ) the date and time representation of the database systems be in Ethiopian calendar.\n\nHi Yonatan,\n\nI'm not an expert in this stuff, but it seem to me that both the\noperating system and the date/time localisation code in PostgreSQL use\nGregorian calendar logic, even though they can use local language\nnames for them. That is, they allow you to display the French, Greek,\nEthiopian ... names for the Gregorian months, but not a a completely\ndifferent calendar system. For example, according to my operating\nsystem:\n\n$ LC_TIME=en_GB.UTF-8 date\nSat 10 Aug 2019 10:58:42 NZST\n$ LC_TIME=fr_FR.UTF-8 date\nsam. 10 août 2019 10:58:48 NZST\n$ LC_TIME=el_GR.UTF-8 date\nΣάβ 10 Αυγ 2019 10:58:51 NZST\n$ LC_TIME=am_ET.UTF-8 date\nቅዳሜ ኦገስ 10 10:58:55 NZST 2019\n\nThese all say it's Saturday (ቅዳሜ) the 10th of August (ኦገስ) on the\nGregorian calendar. Looking at POSIX date[1] you can see they\ncontemplated the existence of non-Gregorian calendars in a very\nlimited way, but no operating system I have access to does anything\nother than Gregorian with %x and %c:\n\n\"The date string formatting capabilities are intended for use in\nGregorian-style calendars, possibly with a different starting year (or\nyears). The %x and %c conversion specifications, however, are intended\nfor local representation; these may be based on a different,\nnon-Gregorian calendar.\"\n\nPostgreSQL behaves the same way when you ask for the localised month in am_ET:\n\ntmunro=> set lc_time to 'am_ET.UTF-8';\nSET\ntmunro=> select to_char(now(), 'TMMon');\n to_char\n-----------\n ኦገስ\n(1 row)\n\nThis is hard coded into the system, as you can see from\nsrc/backend/utils/adt/pg_locale.c where the *twelve* month names are\nloaded into localized_abbrev_months and other similar arrays. 
That's\nthe first clue that this system can't handle the thirteen Ethiopian\nmonths, not to mention the maths required to work with them.\n\nThat's why I think you need a new, different to_char() function (and\nprobably more functions). In that skeleton code I posted, you can see\nI defined a function to_char(date, format, calendar) that takes a\nthird argument, for the calendar name. You might also wonder if that\nnew function should respect the locale settings, but I'm not sure if\nit could in general; you'd have to be able to get (say) the Greek\nnames for the Ethiopian calendar's months, which the OS won't be able\nto give you. Though perhaps you'd want some way to select between the\nEthiopian script and the transliterations into Latin script, which\nwould presumably be hard coded into the extension, I have no idea if\nthat's useful to anyone...\n\nBTW there have been earlier discussions of this:\n\nhttps://www.postgresql.org/message-id/flat/CAM7dW9iBXDJuwZrEXW%2Bdsa_%3Dew%3D%2BFdv7mcF51nQLGSkTkQp2MQ%40mail.gmail.com\n\nIt shows that Apple has an Objective-C NSCalendar class that\nunderstands Ehtiopian, Persian, Hebrew, ... calendars, which made me\nwonder if LC_TIME=am_ET.UTF-8 would trigger something special on a\nMac, but nope, its libc still just gives Ethiopian names for Gregorian\nmonths as I expected.\n\n[1] http://pubs.opengroup.org/onlinepubs/009695399/utilities/date.html\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Aug 2019 11:50:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Locale support"
},
{
"msg_contents": "On Sat, Aug 10, 2019 at 11:50 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Aug 9, 2019 at 6:15 PM Yonatan Misgan <yonamis@dtu.edu.et> wrote:\n> > Can I implement it as a locale support? When the user want to change the lc _time = am_ET(Amharic Ethiopia ) the date and time representation of the database systems be in Ethiopian calendar.\n>\n> I'm not an expert in this stuff, but it seem to me that both the\n> operating system and the date/time localisation code in PostgreSQL use\n> Gregorian calendar logic, even though they can use local language\n> names for them. That is, they allow you to display the French, Greek,\n> Ethiopian ... names for the Gregorian months, but not a a completely\n> different calendar system. For example, according to my operating\n> system:\n> ...\n\nReading about that led me to the ICU model of calendars. Here's the C\nAPI (there are C++ and Java APIs too, but to maximise the possibility\nof producing something that could ever be part of core PostgreSQL, I'd\nstick to pure C):\n\nhttp://userguide.icu-project.org/datetime/calendar\nhttp://icu-project.org/apiref/icu4c/ucal_8h.html\nhttp://icu-project.org/apiref/icu4c/udat_8h.html\n\nIt does in fact work using locales to select calendars as you were\nsuggesting (unlike POSIX or at least the POSIX systems I tried), and\nthere it knows that am_ET is associated with the Ethiopic calendar, as\nyou were suggesting (and likewise for nearly a dozen other calendars).\nWhen you call ucal_open() you have to say whether you want\nUCAL_TRADITIONAL (the traditional calendar associated with the locale)\nor UCAL_GREGORIAN. In the usual ICU fashion it lets you mix and match\nmore explicitly, so you can use locale names like\n\"fr_FR@calendar=buddhist\". 
Then you can us the udat.h stuff to format\ndates and so forth.\n\nSo now I'm wondering if the best idea would be to write an extension\nthat provides a handful of carefully crafted functions to expose the\nICU calendar date/time formatting/parsing/manipulation stuff to SQL.\nI think that should be doable in a way that is not specific to\nEthiopic, so that users of Indian, Persian, Islamic etc calendars can\nalso benefit from this. I'm not sure if it should use LC_TIME --\npossibly not, because that would interfere with libc-based stuff and\nthe locale names don't match up; that might only make sense if this\nfacility completely replaced the built in date/time stuff, which seems\nunlikely. Perhaps you'd just want to pass in the complete ICU locale\nstring to each of the functions, to_char_icu(current_date, 'Month',\n'fr_FR@calendar=buddhist'), or perhaps you'd want a GUC to control it\n(extensions can have their own GUCs). I'm not sure.\n\nThere has certainly been interest in exposing other bits of ICU to\nPostgreSQL, either in core or in extensions: collations, Unicode\nnormalisation, and now this. Hmm.\n\nAnother thing you might want to look into is whether the SQL standard\nhas anything to say about non-Gregorian calendars, and whether DB2 has\nanything to support them (it certainly has ICU inside it, as does\nPostgreSQL (optionally), so I wonder if they've exposed this part of\nit to SQL and if so how).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Aug 2019 15:10:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Locale support"
}
] |
[
{
"msg_contents": "Hi, hackers\n\n\"The local variables that do not have the volatile type and have been changed\nbetween the setjmp() invocation and longjmp() call are indeterminate\". This is\nwhat the POSIX (and C standard for setjmp) says.\n\nThat's fine actually, but if we put the PG_TRY()/CATCH() in a loop, high\nversion gcc might complain.\n\nVersion:\n$ gcc-9 --version\ngcc-9 (Ubuntu 9.1.0-2ubuntu2~19.04) 9.1.0\n(Actually from gcc 7)\n\nReproducer:\n```\n#include <setjmp.h>\n\nextern int other(void);\nextern void trigger(int *cond1);\nextern sigjmp_buf *PG_exception_stack;\n\nvoid\ntrigger(int *cond1)\n{\n while (1)\n {\n if (*cond1 == 0)\n *cond1 = other();\n\n while (*cond1)\n {\n sigjmp_buf *save_exception_stack = PG_exception_stack;\n sigjmp_buf local_sigjmp_buf;\n\n if (sigsetjmp(local_sigjmp_buf, 0) == 0)\n PG_exception_stack = &local_sigjmp_buf;\n else\n PG_exception_stack = (sigjmp_buf *) save_exception_stack;\n\n PG_exception_stack = (sigjmp_buf *) save_exception_stack;\n }\n }\n}\n```\n\n```\n$ gcc-9 -O1 -Werror=uninitialized -fexpensive-optimizations -ftree-pre -c -o /dev/null reproducer.c\nreproducer.c: In function 'trigger':\nreproducer.c:17:16: error: 'save_exception_stack' is used uninitialized in this function [-Werror=uninitialized]\n 17 | sigjmp_buf *save_exception_stack = PG_exception_stack;\n | ^~~~~~~~~~~~~~~~~~~~\ncc1: some warnings being treated as errors\n```\n\nCodes re-ordering matters, when it warns:\n```\n sigjmp_buf *save_exception_stack = PG_exception_stack;\n 2f: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 36 <trigger+0x36>\n 36: 48 89 5c 24 18 mov %rbx,0x18(%rsp)\n sigjmp_buf local_sigjmp_buf;\n\n if (sigsetjmp(local_sigjmp_buf, 0) == 0)\n```\n\nWhen it doesn't complain:\n```\n sigjmp_buf *save_exception_stack = PG_exception_stack;\n sigjmp_buf local_sigjmp_buf;\n\n if (sigsetjmp(local_sigjmp_buf, 0) == 0)\n 29: 48 8d 44 24 20 lea 0x20(%rsp),%rax\n 2e: 48 89 44 24 08 mov %rax,0x8(%rsp)\n...\n sigjmp_buf *save_exception_stack = 
PG_exception_stack;\n 3c: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 43 <trigger+0x43>\n 43: 48 89 5c 24 18 mov %rbx,0x18(%rsp)\n```\n\nGreenplum had an issue reporting save_exception_stack and save_context_stack\nnot initialized.\nhttps://github.com/greenplum-db/gpdb/issues/8262\n\nI filed a bug report for gcc, they think it's expected.\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=91395\n\nSince high version gcc thinks it's supposed to report warning, we need to make\nthese two variables volatile? Or have I missed something?\n\n-- \nAdam Lee",
"msg_date": "Thu, 8 Aug 2019 16:20:08 +0800",
"msg_from": "Adam Lee <ali@pivotal.io>",
"msg_from_op": true,
"msg_subject": "PG_TRY()/CATCH() in a loop reports uninitialized variables"
},
{
"msg_contents": "Adam Lee <ali@pivotal.io> writes:\n> That's fine actually, but if we put the PG_TRY()/CATCH() in a loop, high\n> version gcc might complain.\n\nI'd be inclined to say \"so don't do that then\". Given this interpretation\n(which sure looks like a bug to me, gcc maintainers' opinion or no),\nyou're basically going to have to mark EVERYTHING in that function\nvolatile. Better to structure the code so you don't have to do that,\nwhich would mean not putting the TRY and the loop in the same level\nof function.\n\nI've seen other weird-maybe-bug gcc behaviors in the vicinity of\nsetjmp calls, too, which is another factor that pushes me not to\nwant to assume too much about what you can do in the same function\nas a TRY call.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2019 10:47:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG_TRY()/CATCH() in a loop reports uninitialized variables"
}
] |
[
{
"msg_contents": "A long time ago, we changed LC_COLLATE and LC_CTYPE from cluster-wide to\nper-database (61d967498802ab86d8897cb3c61740d7e9d712f6). There is some\nleftover code from that in initdb.c and backend/main/main.c to pass\nthese environment variables around in the expectations that the backend\nwill write them to pg_control during bootstrap, which is of course all a\nlie now.\n\nThe attached patch cleans that up. (Not totally sure about the WIN32\nblock, but the change seems good to me.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 8 Aug 2019 16:27:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "clean up obsolete initdb locale handling"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> A long time ago, we changed LC_COLLATE and LC_CTYPE from cluster-wide to\n> per-database (61d967498802ab86d8897cb3c61740d7e9d712f6). There is some\n> leftover code from that in initdb.c and backend/main/main.c to pass\n> these environment variables around in the expectations that the backend\n> will write them to pg_control during bootstrap, which is of course all a\n> lie now.\n\nWell, the comments' references to pg_control are indeed obsolete, but why\nwouldn't we just replace that with references to \"the appropriate entry in\npg_database\"? I don't see why that movement changed anything about what\nshould happen here. In particular, I'm concerned that this patch will\nresult in subtle changes in what settings get chosen during initdb.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2019 10:56:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up obsolete initdb locale handling"
},
{
"msg_contents": "I wrote:\n> ... In particular, I'm concerned that this patch will\n> result in subtle changes in what settings get chosen during initdb.\n\nOK, after reviewing the code a bit more I take that back --- initdb's\nchoices are entirely made within initdb.\n\nHowever, I don't much like the choice to set LC_COLLATE and LC_CTYPE\ndifferently. That seems to be risking weird behavior, and for what?\nI'd be inclined to just remove the WIN32 stanza, initialize all\nthree of these variables with \"\", and explain it along the lines of\n\n * In the postmaster, absorb the environment values for LC_COLLATE\n * and LC_CTYPE. Individual backends will change these later to\n * settings taken from pg_database, but the postmaster cannot do\n * that. If we leave these set to \"C\" then message localization\n * might not work well in the postmaster.\n\nThat ends up being no code change in main.c, except on Windows.\nI concur that we can drop the transmission of LC_COLLATE and\nLC_CTYPE via environment variables.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2019 11:51:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up obsolete initdb locale handling"
},
{
"msg_contents": "On 2019-08-08 17:51, Tom Lane wrote:\n> However, I don't much like the choice to set LC_COLLATE and LC_CTYPE\n> differently. That seems to be risking weird behavior, and for what?\n> I'd be inclined to just remove the WIN32 stanza, initialize all\n> three of these variables with \"\", and explain it along the lines of\n> \n> * In the postmaster, absorb the environment values for LC_COLLATE\n> * and LC_CTYPE. Individual backends will change these later to\n> * settings taken from pg_database, but the postmaster cannot do\n> * that. If we leave these set to \"C\" then message localization\n> * might not work well in the postmaster.\n\nOK, let's do it like that. Updated patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 12 Aug 2019 20:11:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up obsolete initdb locale handling"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> OK, let's do it like that. Updated patch attached.\n\nLGTM, but I don't have the ability to test it on Windows.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Aug 2019 14:17:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up obsolete initdb locale handling"
},
{
"msg_contents": "On 2019-08-12 20:17, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> OK, let's do it like that. Updated patch attached.\n> \n> LGTM, but I don't have the ability to test it on Windows.\n\nCommitted, after some testing on Windows.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Aug 2019 06:53:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up obsolete initdb locale handling"
}
] |
[
{
"msg_contents": "Libpq doesn't have a way to control which password protocols are used.\nFor example, the client might expect the server to be using SCRAM, but\nit actually ends up using plain password authentication instead.\n\nThis patch adds:\n\n password_protocol = {plaintext|md5|scram-sha-256|scram-sha-256-plus}\n\nas a connection parameter. Libpq will then reject any authentication\nrequest from the server that is less secure than this setting. Setting\nit to \"plaintext\" (default) will answer to any kind of authentication\nrequest.\n\nI'm not 100% happy with the name \"password_protocol\", but other names I\ncould think of seemed likely to cause confusion.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 08 Aug 2019 15:38:20 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Thu, Aug 08, 2019 at 03:38:20PM -0700, Jeff Davis wrote:\n> Libpq doesn't have a way to control which password protocols are used.\n> For example, the client might expect the server to be using SCRAM, but\n> it actually ends up using plain password authentication instead.\n\nThanks for working on this!\n\n> I'm not 100% happy with the name \"password_protocol\", but other names I\n> could think of seemed likely to cause confusion.\n\nWhat about auth_protocol then? It seems to me that it could be useful\nto have the restriction on AUTH_REQ_MD5 as well.\n\n> Sets the least-secure password protocol allowable when using password\n> authentication. Options are: \"plaintext\", \"md5\", \"scram-sha-256\", or\n> \"scram-sha-256-plus\".\n\nThis makes it sound like there is a linear hierarchy among all those\nprotocols, which is true in this case, but if the list of supported\nprotocols is extended in the future it may be not.\n\nI think that this should have TAP tests in src/test/authentication/ so\nas we make sure of the semantics. For the channel-binding part, the\nlogic path for the test would be src/test/ssl.\n\n+#define DefaultPasswordProtocol \"plaintext\"\nI think that we are going to need another default value for that, like\n\"all\" to reduce the confusion that SCRAM, MD5 and co are still\nincluded in the authorized set in this case.\n\nAnother thing that was discussed on the topic would be to allow a list\nof authorized protocols instead. I personally don't think that we\nneed to go necessarily this way, but it could make the integration of\nthings line scram-sha-256,scram-sha-256-plus easier to integrate in\napplication flows.\n--\nMichael",
"msg_date": "Fri, 9 Aug 2019 12:00:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, 2019-08-09 at 12:00 +0900, Michael Paquier wrote:\n> What about auth_protocol then? It seems to me that it could be\n> useful\n> to have the restriction on AUTH_REQ_MD5 as well.\n\nauth_protocol does sound like a good name. I'm not sure what you mean\nregarding MD5 though.\n\n> This makes it sound like there is a linear hierarchy among all those\n> protocols, which is true in this case, but if the list of supported\n> protocols is extended in the future it may be not.\n\nWe already have that concept to a lesser extent, with the md5\nauthentication method also permitting scram-sha-256.\n\nAlso note that the server chooses what kind of authentication request\nto send, which imposes a hierarchy of password < md5 < sasl. Within the\nsasl authentication request the server can advertise multiple supported\nmechanisms, though, so there doesn't need to be a hierarchy among sasl\nmechanisms.\n\n> I think that this should have TAP tests in src/test/authentication/\n> so\n> as we make sure of the semantics. For the channel-binding part, the\n> logic path for the test would be src/test/ssl.\n\nWill do.\n\n> Another thing that was discussed on the topic would be to allow a\n> list\n> of authorized protocols instead. I personally don't think that we\n> need to go necessarily this way, but it could make the integration of\n> things line scram-sha-256,scram-sha-256-plus easier to integrate in\n> application flows.\n\nThat sounds good, but there are a lot of possibilities and I can't\nquite decide which way to go.\n\nWe could expose it as an SASL option like:\n\n saslmode = {disable|prefer|require-scram-sha-256|require-scram-sha-\n256-plus}\n\nBut that doesn't allow for multiple acceptable mechanisms, which could\nmake migration a pain. 
We try to use a comma-list like:\n\n saslmode = {disable|prefer|require}\n saslmech = all | {scram-hash-256|scram-hash-256-plus}[,...]\n\nOr we could over-engineer it to do something like:\n\n saslmode = {disable|prefer|require}\n saslmech = all | {scram|future_mech}[,...]\n scramhash = all | {sha-256|future_hash}[,...]\n scram_channel_binding = {disable|prefer|require}\n\n(Aside: is the channel binding only a SCRAM concept, or also a SASL\nconcept?)\n\nAlso, working with libpq I found myself wondering why everything is\nbased on strings instead of enums or some other structure. Do you know\nwhy it's done that way?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 08 Aug 2019 23:16:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Thu, Aug 08, 2019 at 11:16:24PM -0700, Jeff Davis wrote:\n> On Fri, 2019-08-09 at 12:00 +0900, Michael Paquier wrote:\n> > What about auth_protocol then? It seems to me that it could be\n> > useful\n> > to have the restriction on AUTH_REQ_MD5 as well.\n> \n> auth_protocol does sound like a good name. I'm not sure what you mean\n> regarding MD5 though.\n\nSorry, I meant krb5 here.\n\n> We already have that concept to a lesser extent, with the md5\n> authentication method also permitting scram-sha-256.\n\nThat's present to ease upgrades, and once the AUTH_REQ part is\nreceived the client knows what it needs to go through.\n\n> That sounds good, but there are a lot of possibilities and I can't\n> quite decide which way to go.\n> \n> We could expose it as an SASL option like:\n> \n> saslmode = {disable|prefer|require-scram-sha-256|require-scram-sha-\n> 256-plus}\n\nOr we could shape password_protocol so as it takes a list of\nprotocols, as a white list of authorized things in short.\n--\nMichael",
"msg_date": "Fri, 9 Aug 2019 19:09:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Thu, Aug 08, 2019 at 11:16:24PM -0700, Jeff Davis wrote:\n> > On Fri, 2019-08-09 at 12:00 +0900, Michael Paquier wrote:\n> > > What about auth_protocol then? It seems to me that it could be\n> > > useful\n> > > to have the restriction on AUTH_REQ_MD5 as well.\n> > \n> > auth_protocol does sound like a good name. I'm not sure what you mean\n> > regarding MD5 though.\n\nI don't really care for auth_protocol as that's pretty close to\n\"auth_method\" and that isn't what we're talking about here- this isn't\nthe user picking the auth method, per se, but rather saying which of the\npassword-based mechanisms for communicating that the user knows the\npassword is acceptable. Letting users choose which auth methods are\nallowed might also be interesting (as in- we are in a Kerberized\nenvironment and therefore no client should ever be using any auth method\nexcept GSS, could be a reasonable ask) but it's not the same thing.\n\n> Sorry, I meant krb5 here.\n\nWhat restriction are you suggesting here wrt krb5..?\n\n> > We already have that concept to a lesser extent, with the md5\n> > authentication method also permitting scram-sha-256.\n> \n> That's present to ease upgrades, and once the AUTH_REQ part is\n> received the client knows what it needs to go through.\n\nI don't think there's any question that, of the methods available to\nprove that you know what the password is, simply sending the password to\nthe server as cleartext is the least secure. 
If I, as a user, decide\nthat I don't really care what method is used, it's certainly simpler to\njust say 'plaintext' than to have to list out every possible option.\n\nHaving an 'any' option, as mentioned before, could be an alternative\nthough.\n\nI agree with the point that there isn't any guarantee that it'll always\nbe clear-cut as to which of two methods is \"better\".\n\nFrom a user perspective, it seems like the main things are \"don't send\nmy password in the clear to the server\", and \"require channel binding to\nprove there isn't a MITM\". I have to admit that I like the idea of\nrequiring scram to be used and not allowing md5 though.\n\n> > That sounds good, but there are a lot of possibilities and I can't\n> > quite decide which way to go.\n> > \n> > We could expose it as an SASL option like:\n> > \n> > saslmode = {disable|prefer|require-scram-sha-256|require-scram-sha-\n> > 256-plus}\n> \n> Or we could shape password_protocol so as it takes a list of\n> protocols, as a white list of authorized things in short.\n\nI'm not really a fan of \"saslmode\" or anything else involving SASL\nfor this because we don't *really* do SASL- if we did, we'd have that as\nan auth method and we'd have our Kerberos support be through SASL along\nwith potentially everything else... I'm not against going there but I\ndon't think that's what you were suggesting here.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 9 Aug 2019 09:28:50 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, 2019-08-09 at 09:28 -0400, Stephen Frost wrote:\n> Having an 'any' option, as mentioned before, could be an alternative\n> though.\n\n...\n\n> I agree with the point that there isn't any guarantee that it'll\n> always\n> be clear-cut as to which of two methods is \"better\".\n> \n> From a user perspective, it seems like the main things are \"don't\n> send\n> my password in the clear to the server\", and \"require channel binding\n> to\n> prove there isn't a MITM\". I have to admit that I like the idea of\n> requiring scram to be used and not allowing md5 though.\n\nSo it seems like we are leaning toward:\n\n password_protocol = any | {plaintext,md5,scram-sha-256,scram-sha-\n256-plus}[,...]\n\nOr maybe:\n\n channel_binding = {disable|prefer|require}\n password_plaintext = {disable|enable}\n password_md5 = {disable|enable}\n\nThat seems reasonable. It's three options, but no normal use case would\nneed to set more than two, because channel binding forces scram-sha-\n256-plus.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 09 Aug 2019 08:51:19 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/9/19 11:51 AM, Jeff Davis wrote:\n> On Fri, 2019-08-09 at 09:28 -0400, Stephen Frost wrote:\n>> Having an 'any' option, as mentioned before, could be an alternative\n>> though.\n> \n> ...\n> \n>> I agree with the point that there isn't any guarantee that it'll\n>> always\n>> be clear-cut as to which of two methods is \"better\".\n>>\n>> From a user perspective, it seems like the main things are \"don't\n>> send\n>> my password in the clear to the server\", and \"require channel binding\n>> to\n>> prove there isn't a MITM\". I have to admit that I like the idea of\n>> requiring scram to be used and not allowing md5 though.\n> \n> So it seems like we are leaning toward:\n> \n> password_protocol = any | {plaintext,md5,scram-sha-256,scram-sha-\n> 256-plus}[,...]\n\nFirst, thanks for proposing / working on this, I like the idea! :) Happy\nto test/review.\n\nAs long as this one can handle the current upgrade path that's in place\nfor going from md5 to SCRAM (which AIUI it should) this makes sense to\nme. As stated above, there is a clear hierarchy.\n\nI would almost argue that \"plaintext\" shouldn't even be an option...if\nyou have \"any\" set (arguably default?) then plaintext is available. With\nour currently supported versions + driver ecosystem, I hope no one needs\nto support a forced plaintext setup.\n\n> \n> Or maybe:\n> \n> channel_binding = {disable|prefer|require}\n> password_plaintext = {disable|enable}\n> password_md5 = {disable|enable}\n> \n> That seems reasonable. It's three options, but no normal use case would\n> need to set more than two, because channel binding forces scram-sha-\n> 256-plus.\n\nSeems to be a lot to configure. I'm more of a fan of the previous\nmethod; it'd work nicely with how we've presently defined things and\nshould be easy to put into a DSN/URI/env variable.\n\nThanks,\n\nJonathan",
"msg_date": "Fri, 9 Aug 2019 16:27:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 09/08/2019 23:27, Jonathan S. Katz wrote:\n> On 8/9/19 11:51 AM, Jeff Davis wrote:\n>> On Fri, 2019-08-09 at 09:28 -0400, Stephen Frost wrote:\n>>> Having an 'any' option, as mentioned before, could be an alternative\n>>> though.\n>>\n>> ...\n>>\n>>> I agree with the point that there isn't any guarantee that it'll\n>>> always\n>>> be clear-cut as to which of two methods is \"better\".\n>>>\n>>> From a user perspective, it seems like the main things are \"don't\n>>> send\n>>> my password in the clear to the server\", and \"require channel binding\n>>> to\n>>> prove there isn't a MITM\". I have to admit that I like the idea of\n>>> requiring scram to be used and not allowing md5 though.\n>>\n>> So it seems like we are leaning toward:\n>>\n>> password_protocol = any | {plaintext,md5,scram-sha-256,scram-sha-\n>> 256-plus}[,...]\n> \n> First, thanks for proposing / working on this, I like the idea! :) Happy\n> to test/review.\n> \n> As long as this one can handle the current upgrade path that's in place\n> for going from md5 to SCRAM (which AIUI it should) this makes sense to\n> me. As stated above, there is a clear hierarchy.\n> \n> I would almost argue that \"plaintext\" shouldn't even be an option...if\n> you have \"any\" set (arguably default?) then plaintext is available. With\n> our currently supported versions + driver ecosystem, I hope no one needs\n> to support a forced plaintext setup.\n\nKeep in mind that RADIUS, LDAP and PAM authentication methods are \n'plaintext' over the wire. It's not that bad, when used with \nsslmode=verify-ca/full.\n\n>> Or maybe:\n>>\n>> channel_binding = {disable|prefer|require}\n>> password_plaintext = {disable|enable}\n>> password_md5 = {disable|enable}\n>>\n>> That seems reasonable. It's three options, but no normal use case would\n>> need to set more than two, because channel binding forces scram-sha-\n>> 256-plus.\n> \n> Seems to be a lot to configure. 
I'm more of a fan of the previous\n> method; it'd work nicely with how we've presently defined things and\n> should be easy to put into a DSN/URI/env variable.\n\nThis is a multi-dimensional problem. \"channel_binding=require\" is one \nway to prevent MITM attacks, but sslmode=verify-ca is another. (Does \nKerberos also prevent MITM?) Or you might want to enable plaintext \npasswords over SSL, but not without SSL.\n\nI think we'll need something like the 'ssl_ciphers' GUC, where you can \nchoose from a few reasonable default rules, but also enable/disable \nspecific methods:\n\n# anything goes (the default)\nauth_methods = 'ANY'\n\n# Disable plaintext password authentication. Anything else is accepted.\nauth_methods = '-password'\n\n# Only authentication methods that protect from\n# Man-in-the-Middle attacks. This allows anything if the server's SSL\n# certificate can be verified, and otherwise only SCRAM with\n# channel binding\nauth_methods = 'MITM'\n\n# The same, but disable plaintext passwords and md5 altogether\nauth_methods = 'MITM, -password, -md5'\n\n\nI'm tempted to also allow 'SSL' and 'SSL-verify-full' as part of the \nsame string, so that you could configure everything related to \nconnection security in the same option. Not sure if that would make \nthings simpler for users, or create more confusion.\n\n- Heikki\n\n\n",
"msg_date": "Sat, 10 Aug 2019 00:17:50 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, 2019-08-09 at 16:27 -0400, Jonathan S. Katz wrote:\n> Seems to be a lot to configure. I'm more of a fan of the previous\n> method; it'd work nicely with how we've presently defined things and\n> should be easy to put into a DSN/URI/env variable.\n\nProposals on the table:\n\n1. Hierarchical semantics, where you specify the least-secure\nacceptable method:\n\n password_protocol = {any,md5,scram-sha-256,scram-sha-256-plus}\n\n2. Comma-list approach, where you specify exactly which protocols are\nacceptable, or \"any\" to mean that we don't care.\n\n3. three-setting approach:\n\n channel_binding = {disable|prefer|require}\n password_plaintext = {disable|enable}\n password_md5 = {disable|enable}\n\nIt looks like Jonathan prefers #1.\n\n#1 seems direct and clearly applies today, and corresponds to auth\nmethods on the server side.\n\nI'm not a fan of #2, it seems likely to result in a bunch of clients\nwith overly-specific lists of things with long names that can never\nreally go away.\n\n#3 is a little more abstract, but also seems more future-proof, and may\ntie in to what Stephen is talking about with respect to controlling\nauth methods from the client, or moving more protocols within SASL.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 09 Aug 2019 14:56:28 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Sat, 2019-08-10 at 00:17 +0300, Heikki Linnakangas wrote:\n> This is a multi-dimensional problem. \"channel_binding=require\" is\n> one \n> way to prevent MITM attacks, but sslmode=verify-ca is another. (Does \n> Kerberos also prevent MITM?) Or you might want to enable plaintext \n> passwords over SSL, but not without SSL.\n> \n> I think we'll need something like the 'ssl_ciphers' GUC, where you\n> can \n> choose from a few reasonable default rules, but also enable/disable \n> specific methods:\n\n..\n\n> auth_methods = 'MITM, -password, -md5'\n\nKeep in mind this is client configuration, so something reasonable in\npostgresql.conf might not be so reasonable in the form:\n\npostgresql://foo:secret@myhost/mydb?auth_methods=MITM%2C%20-\npassword%2C%20-md5\n\nAnother thing to consider is that there's less control configuring on\nthe client than on the server. The server will send at most one\nauthentication request based on its own rules, and all the client can\ndo is either answer it, or disconnect. And the SSL stuff all happens\nbefore that, and won't use an authentication request message at all.\n\nSome protocols allow negotiation within them, like SASL, which gives\nthe client a bit more freedom. But FE/BE doesn't allow for arbitrary\nsubsets of authentication methods to be negotiated between client and\nserver, so I'm worried trying to express it that way will just lead to\nclients that break when you upgrade your server.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 09 Aug 2019 16:54:14 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Sat, 2019-08-10 at 00:17 +0300, Heikki Linnakangas wrote:\n> > auth_methods = 'MITM, -password, -md5'\n> \n> Keep in mind this is client configuration, so something reasonable in\n> postgresql.conf might not be so reasonable in the form:\n\nYeah, that's a really good point.\n\n> postgresql://foo:secret@myhost/mydb?auth_methods=MITM%2C%20-\n> password%2C%20-md5\n> \n> Another thing to consider is that there's less control configuring on\n> the client than on the server. The server will send at most one\n> authentication request based on its own rules, and all the client can\n> do is either answer it, or disconnect. And the SSL stuff all happens\n> before that, and won't use an authentication request message at all.\n\nNote that GSSAPI Encryption works the same as SSL in this regard.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 9 Aug 2019 20:03:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, 9 Aug 2019 at 11:00, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Aug 08, 2019 at 03:38:20PM -0700, Jeff Davis wrote:\n> > Libpq doesn't have a way to control which password protocols are used.\n> > For example, the client might expect the server to be using SCRAM, but\n> > it actually ends up using plain password authentication instead.\n>\n> Thanks for working on this!\n>\n> > I'm not 100% happy with the name \"password_protocol\", but other names I\n> > could think of seemed likely to cause confusion.\n>\n> What about auth_protocol then? It seems to me that it could be useful\n> to have the restriction on AUTH_REQ_MD5 as well.\n>\n> > Sets the least-secure password protocol allowable when using password\n> > authentication. Options are: \"plaintext\", \"md5\", \"scram-sha-256\", or\n> > \"scram-sha-256-plus\".\n>\n> This makes it sound like there is a linear hierarchy among all those\n> protocols, which is true in this case, but if the list of supported\n> protocols is extended in the future it may be not.\n>\n\nBefore we go too far along with this, lets look at how other established\nprotocols do things and the flaws that've been discovered in their\napproaches. If this isn't done with extreme care then there's a large risk\nof negating the benefits offered by adopting recent things like SCRAM.\nFrankly I kind of wish we could just use SASL, but there are many (many)\nreasons no to.\n\nOff the top of my head I can think of these risks:\n\n* Protocols that allow naïve pre-auth client/server auth negotiation (e.g.\nby finding the overlap in exchanged supported auth-mode lists) are subject\nto MiTM downgrade attacks where the attacker filters out protocols it\ncannot intercept and break from the proposed alternatives.\n\n* Protocols that specify a hierarchy tend to be inflexible and result in\nhard to predict auth mode selections as the options grow. 
If my app wants\nGSSAPI or SuperStrongAuth but doesn't accept SSPI, and the hierarchy is\nGSSAPI > SSPI > SuperStrongAuth, it has to fall back to a disconnect and\nretry model like now.\n\n* Protocols that announce supported auth methods before any kind of trust\nis established make life easier for vulnerability scanners and worms\n\nand I'm sure there are more when it comes to auth handshakes.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n",
"msg_date": "Sat, 10 Aug 2019 10:24:25 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Sat, 2019-08-10 at 10:24 +0800, Craig Ringer wrote:\n> Before we go too far along with this, lets look at how other\n> established protocols do things and the flaws that've been discovered\n> in their approaches. If this isn't done with extreme care then\n> there's a large risk of negating the benefits offered by adopting\n> recent things like SCRAM.\n\nAgreed. I'm happy to hear any proposals better informed by history.\n\n> Frankly I kind of wish we could just use SASL, but there are many\n> (many) reasons no to.\n\nI'm curious what the reasons are not to use SASL; do you have a\nreference?\n\n> Off the top of my head I can think of these risks:\n> \n> * Protocols that allow naïve pre-auth client/server auth negotiation\n> (e.g. by finding the overlap in exchanged supported auth-mode lists)\n> are subject to MiTM downgrade attacks where the attacker filters out\n> protocols it cannot intercept and break from the proposed\n> alternatives.\n\nWe already have the downgrade problem. That's what I'm trying to solve.\n\n> * Protocols that specify a hierarchy tend to be inflexible and result\n> in hard to predict auth mode selections as the options grow. If my\n> app wants GSSAPI or SuperStrongAuth but doesn't accept SSPI, and the\n> hierarchy is GSSAPI > SSPI > SuperStrongAuth, it has to fall back to\n> a disconnect and retry model like now.\n\nWhat do you mean \"disconnect and retry model\"?\n\nI agree that hierarchies are unwieldy as the options grow. 
Then again,\nas options grow, we need new versions of the client to support them,\nand those new versions might offer more flexible ways to choose between\nthem.\n\nOf course, we should try to think ahead to avoid needing to constantly\nchange client connection syntax, but I'm just pointing out that it's\nnot a one-way door.\n\n> * Protocols that announce supported auth methods before any kind of\n> trust is established make life easier for vulnerability scanners and\n> worms\n\nThis discussion is about the client so I don't see how vulnerability\nscanners are relevant.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n",
"msg_date": "Fri, 09 Aug 2019 22:03:39 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/9/19 7:54 PM, Jeff Davis wrote:\n> On Sat, 2019-08-10 at 00:17 +0300, Heikki Linnakangas wrote:\n>> This is a multi-dimensional problem. \"channel_binding=require\" is\n>> one \n>> way to prevent MITM attacks, but sslmode=verify-ca is another. (Does \n>> Kerberos also prevent MITM?) Or you might want to enable plaintext \n>> passwords over SSL, but not without SSL.\n>>\n>> I think we'll need something like the 'ssl_ciphers' GUC, where you\n>> can \n>> choose from a few reasonable default rules, but also enable/disable \n>> specific methods:\n> \n> ..\n> \n>> auth_methods = 'MITM, -password, -md5'\n> \n> Keep in mind this is client configuration, so something reasonable in\n> postgresql.conf might not be so reasonable in the form:\n> \n> postgresql://foo:secret@myhost/mydb?auth_methods=MITM%2C%20-\n> password%2C%20-md5\n\nYeah, and I do agree it is a multi-dimensional problem, but the context\nin which I gave my opinion was for the password authentication methods\nthat PostgreSQL supports natively, i.e. not requiring a 3rd party to\narbitrate via GSSAPI, LDAP etc.\n\nThat said, I dove into the code a bit more to look at the behavior\nspecifically with LDAP, which as described does send back a request for\n\"AuthenticationCleartextPassword\"\n\nIf we go with the client sending up a \"password_protocol\" that is not\nplaintext, and the server only provides LDAP authentication, does the\nclient close the connection? I would say yes.\n\n(And as such, I would also consider adding \"plaintext\" back to the list,\njust to have the explicit option).\n\nThe other question I have is that do we have it occur in the\nhierarchical manner, i.e. \"md5 or better?\" I would also say yes to that,\nwe would just need to clearly document that. Perhaps we adopt a similar\nname to \"sslmode\" e.g. \"password_protocol_mode\" but that can be debated :)\n\n> Another thing to consider is that there's less control configuring on\n> the client than on the server. 
The server will send at most one\n> authentication request based on its own rules, and all the client can\n> do is either answer it, or disconnect. And the SSL stuff all happens\n> before that, and won't use an authentication request message at all.\n\nYes. Using the LDAP example above, the client also needs some general\nawareness of how it can connect to the server, e.g. \"You may want\nscram-sha-256 but authentication occurs over LDAP, so don't stop\nrequesting scram-sha-256!\" That said, part of that is a human problem:\nit's up to the server administrator to inform clients how they can\nconnect to PostgreSQL.\n\n> Some protocols allow negotiation within them, like SASL, which gives\n> the client a bit more freedom. But FE/BE doesn't allow for arbitrary\n> subsets of authentication methods to be negoitated between client and\n> server, so I'm worried trying to express it that way will just lead to\n> clients that break when you upgrade your server.\n\nAgreed. I see this as a way of a client saying \"Hey, I really want to\nauthenticate with scram-sha-256 or better, so if you don't let me do\nthat, I'm out.\" In addition to ensuring it uses the client's desired\npassword protocol, this could be helpful for testing that the\nappropriate authentication rules are set in a server, e.g. one that is\nrolling out SCRAM authentication.\n\nAnd as Heikki mentions, there are other protections a client can use,\ne.g. verify-ca/full, to guard against eavesdropping, MITM etc.\n\nJonathan",
"msg_date": "Sat, 10 Aug 2019 11:08:46 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 2019-08-09 23:56, Jeff Davis wrote:\n> 1. Hierarchical semantics, where you specify the least-secure\n> acceptable method:\n> \n> password_protocol = {any,md5,scram-sha-256,scram-sha-256-plus}\n\nWhat would the hierarchy be if scram-sha-512 and scram-sha-512-plus are\nadded?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 11 Aug 2019 19:00:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/11/19 1:00 PM, Peter Eisentraut wrote:\n> On 2019-08-09 23:56, Jeff Davis wrote:\n>> 1. Hierarchical semantics, where you specify the least-secure\n>> acceptable method:\n>>\n>> password_protocol = {any,md5,scram-sha-256,scram-sha-256-plus}\n> \n> What would the hierarchy be if scram-sha-512 and scram-sha-512-plus are\n> added?\n\npassword_protocol =\n{any,md5,scram-sha-256,scram-sha-512,scram-sha-256-plus,scram-sha-512-plus}?\n\nI'd put one length of digest over another, but I'd still rank a method\nthat uses channel binding as having more protections than one that does not.\n\nJonathan",
"msg_date": "Sun, 11 Aug 2019 15:46:46 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 2019-08-11 21:46, Jonathan S. Katz wrote:\n> On 8/11/19 1:00 PM, Peter Eisentraut wrote:\n>> On 2019-08-09 23:56, Jeff Davis wrote:\n>>> 1. Hierarchical semantics, where you specify the least-secure\n>>> acceptable method:\n>>>\n>>> password_protocol = {any,md5,scram-sha-256,scram-sha-256-plus}\n>>\n>> What would the hierarchy be if scram-sha-512 and scram-sha-512-plus are\n>> added?\n> \n> password_protocol =\n> {any,md5,scram-sha-256,scram-sha-512,scram-sha-256-plus,scram-sha-512-plus}?\n> \n> I'd put one length of digest over another, but I'd still rank a method\n> that uses channel binding has more protections than one that does not.\n\nSure, but the opposite opinion is also possible.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 11 Aug 2019 21:56:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/11/19 3:56 PM, Peter Eisentraut wrote:\n> On 2019-08-11 21:46, Jonathan S. Katz wrote:\n>> On 8/11/19 1:00 PM, Peter Eisentraut wrote:\n>>> On 2019-08-09 23:56, Jeff Davis wrote:\n>>>> 1. Hierarchical semantics, where you specify the least-secure\n>>>> acceptable method:\n>>>>\n>>>> password_protocol = {any,md5,scram-sha-256,scram-sha-256-plus}\n>>>\n>>> What would the hierarchy be if scram-sha-512 and scram-sha-512-plus are\n>>> added?\n>>\n>> password_protocol =\n>> {any,md5,scram-sha-256,scram-sha-512,scram-sha-256-plus,scram-sha-512-plus}?\n>>\n>> I'd put one length of digest over another, but I'd still rank a method\n>> that uses channel binding has more protections than one that does not.\n> \n> Sure, but the opposite opinion is also possible.\n\nThat's true, and when I originally started composing my note I had it as\n(256,512,256-plus,512-plus).\n\nBut upon further reflection, the reason I ranked the digest-plus methods\nabove the digest methods is that there is an additional requirement\nimposed by them. The digest methods could be invoked either with/without\nTLS, whereas the digest-plus methods *must* use TLS. As such, 256-plus\nis explicitly asking for an additional security parameter over 512, i.e.\ntransmission over TLS, so even if it's a smaller digest, it has the\nadditional channel binding requirement.\n\nJonathan",
"msg_date": "Mon, 12 Aug 2019 10:30:33 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Sun, 2019-08-11 at 19:00 +0200, Peter Eisentraut wrote:\n> On 2019-08-09 23:56, Jeff Davis wrote:\n> > 1. Hierarchical semantics, where you specify the least-secure\n> > acceptable method:\n> > \n> > password_protocol = {any,md5,scram-sha-256,scram-sha-256-plus}\n> \n> What would the hierarchy be if scram-sha-512 and scram-sha-512-plus\n> are\n> added?\n\n\nhttps://postgr.es/m/daf0017a1a5c2caabf88a4e00f66b4fcbdfeccad.camel%40j-davis.com\n\nThe weakness of proposal #1 is that it's not very \"future-proof\" and we\nwould likely need to change something about it later when we support\nnew methods. That wouldn't break clients, but it would be annoying to\nneed to support some old syntax and some new syntax for the connection\nparameters.\n\nProposal #3 does not have this weakness. When we add sha-512, we could\nalso add a parameter to specify that the client requires a certain hash\nalgorithm for SCRAM.\n\nDo you favor that existing proposal #3, or are you proposing a fourth\noption?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 12 Aug 2019 09:02:50 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 2019-08-12 18:02, Jeff Davis wrote:\n> https://postgr.es/m/daf0017a1a5c2caabf88a4e00f66b4fcbdfeccad.camel%40j-davis.com\n> \n> The weakness of proposal #1 is that it's not very \"future-proof\" and we\n> would likely need to change something about it later when we support\n> new methods. That wouldn't break clients, but it would be annoying to\n> need to support some old syntax and some new syntax for the connection\n> parameters.\n> \n> Proposal #3 does not have this weakness. When we add sha-512, we could\n> also add a parameter to specify that the client requires a certain hash\n> algorithm for SCRAM.\n> \n> Do you favor that existing proposal #3, or are you proposing a fourth\n> option?\n\nIn this context, I would prefer #2, but I would expand that to cover all\nauthentication methods, not only password methods.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 12 Aug 2019 19:05:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-08-12 18:02, Jeff Davis wrote:\n> > https://postgr.es/m/daf0017a1a5c2caabf88a4e00f66b4fcbdfeccad.camel%40j-davis.com\n> > \n> > The weakness of proposal #1 is that it's not very \"future-proof\" and we\n> > would likely need to change something about it later when we support\n> > new methods. That wouldn't break clients, but it would be annoying to\n> > need to support some old syntax and some new syntax for the connection\n> > parameters.\n> > \n> > Proposal #3 does not have this weakness. When we add sha-512, we could\n> > also add a parameter to specify that the client requires a certain hash\n> > algorithm for SCRAM.\n> > \n> > Do you favor that existing proposal #3, or are you proposing a fourth\n> > option?\n> \n> In this context, I would prefer #2, but I would expand that to cover all\n> authentication methods, not only password methods.\n\nI'm not really thrilled with approach #2 because it means the user\nwill have to know which of the PG authentication methods involve, eg,\nsending the password in the clear to the server, and which don't, if\nwhat they're really looking for is \"don't send my password in the clear\nto the server\" which seems like a really useful and sensible thing to\nask for.\n\nIt also ends up not being very future-proof either, since a user who is\nfine with scram-sha-256-plus will probably also be ok with\nscram-sha-512-plus, should we ever implement it.\n\nNot to mention that, at least at the moment, we don't let users pick\nauthentication methods with that kind of specificity on the server side\n(how do you require channel binding..?), so the set of \"authentication\nmethods\" on the client side and those on the server side end up being\ndifferent sets, which strikes me as awfully confusing...\n\nOr did I misunderstand what you were suggesting here wrt \"all\nauthentication methods\"?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 12 Aug 2019 13:14:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I'm not really thrilled with approach #2 because it means the user\n> will have to know which of the PG authentication methods involve, eg,\n> sending the password in the clear to the server, and which don't, if\n> what they're really looking for is \"don't send my password in the clear\n> to the server\" which seems like a really useful and sensible thing to\n> ask for.\n\nWhat problem do we actually need to solve here?\n\nIf the known use-case is just \"don't send my password in the clear\",\nmaybe we should just change libpq to refuse to do that, ie reject\nplain-password auth methods unless SSL is on (except maybe over\nunix sockets?). Or invent a bool connection option that enables\nexactly that.\n\nI'm not really convinced that there is a use-case for client side\nspecification of allowed auth methods beyond that. In the end,\nif you don't trust the server you're connecting to to handle your\npassword with reasonable safety, you have got bigger problems than\nthis one. And we already have coverage for MITM problems (or if\nwe don't, this sideshow isn't fixing it).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Aug 2019 13:26:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I'm not really thrilled with approach #2 because it means the user\n> > will have to know which of the PG authentication methods involve, eg,\n> > sending the password in the clear to the server, and which don't, if\n> > what they're really looking for is \"don't send my password in the clear\n> > to the server\" which seems like a really useful and sensible thing to\n> > ask for.\n> \n> What problem do we actually need to solve here?\n> \n> If the known use-case is just \"don't send my password in the clear\",\n> maybe we should just change libpq to refuse to do that, ie reject\n> plain-password auth methods unless SSL is on (except maybe over\n> unix sockets?). Or invent a bool connection option that enables\n> exactly that.\n\nRight, inventing a bool connection option for it was discussed up-thread and\nseems like a reasonable idea to me (note: we should absolutely allow the\nuser to refuse to send the password to the server even over SSL, if they\nwould prefer to not do so).\n\n> I'm not really convinced that there is a use-case for client side\n> specification of allowed auth methods beyond that. In the end,\n> if you don't trust the server you're connecting to to handle your\n> password with reasonable safety, you have got bigger problems than\n> this one. 
And we already have coverage for MITM problems (or if\n> we don't, this sideshow isn't fixing it).\n\nUh, no, we really don't have MITM protection in certain cases, which is\nexactly what channel-binding is intended to address, but we can't have\nthe \"server\" be able to say \"oh, well, I don't support channel binding\"\nand have the client go \"oh, ok, that's just fine, we won't use it then\"-\nthat's a downgrade attack.\n\nWhat was suggested up-thread to deal with that downgrade risk was a clear\nconnection option along the lines of \"require channel binding\" to\nprevent that kind of a MITM downgrade attack from working.\n\nI could possibly see some value in a client-side option along the lines\nof \"only authenticate using GSSAPI\", which could prevent some users from\nbeing fooled into sending their PW to a malicious server. GSSAPI does\nprevent MITM attacks (as much as it's able to anyway- each key is\nspecific to a particular server, so you'd have to have the specific\nserver's key in order to become a MITM), but if the server says \"we\ndon't do GSSAPI, we do password, please give us your password\" then psql\nwill happily go along with that even in an otherwise properly set up\nGSSAPI environment.\n\nRequiring GSSAPI Encryption on the client side should prevent that\nthough, as of v12, since psql will just refuse if the server claims to\nnot support GSSAPI Encryption.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 12 Aug 2019 13:37:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 2019-08-12 19:26, Tom Lane wrote:\n> What problem do we actually need to solve here?\n> \n> If the known use-case is just \"don't send my password in the clear\",\n> maybe we should just change libpq to refuse to do that, ie reject\n> plain-password auth methods unless SSL is on (except maybe over\n> unix sockets?). Or invent a bool connection option that enables\n> exactly that.\n\nThere are several overlapping problems:\n\n1) A downgrade attack by a malicious server. The server can collect\npasswords from unsuspecting clients by just requesting some weak\nauthentication like plain-text or md5. This can currently be worked\naround by using SSL with server verification, except when considering\nthe kind of attack that channel binding is supposed to address.\n\n2) A downgrade attack to evade channel binding. This cannot currently\nbe worked around.\n\n3) A user not wanting to expose a weakly hashed password to the\n(otherwise trusted) server. This cannot currently be done.\n\n4) A user not wanting to send a password in plain text over the wire.\nThis can currently be done by requiring SSL.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 12 Aug 2019 19:46:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, Aug 09, 2019 at 09:28:50AM -0400, Stephen Frost wrote:\n> I don't really care for auth_protocol as that's pretty close to\n> \"auth_method\" and that isn't what we're talking about here- this isn't\n> the user picking the auth method, per se, but rather saying which of the\n> password-based mechanisms for communicating that the user knows the\n> password is acceptable. Letting users choose which auth methods are\n> allowed might also be interesting (as in- we are in a Kerberized\n> environment and therefore no client should ever be using any auth method\n> except GSS, could be a reasonable ask) but it's not the same thing.\n>\n> What restriction are you suggesting here wrt krb5..?\n\nWhat I suggested in this previous set of emails is if it would make\nsense to extend what libpq can restrict at authentication time to not\nonly be password-based authentication methods, but also if we could\nhave a connection parameter allowing us to say \"please I want krb5/gss\nand nothing else\". My point is that password-based authentication is\nonly one portion of the problem as what we are looking at is applying\na filtering on AUTH_REQ messages that libpq receives from the server\n(SCRAM with and without channel binding is an exception as that's\nhandled as part of the SASL set of messages), but at a high level we\nare going to need a filtering of the first authentication message\nreceived anyway.\n\nBut that's also basically what you outline in this previous paragraph\nof yours.\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 11:53:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 07:05:08PM +0200, Peter Eisentraut wrote:\n> In this context, I would prefer #2, but I would expand that to cover all\n> authentication methods, not only password methods.\n\nI tend to prefer #2 as well and that's the kind of approach we were\ntending to agree on when we discussed this issue during the v11 beta\nfor the downgrade issues with libpq. And as you say extend it so as\nwe can apply filtering of more AUTH_REQ requests, inclusing GSS and\nkrb5.\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 11:56:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Tue, 2019-08-13 at 11:56 +0900, Michael Paquier wrote:\n> I tend to prefer #2 as well and that's the kind of approach we were\n> tending to agree on when we discussed this issue during the v11 beta\n> for the downgrade issues with libpq. And as you say extend it so as\n> we can apply filtering of more AUTH_REQ requests, inclusing GSS and\n> krb5.\n\nCan you please offer a concrete proposal? I know the proposals I've put\nout aren't perfect (otherwise there wouldn't be three of them), so if\nyou have something better, please share.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 13 Aug 2019 09:25:53 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/13/19 12:25 PM, Jeff Davis wrote:\n> On Tue, 2019-08-13 at 11:56 +0900, Michael Paquier wrote:\n>> I tend to prefer #2 as well and that's the kind of approach we were\n>> tending to agree on when we discussed this issue during the v11 beta\n>> for the downgrade issues with libpq. And as you say extend it so as\n>> we can apply filtering of more AUTH_REQ requests, inclusing GSS and\n>> krb5.\n> \n> Can you please offer a concrete proposal? I know the proposals I've put\n> out aren't perfect (otherwise there wouldn't be three of them), so if\n> you have something better, please share.\n\nI think all of them get at the same thing, i.e. specifying which\npassword protocol you want to use, and a lot of it is a matter of how\nmuch onus we want to put on the user.\n\nBack to the thee proposals[1], I've warmed up to #3 a bit. I do think it\nputs more onus on the client to set the correct knobs to get the desired\noutcome, but what I like is the specific `channel_binding=require`\nattribute.\n\nHowever, I don't think it's completely future proof to adding a new hash\ndigest. If we wanted to prevent someone from using scram-sha-256 in a\nscram-sha-512 world, we'd likely need an option for that.\n\nAlternatively, we could combine 2 & 3, e.g.:\n\n channel_binding = {disable|prefer|require}\n\n # comma-separated list of protocols that are ok to the user, remove\n # ones you don't want. empty means all is ok\n password_protocol = \"plaintext,md5,scram-sha-256,scram-sha-256-plus\"\n\nIf the client selects \"channel_binding=require\" but does not include a\nprotocol that supports it, we should error. 
Likewise, if the client does\nsomething like \"channel_binding=require\" and\n\"password_protocol=scram-sha-256,scram-sha-256-plus\" but the server\nrefuses to do channel binding, we should error.\n\nI think this gives us both future-proofing against newer password digest\nmethods + the fix for the downgrade issue.\n\nI would not be opposed to extending \"password_protocol\" to read\n\"auth_protocol\" or the like and work for everything covered in AUTH_REQ,\nbut I would need to think about it some more.\n\nThanks,\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/daf0017a1a5c2caabf88a4e00f66b4fcbdfeccad.camel%40j-davis.com",
"msg_date": "Tue, 13 Aug 2019 16:51:57 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Tue, 2019-08-13 at 16:51 -0400, Jonathan S. Katz wrote:\n> Alternatively, we could combine 2 & 3, e.g.:\n> \n> channel_binding = {disable|prefer|require}\n> \n> # comma-separated list of protocols that are ok to the user, remove\n> # ones you don't want. empty means all is ok\n> password_protocol = \"plaintext,md5,scram-sha-256,scram-sha-256-\n> plus\"\n\nI still feel like lists are over-specifying things. Let me step back\nand offer an MVP of a single new parameter:\n\n channel_binding={prefer|require}\n\nAnd has a lot of benefits:\n * solves the immediate need to make channel binding useful, which\nis a really nice feature\n * compatible with most of the other proposals we're considering, so\nwe can always extend it when we have a better understanding and\nconsensus\n * clear purpose for the user\n * doesn't introduce new concepts that might be confusing to the\nuser, like SASL or the use of \"-plus\" to mean \"with channel binding\"\n * guides users toward the good practice of using SSL and SCRAM\n * simple to implement\n\nThe other use cases are less clear to me, and seem less urgent.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 13 Aug 2019 15:25:06 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/13/19 6:25 PM, Jeff Davis wrote:\n> On Tue, 2019-08-13 at 16:51 -0400, Jonathan S. Katz wrote:\n>> Alternatively, we could combine 2 & 3, e.g.:\n>>\n>> channel_binding = {disable|prefer|require}\n>>\n>> # comma-separated list of protocols that are ok to the user, remove\n>> # ones you don't want. empty means all is ok\n>> password_protocol = \"plaintext,md5,scram-sha-256,scram-sha-256-\n>> plus\"\n> \n> I still feel like lists are over-specifying things. Let me step back\n> and offer an MVP of a single new parameter:\n> \n> channel_binding={prefer|require}\n> \n> And has a lot of benefits:\n> * solves the immediate need to make channel binding useful, which\n> is a really nice feature\n> * compatible with most of the other proposals we're considering, so\n> we can always extend it when we have a better understanding and\n> consensus\n> * clear purpose for the user\n> * doesn't introduce new concepts that might be confusing to the\n> user, like SASL or the use of \"-plus\" to mean \"with channel binding\"\n> * guides users toward the good practice of using SSL and SCRAM\n> * simple to implement\n\n+1; I agree with your overall argument. The only thing I debate is if we\nwant to have an explicit \"disable\" option. Looking at the negotiation\nstandard[1] specified for channel binding with SCRAM, I don't think we\nneed an explicit disable option. I can't think of any good use cases for\n\"disable\" off the top of my head either. The only thing is it would be\nconsistent with some of our other parameters in terms of having an\nexplicit \"opt-out.\"\n\nJonathan\n\n[1] https://tools.ietf.org/html/rfc5802#section-6",
"msg_date": "Tue, 13 Aug 2019 19:04:07 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
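The prefer/require (and possibly disable) setting discussed in the last two messages can be sketched as a tiny validator. This is only an illustration of the proposal, not actual libpq code; the enum and function names here are invented:

```c
#include <string.h>

typedef enum
{
	CB_DISABLE,
	CB_PREFER,
	CB_REQUIRE,
	CB_INVALID
} ChannelBindingMode;

/*
 * Map a connection-parameter string to a channel-binding mode.
 * An unset value defaults to "prefer", matching the discussion;
 * a typo is rejected rather than being silently treated as "disable".
 */
static ChannelBindingMode
parse_channel_binding(const char *value)
{
	if (value == NULL || *value == '\0')
		return CB_PREFER;
	if (strcmp(value, "disable") == 0)
		return CB_DISABLE;
	if (strcmp(value, "prefer") == 0)
		return CB_PREFER;
	if (strcmp(value, "require") == 0)
		return CB_REQUIRE;
	return CB_INVALID;			/* unknown setting */
}
```

With this shape, adding the explicit "disable" Jonathan is unsure about costs one branch, so the symmetry argument is cheap to satisfy.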
{
"msg_contents": "On Tue, Aug 13, 2019 at 04:51:57PM -0400, Jonathan S. Katz wrote:\n> On 8/13/19 12:25 PM, Jeff Davis wrote:\n>> On Tue, 2019-08-13 at 11:56 +0900, Michael Paquier wrote:\n>>> I tend to prefer #2 as well and that's the kind of approach we were\n>>> tending to agree on when we discussed this issue during the v11 beta\n>>> for the downgrade issues with libpq. And as you say extend it so as\n>>> we can apply filtering of more AUTH_REQ requests, inclusing GSS and\n>>> krb5.\n>> \n>> Can you please offer a concrete proposal? I know the proposals I've put\n>> out aren't perfect (otherwise there wouldn't be three of them), so if\n>> you have something better, please share.\n> \n> I think all of them get at the same thing, i.e. specifying which\n> password protocol you want to use, and a lot of it is a matter of how\n> much onus we want to put on the user.\n\nWhat I got in mind was a comma-separated list of authorized protocols\nwhich can be specified as a connection parameter, which extends to all\nthe types of AUTH_REQ requests that libpq can understand, plus an\nextra for channel binding. I also liked the idea mentioned upthread\nof \"any\" to be an alias to authorize everything, which should be the\ndefault. So you basically get at that:\nauth_protocol = {any,password,md5,scram-sha-256,scram-sha-256-plus,krb5,gss,sspi}\n\nSo from an implementation point of view, just using bit flags would\nmake things rather clean.\n\n> Back to the thee proposals[1], I've warmed up to #3 a bit. I do think it\n> puts more onus on the client to set the correct knobs to get the desired\n> outcome, but what I like is the specific `channel_binding=require`\n> attribute.\n\nI could see a point in separating the channel binding part into a\nsecond parameter though. 
We don't have (at least yet) an hba option\nto allow only channel binding with scram, so a one-one mapping with\nthe elements of the connection parameter brings some consistency.\n\n> If the client selects \"channel_binding=require\" but does not include a\n> protocol that supports it, we should error.\n\nYep.\n\n> Likewise, if the client does\n> something like \"channel_binding=require\" and\n> \"password_protocol=scram-sha-256,scram-sha-256-plus\" but the server\n> refuses to do channel binding, we should error.\n\nIf using a second parameter to control channel binding requirement, I\ndon't think that there is any point in keeping scram-sha-256-plus in\npassword_protocol.\n\n> I would not be opposed to extending \"password_protocol\" to read\n> \"auth_protocol\" or the like and work for everything covered in AUTH_REQ,\n> but I would need to think about it some more.\n\nAnd for this one I would like to push for not only having\npassword-based methods considered.\n--\nMichael",
"msg_date": "Wed, 14 Aug 2019 11:38:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
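Michael's comma-separated auth_protocol list maps naturally onto bit flags. A rough sketch of what that parsing could look like — the parameter and the flag names follow the proposal above (which was never committed in this form) and are not actual libpq code:

```c
#include <string.h>

#define AUTH_FLAG_PASSWORD		(1u << 0)
#define AUTH_FLAG_MD5			(1u << 1)
#define AUTH_FLAG_SCRAM			(1u << 2)
#define AUTH_FLAG_SCRAM_PLUS	(1u << 3)
#define AUTH_FLAG_ANY			(~0u)

/* true if the next list element (of length len) equals name */
static int
proto_match(const char *s, size_t len, const char *name)
{
	return strlen(name) == len && strncmp(s, name, len) == 0;
}

/*
 * Parse a comma-separated auth_protocol value into a bitmask.
 * Returns 0 if any element is unrecognized, so a misspelled
 * protocol name fails loudly instead of narrowing the set.
 */
static unsigned int
parse_auth_protocol(const char *value)
{
	unsigned int flags = 0;

	while (*value)
	{
		size_t		len = strcspn(value, ",");

		if (proto_match(value, len, "any"))
			flags |= AUTH_FLAG_ANY;
		else if (proto_match(value, len, "password"))
			flags |= AUTH_FLAG_PASSWORD;
		else if (proto_match(value, len, "md5"))
			flags |= AUTH_FLAG_MD5;
		else if (proto_match(value, len, "scram-sha-256"))
			flags |= AUTH_FLAG_SCRAM;
		else if (proto_match(value, len, "scram-sha-256-plus"))
			flags |= AUTH_FLAG_SCRAM_PLUS;
		else
			return 0;			/* unknown protocol name */

		value += len;
		if (*value == ',')
			value++;
	}
	return flags;
}
```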
{
"msg_contents": "On Wed, 2019-08-14 at 11:38 +0900, Michael Paquier wrote:\n> What I got in mind was a comma-separated list of authorized protocols\n> which can be specified as a connection parameter, which extends to\n> all\n> the types of AUTH_REQ requests that libpq can understand, plus an\n> extra for channel binding. I also liked the idea mentioned upthread\n> of \"any\" to be an alias to authorize everything, which should be the\n> default. So you basically get at that:\n> auth_protocol = {any,password,md5,scram-sha-256,scram-sha-256-\n> plus,krb5,gss,sspi}\n\nWhat about something corresponding to the auth methods \"trust\" and\n\"cert\", where no authentication request is sent? That's a funny case,\nbecause the server trusts the client; but that doesn't imply that the\nclient trusts the server.\n\nThis is another reason I don't really like the list. It's impossible to\nmake it cleanly map to the auth methods, and there are a few ways it\ncould be confusing to the users.\n\nGiven that we all pretty much agree on the need for the separate\nchannel_binding param, the question is whether we want to (a) address\nadditional use cases with specific parameters that also justify\nthemselves; or (b) have a generic list that is supposed to solve many\nfuture use cases.\n\nI vote (a). With (b), the generic list is likely to cause more\nconfusion, ugliness, and clients that break needlessly in the future.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 14 Aug 2019 16:55:17 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Wed, 2019-08-14 at 11:38 +0900, Michael Paquier wrote:\n> > What I got in mind was a comma-separated list of authorized protocols\n> > which can be specified as a connection parameter, which extends to\n> > all\n> > the types of AUTH_REQ requests that libpq can understand, plus an\n> > extra for channel binding. I also liked the idea mentioned upthread\n> > of \"any\" to be an alias to authorize everything, which should be the\n> > default. So you basically get at that:\n> > auth_protocol = {any,password,md5,scram-sha-256,scram-sha-256-\n> > plus,krb5,gss,sspi}\n> \n> What about something corresponding to the auth methods \"trust\" and\n> \"cert\", where no authentication request is sent? That's a funny case,\n> because the server trusts the client; but that doesn't imply that the\n> client trusts the server.\n\nI agree that \"trust\" is odd. If you want my 2c, we should have to\nexplicitly *enable* that for psql, otherwise there's the potential that\na MITM could perform a downgrade attack to \"trust\" and while that might\nnot expose a user's password, it could expose other information that the\nclient ends up sending (INSERTs, UPDATEs, etc).\n\nWhen it comes to \"cert\"- there is certainly an authentication that\nhappens and we already have options in psql/libpq to require validation\nof the server. If users want that, they should enable it (I wish we\ncould make it the default too but that's a different discussion...).\n\n> This is another reason I don't really like the list. 
It's impossible to\n> make it cleanly map to the auth methods, and there are a few ways it\n> could be confusing to the users.\n\nI agree with these concerns, just to be clear.\n\n> Given that we all pretty much agree on the need for the separate\n> channel_binding param, the question is whether we want to (a) address\n> additional use cases with specific parameters that also justify\n> themselves; or (b) have a generic list that is supposed to solve many\n> future use cases.\n> \n> I vote (a). With (b), the generic list is likely to cause more\n> confusion, ugliness, and clients that break needlessly in the future.\n\nAdmittedly, one doesn't preclude the other, and so we could move forward\nwith the channel binding param, and that's fine- but I seriously hope\nthat someone finds time to work on further improving the ability for\nclients to control what happens during authentication as this, imv\nanyway, is an area that we are weak in and it'd be great to improve on\nit.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 15 Aug 2019 21:28:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On 8/15/19 9:28 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Jeff Davis (pgsql@j-davis.com) wrote:\n>> On Wed, 2019-08-14 at 11:38 +0900, Michael Paquier wrote:\n>>> What I got in mind was a comma-separated list of authorized protocols\n>>> which can be specified as a connection parameter, which extends to\n>>> all\n>>> the types of AUTH_REQ requests that libpq can understand, plus an\n>>> extra for channel binding. I also liked the idea mentioned upthread\n>>> of \"any\" to be an alias to authorize everything, which should be the\n>>> default. So you basically get at that:\n>>> auth_protocol = {any,password,md5,scram-sha-256,scram-sha-256-\n>>> plus,krb5,gss,sspi}\n>>\n>> What about something corresponding to the auth methods \"trust\" and\n>> \"cert\", where no authentication request is sent? That's a funny case,\n>> because the server trusts the client; but that doesn't imply that the\n>> client trusts the server.\n> \n> I agree that \"trust\" is odd. If you want my 2c, we should have to\n> explicitly *enable* that for psql, otherwise there's the potential that\n> a MITM could perform a downgrade attack to \"trust\" and while that might\n> not expose a user's password, it could expose other information that the\n> client ends up sending (INSERTs, UPDATEs, etc).\n> \n> When it comes to \"cert\"- there is certainly an authentication that\n> happens and we already have options in psql/libpq to require validation\n> of the server. If users want that, they should enable it (I wish we\n> could make it the default too but that's a different discussion...).\n> \n>> This is another reason I don't really like the list. 
It's impossible to\n>> make it cleanly map to the auth methods, and there are a few ways it\n>> could be confusing to the users.\n> \n> I agree with these concerns, just to be clear.\n\n+1.\n\n> \n>> Given that we all pretty much agree on the need for the separate\n>> channel_binding param, the question is whether we want to (a) address\n>> additional use cases with specific parameters that also justify\n>> themselves; or (b) have a generic list that is supposed to solve many\n>> future use cases.\n>>\n>> I vote (a). With (b), the generic list is likely to cause more\n>> confusion, ugliness, and clients that break needlessly in the future.\n> \n> Admittedly, one doesn't preclude the other, and so we could move forward\n> with the channel binding param, and that's fine- but I seriously hope\n> that someone finds time to work on further improving the ability for\n> clients to control what happens during authentication as this, imv\n> anyway, is an area that we are weak in and it'd be great to improve on\n> it.\n\nTo be pedantic, +1 on the channel_binding param.\n\nI do agree with option (a), but we should narrow down what that means\nfor this iteration.\n\nI do see \"password_protocol\" making sense as a comma-separated list of\noptions e.g. {plaintext, md5, scram-sha-256}. I would ask if\nscram-sha-256-plus makes the list if we have the channel_binding param?\n\nIf channel_binding = require, it would essentially ignore any non-plus\noptions in password_protocol and require scram-sha-256-plus. In a future\nscram-sha-512 world, then having scram-sha-256-plus or\nscram-sha-512-plus options in \"password_protocol\" then could be\nnecessary based on what the user would prefer or require in their\napplication.\n\nSo if we do add a \"password_protocol\" parameter, then we likely need to\ninclude the -plus's.\n\nI think this is also fairly easy for a user to configure. Some scenarios\nscenarios I can see for this are:\n\n1. 
The user requiring channel binding, so only \"channel_binding=require\"\n is set.\n\n2. A PostgreSQL cluster transitioning between SCRAM + MD5, and the user\nsetting password_protocol=\"scram-sha-256\" to guarantee md5 auth does not\ntake place.\n\n3. A user wanting to ensure a stronger method is used, with some\ncombination of the scram methods or md5, i.e. ensuring plaintext is not\nused.\n\nWe would need to provide documentation around the types of password\nvalidation methods are used for the external validators (e.g. LDAP) so\nthe user's known what to expect if their server is using those methods.\n\nJonathan",
"msg_date": "Fri, 16 Aug 2019 14:11:57 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
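Jonathan's three scenarios could translate into connection strings roughly as follows. Note that password_protocol is the hypothetical parameter under discussion (it was never committed in this form), and the host name is made up:

```
# 1. Require channel binding; everything else at the defaults:
psql "host=db.example.com channel_binding=require"

# 2. Cluster migrating from md5 to SCRAM; refuse md5 outright:
psql "host=db.example.com password_protocol=scram-sha-256"

# 3. Allow anything except plaintext:
psql "host=db.example.com password_protocol=md5,scram-sha-256,scram-sha-256-plus"
```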
{
"msg_contents": "On Fri, Aug 16, 2019 at 02:11:57PM -0400, Jonathan S. Katz wrote:\n> To be pedantic, +1 on the channel_binding param.\n\nSeems like we are moving in this direction then. I don't object to\nthe introduction of this parameter. We would likely want to do\nsomething for downgrade attacks in other cases where channel binding\nis not used, still there is verify-ca/full even in this case which\noffer similar protections for MITM and eavesdropping.\n\n> I would ask if scram-sha-256-plus makes the list if we have the\n> channel_binding param?\n\nNo in my opinion.\n\n> If channel_binding = require, it would essentially ignore any non-plus\n> options in password_protocol and require scram-sha-256-plus. In a future\n> scram-sha-512 world, then having scram-sha-256-plus or\n> scram-sha-512-plus options in \"password_protocol\" then could be\n> necessary based on what the user would prefer or require in their\n> application.\n\nNot including scram-sha-512-plus or scram-sha-256-plus in the\ncomma-separated list would be a problem for users willing to give for\nexample scram-sha-256,scram-sha-512-plus as an authorized list of\nprotocols but I don't think that it makes much sense as they basically\nrequire an SSL connection for tls-server-end-point per the second\nelement.\n--\nMichael",
"msg_date": "Mon, 19 Aug 2019 14:51:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Mon, 2019-08-19 at 14:51 +0900, Michael Paquier wrote:\n> On Fri, Aug 16, 2019 at 02:11:57PM -0400, Jonathan S. Katz wrote:\n> > To be pedantic, +1 on the channel_binding param.\n> \n> Seems like we are moving in this direction then. I don't object to\n> the introduction of this parameter.\n\nOK, new patch attached. Seems like everyone is in agreement that we\nneed a channel_binding param.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 20 Aug 2019 19:09:25 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Tue, Aug 20, 2019 at 07:09:25PM -0700, Jeff Davis wrote:\n> OK, new patch attached. Seems like everyone is in agreement that we\n> need a channel_binding param.\n\n+ <para>\n+ A setting of <literal>require</literal> means that the connection must\n+ employ channel binding; and that the client will not respond to\n+ requests from the server for cleartext password or MD5\n+ authentication. The default setting is <literal>prefer</literal>,\n+ which employs channel binding if available.\n+ </para>\nThis counts for other auth requests as well like krb5, no? I think\nthat we should also add the \"disable\" flavor for symmetry, and\nalso...\n\n+#define DefaultChannelBinding \"prefer\"\nIf the build of Postgres does not support SSL (USE_SSL is not\ndefined), I think that DefaultChannelBinding should be \"disable\".\nThat would make things consistent with sslmode.\n\n+ with <productname>PostgreSQL</productname> 11.0 or later servers using\nHere I would use PostgreSQL 11, only mentioning the major version as\nit was also available at beta time.\n\n case AUTH_REQ_OK:\n+ if (conn->channel_binding[0] == 'r' && !conn->channel_bound)\n+ {\n+ printfPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"Channel binding required but not offered by server\\n\"));\n+ return STATUS_ERROR;\n+ }\nDoing the filtering at the time of AUTH_REQ_OK is necessary for\n\"trust\", but shouldn't this be done as well earlier for other request\ntypes? This could save round-trips to the server if we know that an\nexchange begins with a protocol which will never satisfy this\nrequest, saving efforts for a client doomed to fail after the first\nAUTH_REQ received. That would be the case for all AUTH_REQ, except\nthe SASL ones and of course AUTH_REQ_OK.\n\nCould you please add negative tests in src/test/authentication/? What\ncould be covered there is that the case where \"prefer\" (and\n\"disable\"?) 
is defined then the authentication is able to go through,\nand that with \"require\" we get a proper failure as SSL is not used.\nTests in src/test/ssl/ could include:\n- Make sure that \"require\" works properly.\n- Test after \"disable\".\n\n+ if (conn->channel_binding[0] == 'r')\nMaybe this should comment that this means \"require\", in a fashion\nsimilar to what is done when checking conn->sslmode.\n--\nMichael",
"msg_date": "Wed, 21 Aug 2019 16:12:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
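The early filtering Michael asks for — rejecting, on arrival, any AUTH_REQ that can never lead to channel binding — can be sketched like this. The AUTH_REQ codes follow the frontend/backend protocol's numbering, but the function name is invented for illustration:

```c
#include <stdbool.h>

/* AUTH_REQ codes, as in the frontend/backend protocol */
#define AUTH_REQ_OK			0
#define AUTH_REQ_PASSWORD	3
#define AUTH_REQ_MD5		5
#define AUTH_REQ_GSS		7
#define AUTH_REQ_SSPI		9
#define AUTH_REQ_SASL		10
#define AUTH_REQ_SASL_CONT	11
#define AUTH_REQ_SASL_FIN	12

/*
 * With channel_binding=require, only a SASL exchange can possibly end
 * in channel binding, so every other request type can be rejected on
 * arrival instead of after a doomed round trip. AUTH_REQ_OK still
 * needs a separate check once the exchange state is known.
 */
static bool
auth_req_may_satisfy_channel_binding(int areq)
{
	switch (areq)
	{
		case AUTH_REQ_SASL:
		case AUTH_REQ_SASL_CONT:
		case AUTH_REQ_SASL_FIN:
		case AUTH_REQ_OK:
			return true;
		default:
			return false;
	}
}
```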
{
"msg_contents": "On Wed, 2019-08-21 at 16:12 +0900, Michael Paquier wrote:\n> This counts for other auth requests as well like krb5, no? I think\n> that we should also add the \"disable\" flavor for symmetry, and\n> also...\n\nAdded \"disable\" option, and I refactored so that it would look\nexplicitly for an expected auth req based on the connection parameters.\n\nI had to also make the client tell the server that it does not support\nchannel binding if channel_binding=disable, otherwise the server\ncomplains.\n\n> +#define DefaultChannelBinding \"prefer\"\n> If the build of Postgres does not support SSL (USE_SSL is not\n> defined), I think that DefaultChannelBinding should be \"disable\".\n> That would make things consistent with sslmode.\n\nDone.\n\n> Doing the filtering at the time of AUTH_REQ_OK is necessary for\n> \"trust\", but shouldn't this be done as well earlier for other request\n> types? This could save round-trips to the server if we know that an\n> exchange begins with a protocol which will never satisfy this\n> request, saving efforts for a client doomed to fail after the first\n> AUTH_REQ received. That would be the case for all AUTH_REQ, except\n> the SASL ones and of course AUTH_REQ_OK.\n\nDone.\n\n> \n> Could you please add negative tests in\n> src/test/authentication/? What\n> could be covered there is that the case where \"prefer\" (and\n> \"disable\"?) is defined then the authentication is able to go through,\n> and that with \"require\" we get a proper failure as SSL is not used.\n> Tests in src/test/ssl/ could include:\n> - Make sure that \"require\" works properly.\n> - Test after \"disable\".\n\nDone.\n\n> + if (conn->channel_binding[0] == 'r')\n> Maybe this should comment that this means \"require\", in a fashion\n> similar to what is done when checking conn->sslmode.\n\nDone.\n\nNew patch attached.\n\nRegards,\n\tJeff Davis",
"msg_date": "Wed, 04 Sep 2019 21:22:33 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 09:22:33PM -0700, Jeff Davis wrote:\n> New patch attached.\n\nThanks for the new version, Jeff.\n\n+ with <productname>PostgreSQL</productname> 11 or later servers using\n+ the <literal>scram-sha-256</literal> authentication method.\n\nNit here: \"scram-sha-256\" refers to the HBA entry. I would\njust use \"SCRAM\" instead.\n\nIn pg_SASL_init(), if the server sends SCRAM-SHA-256-PLUS as SASL\nmechanism over a non-SSL connection, should we complain even if\nthe \"disable\" mode is used? It seems to me that there is a point to\ncomplain in this case as a sanity check as the server should really\npublicize \"SCRAM-SHA-256-PLUS\" only over SSL.\n\nWhen the server only sends SCRAM-SHA-256 in the mechanism list and\n\"require\" mode is used, we complain about \"none of the server's SASL\nauthentication mechanisms are supported\" which can be confusing. Why\nnot generating a custom error if libpq selects SCRAM-SHA-256 when\n\"require\" is used, say:\n\"SASL authentication mechanism SCRAM-SHA-256 selected but channel\nbinding is required\"\nThat could be done by adding an error message when selecting\nSCRAM-SHA-256 and then goto the error step.\n\nActually, it looks that the handling of channel_bound is incorrect.\nIf the server sends AUTH_REQ_SASL and libpq processes it, then the\nflag gets already set. Now let's imagine that the server is a rogue\none and sends AUTH_REQ_OK just after AUTH_REQ_SASL, then your check\nwould pass. It seems to me that we should switch the flag once we are\nsure that the exchange is completed, aka with AUTH_REQ_SASL_FIN when\nthe final message is done within pg_SASL_continue().\n\n+# SSL not in use; channel binding still can't work\n+reset_pg_hba($node, 'scram-sha-256');\n+$ENV{\"PGCHANNELBINDING\"} = 'require';\n+test_login($node, 'saslpreptest4a_role', \"a\", 2);\nWorth testing md5 here?\n\nPGCHANNELBINDING needs documentation.\n--\nMichael",
"msg_date": "Fri, 6 Sep 2019 16:05:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
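The rogue-server case Michael describes is worth pinning down: a server could advertise SASL, never finish the exchange, and then send AUTH_REQ_OK. A minimal sketch of the kind of check being discussed, with invented struct and function names:

```c
#include <stdbool.h>

/*
 * Connection-state fragment mirroring the two flags under discussion:
 * whether a channel-binding SASL mechanism was selected, and whether
 * the SASL exchange actually ran to completion.
 */
typedef struct
{
	bool		channel_bound;	/* SCRAM-SHA-256-PLUS was selected */
	bool		sasl_finished;	/* AUTH_REQ_SASL_FIN was processed */
} ConnAuthState;

/*
 * On AUTH_REQ_OK with channel_binding=require: a rogue server could
 * send AUTH_REQ_OK right after AUTH_REQ_SASL, so both flags must be
 * set before the connection is treated as authenticated.
 */
static bool
channel_binding_satisfied(const ConnAuthState *st)
{
	return st->channel_bound && st->sasl_finished;
}
```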
{
"msg_contents": "On Fri, 2019-09-06 at 16:05 +0900, Michael Paquier wrote:\n> Nit here: \"scram-sha-256\" refers to the HBA entry. I would\n> just use \"SCRAM\" instead.\n\nDone.\n\n> In pg_SASL_init(), if the server sends SCRAM-SHA-256-PLUS as SASL\n> mechanism over a non-SSL connection, should we complain even if\n> the \"disable\" mode is used? It seems to me that there is a point to\n> complain in this case as a sanity check as the server should really\n> publicize \"SCRAM-SHA-256-PLUS\" only over SSL.\n\nDone.\n\n> When the server only sends SCRAM-SHA-256 in the mechanism list and\n> \"require\" mode is used, we complain about \"none of the server's SASL\n> authentication mechanisms are supported\" which can be confusing. Why\n> not generating a custom error if libpq selects SCRAM-SHA-256 when\n> \"require\" is used, say:\n> \"SASL authentication mechanism SCRAM-SHA-256 selected but channel\n> binding is required\"\n> That could be done by adding an error message when selecting\n> SCRAM-SHA-256 and then goto the error step.\n\nDone.\n\n> Actually, it looks that the handling of channel_bound is incorrect.\n> If the server sends AUTH_REQ_SASL and libpq processes it, then the\n> flag gets already set. Now let's imagine that the server is a rogue\n> one and sends AUTH_REQ_OK just after AUTH_REQ_SASL, then your check\n> would pass. It seems to me that we should switch the flag once we\n> are\n> sure that the exchange is completed, aka with AUTH_REQ_SASL_FIN when\n> the final message is done within pg_SASL_continue().\n\nThank you! Fixed. I now track whether channel binding is selected, and\nalso whether SASL actually finished successfully.\n\n> +# SSL not in use; channel binding still can't work\n> +reset_pg_hba($node, 'scram-sha-256');\n> +$ENV{\"PGCHANNELBINDING\"} = 'require';\n> +test_login($node, 'saslpreptest4a_role', \"a\", 2);\n> Worth testing md5 here?\n\nI added a new md5 test in the ssl test suite. 
Testing it in the non-SSL \npath doesn't seem like it adds much.\n\n> PGCHANNELBINDING needs documentation.\n\nDone.\n\nRegards,\n\tJeff Davis",
"msg_date": "Sat, 14 Sep 2019 08:42:53 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 08:42:53AM -0700, Jeff Davis wrote:\n> On Fri, 2019-09-06 at 16:05 +0900, Michael Paquier wrote:\n>> Actually, it looks that the handling of channel_bound is incorrect.\n>> If the server sends AUTH_REQ_SASL and libpq processes it, then the\n>> flag gets already set. Now let's imagine that the server is a rogue\n>> one and sends AUTH_REQ_OK just after AUTH_REQ_SASL, then your check\n>> would pass. It seems to me that we should switch the flag once we\n>> are\n>> sure that the exchange is completed, aka with AUTH_REQ_SASL_FIN when\n>> the final message is done within pg_SASL_continue().\n> \n> Thank you! Fixed. I now track whether channel binding is selected, and\n> also whether SASL actually finished successfully.\n\nAh, I see. So you have added an extra flag \"sasl_finished\" to make\nsure of that, and still kept around \"channel_bound\". Hence the two\nflags have to be set to make sure that the SASL exchanged has been\nfinished and that channel binding has been enabled. \"channel_bound\"\nis linked to the selected mechanism when the exchange begins, meaning\nthat it could be possible to do the same check with the selected\nmechanism directly from fe_scram_state instead. \"sasl_finished\" is\nlinked to the state where the SASL exchange is finished, so this\nbasically maps into checking after FE_SCRAM_FINISHED. Instead of\nthose two flags, wouldn't it be cleaner to add an extra routine to\nfe-auth-scram.c which does the same sanity checks, say\npg_fe_scram_check_state()? This needs to be careful about three\nthings, taking in input an opaque pointer to fe_scram_state if channel\nbinding is required:\n- The data is not NULL.\n- The sasl mechanism selected is SCRAM-SHA-256-PLUS.\n- The state is FE_SCRAM_FINISHED.\n\nWhat do you think? 
There is no need to save down the connection\nparameter value into fe_scram_state.\n\n>> +# SSL not in use; channel binding still can't work\n>> +reset_pg_hba($node, 'scram-sha-256');\n>> +$ENV{\"PGCHANNELBINDING\"} = 'require';\n>> +test_login($node, 'saslpreptest4a_role', \"a\", 2);\n>> Worth testing md5 here?\n> \n> I added a new md5 test in the ssl test suite. Testing it in the non-SSL \n> path doesn't seem like it adds much.\n\nGood idea.\n\n+ if (conn->channel_binding[0] == 'r' && /* require */\n+ strcmp(selected_mechanism, SCRAM_SHA_256_NAME) == 0)\n+ {\n+ printfPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"SASL authentication mechanism\nSCRAM-SHA-256 selected but channel binding is required\\n\"));\n+ goto error;\n+ }\nNit here as there are only two mechanisms handled: I would rather\ncause the error if the selected mechanism does not match\nSCRAM-SHA-256-PLUS, instead of complaining if the selected mechanism\nmatches SCRAM-SHA-256. Either way is actually fine :)\n\n+ printfPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"Channel binding required but not\noffered by server\\n\"));\n+ result = false;\nShould that be \"completed by server\" instead?\n\n+ if (areq == AUTH_REQ_SASL_FIN)\n+ conn->sasl_finished = true;\nThis should have a comment about the why it is done if you go this way\nwith the two flags added to PGconn.\n\n+ if (conn->channel_binding[0] == 'r' && /* require */\n+ !conn->ssl_in_use)\n+ {\n+ printfPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"Channel binding required, but\nSSL not in use\\n\"));\n+ goto error;\n+ }\nThis is not necessary? If SSL is not in use but the server publishes\nSCRAM-SHA-256-PLUS, libpq complains. If the server sends only\nSCRAM-SHA-256 but channel binding is required, we complain down on\n\"SASL authentication mechanism SCRAM-SHA selected but channel binding\nis required\". 
Or you have in mind that this error message is better?\n\nI think that pgindent would complain with the comment block in\ncheck_expected_areq(). \n--\nMichael",
"msg_date": "Tue, 17 Sep 2019 16:04:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Tue, 2019-09-17 at 16:04 +0900, Michael Paquier wrote:\n> basically maps into checking after FE_SCRAM_FINISHED. Instead of\n> those two flags, wouldn't it be cleaner to add an extra routine to\n> fe-auth-scram.c which does the same sanity checks, say\n> pg_fe_scram_check_state()? This needs to be careful about three\n> things, taking in input an opaque pointer to fe_scram_state if\n> channel\n> binding is required:\n> - The data is not NULL.\n> - The sasl mechanism selected is SCRAM-SHA-256-PLUS.\n> - The state is FE_SCRAM_FINISHED.\n\nYes, I think this does come out a bit cleaner, thank you.\n\n> What do you think? There is no need to save down the connection\n> parameter value into fe_scram_state.\n\nI'm not sure I understand this comment, but I removed the extra boolean\nflags.\n\n> Nit here as there are only two mechanisms handled: I would rather\n> cause the error if the selected mechanism does not match\n> SCRAM-SHA-256-PLUS, instead of complaining if the selected mechanism\n> matches SCRAM-SHA-256. Either way is actually fine :)\n\nDone.\n\n> + printfPQExpBuffer(&conn->errorMessage,\n> + libpq_gettext(\"Channel binding required but\n> not\n> offered by server\\n\"));\n> + result = false;\n> Should that be \"completed by server\" instead?\n\nDone.\n\n> is required\". Or you have in mind that this error message is better?\n\nI felt it would be a more useful error message.\n\n> I think that pgindent would complain with the comment block in\n> check_expected_areq(). \n\nChanged.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 19 Sep 2019 17:40:15 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 05:40:15PM -0700, Jeff Davis wrote:\n> On Tue, 2019-09-17 at 16:04 +0900, Michael Paquier wrote:\n>> What do you think? There is no need to save down the connection\n>> parameter value into fe_scram_state.\n> \n> I'm not sure I understand this comment, but I removed the extra boolean\n> flags.\n\nThanks for considering it. I was just asking about removing those\nflags and your thoughts about my thoughts from upthread.\n\n>> is required\". Or you have in mind that this error message is better?\n> \n> I felt it would be a more useful error message.\n\nOkay, fine by me.\n\n>> I think that pgindent would complain with the comment block in\n>> check_expected_areq(). \n> \n> Changed.\n\nA trick to make pgindent to ignore some comment blocks is to use\n/*--------- at its top and bottom, FWIW.\n\n+$ENV{PGUSER} = \"ssltestuser\";\n $ENV{PGPASSWORD} = \"pass\";\ntest_connect_ok() can use a complementary string, so I would use that\nin the SSL test part instead of relying too much on the environment\nfor readability, particularly for the last test added with md5testuser.\nUsing the environment variable in src/test/authentication/ makes\nsense. 
Maybe that's just a matter of taste :)\n\n+ return (state != NULL && state->state == FE_SCRAM_FINISHED &&\n+ strcmp(state->sasl_mechanism, SCRAM_SHA_256_PLUS_NAME) == 0);\nI think that we should document in the code why those reasons are\nchosen.\n\nI would also add a test for an invalid value of channel_binding.\n\nA comment update is forgotten in libpq-int.h.\n\n+# using the password authentication method; channel binding can't\nwork\n+reset_pg_hba($node, 'password');\n+$ENV{\"PGCHANNELBINDING\"} = 'require';\n+test_login($node, 'saslpreptest4a_role', \"a\", 2);\n+\n+# SSL not in use; channel binding still can't work\n+reset_pg_hba($node, 'scram-sha-256');\n+$ENV{\"PGCHANNELBINDING\"} = 'require';\n+test_login($node, 'saslpreptest4a_role', \"a\", 2);\nThose two tests are in the test suite dedicated to SASLprep. I think\nthat it would be more consistent to just move them to\n001_password.pl. And this does not impact the error coverage.\n\nMissing some indentation in the perl scripts (per pgperltidy).\n\nThose are mainly nits, and attached are the changes I would do to your\npatch. Please feel free to consider those changes as you see fit.\nAnyway, the base logic looks good to me, so I am switching the patch\nas ready for committer.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 13:07:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, 2019-09-20 at 13:07 +0900, Michael Paquier wrote:\n> Those are mainly nits, and attached are the changes I would do to\n> your\n> patch. Please feel free to consider those changes as you see fit.\n> Anyway, the base logic looks good to me, so I am switching the patch\n> as ready for committer.\n\nThank you, applied.\n\n* I also changed the comment above pg_fe_scram_channel_bound() to\nclarify that the caller must also check that the exchange was\nsuccessful.\n\n* I changed the error message when AUTH_REQ_OK is received without\nchannel binding. It seemed confusing before. I also added a test.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 20 Sep 2019 10:57:04 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 10:57:04AM -0700, Jeff Davis wrote:\n> Thank you, applied.\n\nOkay, I can see which parts you have changed.\n\n> * I also changed the comment above pg_fe_scram_channel_bound() to\n> clarify that the caller must also check that the exchange was\n> successful.\n> \n> * I changed the error message when AUTH_REQ_OK is received without\n> channel binding. It seemed confusing before. I also added a test.\n\nAnd both make sense.\n\n+ * Return true if channel binding was employed and the scram exchange\nupper('scram')?\n\nExcept for this nit, it looks good to me.\n--\nMichael",
"msg_date": "Sat, 21 Sep 2019 11:24:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 11:24:30AM +0900, Michael Paquier wrote:\n> And both make sense.\n> \n> + * Return true if channel binding was employed and the scram exchange\n> upper('scram')?\n> \n> Except for this nit, it looks good to me.\n\nFor the archive's sake: this has been committed as of d6e612f.\n\n- * We pretend that the connection is OK for the duration of these\n- * queries.\n+ * We pretend that the connection is OK for the duration of\n+ * these queries.\nThe result had some noise diffs. Perhaps some leftover from the\nindentation run? That's no big deal anyway.\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 11:07:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add \"password_protocol\" connection parameter to libpq"
}
] |
[
{
"msg_contents": "On Fri, Sep 9, 2016 at 6:14 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> 1. SortTuple.tupindex is not used when the sort fits in memory. If we\n> also got rid of the isnull1 flag, we could shrink SortTuple from 24\n> bytes to 16 bytes (on 64-bit systems). That would allow you to pack more\n> tuples into memory, which generally speeds things up. We could for\n> example steal the least-significant bit from the 'tuple' pointer for\n> that, because the pointers are always at least 4-byte aligned. (But see\n> next point)\n\nI implemented what you sketched here, back in 2016. That is, I found a\nway to make SortTuples 16 bytes on 64-bit platforms, down from 24\nbytes (24 bytes includes alignment padding). Note that it only became\npossible to do this after commit 8b304b8b72b removed replacement\nselection sort, which reduced the number of things that use the\nSortTuple.tupindex field to just one: it is now only used for tapenum\nduring merging. (I also had to remove SortTuple.isnull1 to get down to\n16 bytes.)\n\nThe easy part was removing SortTuple.tupindex itself -- it was fairly\nnatural to stash that in the slab allocation for each tape. I used the\naset.c trick of having a metadata \"chunk\" immediately prior to address\nthat represents the allocation proper -- we can just back up by a few\nbytes from stup.tuple to find the place to stash the tape number\nduring merging. The worst thing about this change was that it makes a\ntape slab allocation mandatory in cases that previously didn't have\nany need for a stup.tuple allocation (e.g. datum tuplesorts of\npass-by-value types), though only during merging. Since we must always\nstore the tapenum when merging, we always need a slab buffer for each\ntape when merging. This aspect wasn't so bad.\n\nThe hard/ugly part was getting rid of the remaining \"unnecessary\"\nSortTuple field, isnull1. This involved squeezing an extra bit out of\nthe stup.tuple pointer, by stealing the least-significant bit. 
This\nwas invasive in about the way you'd expect it to be. It wasn't awful,\nbut it also wasn't something I'd countenance pursuing without getting\na fairly noticeable benefit for users. (Actually, the code that I\nwrote so far *is* pretty awful, but I could certainly clean it up some\nmore if I felt like it.)\n\nI think that the rough patch that I came up with gives us an accurate\npicture of what the benefits of having SortTuples that are only 16\nbytes wide are. The benefits seem kind of underwhelming at this point.\nFor something like a \"SELECT COUNT(distinct my_int4_col) FROM tab\"\nquery, which uses the qsort_ssup() qsort specialization, we can easily\ngo from getting an external sort to getting an internal sort. We can\nmaybe end up sorting about 20% faster if things really work out for\nthe patch. But in cases that users really care about, such as REINDEX,\nthe difference is in the noise. ISTM that this is simply not worth the\ntrouble at this time. These days, external sorts are often slightly\nfaster than internal sorts in practice, due to the fact that we can do\nan on-the-fly merge with external sorts, so we could easily hurt\nperformance by making more memory available!\n\nI don't want to completely close the door on the idea of shrinking\nSortTuple to 16 bytes. I can imagine a world in which that matters.\nFor example, perhaps there will one day be a strong case to be made\nfor SIMD-based internal sort algorithms for simple cases, such as the\n\"COUNT(distinct my_int4_col)\" query case that I mentioned. Probably\nSIMD-based multiway merge sort. I understand that such algorithms are\nvery sensitive to things like SIMD CPU register sizes, and were only\nconsidered plausible competitors to quicksort due to the advent of\n512-bit SIMD registers. 512-bit SIMD registers haven't been available\nin mainstream CPUs for that long.\n\nI have to admit that this SIMD sorting stuff seems like a bit of a\nstretch, though. 
The papers in this area all seem to make rather\noptimistic assumptions about the distribution of values. And, I think\nwe'd have to be even more aggressive about shrinking SortTuples in\norder to realize the benefits of SIMD-based sorting. Besides, sorting\nitself is the bottleneck for tuplesort-using operations less and less\nthese days -- the only remaining interesting bottleneck is probably in\ncode like index_form_tuple(), which is probably a good target for JIT.\nIn general, it's much harder to make tuplesort.c noticeably faster\nthan it used to be -- we've picked all the low-hanging fruit.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 9 Aug 2019 16:14:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Shrinking tuplesort.c's SortTuple struct (Was: More ideas for\n speeding up sorting)"
},
{
"msg_contents": "On 10/08/2019 02:14, Peter Geoghegan wrote:\n> The easy part was removing SortTuple.tupindex itself -- it was fairly\n> natural to stash that in the slab allocation for each tape. I used the\n> aset.c trick of having a metadata \"chunk\" immediately prior to address\n> that represents the allocation proper -- we can just back up by a few\n> bytes from stup.tuple to find the place to stash the tape number\n> during merging. The worst thing about this change was that it makes a\n> tape slab allocation mandatory in cases that previously didn't have\n> any need for a stup.tuple allocation (e.g. datum tuplesorts of\n> pass-by-value types), though only during merging. Since we must always\n> store the tapenum when merging, we always need a slab buffer for each\n> tape when merging. This aspect wasn't so bad.\n\nHmm. Wouldn't it be more straightforward to have the extra tupindex \nfield at the end of the struct? Something like:\n\ntypedef struct\n{\n\tvoid\t *tuple;\t\t\t/* the tuple itself */\n\tDatum\t\tdatum1;\t\t\t/* value of first key column */\n\tbool\t\tisnull1;\t\t/* is first key column NULL? */\n} SortTuple;\n\ntypedef struct\n{\n\tSortTuple stuple;\n\tint\t\t\ttupindex;\t\t/* see notes above */\n} MergeTuple;\n\nThe initial sorting phase would deal with SortTuples, and the merge \nphase would deal with MergeTuples. The same comparison routines work \nwith both.\n\n> The hard/ugly part was getting rid of the remaining \"unnecessary\"\n> SortTuple field, isnull1. This involved squeezing an extra bit out of\n> the stup.tuple pointer, by stealing the least-significant bit. This\n> was invasive in about the way you'd expect it to be. It wasn't awful,\n> but it also wasn't something I'd countenance pursuing without getting\n> a fairly noticeable benefit for users. 
(Actually, the code that I\n> wrote so far *is* pretty awful, but I could certainly clean it up some\n> more if I felt like it.)\n> \n> I think that the rough patch that I came up with gives us an accurate\n> picture of what the benefits of having SortTuples that are only 16\n> bytes wide are. The benefits seem kind of underwhelming at this point.\n> For something like a \"SELECT COUNT(distinct my_int4_col) FROM tab\"\n> query, which uses the qsort_ssup() qsort specialization, we can easily\n> go from getting an external sort to getting an internal sort. We can\n> maybe end up sorting about 20% faster if things really work out for\n> the patch.\n\nIf you separate the NULLs from non-NULLs in a separate array, as was \ndiscussed back in 2016, instead of stealing a bit, you can squeeze some \ninstructions out of the comparison routines, which might give some extra \nspeedup.\n\n> But in cases that users really care about, such as REINDEX,\n> the difference is in the noise. ISTM that this is simple not worth the\n> trouble at this time. These days, external sorts are often slightly\n> faster than internal sorts in practice, due to the fact that we can do\n> an on-the-fly merge with external sorts, so we could easily hurt\n> performance by making more memory available!\n\nYeah, that's a bit sad.\n\nThat makes me think: even when everything fits in memory, it might make \nsense to divide the input into a few batches, qsort them individually, \nand do an on-the-fly merge of the batches. I guess I'm essentially \nsuggesting that we should use merge instead of quicksort for the \nin-memory case, too.\n\nIf we had the concept of in-memory batches, you could merge together \nin-memory and external batches. That might be handy. For example, when \ndoing an external sort, instead of flushing the last run to disk before \nyou start merging, you could keep it in memory. 
That might be \nsignificant in the cases where the input is only slightly too big to fit \nin memory.\n\n- Heikki\n\n\n",
"msg_date": "Sat, 10 Aug 2019 11:20:10 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Shrinking tuplesort.c's SortTuple struct (Was: More ideas for\n speeding up sorting)"
},
{
"msg_contents": "On Sat, Aug 10, 2019 at 1:20 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Hmm. Wouldn't it be more straightforward to have the extra tupindex\n> field at the end of the struct?\n\n> The initial sorting phase would deal with SortTuples, and the merge\n> phase would deal with MergeTuples. The same comparison routines work\n> with both.\n\nMaybe, but then you would have to use MergeTuples in the\ntuplesort_heap* routines, which are not just used when merging\nexternal sort runs. You'd probably incur a penalty for top-N heap\nsorts too. Now, that could still be worth it, but it's something to\nconsider.\n\n> If you separate the NULLs from non-NULLs in a separate array, as was\n> discussed back in 2016, instead of stealing a bit, you can squeeze some\n> instructions out of the comparison routines, which might give some extra\n> speedup.\n\nThat might work well, but partitioning the memtuples array isn't\ntrivial. Routines like grow_memtuples() still need to work, and that\nseems like it would be tricky. So again, this may well be a better way\nto do it, but that isn't obvious.\n\n> > But in cases that users really care about, such as REINDEX,\n> > the difference is in the noise. ISTM that this is simple not worth the\n> > trouble at this time. These days, external sorts are often slightly\n> > faster than internal sorts in practice, due to the fact that we can do\n> > an on-the-fly merge with external sorts, so we could easily hurt\n> > performance by making more memory available!\n>\n> Yeah, that's a bit sad.\n\nI think that this is likely to be the problem with any combination of\nenhancements that remove fields from the SortTuple struct, to get it\ndown to 16 bytes: Making SortTuples only 16 bytes just isn't that\ncompelling.\n\n> That makes me think: even when everything fits in memory, it might make\n> sense to divide the input into a few batches, qsort them individually,\n> and do an on-the-fly merge of the batches. 
I guess I'm essentially\n> suggesting that we should use merge instead of quicksort for the\n> in-memory case, too.\n\nThat might make sense. The Alphasort paper [1] recommends using\nquicksort on CPU-cache-sized chunks, and merging the chunks together\nas they're written out as a single on-disk run. The Alphasort paper is\nprobably the first place where the abbreviated keys technique is\ndescribed, and had a lot of good ideas.\n\n> If we had the concept of in-memory batches, you could merge together\n> in-memory and external batches. That might be handy. For example, when\n> doing an external sort, instead of flushing the last run to disk before\n> you start merging, you could keep it in memory. That might be\n> significant in the cases where the input is only slightly too big to fit\n> in memory.\n\nThe patch that I wrote to make tuplesort.c use quicksort in preference\nto replacement selection sort for generating initial runs started out\nwith an implementation of something that I called \"quicksort with\nspillover\". The idea was that you could only spill a few extra tuples\nto disk when you almost had enough workMem, and then merge the on-disk\nrun with the larger, quicksorted in-memory run. It worked alright, but\nit felt more important to make external sorts use quicksort in\ngeneral. Robert Haas really hated it at the time, because it relied on\nvarious magic numbers, based on heuristics.\n\nThe easiest and least controversial way to make internal sorting\nfaster may be to update our Quicksort algorithm to use the same\nimplementation that was added to Java 7 [2]. It uses all of the same\ntricks as our existing Bentley & McIlroy implementation, but is\nmore cache efficient. It's considered the successor to B&M, and had\ninput from Bentley himself. 
It is provably faster than B&M for a wide\nvariety of inputs, at least on modern hardware.\n\n[1] http://www.vldb.org/journal/VLDBJ4/P603.pdf\n[2] https://codeblab.com/wp-content/uploads/2009/09/DualPivotQuicksort.pdf\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 10 Aug 2019 11:46:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Shrinking tuplesort.c's SortTuple struct (Was: More ideas for\n speeding up sorting)"
}
] |
[
{
"msg_contents": "Hi all,\n\nFollowing a recent bug report \n(<https://www.postgresql.org/message-id/20190725015448.e5a3rwa22kpnzfe3%40alap3.anarazel.de>) \nit was suggested that that I submit a feature request for the ability \nto test whether an (insert / on conflict update) \"upsert\" resulted in \nan insert or an update.\n\nI am currently testing if xmax = 0 to achieve this however I understand \nthis is not reliable.\n\nAn excerpt follows - I am performing bulk data maintenance, hence the \ninsert into/select from. The \"returning\" clause would ideally reference \nsomething more reliable than xmax.\n\ninsert into test_table (test_id, test_code, test_name)\nselect test_code, test_name\nfrom bulk_test_data\non conflict (test_code) do update\n set test_name = test_name_in\n where test_table.test_name is distinct from excluded.test_name\nreturning test_id, case when (xmax = 0)::boolean as inserted\n\nRegards,\nRoby.\n\n\n\nHi all,Following a recent bug report (https://www.postgresql.org/message-id/20190725015448.e5a3rwa22kpnzfe3%40alap3.anarazel.de) it was suggested that that I submit a feature request for the ability to test whether an (insert / on conflict update) \"upsert\" resulted in an insert or an update.I am currently testing if xmax = 0 to achieve this however I understand this is not reliable.An excerpt follows - I am performing bulk data maintenance, hence the insert into/select from. The \"returning\" clause would ideally reference something more reliable than xmax.insert into test_table (test_id, test_code, test_name)select test_code, test_namefrom bulk_test_dataon conflict (test_code) do update set test_name = test_name_in where test_table.test_name is distinct from excluded.test_namereturning test_id, case when (xmax = 0)::boolean as insertedRegards,Roby.",
"msg_date": "Sun, 11 Aug 2019 11:16:55 +1000",
"msg_from": "Roby <pacman@finefun.com.au>",
"msg_from_op": true,
"msg_subject": "Feature Request: insert/on conflict update status"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the next heap of typos and inconsistencies in the\ntree:\n\n10.1. query_txt -> query text\n10.2. quote_object_names -> quote_object_name\n10.3. ragetypes_typanalyze.c -> rangetypes_typanalyze.c\n10.4. RAISE_EXCEPTION -> ERRCODE_RAISE_EXCEPTION\n10.5. rb_lower -> remove (orphaned since introduction in f0e44751)\n10.6. rd_newRelfilenode -> rd_newRelfilenodeSubid\n10.7. readfs -> readfds\n10.8. readInt -> ReadInt\n10.9. recoder, dst -> record, page\n10.10. recync -> resync\n10.11. RedoRecptr -> RedoRecPtr\n10.12. referenting -> referencing\n10.13. REG_PEND -> remove (orphaned since 7bcc6d98)\n10.14. regreesion -> regression\n10.15. REG_STARTEND -> remove (not used since the introduction in 985acb8e)\n10.16. RelationGetOidIndex -> remove a comment (irrelevant since 578b2297)\n10.17. relcheck -> relchecks\n10.18. releasebuf -> unlockbuf\n10.19. releaseOk -> RELEASE_OK flag\n10.20. relevants -> relevant\n10.21. relfile -> file\n10.22. RelID -> Relation ID\n10.23. ReplicationSlotShmemInit -> ReplicationSlotsShmemInit\n10.24. res_items -> res_nitems\n10.25. restablish -> reestablish\n10.26. REWRITE_H -> PRS2LOCK_H\n10.27. rfmt.c -> remove (as irrelevant)\n10.28. rmbuf -> buf\n10.29. RunningXacts -> RunningTransactionsData (and remove a nearby\ncomment) (orphaned since 8431e296)\n10.30. rvsToCluster -> RelToCluster\n10.31. s1_delete -> remove (not used since the introduction in 8b21b416)\n10.32. s1_reindex, s1_vacuum, s3_vacuum -> remove (not used since the\nintroduction in 9c2f0a6c)\n10.33. s1reset -> s1restart\n10.34. s1setval -> remove (not used since the introduction in 3d79013b)\n10.35. s2_alter_tbl1_text, s2_alter_tbl1_add_int,\ns2_alter_tbl1_add_float, s2_alter_tbl1_add_char,\ns2_alter_tbl1_add_boolean, s2_alter_tbl1_add_text, \ns2_alter_tbl2_add_boolean -> remove (not used since the introduction in\nb89e1510)\n10.36. saop_leftop -> leftop\n10.37. ScalarArrayOps -> ScalarArrayOpExprs\n10.38. 
seekPos -> remove (orphaned since c24dcd0c)\n10.39. segv -> SIGSEGV\n10.40. semphores -> semaphores\n10.41. sepgsql_audit_hook -> remove (and remove a comment) (not present\nsince the introduction in 968bc6fa)\n10.42. seqNo -> commitSeqNo\n10.43. seqtable -> seqhashtab\n10.44. serendipitiously -> serendipitously\n10.45. serializefn, deserializefn -> serialfn, deserialfn\n10.46. setstmt -> altersysstmt\n10.47. serv, servlen, host, hostlen -> service, servicelen, node, nodelen\n10.48. server_version_int -> server_version_num\n10.49. session_timestamp -> session_timezone\n10.50. sigsetmask -> pgsigsetmask\n10.51. SimpleLRU -> SimpleLru\n10.52. SIXBIT -> PGSIXBIT\n10.53. SizeOfHashCreateIdx, xl_hash_createidx -> remove (not used since\nthe introduction in c11453ce)\n10.54. SizeOfXlogRecord -> SizeOfXLogRecord\n10.55. sk_flag -> sk_flags\n10.56. SLAB_CHUNKHDRSZ -> sizeof(SlabChunk) (orphaned after 7e3aa03b)\n10.57. SlotAcquire -> ReplicationSlotAcquire\n10.58. SM_DATABASE_USER -> remove (with a wrong nearby comment)\n(orphaned since cb7fb3ca)\n10.59. smgrscheduleunlink -> RelationDropStorage (orphaned since 33960006)\n10.60. SnapBuildProcessRecord -> SnapBuildProcessRunningXacts\n10.61. snapshotname -> snapshot_id\n10.62. snap_state -> fctx\n10.63. SO_EXLUSIVEADDRUSE -> SO_EXCLUSIVEADDRUSE\n10.64. sortTuple -> SortTuple\n10.65. sourcexid -> sourcevxid\n10.66. spcpath -> spclocation\n10.67. SPI_plan -> _SPI_plan\n10.68. SPI_stack -> _SPI_stack\n10.69. splited -> splitted\n10.70. sqlcabc -> sqlabc\n10.71. standbydef.h -> standbydefs.h\n10.72. _StaticAssert -> _Static_assert\n10.73. StdRdOption -> StdRdOptions\n10.74. StringSortSupport -> VarStringSortSupport\n10.75. STrNCpy -> StrNCpy\n10.76. stxid -> relid\n10.77. subtransactiony -> subtransaction\n10.78. subtransations -> subtransactions\n10.79. SubtransSetParent -> SubTransSetParent\n10.80. succeeeded -> succeeded\n10.81. SUE -> ShareUpdateExclusive\n10.82. SYMENC_MDC -> SYMENCRYPTED_DATA_MDC\n10.83. 
SYMENCRYPTED_MDC -> SYMENCRYPTED_DATA_MDC\n10.84. symkey -> sym_key\n10.85. synchroniation -> synchronization\n10.86. system_id -> system_identifier\n10.87. sysdate -> now()\n10.88. SYSTEM_RESEED_* -> remove (orphaned since fe0a0b59)\n10.89. http://www.cs.auckland.ac.nz/software/AlgAnim/niemann/s_man.htm\n-> https://www.epaperpress.com/sortsearch/\n10.90. www.pgbuildfarm.org -> buildfarm.postgresql.org\n10.91. oft -> of\n10.92. It the username successfully retrieved (fix_for_fix_8.15)\n\nBest regards,\nAlexander",
"msg_date": "Sun, 11 Aug 2019 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 10)"
},
{
"msg_contents": "On Sun, Aug 11, 2019 at 11:00:00AM +0300, Alexander Lakhin wrote:\n> 10.44. serendipitiously -> serendipitously\n\nI didn't know that this even was a word:\nhttps://www.thefreedictionary.com/serendipitously\nBut it seems to come from Horace Walpole.\n\n> 10.50. sigsetmask -> pgsigsetmask\n\nIt should be pqsigsetmask.\n\n> 10.81. SUE -> ShareUpdateExclusive\n\nYour new comment had a typo here.\n\nPlease note that I have discarded the portion about the isolation\ntests for now. I need to have a second look at that part, and there\nwere already a lot of changes to review.\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 13:56:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 10)"
}
] |
[
{
"msg_contents": "Hi all,\nI hope this is the right list for my question, if not, please let me know\nto which list I should send the question.\n\nI'd like to know whether there is any way to control the order of inherited\ncolumns?\n\nThis is purely an issue of syntactic sugar for me. I'd like to be able to\ndo the following:\n\ncreate table my_standard_metadata (created_ts timestamp default now() not\nnull, updated_ts timestamp, deactived_ts timestamp);\n\ncreate table foo (field1 text, field2 text) inherits (my_standard_metadata);\n\n\nCurrently, this results in:\n\npostgres=# \\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable |\nDefault\n--------------+-----------------------------+-----------+----------+---------\n created_ts | timestamp without time zone | | not null | now()\n updated_ts | timestamp without time zone | | |\n deactived_ts | timestamp without time zone | | |\n field1 | text | | |\n field2 | text | | |\nIndexes:\n \"foo_pkey\" PRIMARY KEY, btree (email)\nInherits: my_standard_metadata\n\nand have the resulting table be described as follows\n\npostgres=# \\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable |\nDefault\n--------------+-----------------------------+-----------+----------+---------\n field1 | text | | |\n field2 | text | | |\n created_ts | timestamp without time zone | | not null | now()\n updated_ts | timestamp without time zone | | |\n deactived_ts | timestamp without time zone | | |\nIndexes:\n \"foo_pkey\" PRIMARY KEY, btree (email)\nInherits: my_standard_metadata\n\n\nIs what I am thinking of worth pursuing? Or does it misuse the concept of\ninheritance in the postgres context?\n\nThanks for any thoughts!\n-Steve\n\nHi all,I hope this is the right list for my question, if not, please let me know to which list I should send the question.I'd like to know whether there is any way to control the order of inherited columns?This is purely an issue of syntactic sugar for me. 
I'd like to be able to do the following:create table my_standard_metadata (created_ts timestamp default now() not null, updated_ts timestamp, deactived_ts timestamp);create table foo (field1 text, field2 text) inherits (my_standard_metadata);Currently, this results in:postgres=# \\d foo Table \"public.foo\" Column | Type | Collation | Nullable | Default --------------+-----------------------------+-----------+----------+--------- created_ts | timestamp without time zone | | not null | now() updated_ts | timestamp without time zone | | | deactived_ts | timestamp without time zone | | | field1 | text | | | field2 | text | | | Indexes: \"foo_pkey\" PRIMARY KEY, btree (email)Inherits: my_standard_metadataand have the resulting table be described as followspostgres=# \\d foo Table \"public.foo\" Column | Type | Collation | Nullable | Default --------------+-----------------------------+-----------+----------+--------- field1 | text | | | field2 | text | | | created_ts | timestamp without time zone | | not null | now() updated_ts | timestamp without time zone | | | deactived_ts | timestamp without time zone | | | Indexes: \"foo_pkey\" PRIMARY KEY, btree (email)Inherits: my_standard_metadataIs what I am thinking of worth pursuing? Or does it misuse the concept of inheritance in the postgres context?Thanks for any thoughts!-Steve",
"msg_date": "Sun, 11 Aug 2019 12:36:10 -0700",
"msg_from": "Stephan Doliov <stephan.doliov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Table inheritance and column ordering question"
},
{
"msg_contents": "Stephan Doliov <stephan.doliov@gmail.com> writes:\n> I'd like to know whether there is any way to control the order of inherited\n> columns?\n\nNope, not at present.\n\nThere's a lot of wished-for functionality around separating the\npresentation order of table columns from their physical storage order.\nIf we had that it'd fix your problem too. But right now, those are\ntied together and also tied to the columns' catalog identifiers (attno).\nPeople have investigated changing that, but it looks enormously bug-prone\nsince the existing code doesn't distinguish these concepts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Aug 2019 16:37:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Table inheritance and column ordering question"
}
] |
[
{
"msg_contents": "Hello,\n\nPostgreSQL optimizer right now considers join pairs on only\nnon-partition - non-partition or\npartition-leaf - partition-leaf relations. On the other hands, it is\nharmless and makes sense to\nconsider a join pair on non-partition - partition-leaf.\n\nSee the example below. ptable is partitioned by hash, and contains 10M\nrows. ftable is not\npartitioned and contains 50 rows. Most of ptable::fkey shall not have\nmatched rows in this\njoin.\n\ncreate table ptable (fkey int, dist text) partition by hash (dist);\ncreate table ptable_p0 partition of ptable for values with (modulus 3,\nremainder 0);\ncreate table ptable_p1 partition of ptable for values with (modulus 3,\nremainder 1);\ncreate table ptable_p2 partition of ptable for values with (modulus 3,\nremainder 2);\ninsert into ptable (select x % 10000, md5(x::text) from\ngenerate_series(1,10000000) x);\n\ncreate table ftable (pkey int primary key, memo text);\ninsert into ftable (select x, 'ftable__#' || x::text from\ngenerate_series(1,50) x);\nvacuum analyze;\n\npostgres=# explain analyze select count(*) from ptable p, ftable f\nwhere p.fkey = f.pkey;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=266393.38..266393.39 rows=1 width=8) (actual\ntime=2333.193..2333.194 rows=1 loops=1)\n -> Hash Join (cost=2.12..260143.38 rows=2500000 width=0) (actual\ntime=0.056..2330.079 rows=50000 loops=1)\n Hash Cond: (p.fkey = f.pkey)\n -> Append (cost=0.00..233335.00 rows=10000000 width=4)\n(actual time=0.012..1617.268 rows=10000000 loops=1)\n -> Seq Scan on ptable_p0 p (cost=0.00..61101.96\nrows=3332796 width=4) (actual time=0.011..351.137 rows=3332796\nloops=1)\n -> Seq Scan on ptable_p1 p_1 (cost=0.00..61106.25\nrows=3333025 width=4) (actual time=0.005..272.925 rows=3333025\nloops=1)\n -> Seq Scan on ptable_p2 p_2 (cost=0.00..61126.79\nrows=3334179 width=4) (actual 
time=0.006..416.141 rows=3334179\nloops=1)\n -> Hash (cost=1.50..1.50 rows=50 width=4) (actual\ntime=0.033..0.034 rows=50 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on ftable f (cost=0.00..1.50 rows=50\nwidth=4) (actual time=0.004..0.017 rows=50 loops=1)\n Planning Time: 0.286 ms\n Execution Time: 2333.264 ms\n(12 rows)\n\nWe can manually rewrite this query as follows:\n\npostgres=# explain analyze select count(*) from (\n select * from ptable_p0 p, ftable f where p.fkey =\nf.pkey union all\n select * from ptable_p1 p, ftable f where p.fkey =\nf.pkey union all\n select * from ptable_p2 p, ftable f where p.fkey = f.pkey) subqry;\n\nBecause Append does not process tuples that shall have no matched\ntuples in ftable,\nthis query has cheaper cost and short query execution time.\n(2333ms --> 1396ms)\n\npostgres=# explain analyze select count(*) from (\n select * from ptable_p0 p, ftable f where p.fkey =\nf.pkey union all\n select * from ptable_p1 p, ftable f where p.fkey =\nf.pkey union all\n select * from ptable_p2 p, ftable f where p.fkey = f.pkey) subqry;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=210478.25..210478.26 rows=1 width=8) (actual\ntime=1396.024..1396.024 rows=1 loops=1)\n -> Append (cost=2.12..210353.14 rows=50042 width=0) (actual\ntime=0.058..1393.008 rows=50000 loops=1)\n -> Subquery Scan on \"*SELECT* 1\" (cost=2.12..70023.66\nrows=16726 width=0) (actual time=0.057..573.197 rows=16789 loops=1)\n -> Hash Join (cost=2.12..69856.40 rows=16726\nwidth=72) (actual time=0.056..571.718 rows=16789 loops=1)\n Hash Cond: (p.fkey = f.pkey)\n -> Seq Scan on ptable_p0 p (cost=0.00..61101.96\nrows=3332796 width=4) (actual time=0.009..255.791 rows=3332796\nloops=1)\n -> Hash (cost=1.50..1.50 rows=50 width=4)\n(actual time=0.034..0.035 rows=50 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan 
on ftable f (cost=0.00..1.50\nrows=50 width=4) (actual time=0.004..0.019 rows=50 loops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=2.12..70027.43\nrows=16617 width=0) (actual time=0.036..409.712 rows=16578 loops=1)\n -> Hash Join (cost=2.12..69861.26 rows=16617\nwidth=72) (actual time=0.036..408.626 rows=16578 loops=1)\n Hash Cond: (p_1.fkey = f_1.pkey)\n -> Seq Scan on ptable_p1 p_1\n(cost=0.00..61106.25 rows=3333025 width=4) (actual time=0.005..181.422\nrows=3333025 loops=1)\n -> Hash (cost=1.50..1.50 rows=50 width=4)\n(actual time=0.020..0.020 rows=50 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on ftable f_1\n(cost=0.00..1.50 rows=50 width=4) (actual time=0.004..0.011 rows=50\nloops=1)\n -> Subquery Scan on \"*SELECT* 3\" (cost=2.12..70051.84\nrows=16699 width=0) (actual time=0.025..407.103 rows=16633 loops=1)\n -> Hash Join (cost=2.12..69884.85 rows=16699\nwidth=72) (actual time=0.025..406.048 rows=16633 loops=1)\n Hash Cond: (p_2.fkey = f_2.pkey)\n -> Seq Scan on ptable_p2 p_2\n(cost=0.00..61126.79 rows=3334179 width=4) (actual time=0.004..181.015\nrows=3334179 loops=1)\n -> Hash (cost=1.50..1.50 rows=50 width=4)\n(actual time=0.014..0.014 rows=50 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on ftable f_2\n(cost=0.00..1.50 rows=50 width=4) (actual time=0.003..0.008 rows=50\nloops=1)\n Planning Time: 0.614 ms\n Execution Time: 1396.131 ms\n(25 rows)\n\nHow about your opinions for this kind of asymmetric partition-wise\nJOIN support by the optimizer?\nI think we can harmlessly push-down inner-join and left-join if\npartition-leaf is left side.\n\nProbably, we need to implement two key functionalities.\n1. Construction of RelOpInfo for join on non-partition table and\npartition-leafs for each pairs.\n Instead of JoinPaths, this logic adds AppendPath that takes\nasymmetric partition-wise join\n paths as sub-paths. Other optimization logic is equivalent as we\nare currently doing.\n2. 
Allow to share the hash-table built from table scan distributed to\nindividual partition leafs.\n In the above example, SeqScan on ftable and relevant Hash path\nwill make identical hash-\n table for the upcoming hash-join. If sibling paths have equivalent\nresults, it is reasonable to\n reuse it.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Mon, 12 Aug 2019 15:03:14 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Hello,\n\nEven though nobody has respond the thread, I tried to make a prototype of\nthe asymmetric partition-wise join support.\nThis feature tries to join non-partitioned and partitioned relation\nbefore append.\n\nSee the example below:\n\ncreate table ptable (dist int, a int, b int) partition by hash (dist);\ncreate table ptable_p0 partition of ptable for values with (modulus 3,\nremainder 0);\ncreate table ptable_p1 partition of ptable for values with (modulus 3,\nremainder 1);\ncreate table ptable_p2 partition of ptable for values with (modulus 3,\nremainder 2);\ncreate table t1 (aid int, label text);\ncreate table t2 (bid int, label text);\ninsert into ptable (select x, (1000*random())::int,\n(1000*random())::int from generate_series(1,1000000) x);\ninsert into t1 (select x, md5(x::text) from generate_series(1,50) x);\ninsert into t2 (select x, md5(x::text) from generate_series(1,50) x);\nvacuum analyze ptable;\nvacuum analyze t1;\nvacuum analyze t2;\n\nptable.a has values between 0 and 1000, and t1.aid has values between 1 and 50.\nTherefore, tables join on ptable and t1 by a=aid can reduce almost 95% rows.\nOn the other hands, t1 is not partitioned and join-keys are not partition keys.\nSo, Append must process million rows first, then HashJoin processes\nthe rows read\nfrom the partitioned table, and 95% of them are eventually dropped.\nOn the other words, 95% of jobs by Append are waste of time and CPU cycles.\n\npostgres=# explain select * from ptable, t1 where a = aid;\n QUERY PLAN\n------------------------------------------------------------------------------\n Hash Join (cost=2.12..24658.62 rows=49950 width=49)\n Hash Cond: (ptable_p0.a = t1.aid)\n -> Append (cost=0.00..20407.00 rows=1000000 width=12)\n -> Seq Scan on ptable_p0 (cost=0.00..5134.63 rows=333263 width=12)\n -> Seq Scan on ptable_p1 (cost=0.00..5137.97 rows=333497 width=12)\n -> Seq Scan on ptable_p2 (cost=0.00..5134.40 rows=333240 width=12)\n -> Hash (cost=1.50..1.50 
rows=50 width=37)\n -> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)\n(8 rows)\n\nThe asymmetric partitionwise join allows to join non-partitioned tables and\npartitioned tables prior to Append.\n\npostgres=# set enable_partitionwise_join = on;\nSET\npostgres=# explain select * from ptable, t1 where a = aid;\n QUERY PLAN\n------------------------------------------------------------------------------\n Append (cost=2.12..19912.62 rows=49950 width=49)\n -> Hash Join (cost=2.12..6552.96 rows=16647 width=49)\n Hash Cond: (ptable_p0.a = t1.aid)\n -> Seq Scan on ptable_p0 (cost=0.00..5134.63 rows=333263 width=12)\n -> Hash (cost=1.50..1.50 rows=50 width=37)\n -> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)\n -> Hash Join (cost=2.12..6557.29 rows=16658 width=49)\n Hash Cond: (ptable_p1.a = t1.aid)\n -> Seq Scan on ptable_p1 (cost=0.00..5137.97 rows=333497 width=12)\n -> Hash (cost=1.50..1.50 rows=50 width=37)\n -> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)\n -> Hash Join (cost=2.12..6552.62 rows=16645 width=49)\n Hash Cond: (ptable_p2.a = t1.aid)\n -> Seq Scan on ptable_p2 (cost=0.00..5134.40 rows=333240 width=12)\n -> Hash (cost=1.50..1.50 rows=50 width=37)\n -> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)\n(16 rows)\n\nWe can consider the table join ptable X t1 above is equivalent to:\n (ptable_p0 + ptable_p1 + ptable_p2) X t1\n= (ptable_p0 X t1) + (ptable_p1 X t1) + (ptable_p2 X t1)\nIt returns an equivalent result, however, rows are already reduced by HashJoin\nin the individual leaf of Append, so CPU-cycles consumed by Append node can\nbe cheaper.\n\nOn the other hands, it has a downside because t1 must be read 3 times and\nhash table also must be built 3 times. It increases the expected cost,\nso planner\nmay not choose the asymmetric partition-wise join plan.\n\nOne idea I have is, sibling HashJoin shares a hash table that was built once\nby any of the sibling Hash plan. 
Right now, it is not implemented yet.\n\nHow about your thought for this feature?\n\nBest regards,\n\n2019年8月12日(月) 15:03 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> Hello,\n>\n> PostgreSQL optimizer right now considers join pairs on only\n> non-partition - non-partition or\n> partition-leaf - partition-leaf relations. On the other hands, it is\n> harmless and makes sense to\n> consider a join pair on non-partition - partition-leaf.\n>\n> See the example below. ptable is partitioned by hash, and contains 10M\n> rows. ftable is not\n> partitioned and contains 50 rows. Most of ptable::fkey shall not have\n> matched rows in this\n> join.\n>\n> create table ptable (fkey int, dist text) partition by hash (dist);\n> create table ptable_p0 partition of ptable for values with (modulus 3,\n> remainder 0);\n> create table ptable_p1 partition of ptable for values with (modulus 3,\n> remainder 1);\n> create table ptable_p2 partition of ptable for values with (modulus 3,\n> remainder 2);\n> insert into ptable (select x % 10000, md5(x::text) from\n> generate_series(1,10000000) x);\n>\n> create table ftable (pkey int primary key, memo text);\n> insert into ftable (select x, 'ftable__#' || x::text from\n> generate_series(1,50) x);\n> vacuum analyze;\n>\n> postgres=# explain analyze select count(*) from ptable p, ftable f\n> where p.fkey = f.pkey;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=266393.38..266393.39 rows=1 width=8) (actual\n> time=2333.193..2333.194 rows=1 loops=1)\n> -> Hash Join (cost=2.12..260143.38 rows=2500000 width=0) (actual\n> time=0.056..2330.079 rows=50000 loops=1)\n> Hash Cond: (p.fkey = f.pkey)\n> -> Append (cost=0.00..233335.00 rows=10000000 width=4)\n> (actual time=0.012..1617.268 rows=10000000 loops=1)\n> -> Seq Scan on ptable_p0 p (cost=0.00..61101.96\n> rows=3332796 width=4) (actual time=0.011..351.137 rows=3332796\n> 
loops=1)\n> -> Seq Scan on ptable_p1 p_1 (cost=0.00..61106.25\n> rows=3333025 width=4) (actual time=0.005..272.925 rows=3333025\n> loops=1)\n> -> Seq Scan on ptable_p2 p_2 (cost=0.00..61126.79\n> rows=3334179 width=4) (actual time=0.006..416.141 rows=3334179\n> loops=1)\n> -> Hash (cost=1.50..1.50 rows=50 width=4) (actual\n> time=0.033..0.034 rows=50 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on ftable f (cost=0.00..1.50 rows=50\n> width=4) (actual time=0.004..0.017 rows=50 loops=1)\n> Planning Time: 0.286 ms\n> Execution Time: 2333.264 ms\n> (12 rows)\n>\n> We can manually rewrite this query as follows:\n>\n> postgres=# explain analyze select count(*) from (\n> select * from ptable_p0 p, ftable f where p.fkey =\n> f.pkey union all\n> select * from ptable_p1 p, ftable f where p.fkey =\n> f.pkey union all\n> select * from ptable_p2 p, ftable f where p.fkey = f.pkey) subqry;\n>\n> Because Append does not process tuples that shall have no matched\n> tuples in ftable,\n> this query has cheaper cost and short query execution time.\n> (2333ms --> 1396ms)\n>\n> postgres=# explain analyze select count(*) from (\n> select * from ptable_p0 p, ftable f where p.fkey =\n> f.pkey union all\n> select * from ptable_p1 p, ftable f where p.fkey =\n> f.pkey union all\n> select * from ptable_p2 p, ftable f where p.fkey = f.pkey) subqry;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=210478.25..210478.26 rows=1 width=8) (actual\n> time=1396.024..1396.024 rows=1 loops=1)\n> -> Append (cost=2.12..210353.14 rows=50042 width=0) (actual\n> time=0.058..1393.008 rows=50000 loops=1)\n> -> Subquery Scan on \"*SELECT* 1\" (cost=2.12..70023.66\n> rows=16726 width=0) (actual time=0.057..573.197 rows=16789 loops=1)\n> -> Hash Join (cost=2.12..69856.40 rows=16726\n> width=72) (actual time=0.056..571.718 rows=16789 loops=1)\n> Hash Cond: 
(p.fkey = f.pkey)\n> -> Seq Scan on ptable_p0 p (cost=0.00..61101.96\n> rows=3332796 width=4) (actual time=0.009..255.791 rows=3332796\n> loops=1)\n> -> Hash (cost=1.50..1.50 rows=50 width=4)\n> (actual time=0.034..0.035 rows=50 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on ftable f (cost=0.00..1.50\n> rows=50 width=4) (actual time=0.004..0.019 rows=50 loops=1)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=2.12..70027.43\n> rows=16617 width=0) (actual time=0.036..409.712 rows=16578 loops=1)\n> -> Hash Join (cost=2.12..69861.26 rows=16617\n> width=72) (actual time=0.036..408.626 rows=16578 loops=1)\n> Hash Cond: (p_1.fkey = f_1.pkey)\n> -> Seq Scan on ptable_p1 p_1\n> (cost=0.00..61106.25 rows=3333025 width=4) (actual time=0.005..181.422\n> rows=3333025 loops=1)\n> -> Hash (cost=1.50..1.50 rows=50 width=4)\n> (actual time=0.020..0.020 rows=50 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on ftable f_1\n> (cost=0.00..1.50 rows=50 width=4) (actual time=0.004..0.011 rows=50\n> loops=1)\n> -> Subquery Scan on \"*SELECT* 3\" (cost=2.12..70051.84\n> rows=16699 width=0) (actual time=0.025..407.103 rows=16633 loops=1)\n> -> Hash Join (cost=2.12..69884.85 rows=16699\n> width=72) (actual time=0.025..406.048 rows=16633 loops=1)\n> Hash Cond: (p_2.fkey = f_2.pkey)\n> -> Seq Scan on ptable_p2 p_2\n> (cost=0.00..61126.79 rows=3334179 width=4) (actual time=0.004..181.015\n> rows=3334179 loops=1)\n> -> Hash (cost=1.50..1.50 rows=50 width=4)\n> (actual time=0.014..0.014 rows=50 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on ftable f_2\n> (cost=0.00..1.50 rows=50 width=4) (actual time=0.003..0.008 rows=50\n> loops=1)\n> Planning Time: 0.614 ms\n> Execution Time: 1396.131 ms\n> (25 rows)\n>\n> How about your opinions for this kind of asymmetric partition-wise\n> JOIN support by the optimizer?\n> I think we can harmlessly push-down inner-join and left-join if\n> partition-leaf is left side.\n>\n> Probably, we 
need to implement two key functionalities.\n> 1. Construction of RelOpInfo for join on non-partition table and\n> partition-leafs for each pairs.\n> Instead of JoinPaths, this logic adds AppendPath that takes\n> asymmetric partition-wise join\n> paths as sub-paths. Other optimization logic is equivalent as we\n> are currently doing.\n> 2. Allow to share the hash-table built from table scan distributed to\n> individual partition leafs.\n> In the above example, SeqScan on ftable and relevant Hash path\n> will make identical hash-\n> table for the upcoming hash-join. If sibling paths have equivalent\n> results, it is reasonable to\n> reuse it.\n>\n> Best regards,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Fri, 23 Aug 2019 01:05:19 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Fri, Aug 23, 2019 at 4:05 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> We can consider the table join ptable X t1 above is equivalent to:\n> (ptable_p0 + ptable_p1 + ptable_p2) X t1\n> = (ptable_p0 X t1) + (ptable_p1 X t1) + (ptable_p2 X t1)\n> It returns an equivalent result, however, rows are already reduced by HashJoin\n> in the individual leaf of Append, so CPU-cycles consumed by Append node can\n> be cheaper.\n>\n> On the other hands, it has a downside because t1 must be read 3 times and\n> hash table also must be built 3 times. It increases the expected cost,\n> so planner\n> may not choose the asymmetric partition-wise join plan.\n\nWhat if you include the partition constraint as a filter on t1? So you get:\n\nptable X t1 =\n (ptable_p0 X (σ hash(dist)%4=0 (t1))) +\n (ptable_p1 X (σ hash(dist)%4=1 (t1))) +\n (ptable_p2 X (σ hash(dist)%4=2 (t1))) +\n (ptable_p3 X (σ hash(dist)%4=3 (t1)))\n\nPros:\n1. The hash tables will not contain unnecessary junk.\n2. You'll get the right answer if t1 is on the outer side of an outer join.\n3. If this runs underneath a Parallel Append and t1 is big enough\nthen workers will hopefully cooperate and do a synchronised scan of\nt1.\n4. The filter might enable a selective and efficient plan like an index scan.\n\nCons:\n1. The filter might not enable a selective and efficient plan, and\ntherefore cause extra work.\n\n(It's a little weird in this example because don't usually see hash\nfunctions in WHERE clauses, but that could just as easily be dist\nBETWEEN 1 AND 99 or any other partition constraint.)\n\n> One idea I have is, sibling HashJoin shares a hash table that was built once\n> by any of the sibling Hash plan. Right now, it is not implemented yet.\n\nYeah, I've thought a little bit about that in the context of Parallel\nRepartition. 
I'm interested in combining intra-node partitioning\n(where a single plan node repartitions data among workers on the fly)\nwith inter-node partitioning (like PWJ, where partitions are handled\nby different parts of the plan, considered at planning time); you\nfinish up needing to have nodes in the plan that 'receive' tuples for\neach partition, to match up with the PWJ plan structure. That's not\nentirely unlike CTE references, and not entirely unlike your idea of\nsomehow sharing the same hash table. I ran into a number of problems\nwhile thinking about that, which I should write about in another\nthread.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 24 Aug 2019 10:01:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "2019年8月24日(土) 7:02 Thomas Munro <thomas.munro@gmail.com>:\n>\n> On Fri, Aug 23, 2019 at 4:05 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > We can consider the table join ptable X t1 above is equivalent to:\n> > (ptable_p0 + ptable_p1 + ptable_p2) X t1\n> > = (ptable_p0 X t1) + (ptable_p1 X t1) + (ptable_p2 X t1)\n> > It returns an equivalent result, however, rows are already reduced by HashJoin\n> > in the individual leaf of Append, so CPU-cycles consumed by Append node can\n> > be cheaper.\n> >\n> > On the other hands, it has a downside because t1 must be read 3 times and\n> > hash table also must be built 3 times. It increases the expected cost,\n> > so planner\n> > may not choose the asymmetric partition-wise join plan.\n>\n> What if you include the partition constraint as a filter on t1? So you get:\n>\n> ptable X t1 =\n> (ptable_p0 X (σ hash(dist)%4=0 (t1))) +\n> (ptable_p1 X (σ hash(dist)%4=1 (t1))) +\n> (ptable_p2 X (σ hash(dist)%4=2 (t1))) +\n> (ptable_p3 X (σ hash(dist)%4=3 (t1)))\n>\n> Pros:\n> 1. The hash tables will not contain unnecessary junk.\n> 2. You'll get the right answer if t1 is on the outer side of an outer join.\n> 3. If this runs underneath a Parallel Append and t1 is big enough\n> then workers will hopefully cooperate and do a synchronised scan of\n> t1.\n> 4. The filter might enable a selective and efficient plan like an index scan.\n>\n> Cons:\n> 1. 
The filter might not enable a selective and efficient plan, and\n> therefore cause extra work.\n>\n> (It's a little weird in this example because don't usually see hash\n> functions in WHERE clauses, but that could just as easily be dist\n> BETWEEN 1 AND 99 or any other partition constraint.)\n>\nIt requires the join-key must include the partition key and also must be\nequality-join, doesn't it?\nIf ptable and t1 are joined using ptable.dist = t1.foo, we can distribute\nt1 for each leaf table with \"WHERE hash(foo)%4 = xxx\" according to\nthe partition bounds, indeed.\n\nIn case when some of partition leafs are pruned, it is exactly beneficial\nbecause relevant rows to be referenced by the pruned child relations\nare waste of memory.\n\nOn the other hands, it eventually consumes almost equivalent amount\nof memory to load the inner relations, if no leafs are pruned, and if we\ncould extend the Hash-node to share the hash-table with sibling join-nodess.\n\n> > One idea I have is, sibling HashJoin shares a hash table that was built once\n> > by any of the sibling Hash plan. Right now, it is not implemented yet.\n>\n> Yeah, I've thought a little bit about that in the context of Parallel\n> Repartition. I'm interested in combining intra-node partitioning\n> (where a single plan node repartitions data among workers on the fly)\n> with inter-node partitioning (like PWJ, where partitions are handled\n> by different parts of the plan, considered at planning time); you\n> finish up needing to have nodes in the plan that 'receive' tuples for\n> each partition, to match up with the PWJ plan structure. That's not\n> entirely unlike CTE references, and not entirely unlike your idea of\n> somehow sharing the same hash table. I ran into a number of problems\n> while thinking about that, which I should write about in another\n> thread.\n>\nHmm. 
Do you intend the inner-path may have different behavior according\nto the partition bounds definition where the outer-path to be joined?\nLet me investigate its pros & cons.\n\nThe reasons why I think the idea of sharing the same hash table is reasonable\nin this scenario are:\n1. We can easily extend the idea for parallel optimization. A hash table on DSM\n segment, once built, can be shared by all the siblings in all the\nparallel workers.\n2. We can save the memory consumption regardless of the join-keys and\n partition-keys, even if these are not involved in the query.\n\nOn the other hands, below are the downside. Potentially, combined use of\nyour idea may help these cases:\n3. Distributed inner-relation cannot be outer side of XXX OUTER JOIN.\n4. Hash table contains rows to be referenced by only pruned partition leafs.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Sat, 24 Aug 2019 17:33:01 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Sat, Aug 24, 2019 at 05:33:01PM +0900, Kohei KaiGai wrote:\n> On the other hands, it eventually consumes almost equivalent amount\n> of memory to load the inner relations, if no leafs are pruned, and if we\n> could extend the Hash-node to share the hash-table with sibling\n> join-nodess.\n\nThe patch crashes when running the regression tests, per the report of\nthe automatic patch tester. Could you look at that? I have moved the\npatch to nexf CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 12:24:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Hello,\n\nThis crash was reproduced on our environment also.\nIt looks to me adjust_child_relids_multilevel() didn't expect a case\nwhen supplied 'relids'\n(partially) indicate normal and non-partitioned relation.\nIt tries to build a new 'parent_relids' that is a set of\nappinfo->parent_relid related to the\nsupplied 'child_relids'. However, bits in child_relids that indicate\nnormal relations are\nunintentionally dropped here. Then, adjust_child_relids_multilevel()\ngoes to an infinite\nrecursion until stack limitation.\n\nThe attached v2 fixed the problem, and regression test finished correctly.\n\nBest regards,\n\n2019年12月1日(日) 12:24 Michael Paquier <michael@paquier.xyz>:\n>\n> On Sat, Aug 24, 2019 at 05:33:01PM +0900, Kohei KaiGai wrote:\n> > On the other hands, it eventually consumes almost equivalent amount\n> > of memory to load the inner relations, if no leafs are pruned, and if we\n> > could extend the Hash-node to share the hash-table with sibling\n> > join-nodess.\n>\n> The patch crashes when running the regression tests, per the report of\n> the automatic patch tester. Could you look at that? I have moved the\n> patch to nexf CF, waiting on author.\n> --\n> Michael\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Fri, 27 Dec 2019 16:34:30 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Hi Thomas,\n\nOn 12/27/19 2:34 AM, Kohei KaiGai wrote:\n >\n> This crash was reproduced on our environment also.\n> It looks to me adjust_child_relids_multilevel() didn't expect a case\n> when supplied 'relids'\n> (partially) indicate normal and non-partitioned relation.\n> It tries to build a new 'parent_relids' that is a set of\n> appinfo->parent_relid related to the\n> supplied 'child_relids'. However, bits in child_relids that indicate\n> normal relations are\n> unintentionally dropped here. Then, adjust_child_relids_multilevel()\n> goes to an infinite\n> recursion until stack limitation.\n> \n> The attached v2 fixed the problem, and regression test finished correctly.\n\nAny thoughts on the new version of this patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 10:44:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "> On 27 Dec 2019, at 08:34, Kohei KaiGai <kaigai@heterodb.com> wrote:\n\n> The attached v2 fixed the problem, and regression test finished correctly.\n\nThis patch no longer applies to HEAD, please submit an rebased version.\nMarking the entry Waiting on Author in the meantime.\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 1 Jul 2020 11:10:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 12/27/19 12:34 PM, Kohei KaiGai wrote:\n> The attached v2 fixed the problem, and regression test finished correctly.\nUsing your patch I saw incorrect value of predicted rows at the top node \nof the plan: \"Append (cost=270.02..35165.37 rows=40004 width=16)\"\nFull explain of the query plan see in attachment - \nexplain_with_asymmetric.sql\n\nif I disable enable_partitionwise_join then:\n\"Hash Join (cost=270.02..38855.25 rows=10001 width=16)\"\nFull explain - explain_no_asymmetric.sql\n\nI thought that is the case of incorrect usage of cached values of \nnorm_selec, but it is a corner-case problem of the eqjoinsel() routine :\n\nselectivity = 1/size_of_larger_relation; (selfuncs.c:2567)\ntuples = selectivity * outer_tuples * inner_tuples; (costsize.c:4607)\n\ni.e. number of tuples depends only on size of smaller relation.\nIt is not a bug of your patch but I think you need to know because it \nmay affect on planner decision.\n\n===\nP.S. Test case:\nCREATE TABLE t0 (a serial, b int);\nINSERT INTO t0 (b) (SELECT * FROM generate_series(1e4, 2e4) as g);\nCREATE TABLE parts (a serial, b int) PARTITION BY HASH(a)\nINSERT INTO parts (b) (SELECT * FROM generate_series(1, 1e6) as g);\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Mon, 6 Jul 2020 13:46:23 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 7/1/20 2:10 PM, Daniel Gustafsson wrote:\n>> On 27 Dec 2019, at 08:34, Kohei KaiGai <kaigai@heterodb.com> wrote:\n> \n>> The attached v2 fixed the problem, and regression test finished correctly.\n> \n> This patch no longer applies to HEAD, please submit an rebased version.\n> Marking the entry Waiting on Author in the meantime.\nRebased version of the patch on current master (d259afa736).\n\nI rebased it because it is a base of my experimental feature than we \ndon't break partitionwise join of a relation with foreign partition and \na local relation if we have info that remote server has foreign table \nlink to the local relation (by analogy with shippable extensions).\n\nMaybe mark as 'Needs review'?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Fri, 21 Aug 2020 11:02:30 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "> On 21 Aug 2020, at 08:02, Andrey V. Lepikhov <a.lepikhov@postgrespro.ru> wrote:\n> \n> On 7/1/20 2:10 PM, Daniel Gustafsson wrote:\n>>> On 27 Dec 2019, at 08:34, Kohei KaiGai <kaigai@heterodb.com> wrote:\n>>> The attached v2 fixed the problem, and regression test finished correctly.\n>> This patch no longer applies to HEAD, please submit an rebased version.\n>> Marking the entry Waiting on Author in the meantime.\n> Rebased version of the patch on current master (d259afa736).\n> \n> I rebased it because it is a base of my experimental feature than we don't break partitionwise join of a relation with foreign partition and a local relation if we have info that remote server has foreign table link to the local relation (by analogy with shippable extensions).\n> \n> Maybe mark as 'Needs review'?\n\nThanks for the rebase, I've updated the commitfest entry to reflect that it\nneeds a round of review.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 25 Aug 2020 11:12:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Sat, Aug 24, 2019 at 2:03 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> On Sat, Aug 24, 2019 at 7:02, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Fri, Aug 23, 2019 at 4:05 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > We can consider the table join ptable X t1 above is equivalent to:\n> > > (ptable_p0 + ptable_p1 + ptable_p2) X t1\n> > > = (ptable_p0 X t1) + (ptable_p1 X t1) + (ptable_p2 X t1)\n> > > It returns an equivalent result, however, rows are already reduced by HashJoin\n> > > in the individual leaf of Append, so CPU-cycles consumed by Append node can\n> > > be cheaper.\n> > >\n> > > On the other hands, it has a downside because t1 must be read 3 times and\n> > > hash table also must be built 3 times. It increases the expected cost,\n> > > so planner\n> > > may not choose the asymmetric partition-wise join plan.\n> >\n> > What if you include the partition constraint as a filter on t1? So you get:\n> >\n> > ptable X t1 =\n> > (ptable_p0 X (σ hash(dist)%4=0 (t1))) +\n> > (ptable_p1 X (σ hash(dist)%4=1 (t1))) +\n> > (ptable_p2 X (σ hash(dist)%4=2 (t1))) +\n> > (ptable_p3 X (σ hash(dist)%4=3 (t1)))\n> >\n> > Pros:\n> > 1. The hash tables will not contain unnecessary junk.\n> > 2. You'll get the right answer if t1 is on the outer side of an outer join.\n> > 3. If this runs underneath a Parallel Append and t1 is big enough\n> > then workers will hopefully cooperate and do a synchronised scan of\n> > t1.\n> > 4. The filter might enable a selective and efficient plan like an index scan.\n> >\n> > Cons:\n> > 1. The filter might not enable a selective and efficient plan, and\n> > therefore cause extra work.\n> >\n> > (It's a little weird in this example because don't usually see hash\n> > functions in WHERE clauses, but that could just as easily be dist\n> > BETWEEN 1 AND 99 or any other partition constraint.)\n> >\n> It requires the join-key must include the partition key and also must be\n> equality-join, doesn't it?\n> If ptable and t1 are joined using ptable.dist = t1.foo, we can distribute\n> t1 for each leaf table with \"WHERE hash(foo)%4 = xxx\" according to\n> the partition bounds, indeed.\n>\n> In case when some of partition leafs are pruned, it is exactly beneficial\n> because relevant rows to be referenced by the pruned child relations\n> are waste of memory.\n>\n> On the other hands, it eventually consumes almost equivalent amount\n> of memory to load the inner relations, if no leafs are pruned, and if we\n> could extend the Hash-node to share the hash-table with sibling join-nodess.\n>\n> > > One idea I have is, sibling HashJoin shares a hash table that was built once\n> > > by any of the sibling Hash plan. Right now, it is not implemented yet.\n> >\n> > Yeah, I've thought a little bit about that in the context of Parallel\n> > Repartition. I'm interested in combining intra-node partitioning\n> > (where a single plan node repartitions data among workers on the fly)\n> > with inter-node partitioning (like PWJ, where partitions are handled\n> > by different parts of the plan, considered at planning time); you\n> > finish up needing to have nodes in the plan that 'receive' tuples for\n> > each partition, to match up with the PWJ plan structure. That's not\n> > entirely unlike CTE references, and not entirely unlike your idea of\n> > somehow sharing the same hash table. I ran into a number of problems\n> > while thinking about that, which I should write about in another\n> > thread.\n> >\n> Hmm. Do you intend the inner-path may have different behavior according\n> to the partition bounds definition where the outer-path to be joined?\n> Let me investigate its pros & cons.\n>\n> The reasons why I think the idea of sharing the same hash table is reasonable\n> in this scenario are:\n> 1. We can easily extend the idea for parallel optimization. A hash table on DSM\n> segment, once built, can be shared by all the siblings in all the\n> parallel workers.\n> 2. We can save the memory consumption regardless of the join-keys and\n> partition-keys, even if these are not involved in the query.\n>\n> On the other hands, below are the downside. Potentially, combined use of\n> your idea may help these cases:\n> 3. Distributed inner-relation cannot be outer side of XXX OUTER JOIN.\n> 4. Hash table contains rows to be referenced by only pruned partition leafs.\n>\n\n+ many, for the sharable hash of the inner table of the join. IMHO,\nthis could be the most interesting and captivating thing about this feature.\nBut might be a complicated piece, is that still on the plan?\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 26 Aug 2020 19:02:55 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 21.08.2020 09:02, Andrey V. Lepikhov wrote:\n> On 7/1/20 2:10 PM, Daniel Gustafsson wrote:\n>>> On 27 Dec 2019, at 08:34, Kohei KaiGai <kaigai@heterodb.com> wrote:\n>>\n>>> The attached v2 fixed the problem, and regression test finished \n>>> correctly.\n>>\n>> This patch no longer applies to HEAD, please submit an rebased version.\n>> Marking the entry Waiting on Author in the meantime.\n> Rebased version of the patch on current master (d259afa736).\n>\n> I rebased it because it is a base of my experimental feature than we \n> don't break partitionwise join of a relation with foreign partition \n> and a local relation if we have info that remote server has foreign \n> table link to the local relation (by analogy with shippable extensions).\n>\n> Maybe mark as 'Needs review'?\n>\nStatus update for a commitfest entry.\n\nAccording to cfbot, the patch fails to apply. Could you please send a \nrebased version?\n\nThis thread was inactive for quite some time. Is anyone going to \ncontinue working on it?\n\nI see some interest in the idea of sharable hash, but I don't see even a \nprototype in this thread. So, probably, it is a matter of a separate \ndiscussion.\n\nAlso, I took a look at the code. It looks like it needs some extra work. \nI am not a big expert in this area, so I'm sorry if questions are obvious.\n\n1. What would happen if this assumption is not met?\n\n+ * MEMO: We assume this pathlist keeps at least one AppendPath that\n+ * represents partitioned table-scan, symmetric or asymmetric\n+ * partition-wise join. It is not correct right now, however, a \nhook\n+ * on add_path() to give additional decision for path removel \nallows\n+ * to retain this kind of AppendPath, regardless of its cost.\n\n2. Why do we wrap extract_asymmetric_partitionwise_subjoin() call into \nPG_TRY/PG_CATCH? What errors do we expect?\n\n3. It looks like a crutch. If it isn't, I'd like to see a better comment \nabout why \"dynamic programming\" is not applicable here.\nAnd shouldn't we also handle a root->join_cur_level?\n\n+ /* temporary disables \"dynamic programming\" algorithm */\n+ root->join_rel_level = NULL;\n\n4. This change looks like it can lead to a memory leak for old code. \nMaybe it is never the case, but again I think it worth a comment.\n\n- /* If there's nothing to adjust, don't call this function. */\n- Assert(nappinfos >= 1 && appinfos != NULL);\n+ /* If there's nothing to adjust, just return a duplication */\n+ if (nappinfos == 0)\n+ return copyObject(node);\n\n5. extract_asymmetric_partitionwise_subjoin() lacks a comment\n\nThe new status of this patch is: Waiting on Author\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 9 Nov 2020 13:53:47 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 09.11.2020 13:53, Anastasia Lubennikova wrote:\n> On 21.08.2020 09:02, Andrey V. Lepikhov wrote:\n>> On 7/1/20 2:10 PM, Daniel Gustafsson wrote:\n>>>> On 27 Dec 2019, at 08:34, Kohei KaiGai <kaigai@heterodb.com> wrote:\n>>>\n>>>> The attached v2 fixed the problem, and regression test finished \n>>>> correctly.\n>>>\n>>> This patch no longer applies to HEAD, please submit an rebased version.\n>>> Marking the entry Waiting on Author in the meantime.\n>> Rebased version of the patch on current master (d259afa736).\n>>\n>> I rebased it because it is a base of my experimental feature than we \n>> don't break partitionwise join of a relation with foreign partition \n>> and a local relation if we have info that remote server has foreign \n>> table link to the local relation (by analogy with shippable extensions).\n>>\n>> Maybe mark as 'Needs review'?\n>>\n> Status update for a commitfest entry.\n>\n> According to cfbot, the patch fails to apply. Could you please send a \n> rebased version?\n>\n> This thread was inactive for quite some time. Is anyone going to \n> continue working on it?\n>\n> I see some interest in the idea of sharable hash, but I don't see even \n> a prototype in this thread. So, probably, it is a matter of a separate \n> discussion.\n>\n> Also, I took a look at the code. It looks like it needs some extra \n> work. I am not a big expert in this area, so I'm sorry if questions \n> are obvious.\n>\n> 1. What would happen if this assumption is not met?\n>\n> + * MEMO: We assume this pathlist keeps at least one \n> AppendPath that\n> + * represents partitioned table-scan, symmetric or asymmetric\n> + * partition-wise join. It is not correct right now, however, \n> a hook\n> + * on add_path() to give additional decision for path removel \n> allows\n> + * to retain this kind of AppendPath, regardless of its cost.\n>\n> 2. Why do we wrap extract_asymmetric_partitionwise_subjoin() call into \n> PG_TRY/PG_CATCH? What errors do we expect?\n>\n> 3. It looks like a crutch. If it isn't, I'd like to see a better \n> comment about why \"dynamic programming\" is not applicable here.\n> And shouldn't we also handle a root->join_cur_level?\n>\n> + /* temporary disables \"dynamic programming\" algorithm */\n> + root->join_rel_level = NULL;\n>\n> 4. This change looks like it can lead to a memory leak for old code. \n> Maybe it is never the case, but again I think it worth a comment.\n>\n> - /* If there's nothing to adjust, don't call this function. */\n> - Assert(nappinfos >= 1 && appinfos != NULL);\n> + /* If there's nothing to adjust, just return a duplication */\n> + if (nappinfos == 0)\n> + return copyObject(node);\n>\n> 5. extract_asymmetric_partitionwise_subjoin() lacks a comment\n>\n> The new status of this patch is: Waiting on Author\n>\nStatus update for a commitfest entry.\n\nThis entry was inactive during this CF, so I've marked it as returned \nwith feedback. Feel free to resubmit an updated version to a future \ncommitfest.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 30 Nov 2020 17:43:09 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 11/30/20 7:43 PM, Anastasia Lubennikova wrote:\n> This entry was inactive during this CF, so I've marked it as returned \n> with feedback. Feel free to resubmit an updated version to a future \n> commitfest.\n> \nThe attached version is rebased on current master and fixes problems with \ncomplex parameterized plans - the 'reparameterize by child' feature.\nProblems with the reparameterization machinery can be demonstrated by the \nTPC-H benchmark.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Fri, 9 Apr 2021 16:14:29 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 11/30/20 7:43 PM, Anastasia Lubennikova wrote:\n> This entry was inactive during this CF, so I've marked it as returned \n> with feedback. Feel free to resubmit an updated version to a future \n> commitfest. \nI return the patch to the commitfest. My current reason differs from that \nof the original author.\nThis patch can open the door for more complex optimizations in the \npartitionwise join push-down technique.\nI mean, we can push down a join not only of two partitioned tables with \nthe same partition schema, but also of a partitioned (sharded) table with \nan arbitrary subplan that is provably independent of local resources.\n\nExample:\n\nCREATE TABLE p(a int) PARTITION BY HASH (a);\nCREATE TABLE p1 PARTITION OF p FOR VALUES WITH (MODULUS 3, REMAINDER 0);\nCREATE TABLE p2 PARTITION OF p FOR VALUES WITH (MODULUS 3, REMAINDER 1);\nCREATE TABLE p3 PARTITION OF p FOR VALUES WITH (MODULUS 3, REMAINDER 2);\n\nSELECT * FROM p, (SELECT * FROM generate_series(1,2) AS a) AS s\nWHERE p.a=s.a;\n\n Hash Join\n Hash Cond: (p.a = a.a)\n -> Append\n -> Seq Scan on p1 p_1\n -> Seq Scan on p2 p_2\n -> Seq Scan on p3 p_3\n -> Hash\n -> Function Scan on generate_series a\n\nBut with the asymmetric join feature we have the plan:\n\n Append\n -> Hash Join\n Hash Cond: (p_1.a = a.a)\n -> Seq Scan on p1 p_1\n -> Hash\n -> Function Scan on generate_series a\n -> Hash Join\n Hash Cond: (p_2.a = a.a)\n -> Seq Scan on p2 p_2\n -> Hash\n -> Function Scan on generate_series a\n -> Hash Join\n Hash Cond: (p_3.a = a.a)\n -> Seq Scan on p3 p_3\n -> Hash\n -> Function Scan on generate_series a\n\nIn the case of FDW-sharding it means that if we can prove that the inner \nrelation is independent from the execution server, we can push down \nthese joins and execute them in parallel.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Fri, 30 Apr 2021 08:10:19 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Next version of the patch.\nTo search for any problems, I forced this patch during 'make check' \ntests. Some bugs were found and fixed.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 27 May 2021 09:27:51 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Andrey Lepikhov wrote 2021-05-27 07:27:\n> Next version of the patch.\n> For searching any problems I forced this patch during 'make check'\n> tests. Some bugs were found and fixed.\n\nHi.\nI've tested this patch and haven't found issues, but I have some \ncomments.\n\nsrc/backend/optimizer/path/joinrels.c:\n\n1554\n1555 /*\n1556 * Build RelOptInfo on JOIN of each partition of the outer relation \nand the inner\n1557 * relation. Return List of such RelOptInfo's. Return NIL, if at \nleast one of\n1558 * these JOINs are impossible to build.\n1559 */\n1560 static List *\n1561 extract_asymmetric_partitionwise_subjoin(PlannerInfo *root,\n1562 \n RelOptInfo *joinrel,\n1563 \n AppendPath *append_path,\n1564 \n RelOptInfo *inner_rel,\n1565 \n JoinType jointype,\n1566 \n JoinPathExtraData *extra)\n1567 {\n1568 List *result = NIL;\n1569 ListCell *lc;\n1570\n1571 foreach (lc, append_path->subpaths)\n1572 {\n1573 Path *child_path = lfirst(lc);\n1574 RelOptInfo *child_rel = \nchild_path->parent;\n1575 Relids child_join_relids;\n1576 Relids parent_relids;\n1577 RelOptInfo *child_join_rel;\n1578 SpecialJoinInfo *child_sjinfo;\n1579 List *child_restrictlist;\n\n\nVariable names - child_join_rel and child_join_relids seem to be \ninconsistent with rest of the file (I see child_joinrelids in \ntry_partitionwise_join() and child_joinrel in try_partitionwise_join() \nand get_matching_part_pairs()).\n\n\n1595 child_join_rel = build_child_join_rel(root,\n1596 \n child_rel,\n1597 \n inner_rel,\n1598 \n joinrel,\n1599 \n child_restrictlist,\n1600 \n child_sjinfo,\n1601 \n jointype);\n1602 if (!child_join_rel)\n1603 {\n1604 /*\n1605 * If can't build JOIN between \ninner relation and one of the outer\n1606 * partitions - return immediately.\n1607 */\n1608 return NIL;\n1609 }\n\nWhen build_child_join_rel() can return NULL?\nIf I read code correctly, joinrel is created in the beginning of \nbuild_child_join_rel() with makeNode(), makeNode() wraps newNode() and \nnewNode() uses MemoryContextAllocZero()/MemoryContextAllocZeroAligned(), \nwhich would error() on alloc() failure.\n\n1637\n1638 static bool\n1639 is_asymmetric_join_capable(PlannerInfo *root,\n1640 RelOptInfo \n*outer_rel,\n1641 RelOptInfo \n*inner_rel,\n1642 JoinType \njointype)\n1643 {\n\nFunction misses a comment.\n\n1656 /*\n1657 * Don't allow asymmetric JOIN of two append subplans.\n1658 * In the case of a parameterized NL join, a \nreparameterization procedure will\n1659 * lead to large memory allocations and a CPU consumption:\n1660 * each reparameterize will induce subpath duplication, \ncreating new\n1661 * ParamPathInfo instance and increasing of ppilist up to \nnumber of partitions\n1662 * in the inner. Also, if we have many partitions, each \nbitmapset\n1663 * variable will large and many leaks of such variable \n(caused by relid\n1664 * replacement) will highly increase memory consumption.\n1665 * So, we deny such paths for now.\n1666 */\n\n\nMissing word:\neach bitmapset variable will large => each bitmapset variable will be \nlarge\n\n\n1694 foreach (lc, outer_rel->pathlist)\n1695 {\n1696 AppendPath *append_path = lfirst(lc);\n1697\n1698 /*\n1699 * MEMO: We assume this pathlist keeps at least one \nAppendPath that\n1700 * represents partitioned table-scan, symmetric or \nasymmetric\n1701 * partition-wise join. It is not correct right \nnow, however, a hook\n1702 * on add_path() to give additional decision for \npath removal allows\n1703 * to retain this kind of AppendPath, regardless of \nits cost.\n1704 */\n1705 if (IsA(append_path, AppendPath))\n\n\nWhat hook do you refer to?\n\nsrc/backend/optimizer/plan/setrefs.c:\n\n282 /*\n283 * Adjust RT indexes of AppendRelInfos and add to final \nappendrels list.\n284 * We assume the AppendRelInfos were built during planning \nand don't need\n285 * to be copied.\n286 */\n287 foreach(lc, root->append_rel_list)\n288 {\n289 AppendRelInfo *appinfo = lfirst_node(AppendRelInfo, \nlc);\n290 AppendRelInfo *newappinfo;\n291\n292 /* flat copy is enough since all valuable fields are \nscalars */\n293 newappinfo = (AppendRelInfo *) \npalloc(sizeof(AppendRelInfo));\n294 memcpy(newappinfo, appinfo, sizeof(AppendRelInfo));\n\nYou've changed function to copy appinfo, so now comment is incorrect.\n\nsrc/backend/optimizer/util/appendinfo.c:\n588 /* Construct relids set for the immediate parent of the \ngiven child. */\n589 normal_relids = bms_copy(child_relids);\n590 for (cnt = 0; cnt < nappinfos; cnt++)\n591 {\n592 AppendRelInfo *appinfo = appinfos[cnt];\n593\n594 parent_relids = bms_add_member(parent_relids, \nappinfo->parent_relid);\n595 normal_relids = bms_del_member(normal_relids, \nappinfo->child_relid);\n596 }\n597 parent_relids = bms_union(parent_relids, normal_relids);\n\nDo I understand correctly that now parent_relids also contains relids of \nrelations from 'global' inner relation, which we join to childs?\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Fri, 18 Jun 2021 15:02:39 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 18/6/21 15:02, Alexander Pyhalov wrote:\n> Andrey Lepikhov wrote 2021-05-27 07:27:\n>> Next version of the patch.\n>> For searching any problems I forced this patch during 'make check'\n>> tests. Some bugs were found and fixed.\n> \n> Hi.\n> I've tested this patch and haven't found issues, but I have some comments.\nThank you for review!\n> Variable names - child_join_rel and child_join_relids seem to be \n> inconsistent with rest of the file\nfixed\n\n> When build_child_join_rel() can return NULL?\nFixed\n> Missing word:\n> each bitmapset variable will large => each bitmapset variable will be large\nFixed\n> What hook do you refer to?\nRemoved\n> You've changed function to copy appinfo, so now comment is \n> incorrect.\nThanks, fixed\n> Do I understand correctly that now parent_relids also \n> contains relids of\n> relations from 'global' inner relation, which we join to childs?\nYes\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Mon, 5 Jul 2021 12:57:45 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 2:57 AM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 18/6/21 15:02, Alexander Pyhalov wrote:\n> > Andrey Lepikhov wrote 2021-05-27 07:27:\n> >> Next version of the patch.\n> >> For searching any problems I forced this patch during 'make check'\n> >> tests. Some bugs were found and fixed.\n> >\n> > Hi.\n> > I've tested this patch and haven't found issues, but I have some\n> comments.\n> Thank you for review!\n> > Variable names - child_join_rel and child_join_relids seem to be\n> > inconsistent with rest of the file\n> fixed\n>\n> > When build_child_join_rel() can return NULL?\n> Fixed\n> > Missing word:\n> > each bitmapset variable will large => each bitmapset variable will be\n> large\n> Fixed\n> > What hook do you refer to?\n> Removed> You've changed function to copy appinfo, so now comment is\n> incorrect.\n> Thanks, fixed> Do I understand correctly that now parent_relids also\n> contains relids of\n> > relations from 'global' inner relation, which we join to childs?\n> Yes\n>\n> --\n> regards,\n> Andrey Lepikhov\n> Postgres Professional\n>\nHi,\n\nrelations because it could cause CPU and memory huge consumption\nduring reparameterization of NestLoop path.\n\nCPU and memory huge consumption -> huge consumption of CPU and memory\n\n+ * relation. Return List of such RelOptInfo's. Return NIL, if at least one\nof\n+ * these JOINs are impossible to build.\n\nat least one of these JOINs are impossible to build. -> at least one\nof these JOINs is impossible to build.\n\n+ * Can't imagine situation when join relation already exists.\nBut in\n+ * the 'partition_join' regression test it happens.\n+ * It may be an indicator of possible problems.\n\nShould a log be added in the above case ?\n\n+is_asymmetric_join_capable(PlannerInfo *root,\n\nis_asymmetric_join_capable -> is_asymmetric_join_feasible\n\nCheers",
"msg_date": "Mon, 5 Jul 2021 13:15:26 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 5/7/21 23:15, Zhihong Yu wrote:\n> On Mon, Jul 5, 2021 at 2:57 AM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> + * Can't imagine situation when join relation already \n> exists. But in\n> + * the 'partition_join' regression test it happens.\n> + * It may be an indicator of possible problems.\n> Should a log be added in the above case ?\nI made additional analysis of this branch of code. This situation can \nhappen in the case of one child or if we join two plain tables with \na partitioned one. Both situations are legal and I think we don't need to \nadd any log message here.\nOther mistakes were fixed.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Tue, 6 Jul 2021 12:28:05 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Andrey Lepikhov wrote 2021-07-06 12:28:\n> On 5/7/21 23:15, Zhihong Yu wrote:\n>> On Mon, Jul 5, 2021 at 2:57 AM Andrey Lepikhov \n>> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n>> + * Can't imagine situation when join relation already \n>> exists. But in\n>> + * the 'partition_join' regression test it happens.\n>> + * It may be an indicator of possible problems.\n>> Should a log be added in the above case ?\n> I made additional analysis of this branch of code. This situation can\n> happen in the case of one child or if we join two plane tables with\n> partitioned. Both situations are legal and I think we don't needed to\n> add any log message here.\n> Other mistakes were fixed.\n\nHi.\n\nSmall typo in comment in src/backend/optimizer/plan/setrefs.c:\n\n 281\n 282 /*\n 283 * Adjust RT indexes of AppendRelInfos and add to final \nappendrels list.\n 284 * The AppendRelInfos are copied, because as a part of a \nsubplan its could\n 285 * be visited many times in the case of asymmetric join.\n 286 */\n 287 foreach(lc, root->append_rel_list)\n 288 {\n\nits -> it (or they) ?\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Tue, 06 Jul 2021 16:09:19 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 5/7/21 23:15, Zhihong Yu wrote:\n> On Mon, Jul 5, 2021 at 2:57 AM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> + * Can't imagine situation when join relation already \n> exists. But in\n> + * the 'partition_join' regression test it happens.\n> + * It may be an indicator of possible problems.\n> \n> Should a log be added in the above case ?\nI worked more on this case and found a more serious mistake. During \npopulation of additional paths on the existing RelOptInfo we can remove \nsome previously generated paths that are pointed to from a higher-level list of \nsubplans, and it could cause loss of subplan links. I prohibit such \nsituations (you can read comments in the new version of the patch).\nAlso, choosing of a cheapest path after appendrel creation was added.\nUnstable tests were fixed.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 15 Jul 2021 09:32:34 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 11:32 AM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 5/7/21 23:15, Zhihong Yu wrote:\n> > On Mon, Jul 5, 2021 at 2:57 AM Andrey Lepikhov\n> > <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> > + * Can't imagine situation when join relation already\n> > exists. But in\n> > + * the 'partition_join' regression test it happens.\n> > + * It may be an indicator of possible problems.\n> >\n> > Should a log be added in the above case ?\n> I worked more on this case and found more serious mistake. During\n> population of additional paths on the existed RelOptInfo we can remove\n> some previously generated paths that pointed from a higher-level list of\n> subplans and it could cause to lost of subplan links. I prohibit such\n> situation (you can read comments in the new version of the patch).\n> Also, choosing of a cheapest path after appendrel creation was added.\n> Unstable tests were fixed.\n>\n> --\n> regards,\n> Andrey Lepikhov\n> Postgres Professional\n>\n\nPatch is failing the regression, can you please take a look at that.\n\npartition_join ... FAILED 6328 ms\n\n--\nIbrar Ahmed",
"msg_date": "Thu, 15 Jul 2021 18:16:43 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "It looks like this patch needs to be updated. According to http://cfbot.cputube.org/ it applies but doesn't pass any tests. Changing the status to save time for reviewers.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 09 Sep 2021 09:50:46 +0000",
"msg_from": "Aleksander Alekseev <afiskon@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Thu, Sep 09, 2021 at 09:50:46AM +0000, Aleksander Alekseev wrote:\n> It looks like this patch needs to be updated. According to http://cfbot.cputube.org/ it applies but doesn't pass any tests. Changing the status to save time for reviewers.\n> \n> The new status of this patch is: Waiting on Author\n\nJust to give some more info to work on I found this patch made postgres\ncrash with a segmentation fault.\n\n\"\"\"\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x0000556e37ef1b55 in bms_equal (a=0x7f6e37a9c5b0, b=0x7f6e37a9c5b0) at bitmapset.c:126\n126\t\t\tif (shorter->words[i] != longer->words[i])\n\"\"\"\n\nattached are the query that triggers the crash and the backtrace.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Thu, 9 Sep 2021 10:38:33 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 9/9/21 8:38 PM, Jaime Casanova wrote:\n> On Thu, Sep 09, 2021 at 09:50:46AM +0000, Aleksander Alekseev wrote:\n>> It looks like this patch needs to be updated. According to http://cfbot.cputube.org/ it applies but doesn't pass any tests. Changing the status to save time for reviewers.\n>>\n>> The new status of this patch is: Waiting on Author\n> \n> Just to give some more info to work on I found this patch made postgres\n> crash with a segmentation fault.\n> \n> \"\"\"\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 0x0000556e37ef1b55 in bms_equal (a=0x7f6e37a9c5b0, b=0x7f6e37a9c5b0) at bitmapset.c:126\n> 126\t\t\tif (shorter->words[i] != longer->words[i])\n> \"\"\"\n> \n> attached are the query that triggers the crash and the backtrace.\n> \n\nThank you for this good catch!\nThe problem was in the adjust_child_relids_multilevel routine. The \ntmp_result variable sometimes points to the original required_outer.\nThis patch adds new ways in which the optimizer can generate plans. One \npossible way is that the optimizer reparameterizes an inner by a plain relation \nfrom the outer (maybe as a result of a join of the plain relation and a \npartitioned relation). In this case we have to compare tmp_result with the \noriginal pointer to determine whether it was changed.\nThe patch in the attachment fixes this problem. An additional regression test was \nadded.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Tue, 14 Sep 2021 11:37:39 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
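The aliasing hazard Andrey describes — a translation routine that may hand back its input pointer unchanged, so the caller must not free the result unconditionally — can be sketched outside the planner. This is a minimal, hypothetical illustration: `IntSet` and `adjust_set()` stand in for `Bitmapset` and `adjust_child_relids_multilevel()`; only the compare-before-free pattern mirrors the actual fix.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a Bitmapset of relids. */
typedef struct IntSet
{
    int n;
    int vals[8];
} IntSet;

/*
 * Hypothetical stand-in for adjust_child_relids_multilevel(): translate
 * the set by delta, but hand back the input pointer unchanged when there
 * is nothing to translate.
 */
static IntSet *
adjust_set(IntSet *input, int delta)
{
    IntSet *copy;
    int     i;

    if (delta == 0)
        return input;           /* no translation: result aliases the input */

    copy = malloc(sizeof(IntSet));
    memcpy(copy, input, sizeof(IntSet));
    for (i = 0; i < copy->n; i++)
        copy->vals[i] += delta;
    return copy;
}

/*
 * Caller-side cleanup mirroring the fix: free the result only if the
 * routine actually produced a new set.  An unconditional free() here
 * would release required_outer's own storage and leave a dangling
 * pointer — the same class of bug behind the bms_equal() crash above.
 */
static void
cleanup_adjusted(IntSet *required_outer, IntSet *tmp_result)
{
    if (tmp_result != required_outer)
        free(tmp_result);
}
```

The pointer comparison is the whole fix: equality of pointers, not of contents, decides ownership.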
{
"msg_contents": "On 14/9/21 11:37, Andrey V. Lepikhov wrote:\n> Thank you for this good catch!\n> The problem was in the adjust_child_relids_multilevel routine. The \n> tmp_result variable sometimes points to original required_outer.\n> This patch adds new ways which optimizer can generate plans. One \n> possible way is optimizer reparameterizes an inner by a plain relation \n> from the outer (maybe as a result of join of the plain relation and \n> partitioned relation). In this case we have to compare tmp_result with \n> original pointer to realize, it was changed or not.\n> The patch in attachment fixes this problem. Additional regression test \n> added.\n> \nI thought about it more and realized that it isn't necessary to recurse \nin the adjust_child_relids_multilevel() routine if required_outer \ncontains only normal_relids.\nAlso, the regression tests were improved a bit.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Wed, 15 Sep 2021 11:31:15 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Andrey Lepikhov wrote on 2021-09-15 09:31:\n> On 14/9/21 11:37, Andrey V. Lepikhov wrote:\n>> Thank you for this good catch!\n>> The problem was in the adjust_child_relids_multilevel routine. The \n>> tmp_result variable sometimes points to original required_outer.\n>> This patch adds new ways which optimizer can generate plans. One \n>> possible way is optimizer reparameterizes an inner by a plain relation \n>> from the outer (maybe as a result of join of the plain relation and \n>> partitioned relation). In this case we have to compare tmp_result with \n>> original pointer to realize, it was changed or not.\n>> The patch in attachment fixes this problem. Additional regression test \n>> added.\n>> \n> I thought more and realized there isn't necessary to recurse in the\n> adjust_child_relids_multilevel() routine if required_outer contains\n> only\n> normal_relids.\n> Also, regression tests were improved a bit.\n\nHi.\nThe patch no longer applies cleanly, so I rebased it. Attaching the \nrebased version.\nI've looked through it once again and have several questions.\n\n1) In adjust_appendrel_attrs_multilevel(), can it happen that \nchild_relids is a zero-length list (in this case pfree's will fail)? It \nseems not, but should we at least assert this? Note that in \nadjust_appendrel_attrs() we add logic for nappinfos being 0.\n\n2) In try_asymmetric_partitionwise_join() we state that 'Asymmetric join \nisn't needed if the append node has only one child'. This is not \ncompletely correct. Asymmetric join with one partition can be \nadvantageous when JOIN(A, UNION(B)) is more expensive than UNION(JOIN \n(A, B)). The latter is true, for example, when we join a partitioned \ntable having foreign partitions with another foreign table and only one \npartition is left.\nLet's take the attached case (foreign_join.sql). 
When \nlist_length(append_path->subpaths) > 1 is present, we get the following \nplan\n\nset enable_partitionwise_join = on;\n\nexplain SELECT t1.a,t2.b FROM fprt1 t1 INNER JOIN ftprt2_p1 t2 ON (t1.a \n= t2.b) WHERE t1.a < 250 AND t2.c like '%0004' ORDER BY 1,2;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Sort (cost=208.65..208.69 rows=17 width=8)\n Sort Key: t1.a\n -> Hash Join (cost=202.60..208.30 rows=17 width=8)\n Hash Cond: (t1.a = t2.b)\n -> Foreign Scan on ftprt1_p1 t1 (cost=100.00..105.06 rows=125 \nwidth=4)\n -> Hash (cost=102.39..102.39 rows=17 width=4)\n -> Foreign Scan on ftprt2_p1 t2 (cost=100.00..102.39 \nrows=17 width=4)\n\nIn case when we change it to list_length(append_path->subpaths) > 0, we \nget foreign join and cheaper plan:\n\nexplain verbose SELECT t1.a,t2.b FROM fprt1 t1 INNER JOIN ftprt2_p1 t2 \nON (t1.a = t2.b) WHERE t1.a < 250 AND t2.c like '%0004' ORDER BY 1,2;\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=106.15..106.19 rows=17 width=8)\n Output: t1.a, t2.b\n Sort Key: t1.a\n -> Foreign Scan (cost=102.26..105.80 rows=17 width=8)\n Output: t1.a, t2.b\n Relations: (public.ftprt1_p1 t1) INNER JOIN (public.ftprt2_p1 \nt2)\n Remote SQL: SELECT r4.a, r2.b FROM (public.fprt1_p1 r4 INNER \nJOIN public.fprt2_p1 r2 ON (((r4.a = r2.b)) AND ((r2.c ~~ '%0004')) AND \n((r4.a < 250))))\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Mon, 17 Jan 2022 13:42:10 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Hi Alexander,\nHi Andrey,\n\nThank you for your work on this subject.\n\nOn Mon, Jan 17, 2022 at 1:42 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> The patch does not longer apply cleanly, so I rebased it. Attaching\n> rebased version.\n\nNot surprising that the patch doesn't apply after 1.5 years since the\nlast message. Could you please rebase it?\n\nI read the thread and the patch. The patch improves the joining of\npartitioned tables with non-partitioned relations. Let's denote the\nnon-partitioned relation as A, and the partitions as P1 ... PN. The patch\nallows building Append(Join(A, P1), ..., Join(A, PN)) instead of Join(A,\nAppend(P1, ..., PN)). That could be cheaper because it's generally\ncheaper to join small pieces than to do one big join. The\ndrawback is the need to scan A multiple times. But is this really\nnecessary and acceptable? Let's consider multiple options.\n\n1) A is a non-table. For instance, A is a function scan. In this case,\ndoing multiple scans of A is not just expensive, but could lead to\nunexpected side effects. When the user includes a function once in\nthe FROM clause, she expects this function to be evaluated once. I\npropose that we should materialize a scan of non-table relations. So,\nthe materialized representation will be scanned multiple times, but the\nsource is only scanned once. That would be similar to a CTE.\n2) A is the table to be scanned with the parameterized path in the\ninner part of the nested loop join. In this case, there is no big\nscan of A and nothing to materialize.\n3) A is the table to be used in a merge join or the outer part of a\nnested loop join. In this case, it would be nice to consider\nmaterialization. It's not always good to materialize, because\nmaterialization has its additional costs. I think that could be a\ncost-based decision.\n4) A is used in the hash join. Could we re-use the hashed\nrepresentation of A between multiple joins? I read upthread that it was\nproposed to share a hash table between multiple background workers\nvia shared memory. But the first step would be to just share it\nbetween multiple join nodes within the same process.\n\nAs we consider joining with each partition individually, different\njoin methods could be chosen. As I understand it, the current patch\nconsiders joining with each of the partitions as a separate isolated\noptimization task. However, if we share resources between the\nmultiple joins, then there arises a need for some global optimization. For\ninstance, a join type could be expensive when applied to an individual\npartition, but cheap when applied to all the partitions thanks to\nsaving the common work.\n\nMy idea is to consider generated common resources (such as\nmaterialized scans) as a property of the path. For instance, if the\nnested loop join is cheaper than the hash join, but the hash join\ngenerates a common hash map of table A, we don't drop the hash join\nimmediately from consideration and leave it to see how it could\nhelp join other partitions. What do you think?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 15 Oct 2023 03:18:43 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 15/10/2023 07:18, Alexander Korotkov wrote:\n> Hi Alexander,\n> Hi Andrey,\n> \n> Thank you for your work on this subject.\n> \n> On Mon, Jan 17, 2022 at 1:42 PM Alexander Pyhalov\n> <a.pyhalov@postgrespro.ru> wrote:\n>> The patch does not longer apply cleanly, so I rebased it. Attaching\n>> rebased version.\n> \n> Not surprising that the patch doesn't apply after 1.5 years since the\n> last message. Could you please rebase it?\n> \n> I read the thread and the patch. The patch improves the joining of\n> partitioned tables with non-partitioned relations. Let's denote\n> non-partitioned relation as A, partitions as P1 ... PN. The patch\n> allows to Append(Join(A, P1), ... Join(A, PN) instead of Join(A,\n> Append(P1, ... PN). That could be cheaper because it's generally\n> cheaper to join small pieces rather than do one big join. The\n> drawback is the need to scan A multiple times. But is this really\n> necessary and acceptable? Let's consider multiple options.\n> \n> 1) A is non-table. For instance, A is a function scan. In this case,\n> doing multiple scans of A is not just expensive, but could lead to\n> unexpected side effects. When the user includes a function once in\n> the FROM clause, she expects this function to be evaluated once. I\n> propose that we should materialize a scan of non-table relations. So,\n> materialized representation will be scanned multiple times, but the\n> source only scanned once. That would be similar to CTE.\n> 2) A is the table to be scanned with the parametrized path in the\n> inner part of the nested loop join. In this case, there is no big\n> scan of A and nothing to materialize.\n> 3) A is the table to be used in merge join or outer part of nested\n> loop join. In this case, it would be nice to consider materialize.\n> It's not always good to materialize, because materialization has its\n> additional costs. I think that could be a cost-based decision.\n> 4) A is used in the hash join. 
Could we re-use the hashed\n> representation of A between multiple joins? I read upthread it was\n> proposed to share a hashed table between multiple background workers\n> via shared memory. But the first step would be to just share it\n> between multiple join nodes within the same process.\n> \n> As we consider joining with each partition individually, there could\n> be chosen different join methods. As I get, the current patch\n> considers joining with each of the partitions as a separate isolated\n> optimization task. However, if we share resources between the\n> multiple joins, then rises a need for some global optimization. For\n> instance, a join type could be expensive when applied to an individual\n> partition, but cheap when applied to all the partitions thanks to\n> saving the common work.\n> \n> My idea is to consider generated common resources (such as\n> materialized scans) as a property of the path. For instance, if the\n> nested loop join is cheaper than the hash join, but the hash join\n> generates a common hash map of table A, we don't drop hash join\n> immediately from the consideration and leave it to see how it could\n> help join other partitions. What do you think?\n\nThanks for such detailed feedback!\nThe rationale for this patch was to give the optimizer additional ways \nto push down more joins into foreign servers. And, because of \nasynchronous append, the benefit of that optimization was obvious. \nUnfortunately, we hadn't found other applications for this feature, \nwhich was why this patch was postponed in the core.\nYou have brought new ideas about applying this idea locally. Moreover, \nthe main issue of the patch was massive memory consumption in the case \nof many joins and partitions - because of reparameterization. 
But now, \npostponing the reparameterization as proposed in the thread [1] resolves \nthat problem and gives some insights into the reparameterization \ntechnique for some fields, like lateral references.\nHence, I think we can restart this work.\nThe first thing here (after the rebase, of course) is to figure out the \ncases where an asymmetric join would give a significant performance \nbenefit, and to implement them in the cost model.\n\n[1] Oversight in reparameterize_path_by_child leading to executor crash\nhttps://www.postgresql.org/message-id/flat/CAMbWs496%2BN%3DUAjOc%3DrcD3P7B6oJe4rZw08e_TZRUsWbPxZW3Tw%40mail.gmail.com\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Sun, 15 Oct 2023 12:40:36 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 8:40 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Thanks for such detailed feedback!\n> The rationale for this patch was to give the optimizer additional ways\n> to push down more joins into foreign servers. And, because of\n> asynchronous append, the benefit of that optimization was obvious.\n> Unfortunately, we hadn't found other applications for this feature,\n> which was why this patch was postponed in the core.\n> You have brought new ideas about applying this idea locally. Moreover,\n> the main issue of the patch was massive memory consumption in the case\n> of many joins and partitions - because of reparameterization. But now,\n> postponing the reparameterization proposed in the thread [1] resolves\n> that problem and gives some insights into the reparameterization\n> technique of some fields, like lateral references.\n> Hence, I think we can restart this work.\n> The first thing here (after rebase, of course) is to figure out and\n> implement in the cost model cases of effectiveness when asymmetric join\n> would give significant performance.\n\nGreat! I'm looking forward to the revised patch.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 15 Oct 2023 13:25:41 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 15/10/2023 17:25, Alexander Korotkov wrote:\n> On Sun, Oct 15, 2023 at 8:40 AM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> Thanks for such detailed feedback!\n>> The rationale for this patch was to give the optimizer additional ways\n>> to push down more joins into foreign servers. And, because of\n>> asynchronous append, the benefit of that optimization was obvious.\n>> Unfortunately, we hadn't found other applications for this feature,\n>> which was why this patch was postponed in the core.\n>> You have brought new ideas about applying this idea locally. Moreover,\n>> the main issue of the patch was massive memory consumption in the case\n>> of many joins and partitions - because of reparameterization. But now,\n>> postponing the reparameterization proposed in the thread [1] resolves\n>> that problem and gives some insights into the reparameterization\n>> technique of some fields, like lateral references.\n>> Hence, I think we can restart this work.\n>> The first thing here (after rebase, of course) is to figure out and\n>> implement in the cost model cases of effectiveness when asymmetric join\n>> would give significant performance.\n> \n> Great! I'm looking forward to the revised patch\nBefore preparing a new patch, it would be better to find common \nground on the following issue:\nSo far, this optimization stays aside, proposing an alternative path for \na join RelOptInfo if we have an underlying append path in the outer.\nMy back burner is redesigning the approach: an asymmetric join doesn't \nchange the partitioning scheme and bounds of the partitioned side. So, \nit looks natural to make it a part of the partitionwise_join machinery \nand implement it as a part of the try_partitionwise_join / \ngenerate_partitionwise_join_paths routines.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 16 Oct 2023 11:51:24 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 10:24 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> >\n> > Great! I'm looking forward to the revised patch\n> Before preparing a new patch, it would be better to find the common\n> ground in the next issue:\n> So far, this optimization stays aside, proposing an alternative path for\n> a join RelOptInfo if we have an underlying append path in the outer.\n> My back burner is redesigning the approach: asymmetric join doesn't\n> change the partitioning scheme and bounds of the partitioned side. So,\n> it looks consecutive to make it a part of partitionwise_join machinery\n> and implement it as a part of the try_partitionwise_join /\n> generate_partitionwise_join_paths routines.\n>\n\nI think we need an example where such a join will be faster than a\nnon-partitioned join when both sides are local. It might be\npossible to come up with such an example without writing any code. The\nidea would be to rewrite the SQL as a union of joins.\n\nWhenever I visited this idea, I hit one issue prominently - how would\nwe differentiate different scans of the non-partitioned relation?\nNormally we do that using different Relids, but in this case we\nwouldn't be able to know the number of such relations involved in the\nquery unless we start planning such a join. It's too late to add new base\nrelations and assign them new Relids. Of course I haven't thought hard\nabout it. I haven't looked at the patch to see whether this problem is\nsolved and how.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 16 Oct 2023 21:51:41 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
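For intuition, the union-of-joins rewrite Ashutosh suggests can be written by hand. The schema below is hypothetical: `t` is a plain table and `p` is partitioned into `p1` and `p2`. For an inner join the two queries return the same rows, because the partitions are disjoint; timing them against each other is a code-free way to look for cases where the asymmetric shape wins.

```sql
-- Hypothetical schema: t is non-partitioned, p is partitioned into p1, p2.
-- Plain join against the whole partitioned table:
SELECT t.payload, p.v
FROM t JOIN p ON t.a = p.a;

-- Hand-written union-of-joins equivalent, one join per partition -- roughly
-- the plan shape Append(Join(t, p1), Join(t, p2)) that the patch builds:
SELECT t.payload, p1.v
FROM t JOIN p1 ON t.a = p1.a
UNION ALL
SELECT t.payload, p2.v
FROM t JOIN p2 ON t.a = p2.a;
```

Note the rewrite also makes the drawback visible: `t` appears once per partition, so it is scanned (or hashed) once per join unless something is shared.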
{
"msg_contents": "On 16/10/2023 23:21, Ashutosh Bapat wrote:\n> On Mon, Oct 16, 2023 at 10:24 AM Andrei Lepikhov\n> Whenever I visited this idea, I hit one issue prominently - how would\n> we differentiate different scans of the non-partitioned relation.\n> Normally we do that using different Relids but in this case we\n> wouldn't be able to know the number of such relations involved in the\n> query unless we start planning such a join. It's late to add new base\n> relations and assign them new Relids. Of course I haven't thought hard\n> about it. I haven't looked at the patch to see whether this problem is\n> solved and how.\n> \nI'm curious which type of problems you are afraid of here. Why would we \nneed a range table entry for each scan of a non-partitioned relation?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 17 Oct 2023 15:34:59 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 2:05 PM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 16/10/2023 23:21, Ashutosh Bapat wrote:\n> > On Mon, Oct 16, 2023 at 10:24 AM Andrei Lepikhov\n> > Whenever I visited this idea, I hit one issue prominently - how would\n> > we differentiate different scans of the non-partitioned relation.\n> > Normally we do that using different Relids but in this case we\n> > wouldn't be able to know the number of such relations involved in the\n> > query unless we start planning such a join. It's late to add new base\n> > relations and assign them new Relids. Of course I haven't thought hard\n> > about it. I haven't looked at the patch to see whether this problem is\n> > solved and how.\n> >\n> I'm curious, which type of problems do you afraid here? Why we need a\n> range table entry for each scan of non-partitioned relation?\n>\n\nNot RTE but RelOptInfo.\n\nUsing the same example as Alexander Korotkov, let's say A is the\nnonpartitioned table and P is a partitioned table with partitions P1,\nP2, ... Pn. The partitionwise join would need to compute AP1, AP2, ...\nAPn. Each of these joins may have different properties and thus will\nrequire creating paths. In order to save these paths, we need\nRelOptInfos, which are identified by relids. Let's assume that the\nrelids of these join RelOptInfos are created by the union of the relid\nof A and the relid of Px (the partition being joined). This is\nnotionally misleading but doable.\n\nBut the clauses of A parameterized by P will produce different\ntranslations for each of the partitions. I think we will need\ndifferent RelOptInfos (for A) to store these translations.\n\nThe relid is also used to track the scans at executor level. Since we\nhave so many scans on A, each may be using a different plan, we will\nneed different ids for those.\n\nBut if you have developed a way to use a single RelOptInfo of A to do\nall this, maybe we don't need all this. I'll take a look at your next\nversion of the patch.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 17 Oct 2023 15:39:05 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 17/10/2023 17:09, Ashutosh Bapat wrote:\n> On Tue, Oct 17, 2023 at 2:05 PM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>>\n>> On 16/10/2023 23:21, Ashutosh Bapat wrote:\n>>> On Mon, Oct 16, 2023 at 10:24 AM Andrei Lepikhov\n>>> Whenever I visited this idea, I hit one issue prominently - how would\n>>> we differentiate different scans of the non-partitioned relation.\n>>> Normally we do that using different Relids but in this case we\n>>> wouldn't be able to know the number of such relations involved in the\n>>> query unless we start planning such a join. It's late to add new base\n>>> relations and assign them new Relids. Of course I haven't thought hard\n>>> about it. I haven't looked at the patch to see whether this problem is\n>>> solved and how.\n>>>\n>> I'm curious, which type of problems do you afraid here? Why we need a\n>> range table entry for each scan of non-partitioned relation?\n>>\n> \n> Not RTE but RelOptInfo.\n> \n> Using the same example as Alexander Korotkov, let's say A is the\n> nonpartitioned table and P is partitioned table with partitions P1,\n> P2, ... Pn. The partitionwise join would need to compute AP1, AP2, ...\n> APn. Each of these joins may have different properties and thus will\n> require creating paths. In order to save these paths, we need\n> RelOptInfos which are indentified by relids. Let's assume that the\n> relids of these join RelOptInfos are created by union of relid of A\n> and relid of Px (the partition being joined). This is notionally\n> misleading but doable.\n\nOk, now I see your concern. In the current patch we build a RelOptInfo \nfor each JOIN(A, Pi) via the build_child_join_rel() routine. And of \ncourse, they all have different sets of cheapest paths (it is one more \npoint of optimality). At this point the RelOptInfo of relation A is \nfully formed and upper joins use the pathlist \"as is\", without changes.\n\n> But the clauses of A parameterized by P will produce different\n> translations for each of the partitions. I think we will need\n> different RelOptInfos (for A) to store these translations.\n\nDoes the answer above resolve this issue?\n\n> The relid is also used to track the scans at executor level. Since we\n> have so many scans on A, each may be using different plan, we will\n> need different ids for those.\n\nI don't understand this sentence. In which way does the executor use \nthis index of RelOptInfo?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 18 Oct 2023 12:25:23 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 10:55 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> > But the clauses of A parameterized by P will produce different\n> > translations for each of the partitions. I think we will need\n> > different RelOptInfos (for A) to store these translations.\n>\n> Does the answer above resolved this issue?\n\nMaybe. There are other problematic areas like EvalPlanQual, Rescans,\nand reparameterised paths, which can blow up if we use the same RelOptInfo\nfor different scans of the same relation. It will be good to test\nthose. And also A need not be a simple relation; it could be a join as\nwell.\n\n>\n> > The relid is also used to track the scans at executor level. Since we\n> > have so many scans on A, each may be using different plan, we will\n> > need different ids for those.\n>\n> I don't understand this sentence. Which way executor uses this index of\n> RelOptInfo ?\n\nSee Scan::scanrelid\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 18 Oct 2023 15:29:04 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 18/10/2023 16:59, Ashutosh Bapat wrote:\n> On Wed, Oct 18, 2023 at 10:55 AM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>>\n>>> But the clauses of A parameterized by P will produce different\n>>> translations for each of the partitions. I think we will need\n>>> different RelOptInfos (for A) to store these translations.\n>>\n>> Does the answer above resolved this issue?\n> \n> May be. There are other problematic areas like EvalPlanQual, Rescans,\n> reparameterised paths which can blow up if we use the same RelOptInfo\n> for different scans of the same relation. It will be good to test\n\nYeah, now I got it. This is already the second place where I have seen a \nreference to a kind of hidden rule that an rte entry (or RelOptInfo) \nmust correspond to only one plan node. I don't have a quick answer for \nnow - maybe it is a kind of architectural agreement - and I will \nconsider this issue during development.\n\n> those. And also A need not be a simple relation; it could be join as\n> well.\n\nFor a join RelOptInfo, as well as for any subtree, we have the same \nlogic: the pathlist of this subtree is already formed during the \nprevious level of the search and will not be changed.\n\n>>\n>>> The relid is also used to track the scans at executor level. Since we\n>>> have so many scans on A, each may be using different plan, we will\n>>> need different ids for those.\n>>\n>> I don't understand this sentence. Which way executor uses this index of\n>> RelOptInfo ?\n> \n> See Scan::scanrelid\n> \n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 19 Oct 2023 11:04:52 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 15/10/2023 13:25, Alexander Korotkov wrote:\n> Great! I'm looking forward to the revised patch.\nRevising the code and opinions before restarting this work, I found two \ndifferent possible strategies mentioned in the thread:\n1. 'Common Resources' shares the materialised result of the inner table \nscan (a hash table in the case of HashJoin) to join each partition one \nby one. It gives us a benefit in the case of parallel append and \npossibly in other cases, like the one shown in the initial message.\n2. 'Individual strategies' - By limiting the AJ feature to cases when \nthe JOIN clause contains a partitioning expression, we can push an \nadditional scan clause into each copy of the inner table scan, reduce \nthe number of tuples scanned, and even prune something because of proven \nzero input.\n\nI see the pros and cons of both approaches. The first option is more \nstraightforward, and its outcome is obvious in the case of parallel \nappend. But how can we guarantee the same join type for each join? Why \nshould we ignore the positive effect of different strategies for \ndifferent partitions?\nThe second strategy is more expensive for the optimiser, especially in \nthe multipartition case. But, as I predict, it is easier to implement \nand looks more natural for the architecture. What do you think about that?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 2 Apr 2024 10:07:35 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 18/10/2023 16:59, Ashutosh Bapat wrote:\n> On Wed, Oct 18, 2023 at 10:55 AM Andrei Lepikhov\n>>> The relid is also used to track the scans at executor level. Since we\n>>> have so many scans on A, each may be using different plan, we will\n>>> need different ids for those.\n>>\n>> I don't understand this sentence. Which way executor uses this index of\n>> RelOptInfo ?\n> \n> See Scan::scanrelid\n> \nHi,\n\nIn the attachment, you will find a fresh version of the patch.\nI've analysed the danger of the same RelOptInfo index for the executor. \nIn the (scary) examples I found, it is still not a problem, because \nExecQual() does all the work in one operation and doesn't intersect with \nother operations. Of course, it is not a good design, and we will work \non this issue. But at least this code can be used in experiments.\nFurthermore, I've shared some reflections on this feature. To avoid \ncluttering the thread, I've published them in [1]. These thoughts \nprovide additional context and considerations for our ongoing work.\n\n[1] \nhttps://danolivo.substack.com/p/postgresql-asymmetric-join-technique?r=34q1yy\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Sun, 5 May 2024 21:55:30 +0700",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "Hi!\n\nOn Tue, Apr 2, 2024 at 6:07 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 15/10/2023 13:25, Alexander Korotkov wrote:\n> > Great! I'm looking forward to the revised patch.\n> Revising the code and opinions before restarting this work, I found two\n> different possible strategies mentioned in the thread:\n> 1. 'Common Resources' shares the materialised result of the inner table\n> scan (a hash table in the case of HashJoin) to join each partition one\n> by one. It gives us a profit in the case of parallel append and possibly\n> other cases, like the one shown in the initial message.\n> 2. 'Individual strategies' - By limiting the AJ feature to cases when\n> the JOIN clause contains a partitioning expression, we can push an\n> additional scan clause into each copy of the inner table scan, reduce\n> the number of tuples scanned, and even prune something because of proven\n> zero input.\n>\n> I see the pros and cons of both approaches. The first option is more\n> straightforward, and its outcome is obvious in the case of parallel\n> append. But how can we guarantee the same join type for each join? Why\n> should we ignore the positive effect of different strategies for\n> different partitions?\n> The second strategy is more expensive for the optimiser, especially in\n> the multipartition case. But as I can predict, it is easier to implement\n> and looks more natural for the architecture. What do you think about that?\n\nActually, the idea I tried to express is the combination of #1 and #2:\nto build an individual plan for every partition, but consider the 'Common\nResources'. Let me explain this a bit more.\n\nRight now, we basically consider the following properties during the\nselection of paths.\n1) Cost. The cheaper path wins. There are two criteria though: startup\ncost and total cost. So, we can keep both paths with cheaper startup\ncosts and paths with cheaper total cost.\n2) Pathkeys. We can keep a more expensive path which has\npathkeys potentially useful in the future.\n\nMy idea is to introduce a new property for path selection.\n3) Usage of common resources. The common resource can be: a hash\nrepresentation of a relation, a memoize node over a relation scan, etc.\nWe can exclude the cost of common resource generation from the path\ncost, but keep the reference to the common resource with its generation\ncost. If one path uses more common resources than another path, it could\ncost-dominate the other one only if it's cheaper together with its extra\ncommon resources cost. If one path uses fewer or equal common\nresources than another, it could normally cost-dominate the other one.\n\nUsing these rules, we can gather the plurality of paths for each\nchild join taking common resources into account. After that we can\napply some global optimization, finding which common resources'\ngeneration can reduce the global cost.\n\nHowever, I understand this is a huge amount of work given we have to\nintroduce new basic optimizer concepts. I get that the main\napplication of this patch is sharding. If we have global tables\nresiding on each shard, we can push down any joins with them. Given that\nthis patch gives some optimization for the non-sharded case, I think we\n*probably* can accept its concept even though this optimization is\nobviously not perfect.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Thu, 1 Aug 2024 21:56:12 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On Sun, May 5, 2024 at 5:55 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n> On 18/10/2023 16:59, Ashutosh Bapat wrote:\n> > On Wed, Oct 18, 2023 at 10:55 AM Andrei Lepikhov\n> >>> The relid is also used to track the scans at executor level. Since we\n> >>> have so many scans on A, each may be using different plan, we will\n> >>> need different ids for those.\n> >>\n> >> I don't understand this sentence. Which way executor uses this index of\n> >> RelOptInfo ?\n> >\n> > See Scan::scanrelid\n> >\n> Hi,\n>\n> In the attachment, you will find a fresh version of the patch.\n> I've analysed the danger of the same RelOptInfo index for the executor.\n> In the examples I found (scared), it is still not a problem because\n> ExecQual() does all the jobs at one operation and doesn't intersect with\n> over operations. Of course, it is not a good design, and we will work on\n> this issue. But at least this code can be used in experiments.\n> Furthermore, I've shared some reflections on this feature. To avoid\n> cluttering the thread, I've published them in [1]. These thoughts\n> provide additional context and considerations for our ongoing work.\n>\n> [1]\n> https://danolivo.substack.com/p/postgresql-asymmetric-join-technique?r=34q1yy\n\nI've rebased the patch to the current master. Also, I didn't like the\nneedFlatCopy argument to reparameterize_path_by_child(). It looks\nquite awkward. Instead, as soon as we need to copy paths, I've\nenabled native copy of paths. Now, we can do just copyObject() over\npath in caller. Looks much cleaner for me. What do you think?\n\nOther notes:\n\n1) I think we need to cover the cases, which\nis_inner_rel_safe_for_asymmetric_join() filters out, by regression\ntests.\n2) is_asymmetric_join() looks awkward for me. Should we instead make\na flag in JoinPath?\n3) I understand that you have re-use RelOptInfo multiple times. It's\ntoo late stage of query processing to add a simple relation into\nplanner structs. 
I tried rescans issued by cursors, EvalPlanQual()\ncaused by concurrent updates, but didn't manage to break this. It\nseems that even if the same relation index is used multiple times in\ndifferent places of a query, it never gets used simultaneously. But\neven if this somehow is OK, this is a significant change of assumptions\nin planner/executor data structures. Perhaps we need at least Tom's\nopinion on this.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 2 Aug 2024 01:51:11 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
},
{
"msg_contents": "On 1/8/2024 20:56, Alexander Korotkov wrote:\n> On Tue, Apr 2, 2024 at 6:07 AM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> Actually, the idea I tried to express is the combination of #1 and #2:\n> to build individual plan for every partition, but consider the 'Common\n> Resources'. Let me explain this a bit more.\nThanks for keeping your eye on it!\n> My idea is to introduce a new property for paths selection.\n> 3) Usage of common resources. The common resource can be: hash\n> representation of relation, memoize over relation scan, etc. We can\n> exclude the cost of common resource generation from the path cost, but\n> keep the reference for the common resource with its generation cost.\n> If one path uses more common resources than another path, it could\n> cost-dominate another one only if its cheaper together with its extra\n> common resources cost. If one path uses less or equal common\n> resources than another, it could normally cost-dominate another one.\nThe most challenging part for me is the cost calculation, which is \nbonded with estimations of other paths. To correctly estimate the \neffect, we need to remember at least the whole number of paths sharing \nresources.\nAlso, I wonder if it can cause some corner cases where prediction error \non a shared resource will cause an even worse situation upstream.\nI think we could push off here from an example and a counter-example, \nbut I still can't find them.\n\n> However, I understand this is huge amount of work given we have to\n> introduce new basic optimizer concepts. I get that the main\n> application of this patch is sharding. If we have global tables\n> residing each shard, we can push down any joins with them. Given this\n> patch gives some optimization for non-sharded case, I think we\n> *probably* can accept its concept even that it this optimization is\n> obviously not perfect.\nYes, right now sharding is the most profitable case. 
We can push down \nparts of the plan that reference only some common resources: \nFunctionScan, ValueScan, tables that can be proven to exist \neverywhere and provide the same output. But for now it is too far from \nthe core code, IMO, so I search for cases that can be helpful for a \nsingle instance.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 19 Aug 2024 10:43:35 +0200",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asymmetric partition-wise JOIN"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nCurrently, if we check indexes on standby we often get\n\nman-psbpshn0skhsxynd/xiva_xtable_testing_01 R # select bt_index_check('xiva_loadtest.pk_uid');\nERROR: 58P01: could not open file \"base/16453/125407\": No such file or directory\n\nI think that we should print warning and that's it. Amcheck should not give false positives.\n\nOr, maybe, there are some design considerations that I miss?\n\n\nBTW I really want to enable rightlink-leftlink invariant validation on standby..\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 12 Aug 2019 14:58:24 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Do not check unlogged indexes on standby"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 2:58 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Currently, if we check indexes on standby we often get\n>\n> man-psbpshn0skhsxynd/xiva_xtable_testing_01 R # select bt_index_check('xiva_loadtest.pk_uid');\n> ERROR: 58P01: could not open file \"base/16453/125407\": No such file or directory\n>\n> I think that we should print warning and that's it. Amcheck should not give false positives.\n\nI agree -- amcheck should just skip over unlogged tables during\nrecovery, since there is simply nothing to check.\n\nI pushed your patch to all branches that have amcheck just now, so now\nwe skip over unlogged relations when in recovery, though I made some\nrevisions.\n\nYour patch didn't handle temp tables/indexes that were created in the\nfirst session correctly -- we must be careful about the distinction\nbetween unlogged tables, and tables that don't require WAL logging\n(the later includes temp tables). Also, I thought that it was a good\nidea to actively test for the presence of a main fork when we don't\nskip (i.e. when the system isn't in recovery and the B-Tree indexes\nisn't unlogged) -- we now give a clean report of corruption when that\nhappens, rather than letting an ambiguous \"can't happen\" error get\nraised by low-level code. This might be possible with system catalog\ncorruption, for example. Finally, I thought that the WARNING was a bit\nstrong -- a NOTICE is more appropriate.\n\nThanks!\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Aug 2019 15:23:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 2:58 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> BTW I really want to enable rightlink-leftlink invariant validation on standby..\n\nThat seems very hard. My hope was that bt_check_index() can detect the\nsame problem a different way. The bt_right_page_check_scankey()\ncross-page check (which needs nothing more than an AccessShareLock)\nwill often detect such problems, because the page image itself will be\ntotally wrong in some way.\n\nI'm guessing that you have direct experience with that *not* being\ngood enough, though. Can you share further details? I suppose that\nbt_right_page_check_scankey() helps with transposed pages, but doesn't\nhelp so much when you have WAL-level inconsistencies.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Aug 2019 15:36:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "\n\n> 13 авг. 2019 г., в 3:23, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> I pushed your patch to all branches that have amcheck just now, so now\n> we skip over unlogged relations when in recovery, though I made some\n> revisions.\nOh, cool, thanks!\n\n> Your patch didn't handle temp tables/indexes that were created in the\n> first session correctly -- we must be careful about the distinction\n> between unlogged tables, and tables that don't require WAL logging\n> (the later includes temp tables). Also, I thought that it was a good\n> idea to actively test for the presence of a main fork when we don't\n> skip (i.e. when the system isn't in recovery and the B-Tree indexes\n> isn't unlogged) -- we now give a clean report of corruption when that\n> happens, rather than letting an ambiguous \"can't happen\" error get\n> raised by low-level code. This might be possible with system catalog\n> corruption, for example. Finally, I thought that the WARNING was a bit\n> strong -- a NOTICE is more appropriate.\n+1\n\n> 13 авг. 2019 г., в 3:36, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> On Mon, Aug 12, 2019 at 2:58 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> BTW I really want to enable rightlink-leftlink invariant validation on standby..\n> \n> That seems very hard. My hope was that bt_check_index() can detect the\n> same problem a different way. The bt_right_page_check_scankey()\n> cross-page check (which needs nothing more than an AccessShareLock)\n> will often detect such problems, because the page image itself will be\n> totally wrong in some way.\n> \n> I'm guessing that you have direct experience with that *not* being\n> good enough, though. Can you share further details? 
I suppose that\n> bt_right_page_check_scankey() helps with transposed pages, but doesn't\n> help so much when you have WAL-level inconsistencies.\n\nWe have a bunch of internal testing HA clusters that suffered from corruption conditions.\nWe fixed everything that can be detected with parent-check on primaries or usual check on standbys.\n(page updates were lost both on primary and during WAL replay)\nBut from time to time when clusters switch primary from one availability zone to another we observe\n\"right sibling's left-link doesn't match: block 32709 links to 37022 instead of expected 40953 in index\"\n\nWe are going to search for these clusters with this [0] tolerating possible fraction of false positives, we have them anyway.\nBut I think I could put some effort into making corruption-detection tooling better.\nI think if we observe links discrepancy, we can acquire lock of left and right pages and recheck.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/amcheck/commit/894d8bafb3c9a26bbc168ea5f4f33bcd1fc9f495\n\n",
"msg_date": "Tue, 13 Aug 2019 17:17:31 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 5:17 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> We have a bunch of internal testing HA clusters that suffered from corruption conditions.\n> We fixed everything that can be detected with parent-check on primaries or usual check on standbys.\n> (page updates were lost both on primary and during WAL replay)\n> But from time to time when clusters switch primary from one availability zone to another we observe\n> \"right sibling's left-link doesn't match: block 32709 links to 37022 instead of expected 40953 in index\"\n\nThat sounds like an issue caused by a failure to replay all available\nWAL, where only one page happened to get written out by a checkpoint\nbefore a crash. It's something like that. That wouldn't be caught by\nthe cross-page bt_index_check() check that we do already.\n\n> We are going to search for these clusters with this [0] tolerating possible fraction of false positives, we have them anyway.\n> But I think I could put some effort into making corruption-detection tooling better.\n> I think if we observe links discrepancy, we can acquire lock of left and right pages and recheck.\n\nThat's one possibility. When I first designed amcheck it was important\nto be conservative, so I invented a general rule about never acquiring\nmultiple buffer locks at once. I still think that that was the correct\ndecision for the bt_downlink_check() check (the main extra\nbt_index_parent_check() check), but I think that you're right about\nretrying to verify the sibling links when bt_index_check() is called\nfrom SQL.\n\nnbtree will often \"couple\" buffer locks on the leaf level; it will\nacquire a lock on a leaf page, and not release that lock until it has\nalso acquired a lock on the right sibling page (I'm mostly thinking of\n_bt_stepright()). I am in favor of a patch that makes amcheck perform\nsibling link verification within bt_index_check(), by retrying while\npessimistically coupling buffer locks. 
(Though I think that that\nshould just happen on the leaf level. We should not try to be too\nclever about ignorable/half-dead/deleted pages, to be conservative.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Aug 2019 10:30:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "> 13 авг. 2019 г., в 20:30, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> That's one possibility. When I first designed amcheck it was important\n> to be conservative, so I invented a general rule about never acquiring\n> multiple buffer locks at once. I still think that that was the correct\n> decision for the bt_downlink_check() check (the main extra\n> bt_index_parent_check() check), but I think that you're right about\n> retrying to verify the sibling links when bt_index_check() is called\n> from SQL.\n> \n> nbtree will often \"couple\" buffer locks on the leaf level; it will\n> acquire a lock on a leaf page, and not release that lock until it has\n> also acquired a lock on the right sibling page (I'm mostly thinking of\n> _bt_stepright()). I am in favor of a patch that makes amcheck perform\n> sibling link verification within bt_index_check(), by retrying while\n> pessimistically coupling buffer locks. (Though I think that that\n> should just happen on the leaf level. We should not try to be too\n> clever about ignorable/half-dead/deleted pages, to be conservative.)\n\nPFA V1 of this check retry.\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 15 Aug 2019 16:57:30 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On 2019-Aug-15, Andrey Borodin wrote:\n\n> PFA V1 of this check retry.\n\nCFbot complains that this doesn't apply; can you please rebase?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Sep 2019 22:54:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "The patch has been committed already.\n\nPeter Geoghegan\n(Sent from my phone)\n\nThe patch has been committed already. Peter Geoghegan(Sent from my phone)",
"msg_date": "Wed, 11 Sep 2019 19:10:35 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 7:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The patch has been committed already.\n\nOh, wait. It hasn't. Andrey didn't create a new thread for his largely\nindependent patch, so I incorrectly assumed he created a CF entry for\nhis original bugfix.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 11 Sep 2019 19:54:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On 2019-Sep-11, Peter Geoghegan wrote:\n\n> On Wed, Sep 11, 2019 at 7:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The patch has been committed already.\n> \n> Oh, wait. It hasn't. Andrey didn't create a new thread for his largely\n> independent patch, so I incorrectly assumed he created a CF entry for\n> his original bugfix.\n\nSo, I'm confused. There appear to be two bugfix patches in this thread,\nwith no relationship between them, and as far as I can tell only one of\nthem has been addressed. What was applied (6754fe65a4c6) is\nsignificantly different from what Andrey submitted. Is that correct?\nIf so, we still have an open bug, right?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Feb 2020 18:27:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 1:27 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> So, I'm confused. There appear to be two bugfix patches in this thread,\n> with no relationship between them, and as far as I can tell only one of\n> them has been addressed. What was applied (6754fe65a4c6) is\n> significantly different from what Andrey submitted. Is that correct?\n> If so, we still have an open bug, right?\n\nNo. We had two separate patches on this thread:\n\n1. A bugfix patch to make amcheck not do the wrong thing with unlogged\nindexes when operating on a standby.\n\n2. An unrelated feature/enhancement that would allow amcheck to detect\nmore types of corruption with only an AccessShareLock on the relation.\n\nThe first item was dealt with way back in August, without controversy\n-- my commit 6754fe65 was more or less Andrey's bugfix.\n\nThe second item genereated another thread a little after this thread.\nEverything was handled on this other thread. Ultimately, I rejected\nthe enhancement on the grounds that it wasn't safe on standbys in the\nface of concurrent splits and deletions [1].\n\nI believe that all of the items discussed on this thread have been\nresolved. Did I miss a CF entry or something?\n\n[1] https://postgr.es/m/CAH2-Wzmb_QOmHX=uWjCFV4Gf1810kz-yVzK6RA=VS41EFcKh=g@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 5 Feb 2020 13:35:46 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
},
{
"msg_contents": "On 2020-Feb-05, Peter Geoghegan wrote:\n\n> The second item genereated another thread a little after this thread.\n> Everything was handled on this other thread. Ultimately, I rejected\n> the enhancement on the grounds that it wasn't safe on standbys in the\n> face of concurrent splits and deletions [1].\n> \n> I believe that all of the items discussed on this thread have been\n> resolved. Did I miss a CF entry or something?\n\nNah. I just had one of the messages flagged in my inbox, and I wasn't\nsure what had happened since the other thread was not referenced in this\none. I wasn't looking at any CF entries.\n\nThanks for the explanation,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Feb 2020 12:59:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do not check unlogged indexes on standby"
}
] |
[
{
"msg_contents": "Hi,\n\nI got following log messages when measured the heap truncating\nduration in a vacuum.\n\n=====================================================\nINFO: \"dst\": suspending truncate due to conflicting lock request\nINFO: \"dst\": truncated 550073 to 101472 pages\nDETAIL: CPU: user: 0.35 s, system: 4.92 s, elapsed: 6.96 s\nINFO: \"dst\": truncated 101472 to 164 pages\nDETAIL: CPU: user: 0.35 s, system: 11.02 s, elapsed: 13.46 s\n=====================================================\n\nAbove message shows that postgres detected a access to the table\nduring heap truncating so suspend the truncating,\nand then resumed truncating after the access finish. The messages were\nno-problem.\nBut \"usage\" and \"elapsed (time)\" were bit confusing.\nTotal truncating duration was about 13.5s, but log said 6.96s (before\nsuspend) + 13.46s (remain).\n# I confirmed the total truncating duration by elog debugging.\n\nIn lazy_truncate_heap() pg_rusage_init is only called once at the\ntruncating start.\nSo the last-truncating-phase-log shows the total truncating-phase\nusages and elapsed time.\nAttached patch make pg_rusage_init would be called after each\nereport() of heap-truncating,\nso log messages will change like following.\n\n=====================================================\nINFO: \"dst\": suspending truncate due to conflicting lock request\nINFO: \"dst\": truncated 550073 to 108288 pages\nDETAIL: CPU: user: 0.20 s, system: 4.88 s, elapsed: 7.41 s\nINFO: \"dst\": truncated 108288 to 164 pages\nDETAIL: CPU: user: 0.00 s, system: 7.36 s, elapsed: 7.92 s\n=====================================================\n(Total truncating time was about 15.3s in above case)\n\nAny thoughts ?\nBest regards,\n\n-- \nTatsuhito Kasahara\nNTT Open Source Software Center",
"msg_date": "Tue, 13 Aug 2019 13:15:44 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "small improvement of the elapsed time for truncating heap in vacuum"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 1:16 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi,\n>\n> I got following log messages when measured the heap truncating\n> duration in a vacuum.\n>\n> =====================================================\n> INFO: \"dst\": suspending truncate due to conflicting lock request\n> INFO: \"dst\": truncated 550073 to 101472 pages\n> DETAIL: CPU: user: 0.35 s, system: 4.92 s, elapsed: 6.96 s\n> INFO: \"dst\": truncated 101472 to 164 pages\n> DETAIL: CPU: user: 0.35 s, system: 11.02 s, elapsed: 13.46 s\n> =====================================================\n>\n> Above message shows that postgres detected a access to the table\n> during heap truncating so suspend the truncating,\n> and then resumed truncating after the access finish. The messages were\n> no-problem.\n> But \"usage\" and \"elapsed (time)\" were bit confusing.\n> Total truncating duration was about 13.5s, but log said 6.96s (before\n> suspend) + 13.46s (remain).\n> # I confirmed the total truncating duration by elog debugging.\n>\n> In lazy_truncate_heap() pg_rusage_init is only called once at the\n> truncating start.\n> So the last-truncating-phase-log shows the total truncating-phase\n> usages and elapsed time.\n> Attached patch make pg_rusage_init would be called after each\n> ereport() of heap-truncating,\n> so log messages will change like following.\n>\n> =====================================================\n> INFO: \"dst\": suspending truncate due to conflicting lock request\n> INFO: \"dst\": truncated 550073 to 108288 pages\n> DETAIL: CPU: user: 0.20 s, system: 4.88 s, elapsed: 7.41 s\n> INFO: \"dst\": truncated 108288 to 164 pages\n> DETAIL: CPU: user: 0.00 s, system: 7.36 s, elapsed: 7.92 s\n> =====================================================\n> (Total truncating time was about 15.3s in above case)\n>\n> Any thoughts ?\n\n+1. 
I observed this issue and found this thread.\n\nRegarding the patch, isn't it better to put pg_rusage_init() at the\ntop of do loop block? If we do this, as a side-effect, we can get\nrid of pg_rusage_init() at the top of lazy_truncate_heap().\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 14 Feb 2020 16:49:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: small improvement of the elapsed time for truncating heap in\n vacuum"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 14, 2020 at 4:50 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> Regarding the patch, isn't it better to put pg_rusage_init() at the\n> top of do loop block? If we do this, as a side-effect, we can get\n> rid of pg_rusage_init() at the top of lazy_truncate_heap().\nThanks for your reply.\nYeah, it makes sense.\n\nAttached patch moves pg_rusage_init() to the top of do-loop-block.\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Mon, 17 Feb 2020 12:43:41 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: small improvement of the elapsed time for truncating heap in\n vacuum"
},
{
"msg_contents": "On Mon, 17 Feb 2020 at 12:44, Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Feb 14, 2020 at 4:50 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > Regarding the patch, isn't it better to put pg_rusage_init() at the\n> > top of do loop block? If we do this, as a side-effect, we can get\n> > rid of pg_rusage_init() at the top of lazy_truncate_heap().\n> Thanks for your reply.\n> Yeah, it makes sense.\n>\n> Attached patch moves pg_rusage_init() to the top of do-loop-block.\n\n+1 to reset for each truncation loops.\n\nFor the patch, we can put also the declaration of ru0 into the loop.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Feb 2020 13:07:00 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: small improvement of the elapsed time for truncating heap in\n vacuum"
},
{
"msg_contents": "Hi,\n\nOn Mon, Feb 17, 2020 at 1:07 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> For the patch, we can put also the declaration of ru0 into the loop.\nThanks for your reply.\nHmm, certainly that it may be better.\n\nFix the v2 patch and attached.\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Mon, 17 Feb 2020 14:28:34 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: small improvement of the elapsed time for truncating heap in\n vacuum"
},
{
"msg_contents": "\n\nOn 2020/02/17 14:28, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Mon, Feb 17, 2020 at 1:07 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>> For the patch, we can put also the declaration of ru0 into the loop.\n> Thanks for your reply.\n> Hmm, certainly that it may be better.\n> \n> Fix the v2 patch and attached.\n\nThanks for updating the patch!\nBarring any objection, I will commit this.\n\nAs far as I check the back branches, ISTM that\nthis patch needs to be back-patch to v9.5.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 17 Feb 2020 17:52:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: small improvement of the elapsed time for truncating heap in\n vacuum"
},
{
"msg_contents": "\n\nOn 2020/02/17 17:52, Fujii Masao wrote:\n> \n> \n> On 2020/02/17 14:28, Kasahara Tatsuhito wrote:\n>> Hi,\n>>\n>> On Mon, Feb 17, 2020 at 1:07 PM Masahiko Sawada\n>> <masahiko.sawada@2ndquadrant.com> wrote:\n>>> For the patch, we can put also the declaration of ru0 into the loop.\n>> Thanks for your reply.\n>> Hmm, certainly that it may be better.\n>>\n>> Fix the v2 patch and attached.\n> \n> Thanks for updating the patch!\n> Barring any objection, I will commit this.\n> \n> As far as I check the back branches, ISTM that\n> this patch needs to be back-patch to v9.5.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 19 Feb 2020 20:50:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: small improvement of the elapsed time for truncating heap in\n vacuum"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'd like to get some feedback on whether or not implementing a DNS SRV feature\nfor connecting to PostgreSQL would be desirable/useful.\n\nThe main use case is to have a DNS SRV record that lists all the possible\nprimaries of a given replicated PostgreSQL cluster. With auto failover\nsolutions like patroni, pg_auto_failover, stolon, etc. any of these endpoints\ncould be serving the primary server at any point in time.\n\nCombined with target_session_attrs a connection string to a highly-available\ncluster could be something like:\n\n psql \"dnssrv=mydb.prod.example.com target_session_attr=read_write\"\n\nWhich would then resolve the SRV record _postgresql._tcp.mydb.prod.example.com\nand using the method described in RFC 2782 connect to the host/port combination\none by one until it finds the primary.\n\nA benefit of using SRV records would be that the port is also part of the DNS\nrecord and therefore a single IP could be used to serve many databases on\nseparate ports. When working with a cloud environment or containerized setup\n(or both) this would open up some good possibilities.\n\nNote: We currently can already do this somehow by specifying multiple\nhosts/ports in the connection string, however it would be useful if we could\nrefer to a single SRV record instead, as that would have a list of hosts\nand ports to connect to.\n\nDNS SRV is described in detail here:\nhttps://tools.ietf.org/html/rfc2782\n\nI'd love to hear some support/dissent,\n\nregards,\n\nFeike\n\n\n",
"msg_date": "Tue, 13 Aug 2019 11:50:18 +0200",
"msg_from": "Feike Steenbergen <feikesteenbergen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature: Use DNS SRV records for connecting"
},
{
"msg_contents": "On 13 Aug 2019, at 11:50, Feike Steenbergen <feikesteenbergen@gmail.com> wrote:\n\n> I'd like to get some feedback on whether or not implementing a DNS SRV feature\n> for connecting to PostgreSQL would be desirable/useful.\n\nA big +1.\n\nWe currently use SRV records to tell postgresql what kind of server it is. This way all of our postgresql servers have an identical configuration, they just tailor themselves on startup as appropriate:\n\n_postgresql-master._tcp.sql.example.com.\n\nThe above record in our case declares who the master is. If the postgresql startup says “hey, that’s me” it configures itself as a master. If the postgresql startup says “hey, that’s not me” it configures itself as a slave of the master.\n\nWe also use TXT records to define the databases we want (with protection against DNS security issues, we never remove a database based on a TXT record, but signed DNS records will help here).\n\n_postgresql.sql.example.com TXT \"v=PGSQL1;d=mydb;u=myuser\"\n\nWe use a series of systemd “daemons” that are configured to run before and after postgresql to do the actual configuration on bootup, but it would be great if postgresql could just do this out the box.\n\nRegards,\nGraham\n—",
"msg_date": "Tue, 13 Aug 2019 12:21:37 +0200",
"msg_from": "Graham Leggett <minfrin@sharp.fm>",
"msg_from_op": false,
"msg_subject": "Re: Feature: Use DNS SRV records for connecting"
},
{
"msg_contents": "Feike Steenbergen <feikesteenbergen@gmail.com> writes:\n> I'd like to get some feedback on whether or not implementing a DNS SRV feature\n> for connecting to PostgreSQL would be desirable/useful.\n\nHow would we get at that data without writing our own DNS client?\n(AFAIK, our existing DNS interactions are all handled by getnameinfo()\nor other library-supplied functions.)\n\nMaybe that'd be worth doing, but it sounds like a lot of work and a\nlot of new code to maintain, relative to the value of the feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Aug 2019 10:43:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature: Use DNS SRV records for connecting"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-13 10:43:07 -0400, Tom Lane wrote:\n> How would we get at that data without writing our own DNS client?\n> (AFAIK, our existing DNS interactions are all handled by getnameinfo()\n> or other library-supplied functions.)\n\n> Maybe that'd be worth doing, but it sounds like a lot of work and a\n> lot of new code to maintain, relative to the value of the feature.\n\nIt might have enough independent advantages to make it worthwhile\nthough.\n\nRight now our non-blocking interfaces aren't actually in a number of\ncases, due to name resolution being blocking. While that's documented,\nit imo means that our users need to use a non-blocking DNS library, if\nthey need non-blocking PQconnectPoll() - it's imo not that practical to\njust use IPs in most cases.\n\nWe also don't have particularly good control over the order of hostnames\nreturned by getaddrinfo, which makes it harder to implement reliable\nround-robin etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Aug 2019 11:01:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature: Use DNS SRV records for connecting"
}
] |
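As a side note for readers unfamiliar with RFC 2782: the selection rule Feike refers to — try targets in ascending priority order, choosing within a priority band by weighted-random selection — can be sketched like this. Everything here, including the hostnames in the example, is illustrative and not part of any proposed libpq API:

```python
import random

def order_srv_targets(records, rng=random.random):
    """Order (priority, weight, port, host) SRV records per RFC 2782:
    ascending priority, weighted-random selection within each band."""
    by_priority = {}
    for prio, weight, port, host in records:
        by_priority.setdefault(prio, []).append((weight, port, host))

    ordered = []
    for prio in sorted(by_priority):
        band = by_priority[prio]
        while band:
            total = sum(w for w, _, _ in band)
            pick = rng() * total
            acc = 0
            for i, (w, port, host) in enumerate(band):
                acc += w
                # Take the first target whose cumulative weight covers the
                # random pick; the last entry is a fallback for zero weights.
                if pick <= acc or i == len(band) - 1:
                    ordered.append((host, port))
                    del band[i]
                    break
    return ordered
```

With records `(0, 5, 5433, "db1.example.com")`, `(0, 10, 5434, "db2.example.com")` and `(1, 0, 5432, "backup.example.com")`, the backup host always comes last, since it sits in a lower-priority (higher-numbered) band; the two priority-0 hosts are shuffled by weight.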
[
{
"msg_contents": "I am working on an extension that uses background workers, and interested\nin adding code for it to auto-restart after a crash or server restart\n(similar to as-coded in worker_spi).\n\nBut I'm also interested in being able to configure bgw_restart_time using a\nGUC without having to restart the server, using only SIGHUP. For example,\nI want to normally have the worker restart after 10 seconds. But if I am\ndoing maintenance on the server (without a db restart), I perhaps want to\nchange this to -1 (BGW_NEVER_RESTART), kill the worker, do my business,\nthen restart the worker. Or another reason would be my background worker\nhas some bug and I want to disable it without having to restart my db\nserver. For us as for many, a small outage for a db restart is expensive.\n\nI have played around with this and done some digging around the codebase\nin bgworker.c (with my limited knowledge thus far of the pg codebase), and\nso far as I can tell, it isn't possible to change bgw_restart_time without\na server restart. But I'm not sure if that's just because I don't know how\nthis code works, or if the current libraries actually don't support\nmodifying this part of the background worker. I am setting the GUC in\n_PG_init, but I can see that changing it after it has been registered has\nno effect unless I restart the server.\n\nIf indeed this is possible, I'd be very grateful for some insight on how to\ndo it. I may even try to add such an example to worker_spi.\n\nThanks!\nJeremy\n\nI am working on an extension that uses background workers, and interested in adding code for it to auto-restart after a crash or server restart (similar to as-coded in worker_spi).But I'm also interested in being able to configure bgw_restart_time using a GUC without having to restart the server, using only SIGHUP. For example, I want to normally have the worker restart after 10 seconds. 
But if I am doing maintenance on the server (without a db restart), I perhaps want to change this to -1 (BGW_NEVER_RESTART), kill the worker, do my business, then restart the worker. Or another reason would be my background worker has some bug and I want to disable it without having to restart my db server. For us as for many, a small outage for a db restart is expensive.I have played around with this and done some digging around the codebase in bgworker.c (with my limited knowledge thus far of the pg codebase), and so far as I can tell, it isn't possible to change bgw_restart_time without a server restart. But I'm not sure if that's just because I don't know how this code works, or if the current libraries actually don't support modifying this part of the background worker. I am setting the GUC in _PG_init, but I can see that changing it after it has been registered has no effect unless I restart the server.If indeed this is possible, I'd be very grateful for some insight on how to do it. I may even try to add such an example to worker_spi.Thanks!Jeremy",
"msg_date": "Tue, 13 Aug 2019 07:37:20 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Configuring bgw_restart_time"
},
{
"msg_contents": "On Tue, 13 Aug 2019 at 20:37, Jeremy Finzel <finzelj@gmail.com> wrote:\n\n> I am working on an extension that uses background workers, and interested\n> in adding code for it to auto-restart after a crash or server restart\n> (similar to as-coded in worker_spi).\n>\n\nWhat pglogical does for this is use dynamic background workers with restart\nturned off. It does its own worker exit and restart handling from a manager\nworker that's an always-running static bgworker.\n\nIt's not ideal as it involves a considerable amount of extra work, but with\nBDR we rapidly found that letting postgres itself restart bgworkers was\nmuch too inflexible and hard to control. Especially given the issues around\nsoft-crash restarts and worker registration (see thread\nhttps://www.postgresql.org/message-id/flat/534E6569.1080506%402ndquadrant.com)\n.\n\nBut I'm also interested in being able to configure bgw_restart_time using a\n> GUC without having to restart the server, using only SIGHUP. For example,\n> I want to normally have the worker restart after 10 seconds. But if I am\n> doing maintenance on the server (without a db restart), I perhaps want to\n> change this to -1 (BGW_NEVER_RESTART), kill the worker, do my business,\n> then restart the worker.\n>\n\nInstead of doing that I suggest having a SQL-callable function that sets a\nflag in a shared memory segment used by the worker then sets the worker's\nProcLatch to wake it up if it's sleeping. The flag can *ask* it to exit\ncleanly. 
If its exit code is 0 it will not be restarted.\n\nYou could also choose to have the worker exit with code 0 on SIGTERM, again\ncausing itself to be unregistered and not restarted.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Tue, 13 Aug 2019 at 20:37, Jeremy Finzel <finzelj@gmail.com> wrote:I am working on an extension that uses background workers, and interested in adding code for it to auto-restart after a crash or server restart (similar to as-coded in worker_spi).What pglogical does for this is use dynamic background workers with restart turned off. It does its own worker exit and restart handling from a manager worker that's an always-running static bgworker.It's not ideal as it involves a considerable amount of extra work, but with BDR we rapidly found that letting postgres itself restart bgworkers was much too inflexible and hard to control. Especially given the issues around soft-crash restarts and worker registration (see thread https://www.postgresql.org/message-id/flat/534E6569.1080506%402ndquadrant.com) .But I'm also interested in being able to configure bgw_restart_time using a GUC without having to restart the server, using only SIGHUP. For example, I want to normally have the worker restart after 10 seconds. But if I am doing maintenance on the server (without a db restart), I perhaps want to change this to -1 (BGW_NEVER_RESTART), kill the worker, do my business, then restart the worker.Instead of doing that I suggest having a SQL-callable function that sets a flag in a shared memory segment used by the worker then sets the worker's ProcLatch to wake it up if it's sleeping. The flag can *ask* it to exit cleanly. If its exit code is 0 it will not be restarted.You could also choose to have the worker exit with code 0 on SIGTERM, again causing itself to be unregistered and not restarted.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 13 Aug 2019 22:02:51 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Configuring bgw_restart_time"
}
] |
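For readers following along, Craig's pattern — register the worker with restart disabled, and let a manager worker (or the worker itself, via its exit status) control restarts — boils down to a registration like the following. This is an untested sketch; the library and entry-point names (`my_extension`, `my_worker_main`) are placeholders, not real code from pglogical:

```c
/* Untested sketch: register a dynamic bgworker that the postmaster
 * will never restart; lifecycle is then controlled by the extension. */
BackgroundWorker worker;
BackgroundWorkerHandle *handle;

MemSet(&worker, 0, sizeof(worker));
worker.bgw_flags = BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION;
worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
worker.bgw_restart_time = BGW_NEVER_RESTART;    /* postmaster keeps hands off */
snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
snprintf(worker.bgw_name, BGW_MAXLEN, "my_extension worker");
worker.bgw_notify_pid = MyProcPid;              /* notify registrant on start/stop */

if (!RegisterDynamicBackgroundWorker(&worker, &handle))
    ereport(LOG, (errmsg("could not register background worker")));
```

The manager then decides when (and whether) to re-register the worker. A worker that exits with status 0 is unregistered automatically, which is what makes the "set a shared-memory flag asking it to exit cleanly" approach work without any postmaster-side restart configuration.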
[
{
"msg_contents": "Hello pgdevs,\n\nThe attached patch improves pgbench variable management as discussed in:\n\nhttps://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904081752210.5867@lancre\n\nAs discussed there as well, the overall effect is small compared to libpq \n& system costs when pgbench is talking to a postgres server. When someone \nsays \"pgbench is slow\", they really mean \"libpq & <my-system> are slow\", \nbecause pgbench does not do much beyond jumping from one libpq call to the \nnext. Anyway, the patch has a measurable positive effect.\n\n###\n\nRework pgbench variables and associated values for better performance\n\n - a (hopefully) thread-safe symbol table which maps variable names to integers\n note that all variables are statically known, but \\gset stuff.\n - numbers are then used to access per-client arrays\n\nThe symbol table stores names as distinct leaves in a tree on bytes.\nEach symbol name is the shortest-prefix leaf, possibly including the final\n'\\0'. Some windows-specific hacks are note tested. File \"symbol_table_test.c\"\ndoes what it says and can be compiled standalone.\n\nMost malloc/free cycles are taken out of running a benchmark:\n - there is a (large?) maximum number of variables of 32*MAX_SCRIPTS\n - variable names and string values are statically allocated,\n and limited to, 64 bytes\n - a per-client persistent buffer is used for various purpose,\n to avoid mallocs/frees.\n\nFunctions assignVariables & parseQuery basically shared the same variable\nsubstitution logic, but differed in what was substituted. 
The logic has been\nabstracted into a common function.\n\nThis patch brings pgbench-specific overheads down on some tests, one \nthread one client, on my laptop, with the attached scripts, in tps:\n - set_x_1.sql: 11.1M -> 14.2M\n - sets.sql: 0.8M -> 2.7M # 20 \\set\n - set.sql: 1.5M -> 2.0M # 3 \\set & \"complex\" expressions\n - empty.sql: 63.9K -> 64.1K (…)\n - select_aid.sql: 29.3K -> 29.3K\n - select_aids.sql: 23.4K -> 24.2K\n - gset_aid.sql: 28.3K -> 29.2K\n\nSo we are talking significant improvements on pgbench-only scripts, only\na few percents once pgbench must interact with a CPU-bound server, because \ntime is spent elsewhere.\n\n-- \nFabien.",
"msg_date": "Tue, 13 Aug 2019 17:54:31 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - rework variable management"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 3:54 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Some windows-specific hacks are note tested.\n\nSomehow this macro hackery has upset the Windows socket headers:\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.55019\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Sep 2019 12:13:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "\n>> Some windows-specific hacks are note tested.\n>\n> Somehow this macro hackery has upset the Windows socket headers:\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.55019\n\nI noticed, but I do not have any windows host so I cannot test locally.\n\nThe issue is how to do a mutex on Windows, which does not have pthread so \nit has to be emulated. I'll try again by sending a blind update to the \npatch and see how it goes.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 3 Sep 2019 06:57:19 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 4:57 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> I noticed, but I do not have any windows host so I cannot test locally.\n>\n> The issue is how to do a mutex on Windows, which does not have pthread so\n> it has to be emulated. I'll try again by sending a blind update to the\n> patch and see how it goes.\n\nIf you have the patience and a github account, you can push code onto\na public github branch having also applied the patch mentioned at\nhttps://wiki.postgresql.org/wiki/Continuous_Integration, go to\nappveyor.com and tell it to watch your git hub account, and then it'll\nbuild and test every time you push a new tweak. Takes a few minutes\nto get the answer each time you try something, but I have managed to\nget things working on Windows that way.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Sep 2019 17:54:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "Hello Thomas,\n\n>> I noticed, but I do not have any windows host so I cannot test locally.\n>>\n>> The issue is how to do a mutex on Windows, which does not have pthread so\n>> it has to be emulated. I'll try again by sending a blind update to the\n>> patch and see how it goes.\n>\n> If you have the patience and a github account, you can push code onto\n> a public github branch having also applied the patch mentioned at\n> https://wiki.postgresql.org/wiki/Continuous_Integration, go to\n> appveyor.com and tell it to watch your git hub account, and then it'll\n> build and test every time you push a new tweak. Takes a few minutes\n> to get the answer each time you try something, but I have managed to\n> get things working on Windows that way.\n\nThanks for the tip.\n\nI'll try that if the blind attempt attached version does not work.\n\n-- \nFabien.",
"msg_date": "Tue, 3 Sep 2019 13:55:15 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "Patch v4 is a just a rebase.\n\n-- \nFabien.",
"msg_date": "Wed, 6 Nov 2019 12:08:03 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "> Patch v4 is a just a rebase.\n\nPatch v5 is a rebase with some adjustements.\n\n-- \nFabien.",
"msg_date": "Thu, 9 Jan 2020 23:04:28 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "Hi Fabien,\n\nOn 1/9/20 5:04 PM, Fabien COELHO wrote:\n> \n>> Patch v4 is a just a rebase.\n> \n> Patch v5 is a rebase with some adjustements.\n\nThis patch is failing on the Windows build:\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85698\n\nI'm not sure if this had been fixed in v3 and this is a new issue or if \nit has been failing all along. Either way, it should be updated.\n\nMarked Waiting on Author.\n\nBTW -- sorry if I seem to be picking on your patches but these happen to \nbe the patches with the longest time since any activity.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 12:35:23 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": ">> Patch v5 is a rebase with some adjustements.\n>\n> This patch is failing on the Windows build:\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85698\n>\n> I'm not sure if this had been fixed in v3 and this is a new issue or if it \n> has been failing all along. Either way, it should be updated.\n\nI don't do windows, so the mutex stuff for windows is just blind \nprogramming.\n\n> Marked Waiting on Author.\n>\n> BTW -- sorry if I seem to be picking on your patches but these happen to be \n> the patches with the longest time since any activity.\n\nBasically, my areas of interest do not match committers' areas of \ninterest.\n\nv6 is a yet-again blind attempt at fixing the windows mutex. If someone \nwith a windows could tell me if it is ok, and if not what to fix, it would \nbe great.\n\n-- \nFabien.",
"msg_date": "Fri, 27 Mar 2020 23:25:58 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "On 3/27/20 6:25 PM, Fabien COELHO wrote:\n> \n>>> Patch v5 is a rebase with some adjustements.\n>>\n>> This patch is failing on the Windows build:\n>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85698 \n>>\n>>\n>> I'm not sure if this had been fixed in v3 and this is a new issue or \n>> if it has been failing all along.� Either way, it should be updated.\n> \n> I don't do windows, so the mutex stuff for windows is just blind \n> programming.\n> \n>> Marked Waiting on Author.\n>>\n>> BTW -- sorry if I seem to be picking on your patches but these happen \n>> to be the patches with the longest time since any activity.\n> \n> Basically, my areas of interest do not match committers' areas of interest.\n> \n> v6 is a yet-again blind attempt at fixing the windows mutex. If someone \n> with a windows could tell me if it is ok, and if not what to fix, it \n> would be great.\n\nRegarding Windows testing you may find this thread useful:\n\nhttps://www.postgresql.org/message-id/CAMN686ExUKturcWp4POaaVz3gR3hauSGBjOCd0E-Jh1zEXqf_Q%40mail.gmail.com\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 18:42:49 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 06:42:49PM -0400, David Steele wrote:\n> Regarding Windows testing you may find this thread useful:\n> \n> https://www.postgresql.org/message-id/CAMN686ExUKturcWp4POaaVz3gR3hauSGBjOCd0E-Jh1zEXqf_Q%40mail.gmail.com\n\nSince then, the patch is failing to apply. As this got zero activity\nfor the last six months, I am marking the entry as returned with\nfeedback in the CF app.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:31:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - rework variable management"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> https://www.postgresql.org/message-id/CAMN686ExUKturcWp4POaaVz3gR3hauSGBjOCd0E-Jh1zEXqf_Q%40mail.gmail.com\n>\n> Since then, the patch is failing to apply. As this got zero activity\n> for the last six months, I am marking the entry as returned with\n> feedback in the CF app.\n\nHmmm… I did not notice it did not apply anymore. I do not have much time \nto contribute much this round and probably the next as well, so fine with \nme.\n\n-- \nFabien.",
"msg_date": "Sat, 19 Sep 2020 09:17:31 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - rework variable management"
}
] |
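As an illustration of the data structure Fabien describes (not the actual C implementation), here is a Python sketch of a byte-trie symbol table that hands out stable small integers, which can then index per-client value arrays:

```python
class SymbolTable:
    """Sketch of the described symbol table: a tree on bytes whose
    leaves number the variable names.  Illustrative, not the C code."""

    def __init__(self):
        self.root = {}
        self.count = 0          # next number to hand out

    def number_of(self, name):
        """Return the stable integer for `name`, allocating on first use."""
        node = self.root
        for b in name.encode() + b"\0":   # final NUL keeps prefixes as distinct leaves
            node = node.setdefault(b, {})
        if "num" not in node:
            node["num"] = self.count
            self.count += 1
        return node["num"]
```

A client's variable values can then live in a plain array indexed by these numbers (e.g. `values = [None] * table.count`), which is what removes per-access name lookups and malloc/free cycles from the benchmark loop.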
[
{
"msg_contents": "The easiest way to see this is to BEGIN READ ONLY & then attempt an insert. Execute either of COMMIT AND CHAIN or ROLLBACK AND CHAIN & attempt the insert a second time\n\nThis seems incorrect. The documentation should at least point out this behavior if it's intended\n\n\n\n\n\n\n\n\n\nThe easiest way to see this is to BEGIN READ ONLY & then attempt an insert. Execute either of COMMIT AND CHAIN or ROLLBACK AND CHAIN & attempt the insert a second time\n \nThis seems incorrect. The documentation should at least point out this behavior if it’s intended",
"msg_date": "Tue, 13 Aug 2019 22:43:21 +0000",
"msg_from": "=?iso-8859-1?Q?Philip_Dub=E9?= <Philip.Dub@microsoft.com>",
"msg_from_op": true,
"msg_subject": "12's AND CHAIN doesn't chain when transaction raised an error"
},
{
"msg_contents": "On 2019-Aug-13, Philip Dub� wrote:\n\n> The easiest way to see this is to BEGIN READ ONLY & then attempt an\n> insert. Execute either of COMMIT AND CHAIN or ROLLBACK AND CHAIN &\n> attempt the insert a second time\n> \n> This seems incorrect. The documentation should at least point out this\n> behavior if it's intended\n\nWhat do you mean with \"doesn't chain\"?\n\nA simple experiment shows that \"ROLLBACK AND CHAIN\" in an aborted\ntransaction does indeed start a new transaction; so the \"chain\" part is\nworking to some extent. It is also true that if the original\ntransaction was READ ONLY, then the followup transaction after an error\nis not READ ONLY; but if the first transaction is successful and you do\nCOMMIT AND CHAIN, then the second transaction *is* READ ONLY.\nSo there is some discrepancy here.\n\n<commit statement> (17.7 in SQL:2016) General Rule 10) a) says\n If <commit statement> contains AND CHAIN, then an SQL-transaction is\n initiated. Any branch transactions of the SQL-transaction are\n initiated with the same transaction access mode, transaction isolation\n level, and condition area limit as the corresponding branch of the\n SQL-transaction just terminated.\n\n... which is exactly the same wording used in 17.8 <rollback statement>\nGeneral Rule 2) h) i).\n\n(4.41.3 defines \"An SQL-transaction has a transaction access mode that\nis either read-only or read-write.\")\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Dec 2019 13:29:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: 12's AND CHAIN doesn't chain when transaction raised an error"
}
] |
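A minimal reproduction of the behavior discussed in this thread might look like the following (sketch only; the table `t` is hypothetical, and the second INSERT succeeding is the *reported* misbehavior — per SQL:2016 the chained transaction should keep the read-only access mode):

```sql
BEGIN READ ONLY;
INSERT INTO t VALUES (1);  -- fails: cannot execute INSERT in a read-only transaction
ROLLBACK AND CHAIN;
INSERT INTO t VALUES (1);  -- reportedly succeeds: READ ONLY was not carried over
COMMIT;
```

This matches Álvaro's observation: after a *successful* READ ONLY transaction, COMMIT AND CHAIN preserves the access mode, but chaining out of an *aborted* transaction does not — hence the discrepancy.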
[
{
"msg_contents": "Hi,\n\nHere are three strange recent failures in the \"rules\" test:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=blenny&dt=2019-08-13%2022:19:27\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alewife&dt=2019-07-27%2009:39:05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2019-07-18%2003:00:27\n\nThey all raised \"ERROR: could not open relation with OID <varies>\"\nwhile running:\n\n SELECT viewname, definition FROM pg_views\n WHERE schemaname IN ('pg_catalog', 'public')\n ORDER BY viewname;\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Aug 2019 11:27:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here are three strange recent failures in the \"rules\" test:\n> ...\n> They all raised \"ERROR: could not open relation with OID <varies>\"\n> while running:\n> SELECT viewname, definition FROM pg_views\n> WHERE schemaname IN ('pg_catalog', 'public')\n> ORDER BY viewname;\n\nI think the problem is probably that Peter ignored this bit of advice\nin parallel_schedule:\n\n# rules cannot run concurrently with any test that creates\n# a view or rule in the public schema\n\nwhen he inserted \"collate.linux.utf8\" concurrently with \"rules\".\n\n(I suspect BTW that the point is not so much that you better not\n*create* such an object, as that you better not *drop* it concurrently\nwith that query.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Aug 2019 23:52:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On 2019-08-14 05:52, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Here are three strange recent failures in the \"rules\" test:\n>> ...\n>> They all raised \"ERROR: could not open relation with OID <varies>\"\n>> while running:\n>> SELECT viewname, definition FROM pg_views\n>> WHERE schemaname IN ('pg_catalog', 'public')\n>> ORDER BY viewname;\n> \n> I think the problem is probably that Peter ignored this bit of advice\n> in parallel_schedule:\n> \n> # rules cannot run concurrently with any test that creates\n> # a view or rule in the public schema\n> \n> when he inserted \"collate.linux.utf8\" concurrently with \"rules\".\n\nThis test file is set up to create everything in the \"collate_tests\"\nschema. Is that not working correctly?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Aug 2019 07:00:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-08-14 05:52, Tom Lane wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>> They all raised \"ERROR: could not open relation with OID <varies>\"\n>>> while running:\n>>> SELECT viewname, definition FROM pg_views\n>>> WHERE schemaname IN ('pg_catalog', 'public')\n>>> ORDER BY viewname;\n\n>> I think the problem is probably that Peter ignored this bit of advice\n>> in parallel_schedule:\n>> # rules cannot run concurrently with any test that creates\n>> # a view or rule in the public schema\n>> when he inserted \"collate.linux.utf8\" concurrently with \"rules\".\n\n> This test file is set up to create everything in the \"collate_tests\"\n> schema. Is that not working correctly?\n\nOh, hmm --- yeah, that should mean it's safe. Maybe somebody incautiously\nchanged one of the other tests that run concurrently with \"rules\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Aug 2019 01:05:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Oh, hmm --- yeah, that should mean it's safe. Maybe somebody incautiously\n> changed one of the other tests that run concurrently with \"rules\"?\n\nLooks like stats_ext.sql could be the problem. It creates and drops\npriv_test_view, not in a schema. Adding Dean, author of commit\nd7f8d26d.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Aug 2019 17:24:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 05:24:26PM +1200, Thomas Munro wrote:\n>On Wed, Aug 14, 2019 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh, hmm --- yeah, that should mean it's safe. Maybe somebody incautiously\n>> changed one of the other tests that run concurrently with \"rules\"?\n>\n>Looks like stats_ext.sql could be the problem. It creates and drops\n>priv_test_view, not in a schema. Adding Dean, author of commit\n>d7f8d26d.\n>\n\nYeah, that seems like it might be the cause. I'll take a look at fixing\nthis, probably by creating the view in a different schema.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 16 Aug 2019 00:37:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Aug 14, 2019 at 05:24:26PM +1200, Thomas Munro wrote:\n>> On Wed, Aug 14, 2019 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Oh, hmm --- yeah, that should mean it's safe. Maybe somebody incautiously\n>>> changed one of the other tests that run concurrently with \"rules\"?\n\n>> Looks like stats_ext.sql could be the problem. It creates and drops\n>> priv_test_view, not in a schema. Adding Dean, author of commit\n>> d7f8d26d.\n\n> Yeah, that seems like it might be the cause. I'll take a look at fixing\n> this, probably by creating the view in a different schema.\n\nPing? We're still getting intermittent failures of this ilk, eg\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2019-09-14%2003%3A37%3A03\n\nWith v12 release approaching, I'd like to not have failures\nlike this in a released branch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 14 Sep 2019 00:25:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Sat, 14 Sep 2019 at 05:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Wed, Aug 14, 2019 at 05:24:26PM +1200, Thomas Munro wrote:\n> >> On Wed, Aug 14, 2019 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Oh, hmm --- yeah, that should mean it's safe. Maybe somebody incautiously\n> >>> changed one of the other tests that run concurrently with \"rules\"?\n>\n> >> Looks like stats_ext.sql could be the problem. It creates and drops\n> >> priv_test_view, not in a schema. Adding Dean, author of commit\n> >> d7f8d26d.\n>\n> > Yeah, that seems like it might be the cause. I'll take a look at fixing\n> > this, probably by creating the view in a different schema.\n>\n> Ping? We're still getting intermittent failures of this ilk, eg\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2019-09-14%2003%3A37%3A03\n>\n> With v12 release approaching, I'd like to not have failures\n> like this in a released branch.\n>\n\nAh sorry, I missed this thread before. As author of that commit, it's\nreally on me to fix it, and the cause seems pretty clear-cut, so I'll\naim to get that done today.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 15 Sep 2019 10:16:30 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 10:16:30AM +0100, Dean Rasheed wrote:\n>On Sat, 14 Sep 2019 at 05:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> > On Wed, Aug 14, 2019 at 05:24:26PM +1200, Thomas Munro wrote:\n>> >> On Wed, Aug 14, 2019 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >>> Oh, hmm --- yeah, that should mean it's safe. Maybe somebody incautiously\n>> >>> changed one of the other tests that run concurrently with \"rules\"?\n>>\n>> >> Looks like stats_ext.sql could be the problem. It creates and drops\n>> >> priv_test_view, not in a schema. Adding Dean, author of commit\n>> >> d7f8d26d.\n>>\n>> > Yeah, that seems like it might be the cause. I'll take a look at fixing\n>> > this, probably by creating the view in a different schema.\n>>\n>> Ping? We're still getting intermittent failures of this ilk, eg\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2019-09-14%2003%3A37%3A03\n>>\n>> With v12 release approaching, I'd like to not have failures\n>> like this in a released branch.\n>>\n>\n>Ah sorry, I missed this thread before. As author of that commit, it's\n>really on me to fix it, and the cause seems pretty clear-cut, so I'll\n>aim to get that done today.\n>\n\nFWIW here is a draft patch that I was going to propose - it simply moves\nthe table+view into a \"tststats\" schema. I suppose that's rougly what we\ndiscussed earlier in this thread.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 15 Sep 2019 12:11:06 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Sun, 15 Sep 2019 at 11:11, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Sep 15, 2019 at 10:16:30AM +0100, Dean Rasheed wrote:\n> >\n> >Ah sorry, I missed this thread before. As author of that commit, it's\n> >really on me to fix it, and the cause seems pretty clear-cut, so I'll\n> >aim to get that done today.\n>\n> FWIW here is a draft patch that I was going to propose - it simply moves\n> the table+view into a \"tststats\" schema. I suppose that's rougly what we\n> discussed earlier in this thread.\n>\n\nYes, that matches what I just drafted.\nDo you want to push it, or shall I?\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 15 Sep 2019 11:27:19 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 11:27:19AM +0100, Dean Rasheed wrote:\n>On Sun, 15 Sep 2019 at 11:11, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sun, Sep 15, 2019 at 10:16:30AM +0100, Dean Rasheed wrote:\n>> >\n>> >Ah sorry, I missed this thread before. As author of that commit, it's\n>> >really on me to fix it, and the cause seems pretty clear-cut, so I'll\n>> >aim to get that done today.\n>>\n>> FWIW here is a draft patch that I was going to propose - it simply moves\n>> the table+view into a \"tststats\" schema. I suppose that's rougly what we\n>> discussed earlier in this thread.\n>>\n>\n>Yes, that matches what I just drafted.\n>Do you want to push it, or shall I?\n\nPlease go ahead and push. I'm temporarily without commit access.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 15 Sep 2019 13:20:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
},
{
"msg_contents": "On Sun, 15 Sep 2019 at 12:20, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Sep 15, 2019 at 11:27:19AM +0100, Dean Rasheed wrote:\n> >On Sun, 15 Sep 2019 at 11:11, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Sun, Sep 15, 2019 at 10:16:30AM +0100, Dean Rasheed wrote:\n> >> >\n> >> >Ah sorry, I missed this thread before. As author of that commit, it's\n> >> >really on me to fix it, and the cause seems pretty clear-cut, so I'll\n> >> >aim to get that done today.\n> >>\n> >> FWIW here is a draft patch that I was going to propose - it simply moves\n> >> the table+view into a \"tststats\" schema. I suppose that's rougly what we\n> >> discussed earlier in this thread.\n> >>\n> >\n> >Yes, that matches what I just drafted.\n> >Do you want to push it, or shall I?\n>\n> Please go ahead and push. I'm temporarily without commit access.\n>\n\nOK, pushed.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 15 Sep 2019 14:17:13 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BF failure: could not open relation with OID XXXX while querying\n pg_views"
}
] |
[
{
"msg_contents": "I'm confused by how the code uses the term \"verifier\" in relation to SCRAM.\n\nISTM that the code uses the term as meaning whatever is or would be\nstored in pg_auth.rolpassword.\n\nI don't see this usage supported in the RFCs. In RFC 5802,\n\n verifier = \"v=\" base64\n ;; base-64 encoded ServerSignature.\n\nwhere\n\n ServerSignature := HMAC(ServerKey, AuthMessage)\n ServerKey := HMAC(SaltedPassword, \"Server Key\")\n AuthMessage := client-first-message-bare + \",\" +\n server-first-message + \",\" +\n client-final-message-without-proof\n\nwhereas what is stored in rolpassword is\n\n SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>\n\nwhere\n\n StoredKey := H(ClientKey)\n ClientKey := HMAC(SaltedPassword, \"Client Key\")\n\nSo while these are all related, I don't think it's accurate to call what\nis in rolpassword a SCRAM \"verifier\".\n\nRFC 5803 is titled \"Lightweight Directory Access Protocol (LDAP) Schema\nfor Storing Salted Challenge Response Authentication Mechanism (SCRAM)\nSecrets\". Following that, I think calling the contents of rolpassword a\n\"secret\" or a \"stored secret\" would be better.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Aug 2019 07:59:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "use of the term \"verifier\" with SCRAM"
},
{
"msg_contents": "On 14/08/2019 08:59, Peter Eisentraut wrote:\n> I'm confused by how the code uses the term \"verifier\" in relation to SCRAM.\n> \n> ISTM that the code uses the term as meaning whatever is or would be\n> stored in pg_auth.rolpassword.\n> \n> I don't see this usage supported in the RFCs. In RFC 5802,\n> \n> verifier = \"v=\" base64\n> ;; base-64 encoded ServerSignature.\n> \n> where\n> \n> ServerSignature := HMAC(ServerKey, AuthMessage)\n> ServerKey := HMAC(SaltedPassword, \"Server Key\")\n> AuthMessage := client-first-message-bare + \",\" +\n> server-first-message + \",\" +\n> client-final-message-without-proof\n> \n> whereas what is stored in rolpassword is\n> \n> SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>\n> \n> where\n> \n> StoredKey := H(ClientKey)\n> ClientKey := HMAC(SaltedPassword, \"Client Key\")\n> \n> So while these are all related, I don't think it's accurate to call what\n> is in rolpassword a SCRAM \"verifier\".\n\nHuh, you're right.\n\n> RFC 5803 is titled \"Lightweight Directory Access Protocol (LDAP) Schema\n> for Storing Salted Challenge Response Authentication Mechanism (SCRAM)\n> Secrets\". Following that, I think calling the contents of rolpassword a\n> \"secret\" or a \"stored secret\" would be better.\n\nRFC 5802 uses the term \"Authentication information\". See section \"2.1 \nTerminology\":\n\n o Authentication information: Information used to verify an identity\n claimed by a SCRAM client. The authentication information for a\n SCRAM identity consists of salt, iteration count, \"StoredKey\" and\n \"ServerKey\" (as defined in the algorithm overview) for each\n supported cryptographic hash function.\n\nBut I agree that \"secret\", as used in RFC5803 is better.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 14 Aug 2019 11:41:15 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: use of the term \"verifier\" with SCRAM"
},
{
"msg_contents": "On 2019-08-14 10:41, Heikki Linnakangas wrote:\n>> RFC 5803 is titled \"Lightweight Directory Access Protocol (LDAP) Schema\n>> for Storing Salted Challenge Response Authentication Mechanism (SCRAM)\n>> Secrets\". Following that, I think calling the contents of rolpassword a\n>> \"secret\" or a \"stored secret\" would be better.\n\n> But I agree that \"secret\", as used in RFC5803 is better.\n\nHere is my proposed patch to adjust this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 10 Oct 2019 09:08:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use of the term \"verifier\" with SCRAM"
},
{
"msg_contents": "On Thu, Oct 10, 2019 at 09:08:37AM +0200, Peter Eisentraut wrote:\n> Here is my proposed patch to adjust this.\n\nLooks fine to me reading through. I think that you are right to not\nchange the descriptions in build_server_final_message(), as that's\ndescribed similarly in RFC 5802. By renaming scram_build_verifier()\nto scram_build_secret() you are going to break one of my in-house\nextensions. I am using it to register for a user SCRAM veri^D^D^D^D\nsecrets with custom iteration and salt length :)\n--\nMichael",
"msg_date": "Thu, 10 Oct 2019 17:03:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: use of the term \"verifier\" with SCRAM"
},
{
"msg_contents": "On 2019-10-10 10:03, Michael Paquier wrote:\n> On Thu, Oct 10, 2019 at 09:08:37AM +0200, Peter Eisentraut wrote:\n>> Here is my proposed patch to adjust this.\n> \n> Looks fine to me reading through. I think that you are right to not\n> change the descriptions in build_server_final_message(), as that's\n> described similarly in RFC 5802.\n\ncommitted\n\n> By renaming scram_build_verifier()\n> to scram_build_secret() you are going to break one of my in-house\n> extensions. I am using it to register for a user SCRAM veri^D^D^D^D\n> secrets with custom iteration and salt length :)\n\nOK, that should be easy to work around with an #ifdef or two.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 12 Oct 2019 21:48:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use of the term \"verifier\" with SCRAM"
}
] |
[
{
"msg_contents": "I want to use valgrind to detect memory leak issue. Then I run into 2\nproblems I want to confirm them here.\n\nQ1: can I use 'leak-check=yes' to detect memory leak issue?\n1. In https://wiki.postgresql.org/wiki/Valgrind, the wiki indicates to\nuse '--leak-check=no'\n2. in https://github.com/afiskon/pgscripts/blob/master/valgrind.sh#L55,\nit use 'leak-check=no' as well with the wold \"No point to check for memory\nleaks, Valgrind doesn't understand MemoryContexts and stuff\"\n3. Per my understanding, if we build pg with USE_VALGRIND, then we can\nuse 'leak-check=yes' to detect memory leak issue, the idea here is\n a). valgrind can't understand MemoryContext,\n b). with USE_VALGRIND, pg use ` VALGRIND_MEMPOOL_ALLOC` and `\nVALGRIND_MEMPOOL_FREE` whenever we we allocate and free memory in\nMemoryContext\n c). finally, valgrind check the valgrind_mempool to know which memory\nis leak.\nSo the answer should be \"yes, we can use \"leak-check=yes\" to detect memory\nleak issue?\n\n\nQ2: do we check memory leak for some new commits or we can ignore them\nbased on we use memory context carefully? If we want to check memory leak\nfor some new commits, how to handle the existing memory leak case?\n\nwith `valgrind --leak-check=yes --suppressions=src/tools/valgrind.supp ...`\nand run `make installcheck` on an unmodified version (commit\nd06fe6ce2c79420fd19ac89ace81b66579f08493) , I run into 711 memory leaks\nwith `match-leak-kinds: definite`. if we want to check memory leak for\nnew commits, this should be a kind of troubles. how do you do with this\neveryday?\n\nThanks",
"msg_date": "Wed, 14 Aug 2019 14:33:54 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "use valgrind --leak-check=yes to detect memory leak"
},
{
"msg_contents": "Alex <zhihui.fan1213@gmail.com> writes:\n> I want to use valgrind to detect memory leak issue. Then I run into 2\n> problems I want to confirm them here.\n\n> 1. In https://wiki.postgresql.org/wiki/Valgrind, the wiki indicates to\n> use '--leak-check=no'\n\nThat's just a sample configuration.\n\n> 2. in https://github.com/afiskon/pgscripts/blob/master/valgrind.sh#L55,\n> it use 'leak-check=no' as well with the wold \"No point to check for memory\n> leaks, Valgrind doesn't understand MemoryContexts and stuff\"\n\nThat info is many years out-of-date. You can do it with USE_VALGRIND,\nand sometimes that's helpful, but ...\n\n> Q2: do we check memory leak for some new commits or we can ignore them\n> based on we use memory context carefully? If we want to check memory leak\n> for some new commits, how to handle the existing memory leak case?\n\nGenerally, the philosophy in PG is to not bother with freeing data\nexplicitly if letting it be reclaimed at context deletion is good enough.\nSometimes that's not good enough, but it is in most places, and for that\nreason plain valgrind leak checking is of limited use.\n\nvalgrind can be pretty helpful if you're trying to identify the origin\nof a serious leak --- the kind that accumulates memory wastage\nrepetitively over a query, for example. But what you have to do is\nlook for big leaks and ignore all the minor \"leaks\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Aug 2019 10:50:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use valgrind --leak-check=yes to detect memory leak"
}
] |
[
{
"msg_contents": "This kind of output is usually not helpful:\n\nTRAP: BadArgument(\"((context) != ((void *)0) && (((((const\nNode*)((context)))->type) == T_AllocSetContext) || ((((const\nNode*)((context)))->type) == T_SlabContext) || ((((const\nNode*)((context)))->type) == T_GenerationContext)))\", File:\n\"../../../../src/include/utils/memutils.h\", Line: 129)\n\nWhat we probably want is something like:\n\nTRAP: BadArgument(\"MemoryContextIsValid(context)\", File:\n\"../../../../src/include/utils/memutils.h\", Line: 129)\n\nThe problem is that the way the Assert macros are written they\nmacro-expand the arguments before stringifying them. The attached patch\nfixes that. This requires both replacing CppAsString by plain \"#\" and\nnot redirecting Assert() to Trap().\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 14 Aug 2019 22:28:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Improve Assert output"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> This kind of output is usually not helpful:\n> TRAP: BadArgument(\"((context) != ((void *)0) && (((((const\n> Node*)((context)))->type) == T_AllocSetContext) || ((((const\n> Node*)((context)))->type) == T_SlabContext) || ((((const\n> Node*)((context)))->type) == T_GenerationContext)))\", File:\n> \"../../../../src/include/utils/memutils.h\", Line: 129)\n\n> What we probably want is something like:\n\n> TRAP: BadArgument(\"MemoryContextIsValid(context)\", File:\n> \"../../../../src/include/utils/memutils.h\", Line: 129)\n\n+1, that would be a big improvement. The other thing that this\nis fixing is that the existing output for Assert et al shows\nthe *inverted* condition, which I for one always found confusing.\n\nI didn't try to test the patch, but it passes eyeball examination.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Aug 2019 16:36:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve Assert output"
}
] |
[
{
"msg_contents": "I notice that for a pg_amop object, getObjectDescription does this:\n\n /*------\n translator: %d is the operator strategy (a number), the\n first two %s's are data type names, the third %s is the\n description of the operator family, and the last %s is the\n textual form of the operator with arguments. */\n appendStringInfo(&buffer, _(\"operator %d (%s, %s) of %s: %s\"),\n amopForm->amopstrategy,\n format_type_be(amopForm->amoplefttype),\n format_type_be(amopForm->amoprighttype),\n opfam.data,\n format_operator(amopForm->amopopr));\n\nThis might seem all right in isolation, but it produces completely horrid\nresults as soon as you plug it into some larger message. For example,\n\ncontrib_regression=# alter operator family gin__int_ops using gin drop operator 8 (integer[],integer[]);\nERROR: cannot drop operator 8 (integer[], integer[]) of operator family gin__int_ops for access method gin: <@(integer[],integer[]) because operator class gin__int_ops for access method gin requires it\nHINT: You can drop operator class gin__int_ops for access method gin instead.\n\nThe colon seems like it ought to introduce a subsidiary sentence, but\nit does not, and the reader is led off into the weeds trying to figure\nout what connects to what.\n\nI follow the point of trying to show the actual operator name, but\nwe gotta work harder on the presentation. 
Perhaps this would work:\n\n appendStringInfo(&buffer, _(\"operator %d (%s, %s) (that is, %s) of %s\"),\n amopForm->amopstrategy,\n format_type_be(amopForm->amoplefttype),\n format_type_be(amopForm->amoprighttype),\n format_operator(amopForm->amopopr),\n opfam.data);\n\nleading to\n\nERROR: cannot drop operator 8 (integer[], integer[]) (that is, <@(integer[],integer[])) of operator family gin__int_ops for access method gin because operator class gin__int_ops for access method gin requires it\n\nLikewise for pg_amproc entries, of course.\n\nOr maybe we're just being too ambitious here and we should discard some of\nthis information. I'm not really sure that the format_operator result\ncan be included without complete loss of intelligibility.\n\nThoughts? I'm particularly unclear on how any of this might translate\ninto other languages, though I doubt that the current text is giving\ngood guidance to translators.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Aug 2019 19:07:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Don't like getObjectDescription results for pg_amop/pg_amproc"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 2:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Or maybe we're just being too ambitious here and we should discard some of\n> this information. I'm not really sure that the format_operator result\n> can be included without complete loss of intelligibility.\n>\n> Thoughts? I'm particularly unclear on how any of this might translate\n> into other languages, though I doubt that the current text is giving\n> good guidance to translators.\n\nCan left and right types of pg_amop mismatch to those of pg_operatror?\n It probably could for domains, any* types or something. But for\nbuiltin opclasses they always match.\n\n# select * from pg_amop amop join pg_operator op on op.oid =\namop.amopopr where amop.amoplefttype != op.oprleft or\namop.amoprighttype != op.oprright;\n(0 rows)\n\nCould we discard one pair of types from output?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 15 Aug 2019 05:20:42 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Don't like getObjectDescription results for pg_amop/pg_amproc"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Thu, Aug 15, 2019 at 2:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Or maybe we're just being too ambitious here and we should discard some of\n>> this information. I'm not really sure that the format_operator result\n>> can be included without complete loss of intelligibility.\n\n> Could we discard one pair of types from output?\n\nYeah, it would help to stop using format_operator and just print the\nbare name of the operator. (format_operator can actually make things\na whole lot worse than depicted in my example, because it may insist\non schema-qualification and double-quoting.) In principle that could\nbe ambiguous ... but the pg_amop entry has already been identified fully,\nand I don't think it needs to be part of the charter of this printout\nto *also* identify the underlying operator with complete precision.\n\nI'm still not sure how to cram the operator name into the output\nwithout using a colon, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Aug 2019 09:48:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Don't like getObjectDescription results for pg_amop/pg_amproc"
}
] |
[
{
"msg_contents": "Hello, I am trying to develop calendar extension for PostgreSQL but there is a difficulties on how to get day, month and year from PostgreSQL source code because when am read the PostgreSQL source code it uses DateADT as a data type and this DateADT returns the total numbers of day. So how can I get day, month or year only. For example the below code is PostgreSQL source code to return current date.\n/*\n* GetSQLCurrentDate -- implements CURRENT_DATE\n*/\nDateADT\nGetSQLCurrentDate(void)\n{\n TimestampTz ts;\n struct pg_tm tt,\n *tm = &tt;\n fsec_t fsec;\n int tz;\n\n ts = GetCurrentTransactionStartTimestamp();\n\n if (timestamp2tm(ts, &tz, tm, &fsec, NULL, NULL) != 0)\n ereport(ERROR,\n (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n errmsg(\"timestamp out of range\")));\n\n return date2j(tm->tm_year, tm->tm_mon, tm->tm_mday) - POSTGRES_EPOCH_JDATE;\n}\n From this source code how can I get only the year to convert my own calendar year. I need this because Ethiopian calendar is totally differ from GC in terms of day, month and year.\n\n\nRegards,\n____________________________________\nYonathan Misgan\nAssistant Lecturer, @ Debre Tabor University\nFaculty of Technology\nDepartment of Computer Science\nStudying MSc in Computer Science (in Data and Web Engineering)\n@ Addis Ababa University\nE-mail: yonamis@dtu.edu.et<mailto:yonamis@dtu.edu.et>\n yonathanmisgan.4@gmail.com<mailto:yonathanmisgan.4@gmail.com>\nTel: (+251)-911180185 (mob)\n\n\n\n\n\n\n\n\n\n\nHello, I am trying to develop calendar extension for PostgreSQL but there is a difficulties on how to get day, month and year from PostgreSQL source code because when am read the PostgreSQL source code it uses DateADT as a data type and\n this DateADT returns the total numbers of day. So how can I get day, month or year only. 
For example the below code is PostgreSQL source code to return current date.\n/*\n* GetSQLCurrentDate -- implements CURRENT_DATE\n*/\nDateADT\nGetSQLCurrentDate(void)\n{\n TimestampTz ts;\n struct pg_tm tt,\n *tm = &tt;\n fsec_t fsec;\n int tz;\n \n ts = GetCurrentTransactionStartTimestamp();\n \n if (timestamp2tm(ts, &tz, tm, &fsec, NULL, NULL) != 0)\n ereport(ERROR,\n (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n errmsg(\"timestamp out of range\")));\n \n return date2j(tm->tm_year, tm->tm_mon, tm->tm_mday) - POSTGRES_EPOCH_JDATE;\n}\nFrom this source code how can I get only the year to convert my own calendar year. I need this because Ethiopian calendar is totally differ from GC in terms of day, month and year.\n\n \nRegards,\n____________________________________\nYonathan Misgan \nAssistant Lecturer, @ Debre Tabor University\nFaculty of Technology\nDepartment of Computer Science\nStudying MSc in Computer Science (in\n Data and Web Engineering) \n@ Addis Ababa University \nE-mail: yonamis@dtu.edu.et\n yonathanmisgan.4@gmail.com\nTel: (+251)-911180185 (mob)",
"msg_date": "Thu, 15 Aug 2019 06:58:07 +0000",
"msg_from": "Yonatan Misgan <yonamis@dtu.edu.et>",
"msg_from_op": true,
"msg_subject": "Extension development"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 06:58:07AM +0000, Yonatan Misgan wrote:\n>Hello, I am trying to develop calendar extension for PostgreSQL but\n>there is a difficulties on how to get day, month and year from\n>PostgreSQL source code because when am read the PostgreSQL source code\n>it uses DateADT as a data type and this DateADT returns the total\n>numbers of day. So how can I get day, month or year only. For example\n>the below code is PostgreSQL source code to return current date.\n>/*\n>* GetSQLCurrentDate -- implements CURRENT_DATE\n>*/\n>DateADT\n>GetSQLCurrentDate(void)\n>{\n> TimestampTz ts;\n> struct pg_tm tt,\n> *tm = &tt;\n> fsec_t fsec;\n> int tz;\n>\n> ts = GetCurrentTransactionStartTimestamp();\n>\n> if (timestamp2tm(ts, &tz, tm, &fsec, NULL, NULL) != 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> errmsg(\"timestamp out of range\")));\n>\n> return date2j(tm->tm_year, tm->tm_mon, tm->tm_mday) - POSTGRES_EPOCH_JDATE;\n>}\n>From this source code how can I get only the year to convert my own\n>calendar year. I need this because Ethiopian calendar is totally\n>differ from GC in terms of day, month and year.\n>\n\nI think you might want to look at timestamptz_part() function, in\ntimestamp.c. That's what's behind date_part() SQL function, which seems\ndoing the sort of stuff you need.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 15 Aug 2019 22:53:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extension development"
},
{
"msg_contents": "On 08/15/19 02:58, Yonatan Misgan wrote:\n\n> From this source code how can I get only the year to convert my own\n> calendar year. I need this because Ethiopian calendar is totally differ\n> from GC in terms of day, month and year.\n\nI find myself wondering whether getting only the year is sufficient to\ndo the conversion. There is already an Ethiopic calendar available for\nJava (not included, but in org.threeten.extra[1]), and it seems to say\nthe years do not align precisely with Gregorian years (as I would not\nhave expected anyway):\n\n\"Dates are aligned such that 0001-01-01 (Ethiopic) is 0008-08-27 (ISO).\"\n\nSo it seems more likely that you would need a calculation involving the\nyear, month, and day ... or even that the Julian day number already\nstored in PostgreSQL could be the most convenient starting point for\nthe arithmetic you need.\n\nIt's possible you might want to crib some of the algorithm from the\nthreeten-extra Ethiopic date sources [2]. It would need adjustment for\nthe PostgreSQL epoch being Gregorian year 2000 rather than Java's 1970\n(a simple constant offset), and for PostgreSQL using a Julian day number\nrather than java.time's proleptic Gregorian (a difference changing by three\ndays every 400 years).\n\nAnother option would be to take advantage of PL/Java and directly use\nthe threeten-extra Ethiopic calendar.\n\nRegards,\n-Chap\n\n\n[1]\nhttps://www.threeten.org/threeten-extra/apidocs/org.threeten.extra/org/threeten/extra/chrono/EthiopicDate.html\n\n[2]\nhttps://github.com/ThreeTen/threeten-extra/blob/master/src/main/java/org/threeten/extra/chrono/EthiopicDate.java\n\n\n",
"msg_date": "Thu, 15 Aug 2019 18:55:12 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Extension development"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 10:55 AM Chapman Flack <chap@anastigmatix.net> wrote:\n> On 08/15/19 02:58, Yonatan Misgan wrote:\n> > From this source code how can I get only the year to convert my own\n> > calendar year. I need this because Ethiopian calendar is totally differ\n> > from GC in terms of day, month and year.\n>\n> I find myself wondering whether getting only the year is sufficient to\n> do the conversion. There is already an Ethiopic calendar available for\n> Java (not included, but in org.threeten.extra[1]), and it seems to say\n> the years do not align precisely with Gregorian years (as I would not\n> have expected anyway):\n\nI can't think of a single reason not to use ICU for this. It will\nhandle every kind of conversion you could need here, it's rock solid\nand debugged and established, it'll handle 10+ different calendars\n(not just Ethiopic), and it's already linked into PostgreSQL in most\ndeployments.\n\nObviously if Yonatan wants to write his own calendar logic that's\ncool, but if we're talking about something that might eventually be\npart of PostgreSQL core, or a contrib module shipped with PostgreSQL,\nor even a widely used popular extension shipped separately, I would\nbet on ICU rather than new hand rolled algorithms.\n\n> \"Dates are aligned such that 0001-01-01 (Ethiopic) is 0008-08-27 (ISO).\"\n>\n> So it seems more likely that you would need a calculation involving the\n> year, month, and day ... or even that the Julian day number already\n> stored in PostgreSQL could be the most convenient starting point for\n> the arithmetic you need.\n\nIndeed. I think you should convert between our internal day number\nand date components (year, month, day) + various string formats\nderived from them, and likewise for the internal microsecond number\nthat we use for timestamps. 
I think it's probably a mistake to start\nfrom the Gregorian y, m, d components and convert to Ethiopic y, m, d\n(conversion algorithms that start from those components are more\nsuitable for humans; PostgreSQL already has days numbered sequentially\nalong one line, not y, m, d). Or just let ICU do it.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2019 11:46:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extension development"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on the PWJ patch [1], I noticed $SUBJECT. They seem to\nbe leftovers from the original partitionwise-join patch, perhaps.\nAttached is a patch for removing them.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16wDqJiUof8+e4HuGmrAqqoFzb=iQX4V+xicsJ5_BvJ=g@mail.gmail.com",
"msg_date": "Thu, 15 Aug 2019 20:31:24 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Useless bms_free() calls in build_child_join_rel()"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 8:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> While working on the PWJ patch [1], I noticed $SUBJECT. They seem to\n> be leftovers from the original partitionwise-join patch, perhaps.\n> Attached is a patch for removing them.\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 16 Aug 2019 14:43:25 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Useless bms_free() calls in build_child_join_rel()"
}
] |
[
{
"msg_contents": "This is an implementation of the idea I mentioned in [0].\n\nThe naming and description perhaps isn't ideal yet but it works in\nprinciple.\n\nThe idea is that if you connect over a Unix-domain socket and the local\n(effective) user is the same as the server's (effective) user, then\naccess should be granted immediately without any checking of\npg_hba.conf. Because it's \"your own\" server and you can do anything you\nwant with it anyway.\n\nI included an option to turn this off because (a) people are going to\ncomplain, (b) you need this for the test suites to be able to test\npg_hba.conf, and (c) conceivably, someone might want to have all access\nto go through pg_hba.conf for some auditing reasons (perhaps via PAM).\n\nThis addresses the shortcomings of using peer as the default mechanism\nin initdb. In a subsequent step, my idea would be to make the default\ninitdb authentication setup to use md5 (or scram, tbd.) for both local\nand host.\n\n\n[0]:\nhttps://www.postgresql.org/message-id/29164e47-8dfb-4737-2a61-e67a18f847f3%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 15 Aug 2019 13:37:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 9:07 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> This is an implementation of the idea I mentioned in [0].\n>\n> The naming and description perhaps isn't ideal yet but it works in\n> principle.\n>\n> The idea is that if you connect over a Unix-domain socket and the local\n> (effective) user is the same as the server's (effective) user, then\n> access should be granted immediately without any checking of\n> pg_hba.conf. Because it's \"your own\" server and you can do anything you\n> want with it anyway.\n>\n> I included an option to turn this off because (a) people are going to\n> complain, (b) you need this for the test suites to be able to test\n> pg_hba.conf, and (c) conceivably, someone might want to have all access\n> to go through pg_hba.conf for some auditing reasons (perhaps via PAM).\n>\n> This addresses the shortcomings of using peer as the default mechanism\n> in initdb. In a subsequent step, my idea would be to make the default\n> initdb authentication setup to use md5 (or scram, tbd.) for both local\n> and host.\n>\n\n\nThis has been hanging around for a while. I guess the reason it hasn't\ngot much attention is that on its own it's not terribly useful.\nHowever, when you consider that it's a sensible prelude to setting a\nmore secure default for auth in initdb (I'd strongly advocate\nSCRAM-SHA-256 for that) it takes on much more significance.\n\nThe patch on its own is very small and straightforward, The actual\ncode is smaller than the docco.\n\nLet's do this so we can move to a better default auth.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Dec 2019 14:50:25 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> The idea is that if you connect over a Unix-domain socket and the local\n> (effective) user is the same as the server's (effective) user, then\n> access should be granted immediately without any checking of\n> pg_hba.conf. Because it's \"your own\" server and you can do anything you\n> want with it anyway.\n\nWhile I understand where you're generally coming from, I'm not entirely\nconvinced that this is a good direction to go in. Yes, you could go\nchange pg_hba.conf (maybe..)- but would doing so trigger an email to\nsomeone else? Would you really be able to change pg_hba.conf when you\nconsider more restrictive environments, like where there are SELinux\nchecks? These days, a simple getpeerid() doesn't actually convey all of\nthe information about a process that would let you be confident that the\nclient really has the same access to the system that the running PG\nserver does.\n\n> I included an option to turn this off because (a) people are going to\n> complain, (b) you need this for the test suites to be able to test\n> pg_hba.conf, and (c) conceivably, someone might want to have all access\n> to go through pg_hba.conf for some auditing reasons (perhaps via PAM).\n\nAuditing is certainly an important consideration.\n\n> This addresses the shortcomings of using peer as the default mechanism\n> in initdb. In a subsequent step, my idea would be to make the default\n> initdb authentication setup to use md5 (or scram, tbd.) for both local\n> and host.\n\nI'm definitely in favor of having 'peer' be used by default in initdb.\n\nI am, however, slightly confused as to why we'd then want to, in a\nsubsequent step, make the default set up use md5 or scram...?\n\n* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> This has been hanging around for a while. 
I guess the reason it hasn't\n> got much attention is that on its own it's not terribly useful.\n> However, when you consider that it's a sensible prelude to setting a\n> more secure default for auth in initdb (I'd strongly advocate\n> SCRAM-SHA-256 for that) it takes on much more significance.\n\nI'm all for improving the default for auth in initdb, but why wouldn't\nthat be peer auth first, followed by SCRAM..? If that's what you're\nsuggesting then great, but that wasn't very clear from the email text,\nat least. I've not done more than glanced at the patch.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 Dec 2019 23:40:49 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "> > This has been hanging around for a while. I guess the reason it hasn't\n> > got much attention is that on its own it's not terribly useful.\n> > However, when you consider that it's a sensible prelude to setting a\n> > more secure default for auth in initdb (I'd strongly advocate\n> > SCRAM-SHA-256 for that) it takes on much more significance.\n>\n> I'm all for improving the default for auth in initdb, but why wouldn't\n> that be peer auth first, followed by SCRAM..? If that's what you're\n> suggesting then great, but that wasn't very clear from the email text,\n> at least.\n\n\n\nWhat this is suggesting is in effect, for the db owner only and only\non a Unix domain socket, peer auth falling back to whatever is in the\nhba file. That makes setting something like scram-sha-256 as the\ndefault more practicable.\n\nIf we don't do something like this then changing the default could\ncause far more disruption than our users might like.\n\n> I've not done more than glanced at the patch.\n\nThat might pay dividends :-)\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Dec 2019 17:20:11 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On 2019-12-17 05:40, Stephen Frost wrote:\n> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>> The idea is that if you connect over a Unix-domain socket and the local\n>> (effective) user is the same as the server's (effective) user, then\n>> access should be granted immediately without any checking of\n>> pg_hba.conf. Because it's \"your own\" server and you can do anything you\n>> want with it anyway.\n> \n> While I understand where you're generally coming from, I'm not entirely\n> convinced that this is a good direction to go in. Yes, you could go\n> change pg_hba.conf (maybe..)- but would doing so trigger an email to\n> someone else? Would you really be able to change pg_hba.conf when you\n> consider more restrictive environments, like where there are SELinux\n> checks? These days, a simple getpeerid() doesn't actually convey all of\n> the information about a process that would let you be confident that the\n> client really has the same access to the system that the running PG\n> server does.\n\nI realize that there are a number of facilities nowadays to do enhanced \nsecurity setups. But let's consider what 99% of users are using. If \nthe database server runs as user X and you are logged in as user X, you \nshould be able to manage the database server that is running as user X \nwithout further restrictions. Anything else would call into question \nthe entire security model that postgres is built around. But also, \nthere is an option to turn this off in my patch, if you really have the \nneed.\n\n>> This addresses the shortcomings of using peer as the default mechanism\n>> in initdb. In a subsequent step, my idea would be to make the default\n>> initdb authentication setup to use md5 (or scram, tbd.) for both local\n>> and host.\n> \n> I'm definitely in favor of having 'peer' be used by default in initdb.\n\n'peer' is not good default for initdb. 
Consider setting up a database \nserver on a notional multiuser host with peer authentication. As soon \nas you create a database user, that would allow some random OS user to \nlog into your database server, if the name matches. 'peer' is useful if \nthere is a strong coordination between the OS user creation and the \ndatabase user creation. But the default set up by initdb should really \nonly let the instance owner in by default and require some additional \nauthentication (like passwords) from everybody else. 'peer' cannot \nexpress that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Dec 2019 11:27:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 5:27 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I realize that there are a number of facilities nowadays to do enhanced\n> security setups. But let's consider what 99% of users are using. If\n> the database server runs as user X and you are logged in as user X, you\n> should be able to manage the database server that is running as user X\n> without further restrictions. Anything else would call into question\n> the entire security model that postgres is built around. But also,\n> there is an option to turn this off in my patch, if you really have the\n> need.\n\nI feel like this is taking a policy decision that properly belongs in\npg_hba.conf and making it into a GUC. If you're introducing a GUC\nbecause it's not possible to configure the behavior that you want in\npg_hba.conf, then I think the solution to that is to enhance\npg_hba.conf so that it can support the behavior you want to configure.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 09:09:25 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-12-17 05:40, Stephen Frost wrote:\n> >* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> >>The idea is that if you connect over a Unix-domain socket and the local\n> >>(effective) user is the same as the server's (effective) user, then\n> >>access should be granted immediately without any checking of\n> >>pg_hba.conf. Because it's \"your own\" server and you can do anything you\n> >>want with it anyway.\n> >\n> >While I understand where you're generally coming from, I'm not entirely\n> >convinced that this is a good direction to go in. Yes, you could go\n> >change pg_hba.conf (maybe..)- but would doing so trigger an email to\n> >someone else? Would you really be able to change pg_hba.conf when you\n> >consider more restrictive environments, like where there are SELinux\n> >checks? These days, a simple getpeerid() doesn't actually convey all of\n> >the information about a process that would let you be confident that the\n> >client really has the same access to the system that the running PG\n> >server does.\n> \n> I realize that there are a number of facilities nowadays to do enhanced\n> security setups. But let's consider what 99% of users are using. If the\n> database server runs as user X and you are logged in as user X, you should\n> be able to manage the database server that is running as user X without\n> further restrictions. Anything else would call into question the entire\n> security model that postgres is built around. But also, there is an option\n> to turn this off in my patch, if you really have the need.\n\nIf we want to talk about what 99% of users are using, I'd suggest we\nconsider what our packagers are doing, and have been for many, many\nyears, which is setting up pg_hba.conf with peer auth...\n\n> >>This addresses the shortcomings of using peer as the default mechanism\n> >>in initdb. 
In a subsequent step, my idea would be to make the default\n> >>initdb authentication setup to use md5 (or scram, tbd.) for both local\n> >>and host.\n> >\n> >I'm definitely in favor of having 'peer' be used by default in initdb.\n> \n> 'peer' is not good default for initdb. Consider setting up a database\n> server on a notional multiuser host with peer authentication. As soon as\n> you create a database user, that would allow some random OS user to log into\n> your database server, if the name matches. 'peer' is useful if there is a\n> strong coordination between the OS user creation and the database user\n> creation. But the default set up by initdb should really only let the\n> instance owner in by default and require some additional authentication\n> (like passwords) from everybody else. 'peer' cannot express that.\n\nAnd so saying it's not a good default for initdb strikes me as pretty\ndarn odd. If we're going to change our defaults here, I'd argue that we\nshould be looking to reduce the amount of difference between what\npackagers do here and what our built-in defaults are, not invent a new\nGUC to do something that pg_hba.conf can already be configured to do.\n\nAs for the question about how to set up pg_hba.conf so that just the DB\nowner can log in via peer, the Debian/Ubuntu packages are deployed, by\ndefault, with an explicit message and entry:\n\n# DO NOT DISABLE!\n# If you change this first entry you will need to make sure that the\n# database superuser can access the database using some other method.\n# Noninteractive access to all databases is required during automatic\n# maintenance (custom daily cronjobs, replication, and similar tasks).\n#\n# Database administrative login by Unix domain socket\nlocal all postgres peer\n\nWhich represents pretty much exactly what you're going for here, doesn't\nit..?\n\nOf course, later on in the default Debian/Ubuntu install is:\n\n# \"local\" is for Unix domain socket connections only\nlocal all all peer\n\nand 
is what a very large number of our users are running with, because\nit's a sensible default installation, even for multi-user systems. If\nyou aren't considering the authentication method when you're creating\nnew users, then that's an education problem, not a technical one.\n\nIf you're curious about where that entry for Debian came from, I can\nshed some light on that too-\n\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=303274\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Dec 2019 10:24:56 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On 2019-12-18 15:09, Robert Haas wrote:\n> I feel like this is taking a policy decision that properly belongs in\n> pg_hba.conf and making it into a GUC. If you're introducing a GUC\n> because it's not possible to configure the behavior that you want in\n> pg_hba.conf, then I think the solution to that is to enhance\n> pg_hba.conf so that it can support the behavior you want to configure.\n\nYeah, I was not really happy with that either. So I tried a new \napproach: Introduce a new pg_hba.conf line type \"localowner\" that \nmatches on Unix-domain socket connections if the user at the client end \nmatches the owner of the postgres process. Then the behavior I'm after \ncan be expressed with a pg_hba.conf entry like\n\nlocalowner all all trust\n\nor similar, as one chooses.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Dec 2019 18:20:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On 2019-12-18 16:24, Stephen Frost wrote:\n> As for the question about how to set up pg_hba.conf so that just the DB\n> owner can log in via peer, the Debian/Ubuntu packages are deployed, by\n> default, with an explicit message and entry:\n> \n> # DO NOT DISABLE!\n> # If you change this first entry you will need to make sure that the\n> # database superuser can access the database using some other method.\n> # Noninteractive access to all databases is required during automatic\n> # maintenance (custom daily cronjobs, replication, and similar tasks).\n> #\n> # Database administrative login by Unix domain socket\n> local all postgres peer\n> \n> Which represents pretty much exactly what you're going for here, doesn't\n> it..?\n\nThis is similar but not exactly the same thing: (1) It doesn't work if \nthe OS user name and the PG superuser name are not equal, and (2) it \nonly allows access as \"postgres\" and not other users. Both of these \nissues can be worked around to some extent by setting up pg_ident.conf \nmaps, but that can become a bit cumbersome. The underlying problem is \nthat \"peer\" is expressing a relationship between OS user and DB user, \nbut what we (arguably) want is a relationship between the client OS user \nand the server OS user, and making \"peer\" do the latter is just hacking \naround the problem indirectly.\n\n> Of course, later on in the default Debian/Ubuntu install is:\n> \n> # \"local\" is for Unix domain socket connections only\n> local all all peer\n> \n> and is what a very large number of our users are running with, because\n> it's a sensible default installation, even for multi-user systems. If\n> you aren't considering the authentication method when you're creating\n> new users, then that's an education problem, not a technical one.\n\nWell, if this is the pg_hba.conf setup and I am considering the \nauthentication method when creating new users, then my only safe option \nis to not create any new users. 
Because which OS users exist is not \ncontrolled by the DBA. If the OS admin and the DBA are the same entity, \nthen peer is obviously very nice, but if not, then peer is a trap.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 18:32:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-12-18 15:09, Robert Haas wrote:\n> >I feel like this is taking a policy decision that properly belongs in\n> >pg_hba.conf and making it into a GUC. If you're introducing a GUC\n> >because it's not possible to configure the behavior that you want in\n> >pg_hba.conf, then I think the solution to that is to enhance\n> >pg_hba.conf so that it can support the behavior you want to configure.\n> \n> Yeah, I was not really happy with that either. So I tried a new approach:\n> Introduce a new pg_hba.conf line type \"localowner\" that matches on\n> Unix-domain socket connections if the user at the client end matches the\n> owner of the postgres process. Then the behavior I'm after can be expressed\n> with a pg_hba.conf entry like\n> \n> localowner all all trust\n> \n> or similar, as one chooses.\n\nUgh, no thanks. We already have enough top-level \"Types\" that I really\ndon't like inventing another that's \"almost like this other one, but not\nquite\".\n\nWhy not have a special user that can be used for Type: local pg_hba.conf\nlines? So you'd have:\n\nlocal all localowner peer\n\nThat way you're:\n\na) only keeping the types we have today\nb) using peer auth, which is what this actually is\nc) NOT using 'trust', which we shouldn't because it's bad\nd) matching up to what Debian has been doing for decades already\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 12:35:51 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-12-18 16:24, Stephen Frost wrote:\n> >Which represents pretty much exactly what you're going for here, doesn't\n> >it..?\n> \n> This is similar but not exactly the same thing: (1) It doesn't work if the\n> OS user name and the PG superuser name are not equal,\n\nFor this part, at least, it's a non-issue for the Debian packaging\nbecause it's hard-coded (more-or-less) to install as the postgres user.\nIf it wasn't though, I'm sure the template that's used to create the\ndefault pg_hba.conf could be adjusted to match whatever you want.\nAdding an option to allow the pg_hba.conf to have an entry instead\nthat's \"whatever the database's unix ID is\" then that might be an\nalright option.\n\n> and (2) it only allows\n> access as \"postgres\" and not other users.\n\nRight... Is the idea here (which didn't seem to be outlined in the\ninitial email) that this will allow the DB \"owner\" to log in directly to\nthe DB as any role..? If so, why would that be applied only to this\nparticular \"owner\" case and not to, say, all superusers (since they can\nall do SET SESSION AUTHORIZATION already...).\n\n> Both of these issues can be\n> worked around to some extent by setting up pg_ident.conf maps, but that can\n> become a bit cumbersome.\n\nIf you have two lines in pg_hba.conf then you don't need an actual\nmapping..\n\n> The underlying problem is that \"peer\" is\n> expressing a relationship between OS user and DB user, but what we\n> (arguably) want is a relationship between the client OS user and the server\n> OS user, and making \"peer\" do the latter is just hacking around the problem\n> indirectly.\n\nWhat pg_hba.conf is really all about is expression a relationship\nbetween \"some outside authentication system\" and a DB user; that's\nexactly what it's for. 
Redefining it to be about something else strikes\nme as a bad idea that's just going to be confusing and will require a\ngreat deal more explaining whenever someone is first learning about PG.\n\n> >Of course, later on in the default Debian/Ubuntu install is:\n> >\n> ># \"local\" is for Unix domain socket connections only\n> >local all all peer\n> >\n> >and is what a very large number of our users are running with, because\n> >it's a sensible default installation, even for multi-user systems. If\n> >you aren't considering the authentication method when you're creating\n> >new users, then that's an education problem, not a technical one.\n> \n> Well, if this is the pg_hba.conf setup and I am considering the\n> authentication method when creating new users, then my only safe option is\n> to not create any new users. Because which OS users exist is not controlled\n> by the DBA. If the OS admin and the DBA are the same entity, then peer is\n> obviously very nice, but if not, then peer is a trap.\n\nThey don't have to be the same entity, they just have to communicate\nwith each other, which isn't entirely unheard of. We're also just\ntalking about defaults here- and what I'm trying to stress is that a\nhuge number of installations already use this. If there's a serious\nissue with it then perhaps there's something to discuss, but if not then\nI'm not really anxious to move in a direction that's actively away from\nwhat our users are already using without it being clearly a better\noption, which this doesn't seem to be.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 12:49:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Why not have a special user that can be used for Type: local pg_hba.conf\n> lines? So you'd have:\n> local all localowner peer\n> That way you're:\n> a) only keeping the types we have today\n> b) using peer auth, which is what this actually is\n> c) NOT using 'trust', which we shouldn't because it's bad\n> d) matching up to what Debian has been doing for decades already\n\nBut ... if \"peer\" auth allowed all the cases Peter wants to allow,\nwe'd not be having this discussion in the first place, would we?\n\nThe specific case of concern here is the database's OS-owner wanting\nto connect as some other database role than her OS user name.\n\"peer\" doesn't allow that, at least not without messy additional\nconfiguration in the form of a username map.\n\nWhile the syntax you suggest above could be made to implement that,\nit doesn't seem very intuitive to me. Maybe what we want is some\nadditional option that acts like a prefab username map:\n\nlocal all all peer let_OS_owner_in_as_any_role\n\nBikeshedding the actual option name is left for the reader. We'd\nalso have to think whether a regular \"map\" option can be allowed\non the same line, and if so how the two would interact. (It might\nbe as easy as \"allow connection if either option allows it\".)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 12:59:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Well, if this is the pg_hba.conf setup and I am considering the \n> authentication method when creating new users, then my only safe option \n> is to not create any new users. Because which OS users exist is not \n> controlled by the DBA. If the OS admin and the DBA are the same entity, \n> then peer is obviously very nice, but if not, then peer is a trap.\n\nNot sure about whether this is an interesting consideration or not.\nIf you don't trust the OS-level admin, don't you basically need to\ngo find a different computer to work on?\n\nStill, I take your point that \"peer\" does risk letting in a set of\nconnections wider than what the DBA was thinking about. Enlarging\non my other response that what we want is an auth option not a whole\nnew auth type, maybe we could invent another auth option that limits\nwhich OS user names are accepted by \"peer\", with an easy special case\nif you only want to allow the server's OS owner. (Note that this\nis *not* the existing \"role\" column, which restricts the database\nrole name not the external name; nor is it something you can do\nwith a username map, at least not with the current definition of\nthose.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 13:08:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Why not have a special user that can be used for Type: local pg_hba.conf\n> > lines? So you'd have:\n> > local all localowner peer\n> > That way you're:\n> > a) only keeping the types we have today\n> > b) using peer auth, which is what this actually is\n> > c) NOT using 'trust', which we shouldn't because it's bad\n> > d) matching up to what Debian has been doing for decades already\n> \n> But ... if \"peer\" auth allowed all the cases Peter wants to allow,\n> we'd not be having this discussion in the first place, would we?\n\nI'm still not entirely convinced it doesn't, but that's also because I\nkeep thinking we're talking about a sensible default here and I'm coming\nto realize that the idea here is to let the cluster owner not just\nbypass auth to connect as their own DB user, but to allow the cluster\nown to connect as ANY database role, and that's not a sensible *default*\nsetting for us to have, imv.\n\n> The specific case of concern here is the database's OS-owner wanting\n> to connect as some other database role than her OS user name.\n> \"peer\" doesn't allow that, at least not without messy additional\n> configuration in the form of a username map.\n\nYes, to allow that you'd need to have a mapping. Theoretically, we\ncould have a mapping automatically exist which could be used for that,\nbut now I'm trying to understand what the use-case here is for actual\ndeployments. If this is for testing- great, let's have some flag\nsomewhere that we can enable for testing but we shouldn't have it as the\n*default*.\n\n> While the syntax you suggest above could be made to implement that,\n> it doesn't seem very intuitive to me. Maybe what we want is some\n> additional option that acts like a prefab username map:\n> \n> local all all peer let_OS_owner_in_as_any_role\n\nOr ... 
map=pg_os_user_allow\n\nand declare 'pg_*' as system-defined special mappings, like \"OS user\" ->\n\"anyone\".\n\n> Bikeshedding the actual option name is left for the reader. We'd\n> also have to think whether a regular \"map\" option can be allowed\n> on the same line, and if so how the two would interact. (It might\n> be as easy as \"allow connection if either option allows it\".)\n\nAllowing multiple maps to be used is a different feature.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 14:17:04 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > Well, if this is the pg_hba.conf setup and I am considering the \n> > authentication method when creating new users, then my only safe option \n> > is to not create any new users. Because which OS users exist is not \n> > controlled by the DBA. If the OS admin and the DBA are the same entity, \n> > then peer is obviously very nice, but if not, then peer is a trap.\n> \n> Not sure about whether this is an interesting consideration or not.\n> If you don't trust the OS-level admin, don't you basically need to\n> go find a different computer to work on?\n> \n> Still, I take your point that \"peer\" does risk letting in a set of\n> connections wider than what the DBA was thinking about. Enlarging\n> on my other response that what we want is an auth option not a whole\n> new auth type, maybe we could invent another auth option that limits\n> which OS user names are accepted by \"peer\", with an easy special case\n> if you only want to allow the server's OS owner. (Note that this\n> is *not* the existing \"role\" column, which restricts the database\n> role name not the external name; nor is it something you can do\n> with a username map, at least not with the current definition of\n> those.)\n\nSure you can do this with an existing map- just define a mapping and\nonly include in it the users you want to allow. If no mapping matches,\nthen your connection is denied.\n\nIf you want an equality match in your mapping, then you have to provide\none, like so:\n\ndefault /^(.*)$ \\1\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 14:20:20 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> But ... if \"peer\" auth allowed all the cases Peter wants to allow,\n>> we'd not be having this discussion in the first place, would we?\n\n> I'm still not entirely convinced it doesn't, but that's also because I\n> keep thinking we're talking about a sensible default here and I'm coming\n> to realize that the idea here is to let the cluster owner not just\n> bypass auth to connect as their own DB user, but to allow the cluster\n> own to connect as ANY database role,\n\nRight.\n\n> and that's not a sensible *default*\n> setting for us to have, imv.\n\nThere's certainly a discussion to be had about whether that should be\nthe default or not (and I too am doubtful that it should be); but I think\nPeter made a sufficient case that it'd be useful if it were easy to set\nthings up that way. Right now it's a tad painful.\n\n>> While the syntax you suggest above could be made to implement that,\n>> it doesn't seem very intuitive to me. Maybe what we want is some\n>> additional option that acts like a prefab username map:\n>> \n>> local all all peer let_OS_owner_in_as_any_role\n\n> Or ... map=pg_os_user_allow\n\n> and declare 'pg_*' as system-defined special mappings, like \"OS user\" ->\n> \"anyone\".\n\nMaybe, but then we'd need to allow multiple map options. Still, if\nthe semantics are \"union of what any map allows\", that doesn't\nseem too hard.\n\n> Allowing multiple maps to be used is a different feature.\n\nNot really; I think it is quite reasonable to want \"OS owner can\nconnect as anyone\" plus \"joe should be allowed to connect as charlie\".\nIf you want to add the latter to a working setup, you shouldn't have\nto suddenly figure out how to reimplement \"map=pg_os_user_allow\" at\na lower level of detail. That's a recipe for mistakes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 14:35:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Still, I take your point that \"peer\" does risk letting in a set of\n>> connections wider than what the DBA was thinking about. Enlarging\n>> on my other response that what we want is an auth option not a whole\n>> new auth type, maybe we could invent another auth option that limits\n>> which OS user names are accepted by \"peer\", with an easy special case\n>> if you only want to allow the server's OS owner. (Note that this\n>> is *not* the existing \"role\" column, which restricts the database\n>> role name not the external name; nor is it something you can do\n>> with a username map, at least not with the current definition of\n>> those.)\n\n> Sure you can do this with an existing map- just define a mapping and\n> only include in it the users you want to allow. If no mapping matches,\n> then your connection is denied.\n\nOh, hm ... that wasn't my mental model of it, and the documentation\ndoesn't really spell that out anywhere. It would be reasonable for\npeople to assume that the default behavior is equivalent to a map\nwith no entries, and I don't see anything in the docs that really\ncontradicts that. As best I can tell from the above, the default\ncorresponds to an explicitly-written map like\n\n\tdefault /^(.*)$ \\1\n\nwhich seems unreasonably complicated; it's sure going to look\nlike line noise to somebody who's not already familiar with\nregex notation.\n\nThe other issue is that you can't actually implement the behavior\nPeter wants with the existing username map facility, because there's\nno wildcard for the database role name column. 
You can't write\n\n\tpg_os_user_allow postgres .*\n\nand even if you could, that's not a great solution because it\nhard-wires the OS username of the database server's owner.\n\nI think it'd be great if this behavior could be implemented\nwithin the notation, because we could then just set up a\nnon-empty default pg_ident.conf with useful behavioral\nexamples in the form of prefab maps. In particular, we\nshould think about how hard it is to do \"I want the default\nbehavior plus allow joe to connect as charlie\". If the\ndefault is a one-liner that you can copy and add to,\nthat's a lot better than if you have to reverse-engineer\nwhat to write.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 14:56:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> But ... if \"peer\" auth allowed all the cases Peter wants to allow,\n> >> we'd not be having this discussion in the first place, would we?\n> \n> > I'm still not entirely convinced it doesn't, but that's also because I\n> > keep thinking we're talking about a sensible default here and I'm coming\n> > to realize that the idea here is to let the cluster owner not just\n> > bypass auth to connect as their own DB user, but to allow the cluster\n> > own to connect as ANY database role,\n> \n> Right.\n> \n> > and that's not a sensible *default*\n> > setting for us to have, imv.\n> \n> There's certainly a discussion to be had about whether that should be\n> the default or not (and I too am doubtful that it should be); but I think\n> Peter made a sufficient case that it'd be useful if it were easy to set\n> things up that way. Right now it's a tad painful.\n\nI'm also concerned with the \"where does this end?\" question. What if\nthe role is set to not allow connections? What if the database is set\nto not allow connections? I mean, sure, it's the OS user, so they can\ndo *anything*, technically, but we generally want some intelligent\nsafe-guards in place where we make them jump through an extra hoop or\ntwo to make a change that could really break things. Further, some of\nthose \"safe-guards\" that we have might be \"auditing requirements\" to\nother people who expect to see in their audit logs when a change is made\nto, say, allow the OS user to log in as some other role. I suppose as\nlong as this can be turned off (and, ideally, isn't the default anyway)\nthen hopefully it won't be too much of an issue.\n\n> >> While the syntax you suggest above could be made to implement that,\n> >> it doesn't seem very intuitive to me. 
Maybe what we want is some\n> >> additional option that acts like a prefab username map:\n> >> \n> >> local all all peer let_OS_owner_in_as_any_role\n> \n> > Or ... map=pg_os_user_allow\n> \n> > and declare 'pg_*' as system-defined special mappings, like \"OS user\" ->\n> > \"anyone\".\n> \n> Maybe, but then we'd need to allow multiple map options. Still, if\n> the semantics are \"union of what any map allows\", that doesn't\n> seem too hard.\n\nI agree that doesn't seem too hard and generally seems reasonable.\nSeems like we might also want an explicit way of saying '*' on the\nright-hand-side, or something like it, so users could set this up for\nanyone they want and not only have this option exist for the user who\nhappens to be logging in with the same unix uid of the PG server.\n\n> > Allowing multiple maps to be used is a different feature.\n> \n> Not really; I think it is quite reasonable to want \"OS owner can\n> connect as anyone\" plus \"joe should be allowed to connect as charlie\".\n> If you want to add the latter to a working setup, you shouldn't have\n> to suddenly figure out how to reimplement \"map=pg_os_user_allow\" at\n> a lower level of detail. That's a recipe for mistakes.\n\nEh, I wouldn't argue if someone wrote a single patch that does both, but\nconsidering we don't support multiple maps today, I wouldn't push on\nsomeone wanting to extend the way maps work today to require that they\nimplement support for multiple maps too.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 14:57:59 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Still, I take your point that \"peer\" does risk letting in a set of\n> >> connections wider than what the DBA was thinking about. Enlarging\n> >> on my other response that what we want is an auth option not a whole\n> >> new auth type, maybe we could invent another auth option that limits\n> >> which OS user names are accepted by \"peer\", with an easy special case\n> >> if you only want to allow the server's OS owner. (Note that this\n> >> is *not* the existing \"role\" column, which restricts the database\n> >> role name not the external name; nor is it something you can do\n> >> with a username map, at least not with the current definition of\n> >> those.)\n> \n> > Sure you can do this with an existing map- just define a mapping and\n> > only include in it the users you want to allow. If no mapping matches,\n> > then your connection is denied.\n> \n> Oh, hm ... that wasn't my mental model of it, and the documentation\n> doesn't really spell that out anywhere.\n\nThat documentation also refers to 'ident' still, unfortunately.\n\n> It would be reasonable for\n> people to assume that the default behavior is equivalent to a map\n> with no entries, and I don't see anything in the docs that really\n> contradicts that. As best I can tell from the above, the default\n> corresponds to an explicitly-written map like\n> \n> \tdefault /^(.*)$ \\1\n> \n> which seems unreasonably complicated; it's sure going to look\n> like line noise to somebody who's not already familiar with\n> regex notation.\n\nRight- the default mapping is an 'equality' mapping, which, implemented\nas a regexp, looks like the above. When it comes to what happens when\nyou add 'map=' to an entry in your pg_hba.conf, I view that as \"I am\nreplacing the default mapping with this one of my own\". 
That's\nnecessary if your OS users don't map to your DB users (I want to be able\nto support having 'alice' map to 'bob', and 'bob' map to 'alice',\nwithout 'alice' being allowed to log in as 'alice' or 'bob' to log in as\n'bob'...).\n\n> The other issue is that you can't actually implement the behavior\n> Peter wants with the existing username map facility, because there's\n> no wildcard for the database role name column. You can't write\n> \n> \tpg_os_user_allow postgres .*\n> \n> and even if you could, that's not a great solution because it\n> hard-wires the OS username of the database server's owner.\n\nYeah, that is true, though we could make both halves of that work, I\nwould think.\n\n> I think it'd be great if this behavior could be implemented\n> within the notation, because we could then just set up a\n> non-empty default pg_ident.conf with useful behavioral\n> examples in the form of prefab maps. In particular, we\n> should think about how hard it is to do \"I want the default\n> behavior plus allow joe to connect as charlie\". If the\n> default is a one-liner that you can copy and add to,\n> that's a lot better than if you have to reverse-engineer\n> what to write.\n\nThis direction certainly sounds more appealing to me.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 15:22:25 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "Hi Peter,\n\nOn 12/27/19 3:22 PM, Stephen Frost wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> \n>> I think it'd be great if this behavior could be implemented\n>> within the notation, because we could then just set up a\n>> non-empty default pg_ident.conf with useful behavioral\n>> examples in the form of prefab maps. In particular, we\n>> should think about how hard it is to do \"I want the default\n>> behavior plus allow joe to connect as charlie\". If the\n>> default is a one-liner that you can copy and add to,\n>> that's a lot better than if you have to reverse-engineer\n>> what to write.\n> \n> This direction certainly sounds more appealing to me.\n\nAny thoughts on the discussion between Stephen and Tom?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 10:58:46 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On 2020-03-27 15:58, David Steele wrote:\n> Hi Peter,\n> \n> On 12/27/19 3:22 PM, Stephen Frost wrote:\n>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>\n>>> I think it'd be great if this behavior could be implemented\n>>> within the notation, because we could then just set up a\n>>> non-empty default pg_ident.conf with useful behavioral\n>>> examples in the form of prefab maps. In particular, we\n>>> should think about how hard it is to do \"I want the default\n>>> behavior plus allow joe to connect as charlie\". If the\n>>> default is a one-liner that you can copy and add to,\n>>> that's a lot better than if you have to reverse-engineer\n>>> what to write.\n>>\n>> This direction certainly sounds more appealing to me.\n> \n> Any thoughts on the discussion between Stephen and Tom?\n\nIt appears that the whole discussion of what a new default security \nconfiguration could or should be hasn't really moved to a new consensus, \nso given the time, I think it's best that we leave things as they are \nand continue the exploration at some future time.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Apr 2020 12:15:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
},
{
"msg_contents": "On 4/5/20 6:15 AM, Peter Eisentraut wrote:\n> On 2020-03-27 15:58, David Steele wrote:\n>> Hi Peter,\n>>\n>> On 12/27/19 3:22 PM, Stephen Frost wrote:\n>>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>>\n>>>> I think it'd be great if this behavior could be implemented\n>>>> within the notation, because we could then just set up a\n>>>> non-empty default pg_ident.conf with useful behavioral\n>>>> examples in the form of prefab maps.  In particular, we\n>>>> should think about how hard it is to do \"I want the default\n>>>> behavior plus allow joe to connect as charlie\".  If the\n>>>> default is a one-liner that you can copy and add to,\n>>>> that's a lot better than if you have to reverse-engineer\n>>>> what to write.\n>>>\n>>> This direction certainly sounds more appealing to me.\n>>\n>> Any thoughts on the discussion between Stephen and Tom?\n> \n> It appears that the whole discussion of what a new default security \n> configuration could or should be hasn't really moved to a new consensus, \n> so given the time, I think it's best that we leave things as they are \n> and continue the exploration at some future time.\n\nSounds good. I've marked the patch RwF.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 6 Apr 2020 14:12:54 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow cluster owner to bypass authentication"
}
] |
[
{
"msg_contents": "Dear all,\n\nI was wondering if someone could help me understands what a union all\nactually does.\n\nFor my thesis I am using Apache Calcite to rewrite queries into using\nmaterialized views which I then give to a Postgres database.\nFor some queries, this means that they will be rewritten in a UNION ALL\nstyle query between an expression and a table scan of a materialized view.\nHowever, contrary to what I expected, the UNION ALL query is actually a lot\nslower.\n\nAs an example, say I have 2 tables: actor and movie. Furthermore, there is\nalso a foreign key index on movie to actor.\nI also have a materialized view with the join of these 2 tables for all\nmovies <= 2015 called A.\nNow, if I want to query all entries in the join between actor and movie, I\nwould assume that a UNION ALL between the join of actor and movie for\nmovies >2015 and A is faster than executing the original query..\nIf I look at the explain analyze part, I can certainly see a reduction in\ncost up until the UNION ALL part, which carries a respective cost more than\nnegating the cost reduction up to a point where I might as well not use the\nexisting materialized view.\n\nI have some trouble understanding this phenomenon.\nOne thought which came to my mind was that perhaps UNION ALL might create a\ntemporary table containing both result sets, and then do a table scan and\nreturn that result.\nthis would greatly increase IO cost which could attribute to the problem.\nHowever, I am really not sure what UNION ALL actually does to append both\nresult sets so I was wondering if someone would be able to help me out with\nthis.\n\n\nMark\n",
"msg_date": "Thu, 15 Aug 2019 20:37:06 +0200",
"msg_from": "Mark Pasterkamp <markpasterkamp1994@gmail.com>",
"msg_from_op": true,
"msg_subject": "UNION ALL"
},
{
"msg_contents": "Mark Pasterkamp <markpasterkamp1994@gmail.com> writes:\n> I was wondering if someone could help me understands what a union all\n> actually does.\n\nGenerally speaking, it runs the first query and then the second query.\nYou'd really need to provide a lot more detail for anyone to say more\nthan that.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Aug 2019 14:49:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: UNION ALL"
},
{
"msg_contents": "Generally speaking, when executing UNION ; a DISTINCT is run afterward on the resultset.\n\nSo, if you're sure that each part of UNION cannot return a line returned by another one, you may use UNION ALL, you'll cut the cost of the final implicit DISTINCT.\n\n\n----- Mail original -----\nDe: \"Mark Pasterkamp\" <markpasterkamp1994@gmail.com>\nÀ: pgsql-hackers@lists.postgresql.org\nEnvoyé: Jeudi 15 Août 2019 20:37:06\nObjet: UNION ALL\n\n\nDear all, \n\n\nI was wondering if someone could help me understands what a union all actually does. \n\n\nFor my thesis I am using Apache Calcite to rewrite queries into using materialized views which I then give to a Postgres database. \nFor some queries, this means that they will be rewritten in a UNION ALL style query between an expression and a table scan of a materialized view. \nHowever, contrary to what I expected, the UNION ALL query is actually a lot slower. \n\n\nAs an example, say I have 2 tables: actor and movie. Furthermore, there is also a foreign key index on movie to actor. \nI also have a materialized view with the join of these 2 tables for all movies <= 2015 called A. \nNow, if I want to query all entries in the join between actor and movie, I would assume that a UNION ALL between the join of actor and movie for movies >2015 and A is faster than executing the original query.. \nIf I look at the explain analyze part, I can certainly see a reduction in cost up until the UNION ALL part, which carries a respective cost more than negating the cost reduction up to a point where I might as well not use the existing materialized view. \n\n\nI have some trouble understanding this phenomenon. \nOne thought which came to my mind was that perhaps UNION ALL might create a temporary table containing both result sets, and then do a table scan and return that result. \n\nthis would greatly increase IO cost which could attribute to the problem. 
\nHowever, I am really not sure what UNION ALL actually does to append both result sets so I was wondering if someone would be able to help me out with this. \n\n\n\n\nMark\n\n\n",
"msg_date": "Thu, 15 Aug 2019 21:15:50 +0200 (CEST)",
"msg_from": "066ce286@free.fr",
"msg_from_op": false,
"msg_subject": "Re: UNION ALL"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 12:16 AM <066ce286@free.fr> wrote:\n\n> Generally speaking, when executing UNION ; a DISTINCT is run afterward on\n> the resultset.\n>\n> So, if you're sure that each part of UNION cannot return a line returned\n> by another one, you may use UNION ALL, you'll cut the cost of the final\n> implicit DISTINCT.\n>\n>\n> ----- Mail original -----\n> De: \"Mark Pasterkamp\" <markpasterkamp1994@gmail.com>\n> À: pgsql-hackers@lists.postgresql.org\n> Envoyé: Jeudi 15 Août 2019 20:37:06\n> Objet: UNION ALL\n>\n>\n> Dear all,\n>\n>\n> I was wondering if someone could help me understands what a union all\n> actually does.\n>\n>\n> For my thesis I am using Apache Calcite to rewrite queries into using\n> materialized views which I then give to a Postgres database.\n> For some queries, this means that they will be rewritten in a UNION ALL\n> style query between an expression and a table scan of a materialized view.\n> However, contrary to what I expected, the UNION ALL query is actually a\n> lot slower.\n>\n>\n> As an example, say I have 2 tables: actor and movie. 
Furthermore, there is\n> also a foreign key index on movie to actor.\n> I also have a materialized view with the join of these 2 tables for all\n> movies <= 2015 called A.\n> Now, if I want to query all entries in the join between actor and movie, I\n> would assume that a UNION ALL between the join of actor and movie for\n> movies >2015 and A is faster than executing the original query..\n> If I look at the explain analyze part, I can certainly see a reduction in\n> cost up until the UNION ALL part, which carries a respective cost more than\n> negating the cost reduction up to a point where I might as well not use the\n> existing materialized view.\n>\n>\n> I have some trouble understanding this phenomenon.\n> One thought which came to my mind was that perhaps UNION ALL might create\n> a temporary table containing both result sets, and then do a table scan and\n> return that result.\n>\n> this would greatly increase IO cost which could attribute to the problem.\n> However, I am really not sure what UNION ALL actually does to append both\n> result sets so I was wondering if someone would be able to help me out with\n> this.\n>\n>\n>\n>\n> Mark\n>\n>\n> 066ce286@free.fr: Please, avoid top-posting. It makes harder to follow\nthe\ndiscussion.\n\n-- \nIbrar Ahmed\n\nOn Fri, Aug 16, 2019 at 12:16 AM <066ce286@free.fr> wrote:Generally speaking, when executing UNION ; a DISTINCT is run afterward on the resultset.\n\nSo, if you're sure that each part of UNION cannot return a line returned by another one, you may use UNION ALL, you'll cut the cost of the final implicit DISTINCT.\n\n\n----- Mail original -----\nDe: \"Mark Pasterkamp\" <markpasterkamp1994@gmail.com>\nÀ: pgsql-hackers@lists.postgresql.org\nEnvoyé: Jeudi 15 Août 2019 20:37:06\nObjet: UNION ALL\n\n\nDear all, \n\n\nI was wondering if someone could help me understands what a union all actually does. 
\n\n\nFor my thesis I am using Apache Calcite to rewrite queries into using materialized views which I then give to a Postgres database. \nFor some queries, this means that they will be rewritten in a UNION ALL style query between an expression and a table scan of a materialized view. \nHowever, contrary to what I expected, the UNION ALL query is actually a lot slower. \n\n\nAs an example, say I have 2 tables: actor and movie. Furthermore, there is also a foreign key index on movie to actor. \nI also have a materialized view with the join of these 2 tables for all movies <= 2015 called A. \nNow, if I want to query all entries in the join between actor and movie, I would assume that a UNION ALL between the join of actor and movie for movies >2015 and A is faster than executing the original query.. \nIf I look at the explain analyze part, I can certainly see a reduction in cost up until the UNION ALL part, which carries a respective cost more than negating the cost reduction up to a point where I might as well not use the existing materialized view. \n\n\nI have some trouble understanding this phenomenon. \nOne thought which came to my mind was that perhaps UNION ALL might create a temporary table containing both result sets, and then do a table scan and return that result. \n\nthis would greatly increase IO cost which could attribute to the problem. \nHowever, I am really not sure what UNION ALL actually does to append both result sets so I was wondering if someone would be able to help me out with this. \n\n\n\n\nMark\n\n\n066ce286@free.fr: Please, avoid top-posting. It makes harder to follow thediscussion.-- Ibrar Ahmed",
"msg_date": "Fri, 16 Aug 2019 00:21:17 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UNION ALL"
},
{
"msg_contents": "First of all, thank you for the replies.\n\nI am using a base installation of postgres 10.10, with no modifications to\nany of the system defaults.\n\nI am trying to speedup a join between two tables: the title table and the\ncast_info table.\n\nThe title table is a table containing information about different movies.\nit contains 4626969 records.\nthe table also has a foreign key index on the cast_info table, enabling the\nplanner to use a hash-join.\n\nThe cast_info table is a table containing the information of which actor\nwas casted in which movie and contains 62039343 records.\n\nThe database also contains a materialized view ci_t_15, defined as:\nselect * from cast_info join title on cast_info.movie_id = title.id\nwhere title.production_year < 2015\n\nI am comparing two queries, q1 and q2 respectively.\nQuery q1 is the original query and q2 is an attempt to reduce the cost of\nexecution via leveraging the materialized view ci_t_15.\n\nQuery q1 is defined as:\nselect * from cast_info join title on cast_info.movie_id = title.id\n\nQuery q2 is defined as\nselect * from cast_info join title on cast_info.movie_id = title.id\nwhere title.production_year >= 2015\nUNION ALL\nselect * from ci_t_15\n\nBoth queries are executed on a Dell xps laptop with an I7-8750H processor\nand 16 (2*8) gb ram on an SSD running on ubuntu 18.04.2 LTS.\n\nRunning explain analyze on both queries I get the following execution plans.\nq1:\n\"Hash Join (cost=199773.80..2561662.10 rows=62155656 width=103) (actual\ntime=855.063..25786.264 rows=62039343 loops=1)\"\n\" Hash Cond: (cast_info.ci_movie_id = title.t_id)\"\n\" -> Seq Scan on cast_info (cost=0.00..1056445.56 rows=62155656\nwidth=42) (actual time=0.027..3837.722 rows=62039343 loops=1)\"\n\" -> Hash (cost=92232.69..92232.69 rows=4626969 width=61) (actual\ntime=854.548..854.548 rows=4626969 loops=1)\"\n\" Buckets: 65536 Batches: 128 Memory Usage: 3431kB\"\n\" -> Seq Scan on title (cost=0.00..92232.69 rows=4626969 
width=61)\n(actual time=0.005..327.588 rows=4626969 loops=1)\"\n\"Planning time: 5.097 ms\"\n\"Execution time: 27236.088 ms\"\n\nq2:\n\"Append (cost=123209.65..3713445.65 rows=61473488 width=105) (actual\ntime=442.207..29713.621 rows=60918189 loops=1)\"\n\" -> Gather (cost=123209.65..2412792.77 rows=10639784 width=103) (actual\ntime=442.206..14634.427 rows=10046633 loops=1)\"\n\" Workers Planned: 2\"\n\" Workers Launched: 2\"\n\" -> Hash Join (cost=122209.65..1347814.37 rows=4433243 width=103)\n(actual time=471.969..12527.840 rows=3348878 loops=3)\"\n\" Hash Cond: (cast_info.ci_movie_id = title.t_id)\"\n\" -> Parallel Seq Scan on cast_info (cost=0.00..693870.90\nrows=25898190 width=42) (actual time=0.006..7302.679 rows=20679781 loops=3)\"\n\" -> Hash (cost=103800.11..103800.11 rows=792043 width=61)\n(actual time=471.351..471.351 rows=775098 loops=3)\"\n\" Buckets: 65536 Batches: 32 Memory Usage: 2515kB\"\n\" -> Seq Scan on title (cost=0.00..103800.11\nrows=792043 width=61) (actual time=0.009..376.127 rows=775098 loops=3)\"\n\" Filter: (t_production_year >= 2015)\"\n\" Rows Removed by Filter: 3851871\"\n\" -> Seq Scan on ci_t_15 (cost=0.00..1194255.04 rows=50833704 width=105)\n(actual time=1.143..11967.391 rows=50871556 loops=1)\"\n\"Planning time: 0.268 ms\"\n\"Execution time: 31379.854 ms\"\n\nDue to using the materialized view I can reduce the amount of records going\ninto the hash join, lowering the time from 25786.264 msec to 12527.840 msec.\nHowever, this is where my question comes in, this reduction is completely\nnegated by the cost of appending both results in the UNION ALL command.\nI was wondering if this is normal behaviour.\nIn my mind, I wouldn't expect appending 2 resultsets to have such a\nrelative huge cost associated with it.\nThis is also why I asked what exactly a UNION ALL does to achieve its\nfunctionality, to perhaps gain some insight in its cost.\n\n\nWith kind regards,\n\nMark\n\nOn Thu, 15 Aug 2019 at 21:22, Ibrar Ahmed 
<ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Fri, Aug 16, 2019 at 12:16 AM <066ce286@free.fr> wrote:\n>\n>> Generally speaking, when executing UNION ; a DISTINCT is run afterward on\n>> the resultset.\n>>\n>> So, if you're sure that each part of UNION cannot return a line returned\n>> by another one, you may use UNION ALL, you'll cut the cost of the final\n>> implicit DISTINCT.\n>>\n>>\n>> ----- Mail original -----\n>> De: \"Mark Pasterkamp\" <markpasterkamp1994@gmail.com>\n>> À: pgsql-hackers@lists.postgresql.org\n>> Envoyé: Jeudi 15 Août 2019 20:37:06\n>> Objet: UNION ALL\n>>\n>>\n>> Dear all,\n>>\n>>\n>> I was wondering if someone could help me understands what a union all\n>> actually does.\n>>\n>>\n>> For my thesis I am using Apache Calcite to rewrite queries into using\n>> materialized views which I then give to a Postgres database.\n>> For some queries, this means that they will be rewritten in a UNION ALL\n>> style query between an expression and a table scan of a materialized view.\n>> However, contrary to what I expected, the UNION ALL query is actually a\n>> lot slower.\n>>\n>>\n>> As an example, say I have 2 tables: actor and movie. 
Furthermore, there\n>> is also a foreign key index on movie to actor.\n>> I also have a materialized view with the join of these 2 tables for all\n>> movies <= 2015 called A.\n>> Now, if I want to query all entries in the join between actor and movie,\n>> I would assume that a UNION ALL between the join of actor and movie for\n>> movies >2015 and A is faster than executing the original query..\n>> If I look at the explain analyze part, I can certainly see a reduction in\n>> cost up until the UNION ALL part, which carries a respective cost more than\n>> negating the cost reduction up to a point where I might as well not use the\n>> existing materialized view.\n>>\n>>\n>> I have some trouble understanding this phenomenon.\n>> One thought which came to my mind was that perhaps UNION ALL might create\n>> a temporary table containing both result sets, and then do a table scan and\n>> return that result.\n>>\n>> this would greatly increase IO cost which could attribute to the problem.\n>> However, I am really not sure what UNION ALL actually does to append both\n>> result sets so I was wondering if someone would be able to help me out with\n>> this.\n>>\n>>\n>>\n>>\n>> Mark\n>>\n>>\n>> 066ce286@free.fr: Please, avoid top-posting. 
It makes harder to follow\n> the\n> discussion.\n>\n> --\n> Ibrar Ahmed\n>\n",
"msg_date": "Fri, 16 Aug 2019 10:28:36 +0200",
"msg_from": "Mark Pasterkamp <markpasterkamp1994@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: UNION ALL"
},
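To make the question above concrete: PostgreSQL executes UNION ALL with an Append plan node, which simply pulls rows from one child plan after another — it never builds a temporary table holding both result sets. The toy model below is illustrative only (the types and names are not the real executor API), a sketch of the control flow:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of an Append executor node: pull the next row from the
 * current child plan, and move on to the next child once one is
 * exhausted.  No temporary table holding both result sets is built.
 * Types and names here are illustrative, not the real executor API.
 */
typedef struct Child
{
    const int *rows;            /* this child's result set */
    size_t nrows;
    size_t pos;
} Child;

typedef struct AppendState
{
    Child *children;
    size_t nchildren;
    size_t cur;
} AppendState;

/* Return the next row, or -1 once every child is exhausted. */
int append_next(AppendState *st)
{
    while (st->cur < st->nchildren)
    {
        Child *c = &st->children[st->cur];

        if (c->pos < c->nrows)
            return c->rows[c->pos++];
        st->cur++;              /* this child is done; advance */
    }
    return -1;
}

/* Drain an Append over two small children; used as a self-check. */
int append_demo_sum(void)
{
    static const int a[] = {1, 2, 3};
    static const int b[] = {10, 20};
    Child kids[2] = {{a, 3, 0}, {b, 2, 0}};
    AppendState st = {kids, 2, 0};
    int sum = 0, r;

    while ((r = append_next(&st)) != -1)
        sum += r;
    return sum;
}
```

The per-row overhead of the Append itself is essentially one extra function call; in the plans quoted above, most of the added time comes from scanning the 50-million-row materialized view, not from "appending".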
{
"msg_contents": "Mark Pasterkamp <markpasterkamp1994@gmail.com> writes:\n> I am comparing two queries, q1 and q2 respectively.\n> Query q1 is the original query and q2 is an attempt to reduce the cost of\n> execution via leveraging the materialized view ci_t_15.\n> ...\n> Running explain analyze on both queries I get the following execution plans.\n\nHuh ... I wonder why the planner decided to try to parallelize Q2 (and not\nQ1)? That seems like a pretty stupid choice, because if I'm reading the\nplan correctly (I might not be) each worker process has to read all of\nthe \"title\" table and build its own copy of the hash table. That seems\nlikely to swamp whatever performance gain might come from parallelizing\nthe scan of cast_info --- which is likely to be not much, anyway, on a\nlaptop with probably-not-great disk I/O bandwidth.\n\nIn any case, whether that decision was good or bad, making it differently\nrenders the performance of Q1 and Q2 not very comparable. It'd be worth\ndisabling parallelism (SET max_parallel_workers_per_gather = 0) and\nretrying Q2 to get a more apples-to-apples comparison.\n\nAnother bit of advice is to increase work_mem, so the hashes don't\nhave to be split into quite so many batches.\n\nI'm noting also that your queries aren't giving the same results ---\nQ2 reports returning fewer rows overall. Do you have rows where\ntitle.production_year is null, perhaps?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Aug 2019 11:30:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: UNION ALL"
}
] |
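Tom's work_mem advice ties back to the "Batches: 128" and "Batches: 32" lines in the plans: when the inner side's hash table does not fit in work_mem, it is split into a power-of-two number of batches spilled to temp files. The sketch below illustrates only that growth rule; it is a simplification, not the real ExecChooseHashTableSize() logic (which also accounts for per-tuple and bucket overhead).

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified view of hash-join batching: keep doubling the batch count
 * until each batch's share of the inner relation fits in work_mem.
 * Illustrative only; the real planner formula lives in
 * ExecChooseHashTableSize() and is more elaborate.
 */
unsigned choose_nbatch(uint64_t inner_bytes, uint64_t work_mem_bytes)
{
    unsigned nbatch = 1;

    /* double until each batch's share of the inner side fits */
    while (inner_bytes > work_mem_bytes * nbatch)
        nbatch *= 2;
    return nbatch;
}
```

With the default 4MB work_mem and a hash table on title of very roughly 300MB, this lands on the 128 batches seen in the q1 plan; raising work_mem shrinks the batch count and the amount of temp-file traffic.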
[
{
"msg_contents": "It seems that sometimes when DELETE cascades to referencing tables we fail\nto acquire locks on replica identity index.\n\nTo reproduce, set wal_level to logical, and run 1.sql.\n\nI can look into this, but I thought first I should send it here in case\nsomeone who is more familiar with these related functions can solve it\nquickly.\n\nI get the following backtrace:\n\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n#1 0x00007f301154b801 in __GI_abort () at abort.c:79\n#2 0x000055df8858a923 in ExceptionalCondition (\n conditionName=conditionName@entry=0x55df885fd138\n\"!(CheckRelationLockedByMe(idx_rel, 1, 1))\",\nerrorType=errorType@entry=0x55df885de8fd\n\"FailedAssertion\",\n fileName=fileName@entry=0x55df885fca32 \"heapam.c\",\nlineNumber=lineNumber@entry=7646)\n at assert.c:54\n#3 0x000055df88165e53 in ExtractReplicaIdentity\n(relation=relation@entry=0x7f3012b54db0,\n\n tp=tp@entry=0x7ffcf47d53f0, key_changed=key_changed@entry=true,\n copy=copy@entry=0x7ffcf47d53d3) at heapam.c:7646\n#4 0x000055df8816c22b in heap_delete (relation=0x7f3012b54db0,\ntid=<optimized out>,\n cid=<optimized out>, crosscheck=0x0, wait=true, tmfd=0x7ffcf47d54b0,\nchangingPart=false)\n at heapam.c:2676\n#5 0x000055df88318b62 in table_tuple_delete (changingPart=false,\ntmfd=0x7ffcf47d54b0,\n wait=true, crosscheck=<optimized out>, snapshot=<optimized out>,\ncid=<optimized out>,\n tid=0x7ffcf47d558a, rel=0x7f3012b54db0) at\n../../../src/include/access/tableam.h:1216\n#6 ExecDelete (mtstate=mtstate@entry=0x55df8a8196a0,\ntupleid=0x7ffcf47d558a, oldtuple=0x0,\n planSlot=planSlot@entry=0x55df8a81a8e8,\nepqstate=epqstate@entry=0x55df8a819798,\n\n estate=estate@entry=0x55df8a819058, processReturning=true,\ncanSetTag=true,\n changingPart=false, tupleDeleted=0x0, epqreturnslot=0x0) at\nnodeModifyTable.c:769\n#7 0x000055df8831aa25 in ExecModifyTable (pstate=0x55df8a8196a0) at\nnodeModifyTable.c:2230\n#8 0x000055df882efa9a in ExecProcNode 
(node=0x55df8a8196a0)\n at ../../../src/include/executor/executor.h:239\n#9 ExecutePlan (execute_once=<optimized out>, dest=0x55df88a89a00\n<spi_printtupDR>,\n direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\n operation=CMD_DELETE, use_parallel_mode=<optimized out>,\nplanstate=0x55df8a8196a0,\n estate=0x55df8a819058) at execMain.c:1648\n#10 standard_ExecutorRun (queryDesc=0x55df8a7de4b0, direction=<optimized\nout>, count=0,\n execute_once=<optimized out>) at execMain.c:365\n#11 0x000055df8832b90c in _SPI_pquery (tcount=0, fire_triggers=false,\n queryDesc=0x55df8a7de4b0) at spi.c:2521\n#12 _SPI_execute_plan (plan=plan@entry=0x55df8a812828, paramLI=<optimized\nout>,\n snapshot=snapshot@entry=0x0,\ncrosscheck_snapshot=crosscheck_snapshot@entry=0x0,\n read_only=read_only@entry=false, fire_triggers=fire_triggers@entry=false,\n\n tcount=<optimized out>) at spi.c:2296\n#13 0x000055df8832c15c in SPI_execute_snapshot (plan=plan@entry=0x55df8a812828,\n\n Values=Values@entry=0x7ffcf47d5820, Nulls=Nulls@entry=0x7ffcf47d5a20 \"\n\",\n snapshot=snapshot@entry=0x0,\ncrosscheck_snapshot=crosscheck_snapshot@entry=0x0,\n read_only=read_only@entry=false, fire_triggers=false, tcount=0) at\nspi.c:616\n#14 0x000055df88522f32 in ri_PerformCheck (riinfo=riinfo@entry=0x55df8a7f8050,\n\n qkey=qkey@entry=0x7ffcf47d5b28, qplan=0x55df8a812828,\n fk_rel=fk_rel@entry=0x7f3012b54db0, pk_rel=pk_rel@entry=0x7f3012b44a28,\n oldslot=oldslot@entry=0x55df8a826f88, newslot=0x0, detectNewRows=true,\nexpect_OK=8)\n at ri_triggers.c:2276\n#15 0x000055df88524653 in RI_FKey_cascade_del (fcinfo=<optimized out>) at\nri_triggers.c:819\n#16 0x000055df882c9996 in ExecCallTriggerFunc\n(trigdata=trigdata@entry=0x7ffcf47d5ff0,\n\n tgindx=tgindx@entry=0, finfo=finfo@entry=0x55df8a825710,\ninstr=instr@entry=0x0,\n per_tuple_context=per_tuple_context@entry=0x55df8a812f10) at\ntrigger.c:2432\n#17 0x000055df882cb459 in AfterTriggerExecute (trigdesc=0x55df8a825530,\n trigdesc=0x55df8a825530, 
trig_tuple_slot2=0x0, trig_tuple_slot1=0x0,\n per_tuple_context=0x55df8a812f10, instr=0x0, finfo=0x55df8a825710,\n relInfo=0x55df8a825418, event=0x55df8a81f0a8, estate=0x55df8a825188) at\ntrigger.c:4342\n#18 afterTriggerInvokeEvents (events=events@entry=0x55df8a7c3e40,\nfiring_id=1,\n estate=estate@entry=0x55df8a825188, delete_ok=delete_ok@entry=false) at\ntrigger.c:4539\n#19 0x000055df882d1408 in AfterTriggerEndQuery (estate=estate@entry\n=0x55df8a825188)\n at trigger.c:4850\n#20 0x000055df882efd99 in standard_ExecutorFinish (queryDesc=0x55df8a722ab8)\n at execMain.c:440\n#21 0x000055df88464bdd in ProcessQuery (plan=<optimized out>,\n sourceText=0x55df8a702f78 \"DELETE FROM t1 RETURNING id;\", params=0x0,\nqueryEnv=0x0,\n dest=0x55df8a722a20, completionTag=0x7ffcf47d6180 \"DELETE 11\") at\npquery.c:203\n#22 0x000055df88464e0b in PortalRunMulti (portal=portal@entry=0x55df8a7692f8,\n\n isTopLevel=isTopLevel@entry=true,\nsetHoldSnapshot=setHoldSnapshot@entry=true,\n\n dest=dest@entry=0x55df8a722a20, altdest=0x55df88a81040 <donothingDR>,\n completionTag=completionTag@entry=0x7ffcf47d6180 \"DELETE 11\") at\npquery.c:1283\n#23 0x000055df88465119 in FillPortalStore (portal=portal@entry=0x55df8a7692f8,\n\n isTopLevel=isTopLevel@entry=true) at pquery.c:1030\n#24 0x000055df88465d1d in PortalRun (portal=portal@entry=0x55df8a7692f8,\n count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n\n run_once=run_once@entry=true, dest=dest@entry=0x55df8a7ddb08,\n altdest=altdest@entry=0x55df8a7ddb08, completionTag=0x7ffcf47d63b0 \"\")\nat pquery.c:765\n#25 0x000055df88461512 in exec_simple_query (\n query_string=0x55df8a702f78 \"DELETE FROM t1 RETURNING id;\") at\npostgres.c:1215\n#26 0x000055df8846344e in PostgresMain (argc=<optimized out>,\n argv=argv@entry=0x55df8a72d4b0, dbname=<optimized out>,\nusername=<optimized out>)\n at postgres.c:4236\n#27 0x000055df8811906d in BackendRun (port=0x55df8a7234d0,\nport=0x55df8a7234d0)\n at postmaster.c:4431\n#28 
BackendStartup (port=0x55df8a7234d0) at postmaster.c:4122\n#29 ServerLoop () at postmaster.c:1704\n#30 0x000055df883dc53e in PostmasterMain (argc=3, argv=0x55df8a6fb7c0) at\npostmaster.c:1377\n#31 0x000055df8811ad5f in main (argc=3, argv=0x55df8a6fb7c0) at main.c:228",
"msg_date": "Fri, 16 Aug 2019 09:44:15 -0700",
"msg_from": "Hadi Moshayedi <hadi@moshayedi.net>",
"msg_from_op": true,
"msg_subject": "REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
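For readers unfamiliar with the assertion in frame #2: CheckRelationLockedByMe(idx_rel, RowExclusiveLock, true) checks that the current backend already holds at least the given lock mode on the index (the final "true" means "this mode or stronger") before the replica identity is extracted from it. Below is a toy per-backend lock table with the same flavor of check; it is only a sketch — the real code lives in storage/lmgr and keys on lock tags, not bare relation ids.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy per-backend lock table illustrating the shape of the
 * CheckRelationLockedByMe() assertion.  Illustrative only.
 */
#define MAX_LOCKS 16

typedef struct LocalLocks
{
    unsigned relids[MAX_LOCKS];
    int modes[MAX_LOCKS];       /* higher number = stronger mode */
    int count;
} LocalLocks;

void lock_relation(LocalLocks *ll, unsigned relid, int mode)
{
    ll->relids[ll->count] = relid;
    ll->modes[ll->count] = mode;
    ll->count++;
}

/* true iff we hold a lock on relid of at least the given strength */
bool locked_by_me(const LocalLocks *ll, unsigned relid, int mode)
{
    for (int i = 0; i < ll->count; i++)
    {
        if (ll->relids[i] == relid && ll->modes[i] >= mode)
            return true;
    }
    return false;
}

/* Self-check: the table was locked but its index was not, which is
 * exactly the situation the backtrace's assertion complains about. */
int lock_demo_ok(void)
{
    LocalLocks ll = {{0}, {0}, 0};

    lock_relation(&ll, 1, 3);           /* the table itself is locked */
    return locked_by_me(&ll, 1, 3)      /* table: ok                  */
        && !locked_by_me(&ll, 2, 3);    /* its index: check fails     */
}
```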
{
"msg_contents": "Hi,\n\nOn 2019-08-16 09:44:15 -0700, Hadi Moshayedi wrote:\n> It seems that sometimes when DELETE cascades to referencing tables we fail\n> to acquire locks on replica identity index.\n> \n> To reproduce, set wal_level to logical, and run 1.sql.\n> \n> I can look into this, but I thought first I should send it here in case\n> someone who is more familiar with these related functions can solve it\n> quickly.\n\nI suspect this \"always\" has been broken, it's just that we previously\ndidn't have checks in place that detect these cases. I don't think it's\nlikely to cause actual harm, due to the locking on the table itself when\ndropping indexes etc. But we still should fix it.\n\nThe relevant code is:\n\n\t\t/*\n\t\t * If there are indices on the result relation, open them and save\n\t\t * descriptors in the result relation info, so that we can add new\n\t\t * index entries for the tuples we add/update. We need not do this\n\t\t * for a DELETE, however, since deletion doesn't affect indexes. Also,\n\t\t * inside an EvalPlanQual operation, the indexes might be open\n\t\t * already, since we share the resultrel state with the original\n\t\t * query.\n\t\t */\n\t\tif (resultRelInfo->ri_RelationDesc->rd_rel->relhasindex &&\n\t\t\toperation != CMD_DELETE &&\n\t\t\tresultRelInfo->ri_IndexRelationDescs == NULL)\n\t\t\tExecOpenIndices(resultRelInfo,\n\t\t\t\t\t\t\tnode->onConflictAction != ONCONFLICT_NONE);\n\n\nI'm not quite sure what the best way to fix this would be however. It\nseems like a bad idea to make locking dependent on wal_level, but I'm\nalso not sure we want to incur the price of locking one more table to\nevery delete on a table with a primary key?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Aug 2019 11:00:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-08-16 09:44:15 -0700, Hadi Moshayedi wrote:\n>> It seems that sometimes when DELETE cascades to referencing tables we fail\n>> to acquire locks on replica identity index.\n\n> I suspect this \"always\" has been broken, it's just that we previously\n> didn't have checks in place that detect these cases. I don't think it's\n> likely to cause actual harm, due to the locking on the table itself when\n> dropping indexes etc. But we still should fix it.\n\nYeah ... see the discussion leading up to 9c703c169,\n\nhttps://www.postgresql.org/message-id/flat/19465.1541636036%40sss.pgh.pa.us\n\nWe didn't pull the trigger on removing the CMD_DELETE exception here,\nbut I think the handwriting has been on the wall for some time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Aug 2019 01:43:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-17 01:43:45 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-08-16 09:44:15 -0700, Hadi Moshayedi wrote:\n> >> It seems that sometimes when DELETE cascades to referencing tables we fail\n> >> to acquire locks on replica identity index.\n>\n> > I suspect this \"always\" has been broken, it's just that we previously\n> > didn't have checks in place that detect these cases. I don't think it's\n> > likely to cause actual harm, due to the locking on the table itself when\n> > dropping indexes etc. But we still should fix it.\n>\n> Yeah ... see the discussion leading up to 9c703c169,\n>\n> https://www.postgresql.org/message-id/flat/19465.1541636036%40sss.pgh.pa.us\n>\n> We didn't pull the trigger on removing the CMD_DELETE exception here,\n> but I think the handwriting has been on the wall for some time.\n\nISTM there's a few different options here:\n\n1a) We build all index infos, unconditionally. As argued in the thread\n you reference, future tableams may eventually require that anyway,\n by doing more proactive index maintenance somehow. Currently there's\n however no support for such AMs via tableam (mostly because I wasn't\n sure how exactly that'd look, and none of the already in-development\n AMs needed it).\n\n2a) We separate acquisition of index locks from ExecOpenIndices(), and\n acquire index locks even for CMD_DELETE. Do so either during\n executor startup, or as part of AcquireExecutorLocks() (the latter\n on the basis that parsing/planning would have required the locks\n already).\n\nThere's also corresponding *b) options, where we only do additional work\nfor CMD_DELETE if wal_level = logical, and the table has a replica\nidentity requiring use of the index during deleteions. But I think\nthat's clearly enough a bad idea that we can just dismiss it out of\nhand.\n\n\n3) Remove the CheckRelationLockedByMe() assert from\n ExtractReplicaIdentity(), at least for 12. 
I don't think this is an\n all that convincing option, but it'd reduce churn relatively late in\n beta.\n\n4) Add an index_open(RowExclusiveLock) to ExtractReplicaIdentity(). That\n seems very unconvincing to me, because we'd do so for every row.\n\n\nI think there's some appeal in going towards 2), because batching lock\nacquisition into a more central place has the chance to yield some\nspeedups on its own, but more importantly would allow for batched\noperations one day. Centralizing lock acquisition also seems like it\nmight make things easier to understand than today, where a lot of\ndifferent parts of the system acquire the locks, even just for\nexecution. But it also seems likely to be too invasive for 12 - making\nme think that 1a) is the way to go for now.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Aug 2019 16:06:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-08-17 01:43:45 -0400, Tom Lane wrote:\n>> Yeah ... see the discussion leading up to 9c703c169,\n>> https://www.postgresql.org/message-id/flat/19465.1541636036%40sss.pgh.pa.us\n>> We didn't pull the trigger on removing the CMD_DELETE exception here,\n>> but I think the handwriting has been on the wall for some time.\n\n> ISTM there's a few different options here:\n\n> 1a) We build all index infos, unconditionally. As argued in the thread\n> you reference, future tableams may eventually require that anyway,\n> by doing more proactive index maintenance somehow. Currently there's\n> however no support for such AMs via tableam (mostly because I wasn't\n> sure how exactly that'd look, and none of the already in-development\n> AMs needed it).\n> 2a) We separate acquisition of index locks from ExecOpenIndices(), and\n> acquire index locks even for CMD_DELETE. Do so either during\n> executor startup, or as part of AcquireExecutorLocks() (the latter\n> on the basis that parsing/planning would have required the locks\n> already).\n> There's also corresponding *b) options, where we only do additional work\n> for CMD_DELETE if wal_level = logical, and the table has a replica\n> identity requiring use of the index during deleteions. But I think\n> that's clearly enough a bad idea that we can just dismiss it out of\n> hand.\n> 3) Remove the CheckRelationLockedByMe() assert from\n> ExtractReplicaIdentity(), at least for 12. I don't think this is an\n> all that convicing option, but it'd reduce churn relatively late in\n> beta.\n> 4) Add a index_open(RowExclusiveLock) to ExtractReplicaIdentity(). That\n> seems very unconvincing to me, because we'd do so for every row.\n\nAs far as 4) goes, I think the code in ExtractReplicaIdentity is pretty\nduff anyway, because it doesn't bother to check for the defined failure\nreturn for RelationIdGetRelation. 
But if we're concerned about the\ncost of recalculating this stuff per-row, couldn't we cache it a little\nbetter? It should be safe to assume the set of index columns isn't\nchanging intra-query.\n\n... in fact, isn't all the infrastructure for that present already?\nWhy is this code looking directly at the index at all, rather than\nusing the relcache's rd_idattr bitmap?\n\nI suspect we'll have to do 1a) eventually anyway, but this particular\nproblem seems like it has a better solution. Will try to produce a\npatch in a bit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Sep 2019 16:50:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
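The caching idea Tom describes amounts to: derive the set of replica identity key columns once per query (the relcache already maintains this as the rd_idattr bitmapset) and consult a cheap bitmap per row, instead of re-opening the index for every deleted row. A stand-in sketch — a uint64_t plays the role of PostgreSQL's Bitmapset here, so this toy caps out at 64 columns and none of the names are the real API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Compute the key-column set once per query, then do per-row checks as
 * bit tests.  A uint64_t stands in for PostgreSQL's Bitmapset, so this
 * toy version is limited to 64 columns.
 */
typedef uint64_t ColBitmap;     /* bit i set => column i is in the key */

ColBitmap build_key_bitmap(const int *keycols, int nkeys)
{
    ColBitmap bm = 0;

    for (int i = 0; i < nkeys; i++)
        bm |= (ColBitmap) 1 << keycols[i];
    return bm;
}

/* per-row check: cheap bit test, no index access needed */
bool col_in_key(ColBitmap bm, int col)
{
    return (bm >> col) & 1;
}
```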
{
"msg_contents": "I wrote:\n> As far as 4) goes, I think the code in ExtractReplicaIdentity is pretty\n> duff anyway, because it doesn't bother to check for the defined failure\n> return for RelationIdGetRelation. But if we're concerned about the\n> cost of recalculating this stuff per-row, couldn't we cache it a little\n> better? It should be safe to assume the set of index columns isn't\n> changing intra-query.\n> ... in fact, isn't all the infrastructure for that present already?\n> Why is this code looking directly at the index at all, rather than\n> using the relcache's rd_idattr bitmap?\n\nHere's a proposed patch along those lines. It fixes Hadi's original\ncrash case and passes check-world.\n\nI'm a bit suspicious of the exclusion for idattrs being empty, but\nif I remove that, some of the contrib/test_decoding test results\nchange. Anybody want to comment on that? If that's actually an\nexpected situation, why is there an elog(DEBUG) in that path?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 01 Sep 2019 17:31:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "On Mon, Sep 2, 2019 at 6:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > As far as 4) goes, I think the code in ExtractReplicaIdentity is pretty\n> > duff anyway, because it doesn't bother to check for the defined failure\n> > return for RelationIdGetRelation. But if we're concerned about the\n> > cost of recalculating this stuff per-row, couldn't we cache it a little\n> > better? It should be safe to assume the set of index columns isn't\n> > changing intra-query.\n> > ... in fact, isn't all the infrastructure for that present already?\n> > Why is this code looking directly at the index at all, rather than\n> > using the relcache's rd_idattr bitmap?\n>\n> Here's a proposed patch along those lines. It fixes Hadi's original\n> crash case and passes check-world.\n\nAgree that this patch would be a better solution for Hadi's report,\nalthough I also agree that the situation with index locking for DELETE\nisn't perfect.\n\n> I'm a bit suspicious of the exclusion for idattrs being empty, but\n> if I remove that, some of the contrib/test_decoding test results\n> change. Anybody want to comment on that? If that's actually an\n> expected situation, why is there an elog(DEBUG) in that path?\n\nISTM that the exclusion case may occur with the table's replica\nidentity being REPLICA_IDENTITY_DEFAULT and there being no primary\nindex defined, in which case nothing needs to get logged.\n\nThe elog(DEBUG) may just be a remnant from the days when this was\nbeing developed. I couldn't find any notes on it though in the\narchives [1] though.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/flat/20131204155510.GO24801%40awork2.anarazel.de\n\n\n",
"msg_date": "Mon, 2 Sep 2019 13:50:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Sep 2, 2019 at 6:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a proposed patch along those lines. It fixes Hadi's original\n>> crash case and passes check-world.\n\n> Agree that this patch would be a better solution for Hadi's report,\n> although I also agree that the situation with index locking for DELETE\n> isn't perfect.\n\nThanks for checking!\n\n>> I'm a bit suspicious of the exclusion for idattrs being empty, but\n>> if I remove that, some of the contrib/test_decoding test results\n>> change. Anybody want to comment on that? If that's actually an\n>> expected situation, why is there an elog(DEBUG) in that path?\n\n> ISTM that the exclusion case may occur with the table's replica\n> identity being REPLICA_IDENTITY_DEFAULT and there being no primary\n> index defined, in which case nothing needs to get logged.\n\nLooking more closely, the case is unreachable in the heap_update\npath because key_changed will necessarily be false if the idattrs\nset is empty. But it is reachable in heap_delete because that\njust passes key_changed = constant true, whether or not there's\nany defined replica identity. In view of that, I think\nwe should just remove the elog(DEBUG) ... and maybe add a comment\nexplaining this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Sep 2019 14:19:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-01 16:50:00 -0400, Tom Lane wrote:\n> As far as 4) goes, I think the code in ExtractReplicaIdentity is pretty\n> duff anyway, because it doesn't bother to check for the defined failure\n> return for RelationIdGetRelation. But if we're concerned about the\n> cost of recalculating this stuff per-row, couldn't we cache it a little\n> better? It should be safe to assume the set of index columns isn't\n> changing intra-query.\n\nI agree that it ought to be more efficent - but also about as equally\nsafe? I.e. if the previous code wasn't safe, the new code wouldn't be\nsafe either? As in, we're \"just\" avoiding the assert, but not increasing\nsafety?\n\n\n> ... in fact, isn't all the infrastructure for that present already?\n> Why is this code looking directly at the index at all, rather than\n> using the relcache's rd_idattr bitmap?\n\nNo idea, that's too long ago :(\n\n\n> I'm a bit suspicious of the exclusion for idattrs being empty, but\n> if I remove that, some of the contrib/test_decoding test results\n> change. Anybody want to comment on that? If that's actually an\n> expected situation, why is there an elog(DEBUG) in that path?\n\nI think Amit's explanation here is probably accurate.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Sep 2019 13:39:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I agree that it ought to be more efficent - but also about as equally\n> safe? I.e. if the previous code wasn't safe, the new code wouldn't be\n> safe either? As in, we're \"just\" avoiding the assert, but not increasing\n> safety?\n\nWell, the point is that the old code risks performing a relcache load\nwithout holding any lock on the relation. In practice, given that\nwe do hold a lock on the parent table, it's probably safe ... but\nit's at best bad practice. It's not too hard to imagine future\noptimizations that would allow this to result in a corrupt relcache entry.\n\nI don't believe that there's any equivalent risk in the modified code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Sep 2019 16:49:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: REL_12_STABLE crashing with assertion failure in\n ExtractReplicaIdentity"
}
] |
[
{
"msg_contents": "Somebody ran into issues when generating large XML output (upwards of\n256 MB) and then sending via a connection with a different\nclient_encoding. This occurs because we pessimistically allocate 4x as\nmuch memory as the string needs, and we run into the 1GB palloc\nlimitation. ISTM we can do better now by using huge allocations, as per\nthe preliminary attached patch (which probably needs an updated overflow\ncheck rather than have it removed altogether); but at least it is able\nto process this query, which it wasn't without the patch:\n\nselect query_to_xml(\n 'select a, cash_words(a::text::money) from generate_series(0, 2000000) a',\n true, false, '');\n\n-- \n�lvaro Herrera",
"msg_date": "Fri, 16 Aug 2019 14:14:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "allocation limit for encoding conversion"
},
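The failure Álvaro describes, in numbers: pg_do_encoding_conversion() allocates len * MAX_CONVERSION_GROWTH + 1 bytes up front (MAX_CONVERSION_GROWTH is 4 in pg_wchar.h), and a plain palloc() refuses requests above MaxAllocSize (1GB - 1). So an input a shade over 256MB already fails, even though the converted result is rarely anywhere near 4x larger. The check below is a simplified sketch of that arithmetic, not the actual backend code path:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Would the pessimistic worst-case allocation for an encoding
 * conversion still fit under palloc()'s limit?  Simplified sketch of
 * the pre-patch arithmetic.
 */
#define MAX_CONVERSION_GROWTH 4
#define MAX_ALLOC_SIZE ((uint64_t) 0x3fffffff)  /* PostgreSQL's MaxAllocSize */

bool conversion_alloc_fits(uint64_t src_len)
{
    return src_len * MAX_CONVERSION_GROWTH + 1 <= MAX_ALLOC_SIZE;
}
```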
{
"msg_contents": "On 2019-Aug-16, Alvaro Herrera wrote:\n\n> Somebody ran into issues when generating large XML output (upwards of\n> 256 MB) and then sending via a connection with a different\n> client_encoding.\n\nref: https://postgr.es/m/43a889a1-45fb-1d60-31ae-21e607307492@gmail.com\n(pgsql-es-ayuda)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 16 Aug 2019 14:35:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: allocation limit for encoding conversion"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Somebody ran into issues when generating large XML output (upwards of\n> 256 MB) and then sending via a connection with a different\n> client_encoding. This occurs because we pessimistically allocate 4x as\n> much memory as the string needs, and we run into the 1GB palloc\n> limitation. ISTM we can do better now by using huge allocations, as per\n> the preliminary attached patch (which probably needs an updated overflow\n> check rather than have it removed altogether); but at least it is able\n> to process this query, which it wasn't without the patch:\n\n> select query_to_xml(\n> 'select a, cash_words(a::text::money) from generate_series(0, 2000000) a',\n> true, false, '');\n\nI fear that allowing pg_do_encoding_conversion to return strings longer\nthan 1GB is just going to create failure cases somewhere else.\n\nHowever, it's certainly true that 4x growth is a pretty unlikely worst\ncase. Maybe we could do something like\n\n1. If string is short (say up to a few megabytes), continue to do it\nlike now. This avoids adding overhead for typical cases.\n\n2. Otherwise, run some lobotomized form of encoding conversion that\njust computes the space required (as an int64, I guess) without saving\nthe result anywhere.\n\n3. If space required > 1GB, fail.\n\n4. Otherwise, allocate just the space required, and convert.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Aug 2019 17:31:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
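Tom's two-pass scheme can be sketched with a toy "conversion" in which the byte 'x' expands to two bytes. Pass 1 runs the same loop with a NULL destination just to measure the output; only then is the exact amount of memory allocated and the real conversion run. This also makes visible the cost he later worries about: the inner loop gains an "if (dest)" test per output byte. Everything below is illustrative, not the real conversion API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy conversion: 'x' expands to two bytes.  With dest == NULL the
 * loop only counts output bytes (pass 1); otherwise it writes them.
 */
static size_t toy_convert(const char *src, size_t len, char *dest)
{
    size_t out = 0;

    for (size_t i = 0; i < len; i++)
    {
        size_t copies = (src[i] == 'x') ? 2 : 1;    /* 'x' doubles */

        for (size_t c = 0; c < copies; c++)
        {
            if (dest)
                dest[out] = src[i];
            out++;
        }
    }
    return out;
}

/* Measure, allocate exactly, then convert; *out_len gets the size. */
char *convert_exact(const char *src, size_t len, size_t *out_len)
{
    size_t need = toy_convert(src, len, NULL);  /* pass 1: measure  */
    char *dest = malloc(need + 1);              /* exact allocation */

    if (dest)
    {
        toy_convert(src, len, dest);            /* pass 2: convert  */
        dest[need] = '\0';
    }
    *out_len = need;
    return dest;
}

/* Small self-check used below. */
int convert_demo_ok(void)
{
    size_t n;
    char *r = convert_exact("axb", 3, &n);
    int ok = (r != NULL && n == 4 && strcmp(r, "axxb") == 0);

    free(r);
    return ok;
}
```

A real implementation would have to thread this through every encoding conversion function, which is the API-change objection raised later in the thread.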
{
"msg_contents": "Hi,\n\nOn 2019-08-16 17:31:49 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Somebody ran into issues when generating large XML output (upwards of\n> > 256 MB) and then sending via a connection with a different\n> > client_encoding. This occurs because we pessimistically allocate 4x as\n> > much memory as the string needs, and we run into the 1GB palloc\n> > limitation. ISTM we can do better now by using huge allocations, as per\n> > the preliminary attached patch (which probably needs an updated overflow\n> > check rather than have it removed altogether); but at least it is able\n> > to process this query, which it wasn't without the patch:\n> \n> > select query_to_xml(\n> > 'select a, cash_words(a::text::money) from generate_series(0, 2000000) a',\n> > true, false, '');\n> \n> I fear that allowing pg_do_encoding_conversion to return strings longer\n> than 1GB is just going to create failure cases somewhere else.\n> \n> However, it's certainly true that 4x growth is a pretty unlikely worst\n> case. Maybe we could do something like\n> \n> 1. If string is short (say up to a few megabytes), continue to do it\n> like now. This avoids adding overhead for typical cases.\n> \n> 2. Otherwise, run some lobotomized form of encoding conversion that\n> just computes the space required (as an int64, I guess) without saving\n> the result anywhere.\n> \n> 3. If space required > 1GB, fail.\n> \n> 4. Otherwise, allocate just the space required, and convert.\n\nIt's probably too big a hammer for this specific case, but I think at\nsome point we ought to stop using fixed size allocations for this kind\nof work. Instead we should use something roughly like our StringInfo,\nexcept that when exceeding the current size limit, the overflowing data\nis stored in a separate allocation. 
And only once we actually need the\ndata in a consecutive form, we allocate memory that's large enough to\nstore all the separate allocations in their entirety.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Aug 2019 15:04:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
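Andres's chunked-buffer idea in sketch form: appended data goes into fixed-size chunks allocated independently, so no single allocation has to cover the whole (possibly multi-GB) string until the caller actually needs it in consecutive form. PostgreSQL's real StringInfo instead repallocs one buffer, which is where the 1GB limit bites. This is a minimal illustration, not a proposed API; error handling and freeing of chunks are omitted, and CHUNK_SIZE is tiny only so the demo crosses chunk boundaries:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE 8            /* tiny on purpose, for the demo */

typedef struct Chunk
{
    struct Chunk *next;
    size_t used;
    char data[CHUNK_SIZE];
} Chunk;

typedef struct ChunkedBuf
{
    Chunk *head;
    Chunk *tail;
    size_t total;
} ChunkedBuf;

/* Append len bytes, spilling into new chunks as needed. */
void cb_append(ChunkedBuf *cb, const char *s, size_t len)
{
    while (len > 0)
    {
        if (cb->tail == NULL || cb->tail->used == CHUNK_SIZE)
        {
            Chunk *c = calloc(1, sizeof(Chunk));

            if (cb->tail)
                cb->tail->next = c;
            else
                cb->head = c;
            cb->tail = c;
        }

        size_t n = CHUNK_SIZE - cb->tail->used;

        if (n > len)
            n = len;
        memcpy(cb->tail->data + cb->tail->used, s, n);
        cb->tail->used += n;
        cb->total += n;
        s += n;
        len -= n;
    }
}

/* Materialize the chunks as one consecutive NUL-terminated string. */
char *cb_flatten(const ChunkedBuf *cb)
{
    char *out = malloc(cb->total + 1);
    char *p = out;

    for (Chunk *c = cb->head; c; c = c->next)
    {
        memcpy(p, c->data, c->used);
        p += c->used;
    }
    *p = '\0';
    return out;
}

/* Self-check: two appends spanning several chunks round-trip. */
int chunked_demo_ok(void)
{
    ChunkedBuf cb = {NULL, NULL, 0};
    char *s;
    int ok;

    cb_append(&cb, "hello, ", 7);
    cb_append(&cb, "chunked world", 13);
    s = cb_flatten(&cb);
    ok = (strcmp(s, "hello, chunked world") == 0);
    free(s);
    return ok;
}
```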
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-08-16 17:31:49 -0400, Tom Lane wrote:\n>> I fear that allowing pg_do_encoding_conversion to return strings longer\n>> than 1GB is just going to create failure cases somewhere else.\n>> \n>> However, it's certainly true that 4x growth is a pretty unlikely worst\n>> case. Maybe we could do something like\n>> 1. If string is short (say up to a few megabytes), continue to do it\n>> like now. This avoids adding overhead for typical cases.\n>> 2. Otherwise, run some lobotomized form of encoding conversion that\n>> just computes the space required (as an int64, I guess) without saving\n>> the result anywhere.\n>> 3. If space required > 1GB, fail.\n>> 4. Otherwise, allocate just the space required, and convert.\n\n> It's probably too big a hammer for this specific case, but I think at\n> some point we ought to stop using fixed size allocations for this kind\n> of work. Instead we should use something roughly like our StringInfo,\n> except that when exceeding the current size limit, the overflowing data\n> is stored in a separate allocation. And only once we actually need the\n> data in a consecutive form, we allocate memory that's large enough to\n> store the all the separate allocations in their entirety.\n\nThat sounds pretty messy :-(.\n\nI spent some time looking at what I proposed above, and concluded that\nit's probably impractical. In the first place, we'd have to change\nthe API spec for encoding conversion functions. Now maybe that would\nnot be a huge deal, because there likely aren't very many people outside\nthe core code who are defining their own conversion functions, but it's\nstill a negative. 
More importantly, unless we wanted to duplicate\nlarge swaths of code, we'd end up inserting changes about like this\ninto the inner loops of encoding conversions:\n\n-\t\t*dest++ = code;\n+\t\tif (dest)\n+\t\t\t*dest++ = code;\n+\t\toutcount++;\n\nwhich seems like it'd be bad for performance.\n\nSo I now think that Alvaro's got basically the right idea, except\nthat I'm still afraid to allow strings larger than MaxAllocSize\nto run around loose in the backend. So in addition to the change\nhe suggested, we need a final check on strlen(result) not being\ntoo large. We can avoid doing a useless strlen() if the input len\nis small, though.\n\nIt then occurred to me that we could also repalloc the output buffer\ndown to just the required size, which is pointless if it's small\nbut not if we can give back several hundred MB. This is conveniently\nmergeable with the check to see whether we need to check strlen or not.\n\n... or at least, that's what I thought we could do. Testing showed\nme that AllocSetRealloc never actually gives back any space, even\nwhen it's just acting as a frontend for a direct malloc. However,\nwe can fix that for little more than the price of swapping the order\nof the is-it-a-decrease and is-it-a-large-chunk stanzas, as in the\n0002 patch below.\n\nI also put back the missing overflow check --- although that's unreachable\nin a 64-bit machine, it's not at all in 32-bit. The patch is still\nuseful in 32-bit though, since it still doubles the size of string\nwe can cope with.\n\nI think this is committable, though surely another pair of eyeballs\non it wouldn't hurt. Also, is it worth having a different error\nmessage for the case where the output does exceed MaxAllocSize?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 24 Sep 2019 16:19:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
{
"msg_contents": "On 2019-09-24 16:19:41 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-08-16 17:31:49 -0400, Tom Lane wrote:\n> >> I fear that allowing pg_do_encoding_conversion to return strings longer\n> >> than 1GB is just going to create failure cases somewhere else.\n> >>\n> >> However, it's certainly true that 4x growth is a pretty unlikely worst\n> >> case. Maybe we could do something like\n> >> 1. If string is short (say up to a few megabytes), continue to do it\n> >> like now. This avoids adding overhead for typical cases.\n> >> 2. Otherwise, run some lobotomized form of encoding conversion that\n> >> just computes the space required (as an int64, I guess) without saving\n> >> the result anywhere.\n> >> 3. If space required > 1GB, fail.\n> >> 4. Otherwise, allocate just the space required, and convert.\n>\n> > It's probably too big a hammer for this specific case, but I think at\n> > some point we ought to stop using fixed size allocations for this kind\n> > of work. Instead we should use something roughly like our StringInfo,\n> > except that when exceeding the current size limit, the overflowing data\n> > is stored in a separate allocation. And only once we actually need the\n> > data in a consecutive form, we allocate memory that's large enough to\n> > store the all the separate allocations in their entirety.\n>\n> That sounds pretty messy :-(.\n\nI don't think it's too bad - except for now allowing the .data member of\nsuch a 'chunked' StringInfo to be directly accessible, it'd be just\nabout the same interface as the current stringinfo. Combined perhaps\nwith a helper or two for \"de-chunking\" directly into another stringinfo,\nnetwork buffer etc, to avoid the unnecessary allocation of a buffer of\nthe overall size when the result is just going to be memcpy'd elsewhere.\n\nObviously a good first step would just to pass a StringInfo to the\nencoding routines. 
That'd solve the need for pessimistic overallocation,\nbecause the buffer can be enlarged. And by sizing the initial allocation\nto the input string (or at least only a small factor above), we'd not\nusually need (many) reallocations.\n\nThat'd also remove the need for unnecessary strlen/memcpy done in many\nencoding conversion callsites, like e.g.:\n\n\tp = pg_server_to_client(str, slen);\n\tif (p != str)\t\t\t\t/* actual conversion has been done? */\n\t{\n\t\tslen = strlen(p);\n\t\tappendBinaryStringInfo(buf, p, slen);\n\t\tpfree(p);\n\t}\n\nwhich do show up in profiles.\n\n\n> I spent some time looking at what I proposed above, and concluded that\n> it's probably impractical. In the first place, we'd have to change\n> the API spec for encoding conversion functions. Now maybe that would\n> not be a huge deal, because there likely aren't very many people outside\n> the core code who are defining their own conversion functions, but it's\n> still a negative. More importantly, unless we wanted to duplicate\n> large swaths of code, we'd end up inserting changes about like this\n> into the inner loops of encoding conversions:\n>\n> -\t\t*dest++ = code;\n> +\t\tif (dest)\n> +\t\t\t*dest++ = code;\n> +\t\toutcount++;\n> which seems like it'd be bad for performance.\n\nOne thing this made me wonder is if we shouldn't check the size of the\noutput string explicitly, rather than relying on overallocation. The\nonly reason we need an allocation bigger than MaxAllocSize here is that\nwe don't pass the output buffer size to the conversion routines. 
If\nthose routines instead checked whether the output buffer size is\nexceeded, and returned with the number of converted input bytes *and*\nthe position in the output buffer, we wouldn't have to overallocate\nquite so much.\n\nBut I suspect using a StringInfo, as suggested above, would be better.\n\nTo avoid making the tight innermost loop more expensive, I'd perhaps\ncode it roughly like:\n\n\n\n /*\n * Initially size output buffer to the likely required length, to\n * avoid unnecessary reallocations while growing.\n */\n enlargeStringInfo(output, input_len * ESTIMATED_GROWTH_FACTOR);\n\n /*\n * Process input in chunks, to reduce overhead of maintaining output buffer\n * for each processed input char. Increasing the buffer size too much will\n * lead to memory being wasted due to the necessary over-allocation.\n */\n #define CHUNK_SIZE 128\n remaining_bytes = input_len;\n while (remaining_bytes > 0)\n {\n local_len = Min(remaining_bytes, CHUNK_SIZE);\n\n /* ensure we have output buffer space for this chunk */\n enlargeStringInfo(output, MAX_CONVERSION_GROWTH * local_len);\n\n /* growing the stringinfo may invalidate previous dest */\n dest = output->data + output->len;\n\n while (local_len > 0)\n {\n /* current conversion logic, barely any slower */\n }\n\n remaining_bytes -= local_len;\n output->len = dest - output->data;\n }\n\n Assert(remaining_bytes == 0);\n\n\nAnd to avoid duplicating this code all over I think we could package it\nin a inline function with a per-char callback. 
Just about every useful\ncompiler ought to be able to remove those levels of indirection.\n\nSo a concrete conversion routine might look like:\n\nstatic inline int\niso8859_1_to_utf8_char(ConversionState *state)\n{\n unsigned short c = *state->src;\n\n if (c == 0)\n report_invalid_encoding(PG_LATIN1, (const char *) state->src, state->len);\n if (!IS_HIGHBIT_SET(c))\n *state->dest++ = c;\n else\n {\n *state->dest++ = (c >> 6) | 0xc0;\n *state->dest++ = (c & 0x003f) | HIGHBIT;\n }\n state->src++;\n state->len--;\n}\n\nDatum\niso8859_1_to_utf8(PG_FUNCTION_ARGS)\n{\n ConversionState state = {\n .conservative_growth_factor = 1.05,\n .max_perchar_overhead = 2,\n .src = (unsigned char *) PG_GETARG_CSTRING(2),\n .dest = (StringInfo *) PG_GETARG_POINTER(3),\n .len = PG_GETARG_INT32(4),\n };\n\n return encoding_conv_helper(&state, iso8859_1_to_utf8_char);\n}\n\nwhere encoding_conv_helper is a static inline function that does the\nchunking described above.\n\nThere's probably some added complexity around making sure that the\nchunking properly deals with multi-byte encodings properly, but that\nseems solvable.\n\n\n> It then occurred to me that we could also repalloc the output buffer\n> down to just the required size, which is pointless if it's small\n> but not if we can give back several hundred MB. This is conveniently\n> mergeable with the check to see whether we need to check strlen or not.\n>\n> ... or at least, that's what I thought we could do. Testing showed\n> me that AllocSetRealloc never actually gives back any space, even\n> when it's just acting as a frontend for a direct malloc. However,\n> we can fix that for little more than the price of swapping the order\n> of the is-it-a-decrease and is-it-a-large-chunk stanzas, as in the\n> 0002 patch below.\n\nThat seems generally like a good idea.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 Sep 2019 14:42:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-09-24 16:19:41 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> It's probably too big a hammer for this specific case, but I think at\n>>> some point we ought to stop using fixed size allocations for this kind\n>>> of work. Instead we should use something roughly like our StringInfo,\n>>> except that when exceeding the current size limit, the overflowing data\n>>> is stored in a separate allocation. And only once we actually need the\n>>> data in a consecutive form, we allocate memory that's large enough to\n>>> store the all the separate allocations in their entirety.\n\n>> That sounds pretty messy :-(.\n\n> I don't think it's too bad - except for now allowing the .data member of\n> such a 'chunked' StringInfo to be directly accessible, it'd be just\n> about the same interface as the current stringinfo.\n\nI dunno. What you're describing would be a whole lotta work, and it'd\nbreak a user-visible API, and no amount of finagling is going to prevent\nit from making conversions somewhat slower, and the cases where it matters\nto not preallocate a surely-large-enough buffer are really few and far\nbetween. I have to think that we have better ways to spend our time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Sep 2019 18:09:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
{
"msg_contents": "Hi, \n\nOn September 24, 2019 3:09:28 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2019-09-24 16:19:41 -0400, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> It's probably too big a hammer for this specific case, but I think\n>at\n>>>> some point we ought to stop using fixed size allocations for this\n>kind\n>>>> of work. Instead we should use something roughly like our\n>StringInfo,\n>>>> except that when exceeding the current size limit, the overflowing\n>data\n>>>> is stored in a separate allocation. And only once we actually need\n>the\n>>>> data in a consecutive form, we allocate memory that's large enough\n>to\n>>>> store the all the separate allocations in their entirety.\n>\n>>> That sounds pretty messy :-(.\n>\n>> I don't think it's too bad - except for now allowing the .data member\n>of\n>> such a 'chunked' StringInfo to be directly accessible, it'd be just\n>> about the same interface as the current stringinfo.\n>\n>I dunno. What you're describing would be a whole lotta work, and it'd\n>break a user-visible API, and no amount of finagling is going to\n>prevent\n>it from making conversions somewhat slower, and the cases where it\n>matters\n>to not preallocate a surely-large-enough buffer are really few and far\n>between. I have to think that we have better ways to spend our time.\n\nIt'd not just avoid the overallocation, but also avoid the strlen and memcpy afterwards at the callsites, as well as the separate allocation. So I'd bet it'd be almost always a win.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 24 Sep 2019 15:57:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
{
"msg_contents": "I wrote:\n> [ v2-0001-cope-with-large-encoding-conversions.patch ]\n> [ v2-0002-actually-recover-space-in-repalloc.patch ]\n\nI've pushed these patches (after some more review and cosmetic\nadjustments) and marked the CF entry closed. Andres is welcome\nto see if he can improve the situation further.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Oct 2019 17:38:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allocation limit for encoding conversion"
},
{
"msg_contents": "On 2019-Oct-03, Tom Lane wrote:\n\n> I wrote:\n> > [ v2-0001-cope-with-large-encoding-conversions.patch ]\n> > [ v2-0002-actually-recover-space-in-repalloc.patch ]\n> \n> I've pushed these patches (after some more review and cosmetic\n> adjustments) and marked the CF entry closed. Andres is welcome\n> to see if he can improve the situation further.\n\nMany thanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 3 Oct 2019 19:04:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: allocation limit for encoding conversion"
}
] |
[
{
"msg_contents": "Everywhere I've worked I've seen people struggle with table bloat. It's\nhard to even measure how much of it you have or where, let alone actually\nfix it.\n\nIf you search online you'll find dozens of different queries estimating how\nmuch empty space is in your tables and indexes based on pg_stats\nstatistics and suppositions about header lengths and padding and plugging\nthem into formulas of varying credibility.\n\nBut isn't this all just silliness these days? We could actually sum up the\nspace recorded in the fsm and get a much more trustworthy number in\nmilliseconds.\n\nI rigged up a quick proof of concept and the code seems super simple and\nquick. There's one or two tables where the number is a bit suspect and\nthere's no fsm if vacuum hasn't run but that seems pretty small potatoes\nfor such a huge help in reducing user pain.",
"msg_date": "Fri, 16 Aug 2019 20:39:21 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Can't we give better table bloat stats easily?"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-16 20:39:21 -0400, Greg Stark wrote:\n> But isn't this all just silliiness these days? We could actually sum up the\n> space recorded in the fsm and get a much more trustworthy number in\n> milliseconds.\n\nYou mean like pgstattuple_approx()?\n\nhttps://www.postgresql.org/docs/current/pgstattuple.html\n\nOr something different?\n\n> I rigged up a quick proof of concept and the code seems super simple and\n> quick. There's one or two tables where the number is a bit suspect and\n> there's no fsm if vacuum hasn't run but that seems pretty small potatoes\n> for such a huge help in reducing user pain.\n\nHard to comment on what you propose, without more details. But note that\nyou can't just look at the FSM, because in a lot of workloads it is\noften hugely out of date. And fairly obviously it doesn't provide you\nwith information about how much space is currently occupied by dead\ntuples. What pgstattuple_approx does is to use the FSM for blocks that\nare all-visible, and look at the page otherwise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Aug 2019 17:59:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Can't we give better table bloat stats easily?"
},
{
"msg_contents": "On Sat, Aug 17, 2019 at 9:59 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-08-16 20:39:21 -0400, Greg Stark wrote:\n> > But isn't this all just silliiness these days? We could actually sum up the\n> > space recorded in the fsm and get a much more trustworthy number in\n> > milliseconds.\n>\n> You mean like pgstattuple_approx()?\n>\n> https://www.postgresql.org/docs/current/pgstattuple.html\n>\n> Or something different?\n>\n> > I rigged up a quick proof of concept and the code seems super simple and\n> > quick. There's one or two tables where the number is a bit suspect and\n> > there's no fsm if vacuum hasn't run but that seems pretty small potatoes\n> > for such a huge help in reducing user pain.\n>\n> Hard to comment on what you propose, without more details. But note that\n> you can't just look at the FSM, because in a lot of workloads it is\n> often hugely out of date. And fairly obviously it doesn't provide you\n> with information about how much space is currently occupied by dead\n> tuples. What pgstattuple_approx does is to use the FSM for blocks that\n> are all-visible, and look at the page otherwise.\n>\n\nIt's just an idea but we could have pgstattuple_approx use sample scan\nto estimate the table bloat more faster.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 26 Aug 2019 16:51:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can't we give better table bloat stats easily?"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 8:39 PM Greg Stark <stark@mit.edu> wrote:\n\n> Everywhere I've worked I've seen people struggle with table bloat. It's\n> hard to even measure how much of it you have or where, let alone actually\n> fix it.\n>\n> If you search online you'll find dozens of different queries estimating\n> how much empty space are in your tables and indexes based on pg_stats\n> statistics and suppositions about header lengths and padding and plugging\n> them into formulas of varying credibility.\n>\n\nThere is not much we can do to suppress bad advice that people post on\ntheir own blogs. If wiki.postgresql.org is hosting bad advice, by all\nmeans we should fix that.\n\n\n> But isn't this all just silliiness these days? We could actually sum up\n> the space recorded in the fsm and get a much more trustworthy number in\n> milliseconds.\n>\n\nIf you have bloat problems, then you probably have vacuuming problems. If\nyou have vacuuming problems, how much can you trust fsm anyway?\n\nCheers,\n\nJeff",
"msg_date": "Mon, 26 Aug 2019 11:30:13 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can't we give better table bloat stats easily?"
}
] |