[
{
"msg_contents": "Hi,\n\nWhile I was reviewing replication slot statistics code, I found one\nissue in the data type used for pgstat_report_replslot function\nparameters. We pass int64 variables to the function but the function\nprototype uses int type. I felt the function parameters should be\nint64. Attached patch fixes the same.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Thu, 1 Apr 2021 22:19:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Data type correction in pgstat_report_replslot function parameters"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 10:20 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> Hi,\n>\n> While I was reviewing replication slot statistics code, I found one\n> issue in the data type used for pgstat_report_replslot function\n> parameters. We pass int64 variables to the function but the function\n> prototype uses int type. I felt the function parameters should be\n> int64. Attached patch fixes the same.\n\n\n+1 for the change. The patch LGTM.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Thu, 1 Apr 2021 22:48:07 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Data type correction in pgstat_report_replslot function\n parameters"
},
{
"msg_contents": "\n\nOn 2021/04/02 2:18, Jeevan Ladhe wrote:\n> \n> \n> On Thu, Apr 1, 2021 at 10:20 PM vignesh C <vignesh21@gmail.com <mailto:vignesh21@gmail.com>> wrote:\n> \n> Hi,\n> \n> While I was reviewing replication slot statistics code, I found one\n> issue in the data type used for pgstat_report_replslot function\n> parameters. We pass int64 variables to the function but the function\n> prototype uses int type. I I felt the function parameters should be\n> int64. Attached patch fixes the same.\n\nIsn't it better to use PgStat_Counter instead of int64?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Apr 2021 02:48:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Data type correction in pgstat_report_replslot function\n parameters"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 11:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/02 2:18, Jeevan Ladhe wrote:\n> >\n> >\n> > On Thu, Apr 1, 2021 at 10:20 PM vignesh C <vignesh21@gmail.com <mailto:vignesh21@gmail.com>> wrote:\n> >\n> > Hi,\n> >\n> > While I was reviewing replication slot statistics code, I found one\n> > issue in the data type used for pgstat_report_replslot function\n> > parameters. We pass int64 variables to the function but the function\n> > prototype uses int type. I I felt the function parameters should be\n> > int64. Attached patch fixes the same.\n>\n> Isn't it better to use PgStat_Counter instead of int64?\n>\n\nThanks for your comment, the updated patch contains the changes for it.\n\nRegards,\nVignesh",
"msg_date": "Fri, 2 Apr 2021 07:50:48 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data type correction in pgstat_report_replslot function\n parameters"
},
{
"msg_contents": "\n\nOn 2021/04/02 11:20, vignesh C wrote:\n> On Thu, Apr 1, 2021 at 11:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/04/02 2:18, Jeevan Ladhe wrote:\n>>>\n>>>\n>>> On Thu, Apr 1, 2021 at 10:20 PM vignesh C <vignesh21@gmail.com <mailto:vignesh21@gmail.com>> wrote:\n>>>\n>>> Hi,\n>>>\n>>> While I was reviewing replication slot statistics code, I found one\n>>> issue in the data type used for pgstat_report_replslot function\n>>> parameters. We pass int64 variables to the function but the function\n>>> prototype uses int type. I I felt the function parameters should be\n>>> int64. Attached patch fixes the same.\n>>\n>> Isn't it better to use PgStat_Counter instead of int64?\n>>\n> \n> Thanks for your comment, the updated patch contains the changes for it.\n\nThanks for updating the patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Apr 2021 17:29:26 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Data type correction in pgstat_report_replslot function\n parameters"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 1:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/02 11:20, vignesh C wrote:\n> > On Thu, Apr 1, 2021 at 11:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2021/04/02 2:18, Jeevan Ladhe wrote:\n> >>>\n> >>>\n> >>> On Thu, Apr 1, 2021 at 10:20 PM vignesh C <vignesh21@gmail.com <mailto:vignesh21@gmail.com>> wrote:\n> >>>\n> >>> Hi,\n> >>>\n> >>> While I was reviewing replication slot statistics code, I found one\n> >>> issue in the data type used for pgstat_report_replslot function\n> >>> parameters. We pass int64 variables to the function but the function\n> >>> prototype uses int type. I I felt the function parameters should be\n> >>> int64. Attached patch fixes the same.\n> >>\n> >> Isn't it better to use PgStat_Counter instead of int64?\n> >>\n> >\n> > Thanks for your comment, the updated patch contains the changes for it.\n>\n> Thanks for updating the patch! Pushed.\n\nThanks for pushing.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 3 Apr 2021 07:08:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data type correction in pgstat_report_replslot function\n parameters"
}
]
[
{
"msg_contents": "Hi,\n\nWhile working on the shared memory stats patch I ran into (not for the\nfirst time) issues with our process initialization.\n\nThe concrete issue was that I noticed that some stats early in startup\nweren't processed correctly - the stats system wasn't initialized yet. I\nconsequently added assertions ensuring that we don't try to report stats\nbefore that. Which blew up.\n\nEven in master we report stats well before the pgstat_initialize()\ncall. E.g. in autovac workers:\n\t\t/*\n\t\t * Report autovac startup to the stats collector. We deliberately do\n\t\t * this before InitPostgres, so that the last_autovac_time will get\n\t\t * updated even if the connection attempt fails. This is to prevent\n\t\t * autovac from getting \"stuck\" repeatedly selecting an unopenable\n\t\t * database, rather than making any progress on stuff it can connect\n\t\t * to.\n\t\t */\n\nThat previously just didn't cause a problem, because we didn't really\nneed pgstat_initialize() to have happened for stats reporting to work.\n\nIn the shared memory stats patch there's no dependency on\npgstat_initialize() knowing MyBackendId anymore (broken out to a\nseparate function). So I tried moving the stats initialization to\nsomewhere earlier.\n\n\nThere currently is simply no way of doing that that doesn't cause\nduplication, or weird conditions. We can't do it in:\n\n- InitProcess()/InitAuxiliaryProcess(),\n  CreateSharedMemoryAndSemaphores() hasn't yet run in EXEC_BACKEND\n- below CreateSharedMemoryAndSemaphores(), as that isn't called for each\n  backend in !EXEC_BACKEND\n- InitPostgres(), because autovac workers report stats before that\n- BaseInit(), because it's called before we have a PROC iff !EXEC_BACKEND\n- ...\n\nI have now worked around this by generous application of ugly, but I\nthink we really need to do something about this mazy mess. 
There's just\nan insane amount of duplication, and it's too complicated to remember for\nmore than a few minutes.\n\nI really would like to not see things like\n\n\t/*\n\t * Create a per-backend PGPROC struct in shared memory, except in the\n\t * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do\n\t * this before we can use LWLocks (and in the EXEC_BACKEND case we already\n\t * had to do some stuff with LWLocks).\n\t */\n#ifdef EXEC_BACKEND\n\tif (!IsUnderPostmaster)\n\t\tInitProcess();\n#else\n\tInitProcess();\n#endif\n\nSimilarly, codeflow like bootstrap.c being involved in bog standard\nstuff like starting up wal writer etc, is just pointlessly\nconfusing. Note that bootstrap itself does *not* go through\nAuxiliaryProcessMain(), and thus has yet another set of initialization\nneeds.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Apr 2021 17:22:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Process initialization labyrinth"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 8:22 PM Andres Freund <andres@anarazel.de> wrote:\n> <snip>\n> I have now worked around this by generous application of ugly, but I\n> think we really need to do something about this mazy mess. There's just\n> an insane amount of duplication, and it's too complicated to remember\n> more than a few minutes.\n>\n> I really would like to not see things like\n>\n> /*\n> * Create a per-backend PGPROC struct in shared memory, except in the\n> * EXEC_BACKEND case where this was done in SubPostmasterMain. We must do\n> * this before we can use LWLocks (and in the EXEC_BACKEND case we already\n> * had to do some stuff with LWLocks).\n> */\n> #ifdef EXEC_BACKEND\n> if (!IsUnderPostmaster)\n> InitProcess();\n> #else\n> InitProcess();\n> #endif\n>\n> Similarly, codeflow like bootstrap.c being involved in bog standard\n> stuff like starting up wal writer etc, is just pointlessly\n> confusing. Note that bootstrap itself does *not* go through\n> AuxiliaryProcessMain(), and thus has yet another set of initialization\n> needs.\n\nI can't really speak to the initial points directly relating to\npgstat/shmem, but for the overall maze-like nature of the startup\ncode: is there any chance the startup centralization patchset would be\nof any help here?\nhttps://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO=htC6LnA6aW4r-+jq=3Q5RAoFQgW8EtA@mail.gmail.com\n\nI know you are at least vaguely aware of the efforts there, as you\nreviewed the patchset. Figured I should at least bring it up in case\nit seemed helpful, or an effort you'd like to re-invigorate.\n\nThanks,\n\n--\nMike Palmiotto\nhttps://crunchydata.com\n\n\n",
"msg_date": "Fri, 2 Apr 2021 10:18:20 -0400",
"msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: Process initialization labyrinth"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-02 10:18:20 -0400, Mike Palmiotto wrote:\n> I can't really speak to the initial points directly relating to\n> pgstat/shmem, but for the overall maze-like nature of the startup\n> code: is there any chance the startup centralization patchset would be\n> of any help here?\n> https://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO=htC6LnA6aW4r-+jq=3Q5RAoFQgW8EtA@mail.gmail.com\n\nI think parts of it could help, at least. It doesn't really do much\nabout centralizing / de-mazing the actual initialization of sub-systems,\nbut it'd make it a bit easier to do so.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Apr 2021 10:02:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Process initialization labyrinth"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 1:02 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-02 10:18:20 -0400, Mike Palmiotto wrote:\n> > I can't really speak to the initial points directly relating to\n> > pgstat/shmem, but for the overall maze-like nature of the startup\n> > code: is there any chance the startup centralization patchset would be\n> > of any help here?\n> > https://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO=htC6LnA6aW4r-+jq=3Q5RAoFQgW8EtA@mail.gmail.com\n>\n> I think parts of it could help, at least. It doesn't really do much\n> about centralizing / de-mazing the actual initialization of sub-systems,\n> but it'd make it a bit easier to do so.\n\nThe patchset needs some work, no doubt. If you think it'd be useful, I\ncan spend some of my free time addressing any gaps in the design. I'd\nhate to see that code go to waste, as I think it may have been a\nreasonable first step.\n\nAlso not opposed to you taking the patchset and running with it if you prefer.\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n",
"msg_date": "Fri, 2 Apr 2021 13:31:10 -0400",
"msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: Process initialization labyrinth"
}
]
[
{
"msg_contents": "Or am I misunderstanding something?\n\nTry this. The result of each “select” is shown as the trailing comment on the same line. I added whitespace by hand to line up the fields.\n\nselect interval '-1.7 years'; -- -1 years -8 mons\n\nselect interval '29.4 months'; -- 2 years 5 mons 12 days\n\nselect interval '-1.7 years 29.4 months'; -- 8 mons 12 days << wrong\nselect interval '29.4 months -1.7 years'; -- 9 mons 12 days\n\nselect interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12 days\nselect interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12 days\n\nAs I reason it, the last four “select” statements are all semantically the same. They’re just different syntaxes to add the two intervals that the first two “select” statements use separately. There’s one odd man out. And I reason this one to be wrong. Is there a flaw in my reasoning?\n\nFurther… there’s a notable asymmetry. The fractional part of “1.7 years” is 8.4 months. But the fractional part of the months value doesn’t spread further down into days. However, the fractional part of “29.4 months” (12 days) _does_ spread further down into days. What’s the rationale for this asymmetry?\n\nI can’t see that my observations here can be explained by the difference between calendar time and clock time. Here I’m just working with non-metric units like feet and inches. One year is just defined as 12 months. And one month is just defined as 30 days. All that stuff about adding a month to 3-Feb-2020 taking you to 3-Mar-2020 (same for leap years and non-leap years), and that other stuff about adding one day to 23:00 on the day before the “spring forward” moment taking you to 23:00 on the next day (i.e. when intervals are added to timestamps) is downstream of simply adding two intervals.",
"msg_date": "Thu, 1 Apr 2021 21:46:58 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 09:46:58PM -0700, Bryn Llewellyn wrote:\n> Or am I misunderstanding something?\n> \n> Try this. The result of each “select” is shown as the trailing comment on the\n> same line. I added whitespace by hand to line up the fields.\n> \n> select interval '-1.7 years'; -- -1 years -8 mons\n> \n> select interval '29.4 months'; -- 2 years 5 mons 12\n> days\n> \n> select interval '-1.7 years 29.4 months'; -- 8 mons 12\n> days << wrong\n> select interval '29.4 months -1.7 years'; -- 9 mons 12\n> days\n> \n> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12\n> days\n> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12\n> days\n> \n> As I reason it, the last four “select” statements are all semantically the\n> same. They’re just different syntaxes to add the two intervals the the first\n> two “select” statements use separately. There’s one odd man out. And I reason\n> this one to be wrong. Is there a flaw in my reasoning?\n\n[Thread moved to hackers.]\n\nLooking at your report, I thought I could easily find the cause, but it\nwasn't obvious. What is happening is that when you cast a float to an\ninteger in C, it rounds toward zero, meaning that 8.4 rounds to 8 and\n-8.4 rounds to -8. The big problem here is that -8.4 is rounding in a\npositive direction, while 8.4 rounds in a negative direction. See this:\n\n\tint(int(-8.4) + 29)\n\t 21\n\tint(int(29) + -8.4)\n\t 20\n\nWhen you do '-1.7 years' first, it becomes -8, and then adding 29 yields\n21. In the other order, it is 29 - 8.4, which yields 20.6, which\nbecomes 20. I honestly had never studied this interaction, though you\nwould think I would have seen it before. One interesting issue is that\nit only happens when the truncations happen to values with different\nsigns --- if they are both positive or negative, it is fine.\n\nThe best fix I think is to use rint()/round to round to the closest\ninteger, not toward zero. 
The attached patch does this in a few places,\nbut the code needs more research if we are happy with this approach,\nsince there are probably other cases. Using rint() does help to produce\nmore accurate results, though the regression tests show no change from\nthis patch.\n\n> Further… there’s a notable asymmetry. The fractional part of “1.7 years” is 8.4\n> months. But the fractional part of the months value doesn’t spread further down\n> into days. However, the fractional part of “29.4 months” (12 days) _does_\n> spread further down into days. What’s the rationale for this asymmetry?\n\nYes, looking at the code, it seems we only spill down to one unit, not\nmore. I think we need to have a discussion if we want to change that. \nI think the idea was that if you specify a non-whole number, you\nprobably want to spill down one level, but don't want it spilling all\nthe way to milliseconds, which is certainly possible.\n\n> I can’t see that my observations here can be explained by the difference\n> between calendar time and clock time. Here I’m just working with non-metric\n> units like feet and inches. One year is just defined as 12 months. And one\n> month is just defined as 30 days. All that stuff about adding a month to\n> 3-Feb-2020 taking you to 3-Mar-2020 (same for leap years and non-leap years),\n> and that other stuff about adding one day to 23:00 on the day before the\n> “spring forward” moment taking you to 23:00 on the next day (i.e. when\n> intervals are added to timestamps) is downstream of simply adding two\n> intervals.\n\nAh, seems you have done some research. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 2 Apr 2021 14:05:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "\nThread moved to hackers, with a patch.\n\n---------------------------------------------------------------------------\n\nOn Thu, Apr 1, 2021 at 09:46:58PM -0700, Bryn Llewellyn wrote:\n> Or am I misunderstanding something?\n> \n> Try this. The result of each “select” is shown as the trailing comment on the\n> same line. I added whitespace by hand to line up the fields.\n> \n> select interval '-1.7 years'; -- -1 years -8 mons\n> \n> select interval '29.4 months'; -- 2 years 5 mons 12\n> days\n> \n> select interval '-1.7 years 29.4 months'; -- 8 mons 12\n> days << wrong\n> select interval '29.4 months -1.7 years'; -- 9 mons 12\n> days\n> \n> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12\n> days\n> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12\n> days\n> \n> As I reason it, the last four “select” statements are all semantically the\n> same. They’re just different syntaxes to add the two intervals the the first\n> two “select” statements use separately. There’s one odd man out. And I reason\n> this one to be wrong. Is there a flaw in my reasoning?\n> \n> Further… there’s a notable asymmetry. The fractional part of “1.7 years” is 8.4\n> months. But the fractional part of the months value doesn’t spread further down\n> into days. However, the fractional part of “29.4 months” (12 days) _does_\n> spread further down into days. What’s the rationale for this asymmetry?\n> \n> I can’t see that my observations here can be explained by the difference\n> between calendar time and clock time. Here I’m just working with non-metric\n> units like feet and inches. One year is just defined as 12 months. And one\n> month is just defined as 30 days. All that stuff about adding a month to\n> 3-Feb-2020 taking you to 3-Mar-2020 (same for leap years an non-leap years) ,\n> and that other stuff about adding one day to 23:00 on the day before the\n> “spring forward” moment taking you to 23:00 on the next day (i.w. 
when\n> intervals are added to timestamps) is downstream of simply adding two\n> intervals.\n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 14:06:04 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "bruce@momjian.us wrote:\n\n> [Thread moved to hackers.] …The best fix I think is…\n> \n>> Bryn wrote: Further… there’s a notable asymmetry. The fractional part of “1.7 years” is 8.4 months. But the fractional part of the months value doesn’t spread further down into days. However, the fractional part of “29.4 months” (12 days) _does_ spread further down into days. What’s the rationale for this asymmetry?\n> \n> Yes, looking at the code, it seems we only spill down to one unit, not more. I think we need to have a discussion if we want to change that. I think the idea was that if you specify a non-whole number, you probably want to spill down one level, but don't want it spilling all the way to milliseconds, which is certainly possible.\n\nThanks for the quick response, Bruce. I was half expecting (re the bug) an explanation that showed that I’d (once again) misunderstood a fundamental principle.\n\nI should come clean about the larger context. I work for Yugabyte, Inc. We have a distributed SQL database that uses the Version 11.2 PostgreSQL C code for SQL processing “as is”.\n\nhttps://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/\n\nThe founders decided to document YugabyteDB’s SQL functionality explicitly rather than just to point to the published PostgreSQL doc. (There are some DDL differences that reflect the storage layer differences.) I’m presently documenting date-time functionality. This is why I’m so focused on understanding the semantics exactly and on understanding the requirements that the functionality was designed to meet. I’m struggling with interval functionality. I read this:\n\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n\n« …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. 
When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n\nNotice that the doc says that spill-down goes all the way to seconds and not just one unit. This simple test is consistent with the doc (output follows the dash-dash comment):\n\nselect ('6.54321 months'::interval)::text as i; -- 6 mons 16 days 07:06:40.32\n\nYou see similar spill-down with this:\n\nselect ('876.54321 days'::interval)::text as i; -- 876 days 13:02:13.344\n\nAnd so on down through the remaining smaller units. It’s only this test that doesn’t spill down one unit:\n\nselect ('6.54321 years'::interval)::text as i; -- 6 years 6 mons\n\nThis does suggest a straight bug rather than a case for committee debate about what might have been intended. What do you think, Bruce?",
"msg_date": "Fri, 2 Apr 2021 13:05:42 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 11:06 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> Thread moved to hackers, with a patch.\n> ---------------------------------------------------------------------------\n>\n>\nHere is a link to that thread, for others who might be curious about it as\nI was:\nhttps://www.postgresql.org/message-id/flat/20210402180549.GF9270%40momjian.us#b3bdafbfeacab0dd8967ff2a3ebf7844\n\nI get why it can make sense to move a thread. But if, when doing so, you\npost a link to the new thread, that would be appreciated. Thanks!\n\nKen\n\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\nken.tanzer@agency-software.org\n(253) 245-3801\n\nSubscribe to the mailing list\n<agency-general-request@lists.sourceforge.net?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.",
"msg_date": "Fri, 2 Apr 2021 13:20:26 -0700",
"msg_from": "Ken Tanzer <ken.tanzer@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bruce:\nThanks for tackling this issue.\n\nThe patch looks good to me.\nWhen you have time, can you include the places which were not covered by\nthe current diff ?\n\nCheers\n\nOn Fri, Apr 2, 2021 at 11:06 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Apr 1, 2021 at 09:46:58PM -0700, Bryn Llewellyn wrote:\n> > Or am I misunderstanding something?\n> >\n> > Try this. The result of each “select” is shown as the trailing comment\n> on the\n> > same line. I added whitespace by hand to line up the fields.\n> >\n> > select interval '-1.7 years'; -- -1 years -8\n> mons\n> >\n> > select interval '29.4 months'; -- 2 years 5\n> mons 12\n> > days\n> >\n> > select interval '-1.7 years 29.4 months'; -- 8\n> mons 12\n> > days << wrong\n> > select interval '29.4 months -1.7 years'; -- 9\n> mons 12\n> > days\n> >\n> > select interval '-1.7 years' + interval '29.4 months'; -- 9\n> mons 12\n> > days\n> > select interval '29.4 months' + interval '-1.7 years'; -- 9\n> mons 12\n> > days\n> >\n> > As I reason it, the last four “select” statements are all semantically\n> the\n> > same. They’re just different syntaxes to add the two intervals the the\n> first\n> > two “select” statements use separately. There’s one odd man out. And I\n> reason\n> > this one to be wrong. Is there a flaw in my reasoning?\n>\n> [Thread moved to hackers.]\n>\n> Looking at your report, I thought I could easily find the cause, but it\n> wasn't obvious. What is happening is that when you cast a float to an\n> integer in C, it rounds toward zero, meaning that 8.4 rounds to 8 and\n> -8.4 rounds to -8. The big problem here is that -8.4 is rounding in a\n> positive direction, while 8.4 rounds in a negative direction. See this:\n>\n> int(int(-8.4) + 29)\n> 21\n> int(int(29) + -8.4)\n> 20\n>\n> When you do '-1.7 years' first, it become -8, and then adding 29 yields\n> 21. In the other order, it is 29 - 8.4, which yields 20.6, which\n> becomes 20. 
I honestly had never studied this interaction, though you\n> would think I would have seen it before. One interesting issue is that\n> it only happens when the truncations happen to values with different\n> signs --- if they are both positive or negative, it is fine.\n>\n> The best fix I think is to use rint()/round to round to the closest\n> integer, not toward zero. The attached patch does this in a few places,\n> but the code needs more research if we are happy with this approach,\n> since there are probably other cases. Using rint() does help to produce\n> more accurate results, thought the regression tests show no change from\n> this patch.\n>\n> > Further… there’s a notable asymmetry. The fractional part of “1.7 years”\n> is 8.4\n> > months. But the fractional part of the months value doesn’t spread\n> further down\n> > into days. However, the fractional part of “29.4 months” (12 days) _does_\n> > spread further down into days. What’s the rationale for this asymmetry?\n>\n> Yes, looking at the code, it seems we only spill down to one unit, not\n> more. I think we need to have a discussion if we want to change that.\n> I think the idea was that if you specify a non-whole number, you\n> probably want to spill down one level, but don't want it spilling all\n> the way to milliseconds, which is certainly possible.\n>\n> > I can’t see that my observations here can be explained by the difference\n> > between calendar time and clock time. Here I’m just working with\n> non-metric\n> > units like feet and inches. One year is just defined as 12 months. And\n> one\n> > month is just defined as 30 days. All that stuff about adding a month to\n> > 3-Feb-2020 taking you to 3-Mar-2020 (same for leap years an non-leap\n> years) ,\n> > and that other stuff about adding one day to 23:00 on the day before the\n> > “spring forward” moment taking you to 23:00 on the next day (i.w. 
when\n> > intervals are added to timestamps) is downstream of simply adding two\n> > intervals.\n>\n> Ah, seems you have done some research. ;-)\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nBruce:Thanks for tackling this issue.The patch looks good to me.When you have time, can you include the places which were not covered by the current diff ?CheersOn Fri, Apr 2, 2021 at 11:06 AM Bruce Momjian <bruce@momjian.us> wrote:On Thu, Apr 1, 2021 at 09:46:58PM -0700, Bryn Llewellyn wrote:\n> Or am I misunderstanding something?\n> \n> Try this. The result of each “select” is shown as the trailing comment on the\n> same line. I added whitespace by hand to line up the fields.\n> \n> select interval '-1.7 years'; -- -1 years -8 mons\n> \n> select interval '29.4 months'; -- 2 years 5 mons 12\n> days\n> \n> select interval '-1.7 years 29.4 months'; -- 8 mons 12\n> days << wrong\n> select interval '29.4 months -1.7 years'; -- 9 mons 12\n> days\n> \n> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12\n> days\n> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12\n> days\n> \n> As I reason it, the last four “select” statements are all semantically the\n> same. They’re just different syntaxes to add the two intervals the the first\n> two “select” statements use separately. There’s one odd man out. And I reason\n> this one to be wrong. Is there a flaw in my reasoning?\n\n[Thread moved to hackers.]\n\nLooking at your report, I thought I could easily find the cause, but it\nwasn't obvious. What is happening is that when you cast a float to an\ninteger in C, it rounds toward zero, meaning that 8.4 rounds to 8 and\n-8.4 rounds to -8. The big problem here is that -8.4 is rounding in a\npositive direction, while 8.4 rounds in a negative direction. 
See this:\n\n int(int(-8.4) + 29)\n 21\n int(int(29) + -8.4)\n 20\n\nWhen you do '-1.7 years' first, it become -8, and then adding 29 yields\n21. In the other order, it is 29 - 8.4, which yields 20.6, which\nbecomes 20. I honestly had never studied this interaction, though you\nwould think I would have seen it before. One interesting issue is that\nit only happens when the truncations happen to values with different\nsigns --- if they are both positive or negative, it is fine.\n\nThe best fix I think is to use rint()/round to round to the closest\ninteger, not toward zero. The attached patch does this in a few places,\nbut the code needs more research if we are happy with this approach,\nsince there are probably other cases. Using rint() does help to produce\nmore accurate results, thought the regression tests show no change from\nthis patch.\n\n> Further… there’s a notable asymmetry. The fractional part of “1.7 years” is 8.4\n> months. But the fractional part of the months value doesn’t spread further down\n> into days. However, the fractional part of “29.4 months” (12 days) _does_\n> spread further down into days. What’s the rationale for this asymmetry?\n\nYes, looking at the code, it seems we only spill down to one unit, not\nmore. I think we need to have a discussion if we want to change that. \nI think the idea was that if you specify a non-whole number, you\nprobably want to spill down one level, but don't want it spilling all\nthe way to milliseconds, which is certainly possible.\n\n> I can’t see that my observations here can be explained by the difference\n> between calendar time and clock time. Here I’m just working with non-metric\n> units like feet and inches. One year is just defined as 12 months. And one\n> month is just defined as 30 days. 
All that stuff about adding a month to\n> 3-Feb-2020 taking you to 3-Mar-2020 (same for leap years an non-leap years) ,\n> and that other stuff about adding one day to 23:00 on the day before the\n> “spring forward” moment taking you to 23:00 on the next day (i.w. when\n> intervals are added to timestamps) is downstream of simply adding two\n> intervals.\n\nAh, seems you have done some research. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 2 Apr 2021 13:27:33 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 11:05 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Apr 1, 2021 at 09:46:58PM -0700, Bryn Llewellyn wrote:\n> > Or am I misunderstanding something?\n> >\n> > Try this. The result of each “select” is shown as the trailing comment\n> on the\n> > same line. I added whitespace by hand to line up the fields.\n> >\n> > select interval '-1.7 years'; -- -1 years -8\n> mons\n> >\n> > select interval '29.4 months'; -- 2 years 5\n> mons 12\n> > days\n> >\n> > select interval '-1.7 years 29.4 months'; -- 8\n> mons 12\n> > days << wrong\n> > select interval '29.4 months -1.7 years'; -- 9\n> mons 12\n> > days\n> >\n> > select interval '-1.7 years' + interval '29.4 months'; -- 9\n> mons 12\n> > days\n> > select interval '29.4 months' + interval '-1.7 years'; -- 9\n> mons 12\n> > days\n> >\n>\n\nWhile maybe there is an argument to fixing the negative/positive rounding\nissue - there is no way this gets solved without breaking the current\nimplementation\n\nselect interval '0.3 years' + interval '0.4 years' - interval '0.7 years' +\ninterval '0.1 years' should not equal 0 but it certainly does.\n\nUnless we take the concept of 0.3 years = 3 months and move to something\nalong the lines of\n\n1 year = 360 days\n1 month = 30 days\n\nso therefore\n\n0.3 years = 360 days * 0.3 = 108 days = 3 months 18 days\n0.4 years = 360 days * 0.4 = 144 days = 4 months 24 days\n0.7 years = 360 days * 0.7 = 252 days = 8 months 12 days\n\nThen, and only if we don't go to any more than tenths of a year, does the\nmath work. 
Probably this should resolve down to seconds and then work\nbackwards - but unless we're looking at breaking the entire way it\ncurrently resolves things - I don't think this is of much value.\n\nDoing math on intervals is like doing math on rounded numbers - there is\nalways going to be a pile of issues because the level of precision just is\nnot good enough.\n\nJohn",
"msg_date": "Fri, 2 Apr 2021 14:00:03 -0700",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 01:05:42PM -0700, Bryn Llewellyn wrote:\n> I should come clean about the larger context. I work for Yugabyte, Inc. We have\n> a distributed SQL database that uses the Version 11.2 PostgreSQL C code for SQL\n> processing “as is”.\n> \n> https://blog.yugabyte.com/\n> distributed-postgresql-on-a-google-spanner-architecture-query-layer/\n> \n> The founders decided to document YugabyteDB’s SQL functionality explicitly\n> rather than just to point to the published PostgreSQL doc. (There are some DDL\n> differences that reflect the storage layer differences.) I’m presently\n> documenting date-time functionality. This is why I’m so focused on\n> understanding the semantics exactly and on understanding the requirements that\n> the functionality was designed to meet. I’m struggling with interval\n> functionality. I read this:\n\n[Sorry, also moved this to hackers. I might normally do all the\ndiscussion on general, with patches, and then move it to hackers, but\nour PG 14 deadline is next week, so it is best to move it now in hopes\nit can be addressed in PG 14.]\n\nSure, seems like a good idea.\n\n> https://www.postgresql.org/docs/current/datatype-datetime.html#\n> DATATYPE-INTERVAL-INPUT\n> \n> « …field values can have fractional parts; for example '1.5 week' or\n> '01:02:03.45'. Such input is converted to the appropriate number of months,\n> days, and seconds for storage. When this would result in a fractional number of\n> months or days, the fraction is added to the lower-order fields using the\n> conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5\n> month' becomes 1 month and 15 days. Only seconds will ever be shown as\n> fractional on output. »\n> \n> Notice that the doc says that spill-down goes all the way to seconds and not\n> just one unit. 
This simple test is consistent with the doc (output follows the\n> dash-dash comment):\n> \n> select ('6.54321 months'::interval)::text as i; -- 6 mons 16 days 07:06:40.32\n> \n> You see similar spill-down with this:\n> \n> select ('876.54321 days'::interval)::text as i; -- 876 days 13:02:13.344\n> \n> And so on down through the remaining smaller units. It’s only this test that\n> doesn’t spill down one unit:\n> \n> select ('6.54321 years'::interval)::text as i; -- 6 years 6 mons\n> \n> This does suggest a straight bug rather than a case for committee debate about\n> what might have been intended. What do you think, Bruce?\n\nSo, that gets into more detail. When I said \"spill down one unit\", I\nwas not talking about _visible_ units, but rather the three internal\nunits used by Postgres:\n\n\thttps://www.postgresql.org/docs/13/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n\tInternally interval values are stored as months, days, and seconds.\n\t -------------------------\n\nHowever, while that explains why years don't spill beyond months, it\ndoesn't explain why months would spill beyond days. This certainly\nseems inconsistent.\n\nI have modified the patch to prevent partial months from creating\npartial hours/minutes/seconds, so the output is now at least consistent\nbased on the three units:\n\n\tSELECT ('6.54321 years'::interval)::text as i;\n\t i\n\t----------------\n\t 6 years 7 mons\n\t\n\tSELECT ('6.54321 months'::interval)::text as i;\n\t i\n\t----------------\n\t 6 mons 16 days\n\t\n\tSELECT ('876.54321 days'::interval)::text as i;\n\t i\n\t-----------------------\n\t 876 days 13:02:13.344\n\nPartial years keeps it in months, partial months takes it to days, and\npartial days take it to hours/minutes/seconds. 
This seems like an\nimprovement.\n\nThis also changes the regression test output, I think for the better:\n\n\t SELECT INTERVAL '1.5 weeks';\n\t i\n\t ------------------\n\t- 10 days 12:00:00\n\t+ 10 days\n\nThe new output is less precise, but probably closer to what the user\nwanted.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 2 Apr 2021 19:47:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 01:20:26PM -0700, Ken Tanzer wrote:\n> On Fri, Apr 2, 2021 at 11:06 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> Thread moved to hackers, with a patch.\n> ---------------------------------------------------------------------------\n> \n> \n> \n> Here is a link to that thread, for others who might be curious about it as I\n> was:\n> https://www.postgresql.org/message-id/flat/20210402180549.GF9270%40momjian.us#\n> b3bdafbfeacab0dd8967ff2a3ebf7844\n> \n> I get why it can make sense to move a thread. But if when doing so you post a\n> link to the new thread, that would be appreciated. Thanks!\n\nI didn't think anyone but the original poster, who was copied in the new\nthread, would really care about this thread, but it seems I was wrong.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 19:48:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 01:27:33PM -0700, Zhihong Yu wrote:\n> Bruce:\n> Thanks for tackling this issue.\n> \n> The patch looks good to me.\n> When you have time, can you include the places which were not covered by the\n> current diff ?\n\nI have just posted a new version of the patch which I think covers all\nthe right areas.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 19:49:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 02:00:03PM -0700, John W Higgins wrote:\n> On Fri, Apr 2, 2021 at 11:05 AM Bruce Momjian <bruce@momjian.us> wrote:\n> While maybe there is an argument to fixing the negative/positive rounding issue\n> - there is no way this gets solved without breaking the current implementation\n> \n> select interval '0.3 years' + interval '0.4 years' - interval '0.7 years' +\n> interval '0.1 years' should not equal 0 but it certainly does.\n\nMy new code returns 0.2 months for this, not zero:\n\n\tSELECT interval '0.3 years' + interval '0.4 years' -\n\t\tinterval '0.7 years' + interval '0.1 years';\n\t ?column?\n\t----------\n\t 2 mons\n\nwhich is also wrong since:\n\n\tSELECT interval '0.1 years';\n\t interval\n\t----------\n\t 1 mon\n\n> Unless we take the concept of 0.3 years = 3 months and move to something along\n> the lines of \n> \n> 1 year = 360 days\n> 1 month = 30 days \n> \n> so therefore \n> \n> 0.3 years = 360 days * 0.3 = 108 days = 3 months 18 days \n> 0.4 years = 360 days * 0.4 = 144 days = 4 months 24 days\n> 0.7 years = 360 days * 0.7 = 252 days = 8 months 12 days\n> \n> Then, and only if we don't go to any more than tenths of a year, does the math\n> work. Probably this should resolve down to seconds and then work backwards -\n> but unless we're looking at breaking the entire way it currently resolves\n> things - I don't think this is of much value.\n> \n> Doing math on intervals is like doing math on rounded numbers - there is always\n> going to be a pile of issues because the level of precision just is not good\n> enough.\n\nI think the big question is what units do people want with fractional\nvalues. I have posted a follow-up email that spills only for one unit,\nwhich I think is the best approach.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 19:58:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Hi,\nbq. My new code returns 0.2 months for this, not zero\n\nCan you clarify (the output below that was 2 mons, not 0.2) ?\n\nThanks\n\nOn Fri, Apr 2, 2021 at 4:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Apr 2, 2021 at 02:00:03PM -0700, John W Higgins wrote:\n> > On Fri, Apr 2, 2021 at 11:05 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > While maybe there is an argument to fixing the negative/positive\n> rounding issue\n> > - there is no way this gets solved without breaking the current\n> implementation\n> >\n> > select interval '0.3 years' + interval '0.4 years' - interval '0.7\n> years' +\n> > interval '0.1 years' should not equal 0 but it certainly does.\n>\n> My new code returns 0.2 months for this, not zero:\n>\n> SELECT interval '0.3 years' + interval '0.4 years' -\n> interval '0.7 years' + interval '0.1 years';\n> ?column?\n> ----------\n> 2 mons\n>\n> which is also wrong since:\n>\n> SELECT interval '0.1 years';\n> interval\n> ----------\n> 1 mon\n>\n> > Unless we take the concept of 0.3 years = 3 months and move to something\n> along\n> > the lines of\n> >\n> > 1 year = 360 days\n> > 1 month = 30 days\n> >\n> > so therefore\n> >\n> > 0.3 years = 360 days * 0.3 = 108 days = 3 months 18 days\n> > 0.4 years = 360 days * 0.4 = 144 days = 4 months 24 days\n> > 0.7 years = 360 days * 0.7 = 252 days = 8 months 12 days\n> >\n> > Then, and only if we don't go to any more than tenths of a year, does\n> the math\n> > work. Probably this should resolve down to seconds and then work\n> backwards -\n> > but unless we're looking at breaking the entire way it currently resolves\n> > things - I don't think this is of much value.\n> >\n> > Doing math on intervals is like doing math on rounded numbers - there is\n> always\n> > going to be a pile of issues because the level of precision just is not\n> good\n> > enough.\n>\n> I think the big question is what units do people want with fractional\n> values. 
I have posted a follow-up email that spills only for one unit,\n> which I think is the best approach.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n>\n>",
"msg_date": "Fri, 2 Apr 2021 17:07:27 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 07:47:32PM -0400, Bruce Momjian wrote:\n> I have modified the patch to prevent partial months from creating\n> partial hours/minutes/seconds, so the output is now at least consistent\n> based on the three units:\n> \n> \tSELECT ('6.54321 years'::interval)::text as i;\n> \t i\n> \t----------------\n> \t 6 years 7 mons\n> \t\n> \tSELECT ('6.54321 months'::interval)::text as i;\n> \t i\n> \t----------------\n> \t 6 mons 16 days\n> \t\n> \tSELECT ('876.54321 days'::interval)::text as i;\n> \t i\n> \t-----------------------\n> \t 876 days 13:02:13.344\n> \n> Partial years keeps it in months, partial months takes it to days, and\n> partial days take it to hours/minutes/seconds. This seems like an\n> improvement.\n> \n> This also changes the regression test output, I think for the better:\n> \n> \t SELECT INTERVAL '1.5 weeks';\n> \t i\n> \t ------------------\n> \t- 10 days 12:00:00\n> \t+ 10 days\n> \n> The new output is less precise, but probably closer to what the user\n> wanted.\n\nThinking some more about this, the connection between months and days is\nvery inaccurate, 30 days/month, but the connection between days and\nhours/minutes/seconds is pretty accurate, except for leap days. \nTherefore, returning \"10 days 12:00:00\" is in many ways better, but\nreturning hours/minutes/seconds for fractional months is very arbitrary\nand suggests an accuracy that doesn't exist. However, I am afraid that\ntrying to enforce that distinction in the Postgres behavior would appear\nvery arbitrary, so what I did above is probably the best I can do. 
\nHere is another example of what we have:\n\n\tSELECT INTERVAL '1.5 years';\n\t interval\n\t---------------\n\t 1 year 6 mons\n\t\n\tSELECT INTERVAL '1.5 months';\n\t interval\n\t---------------\n\t 1 mon 15 days\n\t\n\tSELECT INTERVAL '1.5 weeks';\n\t interval\n\t----------\n\t 10 days\n\t\n\tSELECT INTERVAL '1.5 days';\n\t interval\n\t----------------\n\t 1 day 12:00:00\n\t\n\tSELECT INTERVAL '1.5 hours';\n\t interval\n\t----------\n\t 01:30:00\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 20:36:27 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "bruce@momjian.us wrote:\n\n> I have just posted a new version of the patch which I think covers all the right areas.\n\nI found the relevant email from you to pgsql-hackers here:\n\nhttps://www.postgresql.org/message-id/20210402234732.GA29125%40momjian.us\n\nYou said:\n\n> I have modified the patch to prevent partial months from creating partial hours/minutes/seconds… Partial years keeps it in months, partial months takes it to days, and partial days take it to hours/minutes/seconds. This seems like an improvement.\n\nI have written some PL/pgSQL code that faithfully emulates the behavior that I see in my present vanilla PostgreSQL Version 13.2 system in a wide range of tests. This is the key part:\n\n m1 int not null := trunc(p.mo);\n m_remainder numeric not null := p.mo - m1::numeric;\n m int not null := trunc(p.yy*12) + m1;\n\n d_real numeric not null := p.dd + m_remainder*30.0;\n d int not null := floor(d_real);\n d_remainder numeric not null := d_real - d::numeric;\n\n s numeric not null := d_remainder*24.0*60.0*60.0 +\n p.hh*60.0*60.0 +\n p.mi*60.0 +\n p.ss;\nbegin\n return (m, d, s)::modeled_interval_t;\nend;\n\nThese quantities:\n\np.yy, p.mo, p.dd, p.hh, p.mi, and p.ss\n\nare the user’s parameterization. All are real numbers. Because non-integral values for years, months, days, hours, and minutes are allowed when you specify a value using the ::interval typecast, my reference doc must state the rules. I would have struggled to express these rules in prose—especially given the use both of trunc() and floor(). I would have struggled more to explain what requirements these rules meet.\n\nI gather that by the time YugabyteDB has adopted your patch, my PL/pgSQL will no longer be a correct emulation. So I’ll re-write it then.\n\nI intend to advise users always to constrain the values, when they specify an interval value explicitly, so that the years, months, days, hours, and minutes are integers. 
This is, after all, the discipline that the make_interval() built-in imposes. So I might recommend using only that.",
"msg_date": "Fri, 2 Apr 2021 17:50:59 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 05:50:59PM -0700, Bryn Llewellyn wrote:\n> are the user’s parameterization. All are real numbers. Because non-integral\n> values for years, months, days, hours, and minutes are allowed when you specify\n> a value using the ::interval typecast, my reference doc must state the rules. I\n> would have struggled to express these rules in prose—especially given the use\n> both of trunc() and floor(). I would have struggled more to explain what\n> requirements these rules meet.\n\nThe fundamental issue is that while months, days, and seconds are\nconsistent in their own units, when you have to cross from one unit to\nanother, it is by definition imprecise, since the interval is not tied\nto a specific date, with its own days-of-the-month and leap days and\ndaylight savings time changes. It feels like it is going to be\nimprecise no matter what we do.\n\nAdding to this is the fact that interval values are stored in C 'struct\ntm' defined in libc's ctime(), where months are integers, so carrying\naround non-integer month values until we get a final result would add a\nlot of complexity, and complexity to a system that is by definition\nimprecise, which doesn't seem worth it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 21:02:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Hi,\nI got a local build with second patch where:\n\nyugabyte=# SELECT interval '0.3 years' + interval '0.4 years' -\n interval '0.7 years';\n ?column?\n----------\n 1 mon\n\nI think the outcome is a bit unintuitive (I would expect result close to 0).\n\nCheers\n\nOn Fri, Apr 2, 2021 at 5:07 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> bq. My new code returns 0.2 months for this, not zero\n>\n> Can you clarify (the output below that was 2 mons, not 0.2) ?\n>\n> Thanks\n>\n> On Fri, Apr 2, 2021 at 4:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Fri, Apr 2, 2021 at 02:00:03PM -0700, John W Higgins wrote:\n>> > On Fri, Apr 2, 2021 at 11:05 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> > While maybe there is an argument to fixing the negative/positive\n>> rounding issue\n>> > - there is no way this gets solved without breaking the current\n>> implementation\n>> >\n>> > select interval '0.3 years' + interval '0.4 years' - interval '0.7\n>> years' +\n>> > interval '0.1 years' should not equal 0 but it certainly does.\n>>\n>> My new code returns 0.2 months for this, not zero:\n>>\n>> SELECT interval '0.3 years' + interval '0.4 years' -\n>> interval '0.7 years' + interval '0.1 years';\n>> ?column?\n>> ----------\n>> 2 mons\n>>\n>> which is also wrong since:\n>>\n>> SELECT interval '0.1 years';\n>> interval\n>> ----------\n>> 1 mon\n>>\n>> > Unless we take the concept of 0.3 years = 3 months and move to\n>> something along\n>> > the lines of\n>> >\n>> > 1 year = 360 days\n>> > 1 month = 30 days\n>> >\n>> > so therefore\n>> >\n>> > 0.3 years = 360 days * 0.3 = 108 days = 3 months 18 days\n>> > 0.4 years = 360 days * 0.4 = 144 days = 4 months 24 days\n>> > 0.7 years = 360 days * 0.7 = 252 days = 8 months 12 days\n>> >\n>> > Then, and only if we don't go to any more than tenths of a year, does\n>> the math\n>> > work. 
Probably this should resolve down to seconds and then work\n>> backwards -\n>> > but unless we're looking at breaking the entire way it currently\n>> resolves\n>> > things - I don't think this is of much value.\n>> >\n>> > Doing math on intervals is like doing math on rounded numbers - there\n>> is always\n>> > going to be a pile of issues because the level of precision just is not\n>> good\n>> > enough.\n>>\n>> I think the big question is what units do people want with fractional\n>> values. I have posted a follow-up email that spills only for one unit,\n>> which I think is the best approach.\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>>\n>>\n>>\n",
"msg_date": "Fri, 2 Apr 2021 18:11:08 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, 2 Apr 2021 at 21:08, Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> I got a local build with second patch where:\n>\n> yugabyte=# SELECT interval '0.3 years' + interval '0.4 years' -\n> interval '0.7 years';\n> ?column?\n> ----------\n> 1 mon\n>\n> I think the outcome is a bit unintuitive (I would expect result close to\n> 0).\n>\n\nThat's not fundamentally different from this:\n\nodyssey=> select 12 * 3/10 + 12 * 4/10 - 12 * 7/10;\n ?column?\n----------\n -1\n(1 row)\n\nodyssey=>\n\nAnd actually the result is pretty close to 0. I mean it’s less than 0.1\nyear.\n\nI wonder if it might have been better if only integers had been accepted\nfor the components? If you want 0.3 years write 0.3 * '1 year'::interval.\nBut changing it now would be a pretty significant backwards compatibility\nbreak.\n\nThere's really no avoiding counterintuitive behaviour though. Look at this:\n\nodyssey=> select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval - 0.7\n* '1 year'::interval;\n ?column?\n------------------\n -1 mons +30 days\n(1 row)\n\nodyssey=> select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval - 0.7\n* '1 year'::interval = '0';\n ?column?\n----------\n t\n(1 row)\n\nodyssey=>\n\nIn other words, doing the “same” calculation but with multiplying 1 year\nintervals by floats to get the values to add, you end up with an interval\nthat while not identical to 0 does compare equal to 0. So very close to 0;\nin fact, as close to 0 as you can get without actually being identically 0.\n",
"msg_date": "Fri, 2 Apr 2021 21:23:51 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Hi,\nThe mix of interval and comparison with float is not easy to interpret. See\nthe following (I got 0.0833 since the result for interval '0.3 years' +\ninterval '0.4 years' - ... query was 1 month and 1/12 ~= 0.0833).\n\nyugabyte=# select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval - 0.7\n* '1 year'::interval = '0.0833 year'::interval;\n ?column?\n----------\n f\n\nAs long as Bruce's patch makes improvements over the current behavior, I\nthink that's fine.\n\nCheers\n\nOn Fri, Apr 2, 2021 at 6:24 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Fri, 2 Apr 2021 at 21:08, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>> Hi,\n>> I got a local build with second patch where:\n>>\n>> yugabyte=# SELECT interval '0.3 years' + interval '0.4 years' -\n>> interval '0.7 years';\n>> ?column?\n>> ----------\n>> 1 mon\n>>\n>> I think the outcome is a bit unintuitive (I would expect result close to\n>> 0).\n>>\n>\n> That's not fundamentally different from this:\n>\n> odyssey=> select 12 * 3/10 + 12 * 4/10 - 12 * 7/10;\n> ?column?\n> ----------\n> -1\n> (1 row)\n>\n> odyssey=>\n>\n> And actually the result is pretty close to 0. I mean it’s less than 0.1\n> year.\n>\n> I wonder if it might have been better if only integers had been accepted\n> for the components? If you want 0.3 years write 0.3 * '1 year'::interval.\n> But changing it now would be a pretty significant backwards compatibility\n> break.\n>\n> There's really no avoiding counterintuitive behaviour though. 
Look at this:\n>\n> odyssey=> select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval - 0.7\n> * '1 year'::interval;\n> ?column?\n> ------------------\n> -1 mons +30 days\n> (1 row)\n>\n> odyssey=> select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval - 0.7\n> * '1 year'::interval = '0';\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> odyssey=>\n>\n> In other words, doing the “same” calculation but with multiplying 1 year\n> intervals by floats to get the values to add, you end up with an interval\n> that while not identical to 0 does compare equal to 0. So very close to 0;\n> in fact, as close to 0 as you can get without actually being identically 0.\n>\n",
"msg_date": "Fri, 2 Apr 2021 19:06:08 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 06:11:08PM -0700, Zhihong Yu wrote:\n> Hi,\n> I got a local build with second patch where:\n> \n> yugabyte=# SELECT interval '0.3 years' + interval '0.4 years' -\n>                 interval '0.7 years';\n>  ?column?\n> ----------\n>  1 mon\n> \n> I think the outcome is a bit unintuitive (I would expect result close to 0).\n\nUh, the current code returns:\n\n\tSELECT interval '0.3 years' + interval '0.4 years' - interval '0.7 years';\n\t ?column?\n\t----------\n\t -1 mon\n\nand with the patch it is:\n\t\n\tSELECT interval '0.3 years' + interval '0.4 years' - interval '0.7 years';\n\t ?column?\n\t----------\n\t 1 mon\n\nWhat it isn't, is zero months, which is the obviously ideal answer.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 22:19:05 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 07:06:08PM -0700, Zhihong Yu wrote:\n> Hi,\n> The mix of interval and comparison with float is not easy to interpret. See the\n> following (I got 0.0833 since the result for interval '0.3 years' + interval\n> '0.4 years' - ... query was 1 month and 1/12 ~= 0.0833).\n> \n> yugabyte=# select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval - 0.7 *\n> '1 year'::interval = '0.0833 year'::interval;\n>  ?column?\n> ----------\n>  f\n> \n> As long as Bruce's patch makes improvements over the current behavior, I think\n> that's fine.\n\nI wish I could figure out how to improve it any further. What is odd is\nthat I have never seen this reported as a problem before. I plan to\napply this early next week for PG 14.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 22:21:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bruce:\nIn src/interfaces/ecpg/pgtypeslib/interval.c, how about the following\nplaces ?\n\nAround line 158:\n                case 'Y':\n                    tm->tm_year += val;\n                    tm->tm_mon += (fval * MONTHS_PER_YEAR);\n\nAround line 194:\n                    tm->tm_year += val;\n                    tm->tm_mon += (fval * MONTHS_PER_YEAR);\n\nIs rint() needed for these two cases ?\n\nCheers\n\nOn Fri, Apr 2, 2021 at 7:21 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Apr 2, 2021 at 07:06:08PM -0700, Zhihong Yu wrote:\n> > Hi,\n> > The mix of interval and comparison with float is not easy to interpret.\n> See the\n> > following (I got 0.0833 since the result for interval '0.3 years' +\n> interval\n> > '0.4 years' - ... query was 1 month and 1/12 ~= 0.0833).\n> >\n> > yugabyte=# select 0.3 * '1 year'::interval + 0.4 * '1 year'::interval -\n> 0.7 *\n> > '1 year'::interval = '0.0833 year'::interval;\n> > ?column?\n> > ----------\n> > f\n> >\n> > As long as Bruce's patch makes improvements over the current behavior, I\n> think\n> > that's fine.\n>\n> I wish I could figure out how to improve it any futher. What is odd is\n> that I have never seen this reported as a problem before. I plan to\n> apply this early next week for PG 14.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n",
"msg_date": "Fri, 2 Apr 2021 19:53:35 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 07:53:35PM -0700, Zhihong Yu wrote:\n> Bruce:\n> In src/interfaces/ecpg/pgtypeslib/interval.c, how about the following places ?\n> \n> Around line 158:\n>                 case 'Y':\n>                     tm->tm_year += val;\n>                     tm->tm_mon += (fval * MONTHS_PER_YEAR);\n> \n> Around line 194:\n>                     tm->tm_year += val;\n>                     tm->tm_mon += (fval * MONTHS_PER_YEAR);\n> \n> Is rint() needed for these two cases ?\n\nAh, yes, good point. Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 2 Apr 2021 23:00:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 10:21:26PM -0400, Bruce Momjian wrote:\n> I wish I could figure out how to improve it any futher. What is odd is\n> that I have never seen this reported as a problem before. I plan to\n> apply this early next week for PG 14.\n\nOn Fri, Apr 02, 2021 at 01:05:42PM -0700, Bryn Llewellyn wrote:\n> bruce@momjian.us wrote:\n> > Yes, looking at the code, it seems we only spill down to one unit, not more. I think we need to have a discussion if we want to change that. \n\nIf this is a bug, then there's no deadline - and if you're proposing a behavior\nchange, then I don't think it's a good time to begin the discussion.\n\n> https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n\nYour patch changes what seems to be the intended behavior, with the test case\nadded by:\n\n|commit 57bfb27e60055c10e42b87e68a894720c2f78e70\n|Author: Tom Lane <tgl@sss.pgh.pa.us>\n|Date: Mon Sep 4 01:26:28 2006 +0000\n|\n| Fix interval input parser so that fractional weeks and months are\n| cascaded first to days and only what is leftover into seconds. 
This\n\nAnd documented by:\n\n|commit dbf57d31f8d7bf5c058a9fab2a1ccb4a336864ce\n|Author: Tom Lane <tgl@sss.pgh.pa.us>\n|Date: Sun Nov 9 17:09:48 2008 +0000\n|\n| Add some documentation about handling of fractions in interval input.\n| (It's always worked like this, but we never documented it before.)\n\nIf you were to change the behavior, I think you'd have to update the\ndocumentation, too - but I think that's not a desirable change.\n\nI *am* curious why the YEAR, DECADE, CENTURY, AND MILLENIUM cases only handle\nfractional intervals down to the next smaller unit, and not down to\nseconds/milliseconds. I wrote a patch to handle that by adding\nAdjustFractMons(), if we agree that it's desirable to change.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Apr 2021 11:33:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n> On Fri, Apr 02, 2021 at 10:21:26PM -0400, Bruce Momjian wrote:\n> > I wish I could figure out how to improve it any futher. What is odd is\n> > that I have never seen this reported as a problem before. I plan to\n> > apply this early next week for PG 14.\n> \n> On Fri, Apr 02, 2021 at 01:05:42PM -0700, Bryn Llewellyn wrote:\n> > bruce@momjian.us wrote:\n> > > Yes, looking at the code, it seems we only spill down to one unit, not more. I think we need to have a discussion if we want to change that. \n> \n> If this is a bug, then there's no deadline - and if you're proposing a behavior\n> change, then I don't think it's a good time to begin the discussion.\n\nWell, bug or not, we are not going to change back branches for this, and\nif you want a larger discussion, it will have to wait for PG 15.\n\n> > https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> > « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n\nI see that. What is not clear here is how far we flow down. I was\nlooking at adding documentation or regression tests for that, but was\nunsure. 
I adjusted the docs slightly in the attached patch.\n\n> Your patch changes what seems to be the intended behavior, with the test case\n> added by:\n> \n> |commit 57bfb27e60055c10e42b87e68a894720c2f78e70\n> |Author: Tom Lane <tgl@sss.pgh.pa.us>\n> |Date: Mon Sep 4 01:26:28 2006 +0000\n> |\n> | Fix interval input parser so that fractional weeks and months are\n> | cascaded first to days and only what is leftover into seconds. This\n> \n> And documented by:\n> \n> |commit dbf57d31f8d7bf5c058a9fab2a1ccb4a336864ce\n> |Author: Tom Lane <tgl@sss.pgh.pa.us>\n> |Date: Sun Nov 9 17:09:48 2008 +0000\n> |\n> | Add some documentation about handling of fractions in interval input.\n> | (It's always worked like this, but we never documented it before.)\n> \n> If you were to change the behavior, I think you'd have to update the\n> documentation, too - but I think that's not a desirable change.\n\n> I *am* curious why the YEAR, DECADE, CENTURY, AND MILLENIUM cases only handle\n> fractional intervals down to the next smaller unit, and not down to\n> seconds/milliseconds. I wrote a patch to handle that by adding\n> AdjustFractMons(), if we agree that it's desirable to change.\n\nThe interaction of months/days/seconds is so imprecise that passing it\nfurther down doesn't make much sense, and suggests a precision that\ndoesn't exist, but if people prefer that we can do it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Mon, 5 Apr 2021 14:01:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Apr 05, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n> On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n> > > https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> > > « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n> \n> I see that. What is not clear here is how far we flow down. I was\n> looking at adding documentation or regression tests for that, but was\n> unsure. I adjusted the docs slightly in the attached patch.\n\nI should have adjusted the quote to include context:\n\n| In the verbose input format, and in SOME FIELDS of the more compact input formats, field values can have fractional parts[...]\n\nI don't know what \"some fields\" means - more clarity here would help indicate\nthe intended behavior.\n\n> The interaction of months/days/seconds is so imprecise that passing it\n> futher down doesn't make much sense, and suggests a precision that\n> doesn't exist, but if people prefer that we can do it.\n\nI agree on its face that \"months\" is imprecise (30, 31, 27, 28 days),\nespecially fractional months, and same for \"years\" (leap years), and hours per\nday (DST), but even minutes (\"leap seconds\"). But the documentation seems to\nbe clear about the behavior:\n\n| .. using the conversion factors 1 month = 30 days and 1 day = 24 hours\n\nI think the most obvious/consistent change is for years and greater to \"cascade\ndown\" to seconds, and not just months.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Apr 2021 13:15:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 01:15:22PM -0500, Justin Pryzby wrote:\n> On Mon, Apr 05, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n> > On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n> > > > https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> > > > « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n> > \n> > I see that. What is not clear here is how far we flow down. I was\n> > looking at adding documentation or regression tests for that, but was\n> > unsure. I adjusted the docs slightly in the attached patch.\n> \n> I should have adjusted the quote to include context:\n> \n> | In the verbose input format, and in SOME FIELDS of the more compact input formats, field values can have fractional parts[...]\n> \n> I don't know what \"some fields\" means - more clarity here would help indicate\n> the intended behavior.\n\nI assume it is comparing the verbose format to the ISO 8601 time\nintervals format, which I have not looked at. 
Interesting, I see this as\na C comment at the top of DecodeISO8601Interval();\n\n\t * A couple exceptions from the spec:\n\t * - a week field ('W') may coexist with other units\n-->\t * - allows decimals in fields other than the least significant unit.\n\nI don't actually see anything in our code that doesn't support fractional\nvalues, so maybe the docs are wrong and need to be fixed.\n\nActually, according to our regression tests, this fails:\n\n\tSELECT '5.5 seconds 3 milliseconds'::interval;\n\tERROR: invalid input syntax for type interval: \"5.5 seconds 3 milliseconds\"\n\nbut that is the verbose format, I think.\n\n> > The interaction of months/days/seconds is so imprecise that passing it\n> > futher down doesn't make much sense, and suggests a precision that\n> > doesn't exist, but if people prefer that we can do it.\n> \n> I agree on its face that \"months\" is imprecise (30, 31, 27, 28 days),\n> especially fractional months, and same for \"years\" (leap years), and hours per\n> day (DST), but even minutes (\"leap seconds\"). But the documentation seems to\n> be clear about the behavior:\n> \n> | .. using the conversion factors 1 month = 30 days and 1 day = 24 hours\n> \n> I think the most obvious/consistent change is for years and greater to \"cascade\n> down\" to seconds, and not just months.\n\nWow, well, that is _an_ option. Would people like that? It is certainly\neasier to explain.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 14:37:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 05-Apr-2021, at 11:37, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Mon, Apr 5, 2021 at 01:15:22PM -0500, Justin Pryzby wrote:\n>> On Mon, Apr 05, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n>>> On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n>>>>> https://www.google.com/url?q=https://www.postgresql.org/docs/current/datatype-datetime.html%23DATATYPE-INTERVAL-INPUT&source=gmail-imap&ust=1618252677000000&usg=AOvVaw34LnV9DlK4pcYY5NJGQe-m\n>>>>> « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n>>> \n>>> I see that. What is not clear here is how far we flow down. I was\n>>> looking at adding documentation or regression tests for that, but was\n>>> unsure. I adjusted the docs slightly in the attached patch.\n>> \n>> I should have adjusted the quote to include context:\n>> \n>> | In the verbose input format, and in SOME FIELDS of the more compact input formats, field values can have fractional parts[...]\n>> \n>> I don't know what \"some fields\" means - more clarity here would help indicate\n>> the intended behavior.\n> \n> I assume it is comparing the verbose format to the ISO 8601 time\n> intervals format, which I have not looked at. 
Interesting I see this as\n> a C comment at the top of DecodeISO8601Interval();\n> \n> \t * A couple exceptions from the spec:\n> \t * - a week field ('W') may coexist with other units\n> -->\t * - allows decimals in fields other than the least significant unit.\n> \n> I don't actually see anything in our code that doesn't support factional\n> values, so maybe the docs are wrong and need to be fixed.\n> \n> Actually, according to our regression tests, this fails:\n> \n> \tSELECT '5.5 seconds 3 milliseconds'::interval;\n> \tERROR: invalid input syntax for type interval: \"5.5 seconds 3 milliseconds\"\n> \n> but that is the verbose format, I think.\n> \n>>> The interaction of months/days/seconds is so imprecise that passing it\n>>> futher down doesn't make much sense, and suggests a precision that\n>>> doesn't exist, but if people prefer that we can do it.\n>> \n>> I agree on its face that \"months\" is imprecise (30, 31, 27, 28 days),\n>> especially fractional months, and same for \"years\" (leap years), and hours per\n>> day (DST), but even minutes (\"leap seconds\"). But the documentation seems to\n>> be clear about the behavior:\n>> \n>> | .. using the conversion factors 1 month = 30 days and 1 day = 24 hours\n>> \n>> I think the most obvious/consistent change is for years and greater to \"cascade\n>> down\" to seconds, and not just months.\n> \n> Wow, well, that is _an_ option. Would people like that? It is certainly\n> easier to explain.\n\nIt seems to me that this whole business is an irrevocable mess. The original design could have brought three overload-distinguishable types, \"interval month\", \"interval day\", and \"interval second\"—each represented internally as a scalar. There could have been built-ins to convert between them using conventionally specified rules. Then interval arithmetic would have been clear. 
For example, an attempt to assign the difference between two timestamps to anything but \"interval second\" would cause an error (as it does in Oracle database, even though there are only two interval kinds). But we can only deal with what we have and accept the fact that the doc will inevitably be tortuous.\n\nGiven this, I agree that fractional years should simply convert to fractional months (to be then added to verbatim-given fractional months) just before representing the months as the trunc() of the value and cascading the remainder down to days. Units like century would fall out naturally in the same way.\n\n\nABOUT LEAP SECONDS\n\nLook at this (from Feb 2005):\n\n«\nPostgreSQL does not support leap seconds\nhttps://www.postgresql.org/message-id/1162319515.20050202141132@mail.ru\n»\n\nI don't know if the title reports a state of affairs in the hope that this be changed to bring such support—or whether it simply states what obtains and always will. Anyway, a simple test (below) shows that PG Version 13.2 doesn't honor leap seconds.\n\nDETAIL\n\nFirst, it helps me to demonstrate, using leap years, that this is a base phenomenon of the proleptic Gregorian calendar that PG uses—and has nothing to do with time zones. (If it did, then the leap year notion could be time zone dependent.) Do this:\n\nselect\n '2020-02-29'::date as \"date\",\n '2020-02-29 23:59:59.99999'::timestamp as \"plain timestamp\";\n\nThis is the result:\n\n date | plain timestamp \n------------+---------------------------\n 2020-02-29 | 2020-02-29 23:59:59.99999\n\nChanging the year to 2021 brings the 22008 error \"date/time field value out of range\". (Of course, you have to split the test into two pieces to be sure that you get the same error with both data types.)\n\nThis suggests a test that uses '23:59:60.000000' for the time. 
However, try this first:\n\nselect\n '23:59:60.000000'::time as \"time\",\n '2021-04-05 23:59:60.000000'::timestamp as \"plain timestamp\";\n\n time | plain timestamp \n----------+---------------------\n 24:00:00 | 2021-04-06 00:00:00\n\nThis is annoying. It reflects what seems to me to be an unfortunate design choice. Anyway, this behavior will never change. But it means that a precise discussion needs more words than had one minute been taken as a closed-open interval—[0,60) seconds—(with 59.99999 legal and 60.000000 illegal). It's too boring to type all those words here. Just do this:\n\nselect '2021-04-05 23:59:60.5'::timestamp as \"plain timestamp\";\n\nThis is the result:\n\n plain timestamp \n-----------------------\n 2021-04-06 00:00:00.5\n\nOf course, there was no leap second (on the planet—never mind databases) at this moment. The most recent leap second was 2016-12-31 at 23:59:60 (UTC), meaning that '60.000000' through '60.999999' were all meaningful times on 31-Dec that year. So try this:\n\nselect '2016-12-31 23:59:60.5'::timestamp as \"should be leap second\";\n\nThis is the result:\n\n should be leap second \n-----------------------\n 2017-01-01 00:00:00.5\n\nThis tells me that the subject line of the email from 2005 remains correct: PostgreSQL does not support leap seconds. Given this, we can safely say that one minute is exactly 60 seconds (and that one hour is exactly 60 minutes) and never mention leap seconds ever again. I assume that it's this that must have informed the decision to represent an interval value as the three fields months, days, and seconds.\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 13:06:36 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
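(An editorial aside, not part of the thread: Bryn's leap-second observation above is not specific to PostgreSQL. Python's standard `datetime` module, for instance, also models a minute as exactly 60 seconds, though it rejects a literal 60th second outright rather than rolling it over to the next minute the way the PostgreSQL casts shown above do. A minimal illustration:)

```python
from datetime import time

# Python's datetime has no leap-second support either: the second field
# must lie in the closed-open interval [0, 60), so second=60 is rejected.
try:
    time(23, 59, 60)
except ValueError as err:
    print("rejected:", err)

# 59.999999 seconds is the last representable instant of any minute.
print(time(23, 59, 59, 999999))
```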
{
"msg_contents": "On Mon, Apr 5, 2021 at 01:06:36PM -0700, Bryn Llewellyn wrote:\n> > On 05-Apr-2021, at 11:37, Bruce Momjian <bruce@momjian.us> wrote On:\n> > Mon, Apr 5, 2021 at 01:15:22PM -0500, Justin Pryzby wrote :\n>\n> It seems to me that this whole business is an irrevocable mess. The\n> original design could have brought three overload-distinguishable\n> types, \"interval month\", \"interval day\", and \"interval second\"—each\n> represented internally as a scalar. There could have been built-ins\n> to convert between them using conventionally specified rules. Then\n> interval arithmetic would have been clear. For example, an attempt to\n> assign the difference between two timestamps to anything but \"interval\n> second\" would cause an error (as it does in Oracle database, even\n> though there there are only two interval kinds). But we can only deal\n> with what we have and accept the fact that the doc will inevitably be\n> tortuous.\n\nThe problem with making three data types is that someone is going to\nwant to use a mixture of those, so I am not sure it is a win.\n\n> Givea this, I agree that fractional years should simply convert to\n> fractional months (to be then added to verbetim-given fractional\n> months) just before representing the months as the trunc() of the\n> value and cascading the remainder down to days. Units like century\n> would fall out naturally in the same way.\n\nI am confused --- are you saying we should do the interval addition,\nthen truncate, because we don't do that now, and it would be hard to do.\n\n> ABOUT LEAP SECONDS\n>\n> Look at this (from Feb 2005):\n>\n> « PostgreSQL does not support leap seconds\n> https://www.postgresql.org/message-id/1162319515.20050202141132@mail.r\n> u »\n>\n> I don't know if the title reports a state of affairs in the hope that\n> this be changed to bring such support—or whether it simply states\n> what obtains and always will. 
Anyway, a simple test (below) shows that\n> PG Version 13.2 doesn't honor leap seconds.\n\nPostgres is documented as not supporting leap seconds:\n\n\thttps://www.postgresql.org/docs/13/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n\t\n\ttimezone\n\t\n\t The time zone offset from UTC, measured in seconds. Positive values\n\tcorrespond to time zones east of UTC, negative values to zones west of\n\tUTC. (Technically, PostgreSQL does not use UTC because leap seconds are\n\tnot handled.)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 16:35:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 05-Apr-2021, at 13:35, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Mon, Apr 5, 2021 at 01:06:36PM -0700, Bryn Llewellyn wrote:\n>>> On 05-Apr-2021, at 11:37, Bruce Momjian <bruce@momjian.us> wrote On:\n>>> Mon, Apr 5, 2021 at 01:15:22PM -0500, Justin Pryzby wrote :\n>> \n>> It seems to me that this whole business is an irrevocable mess. The\n>> original design could have brought three overload-distinguishable\n>> types, \"interval month\", \"interval day\", and \"interval second\"—each\n>> represented internally as a scalar. There could have been built-ins\n>> to convert between them using conventionally specified rules. Then\n>> interval arithmetic would have been clear. For example, an attempt to\n>> assign the difference between two timestamps to anything but \"interval\n>> second\" would cause an error (as it does in Oracle database, even\n>> though there there are only two interval kinds). But we can only deal\n>> with what we have and accept the fact that the doc will inevitably be\n>> tortuous.\n> \n> The problem with making three data types is that someone is going to\n> want to use a mixture of those, so I am not sure it is a win.\n> \n>> Givea this, I agree that fractional years should simply convert to\n>> fractional months (to be then added to verbetim-given fractional\n>> months) just before representing the months as the trunc() of the\n>> value and cascading the remainder down to days. 
Units like century\n>> would fall out naturally in the same way.\n> \n> I am confused --- are you saying we should do the interval addition,\n> then truncate, because we don't do that now, and it would be hard to do.\n> \n>> ABOUT LEAP SECONDS\n>> \n>> Look at this (from Feb 2005):\n>> \n>> « PostgreSQL does not support leap seconds\n>> https://www.google.com/url?q=https://www.postgresql.org/message-id/1162319515.20050202141132@mail.r&source=gmail-imap&ust=1618259739000000&usg=AOvVaw0lT0Zz_HDsCrF5HrWCjplE\n>> u »\n>> \n>> I don't know if the title reports a state of affairs in the hope that\n>> this be changed to bring such support—or whether it simply states\n>> what obtains and always will. Anyway, a simple test (below) shows that\n>> PG Version 13.2 doesn't honor leap seconds.\n> \n> Postgres is documented as not supporting leap seconds:\n> \n> \thttps://www.google.com/url?q=https://www.postgresql.org/docs/13/functions-datetime.html%23FUNCTIONS-DATETIME-EXTRACT&source=gmail-imap&ust=1618259739000000&usg=AOvVaw35xJBdHRIsAYVV4pTzs0wR\n> \t\n> \ttimezone\n> \t\n> \t The time zone offset from UTC, measured in seconds. Positive values\n> \tcorrespond to time zones east of UTC, negative values to zones west of\n> \tUTC. (Technically, PostgreSQL does not use UTC because leap seconds are\n> \tnot handled.)\n\nThanks for the “leap seconds not supported” link. Google’s search within site refused to find that for me. (Talk about well hidden.)\n\nAbout “three data [interval] types”: it’s too late anyway. So I’ll say no more.\n\nRe “are you saying we should do the interval addition, then truncate, because we don't do that now, and it would be hard to do.” I wasn’t thinking of interval addition at all. Simply how the three values that make up the internal representation are computed from a specified interval value. Like the PL/pgSQL simulation I showed you in an earlier reply. I can't find that in the archive now. So here it is again. 
Sorry for the repetition.\n\np.yy, p.mo, p.dd, p.hh, p.mi, and p.ss are the inputs\n\nm, d, and s are the internal representation\n\n m1 int not null := trunc(p.mo);\n m_remainder numeric not null := p.mo - m1::numeric;\n m int not null := trunc(p.yy*12) + m1;\n\n d_real numeric not null := p.dd + m_remainder*30.0;\n d int not null := floor(d_real);\n d_remainder numeric not null := d_real - d::numeric;\n\n s numeric not null := d_remainder*24.0*60.0*60.0 +\n p.hh*60.0*60.0 +\n p.mi*60.0 +\n p.ss;\n\nI have a harness to supply years, months, days, hours, minutes, and seconds values (like the literal does them) and to get them back (as \"extract\" gets them) using the actual implementation and my simulation. The two approaches have never disagreed across a wide range of inputs.\n\nThe algorithm that my code shows (esp with both trunc() and floor() in play) is too hard to describe in words.\n\n\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 13:58:23 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
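(Editorial note: Bryn's PL/pgSQL sketch above translates almost line for line into Python. The following transcription is mine, not from the thread; it models the observed pre-patch behavior, not PostgreSQL's actual C code. Note how the fractional part of the years field is discarded by trunc(yy*12), while only the months field's fraction flows into days, and only the days remainder flows into seconds.)

```python
import math

def internal_representation(yy=0.0, mo=0.0, dd=0.0, hh=0.0, mi=0.0, ss=0.0):
    """Python transcription of Bryn's PL/pgSQL simulation of how an interval
    literal's fields map onto the (months, days, seconds) representation."""
    m1 = math.trunc(mo)
    m_remainder = mo - m1
    m = math.trunc(yy * 12) + m1      # fractional years are discarded here

    d_real = dd + m_remainder * 30.0  # 1 month = 30 days
    d = math.floor(d_real)
    d_remainder = d_real - d

    s = d_remainder * 24 * 60 * 60 + hh * 60 * 60 + mi * 60 + ss
    return m, d, s

# '1.5 months' -> 1 month 15 days, matching the documented example.
print(internal_representation(mo=1.5))
# '3.853467 years' -> 46 months ("3 years 10 mons"); the 0.241604-month
# remainder is lost, which is exactly the quirk discussed in this thread.
print(internal_representation(yy=3.853467))
```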
{
"msg_contents": "On Mon, Apr 5, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n> On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n> Well, bug or not, we are not going to change back branches for this, and\n> if you want a larger discussion, it will have to wait for PG 15.\n> \n> > > https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> > > « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n> \n> I see that. What is not clear here is how far we flow down. I was\n> looking at adding documentation or regression tests for that, but was\n> unsure. I adjusted the docs slightly in the attached patch.\n\nHere is an updated patch, which will be for PG 15. It updates the\ndocumentation to state:\n\n\tThe fractional parts are used to compute appropriate values for the next\n\tlower-order internal fields (months, days, seconds).\n\nIt removes the flow from fractional months/weeks to\nhours-minutes-seconds, and adds missing rounding for fractional\ncomputations.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Thu, 8 Apr 2021 13:24:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
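(Editorial note: the rule Bruce's patch describes, where each field's fractional part is converted into the next lower-order internal field only, with rounding, instead of cascading all the way down, can be sketched as below. This is a hypothetical Python model of the documented behavior, not the patch's C code; the function name and per-field bookkeeping are assumptions.)

```python
import math

def decompose(years=0.0, months=0.0, weeks=0.0, days=0.0, seconds=0.0):
    """Model of the patched rule: each unit's fraction flows only into the
    next lower-order internal field (months, days, seconds), rounded, using
    1 year = 12 months, 1 month = 30 days, 1 week = 7 days, 1 day = 86400 s."""
    mo = math.trunc(years) * 12 + round(math.modf(years)[0] * 12)
    mo += math.trunc(months)
    dd = math.trunc(weeks) * 7 + round(math.modf(weeks)[0] * 7)
    dd += math.trunc(days) + round(math.modf(months)[0] * 30)
    ss = seconds + round(math.modf(days)[0] * 86400)
    return mo, dd, ss

# Bruce's example: '3 years 10.241604 months' -> "3 years 10 mons 7 days"
print(decompose(years=3, months=10.241604))
# The '-1.7 years 29.4 months' example from this thread -> "9 mons 12 days"
print(decompose(years=-1.7, months=29.4))
```

On the two concrete outputs reported in this thread the sketch agrees with the patched server, but it is only a reading of the prose description.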
{
"msg_contents": "> On 08-Apr-2021, at 10:24, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Mon, Apr 5, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n>> On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n>> Well, bug or not, we are not going to change back branches for this, and\n>> if you want a larger discussion, it will have to wait for PG 15.\n>> \n>>>> https://www.google.com/url?q=https://www.postgresql.org/docs/current/datatype-datetime.html%23DATATYPE-INTERVAL-INPUT&source=gmail-imap&ust=1618507489000000&usg=AOvVaw2h2TNbK7O41zsDn8HfD88C\n>>>> « …field values can have fractional parts; for example '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. »\n>> \n>> I see that. What is not clear here is how far we flow down. I was\n>> looking at adding documentation or regression tests for that, but was\n>> unsure. I adjusted the docs slightly in the attached patch.\n> \n> Here is an updated patch, which will be for PG 15. It updates the\n> documentation to state:\n> \n> \tThe fractional parts are used to compute appropriate values for the next\n> \tlower-order internal fields (months, days, seconds).\n> \n> It removes the flow from fractional months/weeks to\n> hours-minutes-seconds, and adds missing rounding for fractional\n> computations.\n\nThank you Bruce. I look forward to documenting this new algorithm for YugabyteDB. 
The algorithm implements the transformation from this:\n\n[\n yy_in numeric,\n mo_in numeric,\n dd_in numeric,\n hh_in numeric,\n mi_in numeric,\n ss_in numeric\n]\n\nto this:\n\n[\n mo_internal_representation int,\n dd_internal_representation int,\n ss_internal_representation numeric(1000,6)\n]\n\nI am convinced that a prose account of the algorithm, by itself, is not the best way to tell the reader the rules that the algorithm implements. Rather, pseudocode is needed. I mentioned before that, better still, is actual executable PL/pgSQL code. (I can expect readers to be fluent in PL/pgSQL.) Given this executable simulation, an informal prose sketch of what it does will definitely add value.\n\nMay I ask you to fill in the body of this stub by translating the C that you have in hand?\n\ncreate type internal_representation_t as(\n mo_internal_representation int,\n dd_internal_representation int,\n ss_internal_representation numeric(1000,6));\n\ncreate function internal_representation(\n yy_in numeric default 0,\n mo_in numeric default 0,\n dd_in numeric default 0,\n hh_in numeric default 0,\n mi_in numeric default 0,\n ss_in numeric default 0)\n returns internal_representation_t\n language plpgsql\nas $body$\ndeclare\n mo_internal_representation int not null := 0;\n dd_internal_representation int not null := 0;\n ss_internal_representation numeric not null := 0.0;\n\n ok constant boolean :=\n (yy_in is not null) and\n (mo_in is not null) and\n (dd_in is not null) and\n (hh_in is not null) and\n (mi_in is not null) and\n (ss_in is not null);\nbegin\n assert ok, 'No actual argument, when provided, may be null';\n\n -- The algorithm.\n\n return (mo_internal_representation, dd_internal_representation, ss_internal_representation)::internal_representation_t;\nend;\n$body$;\n\nBy the way, I believe that a user might well decide always to supply all the fields in a \"from text to interval\" typecast, except for the seconds, as integral values. 
This, after all, is what the \"make_interval()\" function enforces. But, because the typecast approach allows non-integral values, the reference documentation must explain the rules unambiguously so that the reader can predict the outcome of any ad hoc test that they might try.\n\nIt's a huge pity that the three values of the internal representation cannot be observed directly using SQL because each behaves with different semantics when an interval value is added to a timestamptz value. However, as a second best (and knowing the algorithm above), a user can create interval values where only one of the three fields is populated and test their understanding of the semantic rules that way.\n\n",
"msg_date": "Thu, 8 Apr 2021 11:17:18 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 10:24 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Apr 5, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n> > On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n> > Well, bug or not, we are not going to change back branches for this, and\n> > if you want a larger discussion, it will have to wait for PG 15.\n> >\n> > > >\n> https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> > > > « …field values can have fractional parts; for example '1.5 week' or\n> '01:02:03.45'. Such input is converted to the appropriate number of months,\n> days, and seconds for storage. When this would result in a fractional\n> number of months or days, the fraction is added to the lower-order fields\n> using the conversion factors 1 month = 30 days and 1 day = 24 hours. For\n> example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be\n> shown as fractional on output. »\n> >\n> > I see that. What is not clear here is how far we flow down. I was\n> > looking at adding documentation or regression tests for that, but was\n> > unsure. I adjusted the docs slightly in the attached patch.\n>\n> Here is an updated patch, which will be for PG 15. 
It updates the\n> documentation to state:\n>\n> The fractional parts are used to compute appropriate values for\n> the next\n> lower-order internal fields (months, days, seconds).\n>\n> It removes the flow from fractional months/weeks to\n> hours-minutes-seconds, and adds missing rounding for fractional\n> computations.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n+1 to this patch.",
"msg_date": "Sun, 11 Apr 2021 12:57:32 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 12:57 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Thu, Apr 8, 2021 at 10:24 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Mon, Apr 5, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n>> > On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n>> > Well, bug or not, we are not going to change back branches for this, and\n>> > if you want a larger discussion, it will have to wait for PG 15.\n>> >\n>> > > >\n>> https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n>> > > > « …field values can have fractional parts; for example '1.5 week'\n>> or '01:02:03.45'. Such input is converted to the appropriate number of\n>> months, days, and seconds for storage. When this would result in a\n>> fractional number of months or days, the fraction is added to the\n>> lower-order fields using the conversion factors 1 month = 30 days and 1 day\n>> = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only\n>> seconds will ever be shown as fractional on output. »\n>> >\n>> > I see that. What is not clear here is how far we flow down. I was\n>> > looking at adding documentation or regression tests for that, but was\n>> > unsure. I adjusted the docs slightly in the attached patch.\n>>\n>> Here is an updated patch, which will be for PG 15. 
It updates the\n>> documentation to state:\n>>\n>> The fractional parts are used to compute appropriate values for\n>> the next\n>> lower-order internal fields (months, days, seconds).\n>>\n>> It removes the flow from fractional months/weeks to\n>> hours-minutes-seconds, and adds missing rounding for fractional\n>> computations.\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>>\n> +1 to this patch.\n>\n\nBryn reminded me, off list, about the flowing down from fractional\nday after the patch.\n\nBefore Bruce confirms the removal of the flowing down from fractional day,\nI withhold my previous +1.\n\nBryn would respond with more details.\n\nCheers",
"msg_date": "Sun, 11 Apr 2021 16:33:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 4:33 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sun, Apr 11, 2021 at 12:57 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Thu, Apr 8, 2021 at 10:24 AM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>>> On Mon, Apr 5, 2021 at 02:01:58PM -0400, Bruce Momjian wrote:\n>>> > On Mon, Apr 5, 2021 at 11:33:10AM -0500, Justin Pryzby wrote:\n>>> > Well, bug or not, we are not going to change back branches for this,\n>>> and\n>>> > if you want a larger discussion, it will have to wait for PG 15.\n>>> >\n>>> > > >\n>>> https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n>>> > > > « …field values can have fractional parts; for example '1.5 week'\n>>> or '01:02:03.45'. Such input is converted to the appropriate number of\n>>> months, days, and seconds for storage. When this would result in a\n>>> fractional number of months or days, the fraction is added to the\n>>> lower-order fields using the conversion factors 1 month = 30 days and 1 day\n>>> = 24 hours. For example, '1.5 month' becomes 1 month and 15 days. Only\n>>> seconds will ever be shown as fractional on output. »\n>>> >\n>>> > I see that. What is not clear here is how far we flow down. I was\n>>> > looking at adding documentation or regression tests for that, but was\n>>> > unsure. I adjusted the docs slightly in the attached patch.\n>>>\n>>> Here is an updated patch, which will be for PG 15. 
It updates the\n>>> documentation to state:\n>>>\n>>> The fractional parts are used to compute appropriate values for\n>>> the next\n>>> lower-order internal fields (months, days, seconds).\n>>>\n>>> It removes the flow from fractional months/weeks to\n>>> hours-minutes-seconds, and adds missing rounding for fractional\n>>> computations.\n>>>\n>>> --\n>>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>>> EDB https://enterprisedb.com\n>>>\n>>> If only the physical world exists, free will is an illusion.\n>>>\n>>>\n>> +1 to this patch.\n>>\n>\n> Bryn reminded me, off list, about the flowing down from fractional\n> day after the patch.\n>\n> Before Bruce confirms the removal of the flowing down from fractional day,\n> I withhold my previous +1.\n>\n> Bryn would respond with more details.\n>\n> Cheers\n>\n\nAmong previous examples given by Bryn, the following produces correct\nresult based on Bruce's patch.\n\n# select interval '-1.7 years 29.4 months';\n interval\n----------------\n 9 mons 12 days\n(1 row)\n\nCheers",
"msg_date": "Sun, 11 Apr 2021 19:33:34 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 07:33:34PM -0700, Zhihong Yu wrote:\n> Among previous examples given by Bryn, the following produces correct result\n> based on Bruce's patch.\n> \n> # select interval '-1.7 years 29.4 months';\n> � � interval\n> ----------------\n> �9 mons 12 days\n\nYes, that changed is caused by the rounding fixes, and not by the unit\npushdown adjustments.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 12:18:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "bruce@momjian.us wrote:\n> \n> zyu@yugabyte.com wrote:\n>> Among previous examples given by Bryn, the following produces correct result based on Bruce's patch.\n>> \n>> # select interval '-1.7 years 29.4 months';\n>> interval\n>> ----------------\n>> 9 mons 12 days\n> \n> Yes, that changed is caused by the rounding fixes, and not by the unit pushdown adjustments.\n\nI showed you all this example a long time ago:\n\nselect (\n '\n 3.853467 years\n '::interval\n )::text as i;\n\nThis behavior is the same in the env. of Bruce’s patch as in unpatched PG 13.2. This is the result.\n\n3 years 10 mons\n\nNotice that \"3.853467 years\" is \"3 years\" plus \"10.241604 months\". This explains the \"10 mons\" in the result. But the 0.241604 months remainder did not spill down into days.\n\nCan anybody defend this quirk? An extension of this example with a real number of month in the user input is correspondingly yet more quirky. The rules can be written down. But they’re too tortuos to allow an ordinary mortal confidently to design code that relies on them.\n\n(I was unable to find any rule statement that lets the user predict this in the doc. But maybe that’s because of my feeble searching skills.)\n\nIf there is no defense (and I cannot imagine one) might Bruce’s patch normalize this too to follow this rule:\n\n— convert 'y years m months' to the real number y*12 + m.\n\n— record truc( y*12 + m) in the \"months\" field of the internal representation\n\n— flow the remainder down to days (but no further)\n\nAfter all, you've bitten the bullet now and changed the behavior. This means that the semantics of some extant applications will change. So... in for a penny, in for a pound?\n\n",
"msg_date": "Mon, 12 Apr 2021 15:09:48 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
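(Editorial note: Bryn's proposed normalization, pooling years and months into one real number of months, truncating that for the months field, and flowing the remainder to days but no further, can be sketched as follows. This is a hypothetical model of the proposal, not anything PostgreSQL implements; the function name is mine, and Decimal is used to sidestep binary-float noise.)

```python
from decimal import Decimal

def normalize(years, months):
    """Model of Bryn's proposed rule: months field = trunc(years*12 + months);
    the leftover fraction becomes days (1 month = 30 days), and no further."""
    total_months = Decimal(str(years)) * 12 + Decimal(str(months))
    mo = int(total_months)  # int() on Decimal truncates toward zero
    dd = int(((total_months - mo) * 30).to_integral_value())
    return mo, dd

# Under the proposal, '3.853467 years' would become 46 months 7 days
# ("3 years 10 mons 7 days") instead of losing the 0.241604-month remainder.
print(normalize("3.853467", 0))
# The documented '1.5 month' example still holds: 1 month 15 days.
print(normalize(0, "1.5"))
```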
{
"msg_contents": "On Mon, Apr 12, 2021 at 03:09:48PM -0700, Bryn Llewellyn wrote:\n> I showed you all this example a long time ago:\n> \n> select (\n> '\n> 3.853467 years\n> '::interval\n> )::text as i;\n> \n> This behavior is the same in the env. of Bruce’s patch as in unpatched PG 13.2. This is the result.\n> \n> 3 years 10 mons\n> \n> Notice that \"3.853467 years\" is \"3 years\" plus \"10.241604 months\". This explains the \"10 mons\" in the result. But the 0.241604 months remainder did not spill down into days.\n> \n> Can anybody defend this quirk? An extension of this example with a real number of month in the user input is correspondingly yet more quirky. The rules can be written down. But they’re too tortuos to allow an ordinary mortal confidently to design code that relies on them.\n> \n> (I was unable to find any rule statement that lets the user predict this in the doc. But maybe that’s because of my feeble searching skills.)\n> \n> If there is no defense (and I cannot imagine one) might Bruce’s patch normalize this too to follow this rule:\n> \n> — convert 'y years m months' to the real number y*12 + m.\n> \n> — record truc( y*12 + m) in the \"months\" field of the internal representation\n> \n> — flow the remainder down to days (but no further)\n> \n> After all, you've bitten the bullet now and changed the behavior. This means that the semantics of some extant applications will change. So... in for a penny, in for a pound?\n\nThe docs now say:\n\n Field values can have fractional parts; for example, <literal>'1.5\n weeks'</literal> or <literal>'01:02:03.45'</literal>. The fractional\n--> parts are used to compute appropriate values for the next lower-order\n internal fields (months, days, seconds).\n\nmeaning fractional years flows to the next lower internal unit, months,\nand no further. Fractional months would flow to days. 
The idea of not\nflowing past the next lower-order internal field is that the\napproximations between units are not precise enough to flow accurately.\n\nWith my patch, the output is now:\n\n\tSELECT INTERVAL '3 years 10.241604 months';\n\t interval\n\t------------------------\n\t 3 years 10 mons 7 days\n\nIt used to flow to seconds.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 19:22:37 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Apr 12, 2021 at 03:09:48PM -0700, Bryn Llewellyn wrote:\n>> After all, you've bitten the bullet now and changed the behavior. This means that the semantics of some extant applications will change. So... in for a penny, in for a pound?\n\n> The docs now say:\n\n> Field values can have fractional parts; for example, <literal>'1.5\n> weeks'</literal> or <literal>'01:02:03.45'</literal>. The fractional\n> --> parts are used to compute appropriate values for the next lower-order\n> internal fields (months, days, seconds).\n\n> meaning fractional years flows to the next lower internal unit, months,\n> and no further. Fractional months would flow to days. The idea of not\n> flowing past the next lower-order internal field is that the\n> approximations between units are not precise enough to flow accurately.\n\nUm, what's the argument for down-converting AT ALL? The problem is\nprecisely that any such conversion is mostly fictional.\n\n> With my patch, the output is now:\n\n> \tSELECT INTERVAL '3 years 10.241604 months';\n> \t interval\n> \t------------------------\n> \t 3 years 10 mons 7 days\n\n> It used to flow to seconds.\n\nYeah, that's better than before, but I don't see any principled argument\nfor it not to be \"3 years 10 months\", full stop.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 19:38:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 07:38:21PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, Apr 12, 2021 at 03:09:48PM -0700, Bryn Llewellyn wrote:\n> >> After all, you've bitten the bullet now and changed the behavior. This means that the semantics of some extant applications will change. So... in for a penny, in for a pound?\n> \n> > The docs now say:\n> \n> > Field values can have fractional parts; for example, <literal>'1.5\n> > weeks'</literal> or <literal>'01:02:03.45'</literal>. The fractional\n> > --> parts are used to compute appropriate values for the next lower-order\n> > internal fields (months, days, seconds).\n> \n> > meaning fractional years flows to the next lower internal unit, months,\n> > and no further. Fractional months would flow to days. The idea of not\n> > flowing past the next lower-order internal field is that the\n> > approximations between units are not precise enough to flow accurately.\n> \n> Um, what's the argument for down-converting AT ALL? The problem is\n> precisely that any such conversion is mostly fictional.\n\nTrue.\n\n> > With my patch, the output is now:\n> \n> > \tSELECT INTERVAL '3 years 10.241604 months';\n> > \t interval\n> > \t------------------------\n> > \t 3 years 10 mons 7 days\n> \n> > It used to flow to seconds.\n> \n> Yeah, that's better than before, but I don't see any principled argument\n> for it not to be \"3 years 10 months\", full stop.\n\nWell, the case was:\n\n\tSELECT INTERVAL '0.1 months';\n\t interval\n\t----------\n\t 3 days\n\t\n\tSELECT INTERVAL '0.1 months' + interval '0.9 months';\n\t ?column?\n\t----------\n\t 30 days\n\nIf you always truncate, you basically lose the ability to specify\nfractional internal units, which I think is useful. 
I would say if you\nuse fractional units of one of the internal units, you basically\nknow you are asking for an approximation --- that is not true of '3.5\nyears', for example.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 20:00:27 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> tgl@sss.pgh.pa.us wrote:\n> \n> bruce@momjian.us writes:\n>> bryn@yugabyte.com wrote:\n>>> After all, you've bitten the bullet now and changed the behavior. This means that the semantics of some extant applications will change. So... in for a penny, in for a pound?\n> \n>> The docs now say:\n> \n>> Field values can have fractional parts; for example, <literal>'1.5\n>> weeks'</literal> or <literal>'01:02:03.45'</literal>. The fractional\n>> --> parts are used to compute appropriate values for the next lower-order\n>> internal fields (months, days, seconds).\n> \n>> meaning fractional years flows to the next lower internal unit, months, and no further. Fractional months would flow to days. The idea of not flowing past the next lower-order internal field is that the approximations between units are not precise enough to flow accurately.\n> \n> Um, what's the argument for down-converting AT ALL? The problem is precisely that any such conversion is mostly fictional.\n> \n>> With my patch, the output is now:\n> \n>> \tSELECT INTERVAL '3 years 10.241604 months';\n>> \t interval\n>> \t------------------------\n>> \t 3 years 10 mons 7 days\n> \n>> It used to flow to seconds.\n> \n> Yeah, that's better than before, but I don't see any principled argument for it not to be \"3 years 10 months\", full stop.\n\nTom, I fully support your suggestion to have no flow down at all. Please may this happen! However, the new rule must be described in terms of the three fields of the internal representation: [months, days, seconds]. This representation is already documented.\n\nDon’t forget that '731.42587 hours’ is read back as \"731:25:33.132\" (or, if you prefer, 731 hours, 25 minutes, and 33.132 seconds if you use \"extract\" and your own pretty print). But we don’t think of this as “flow down”. Rather, it’s just a conventional representation of the seconds field of the internal representation. I could go on. 
But you all know what I mean.\n\nBy the way, I made a nice little demo (for my doc project). It shows that:\n\n(1) if you pick the right date-time just before a DST change, and do the addition in the right time zone, then adding 24 hours gets a different answer than adding one day.\n\n(2) if you pick the right date-time just before 29-Feb in a leap year, then adding 30 days gets a different answer than adding one month.\n\nYou all know why. And though the doc could explain and illustrate this better, it does tell you to expect this. It also says that the difference in semantics that these examples show is the reason for the three-field internal representation.\n\nIt seems to me that both the age-old behavior that vanilla 13.2 exhibits, and the behavior in the regime of Bruce’s patch are like adding 2.2 oranges to 1.3 oranges and getting 3 oranges and 21 apples (because 1 orange is conventionally the same as 42 apples). Bruce made a step in the right direction by stopping oranges convert all the way down to bananas. But it would be so much better to get rid of this false equivalence business altogether.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:06:24 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 12-Apr-2021, at 17:00, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Mon, Apr 12, 2021 at 07:38:21PM -0400, Tom Lane wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> On Mon, Apr 12, 2021 at 03:09:48PM -0700, Bryn Llewellyn wrote:\n>>>> After all, you've bitten the bullet now and changed the behavior. This means that the semantics of some extant applications will change. So... in for a penny, in for a pound?\n>> \n>>> The docs now say:\n>> \n>>> Field values can have fractional parts; for example, <literal>'1.5\n>>> weeks'</literal> or <literal>'01:02:03.45'</literal>. The fractional\n>>> --> parts are used to compute appropriate values for the next lower-order\n>>> internal fields (months, days, seconds).\n>> \n>>> meaning fractional years flows to the next lower internal unit, months,\n>>> and no further. Fractional months would flow to days. The idea of not\n>>> flowing past the next lower-order internal field is that the\n>>> approximations between units are not precise enough to flow accurately.\n>> \n>> Um, what's the argument for down-converting AT ALL? The problem is\n>> precisely that any such conversion is mostly fictional.\n> \n> True.\n> \n>>> With my patch, the output is now:\n>> \n>>> \tSELECT INTERVAL '3 years 10.241604 months';\n>>> \t interval\n>>> \t------------------------\n>>> \t 3 years 10 mons 7 days\n>> \n>>> It used to flow to seconds.\n>> \n>> Yeah, that's better than before, but I don't see any principled argument\n>> for it not to be \"3 years 10 months\", full stop.\n> \n> Well, the case was:\n> \n> \tSELECT INTERVAL '0.1 months';\n> \t interval\n> \t----------\n> \t 3 days\n> \t\n> \tSELECT INTERVAL '0.1 months' + interval '0.9 months';\n> \t ?column?\n> \t----------\n> \t 30 days\n> \n> If you always truncate, you basically lose the ability to specify\n> fractional internal units, which I think is useful. 
I would say if you\n> use fractional units of one of the internal units, you are basically\n> knowing you are asking for an approximation --- that is not true of '3.5\n> years', for example.\n\nI’d argue that the fact that this:\n\n('0.3 months'::interval) + ('0.7 months'::interval)\n\nIs reported as '30 days' and not '1 month' is yet another bug—precisely because of what I said in my previous email (sorry that I forked the thread) where I referred to the fact that, in the right test, adding 1 month gets a different answer than adding 30 days. Yet another convincing reason to get rid of this flow down business altogether.\n\nIf some application wants to model flow-down, then it can do so with trivial programming and full control over its own definition of the rules.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:20:43 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 05:20:43PM -0700, Bryn Llewellyn wrote:\n> I’d argue that the fact that this:\n>\n> ('0.3 months'::interval) + ('0.7 months'::interval)\n>\n> Is reported as '30 days' and not '1 month' is yet another\n> bug—precisely because of what I said in my previous email (sorry\n> that I forked the thread) where I referred to the fact that, in the\n> right test, adding 1 month gets a different answer than adding 30\n> days. \n\nFlowing _up_ is what these functions do:\n\n\t\\df *justify*\n\t List of functions\n\t Schema | Name | Result data type | Argument data types | Type\n\t------------+------------------+------------------+---------------------+------\n\t pg_catalog | justify_days | interval | interval | func\n\t pg_catalog | justify_hours | interval | interval | func\n\t pg_catalog | justify_interval | interval | interval | func\n\n> Yet another convincing reason to get rid of this flow down\n> business altogether.\n\nWe can certainly get rid of all downflow, which in the current patch is\nonly when fractional internal units are specified.\n\n> If some application wants to model flow-down, then it can do so with\n> trivial programming and full control over its own definition of the\n> rules.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 20:25:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 4:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Apr 12, 2021 at 03:09:48PM -0700, Bryn Llewellyn wrote:\n> > I showed you all this example a long time ago:\n> >\n> > select (\n> > '\n> > 3.853467 years\n> > '::interval\n> > )::text as i;\n> >\n> > This behavior is the same in the env. of Bruce’s patch as in unpatched\n> PG 13.2. This is the result.\n> >\n> > 3 years 10 mons\n> >\n> > Notice that \"3.853467 years\" is \"3 years\" plus \"10.241604 months\". This\n> explains the \"10 mons\" in the result. But the 0.241604 months remainder did\n> not spill down into days.\n> >\n> > Can anybody defend this quirk? An extension of this example with a real\n> number of month in the user input is correspondingly yet more quirky. The\n> rules can be written down. But they’re too tortuos to allow an ordinary\n> mortal confidently to design code that relies on them.\n> >\n> > (I was unable to find any rule statement that lets the user predict this\n> in the doc. But maybe that’s because of my feeble searching skills.)\n> >\n> > If there is no defense (and I cannot imagine one) might Bruce’s patch\n> normalize this too to follow this rule:\n> >\n> > — convert 'y years m months' to the real number y*12 + m.\n> >\n> > — record truc( y*12 + m) in the \"months\" field of the internal\n> representation\n> >\n> > — flow the remainder down to days (but no further)\n> >\n> > After all, you've bitten the bullet now and changed the behavior. This\n> means that the semantics of some extant applications will change. So... in\n> for a penny, in for a pound?\n>\n> The docs now say:\n>\n> Field values can have fractional parts; for example, <literal>'1.5\n> weeks'</literal> or <literal>'01:02:03.45'</literal>. The fractional\n> --> parts are used to compute appropriate values for the next lower-order\n> internal fields (months, days, seconds).\n>\n> meaning fractional years flows to the next lower internal unit, months,\n> and no further. 
Fractional months would flow to days. The idea of not\n> flowing past the next lower-order internal field is that the\n> approximations between units are not precise enough to flow accurately.\n>\n> With my patch, the output is now:\n>\n>         SELECT INTERVAL '3 years 10.241604 months';\n>                interval\n>         ------------------------\n>          3 years 10 mons 7 days\n>\n> It used to flow to seconds.\n>\n> --\n>   Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n>   EDB                                      https://enterprisedb.com\n>\n>   If only the physical world exists, free will is an illusion.\n>\n>\nBased on the results of more samples, I restore +1 to Bruce's latest patch.\n\nCheers\n\n",
"msg_date": "Tue, 13 Apr 2021 07:07:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 12-Apr-2021, at 17:25, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Mon, Apr 12, 2021 at 05:20:43PM -0700, Bryn Llewellyn wrote:\n>> I’d argue that the fact that this:\n>> \n>> ('0.3 months'::interval) + ('0.7 months'::interval)\n>> \n>> Is reported as '30 days' and not '1 month' is yet another\n>> bug—precisely because of what I said in my previous email (sorry\n>> that I forked the thread) where I referred to the fact that, in the\n>> right test, adding 1 month gets a different answer than adding 30\n>> days. \n> \n> Flowing _up_ is what these functions do:\n> \n> \t\\df *justify*\n> \t List of functions\n> \t Schema | Name | Result data type | Argument data types | Type\n> \t------------+------------------+------------------+---------------------+------\n> \t pg_catalog | justify_days | interval | interval | func\n> \t pg_catalog | justify_hours | interval | interval | func\n> \t pg_catalog | justify_interval | interval | interval | func\n> \n>> Yet another convincing reason to get rid of this flow down\n>> business altogether.\n> \n> We can certainly get rid of all downflow, which in the current patch is\n> only when fractional internal units are specified.\n> \n>> If some application wants to model flow-down, then it can do so with\n>> trivial programming and full control over its own definition of the\n>> rules.\n\n“Yes please!” re Bruce’s “We can certainly get rid of all downflow, which in the current patch is only when fractional internal units are specified.”\n\nNotice that a user who creates interval values explicitly only by using “make_interval()” will see no behavior change.\n\nThere’s another case of up-flow. When you subtract one timestamp value from another, and when they’re far enough apart, then the (internal representation of the) resulting interval value has both a seconds component and a days component. (But never, in my tests, a months component.) 
I assume that the implementation first gets the difference between the two timestamp values in seconds using (the moral equivalent of) “extract epoch”. And then, if this is greater than 24*60*60, it implements up-flow using the “rule-of-24”—never mind that this means that if you add the answer back to the timestamp value that you subtracted, then you might not get the timestamp value from which you subtracted. (This happens around a DST change and has been discussed earlier in the thread.)\n\nThe purpose of these three “justify” functions is dubious. I think that it’s best to think of the 3-component interval vector like an [x, y, z] vector in 3d geometric space, where the three coordinates are not mutually convertible because each has unique semantics. This point has been rehearsed many times in this long thread.\n\nHaving said this, it isn’t hard to understand the rules that the functions implement. And, most importantly, their use is voluntary. They are, though, no more than shipped and documented wrappers for a few special cases. A user could so easily write their own function like this:\n\n1. Compute the values of the three components of the internal representation of the passed-in interval value using the “extract” feature and some simple arithmetic.\n\n2. Derive the [minutes, days, seconds] values of a new representation using whatever rules you feel for.\n\n3. Use these new values to create the return interval value.\n\nFor example, I might follow a discipline to use interval values that have only one of the three components of the internal representation non-zero. It’s easy to use the “domain” feature for this. (I can use these in any context where I can use the shipped interval.) I could then write a function to convert a “pure seconds” value of my_interval to a “pure years” value. 
And I could implement my own rules:\n\n— Makes sense only for a large number of seconds that comes out to at least five years (else assert failure).\n\n— Converts seconds to years using the rule that 1 year is, on average, 365.25*24*60*60 seconds, and then truncates it.\n\nThere’s no shipped function that does this, and this makes me suspect that I’d prefer to roll my own for any serious purpose.\n\nThe existence of the three “justify” functions is, therefore, harmless.\n\n\n",
"msg_date": "Tue, 13 Apr 2021 10:55:31 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, May 7, 2021 at 07:23:42PM -0700, Zhihong Yu wrote:\n> On Tue, Apr 13, 2021 at 10:55 AM Bryn Llewellyn <bryn@yugabyte.com> wrote:\n> There’s no shipped function that does this, and this makes me suspect that\n> I’d prefer to roll my own for any serious purpose.\n> \n> The existence of the three “justify” functions is, therefore, harmless.\n> \n> Bruce / Tom:\n> Can we revisit this topic ?\n\nI thought we agreed that the attached patch will be applied to PG 15.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 7 May 2021 22:23:08 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 10:55 AM Bryn Llewellyn <bryn@yugabyte.com> wrote:\n\n> On 12-Apr-2021, at 17:25, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Apr 12, 2021 at 05:20:43PM -0700, Bryn Llewellyn wrote:\n>\n> I’d argue that the fact that this:\n>\n> ('0.3 months'::interval) + ('0.7 months'::interval)\n>\n> Is reported as '30 days' and not '1 month' is yet another\n> bug—precisely because of what I said in my previous email (sorry\n> that I forked the thread) where I referred to the fact that, in the\n> right test, adding 1 month gets a different answer than adding 30\n> days.\n>\n>\n> Flowing _up_ is what these functions do:\n>\n> \\df *justify*\n> List of functions\n> Schema | Name | Result data type | Argument data types |\n> Type\n>\n> ------------+------------------+------------------+---------------------+------\n> pg_catalog | justify_days | interval | interval |\n> func\n> pg_catalog | justify_hours | interval | interval |\n> func\n> pg_catalog | justify_interval | interval | interval |\n> func\n>\n> Yet another convincing reason to get rid of this flow down\n> business altogether.\n>\n>\n> We can certainly get rid of all downflow, which in the current patch is\n> only when fractional internal units are specified.\n>\n> If some application wants to model flow-down, then it can do so with\n> trivial programming and full control over its own definition of the\n> rules.\n>\n>\n> *“Yes please!” re Bruce’s “We can certainly get rid of all downflow, which\n> in the current patch is only when fractional internal units are specified.”*\n>\n> Notice that a user who creates interval values explicitly only by using\n> “make_interval()” will see no behavior change.\n>\n> There’s another case of up-flow. When you subtract one timestamp value\n> from another, and when they’re far enough apart, then the (internal\n> representation of the) resulting interval value has both a seconds\n> component and a days component. 
(But never, in my tests, a months\n> component.) I assume that the implementation first gets the difference\n> between the two timestamp values in seconds using (the moral equivalent of)\n> “extract epoch”. And then, if this is greater than 24*60*60, it implements\n> up-flow using the “rule-of-24”—never mind that this means that if you add\n> the answer back to the timestamp value that you subtracted, then you might\n> not get the timestamp value from which you subtracted. (This happens around\n> a DST change and has been discussed earlier in the thread.)\n>\n> The purpose of these three “justify” functions is dubious. I think that\n> it’s best to think of the 3-component interval vector like an [x, y, z]\n> vector in 3d geometric space, where the three coordinates are not mutually\n> convertible because each has unique semantics. This point has been\n> rehearsed many times in this long thread.\n>\n> Having said this, it isn’t hard to understand the rules that the functions\n> implement. And, most importantly, their use is voluntary. They are, though,\n> no more than shipped and documented wrappers for a few special cases. A\n> user could so easily write their own function like this:\n>\n> 1. Compute the values of the three components of the internal\n> representation of the passed-in interval value using the “extract” feature\n> and some simple arithmetic.\n>\n> 2. Derive the [minutes, days, seconds] values of a new representation\n> using whatever rules you feel for.\n>\n> 3. Use these new values to create the return interval value.\n>\n> For example, I might follow a discipline to use interval values that have\n> only one of the three components of the internal representation non-zero.\n> It’s easy to use the “domain” feature for this. (I can use these in any\n> context where I can use the shipped interval.) I could then write a\n> function to convert a “pure seconds” value of my_interval to a “pure years”\n> value. 
And I could implement my own rules:\n>\n> — Makes sense only for a large number of seconds that comes out to at\n> least five years (else assert failure).\n>\n> — Converts seconds to years using the rule that 1 year is, on\n> average, 365.25*24*60*60 seconds, and then truncates it.\n>\n> There’s no shipped function that does this, and this makes me suspect that\n> I’d prefer to roll my own for any serious purpose.\n>\n> The existence of the three “justify” functions is, therefore, harmless.\n>\n>\nBruce / Tom:\nCan we revisit this topic ?\n\nCheers",
"msg_date": "Fri, 7 May 2021 19:23:42 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, May 7, 2021 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, May 7, 2021 at 07:23:42PM -0700, Zhihong Yu wrote:\n> > On Tue, Apr 13, 2021 at 10:55 AM Bryn Llewellyn <bryn@yugabyte.com>\n> wrote:\n> > There’s no shipped function that does this, and this makes me\n> suspect that\n> > I’d prefer to roll my own for any serious purpose.\n> >\n> > The existence of the three “justify” functions is, therefore,\n> harmless.\n> >\n> > Bruce / Tom:\n> > Can we revisit this topic ?\n>\n> I thought we agreed that the attached patch will be applied to PG 15.\n>\n\nGood to know.\n\nHopefully it lands soon.\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>",
"msg_date": "Fri, 7 May 2021 19:39:31 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, May 7, 2021 at 07:39:31PM -0700, Zhihong Yu wrote:\n> \n> \n> On Fri, May 7, 2021 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Fri, May 7, 2021 at 07:23:42PM -0700, Zhihong Yu wrote:\n> > On Tue, Apr 13, 2021 at 10:55 AM Bryn Llewellyn <bryn@yugabyte.com>\n> wrote:\n> > There’s no shipped function that does this, and this makes me suspect\n> that\n> > I’d prefer to roll my own for any serious purpose.\n> >\n> > The existence of the three “justify” functions is, therefore,\n> harmless.\n> >\n> > Bruce / Tom:\n> > Can we revisit this topic ?\n> \n> I thought we agreed that the attached patch will be applied to PG 15.\n> \n> \n> Good to know. \n> \n> Hopefully it lands soon.\n\nIt will be applied in June/July, but will not appear in a release until\nSept/Oct, 2022. Sorry.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 7 May 2021 22:42:04 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 29 Jun 2021, at 18:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Now that PG 15 is open for commit, do you think the patch can land ?\n\nAdding it to the commitfest patch tracker is a good way to ensure it's not\nforgotten about:\n\n\thttps://commitfest.postgresql.org/33/\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 29 Jun 2021 18:49:45 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, May 7, 2021 at 7:42 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, May 7, 2021 at 07:39:31PM -0700, Zhihong Yu wrote:\n> >\n> >\n> > On Fri, May 7, 2021 at 7:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, May 7, 2021 at 07:23:42PM -0700, Zhihong Yu wrote:\n> > > On Tue, Apr 13, 2021 at 10:55 AM Bryn Llewellyn <bryn@yugabyte.com\n> >\n> > wrote:\n> > > There’s no shipped function that does this, and this makes me\n> suspect\n> > that\n> > > I’d prefer to roll my own for any serious purpose.\n> > >\n> > > The existence of the three “justify” functions is, therefore,\n> > harmless.\n> > >\n> > > Bruce / Tom:\n> > > Can we revisit this topic ?\n> >\n> > I thought we agreed that the attached patch will be applied to PG 15.\n> >\n> >\n> > Good to know.\n> >\n> > Hopefully it lands soon.\n>\n> It will be applied in June/July, but will not appear in a release until\n> Sept/Oct, 2022. Sorry.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n> Bruce:\nNow that PG 15 is open for commit, do you think the patch can land ?\n\nCheers",
"msg_date": "Tue, 29 Jun 2021 09:50:05 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jun 29, 2021 at 06:49:45PM +0200, Daniel Gustafsson wrote:\n> > On 29 Jun 2021, at 18:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n> \n> > Now that PG 15 is open for commit, do you think the patch can land ?\n> \n> Adding it to the commitfest patch tracker is a good way to ensure it's not\n> forgotten about:\n> \n> \thttps://commitfest.postgresql.org/33/\n\nOK, I have been keeping it in my git tree since I wrote it and will\napply it in the next few days.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 30 Jun 2021 12:35:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 9:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Jun 29, 2021 at 06:49:45PM +0200, Daniel Gustafsson wrote:\n> > > On 29 Jun 2021, at 18:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > > Now that PG 15 is open for commit, do you think the patch can land ?\n> >\n> > Adding it to the commitfest patch tracker is a good way to ensure it's\n> not\n> > forgotten about:\n> >\n> > https://commitfest.postgresql.org/33/\n>\n> OK, I have been keeping it in my git tree since I wrote it and will\n> apply it in the next few days.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n> Thanks, Bruce.\n\nHopefully you can get to this soon.",
"msg_date": "Thu, 8 Jul 2021 10:22:04 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 10:22 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Jun 30, 2021 at 9:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Tue, Jun 29, 2021 at 06:49:45PM +0200, Daniel Gustafsson wrote:\n>> > > On 29 Jun 2021, at 18:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > > Now that PG 15 is open for commit, do you think the patch can land ?\n>> >\n>> > Adding it to the commitfest patch tracker is a good way to ensure it's\n>> not\n>> > forgotten about:\n>> >\n>> > https://commitfest.postgresql.org/33/\n>>\n>> OK, I have been keeping it in my git tree since I wrote it and will\n>> apply it in the next few days.\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>> Thanks, Bruce.\n>\n> Hopefully you can get to this soon.\n>\n\nBruce:\nPlease see if the patch can be integrated now.\n\nCheers",
"msg_date": "Wed, 14 Jul 2021 09:03:21 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 09:03:21AM -0700, Zhihong Yu wrote:\n> On Thu, Jul 8, 2021 at 10:22 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Wed, Jun 30, 2021 at 9:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Tue, Jun 29, 2021 at 06:49:45PM +0200, Daniel Gustafsson wrote:\n> > > On 29 Jun 2021, at 18:50, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > > Now that PG 15 is open for commit, do you think the patch can land\n> ?\n> >\n> > Adding it to the commitfest patch tracker is a good way to ensure\n> it's not\n> > forgotten about:\n> >\n> > https://commitfest.postgresql.org/33/\n> \n> OK, I have been keeping it in my git tree since I wrote it and will\n> apply it in the next few days.\n> Thanks, Bruce.\n> \n> Hopefully you can get to this soon. \n> \n> Bruce: \n> Please see if the patch can be integrated now.\n\nI found a mistake in my most recent patch. For example, in master we\nsee this output:\n\n\tSELECT INTERVAL '1.8594 months';\n\t interval\n\t--------------------------\n\t 1 mon 25 days 18:46:04.8\n\nObviously this should return '1 mon 26 days', but with my most recent\npatch, it returned '1 mon 25 days'. Turns out I had not properly used\nrint() in AdjustFractDays, and in fact the function is no longer needed\nbecause it is just a multiplication and an rint().\n\nUpdated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Tue, 20 Jul 2021 00:14:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 9:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Jul 14, 2021 at 09:03:21AM -0700, Zhihong Yu wrote:\n> > On Thu, Jul 8, 2021 at 10:22 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Wed, Jun 30, 2021 at 9:35 AM Bruce Momjian <bruce@momjian.us>\n> wrote:\n> >\n> > On Tue, Jun 29, 2021 at 06:49:45PM +0200, Daniel Gustafsson\n> wrote:\n> > > > On 29 Jun 2021, at 18:50, Zhihong Yu <zyu@yugabyte.com>\n> wrote:\n> > >\n> > > > Now that PG 15 is open for commit, do you think the patch\n> can land\n> > ?\n> > >\n> > > Adding it to the commitfest patch tracker is a good way to\n> ensure\n> > it's not\n> > > forgotten about:\n> > >\n> > > https://commitfest.postgresql.org/33/\n> >\n> > OK, I have been keeping it in my git tree since I wrote it and\n> will\n> > apply it in the next few days.\n> > Thanks, Bruce.\n> >\n> > Hopefully you can get to this soon.\n> >\n> > Bruce:\n> > Please see if the patch can be integrated now.\n>\n> I found a mistake in my most recent patch. For example, in master we\n> see this output:\n>\n> SELECT INTERVAL '1.8594 months';\n> interval\n> --------------------------\n> 1 mon 25 days 18:46:04.8\n>\n> Obviously this should return '1 mon 26 days', but with my most recent\n> patch, it returned '1 mon 25 days'. 
Turns out I had not properly used\n> rint() in AdjustFractDays, and in fact the function is now longer needed\n> because it is just a multiplication and an rint().\n>\n> Updated patch attached.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n\nHi,\nPatch looks good.\nMaybe add the statement above as a test case :\n\nSELECT INTERVAL '1.8594 months'\n\nCheers",
"msg_date": "Tue, 20 Jul 2021 14:33:07 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 02:33:07PM -0700, Zhihong Yu wrote:\n> On Mon, Jul 19, 2021 at 9:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Obviously this should return '1 mon 26 days', but with my most recent\n> > patch, it returned '1 mon 25 days'. Turns out I had not properly used\n> > rint() in AdjustFractDays, and in fact the function is now longer needed\n> > because it is just a multiplication and an rint().\n>\n> Patch looks good.\n> Maybe add the statement above as a test case :\n> \n> SELECT INTERVAL '1.8594 months' \n\nGood idea --- updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Tue, 20 Jul 2021 18:53:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 3:53 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Jul 20, 2021 at 02:33:07PM -0700, Zhihong Yu wrote:\n> > On Mon, Jul 19, 2021 at 9:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Obviously this should return '1 mon 26 days', but with my most\n> recent\n> > > patch, it returned '1 mon 25 days'. Turns out I had not properly\n> used\n> > > rint() in AdjustFractDays, and in fact the function is now longer\n> needed\n> > > because it is just a multiplication and an rint().\n> >\n> > Patch looks good.\n> > Maybe add the statement above as a test case :\n> >\n> > SELECT INTERVAL '1.8594 months'\n>\n> Good idea --- updated patch attached.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n> Hi,\nWith your patch, the following example (courtesy Bryn) still shows a fraction:\n\n# select (interval '1 month')*1.123;\n ?column?\n-----------------------\n 1 mon 3 days 16:33:36\n(1 row)\n\nDo you think the output can be improved (by getting rid of the fraction) ?\n\nThanks",
"msg_date": "Tue, 20 Jul 2021 17:13:37 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 05:13:37PM -0700, Zhihong Yu wrote:\n> On Tue, Jul 20, 2021 at 3:53 PM Bruce Momjian <bruce@momjian.us> wrote:\n> With your patch, the following example (Coutesy Bryn) still shows fraction:\n> \n> # select (interval '1 month')*1.123;\n> ?column?\n> -----------------------\n> 1 mon 3 days 16:33:36\n> (1 row) \n> \n> Do you think the output can be improved (by getting rid of fraction) ?\n\nWell, I focused on how fractional units were processed inside of\ninterval values. I never considered how multiplication should be\nhandled. I have not really thought about how to handle that, but this\nexample now gives me concern:\n\n\tSELECT INTERVAL '1.06 months 1 hour';\n\t interval\n\t-----------------------\n\t 1 mon 2 days 01:00:00\n\nNotice that it rounds the '1.06 months' to '1 mon 2 days', rather than\nspilling to hours/minutes/seconds, even though hours is already\nspecified. I don't see a better way to handle this than the current\ncode already does, but it is something odd.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 20 Jul 2021 22:48:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, 21 Jul 2021 at 03:48, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> this example now gives me concern:\n>\n> SELECT INTERVAL '1.06 months 1 hour';\n> interval\n> -----------------------\n> 1 mon 2 days 01:00:00\n>\n> Notice that it rounds the '1.06 months' to '1 mon 2 days', rather than\n> spilling to hours/minutes/seconds, even though hours is already\n> specified. I don't see a better way to handle this than the current\n> code already does, but it is something odd.\n\nHmm, looking at this whole thread, I have to say that I prefer the old\nbehaviour of spilling down to lower units.\n\nFor example, with this patch:\n\nSELECT '0.5 weeks'::interval;\n interval\n----------\n 4 days\n\nwhich I don't think is really an improvement. My expectation is that\nhalf a week is 3.5 days, and I prefer what it used to return, namely\n'3 days 12:00:00'.\n\nIt's true that that leads to odd-looking results when the field value\nhas lots of fractional digits, but that was at least explainable, and\nfollowed the documentation.\n\nLooking for a general principle to follow, how about this -- the\nresult of specifying a fractional value should be the same as\nmultiplying an interval of 1 unit by that value. In other words,\n'1.8594 months'::interval should be the same as '1 month'::interval *\n1.8594. (Actually, it probably can't easily be made exactly the same\nin all cases, due to differences in the floating point computations in\nthe two cases, and rounding errors, but it's hopefully not far off,\nunlike the results obtained by not spilling down to lower units on\ninput.)\n\nThe cases that are broken in master, in my opinion, are the larger\nunits (year and above), which don't propagate down in the same way as\nfractional months and below. 
So, for example, '0.7 years' should be\n8.4 months (with the conversion factor of 1 year = 12 months), giving\n'8 months 12 days', which is what '1 year'::interval * 0.7 produces.\nSure, there are arguably more accurate ways of computing that.\nHowever, that's the result obtained using the documented conversion\nfactors, so it's justifiable in those terms.\n\nIt's worth noting another case that is broken in master:\n\nSELECT '1.7 decades'::interval;\n interval\n------------------\n 16 years 11 mons\n\nwhich is surely not what anyone would expect. The current patch fixes\nthis, but it would also be fixed by handling the fractional digits for\nthese units in the same way as for smaller units. There was an earlier\npatch doing that, I think, though I didn't test it.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 21 Jul 2021 09:23:18 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Hmm, looking at this whole thread, I have to say that I prefer the old\n> behaviour of spilling down to lower units.\n\n> For example, with this patch:\n\n> SELECT '0.5 weeks'::interval;\n> interval\n> ----------\n> 4 days\n\n> which I don't think is really an improvement. My expectation is that\n> half a week is 3.5 days, and I prefer what it used to return, namely\n> '3 days 12:00:00'.\n\nYeah, that is clearly a significant dis-improvement.\n\nIn general, considering that (most of?) the existing behavior has stood\nfor decades, I think we need to tread VERY carefully about changing it.\nI don't want to see this patch changing any case that is not indisputably\nbroken.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 05:58:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 21-Jul-2021, at 02:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> Hmm, looking at this whole thread, I have to say that I prefer the old\n>> behaviour of spilling down to lower units.\n> \n>> For example, with this patch:\n> \n>> SELECT '0.5 weeks'::interval;\n>> interval\n>> ----------\n>> 4 days\n> \n>> which I don't think is really an improvement. My expectation is that\n>> half a week is 3.5 days, and I prefer what it used to return, namely\n>> '3 days 12:00:00'.\n> \n> Yeah, that is clearly a significant dis-improvement.\n> \n> In general, considering that (most of?) the existing behavior has stood\n> for decades, I think we need to tread VERY carefully about changing it.\n> I don't want to see this patch changing any case that is not indisputably\n> broken.\n\nIt was me that started the enormous thread with the title “Have I found an interval arithmetic bug?” on 01-Apr-2021. I presented this testcase:\n\n> select interval '-1.7 years'; -- -1 years -8 mons\n> \n> select interval '29.4 months'; -- 2 years 5 mons 12 days\n> \n> select interval '-1.7 years 29.4 months'; -- 8 mons 12 days << wrong\n> select interval '29.4 months -1.7 years'; -- 9 mons 12 days\n> \n> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12 days\n> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12 days\n\nThe consensus was that the outcome that I flagged with “wrong” does indeed have that status. After all, it’s hard to see how anybody could intend this rule (that anyway holds in only some cases):\n\n-a + b <> b - a\n\nIt seems odd that there’s been no recent reference to my testcase and how it behaves in the environment of Bruce’s patch.\n\nI don’t recall the history of the thread. But Bruce took on the task of fixing this narrow issue. Anyway, somehow, the whole question of “spill down” came up for discussion. 
The rules aren’t documented and I’ve been unable to find any reference even to the phenomenon. I have managed to implement a model, in PL/pgSQL, that gets the same results as the native implementation in every one of many tests that I’ve done. I appreciate that this doesn’t prove that my model is correct. But it would seem that it must be on the right track. The rules that my PL/pgSQL uses are self-evidently whimsical—but they were needed precisely to get the same outcomes as the native implementation. There was some discussion of all this somewhere in this thread.\n\nIf memory serves, it was Tom who suggested changing the spill-down rules. This was possibly meant entirely rhetorically. But it seems that Bruce did set about implementing a change here. (I was unable to find a clear prose functional spec for the new behavior. Probably I didn’t know where to look.)\n\nThere’s no doubt that a change in these rules would change the behavior of extant code. But then, in a purist sense, this is the case with any bug fix.\n\nI’m simply waiting on a final ruling and final outcome.\n\nMeanwhile, I’ve worked out a way to tame all this business (by using domain types and associated functionality) so that application code can deal confidently with only pure months, pure days, and pure seconds interval values (thinking of the internal [mm, dd, ss] representation). The scheme ensures that spill-down never occurs by rounding the years or the months field to integral values. If you want to add a “mixed” interval to a timestamp, then you simply add the different kinds of interval in the one expression. And you use parentheses to assert, visibly, the priority rule that you intend.\n\nBecause this is ordinary application code, there are no compatibility issues for me. 
My approach won’t see a change in behavior no matter what is decided about the present patch.",
"msg_date": "Wed, 21 Jul 2021 10:18:34 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bryn Llewellyn <bryn@yugabyte.com> writes:\n> On 21-Jul-2021, at 02:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In general, considering that (most of?) the existing behavior has stood\n>> for decades, I think we need to tread VERY carefully about changing it.\n>> I don't want to see this patch changing any case that is not indisputably\n>> broken.\n\n> It was me that started the enormous thread with the title “Have I found an interval arithmetic bug?” on 01-Apr-2021. I presented this testcase:\n\n>> select interval '-1.7 years'; -- -1 years -8 mons\n>> \n>> select interval '29.4 months'; -- 2 years 5 mons 12 days\n>> \n>> select interval '-1.7 years 29.4 months'; -- 8 mons 12 days << wrong\n>> select interval '29.4 months -1.7 years'; -- 9 mons 12 days\n>> \n>> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12 days\n>> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12 days\n\n> The consensus was that the outcome that I flagged with “wrong” does indeed have that status.\n\nYeah, I think it's self-evident that your last four cases should\nproduce the same results. Whether '9 mons 12 days' is the best\npossible result is debatable --- in a perfect world, maybe we'd\nproduce '9 mons' exactly --- but given that the first two cases\nproduce what they do, that does seem self-consistent. I think\nwe should be setting out to fix that outlier without causing\nany of the other five results to change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 13:29:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
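Tom's expectation that the outlier '-1.7 years 29.4 months' should agree with the other four cases can be checked with a few lines of arithmetic. Here is a minimal Python model (my own sketch, not the PostgreSQL C code), assuming the documented conversion factors of 12 months/year and 30 days/month, and round-to-nearest on the fractional years:

```python
# Model parsing interval '-1.7 years 29.4 months' into (months, days).
# Assumptions: fractional years spill to months with rounding; fractional
# months spill to days with rounding, using a 30-day month.
months = round(-1.7 * 12)                # -20 months from '-1.7 years'
months += int(29.4)                      # +29 whole months -> 9
days = round((29.4 - int(29.4)) * 30)    # 0.4 of a month -> 12 days

print(months, days)   # 9 12, i.e. '9 mons 12 days', matching the other four cases
```

This is only a consistency check of the arithmetic; the actual code path is in DecodeInterval().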
{
"msg_contents": "> On 21-Jul-2021, at 01:23, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> \n> On Wed, 21 Jul 2021 at 03:48, Bruce Momjian <bruce@momjian.us> wrote:\n>> \n>> this example now gives me concern:\n>> \n>> SELECT INTERVAL '1.06 months 1 hour';\n>> interval\n>> -----------------------\n>> 1 mon 2 days 01:00:00\n>> \n>> Notice that it rounds the '1.06 months' to '1 mon 2 days', rather than\n>> spilling to hours/minutes/seconds, even though hours is already\n>> specified. I don't see a better way to handle this than the current\n>> code already does, but it is something odd.\n> \n> Hmm, looking at this whole thread, I have to say that I prefer the old\n> behaviour of spilling down to lower units.\n> \n> For example, with this patch:\n> \n> SELECT '0.5 weeks'::interval;\n> interval\n> ----------\n> 4 days\n> \n> which I don't think is really an improvement. My expectation is that\n> half a week is 3.5 days, and I prefer what it used to return, namely\n> '3 days 12:00:00'.\n> \n> It's true that that leads to odd-looking results when the field value\n> has lots of fractional digits, but that was at least explainable, and\n> followed the documentation.\n> \n> Looking for a general principle to follow, how about this -- the\n> result of specifying a fractional value should be the same as\n> multiplying an interval of 1 unit by that value. In other words,\n> '1.8594 months'::interval should be the same as '1 month'::interval *\n> 1.8594. (Actually, it probably can't easily be made exactly the same\n> in all cases, due to differences in the floating point computations in\n> the two cases, and rounding errors, but it's hopefully not far off,\n> unlike the results obtained by not spilling down to lower units on\n> input.)\n> \n> The cases that are broken in master, in my opinion, are the larger\n> units (year and above), which don't propagate down in the same way as\n> fractional months and below. 
So, for example, '0.7 years' should be\n> 8.4 months (with the conversion factor of 1 year = 12 months), giving\n> '8 months 12 days', which is what '1 year'::interval * 0.7 produces.\n> Sure, there are arguably more accurate ways of computing that.\n> However, that's the result obtained using the documented conversion\n> factors, so it's justifiable in those terms.\n> \n> It's worth noting another case that is broken in master:\n> \n> SELECT '1.7 decades'::interval;\n> interval\n> ------------------\n> 16 years 11 mons\n> \n> which is surely not what anyone would expect. The current patch fixes\n> this, but it would also be fixed by handling the fractional digits for\n> these units in the same way as for smaller units. There was an earlier\n> patch doing that, I think, though I didn't test it.\n> \n> Regards,\n> Dean\n\nAnd try these two tests. (I’m using Version 13.3.) on current MacOS.\n\nselect\n '1.7 decades'::interval as i1, \n ('1 decades'::interval)*1.7 as i2,\n ('10 years'::interval)*1.7 as i3;\n\n i1 | i2 | i3 \n------------------+----------+----------\n 16 years 11 mons | 17 years | 17 years\n\nselect\n '1.7345 decades'::interval as i4, \n ('1 decades'::interval)*1.7345 as i5,\n ('10 years'::interval)*1.7345 as i6;\n\n i4 | i5 | i6 \n-----------------+---------------------------------+---------------------------------\n 17 years 4 mons | 17 years 4 mons 4 days 04:48:00 | 17 years 4 mons 4 days 04:48:00\n\nShows only what we know already: mixed interval arithmetic is fishy.\n\nSeems to me that units like “weeks”, “centuries”, “millennia”, and so on are a solution (broken in some cases) looking for a problem. Try this (and variants like I showed above):\n\nselect\n '1.7345 millennia'::interval as i7,\n '1.7345 centuries'::interval as i8,\n '1.7345 weeks'::interval as i9;\n\n i7 | i8 | i9 \n-------------------+------------------+--------------------\n 1734 years 6 mons | 173 years 5 mons | 12 days 03:23:45.6\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 10:44:08 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 01:29:49PM -0400, Tom Lane wrote:\n> Bryn Llewellyn <bryn@yugabyte.com> writes:\n> > It was me that started the enormous thread with the title “Have I found an interval arithmetic bug?” on 01-Apr-2021. I presented this testcase:\n> \n> >> select interval '-1.7 years'; -- -1 years -8 mons\n> >> \n> >> select interval '29.4 months'; -- 2 years 5 mons 12 days\n> >> \n> >> select interval '-1.7 years 29.4 months'; -- 8 mons 12 days << wrong\n> >> select interval '29.4 months -1.7 years'; -- 9 mons 12 days\n> >> \n> >> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12 days\n> >> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12 days\n> \n> > The consensus was that the outcome that I flagged with “wrong” does indeed have that status.\n> \n> Yeah, I think it's self-evident that your last four cases should\n> produce the same results. Whether '9 mons 12 days' is the best\n> possible result is debatable --- in a perfect world, maybe we'd\n> produce '9 mons' exactly --- but given that the first two cases\n> produce what they do, that does seem self-consistent. I think\n> we should be setting out to fix that outlier without causing\n> any of the other five results to change.\n\nOK, I decided to reverse some of the changes I was proposing once I\nstarted to think about the inaccuracy of not spilling down from 'weeks'\nto seconds when hours also appear. The fundamental issue is that the\nmonths-to-days conversion is almost always an approximation, while the\ndays to seconds conversion is almost always accurate. This means we are\nnever going to have consistent spill-down that is useful.\n\nTherefore, I went ahead and accepted that years and larger units spill\nonly to months, months spill only to days, and weeks and lower spill all\nthe way down to seconds. 
I also spelled this out in the docs, and\nexplained why we have this behavior.\n\nAlso, with my patch, the last four queries return the same result\nbecause of the proper rounding also added by the patch, attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 21 Jul 2021 20:07:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
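Bruce's rule ("years and larger units spill only to months, months spill only to days, and weeks and lower spill all the way down to seconds") can be modeled compactly. The following Python sketch is an illustration under stated assumptions (the documented factors of 12 months/year, 30 days/month, 7 days/week, and 86400 seconds/day, with round-to-nearest like C's rint()); it is not the patch itself:

```python
MONTHS_PER_YEAR, DAYS_PER_MONTH, DAYS_PER_WEEK, SECS_PER_DAY = 12, 30, 7, 86400

def spill(unit: str, value: float):
    """Return (months, days, seconds) for one fractional interval field."""
    if unit == "year":                      # years and larger spill only to months
        return round(value * MONTHS_PER_YEAR), 0, 0.0
    if unit == "month":                     # months spill only to days
        whole = int(value)
        frac_days = round((value - whole) * DAYS_PER_MONTH)
        if abs(frac_days) == DAYS_PER_MONTH:    # rounded up to a full month: carry
            return whole + (1 if frac_days > 0 else -1), 0, 0.0
        return whole, frac_days, 0.0
    if unit == "week":                      # weeks and lower spill down to seconds
        days = value * DAYS_PER_WEEK
        whole = int(days)
        return 0, whole, (days - whole) * SECS_PER_DAY
    raise ValueError(unit)

print(spill("year", -1.99))    # (-24, 0, 0.0)   -> '-2 years'
print(spill("month", -1.99))   # (-2, 0, 0.0)    -> '-2 mons'
print(spill("week", 0.5))      # (0, 3, 43200.0) -> '3 days 12:00:00'
```

Note the carry branch for months: -1.99 months gives rint(-0.99 * 30) = -30 fractional days, a full month's worth, so it collapses to '-2 mons' rather than '-1 mons -30 days'.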
{
"msg_contents": "> On 21-Jul-2021, at 17:07, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Jul 21, 2021 at 01:29:49PM -0400, Tom Lane wrote:\n>> Bryn Llewellyn <bryn@yugabyte.com> writes:\n>>> It was me that started the enormous thread with the title “Have I found an interval arithmetic bug?” on 01-Apr-2021. I presented this testcase:\n>> \n>>>> select interval '-1.7 years'; -- -1 years -8 mons\n>>>> \n>>>> select interval '29.4 months'; -- 2 years 5 mons 12 days\n>>>> \n>>>> select interval '-1.7 years 29.4 months'; -- 8 mons 12 days << wrong\n>>>> select interval '29.4 months -1.7 years'; -- 9 mons 12 days\n>>>> \n>>>> select interval '-1.7 years' + interval '29.4 months'; -- 9 mons 12 days\n>>>> select interval '29.4 months' + interval '-1.7 years'; -- 9 mons 12 days\n>> \n>>> The consensus was that the outcome that I flagged with “wrong” does indeed have that status.\n>> \n>> Yeah, I think it's self-evident that your last four cases should\n>> produce the same results. Whether '9 mons 12 days' is the best\n>> possible result is debatable --- in a perfect world, maybe we'd\n>> produce '9 mons' exactly --- but given that the first two cases\n>> produce what they do, that does seem self-consistent. I think\n>> we should be setting out to fix that outlier without causing\n>> any of the other five results to change.\n> \n> OK, I decided to reverse some of the changes I was proposing once I\n> started to think about the inaccuracy of not spilling down from 'weeks'\n> to seconds when hours also appear. The fundamental issue is that the\n> months-to-days conversion is almost always an approximation, while the\n> days to seconds conversion is almost always accurate. This means we are\n> never going to have consistent spill-down that is useful.\n> \n> Therefore, I went ahead and accepted that years and larger units spill\n> only to months, months spill only to days, and weeks and lower spill all\n> the way down to seconds. 
I also spelled this out in the docs, and\n> explained why we have this behavior.\n> \n> Also, with my patch, the last four queries return the same result\n> because of the proper rounding also added by the patch, attached.\n\nYour statement\n\n“months-to-days conversion is almost always an approximation, while the days to seconds conversion is almost always accurate.” \n\nis misleading. Any conversion like these (and also the “spill up” conversions that the justify_hours(), justify_days(), and justify_interval() built-in functions bring) are semantically dangerous because of the different rules for adding a pure months, a pure days, or a pure seconds interval to a timestamptz value.\n\nUnless you avoid mixed interval values, then it’s so hard (even though it is possible) to predict the outcomes of interval arithmetic. Rather, all you get is emergent behavior that I fail to see can be relied upon in deliberately designed application code. Here’s a telling example:\n\nset timezone = 'America/Los_Angeles';\nwith\n c as (\n select\n '2021-03-13 19:00:00 America/Los_Angeles'::timestamptz as d,\n '25 hours'::interval as i)\nselect\n d + i as \"d + i\",\n d + justify_hours(i) as \"d + justify_hours(i)\"\nfrom c;\n\nThis is the result:\n\n d + i | d + justify_hours(i) \n------------------------+------------------------\n 2021-03-14 21:00:00-07 | 2021-03-14 20:00:00-07\n\nThe two results are different, even though the native equality test shows that the two different interval values are the same:\n\nwith\n c as (select '25 hours'::interval as i)\nselect (i = justify_hours(i))::text\nfrom c;\n\nThe result is TRUE.\n\nThe only route to sanity is to use only pure interval values (i.e. 
where only one of the fields of the internal [mm, dd, ss] representation is non-zero).\n\nI mentioned that you can use a set of three domain types to enforce your intended practice here.\n\nIn other words, by programming application code defensively, it’s possible to insulate oneself entirely from the emergent behavior of the decades old PG code that implements the unconstrained native interval functionality and that brings what can only be considered to be unpredictable results.\n\nMoreover, this defensive approach insulates you from any changes that Bruce’s patch might make.",
"msg_date": "Wed, 21 Jul 2021 18:39:26 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
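Bryn's justify_hours() example shows the core issue: '25 hours' is pure elapsed time while '1 day 01:00:00' is partly calendar time, and the two diverge across a DST transition. The same divergence can be reproduced outside PostgreSQL; a Python sketch using the standard zoneinfo module, with the timestamps mirroring the SQL in the message above:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

la = ZoneInfo("America/Los_Angeles")
start = datetime(2021, 3, 13, 19, 0, tzinfo=la)   # evening before spring-forward

# d + '25 hours': a pure seconds interval adds absolute elapsed time,
# so compute it through UTC.
elapsed = (start.astimezone(timezone.utc) + timedelta(hours=25)).astimezone(la)

# d + justify_hours('25 hours') = d + '1 day 01:00:00': the day part is
# calendar (wall-clock) arithmetic, which Python's naive datetime math mimics.
calendar = start + timedelta(days=1, hours=1)

print(elapsed)    # 2021-03-14 21:00:00-07:00
print(calendar)   # 2021-03-14 20:00:00-07:00
```

The one-hour gap between the two results is exactly the hour skipped on 14-Mar-2021, matching the SQL output in the message.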
{
"msg_contents": "On Wed, Jul 21, 2021 at 06:39:26PM -0700, Bryn Llewellyn wrote:\n> Your statement\n> \n> \n> “months-to-days conversion is almost always an approximation, while the\n> days to seconds conversion is almost always accurate.” \n> \n> \n> is misleading. Any conversion like these (and also the “spill up” conversions\n> that the justify_hours(), justify_days(), and justify_interval() built-in\n> functions bring) are semantically dangerous because of the different rules for\n> adding a pure months, a pure days, or a pure seconds interval to a timestamptz\n> value.\n\nWe are trying to get the most reasonable output for fractional values\n--- I stand by my statements.\n\n> Unless you avoid mixed interval values, then it’s so hard (even though it is\n> possible) to predict the outcomes of interval arithmetic. Rather, all you get\n> is emergent behavior that I fail to see can be relied upon in deliberately\n> designed application code. Here’s a telling example:\n\nThe point is that we will get unusual values, so we should do the best\nwe can.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 21:43:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 6:43 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Jul 21, 2021 at 06:39:26PM -0700, Bryn Llewellyn wrote:\n> > Your statement\n> >\n> >\n> > “months-to-days conversion is almost always an approximation, while\n> the\n> > days to seconds conversion is almost always accurate.”\n> >\n> >\n> > is misleading. Any conversion like these (and also the “spill up”\n> conversions\n> > that the justify_hours(), justify_days(), and justify_interval() built-in\n> > functions bring) are semantically dangerous because of the different\n> rules for\n> > adding a pure months, a pure days, or a pure seconds interval to a\n> timestamptz\n> > value.\n>\n> We are trying to get the most reasonable output for fractional values\n> --- I stand by my statements.\n>\n> > Unless you avoid mixed interval values, then it’s so hard (even though\n> it is\n> > possible) to predict the outcomes of interval arithmetic. Rather, all\n> you get\n> > is emergent behavior that I fail to see can be relied upon in\n> deliberately\n> > designed application code. Here’s a telling example:\n>\n> The point is that we will get unusual values, so we should do the best\n> we can.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n> Hi,\n\n- tm->tm_mon += (fval * MONTHS_PER_YEAR);\n+ tm->tm_mon += rint(fval * MONTHS_PER_YEAR);\n\nShould the handling for year use the same check as that for month ?\n\n- AdjustFractDays(fval, tm, fsec, DAYS_PER_MONTH);\n+ /* round to a full month? 
*/\n+ if (rint(fval * DAYS_PER_MONTH) == DAYS_PER_MONTH)\n\nCheers",
"msg_date": "Thu, 22 Jul 2021 14:59:42 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 2:59 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Jul 21, 2021 at 6:43 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Wed, Jul 21, 2021 at 06:39:26PM -0700, Bryn Llewellyn wrote:\n>> > Your statement\n>> >\n>> >\n>> > “months-to-days conversion is almost always an approximation, while\n>> the\n>> > days to seconds conversion is almost always accurate.”\n>> >\n>> >\n>> > is misleading. Any conversion like these (and also the “spill up”\n>> conversions\n>> > that the justify_hours(), justify_days(), and justify_interval()\n>> built-in\n>> > functions bring) are semantically dangerous because of the different\n>> rules for\n>> > adding a pure months, a pure days, or a pure seconds interval to a\n>> timestamptz\n>> > value.\n>>\n>> We are trying to get the most reasonable output for fractional values\n>> --- I stand by my statements.\n>>\n>> > Unless you avoid mixed interval values, then it’s so hard (even though\n>> it is\n>> > possible) to predict the outcomes of interval arithmetic. Rather, all\n>> you get\n>> > is emergent behavior that I fail to see can be relied upon in\n>> deliberately\n>> > designed application code. Here’s a telling example:\n>>\n>> The point is that we will get unusual values, so we should do the best\n>> we can.\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>> Hi,\n>\n> - tm->tm_mon += (fval * MONTHS_PER_YEAR);\n> + tm->tm_mon += rint(fval * MONTHS_PER_YEAR);\n>\n> Should the handling for year use the same check as that for month ?\n>\n> - AdjustFractDays(fval, tm, fsec, DAYS_PER_MONTH);\n> + /* round to a full month? 
*/\n> + if (rint(fval * DAYS_PER_MONTH) == DAYS_PER_MONTH)\n>\n> Cheers\n>\nHi,\nI guess the reason for current patch was that year to months conversion is\naccurate.\nOn the new test:\n\n+SELECT INTERVAL '1.16 months 01:00:00' AS \"One mon 5 days one hour\";\n\n0.16 * 31 = 4.96 < 5\n\nI wonder why 5 days were chosen in the test output.",
"msg_date": "Thu, 22 Jul 2021 15:17:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 03:17:52PM -0700, Zhihong Yu wrote:\n> On Thu, Jul 22, 2021 at 2:59 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Hi,\n> \n> - tm->tm_mon += (fval * MONTHS_PER_YEAR);\n> + tm->tm_mon += rint(fval * MONTHS_PER_YEAR);\n> \n> Should the handling for year use the same check as that for month ?\n> \n> - AdjustFractDays(fval, tm, fsec, DAYS_PER_MONTH);\n> + /* round to a full month? */\n> + if (rint(fval * DAYS_PER_MONTH) == DAYS_PER_MONTH)\n> \n> Cheers \n> \n> Hi,\n> I guess the reason for current patch was that year to months conversion is\n> accurate.\n\nOur internal units are hours/days/seconds, so the spill _up_ from months\nto years happens automatically:\n\n\tSELECT INTERVAL '23.99 months';\n\t interval\n\t----------\n\t 2 years\n\n> On the new test:\n> \n> +SELECT INTERVAL '1.16 months 01:00:00' AS \"One mon 5 days one hour\";\n> \n> 0.16 * 31 = 4.96 < 5\n> \n> I wonder why 5 days were chosen in the test output.\n\nWe use 30 days/month, not 31. However, I think you are missing the\nchanges in the patch and I am just understanding them fully now. There\nare two big changes:\n\n1. The amount of spill from months only to days\n2. The _rounding_ of the result once we stop spilling at months or days\n\n#2 is the part I think you missed.\n\nOne thing missing from my previous patch was the handling of negative\nunits, which is now handled properly in the attached patch:\n\n\tSELECT INTERVAL '-1.99 years';\n\t interval\n\t----------\n\t -2 years\n\n\tSELECT INTERVAL '-1.99 months';\n\t interval\n\t----------\n\t -2 mons\n\nI ended up creating a function to handle this, which allowed me to\nsimplify some of the surrounding code.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 23 Jul 2021 11:05:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 23-Jul-2021, at 08:05, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Jul 22, 2021 at 03:17:52PM -0700, Zhihong Yu wrote:\n>> On Thu, Jul 22, 2021 at 2:59 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> Hi,\n>> \n>> - tm->tm_mon += (fval * MONTHS_PER_YEAR);\n>> + tm->tm_mon += rint(fval * MONTHS_PER_YEAR);\n>> \n>> Should the handling for year use the same check as that for month ?\n>> \n>> - AdjustFractDays(fval, tm, fsec, DAYS_PER_MONTH);\n>> + /* round to a full month? */\n>> + if (rint(fval * DAYS_PER_MONTH) == DAYS_PER_MONTH)\n>> \n>> Cheers \n>> \n>> Hi,\n>> I guess the reason for current patch was that year to months conversion is\n>> accurate.\n> \n> Our internal units are hours/days/seconds, so the spill _up_ from months\n> to years happens automatically:\n> \n> \tSELECT INTERVAL '23.99 months';\n> \t interval\n> \t----------\n> \t 2 years\n> \n>> On the new test:\n>> \n>> +SELECT INTERVAL '1.16 months 01:00:00' AS \"One mon 5 days one hour\";\n>> \n>> 0.16 * 31 = 4.96 < 5\n>> \n>> I wonder why 5 days were chosen in the test output.\n> \n> We use 30 days/month, not 31. However, I think you are missing the\n> changes in the patch and I am just understanding them fully now. There\n> are two big changes:\n> \n> 1. The amount of spill from months only to days\n> 2. 
The _rounding_ of the result once we stop spilling at months or days\n> \n> #2 is the part I think you missed.\n> \n> One thing missing from my previous patch was the handling of negative\n> units, which is now handled properly in the attached patch:\n> \n> \tSELECT INTERVAL '-1.99 years';\n> \t interval\n> \t----------\n> \t -2 years\n> \n> \tSELECT INTERVAL '-1.99 months';\n> \t interval\n> \t----------\n> \t -2 mons\n> \n> I ended up creating a function to handle this, which allowed me to\n> simplify some of the surrounding code.\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://www.google.com/url?q=https://momjian.us&source=gmail-imap&ust=1627657554000000&usg=AOvVaw2pMx7QBd3qSjHK1L9oUnl0\n> EDB https://www.google.com/url?q=https://enterprisedb.com&source=gmail-imap&ust=1627657554000000&usg=AOvVaw2Q92apfhXmqqFYz7aN16YL\n> \n> If only the physical world exists, free will is an illusion.\n> \n> <interval.diff.gz>\n> \n\nWill the same new spilldown rules hold in the same way for interval multiplication and division as they will for the interpretation of an interval literal?\n\nThe semantics here are (at least as far as my limited search skills have shown me) simply undocumented. But my tests in 13.3 have to date not disproved this hypothesis:\n\n* considering \"new_i ◄— i * f\"\n\n* # notice that the internal representation is _months_, days, and seconds at odds with \"Our internal units are hours/days/seconds,\"\n* let i’s internal representation be [mm, dd, ss] \n\n* new_i’s “intermediate” internal representation is [mm*f, dd*f, ss*f]\n\n* input these values to the same spilldown algorithm that is applied when these same intermediate values are used in an interval literal\n\n* so the result is [new_mm, new_dd, new_ss]\n\nHere’s an example:\n\nselect\n '1.2345 months 1.2345 days 1.2345 seconds'::interval = \n '1 month 1 day 1 second'::interval*1.2345;\n\nIn 13.3, the result is TRUE. 
(I know that this doesn’t guarantee that the internal representations of the two compared interval values are the same. But it’s a necessary condition for the outcome that I’m referring to and serves to indicate the point I’m making. A more careful test can be made.)",
"msg_date": "Fri, 23 Jul 2021 10:55:11 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 10:55:11AM -0700, Bryn Llewellyn wrote:\n> SELECT\n> '1.2345 months 1.2345 days 1.2345 seconds'::interval = \n> '1 month 1 day 1 second'::interval*1.2345;\n> \n> In 13.3, the result is TRUE. (I know that this doesn’t guarantee that the\n> internal representations of the two compared interval values are the same. But\n> it’s a necessary condition for the outcome that I’m referring to and serves to\n> indicate the point I’m making. A more careful test can be made.)\n\nSo you are saying fractional unit output should match multiplication\noutput? It doesn't now for all units:\n\n\tSELECT interval '1.3443 years';\n\t interval\n\t---------------\n\t 1 year 4 mons\n\t\n\tSELECT interval '1 years' * 1.3443;\n\t ?column?\n\t---------------------------------\n\t 1 year 4 mons 3 days 22:45:07.2\n\nIt is true this patch is further reducing that matching. Do people\nthink I should make them match as part of this patch?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 23 Jul 2021 16:27:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
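[Editor's note] The divergence Bruce shows here can be reproduced with a small Python model. This is only a sketch of the behaviour visible in the psql output above (the literal parser stops at whole months, while multiplication cascades the remainder through 30-day months and 86400-second days); the function names are mine, not PostgreSQL's.

```python
import math

MONTHS_PER_YEAR = 12
DAYS_PER_MONTH = 30
SECS_PER_DAY = 86400

def literal_years(years):
    """interval 'N.NNNN years': keep whole months, drop the remainder."""
    return (math.trunc(years * MONTHS_PER_YEAR), 0, 0.0)

def multiply_years(years, factor):
    """interval 'N years' * f: cascade months -> days -> seconds."""
    m = years * factor * MONTHS_PER_YEAR
    mm = math.trunc(m)
    d = (m - mm) * DAYS_PER_MONTH
    dd = math.trunc(d)
    return (mm, dd, (d - dd) * SECS_PER_DAY)

# literal_years(1.3443)     -> 16 whole months, i.e. '1 year 4 mons'
# multiply_years(1, 1.3443) -> 16 months, 3 days, ~81907.2 s (22:45:07.2)
```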
{
"msg_contents": "> On 23-Jul-2021, bruce@momjian.us wrote:\n> \n> On Fri, Jul 23, 2021 at 10:55:11AM -0700, Bryn Llewellyn wrote:\n>> SELECT\n>> '1.2345 months 1.2345 days 1.2345 seconds'::interval = \n>> '1 month 1 day 1 second'::interval*1.2345;\n>> \n>> In 13.3, the result is TRUE. (I know that this doesn’t guarantee that the\n>> internal representations of the two compared interval values are the same. But\n>> it’s a necessary condition for the outcome that I’m referring to and serves to\n>> indicate the point I’m making. A more careful test can be made.)\n> \n> So you are saying fractional unit output should match multiplication\n> output? It doesn't now for all units:\n> \n> \tSELECT interval '1.3443 years';\n> \t interval\n> \t---------------\n> \t 1 year 4 mons\n> \t\n> \tSELECT interval '1 years' * 1.3443;\n> \t ?column?\n> \t---------------------------------\n> \t 1 year 4 mons 3 days 22:45:07.2\n> \n> It is true this patch is further reducing that matching. Do people\n> think I should make them match as part of this patch?\n\nSummary:\n--------\n\nIt seems to me that the best thing to do is fix the coarse bug about which there is unanimous agreement and to leave everything else (quirks and all) untouched.\n\nDetail:\n-------\n\nMy previous email (to which Bruce replies here) was muddled. Sorry. The challenge is that there are a number of mechanisms at work. Their effects are conflated. And it’s very hard to unscramble them.\n\nThe underlying problem is that the internal representation of an interval value is a [mm, dd, ss] tuple. This fact is documented. The representation is key to understanding the definitions of these operations:\n\n— defining an interval value from a text literal that uses real number values for its various fields.\n\n— defining an interval value from make_interval(). (This one is easy because the API requires integral values for all but the seconds argument. It would be interesting to know why this asymmetrical definition was implemented. 
It seems to imply that somebody thought that spilldown was a bad idea and should be prevented before it might happen.)\n\n— creating the text typecast of an extant interval value.\n\n— creating an interval value by adding/subtracting an extant interval value to/from another\n\n— creating an interval value by multiplying or dividing an extant interval value by a (real) number\n\n— creating an interval value by subtracting a pair of moments of the same data type (timestamptz, plain timestamp, or time)\n\n— creating a new moment value by adding or subtracting an extant interval value to an extant moment value of the same data type.\n\n— creating an interval value by applying justify_hours(i), justify_days(i), and justify_interval(i) to an extant interval value, i.\n\n— creating a double precision value by applying extract(epoch from i) \nto an extant interval value, i.\n\n— evaluating inequality and equality tests to compare two extant interval values.\n\nNotice that, for example, this test:\n\nselect\n  interval '1.3443 years' as i1,\n  interval '1 years' * 1.3443 as i2;\n\nconflates three things: converting an interval literal to a [mm, dd, ss] tuple; multiplying a [mm, dd, ss] tuple by a real number; and converting a [mm, dd, ss] tuple to a text representation. Similarly, this test:\n\nselect\n  interval '1.3443 years' <\n  interval '1 years' * 1.3443;\n\nconflates three things: converting an interval literal to a [mm, dd, ss] tuple; multiplying a [mm, dd, ss] tuple by a real number; and inequality comparison of two [mm, dd, ss] tuples.\n\nAs far as I’ve been able to tell, the PG documentation doesn’t do a good job of defining the semantics of any of these operations. Some (like the “justify” functions) are sketched reasonably well. Others, like interval multiplication, are entirely undefined.\n\nThis makes discussion of simple tests like the two I showed immediately above hard. 
It also makes any discussion of correctness, possible bugs, and proposed implementation changes very difficult.\n\nFurther, it also makes it hard to see how tests for application code that uses any of these operations can be designed. The normal approach relies on testing that you get what you expect. But here, you don't know what to expect—unless (as I’ve done) you first assert hypotheses for the undefined operations and test them with programmed simulations. Of course, this is, in general, an unreliable approach. The only way to know what code is intended to do is to read the prose specification that informs the implementation.\n\nI had forgotten one piece of the long history of this thread. Soon after I presented the testcase that folks agree shows a clear bug, I asked about the rules for creating the internal [mm, dd, ss] tuple from a text literal that uses real numbers for the fields. My tests showed two things: (1) an intuitively clear model for the spilldown of nonintegral months to days and then, in turn, of nonintegral days to seconds; and (2) a quirky rule for deriving intermediate months from fractional years and fractional months before then using the more obvious rules to spill to days. (This defies terse expression in prose. I copied my PL/pgSQL implementation below.)\n\nThere was initially some discussion about changing the implementation of the spill-down from [years, months] in the interval literal to the ultimate [mm, dd, ss] representation. This is what Bruce is asking about. And it's what I was muddled about.\n\nAs I’ve said, my conclusion is that the only safe approach is to create and use only “pure” interval values (where just one of the internal fields is non-zero). 
For this reason (and having seen what I decided was the impossibly unmemorable rules that my modeled implementation uses) I didn’t look at the rules for the other fields that the interval literal allows (weeks, centuries, millennia, and so on).\n\n--------------------------------------------------------------------------------\nmm_trunc constant int not null := trunc(p.mm);\nmm_remainder constant double precision not null := p.mm - mm_trunc::double precision;\n\n-- This is a quirk.\nmm_out constant int not null := trunc(p.yy*mm_per_yy) + mm_trunc;\n\ndd_real_from_mm constant double precision not null := mm_remainder*dd_per_mm;\n\ndd_int_from_mm constant int not null := trunc(dd_real_from_mm);\ndd_remainder_from_mm constant double precision not null := dd_real_from_mm - dd_int_from_mm::double precision;\n\ndd_int_from_user constant int not null := trunc(p.dd);\ndd_remainder_from_user constant double precision not null := p.dd - dd_int_from_user::double precision;\n\ndd_out constant int not null := dd_int_from_mm + dd_int_from_user;\n\nd_remainder constant double precision not null := dd_remainder_from_mm + dd_remainder_from_user;\n\nss_out constant double precision not null := d_remainder*ss_per_dd +\n p.hh*ss_per_hh +\n p.mi*ss_per_mi +\n p.ss;\n--------------------------------------------------------------------------------",
"msg_date": "Sun, 25 Jul 2021 11:56:54 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
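[Editor's note] Bryn's PL/pgSQL fragment above ports to Python almost line for line. This is a sketch of his hypothesised 13.3 model, not PostgreSQL source: mm_per_yy = 12, dd_per_mm = 30, and an 86400-second day are assumed, and the "quirk" comment marks the same line it marks in the original.

```python
import math

MM_PER_YY = 12
DD_PER_MM = 30
SS_PER_DD = 86400
SS_PER_HH = 3600
SS_PER_MI = 60

def to_mm_dd_ss(yy=0.0, mm=0.0, dd=0.0, hh=0.0, mi=0.0, ss=0.0):
    """Hypothesised 13.3 spilldown: fractional months spill to days and
    fractional days spill to seconds, but the fraction of a year left over
    after conversion to whole months is dropped."""
    mm_trunc = math.trunc(mm)
    mm_remainder = mm - mm_trunc
    # This is a quirk.
    mm_out = math.trunc(yy * MM_PER_YY) + mm_trunc

    dd_real_from_mm = mm_remainder * DD_PER_MM
    dd_int_from_mm = math.trunc(dd_real_from_mm)
    dd_remainder_from_mm = dd_real_from_mm - dd_int_from_mm

    dd_int_from_user = math.trunc(dd)
    dd_remainder_from_user = dd - dd_int_from_user

    dd_out = dd_int_from_mm + dd_int_from_user
    d_remainder = dd_remainder_from_mm + dd_remainder_from_user

    ss_out = d_remainder * SS_PER_DD + hh * SS_PER_HH + mi * SS_PER_MI + ss
    return mm_out, dd_out, ss_out
```

With mm = dd = ss = 1.234 this gives 1 month, 8 days, ~21946.834 s, i.e. "1 mon 8 days 06:05:46.834", matching the 13.3 result quoted later in the thread; with yy = 1.2345 it gives (14, 0, 0.0), i.e. "1 year 2 mons", exhibiting the quirk.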
{
"msg_contents": "On Sun, Jul 25, 2021 at 11:56:54AM -0700, Bryn Llewellyn wrote:\n> As far as I’ve been able, the PG documentation doesn’t do a good job of\n> defining the semantics of any of these operations. Some (like the “justify”\n\nThis is because fractional interval values are not used or asked about\noften.\n\n> functions” are sketched reasonably well. Others, like interval multiplication,\n> are entirely undefined.\n\nYes, the “justify” functions were requested and implemented because they\nmet a frequently-requested need unrelated to fractional values, though\nthey do have spill-up uses.\n\n> This makes discussion of simple test like the two I showed immediately above\n> hard. It also makes any discussion of correctness, possible bugs, and proposed\n> implementation changes very difficult.\n\nAgreed. With fractional values an edge use-case, we are trying to find\nthe most useful implementation.\n\n> As I’ve said, my conclusion is that the only safe approach is to create and use\n> only “pure” interval values (where just one of the internal fields is\n> non-zero). For this reason (and having seen what I decided was the impossibly\n> unmemorable rules that my modeled implementation uses) I didn’t look at the\n> rules for the other fields that the interval literal allows (weeks, centuries,\n> millennia, and so on).\n\nI think the current page is clear about _specifying_ fractional units,\nbut you are right that multiplication/division of fractional values is\nnot covered.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 26 Jul 2021 13:35:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 09:02:29PM -0400, Bruce Momjian wrote:\n> On Fri, Apr 2, 2021 at 05:50:59PM -0700, Bryn Llewellyn wrote:\n> > are the user’s parameterization. All are real numbers. Because non-integral\n> > values for years, months, days, hours, and minutes are allowed when you specify\n> > a value using the ::interval typecast, my reference doc must state the rules. I\n> > would have struggled to express these rules in prose—especially given the use\n> > both of trunc() and floor(). I would have struggled more to explain what\n> > requirements these rules meet.\n> \n> The fundamental issue is that while months, days, and seconds are\n> consistent in their own units, when you have to cross from one unit to\n> another, it is by definition imprecise, since the interval is not tied\n> to a specific date, with its own days-of-the-month and leap days and\n> daylight savings time changes. It feels like it is going to be\n> imprecise no matter what we do.\n> \n> Adding to this is the fact that interval values are stored in C 'struct\n> tm' defined in libc's ctime(), where months are integers, so carrying\n> around non-integer month values until we get a final result would add a\n> lot of complexity, and complexity to a system that is by definition\n> imprecise, which doesn't seem worth it.\n\nI went ahead and modified the interval multiplication/division functions\nto use the same logic as fractional interval units:\n\n\tSELECT interval '23 mons';\n\t interval\n\t----------------\n\t 1 year 11 mons\n\t\n\tSELECT interval '23 mons' / 2;\n\t ?column?\n\t-----------------\n\t 11 mons 15 days\n\t\n\tSELECT interval '23.5 mons';\n\t interval\n\t------------------------\n\t 1 year 11 mons 15 days\n\t\n\tSELECT interval '23.5 mons' / 2;\n\t ?column?\n\t--------------------------\n\t 11 mons 22 days 12:00:00\n\nI think the big issue is that the casting converts the interval into integer\nmons/days/secs, so we can no longer make the distinction of units >\nmonths vs. months.\n\nUsing Bryn's example, the master branch output is:\n\n\tSELECT\n\t interval '1.3443 years' as i1,\n\t interval '1 years' * 1.3443 as i2;\n\t i1 | i2\n\t---------------+---------------------------------\n\t 1 year 4 mons | 1 year 4 mons 3 days 22:45:07.2\n\nand the attached patch output is:\n\n\tSELECT\n\t interval '1.3443 years' as i1,\n\t interval '1 years' * 1.3443 as i2;\n\t i1 | i2\n\t---------------+----------------------\n\t 1 year 4 mons | 1 year 4 mons 4 days\n\nwhich looks like an improvement.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Tue, 27 Jul 2021 15:36:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
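[Editor's note] The patched behaviour Bruce shows can likewise be sketched in Python, under the reading that a fractional month spills to a rounded whole number of days while a fractional day still cascades into seconds. This is my reading of the psql output above, not the patch itself; 30-day months and 86400-second days are assumed.

```python
import math

DAYS_PER_MONTH = 30
SECS_PER_DAY = 86400

def patched_spill(months, days=0.0, seconds=0.0):
    """Sketch of the patched spilldown: month fractions become rounded
    whole days (carrying back up if they round to a full month); day
    fractions still cascade into seconds."""
    mm = math.trunc(months)
    extra_days = round((months - mm) * DAYS_PER_MONTH)
    if abs(extra_days) >= DAYS_PER_MONTH:  # round back up to a full month
        mm += 1 if extra_days > 0 else -1
        extra_days = 0
    total_days = extra_days + days
    dd = math.trunc(total_days)
    ss = (total_days - dd) * SECS_PER_DAY + seconds
    return mm, dd, ss

# '23.5 mons' / 2: '23.5 mons' first parses to 23 months 15 days; halving
# gives patched_spill(11.5, 7.5) -> 11 mons 22 days 12:00:00.
# '1 years' * 1.3443: patched_spill(16.1316) -> '1 year 4 mons 4 days'.
```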
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I went ahead and modified the interval multiplication/division functions\n> to use the same logic as fractional interval units:\n\nWait. A. Minute.\n\nWhat I think we have consensus on is that interval_in is doing the\nwrong thing in a particular corner case. I have heard nobody but\nyou suggesting that we should start undertaking behavioral changes\nin other interval functions, and I don't believe that that's a good\nroad to start going down. These behaviors have stood for many years.\nMoreover, since the whole thing is by definition operating with\ninadequate information, it is inevitable that for every case you\nmake better there will be another one you make worse.\n\nI'm really not on board with changing anything except interval_in,\nand even there, we had better be certain that everything we change\nis a case that is certainly being made better.\n\nBTW, please do not post patches as gzipped attachments, unless\nthey're enormous. You're just adding another step making it\nharder for people to look at them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:01:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 04:01:54PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I went ahead and modified the interval multiplication/division functions\n> > to use the same logic as fractional interval units:\n> \n> Wait. A. Minute.\n> \n> What I think we have consensus on is that interval_in is doing the\n> wrong thing in a particular corner case. I have heard nobody but\n> you suggesting that we should start undertaking behavioral changes\n> in other interval functions, and I don't believe that that's a good\n> road to start going down. These behaviors have stood for many years.\n> Moreover, since the whole thing is by definition operating with\n> inadequate information, it is inevitable that for every case you\n> make better there will be another one you make worse.\n\nBryn mentioned this so I thought I would see what the result looks like.\nI am fine to skip them.\n\n> I'm really not on board with changing anything except interval_in,\n> and even there, we had better be certain that everything we change\n> is a case that is certainly being made better.\n\nWell, I think what I had before the multiply/divide changes were\nacceptable to everyone except Bryn, who was looking for more\nconsistency.\n \n> BTW, please do not post patches as gzipped attachments, unless\n> they're enormous. You're just adding another step making it\n> harder for people to look at them.\n\nOK, what is large for you? 100k bytes? I was using 10k bytes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 17:13:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "> On 27-Jul-2021, at 14:13, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Tue, Jul 27, 2021 at 04:01:54PM -0400, Tom Lane wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> I went ahead and modified the interval multiplication/division functions\n>>> to use the same logic as fractional interval units:\n>> \n>> Wait. A. Minute.\n>> \n>> What I think we have consensus on is that interval_in is doing the\n>> wrong thing in a particular corner case. I have heard nobody but\n>> you suggesting that we should start undertaking behavioral changes\n>> in other interval functions, and I don't believe that that's a good\n>> road to start going down. These behaviors have stood for many years.\n>> Moreover, since the whole thing is by definition operating with\n>> inadequate information, it is inevitable that for every case you\n>> make better there will be another one you make worse.\n> \n> Bryn mentioned this so I thought I would see what the result looks like.\n> I am fine to skip them.\n> \n>> I'm really not on board with changing anything except interval_in,\n>> and even there, we had better be certain that everything we change\n>> is a case that is certainly being made better.\n> \n> Well, I think what I had before the multiply/divide changes were\n> acceptable to everyone except Bryn, who was looking for more\n> consistency.\n> \n>> BTW, please do not post patches as gzipped attachments, unless\n>> they're enormous. You're just adding another step making it\n>> harder for people to look at them.\n> \n> OK, what is large for you? 100k bytes? I was using 10k bytes.\n\nBefore I say anything else, I’ll stress what I wrote recently (under the heading “summary”). 
I support Tom’s idea that the only appropriate change to make is to fix only the exactly self-evident bug that I reported at the start of this thread.\n\nI fear that Bruce doesn’t understand my point about interval multiplication (which includes multiplying by a number whose absolute value lies between 0 and 1). Here it is. I believe that the semantics are (and should be) defined like this:\n\n[mm, dd, ss]*n == post_spilldown([mm*n, dd*n, ss*n])\n\nwhere the function post_spilldown() applies the rules that are used when an interval literal that specifies only values for months, days, and seconds is converted to the internal [mm, dd, ss] representation—where mm and dd are 4-byte integers and ss is an 8-byte integer that represents microseconds.\n\nHere’s a simple test that’s consistent with that hypothesis:\n\nwith\n  c1 as (\n    select\n      '1 month 1 day 1 second'::interval as i1,\n      '1.234 month 1.234 day 1.234 second'::interval as i3),\n\n  c2 as (\n    select i1*1.234 as i2, i3 from c1)\n\nselect i2::text as i2_txt, i3::text as i3_txt from c2;\n\nHere’s the result:\n\n          i2_txt           |          i3_txt \n---------------------------+---------------------------\n 1 mon 8 days 06:05:46.834 | 1 mon 8 days 06:05:46.834\n\nSo I’m so far happy.\n\nBut, like I said, I’d forgotten an orthogonal quirk. This test shows it. It’s informed by the fact that 1.2345*12.0 is 14.8140.\n\nselect\n  ('1.2345 years' ::interval)::text as i1_txt,\n  ('14.8140 months'::interval)::text as i2_txt;\n\nHere’s the result:\n\n    i1_txt     |             i2_txt \n---------------+--------------------------------\n 1 year 2 mons | 1 year 2 mons 24 days 10:04:48\n\nIt seems to me to be crazy behavior. I haven’t found any account of it in the PG docs. Others have argued that it’s a sensible result. Anyway, I don’t believe that I’ve ever argued that it’s a bug. I wanted only to know what rationale informed the design. 
I agree that changing the behavior here would be problematic for extant code.\n \nThis quirk explains the outcome of this test:\n\nselect\n ('1.2345 years'::interval)::text as i1_txt,\n ('14.8140 months'::interval)::text as i2_txt,\n (1.2345*('1 years'::interval))::text as i3_txt;\n\nThis is the result:\n\n i1_txt | i2_txt | i3_txt \n---------------+--------------------------------+--------------------------------\n 1 year 2 mons | 1 year 2 mons 24 days 10:04:48 | 1 year 2 mons 24 days 10:04:48\n\nNotice that the same text is reported for i2_txt as for i3_txt.\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 15:36:37 -0700",
"msg_from": "Bryn Llewellyn <bryn@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 3:36 PM Bryn Llewellyn <bryn@yugabyte.com> wrote:\n\n>\n> with\n> c1 as (\n> select\n> '1 month 1 day 1 second'::interval as i1,\n> '1.234 month 1.234 day 1.234 second'::interval as i3),\n>\n> c2 as (\n> select i1*1.234 as i2, i3 from c1)\n>\n> select i2::text as i2_txt, i3::text from c2 as i3_txt;\n>\n>\nIt's nice to envision all forms of fancy calculations. But the fact is that\n\n'1.5 month'::interval * 2 != '3 month\"::interval\n\nwith any of these patches - and if that doesn't work - the rest of the\nstrange numbers really seem to be irrelevant.\n\nIf there is a desire to handle fractional cases - then all pieces need to\nbe held as provided until they are transformed into something. In other\nwords - 1.5 month needs to be held as 1.5 month until we ask for it to be\nreduced to 1 month and 15 days at some point. If the interval data type\nimmediately casts 1.5 months to 1 month 15 days then all subsequent\ncalculations are going to be wrong.\n\nI appreciate there is generally no way to accomplish this right now - but\nthat means walking away from things like 1 month * 1.234 as being not\ncalculable as opposed to trying to piece something together that fails\npretty quickly.\n\nJohn\n",
"msg_date": "Tue, 27 Jul 2021 16:08:22 -0700",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 00:08, John W Higgins <wishdev@gmail.com> wrote:\n>\n> It's nice to envision all forms of fancy calculations. But the fact is that\n>\n> '1.5 month'::interval * 2 != '3 month\"::interval\n>\n\nThat's not exactly true. Even without the patch:\n\nSELECT '1.5 month'::interval * 2 AS product,\n '3 month'::interval AS expected,\n justify_interval('1.5 month'::interval * 2) AS justified_product,\n '1.5 month'::interval * 2 = '3 month'::interval AS equal;\n\n product | expected | justified_product | equal\n----------------+----------+-------------------+-------\n 2 mons 30 days | 3 mons | 3 mons | t\n(1 row)\n\nSo it's equal even without calling justify_interval() on the result.\n\nFWIW, I remain of the opinion that the interval literal code should\njust spill down to lower units in all cases, just like the\nmultiplication and division code, so that the results are consistent\n(barring floating point rounding errors) and explainable.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 28 Jul 2021 08:42:31 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 12:42 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Wed, 28 Jul 2021 at 00:08, John W Higgins <wishdev@gmail.com> wrote:\n> >\n> > It's nice to envision all forms of fancy calculations. But the fact is\n> that\n> >\n> > '1.5 month'::interval * 2 != '3 month\"::interval\n> >\n>\n> That's not exactly true. Even without the patch:\n>\n> SELECT '1.5 month'::interval * 2 AS product,\n> '3 month'::interval AS expected,\n> justify_interval('1.5 month'::interval * 2) AS justified_product,\n> '1.5 month'::interval * 2 = '3 month'::interval AS equal;\n>\n> product | expected | justified_product | equal\n> ----------------+----------+-------------------+-------\n> 2 mons 30 days | 3 mons | 3 mons | t\n> (1 row)\n>\n>\nThat's viewing something via the mechanism that is incorrectly (technically\nspeaking) doing the work in the first place. It believes they are the same\n- but they are clearly not when actually used.\n\nselect '1/1/2001'::date + (interval '3 month');\n ?column?\n---------------------\n 2001-04-01 00:00:00\n(1 row)\n\nvs\n\nselect '1/1/2001'::date + (interval '1.5 month' * 2);\n ?column?\n---------------------\n 2001-03-31 00:00:00\n(1 row)\n\nThat's the flaw in this entire body of work - we keep taking fractional\namounts - doing round offs and then trying to add or multiply the pieces\nback and ending up with weird floating point math style errors. That's\nnever to complain about it - but we shouldn't be looking at edge cases with\nthings like 1 month * 1.234 when 1.5 months * 2 doesn't work properly.\n\nJohn\n\nP.S. Finally we have items like this\n\nselect '12/1/2001'::date + (interval '1.5 months' * 2);\n ?column?\n---------------------\n 2002-03-03 00:00:00\n(1 row)\n\npostgres=# select '1/1/2001'::date + (interval '1.5 months' * 2);\n ?column?\n---------------------\n 2001-03-31 00:00:00\n(1 row)\n\nWhich only has a 28 day gap because of the length of February - clearly\nthis is not working quite right.\n",
"msg_date": "Wed, 28 Jul 2021 07:23:50 -0700",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 08:42:31AM +0100, Dean Rasheed wrote:\n> On Wed, 28 Jul 2021 at 00:08, John W Higgins <wishdev@gmail.com> wrote:\n> >\n> > It's nice to envision all forms of fancy calculations. But the fact is that\n> >\n> > '1.5 month'::interval * 2 != '3 month\"::interval\n> >\n> \n> That's not exactly true. Even without the patch:\n> \n> SELECT '1.5 month'::interval * 2 AS product,\n> '3 month'::interval AS expected,\n> justify_interval('1.5 month'::interval * 2) AS justified_product,\n> '1.5 month'::interval * 2 = '3 month'::interval AS equal;\n> \n> product | expected | justified_product | equal\n> ----------------+----------+-------------------+-------\n> 2 mons 30 days | 3 mons | 3 mons | t\n> (1 row)\n> \n> So it's equal even without calling justify_interval() on the result.\n> \n> FWIW, I remain of the opinion that the interval literal code should\n> just spill down to lower units in all cases, just like the\n> multiplication and division code, so that the results are consistent\n> (barring floating point rounding errors) and explainable.\n\nHere is a more minimal patch that doesn't change the spill-down units at\nall, but merely documents it, and changes the spilldown to months to\nround instead of truncate.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 28 Jul 2021 11:19:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I think we have consensus on is that interval_in is doing the\n> wrong thing in a particular corner case. I have heard nobody but\n> you suggesting that we should start undertaking behavioral changes\n> in other interval functions, and I don't believe that that's a good\n> road to start going down. These behaviors have stood for many years.\n> Moreover, since the whole thing is by definition operating with\n> inadequate information, it is inevitable that for every case you\n> make better there will be another one you make worse.\n\nI agree that we need to be really conservative here. I think Tom is\nright that if we start changing behaviors that \"seem wrong,\" we will\nprobably make some things better and other things worse. The overall\namount of stuff that \"seems wrong\" will probably not go down, but a\nlot of people's applications will break when they try to upgrade to\nv15. That's not going to be a win overall.\n\nI think a lot of the discussion on this thread consists of people\nhoping for things that are not very realistic. The interval type\nrepresents the number of months as an integer, and the number of days\nas an integer. That means that an interval like '0.7 months' does not\nreally exist. If you ask for that interval what you get is actually 21\ndays, which is a reasonable approximation of 0.7 months but not the\nsame thing, except in April, June, September, and November. So when\nyou then say that you want 0.7 months + 0.3 months to equal 1.0\nmonths, what you're really requesting is that 21 days + 9 days = 1\nmonth. That system has been tried in the past, but it was abandoned\nroughly around the time of Julius Caesar for the very good reason that\nthe orbital period of the earth about the sun is noticeably greater\nthan 360 days.\n\nIt would be entirely possible to design a data type that could\nrepresent such values more exactly. 
A data type that had a\nrepresentation similar to interval but with double values for the\nnumbers of years and months would be able to compute 0.7 months + 0.3\nmonths and get 1.0 months with no problem.\n\nIf we were doing this over again, I would argue that, with this\non-disk representation, 0.7 months ought to be rejected as invalid\ninput, because it's generally not a good idea to have a data type that\nsilently converts a value into a different value that is not\nequivalent for all purposes. It is confusing and causes people to\nexpect behavior different from what they will actually get. Now, it\nseems far too late to consider such a change at this point ... and it\nis also no good considering a change to the on-disk representation of\nthe existing data type at this point ... but it is also no good\npretending like we have a floating-point representation of months and\ndays when we actually do not.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Jul 2021 11:31:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If we were doing this over again, I would argue that, with this\n> on-disk representation, 0.7 months ought to be rejected as invalid\n> input, because it's generally not a good idea to have a data type that\n> silently converts a value into a different value that is not\n> equivalent for all purposes. It is confusing and causes people to\n> expect behavior different from what they will actually get. Now, it\n> seems far too late to consider such a change at this point ... and it\n> is also no good considering a change to the on-disk representation of\n> the existing data type at this point ... but it is also no good\n> pretending like we have a floating-point representation of months and\n> days when we actually do not.\n\nYou know, I was thinking exactly that thing earlier. Changing the\non-disk representation is certainly a nonstarter, but the problem\nhere stems from expecting interval_in to do something sane with inputs\nthat do not correspond to any representable value. I do not think we\nhave any other datatypes where we expect the input function to make\nchoices like that.\n\nIs it really too late to say \"that was a damfool idea\" and drop fractional\nyears/months/etc from interval_in's lexicon? By definition, this will not\ncreate any dump/reload problems, because interval_out will never emit any\nsuch thing. It will break applications that are expecting such syntax to\ndo something sane. But that expectation is fundamentally not meetable,\nso maybe we should just make a clean break. (Without back-patching it,\nof course.)\n\nI'm not entirely sure about whether to reject fractional days, though.\nConverting those on the assumption of 1 day == 24 hours is not quite\ntheoretically right. But it's unsurprising, which is not something\nwe can say about fractions of the larger units. So maybe it's OK to\ncontinue accepting that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 11:52:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 11:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You know, I was thinking exactly that thing earlier. Changing the\n> on-disk representation is certainly a nonstarter, but the problem\n> here stems from expecting interval_in to do something sane with inputs\n> that do not correspond to any representable value. I do not think we\n> have any other datatypes where we expect the input function to make\n> choices like that.\n\nIt's not exactly the same issue, but the input function whose behavior\nmost regularly trips people up is bytea, because they try something\nlike 'x'::bytea and it seems to DWTW so then they try it on all their\ndata and discover that, for example, '\\'::bytea fails outright, or\nthat ''::bytea = '\\x'::bytea, contrary to expectations. People often\nseem to think that casting to bytea should work like convert_to(), but\nit doesn't. As in the case at hand, byteain() has to guess whether the\ninput is intended to be the 'hex' or 'escape' format, and because the\n'escape' format looks a lot like plain old text, confusion ensues.\nNow, guessing between two input formats that are both legal for the\ndata type is not exactly the same as guessing what to do with a value\nthat's not directly representable, but it has the same ultimate effect\ni.e. the human beings perceive the system as buggy.\n\nA case that is perhaps more theoretically similar to the instance at\nhand is rounding during the construction of floating point values. My\nsystem thinks '1.00000000000000000000000001'::float = '1'::float, so\nin that case, as in this one, we've decided that it's OK to build an\ninexact representation of the input value. 
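The float case is easy to reproduce outside the database; Python's float is the same IEEE 754 double, so (as a quick illustration, not a claim about any particular platform's libc):

```python
# An IEEE 754 double cannot distinguish these two inputs: the parser
# silently builds the nearest representable binary value for each,
# and both round to exactly 1.0.
a = float('1.00000000000000000000000001')
b = float('1')
print(a == b)  # True
```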
I don't really see what can\nbe done about this considering that the textual representation uses\nbase 10 and the internal representation uses base 2, but I think this\ndoesn't cause us as many problems in practice because people\nunderstand how it works, which doesn't seem to be the case with the\ninterval data type, at least if this thread is any indication.\n\nI am dubious that it's worth the pain of making the input function\nreject cases involving fractional units. It's true that some people\nhere aren't happy with the current behavior, but they may be no happier\nif we reject those cases with an error, and other people may then be\nunhappy too. I think your previous idea was the best one so far: fix\nthe input function so that 'X years Y months' and 'Y months X years'\nalways produce the same answer, and call it good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:32:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jul 28, 2021 at 11:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You know, I was thinking exactly that thing earlier. Changing the\n>> on-disk representation is certainly a nonstarter, but the problem\n>> here stems from expecting interval_in to do something sane with inputs\n>> that do not correspond to any representable value. I do not think we\n>> have any other datatypes where we expect the input function to make\n>> choices like that.\n\n> A case that is perhaps more theoretically similar to the instance at\n> hand is rounding during the construction of floating point values. My\n> system thinks '1.00000000000000000000000001'::float = '1'::float, so\n> in that case, as in this one, we've decided that it's OK to build an\n> inexact representation of the input value.\n\nFair point, but you decided when you chose to use float that you don't\ncare about the differences between numbers that only differ at the\nseventeenth or so decimal place. (Maybe, if you don't understand what\nfloat is, you didn't make that choice intentionally ... but that's\na documentation issue not a code shortcoming.) However, it's fairly\nhard to believe that somebody who writes '1.4 years'::interval doesn't\ncare about the 0.4 year. The fact that we silently convert that to,\neffectively, 1.33333333... years seems like a bigger roundoff error\nthan one would expect.\n\n> I am dubious that it's worth the pain of making the input function\n> reject cases involving fractional units. It's true that some people\n> here aren't happy with the current behavior, but they may no happier\n> if we reject those cases with an error, and other people may then be\n> unhappy too.\n\nMaybe. A possible compromise is to accept only exactly-representable\nfractions. Then, for instance, we'd take 1.5 years (resulting in 18\nmonths) but not 1.4 years. 
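Such a check is cheap if it is done with exact rational arithmetic on the literal's digit string. A sketch (Python, and only a sketch of the compromise, not interval_in's actual code):

```python
from fractions import Fraction

def months_if_exact(years_text):
    """Accept a fractional year count only when it maps to an exact
    integer number of months; otherwise signal rejection with None.
    A sketch of the proposed compromise, not PostgreSQL source code."""
    months = Fraction(years_text) * 12   # exact: no binary rounding
    return int(months) if months.denominator == 1 else None

print(months_if_exact('1.5'))   # 18
print(months_if_exact('1.4'))   # None -> interval_in would raise an error
```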
Now, this might fall foul of your point about\nnot wanting to mislead people into expecting the system to do things it\ncan't; but I'd argue that the existing behavior misleads them much more.\n\n> I think your previous idea was the best one so far: fix\n> the input function so that 'X years Y months' and 'Y months X years'\n> always produce the same answer, and call it good.\n\nThat would clearly be a bug fix. I'm just troubled that there are\nstill behaviors that people will see as bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:05:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 1:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Fair point, but you decided when you chose to use float that you don't\n> care about the differences between numbers that only differ at the\n> seventeenth or so decimal place. (Maybe, if you don't understand what\n> float is, you didn't make that choice intentionally ... but that's\n> a documentation issue not a code shortcoming.) However, it's fairly\n> hard to believe that somebody who writes '1.4 years'::interval doesn't\n> care about the 0.4 year. The fact that we silently convert that to,\n> effectively, 1.33333333... years seems like a bigger roundoff error\n> than one would expect.\n\nYeah, that's definitely a fair point!\n\n> > I am dubious that it's worth the pain of making the input function\n> > reject cases involving fractional units. It's true that some people\n> > here aren't happy with the current behavior, but they may no happier\n> > if we reject those cases with an error, and other people may then be\n> > unhappy too.\n>\n> Maybe. A possible compromise is to accept only exactly-representable\n> fractions. Then, for instance, we'd take 1.5 years (resulting in 18\n> months) but not 1.4 years. Now, this might fall foul of your point about\n> not wanting to mislead people into expecting the system to do things it\n> can't; but I'd argue that the existing behavior misleads them much more.\n\nWell, let's see what other people think.\n\n> > I think your previous idea was the best one so far: fix\n> > the input function so that 'X years Y months' and 'Y months X years'\n> > always produce the same answer, and call it good.\n>\n> That would clearly be a bug fix. I'm just troubled that there are\n> still behaviors that people will see as bugs.\n\nThat's a reasonable thing to be troubled about, but the date and time\nrelated datatypes have so many odd and crufty behaviors that I have a\nhard time believing that there's another possible outcome. 
If somebody\nshowed up today and proposed a new data type and told us that the way\nto format values of that data type was to say to_char(my_value,\nalphabet_soup) I think they would not be taken very seriously. A lot\nof this code, and the associated interfaces, date back to a time when\nPostgreSQL was far more primitive than today, and when databases in\ngeneral were as well. At least we didn't end up with a datatype called\nvarchar2 ... or not yet, anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:28:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jul 28, 2021 at 1:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That would clearly be a bug fix. I'm just troubled that there are\n>> still behaviors that people will see as bugs.\n\n> That's a reasonable thing to be troubled about, but the date and time\n> related datatypes have so many odd and crufty behaviors that I have a\n> hard time believing that there's another possible outcome.\n\nThere's surely a ton of cruft there, but I think most of it stems from\nwestern civilization's received rules for timekeeping, which we can do\nlittle about. But the fact that interval_in accepts '1.4 years' when\nit cannot do anything very reasonable with that input is entirely\nself-inflicted damage.\n\nBTW, I don't have a problem with the \"interval * float8\" operator\ndoing equally strange things, because if you don't like what it\ndoes you can always write your own multiplication function that\nyou like better. There can be only one interval_in, though,\nso I don't think it should be misrepresenting the fundamental\ncapabilities of the datatype.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:47:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 11:19:16AM -0400, Bruce Momjian wrote:\n> On Wed, Jul 28, 2021 at 08:42:31AM +0100, Dean Rasheed wrote:\n> > So it's equal even without calling justify_interval() on the result.\n> > \n> > FWIW, I remain of the opinion that the interval literal code should\n> > just spill down to lower units in all cases, just like the\n> > multiplication and division code, so that the results are consistent\n> > (barring floating point rounding errors) and explainable.\n> \n> Here is a more minimal patch that doesn't change the spill-down units at\n> all, but merely documents it, and changes the spilldown to months to\n> round instead of truncate.\n\nUnless I hear more feedback, I plan to apply this doc patch to all\nbranches with the word \"rounded\" changed to \"truncated\" in the back\nbranches, and apply the rounded code changes to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 30 Jul 2021 12:04:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 12:04:39PM -0400, Bruce Momjian wrote:\n> On Wed, Jul 28, 2021 at 11:19:16AM -0400, Bruce Momjian wrote:\n> > On Wed, Jul 28, 2021 at 08:42:31AM +0100, Dean Rasheed wrote:\n> > > So it's equal even without calling justify_interval() on the result.\n> > > \n> > > FWIW, I remain of the opinion that the interval literal code should\n> > > just spill down to lower units in all cases, just like the\n> > > multiplication and division code, so that the results are consistent\n> > > (barring floating point rounding errors) and explainable.\n> > \n> > Here is a more minimal patch that doesn't change the spill-down units at\n> > all, but merely documents it, and changes the spilldown to months to\n> > round instead of truncate.\n> \n> Unless I hear more feedback, I plan to apply this doc patch to all\n> branches with the word \"rounded\" changed to \"truncated\" in the back\n> branches, and apply the rounded code changes to master.\n\nNow that I think of it, I will just remove the word \"rounded\" from the\nback branch docs so we are technically breaking the documented API less\nin PG 15.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 30 Jul 2021 12:08:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Jul 30, 2021 at 12:04:39PM -0400, Bruce Momjian wrote:\n>> Unless I hear more feedback, I plan to apply this doc patch to all\n>> branches with the word \"rounded\" changed to \"truncated\" in the back\n>> branches, and apply the rounded code changes to master.\n\n> Now that I think of it, I will just remove the word \"rounded\" from the\n> back branch docs so we are technically breaking the documented API less\n> in PG 15.\n\nI think your first idea was better. Not documenting the behavior\ndoesn't make this not an API change; it just makes it harder for\npeople to understand what changed.\n\nThe doc patch itself is not exactly fine:\n\n+ Field values can have fractional parts; for example, <literal>'1.5\n+ weeks'</literal> or <literal>'01:02:03.45'</literal>. However,\n\nI think \"some field values\", as it was worded previously, was better.\nIf you try to write 01.5:02:03, that is not going to be interpreted\nas 1.5 hours. (Hmm, I get something that seems quite insane:\n\nregression=# select '01.5:02:03'::interval;\n interval \n----------------\n 1 day 14:03:00\n(1 row)\n\nI wonder what it thinks it's doing there.)\n\nThis is wrong:\n\n+ because interval internally stores only three integer units (months,\n+ days, seconds), fractional units must be spilled to smaller units.\n\ns/seconds/microseconds/ is probably enough to fix that.\n\n+ For example, because months are approximated to equal 30 days,\n+ fractional values of units greater than months is rounded to be the\n+ nearest integer number of months. Fractional units of months or less\n+ are computed to be an integer number of days and seconds, assuming\n+ 24 hours per day. For example, <literal>'1.5 months'</literal>\n+ becomes <literal>1 month 15 days</literal>.\n\nThis entire passage is vague, and grammatically shaky too. 
Perhaps\nmore like\n\n Fractional parts of units larger than months are rounded to the\n nearest integer number of months; for example '1.5 years'\n becomes '1 year 6 mons'. Fractional parts of months are rounded\n to the nearest integer number of days, using the assumption that\n one month equals 30 days; for example '1.5 months'\n becomes '1 mon 15 days'. Fractional parts of days and weeks\n are converted to microseconds, using the assumption that one day\n equals 24 hours.\n\n On output, the months field is shown as an appropriate number of\n years and months; the days field is shown as-is; the microseconds\n field is converted to hours, minutes, and possibly-fractional\n seconds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 12:49:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 12:49:34PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Now that I think of it, I will just remove the word \"rounded\" from the\n> > back branch docs so we are technically breaking the documented API less\n> > in PG 15.\n> \n> I think your first idea was better. Not documenting the behavior\n> doesn't make this not an API change; it just makes it harder for\n> people to understand what changed.\n\nOK. However, I thought we were more worried about changing documented\nAPIs than undocumented ones. Anyway, I will do as you suggested.\n\n> The doc patch itself is not exactly fine:\n> \n> + Field values can have fractional parts; for example, <literal>'1.5\n> + weeks'</literal> or <literal>'01:02:03.45'</literal>. However,\n> \n> I think \"some field values\", as it was worded previously, was better.\n> If you try to write 01.5:02:03, that is not going to be interpreted\n> as 1.5 hours. (Hmm, I get something that seems quite insane:\n> \n> regression=# select '01.5:02:03'::interval;\n> interval \n> ----------------\n> 1 day 14:03:00\n> (1 row)\n> \n> I wonder what it thinks it's doing there.)\n\nIt thinks 01.5:02:03 is Days:Hours:Minutes, so I think all fields can use\nfractions:\n\n\tSELECT interval '1.5 minutes';\n\t interval\n\t----------\n\t 00:01:30\n\n> This is wrong:\n> \n> + because interval internally stores only three integer units (months,\n> + days, seconds), fractional units must be spilled to smaller units.\n> \n> s/seconds/microseconds/ is probably enough to fix that.\n\nOK, there were a few places that said \"seconds\" so I fixed those too.\n\n> + For example, because months are approximated to equal 30 days,\n> + fractional values of units greater than months is rounded to be the\n> + nearest integer number of months. Fractional units of months or less\n> + are computed to be an integer number of days and seconds, assuming\n> + 24 hours per day. 
For example, <literal>'1.5 months'</literal>\n> + becomes <literal>1 month 15 days</literal>.\n> \n> This entire passage is vague, and grammatically shaky too. Perhaps\n> more like\n> \n> Fractional parts of units larger than months are rounded to the\n> nearest integer number of months; for example '1.5 years'\n> becomes '1 year 6 mons'. Fractional parts of months are rounded\n> to the nearest integer number of days, using the assumption that\n> one month equals 30 days; for example '1.5 months'\n\nThe newest patch actually doesn't work as explained above --- fractional\nmonths now continue to spill to microseconds. I think you are looking\nat a previous version.\n\n> becomes '1 mon 15 days'. Fractional parts of days and weeks\n> are converted to microseconds, using the assumption that one day\n> equals 24 hours.\n\nUh, fractional weeks can be integer days.\n\n> On output, the months field is shown as an appropriate number of\n> years and months; the days field is shown as-is; the microseconds\n> field is converted to hours, minutes, and possibly-fractional\n> seconds.\n\nHere is an updated patch that includes some of your ideas.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 30 Jul 2021 15:03:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Here is an updated patch that includes some of your ideas.\n\nJust to be clear, I am against this patch. I don't think it's a\nminimal change for the reported problem, and I think some people will\nbe unhappy about the behavior changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Jul 2021 15:08:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 03:08:56PM -0400, Robert Haas wrote:\n> On Fri, Jul 30, 2021 at 3:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Here is an updated patch that includes some of your ideas.\n> \n> Just to be clear, I am against this patch. I don't think it's a\n> minimal change for the reported problem, and I think some people will\n> be unhappy about the behavior changes.\n\nUh, what do you suggest then? You wanted the years/months fixed, and\nrounding at spill stop time makes sense, and fixes the problem.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 30 Jul 2021 15:20:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 3:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Uh, what do you suggest then? You wanted the years/months fixed, and\n> rounding at spill stop time makes sense, and fixes the problem.\n\nHmm, maybe I misunderstood. Are you saying that you think the patch\nwill fix cases like interval '-1.7 years 29.4 months' and interval\n'29.4 months -1.7 years' to produce the same answer without changing\nany other cases? I had the impression that you were proposing a bigger\nchange to the rules for converting fractional units to units of lower\ntype, particularly because Tom called it an \"API change\".\n\nFor some reason I can't apply the patch locally.\n\n[rhaas pgsql]$ patch -p1 < ~/Downloads/interval.diff\n(Stripping trailing CRs from patch.)\npatching file doc/src/sgml/datatype.sgml\n(Stripping trailing CRs from patch.)\npatching file src/backend/utils/adt/datetime.c\npatch: **** malformed patch at line 90: @@ -3601,7 +3597,7 @@\nDecodeISO8601Interval(char *str,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Jul 2021 15:47:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Jul 30, 2021 at 03:08:56PM -0400, Robert Haas wrote:\n>> Just to be clear, I am against this patch. I don't think it's a\n>> minimal change for the reported problem, and I think some people will\n>> be unhappy about the behavior changes.\n\n> Uh, what do you suggest then? You wanted the years/months fixed, and\n> rounding at spill stop time makes sense, and fixes the problem.\n\nThe complained-of bug is that 'X years Y months' isn't always\nidentical to 'Y months X years'. I do not believe that this patch\nfixes that, though it may obscure the problem for some values of\nX and Y. After a quick look at the code, I am suspicious that\nthe actual problem is that AdjustFractDays is applied at the wrong\ntime, before we've collected all the input. We probably need to\ncollect up all of the contributing input as floats and then do the\nfractional spilling once at the end.\n\nHaving said that, I also agree that it's not great that '1.4 years'\nis converted to '1 year 4 mons' (1.33333... years) rather than\n'1 year 5 mons' (1.41666... years). The latter seems like a clearly\nsaner translation. I would really rather that we reject such input\naltogether, but if we're going to accept it, we should round not\ntruncate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 15:54:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 03:47:53PM -0400, Robert Haas wrote:\n> On Fri, Jul 30, 2021 at 3:20 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Uh, what do you suggest then? You wanted the years/months fixed, and\n> > rounding at spill stop time makes sense, and fixes the problem.\n> \n> Hmm, maybe I misunderstood. Are you saying that you think the patch\n> will fix cases like interval '-1.7 years 29.4 months' and interval\n> '29.4 months -1.7 years' to produce the same answer without changing\n> any other cases? I had the impression that you were proposing a bigger\n\nYes, tests from the oringal email:\n\n\tSELECT interval '-1.7 years 29.4 months';\n\t interval\n\t----------------\n\t 9 mons 12 days\n\t(1 row)\n\t\n\tSELECT interval '29.4 months -1.7 years';\n\t interval\n\t----------------\n\t 9 mons 12 days\n\t(1 row)\n\t\n\tSELECT interval '-1.7 years' + interval '29.4 months';\n\t ?column?\n\t----------------\n\t 9 mons 12 days\n\t(1 row)\n\t\n\tSELECT interval '29.4 months' + interval '-1.7 years';\n\t ?column?\n\t----------------\n\t 9 mons 12 days\n\n> change to the rules for converting fractional units to units of lower\n> type, particularly because Tom called it an \"API change\".\n\nThe API change is to _round_ units greater than months to integeral\nmonth values; we currently truncate. Changing the spill behavior has\nbeen rejected.\n\n> For some reason I can't apply the patch locally.\n> \n> [rhaas pgsql]$ patch -p1 < ~/Downloads/interval.diff\n> (Stripping trailing CRs from patch.)\n> patching file doc/src/sgml/datatype.sgml\n> (Stripping trailing CRs from patch.)\n> patching file src/backend/utils/adt/datetime.c\n> patch: **** malformed patch at line 90: @@ -3601,7 +3597,7 @@\n> DecodeISO8601Interval(char *str,\n\nUh, here is the patch again, in case that helps.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 30 Jul 2021 15:55:05 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 03:54:42PM -0400, Tom Lane wrote:\n> Having said that, I also agree that it's not great that '1.4 years'\n> is converted to '1 year 4 mons' (1.33333... years) rather than\n> '1 year 5 mons' (1.41666... years). The latter seems like a clearly\n> saner translation. I would really rather that we reject such input\n> altogether, but if we're going to accept it, we should round not\n> truncate.\n\nMy patch returns what you want:\n\n\tSELECT interval '1.4 years';\n\t interval\n\t---------------\n\t 1 year 5 mons\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 30 Jul 2021 15:56:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Jul 30, 2021 at 03:54:42PM -0400, Tom Lane wrote:\n>> Having said that, I also agree that it's not great that '1.4 years'\n>> is converted to '1 year 4 mons' (1.33333... years) rather than\n>> '1 year 5 mons' (1.41666... years). The latter seems like a clearly\n>> saner translation. I would really rather that we reject such input\n>> altogether, but if we're going to accept it, we should round not\n>> truncate.\n\n> My patch returns what you want:\n\nYeah, as far as that point goes, I was replying to Robert's\nobjection not your patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 16:44:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 03:03:13PM -0400, Bruce Momjian wrote:\n> > On output, the months field is shown as an appropriate number of\n> > years and months; the days field is shown as-is; the microseconds\n> > field is converted to hours, minutes, and possibly-fractional\n> > seconds.\n> \n> Here is an updated patch that includes some of your ideas.\n\n\"Rounding\" patch applied to master, and back branches got only the\nadjusted \"truncated\" doc patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 12:19:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Have I found an interval arithmetic bug?"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a problem with the pg_checksums.c.\n\nThe total_size is calculated by scanning the directory.\nThe current_size is calculated by scanning the files, but the current_size does not include the size of NewPages.\n\nThis may cause pg_checksums progress report to not be 100%.\nI have attached a patch that fixes this.\n\nRegards,\nShinya Kato",
"msg_date": "Fri, 2 Apr 2021 05:23:32 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "Fix pg_checksums progress report"
},
{
"msg_contents": "\n\nOn 2021/04/02 14:23, Shinya11.Kato@nttdata.com wrote:\n> Hi,\n> \n> I found a problem with the pg_checksums.c.\n> \n> The total_size is calculated by scanning the directory.\n> The current_size is calculated by scanning the files, but the current_size does not include the size of NewPages.\n> \n> This may cause pg_checksums progress report to not be 100%.\n> I have attached a patch that fixes this.\n\nThanks for the report and patch!\n\nI could reproduce this issue and confirmed that the patch fixes it.\n\nRegarding the patch, I think that it's better to add the comment about\nwhy current_size needs to be counted including new pages.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Apr 2021 14:39:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_checksums progress report"
},
{
"msg_contents": ">-----Original Message-----\n>From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>Sent: Friday, April 2, 2021 2:39 PM\n>To: Shinya11.Kato@nttdata.com; pgsql-hackers@postgresql.org\n>Subject: Re: Fix pg_checksums progress report\n>\n>\n>\n>On 2021/04/02 14:23, Shinya11.Kato@nttdata.com wrote:\n>> Hi,\n>>\n>> I found a problem with the pg_checksums.c.\n>>\n>> The total_size is calculated by scanning the directory.\n>> The current_size is calculated by scanning the files, but the current_size does\n>not include the size of NewPages.\n>>\n>> This may cause pg_checksums progress report to not be 100%.\n>> I have attached a patch that fixes this.\n>\n>Thanks for the report and patch!\n>\n>I could reproduce this issue and confirmed that the patch fixes it.\n>\n>Regarding the patch, I think that it's better to add the comment about why\n>current_size needs to be counted including new pages.\n\nThanks for your review.\nI added a comment to the patch, and attached the new patch.\n\nRegards,\nShinya Kato",
"msg_date": "Fri, 2 Apr 2021 07:30:32 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: Fix pg_checksums progress report"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 07:30:32AM +0000, Shinya11.Kato@nttdata.com wrote:\n> I added a comment to the patch, and attached the new patch.\n\nHmm. This looks to come from 280e5f14 that introduced the progress\nreports so this would need a backpatch down to 12. I have not looked\nin details and have not looked at the patch yet, though. Fujii-san,\nare you planning to take care of that? That was my stuff originally,\nso I am fine to look at it. But not now, on a Friday afternoon :)\n--\nMichael",
"msg_date": "Fri, 2 Apr 2021 16:47:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_checksums progress report"
},
{
"msg_contents": "\n\nOn 2021/04/02 16:47, Michael Paquier wrote:\n> On Fri, Apr 02, 2021 at 07:30:32AM +0000, Shinya11.Kato@nttdata.com wrote:\n>> I added a comment to the patch, and attached the new patch.\n\nThanks for updating the patch!\n\n+\t\t/*\n+\t\t * The current_size is calculated before checking if header is a\n+\t\t * new page, because total_size includes the size of new pages.\n+\t\t */\n+\t\tcurrent_size += r;\n\nI'd like to comment more. What about the following?\n\n---------------------------\nSince the file size is counted as total_size for progress status information, the sizes of all pages including new ones in the file should be counted as current_size. Otherwise the progress reporting calculated using those counters may not reach 100%.\n---------------------------\n\n\n> Hmm. This looks to come from 280e5f14 that introduced the progress\n> reports so this would need a backpatch down to 12.\n\nYes.\n\n\n> I have not looked\n> in details and have not looked at the patch yet, though. Fujii-san,\n> are you planning to take care of that?\n\nYes, I will. Thanks for the consideration!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Apr 2021 18:03:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_checksums progress report"
},
{
"msg_contents": ">-----Original Message-----\n>From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>Sent: Friday, April 2, 2021 6:03 PM\n>To: Michael Paquier <michael@paquier.xyz>; Shinya11.Kato@nttdata.com\n>Cc: pgsql-hackers@postgresql.org\n>Subject: Re: Fix pg_checksums progress report\n>\n>\n>\n>On 2021/04/02 16:47, Michael Paquier wrote:\n>> On Fri, Apr 02, 2021 at 07:30:32AM +0000, Shinya11.Kato@nttdata.com wrote:\n>>> I added a comment to the patch, and attached the new patch.\n>\n>Thanks for updating the patch!\n>\n>+\t\t/*\n>+\t\t * The current_size is calculated before checking if header is a\n>+\t\t * new page, because total_size includes the size of new\n>pages.\n>+\t\t */\n>+\t\tcurrent_size += r;\n>\n>I'd like to comment more. What about the following?\n>\n>---------------------------\n>Since the file size is counted as total_size for progress status information, the\n>sizes of all pages including new ones in the file should be counted as\n>current_size. Otherwise the progress reporting calculated using those counters\n>may not reach 100%.\n>---------------------------\n\nThanks for your review!\nI updated the patch, and attached it.\n\nRegards,\nShinya Kato",
"msg_date": "Fri, 2 Apr 2021 09:19:32 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: Fix pg_checksums progress report"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 06:03:21PM +0900, Fujii Masao wrote:\n> On 2021/04/02 16:47, Michael Paquier wrote:\n>> I have not looked\n>> in details and have not looked at the patch yet, though. Fujii-san,\n>> are you planning to take care of that?\n> \n> Yes, I will. Thanks for the consideration!\n\nOK, thanks!\n--\nMichael",
"msg_date": "Fri, 2 Apr 2021 18:59:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_checksums progress report"
},
{
"msg_contents": "\n\nOn 2021/04/02 18:19, Shinya11.Kato@nttdata.com wrote:\n> Thanks for your review!\n> I updated the patch, and attached it.\n\nThanks for updating the patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 3 Apr 2021 00:09:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_checksums progress report"
}
] |
[
{
"msg_contents": "RLS policies quals/checks are optimized inline, and so I generally avoid\nwriting a separate procedure so the optimizer can do it's thing.\n\nHowever, if you need a security definer to avoid recursive RLS if you're\ndoing a more complex query say, on a join table, anyone wish there was a\nflag on the policy itself to specify that `WITH CHECK` or `USING`\nexpression could be run via security definer?\n\nThe main reason for this is to avoid writing a separate security definer\nfunction so you can benefit from the optimizer.\n\nIs this possible? Would this be worth a feature request to postgres core?\n\nCheers!\n\nDan\n\nRLS policies quals/checks are optimized inline, and so I generally avoid writing a separate procedure so the optimizer can do it's thing.However, if you need a security definer to avoid recursive RLS if you're doing a more complex query say, on a join table, anyone wish there was a flag on the policy itself to specify that `WITH CHECK` or `USING` expression could be run via security definer?The main reason for this is to avoid writing a separate security definer function so you can benefit from the optimizer. Is this possible? Would this be worth a feature request to postgres core?Cheers!Dan",
"msg_date": "Thu, 1 Apr 2021 22:44:02 -0700",
"msg_from": "Dan Lynch <pyramation@gmail.com>",
"msg_from_op": true,
"msg_subject": "policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On Fri, 2 Apr 2021 at 01:44, Dan Lynch <pyramation@gmail.com> wrote:\n\n> RLS policies quals/checks are optimized inline, and so I generally avoid\n> writing a separate procedure so the optimizer can do it's thing.\n>\n> However, if you need a security definer to avoid recursive RLS if you're\n> doing a more complex query say, on a join table, anyone wish there was a\n> flag on the policy itself to specify that `WITH CHECK` or `USING`\n> expression could be run via security definer?\n>\n> The main reason for this is to avoid writing a separate security definer\n> function so you can benefit from the optimizer.\n>\n> Is this possible? Would this be worth a feature request to postgres core?\n>\n\nIf we're going to do this we should do the same for triggers as well.\n\nIt's easy to imagine a situation in which RLS policies need to refer to\ninformation which should not be accessible to the role using the table, and\nsimilarly it's easy to imagine a situation in which a trigger needs to\nwrite to another table which should not be accessible to the role using the\ntable which has the trigger.\n\nOn Fri, 2 Apr 2021 at 01:44, Dan Lynch <pyramation@gmail.com> wrote:RLS policies quals/checks are optimized inline, and so I generally avoid writing a separate procedure so the optimizer can do it's thing.However, if you need a security definer to avoid recursive RLS if you're doing a more complex query say, on a join table, anyone wish there was a flag on the policy itself to specify that `WITH CHECK` or `USING` expression could be run via security definer?The main reason for this is to avoid writing a separate security definer function so you can benefit from the optimizer. Is this possible? 
Would this be worth a feature request to postgres core?If we're going to do this we should do the same for triggers as well.It's easy to imagine a situation in which RLS policies need to refer to information which should not be accessible to the role using the table, and similarly it's easy to imagine a situation in which a trigger needs to write to another table which should not be accessible to the role using the table which has the trigger.",
"msg_date": "Fri, 2 Apr 2021 09:09:04 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "Greetings,\n\n* Isaac Morland (isaac.morland@gmail.com) wrote:\n> On Fri, 2 Apr 2021 at 01:44, Dan Lynch <pyramation@gmail.com> wrote:\n> > RLS policies quals/checks are optimized inline, and so I generally avoid\n> > writing a separate procedure so the optimizer can do it's thing.\n> >\n> > However, if you need a security definer to avoid recursive RLS if you're\n> > doing a more complex query say, on a join table, anyone wish there was a\n> > flag on the policy itself to specify that `WITH CHECK` or `USING`\n> > expression could be run via security definer?\n> >\n> > The main reason for this is to avoid writing a separate security definer\n> > function so you can benefit from the optimizer.\n> >\n> > Is this possible? Would this be worth a feature request to postgres core?\n> \n> If we're going to do this we should do the same for triggers as well.\n\n... and views.\n\n> It's easy to imagine a situation in which RLS policies need to refer to\n> information which should not be accessible to the role using the table, and\n> similarly it's easy to imagine a situation in which a trigger needs to\n> write to another table which should not be accessible to the role using the\n> table which has the trigger.\n\nI'm generally +1 on adding the ability for the DBA to choose which user\nvarious things run as. There's definitely use-cases for both in my\nexperience. Also would be great to add the ability to have policies on\nviews too which would probably help address some of these cases.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 2 Apr 2021 09:30:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On 04/02/21 09:09, Isaac Morland wrote:\n> If we're going to do this we should do the same for triggers as well.\n> \n> ... it's easy to imagine a situation in which a trigger needs to\n> write to another table which should not be accessible to the role using the\n> table which has the trigger.\n\nTriggers seem to be an area of long-standing weirdness[1].\n\nRegards,\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/b1be2d05-b9fd-b9db-ea7f-38253e4e4bab%40anastigmatix.net\n\n\n",
"msg_date": "Fri, 2 Apr 2021 09:44:39 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On Fri, 2 Apr 2021 at 09:30, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Isaac Morland (isaac.morland@gmail.com) wrote:\n> > On Fri, 2 Apr 2021 at 01:44, Dan Lynch <pyramation@gmail.com> wrote:\n> > > RLS policies quals/checks are optimized inline, and so I generally\n> avoid\n> > > writing a separate procedure so the optimizer can do it's thing.\n> > >\n> > > However, if you need a security definer to avoid recursive RLS if\n> you're\n> > > doing a more complex query say, on a join table, anyone wish there was\n> a\n> > > flag on the policy itself to specify that `WITH CHECK` or `USING`\n> > > expression could be run via security definer?\n> > >\n> > > The main reason for this is to avoid writing a separate security\n> definer\n> > > function so you can benefit from the optimizer.\n> > >\n> > > Is this possible? Would this be worth a feature request to postgres\n> core?\n> >\n> > If we're going to do this we should do the same for triggers as well.\n>\n> ... and views.\n>\n\nViews already run security definer, allowing them to be used for some of\nthe same information-hiding purposes as RLS. But I just found something\nstrange: current_user/_role returns the user's role, not the view owner's\nrole:\n\npostgres=# create table tt as select 5;\nSELECT 1\npostgres=# create view tv as select *, current_user from tt;\nCREATE VIEW\npostgres=# table tt;\n ?column?\n----------\n 5\n(1 row)\n\npostgres=# table tv;\n ?column? | current_user\n----------+--------------\n 5 | postgres\n(1 row)\n\npostgres=# set role to t1;\nSET\npostgres=> table tt;\nERROR: permission denied for table tt\npostgres=> table tv;\nERROR: permission denied for view tv\npostgres=> set role to postgres;\nSET\npostgres=# grant select on tv to public;\nGRANT\npostgres=# set role to t1;\nSET\npostgres=> table tt;\nERROR: permission denied for table tt\npostgres=> table tv;\n ?column? 
| current_user\n----------+--------------\n 5 | t1\n(1 row)\n\npostgres=>\n\nNote that even though current_user is t1 \"inside\" the view, it is still\nable to see the contents of table tt. Shouldn't current_user/_role return\nthe view owner in this situation? By contrast security definer functions\nwork properly:\n\npostgres=# create function get_current_user_sd () returns name security\ndefiner language sql as $$ select current_user $$;\nCREATE FUNCTION\npostgres=# select get_current_user_sd ();\n get_current_user_sd\n---------------------\n postgres\n(1 row)\n\npostgres=# set role t1;\nSET\npostgres=> select get_current_user_sd ();\n get_current_user_sd\n---------------------\n postgres\n(1 row)\n\npostgres=>\n\nOn Fri, 2 Apr 2021 at 09:30, Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Isaac Morland (isaac.morland@gmail.com) wrote:\n> On Fri, 2 Apr 2021 at 01:44, Dan Lynch <pyramation@gmail.com> wrote:\n> > RLS policies quals/checks are optimized inline, and so I generally avoid\n> > writing a separate procedure so the optimizer can do it's thing.\n> >\n> > However, if you need a security definer to avoid recursive RLS if you're\n> > doing a more complex query say, on a join table, anyone wish there was a\n> > flag on the policy itself to specify that `WITH CHECK` or `USING`\n> > expression could be run via security definer?\n> >\n> > The main reason for this is to avoid writing a separate security definer\n> > function so you can benefit from the optimizer.\n> >\n> > Is this possible? Would this be worth a feature request to postgres core?\n> \n> If we're going to do this we should do the same for triggers as well.\n\n... and views.Views already run security definer, allowing them to be used for some of the same information-hiding purposes as RLS. 
But I just found something strange: current_user/_role returns the user's role, not the view owner's role:postgres=# create table tt as select 5;SELECT 1postgres=# create view tv as select *, current_user from tt;CREATE VIEWpostgres=# table tt; ?column? ---------- 5(1 row)postgres=# table tv; ?column? | current_user ----------+-------------- 5 | postgres(1 row)postgres=# set role to t1;SETpostgres=> table tt;ERROR: permission denied for table ttpostgres=> table tv;ERROR: permission denied for view tvpostgres=> set role to postgres;SETpostgres=# grant select on tv to public;GRANTpostgres=# set role to t1;SETpostgres=> table tt;ERROR: permission denied for table ttpostgres=> table tv; ?column? | current_user ----------+-------------- 5 | t1(1 row)postgres=> Note that even though current_user is t1 \"inside\" the view, it is still able to see the contents of table tt. Shouldn't current_user/_role return the view owner in this situation? By contrast security definer functions work properly:postgres=# create function get_current_user_sd () returns name security definer language sql as $$ select current_user $$;CREATE FUNCTIONpostgres=# select get_current_user_sd (); get_current_user_sd --------------------- postgres(1 row)postgres=# set role t1;SETpostgres=> select get_current_user_sd (); get_current_user_sd --------------------- postgres(1 row)postgres=>",
"msg_date": "Fri, 2 Apr 2021 09:57:27 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On Fri, 2 Apr 2021 at 09:44, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 04/02/21 09:09, Isaac Morland wrote:\n> > If we're going to do this we should do the same for triggers as well.\n> >\n> > ... it's easy to imagine a situation in which a trigger needs to\n> > write to another table which should not be accessible to the role using\n> the\n> > table which has the trigger.\n>\n> Triggers seem to be an area of long-standing weirdness[1].\n>\n\nThanks for that reference. That has convinced me that I was wrong in a\nprevious discussion to say that triggers should run as the table owner:\ninstead, they should run as the trigger owner (implying that triggers\nshould have owners). Of course at this point the change could only be made\nas an option in order to avoid a backward compatibility break.\n\n[1]\n>\n> https://www.postgresql.org/message-id/b1be2d05-b9fd-b9db-ea7f-38253e4e4bab%40anastigmatix.net\n>\n\nOn Fri, 2 Apr 2021 at 09:44, Chapman Flack <chap@anastigmatix.net> wrote:On 04/02/21 09:09, Isaac Morland wrote:\n> If we're going to do this we should do the same for triggers as well.\n> \n> ... it's easy to imagine a situation in which a trigger needs to\n> write to another table which should not be accessible to the role using the\n> table which has the trigger.\n\nTriggers seem to be an area of long-standing weirdness[1].\nThanks for that reference. That has convinced me that I was wrong in a previous discussion to say that triggers should run as the table owner: instead, they should run as the trigger owner (implying that triggers should have owners). Of course at this point the change could only be made as an option in order to avoid a backward compatibility break.\n[1]\nhttps://www.postgresql.org/message-id/b1be2d05-b9fd-b9db-ea7f-38253e4e4bab%40anastigmatix.net",
"msg_date": "Fri, 2 Apr 2021 10:03:53 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On 4/2/21 9:57 AM, Isaac Morland wrote:\n> Views already run security definer, allowing them to be used for some of the \n> same information-hiding purposes as RLS. But I just found something strange: \n> current_user/_role returns the user's role, not the view owner's role:\n\n> postgres=# set role to t1;\n> SET\n> postgres=> table tt;\n> ERROR: permission denied for table tt\n> postgres=> table tv;\n> ?column? | current_user\n> ----------+--------------\n> 5 | t1\n> (1 row)\n> \n> postgres=>\n> \n> Note that even though current_user is t1 \"inside\" the view, it is still able to \n> see the contents of table tt. Shouldn't current_user/_role return the view owner \n> in this situation? By contrast security definer functions work properly:\n\nThat is because while VIEWs are effectively SECURITY DEFINER for table access, \nfunctions running as part of the view are still SECURITY INVOKER if they were \ndefined that way. And \"current_user\" is essentially just a special grammatical \ninterface to a SECURITY INVOKER function:\n\npostgres=# \\df+ current_user\nList of functions\n-[ RECORD 1 ]-------+------------------\nSchema | pg_catalog\nName | current_user\nResult data type | name\nArgument data types |\nType | func\nVolatility | stable\nParallel | safe\nOwner | postgres\nSecurity | invoker\nAccess privileges |\nLanguage | internal\nSource code | current_user\nDescription | current user name\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 2 Apr 2021 10:10:56 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "Greetings,\n\n* Joe Conway (mail@joeconway.com) wrote:\n> On 4/2/21 9:57 AM, Isaac Morland wrote:\n> >Views already run security definer, allowing them to be used for some of\n> >the same information-hiding purposes as RLS. But I just found something\n> >strange: current_user/_role returns the user's role, not the view owner's\n> >role:\n> \n> >postgres=# set role to t1;\n> >SET\n> >postgres=> table tt;\n> >ERROR: permission denied for table tt\n> >postgres=> table tv;\n> > ?column? | current_user\n> >----------+--------------\n> > 5 | t1\n> >(1 row)\n> >\n> >postgres=>\n> >\n> >Note that even though current_user is t1 \"inside\" the view, it is still\n> >able to see the contents of table tt. Shouldn't current_user/_role return\n> >the view owner in this situation? By contrast security definer functions\n> >work properly:\n> \n> That is because while VIEWs are effectively SECURITY DEFINER for table\n> access, functions running as part of the view are still SECURITY INVOKER if\n> they were defined that way. And \"current_user\" is essentially just a special\n> grammatical interface to a SECURITY INVOKER function:\n\nRight- and what I was really getting at is that it'd sometimes be nice\nto have the view run as 'security invoker' for table access. In\ngeneral, it seems like it'd be useful to be able to control each piece\nand define if it's to be security invoker or security definer. We're\nable to do that for functions, but not other parts of the system.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 2 Apr 2021 10:23:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On 4/2/21 10:23 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Joe Conway (mail@joeconway.com) wrote:\n>> On 4/2/21 9:57 AM, Isaac Morland wrote:\n>> >Views already run security definer, allowing them to be used for some of\n>> >the same information-hiding purposes as RLS. But I just found something\n>> >strange: current_user/_role returns the user's role, not the view owner's\n>> >role:\n>> \n>> >postgres=# set role to t1;\n>> >SET\n>> >postgres=> table tt;\n>> >ERROR: permission denied for table tt\n>> >postgres=> table tv;\n>> > ?column? | current_user\n>> >----------+--------------\n>> > 5 | t1\n>> >(1 row)\n>> >\n>> >postgres=>\n>> >\n>> >Note that even though current_user is t1 \"inside\" the view, it is still\n>> >able to see the contents of table tt. Shouldn't current_user/_role return\n>> >the view owner in this situation? By contrast security definer functions\n>> >work properly:\n>> \n>> That is because while VIEWs are effectively SECURITY DEFINER for table\n>> access, functions running as part of the view are still SECURITY INVOKER if\n>> they were defined that way. And \"current_user\" is essentially just a special\n>> grammatical interface to a SECURITY INVOKER function:\n> \n> Right- and what I was really getting at is that it'd sometimes be nice\n> to have the view run as 'security invoker' for table access. In\n> general, it seems like it'd be useful to be able to control each piece\n> and define if it's to be security invoker or security definer. We're\n> able to do that for functions, but not other parts of the system.\n\n+1\n\nAgreed -- I have opined similarly in the past\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 2 Apr 2021 10:47:29 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "My goal is to use RLS for everything, including SELECTs, so it's super\nimportant to consider every performance tweak possible. Appreciate any\ninsights or comments. I'm also hoping to document this better for\napplication developers who want to use postgres and RLS.\n\nDoes anyone know details of, or where to find more information about the\nimplications of the optimizer on the quals/checks for the policies being\nfunctions vs inline?\n\nIt seems that the solution today may be that we have to write functions\nwith security definer. I also saw Joe's linked in share regarding an\narticle using inline functions\n<https://blog.crunchydata.com/blog/a-postgresql-row-level-security-primer-creating-large-policies>\nin the qual/checks to solve a policy size issue, but also wondering the\nperformance implications of inline vs functions:\n\nImagine you need to do a JOIN to check an owned table against an acl table\n\n(group_id = ANY (ARRAY (\n SELECT\n acl.entity_id\n FROM\n org_memberships_acl acl\n JOIN app_groups obj ON acl.entity_id = obj.owner_id\n WHERE\n acl.actor_id = current_user_id())))\n\n\nyou could wrap that query into a function (so we can apply SECURITY DEFINER\nto the tables involved to avoid nested RLS lookups)\n\n(group_id = ANY (ARRAY (\n get_group_ids_of_current_user()\n)))\n\nDoes anyone here know how the optimizer would handle this? I suppose if the\nget_group_ids_of_current_user() function is marked as STABLE, would the\noptimizer cache this value for every row in a SELECT that returned\nmultiple rows? Is it possible that if the function is sql vs plpgsql it\nmakes a difference?\n\nAm I splitting hairs here, and maybe this is a trivial nuance that\nshouldn't really matter for performance? 
If it's true that inline functions\nwould perform better, then definitely this thread and potential feature\nrequest seems pretty important.\n\n\n*Other important RLS Performance Optimizations*\n\nI also want to share my research from online so it's documented somewhere.\nI would love to get more information to formally document these\noptimizations. Here are the two articles I've found to be useful for how to\nstructure RLS policies performantly:\n\nhttps://cazzer.medium.com/designing-the-most-performant-row-level-security-strategy-in-postgres-a06084f31945\n\n\nhttps://medium.com/@ethanresnick/there-are-a-few-faster-ways-that-i-know-of-to-handle-the-third-case-with-rls-9d22eaa890e5\n\n\n\nThe 2nd article was particularly useful (which was written in response to this\narticle\n<https://medium.com/@bartels/using-postgresql-row-level-security-rls-to-authorize-read-queries-for-your-applications-users-a2838d2afb92>),\nhighlighting an important detail that should probably be more explicit for\nfolks writing policies, especially for SELECT policies. Essentially it\nboils down to not passing properties from the rows into the functions used\nto check security, but instead inverting the logic and returning\nthe identifiers as an array and checking if the row's owned key matches one\nof those identifiers.\n\nFor example,\n\na GOOD qual/check expr\n\nowner_id = ANY ( function_that_gets_current_users_organization_ids() )\n\na BAD qual/check expr\n\ncan_user_access_object(owner_id)\n\nThe main benefit of the first expr is that if\nfunction_that_gets_current_users_organization_ids is STABLE, the optimizer\ncan run this once for all rows, and thus for SELECTs should actually run\nfast. The 2nd expr takes as an argument the column, which would have to run\nfor every single row making SELECTs run very slow depending on the function.\n\nThis actually is pretty intuitive once you look at it. 
Reversing the logic\nand returning IDs makes sense when you imagine what PG has to do in order\nto check rows, I suppose there are limitations depending on the cardinality\nof the IDs returned and postgres's ability to check some_id = ANY (array)\nfor large arrays.\n\nDan Lynch\n(734) 657-4483\n\n\nOn Fri, Apr 2, 2021 at 7:47 AM Joe Conway <mail@joeconway.com> wrote:\n\n> On 4/2/21 10:23 AM, Stephen Frost wrote:\n> > Greetings,\n> >\n> > * Joe Conway (mail@joeconway.com) wrote:\n> >> On 4/2/21 9:57 AM, Isaac Morland wrote:\n> >> >Views already run security definer, allowing them to be used for some\n> of\n> >> >the same information-hiding purposes as RLS. But I just found something\n> >> >strange: current_user/_role returns the user's role, not the view\n> owner's\n> >> >role:\n> >>\n> >> >postgres=# set role to t1;\n> >> >SET\n> >> >postgres=> table tt;\n> >> >ERROR: permission denied for table tt\n> >> >postgres=> table tv;\n> >> > ?column? | current_user\n> >> >----------+--------------\n> >> > 5 | t1\n> >> >(1 row)\n> >> >\n> >> >postgres=>\n> >> >\n> >> >Note that even though current_user is t1 \"inside\" the view, it is still\n> >> >able to see the contents of table tt. Shouldn't current_user/_role\n> return\n> >> >the view owner in this situation? By contrast security definer\n> functions\n> >> >work properly:\n> >>\n> >> That is because while VIEWs are effectively SECURITY DEFINER for table\n> >> access, functions running as part of the view are still SECURITY\n> INVOKER if\n> >> they were defined that way. And \"current_user\" is essentially just a\n> special\n> >> grammatical interface to a SECURITY INVOKER function:\n> >\n> > Right- and what I was really getting at is that it'd sometimes be nice\n> > to have the view run as 'security invoker' for table access. In\n> > general, it seems like it'd be useful to be able to control each piece\n> > and define if it's to be security invoker or security definer. 
We're\n> > able to do that for functions, but not other parts of the system.\n>\n> +1\n>\n> Agreed -- I have opined similarly in the past\n>\n> Joe\n>\n> --\n> Crunchy Data - http://crunchydata.com\n> PostgreSQL Support for Secure Enterprises\n> Consulting, Training, & Open Source Development\n>",
"msg_date": "Fri, 2 Apr 2021 14:24:59 -0700",
"msg_from": "Dan Lynch <pyramation@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 02:24:59PM -0700, Dan Lynch wrote:\n> Does anyone know details of, or where to find more information about the\n> implications of the optimizer on the quals/checks for the policies being\n> functions vs inline?\n\nRoughly, the PostgreSQL optimizer treats LANGUAGE SQL functions like a C\ncompiler treats \"extern inline\" functions. Other PostgreSQL functions behave\nlike C functions in a shared library. Non-SQL functions can do arbitrary\nthings, and the optimizer knows only facts like their volatility and the value\ngiven in CREATE FUNCTION ... COST.\n\n> I suppose if the\n> get_group_ids_of_current_user() function is marked as STABLE, would the\n> optimizer cache this value for every row in a SELECT that returned\n> multiple rows?\n\nWhile there was a patch to implement caching, it never finished. The\noptimizer is allowed to, and sometimes does, choose plan shapes that reduce\nthe number of function calls.\n\n> Is it possible that if the function is sql vs plpgsql it\n> makes a difference?\n\nYes; see inline_function() in the PostgreSQL source. The hard part of\n$SUBJECT is creating the infrastructure to inline across a SECURITY DEFINER\nboundary. Currently, a single optimizable statement operates under just one\nuser identity. Somehow, the optimizer would need to translate the SECURITY\nDEFINER call into a list of moments where the executor shall switch user ID,\nthen maintain that list across further optimization steps. security_barrier\nviews are the most-similar thing, but as Joe Conway mentioned, views differ\nfrom SECURITY DEFINER in crucial ways.\n\n\n",
"msg_date": "Sun, 4 Apr 2021 12:51:23 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
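Noah's distinction above (LANGUAGE SQL functions treated like C "extern inline", everything else opaque to the planner) can be sketched in SQL. This is a hypothetical illustration, not code from the thread; `app_users` and both function names are invented, and the comments paraphrase the inline_function() criteria rather than quoting them:

```sql
-- Candidate for inlining: a single-SELECT LANGUAGE SQL function that is
-- not SECURITY DEFINER, carries no SET clauses, and whose declared
-- volatility is compatible with its body -- the planner can expand it
-- into the calling query like a macro.
CREATE FUNCTION current_user_id() RETURNS bigint
LANGUAGE sql STABLE AS $$
    SELECT id FROM app_users WHERE username = current_user
$$;

-- Never inlined: a PL/pgSQL body is opaque to the planner, which sees
-- only the declared volatility and COST of the function.
CREATE FUNCTION current_user_id_plpgsql() RETURNS bigint
LANGUAGE plpgsql STABLE AS $$
BEGIN
    RETURN (SELECT id FROM app_users WHERE username = current_user);
END;
$$;
```

Marking either function SECURITY DEFINER would also rule out inlining, which is exactly the boundary this thread proposes to cross.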
{
"msg_contents": "This is great, thanks! It's great to have somewhere in the source to read\nabout the optimizer! very cool!\n\n\n>\n> > I suppose if the\n> > get_group_ids_of_current_user() function is marked as STABLE, would the\n> > optimizer cache this value for every row in a SELECT that returned\n> > multiple rows?\n>\n> While there was a patch to implement caching, it never finished. The\n> optimizer is allowed to, and sometimes does, choose plan shapes that reduce\n> the number of function calls.\n>\n\nSo for multiple rows, it's possible that the same query could happen for\neach row? Even if it's clearly stable and only a read operation is\nhappening?\n\nI suppose if the possibility exists that this could happen, perhaps using\nRLS for selects is not quite \"production ready\"? Or perhaps if the RLS\nqual/check is written well-enough, then maybe the performance hit wouldn't\nbe noticed?",
"msg_date": "Mon, 5 Apr 2021 19:51:46 -0700",
"msg_from": "Dan Lynch <pyramation@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On Mon, Apr 05, 2021 at 07:51:46PM -0700, Dan Lynch wrote:\n> > > I suppose if the\n> > > get_group_ids_of_current_user() function is marked as STABLE, would the\n> > > optimizer cache this value for every row in a SELECT that returned\n> > > multiple rows?\n> >\n> > While there was a patch to implement caching, it never finished. The\n> > optimizer is allowed to, and sometimes does, choose plan shapes that reduce\n> > the number of function calls.\n> \n> So for multiple rows, it's possible that the same query could happen for\n> each row? Even if it's clearly stable and only a read operation is\n> happening?\n\nYes. The caching patch thread gives some example queries:\nhttps://postgr.es/m/flat/CABRT9RA-RomVS-yzQ2wUtZ%3Dm-eV61LcbrL1P1J3jydPStTfc6Q%40mail.gmail.com\n\n> I suppose if the possibility exists that this could happen, perhaps using\n> RLS for selects is not quite \"production ready\"?\n\nI would not draw that conclusion.\n\n> Or perhaps if the RLS\n> qual/check is written well-enough, then maybe the performance hit wouldn't\n> be noticed?\n\nYes.\n\n\n",
"msg_date": "Mon, 5 Apr 2021 23:20:56 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": ">\n>\n> > I suppose if the possibility exists that this could happen, perhaps using\n> > RLS for selects is not quite \"production ready\"?\n>\n> I would not draw that conclusion.\n>\n>\nThis is great to hear! I'm betting a lot on RLS and have been investing a\nlot into it.\n\n\n> > Or perhaps if the RLS\n> > qual/check is written well-enough, then maybe the performance hit\n> wouldn't\n> > be noticed?\n>\n> Yes.\n>\n\nAmazing to hear. Sounds like the path I'm on is good to go and will only\nimprove over time :)\n\nFinal question: do you think using procedures vs writing inline queries for\nRLS quals/checks has a big difference in performance (assuming functions\nare sql)?\n\nAppreciate your info here!",
"msg_date": "Tue, 6 Apr 2021 13:16:16 -0700",
"msg_from": "Dan Lynch <pyramation@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 01:16:16PM -0700, Dan Lynch wrote:\n> Final question: do you think using procedures vs writing inline queries for\n> RLS quals/checks has a big difference in performance (assuming functions\n> are sql)?\n\nIf the function meets the criteria for inlining (see inline_function()),\nthere's negligible performance difference. Otherwise, the performance\ndifference may be large.\n\n\n",
"msg_date": "Tue, 6 Apr 2021 19:23:04 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: policies with security definer option for allowing inline\n optimization"
}
] |
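As a summary of the RLS thread above, the "GOOD" qual shape and the SECURITY DEFINER helper Dan describes could look like the sketch below. All names (`org_memberships_acl`, `app_groups`, `app_objects`, `current_user_id()`) are the hypothetical ones from the thread, not a real schema, and the sketch carries the trade-off Noah points out: SECURITY DEFINER currently prevents the planner from inlining the helper.

```sql
-- STABLE: the planner may evaluate this once per query rather than per row.
-- SECURITY DEFINER: bypasses RLS on the ACL tables, avoiding nested policy
-- lookups -- at the cost of ruling out inlining of the function body.
CREATE FUNCTION get_group_ids_of_current_user() RETURNS bigint[]
LANGUAGE sql STABLE SECURITY DEFINER AS $$
    SELECT array_agg(acl.entity_id)
    FROM org_memberships_acl acl
    JOIN app_groups obj ON acl.entity_id = obj.owner_id
    WHERE acl.actor_id = current_user_id()
$$;

-- "GOOD" shape: compare the row's key against the array the helper returns,
-- instead of passing row columns into a per-row check function.
CREATE POLICY group_select ON app_objects FOR SELECT
    USING (group_id = ANY (get_group_ids_of_current_user()));
```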
[
{
"msg_contents": "Hi all,\n\nI found typos in verify_heapam.c.\n\ns/comitted/committed/\n\nPlease find an attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 2 Apr 2021 15:02:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in verify_heapam.c"
},
{
"msg_contents": "\n\nOn 2021/04/02 15:02, Masahiko Sawada wrote:\n> Hi all,\n> \n> I found typos in verify_heapam.c.\n> \n> s/comitted/committed/\n> \n> Please find an attached patch.\n\nThanks for the report and patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Apr 2021 16:28:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in verify_heapam.c"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 4:28 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/02 15:02, Masahiko Sawada wrote:\n> > Hi all,\n> >\n> > I found typos in verify_heapam.c.\n> >\n> > s/comitted/committed/\n> >\n> > Please find an attached patch.\n>\n> Thanks for the report and patch! Pushed.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 2 Apr 2021 16:43:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in verify_heapam.c"
}
] |
[
{
"msg_contents": "Hello Hackers,\n\nWhy pg_walfile_name() can't be executed under recovery? What is the best\nway for me to get the current timeline and/or the file being recovering on\nthe standby using a postgres query? I know I can get it via process title\nbut don't want to go that route.\n\nThanks,\nSatya",
"msg_date": "Fri, 2 Apr 2021 01:23:02 -0700",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": true,
"msg_subject": "why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 4:23 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n> Why pg_walfile_name() can't be executed under recovery?\n\nI believe the issue is that the backend executing the function might\nnot have an accurate idea about which TLI to use. But I don't\nunderstand why we can't find some solution to that problem.\n\n> What is the best way for me to get the current timeline and/or the file being recovering on the standby using a postgres query? I know I can get it via process title but don't want to go that route.\n\npg_stat_wal_receiver has LSN and TLI information, but probably won't\nhelp except when WAL receiver is actually active.\npg_last_wal_receive_lsn() and pg_last_wal_replay_lsn() will give the\nLSN at any point during recovery, but not the TLI. We might have some\ngaps in this area...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Apr 2021 08:22:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
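What Robert describes above can be tried directly on a standby; these are existing core functions and views, and only the combination is illustrative:

```sql
-- LSNs are available at any point during recovery:
SELECT pg_last_wal_receive_lsn();  -- last LSN received and flushed by walreceiver
SELECT pg_last_wal_replay_lsn();   -- last LSN replayed

-- The timeline is only exposed while the WAL receiver is actually streaming:
SELECT received_tli FROM pg_stat_wal_receiver;

-- But mapping an LSN to a walfile name is refused while recovery is in
-- progress -- the limitation this thread is about:
SELECT pg_walfile_name(pg_last_wal_replay_lsn());
```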
{
"msg_contents": "On Fri, 2 Apr 2021 08:22:09 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Apr 2, 2021 at 4:23 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> > Why pg_walfile_name() can't be executed under recovery? \n> \n> I believe the issue is that the backend executing the function might\n> not have an accurate idea about which TLI to use. But I don't\n> understand why we can't find some solution to that problem.\n> \n> > What is the best way for me to get the current timeline and/or the file\n> > being recovering on the standby using a postgres query? I know I can get it\n> > via process title but don't want to go that route. \n> \n> pg_stat_wal_receiver has LSN and TLI information, but probably won't\n> help except when WAL receiver is actually active.\n> pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn() will give the\n> LSN at any point during recovery, but not the TLI. We might have some\n> gaps in this area...\n\nYep, see previous discussion:\nhttps://www.postgresql.org/message-id/flat/20190723180518.635ac554%40firost\n\nThe status by the time was to consider a new view eg. pg_stat_recovery, to\nreport various recovery stats.\n\nBut maybe the best place now would be to include it in the new pg_stat_wal view?\n\nAs I'm interesting with this feature as well, I volunteer to work on it as\nauthor or reviewer.\n\nRegards,\n\n\n",
"msg_date": "Wed, 7 Apr 2021 19:04:38 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 5:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 2, 2021 at 4:23 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> > Why pg_walfile_name() can't be executed under recovery?\n>\n> I believe the issue is that the backend executing the function might\n> not have an accurate idea about which TLI to use. But I don't\n> understand why we can't find some solution to that problem.\n>\n> > What is the best way for me to get the current timeline and/or the file being recovering on the standby using a postgres query? I know I can get it via process title but don't want to go that route.\n>\n> pg_stat_wal_receiver has LSN and TLI information, but probably won't\n> help except when WAL receiver is actually active.\n> pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn() will give the\n> LSN at any point during recovery, but not the TLI. We might have some\n> gaps in this area...\n\nI spent some time today to allow pg_walfile_{name, name_offset} to run in\nrecovery. Timeline ID is computed while in recovery as follows - WAL\nreceiver's last received and flushed WAL record's TLI if it's\nstreaming, otherwise the last replayed WAL record's TLI. This way,\nthese functions can be used on standby or PITR server or even in crash\nrecovery if the server opens up for read-only connections.\n\nPlease have a look at the attached patch.\n\nIf the approach looks okay, I can add notes in the documentation.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 7 Apr 2022 19:02:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 9:32 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I spent some time today to allow pg_walfile_{name, name_offset} run in\n> recovery. Timeline ID is computed while in recovery as follows - WAL\n> receiver's last received and flushed WAL record's TLI if it's\n> streaming, otherwise the last replayed WAL record's TLI. This way,\n> these functions can be used on standby or PITR server or even in crash\n> recovery if the server opens up for read-only connections.\n\nI don't think this is a good definition. Suppose I ask for\npg_walfile_name() using an older LSN. With this approach, we're going\nto get a filename based on the idea that the TLI that was in effect\nback then is the same one as the TLI that is in effect now, which\nmight not be true. For example, suppose that the current TLI is 2 and\nit branched off of timeline 1 at 10/0. If I ask for\npg_walfile_name('F/0'), it's going to give me the name of a WAL file\nthat has never existed. That seems bad.\n\nIt's also worth noting that there's a bit of a definitional problem\nhere. If in the same situation, I ask for pg_walfile_name('11/0'),\nit's going to give me a filename based on TLI 2, but there's also a\nWAL file for that LSN with TLI 1. How do we know which one the user\nwants? Perhaps one idea would be to say that the relevant TLI is the\none which was in effect at the time that LSN was replayed. If we do\nthat, what about future LSNs? We could assume that for future LSNs,\nthe TLI should be the same as the current TLI, but maybe that's also\nmisleading, because recovery_target_timeline could be set.\n\nI think it's really important to start by being precise about the\nquestion that we think pg_walfile_name() ought to be answering. 
If we\ndon't know that, then we really can't say what TLI it should be using.\nIt's not hard to make the function return SOME answer using SOME TLI,\nbut then it's not clear that the answer is the right one for any\nparticular purpose. And in that case the function is more dangerous\nthan useful, because people will write code that uses it to do stuff,\nand then that stuff won't actually work correctly under all\ncircumstances.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Apr 2022 11:37:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Thu, Apr 07, 2022 at 11:37:15AM -0400, Robert Haas wrote:\n> It's also worth noting that there's a bit of a definitional problem\n> here. If in the same situation, I ask for pg_walfile_name('11/0'),\n> it's going to give me a filename based on TLI 2, but there's also a\n> WAL file for that LSN with TLI 1. How do we know which one the user\n> wants? Perhaps one idea would be to say that the relevant TLI is the\n> one which was in effect at the time that LSN was replayed. If we do\n> that, what about future LSNs? We could assume that for future LSNs,\n> the TLI should be the same as the current TLI, but maybe that's also\n> misleading, because recovery_target_timeline could be set.\n\nFWIW, for future positions, I'd be rather on board with the concept of\nusing the TLI currently being replayed, but as you say that comes down\nto the definition borders we want to use. Another possibility would\nbe to return an error and kick the can down the road if we are unsure\nof what the right behavior is. For past positions, this should go\nthrough a lookup of the timeline history file (the patch does not do\nthat at quick glance).\n\n> I think it's really important to start by being precise about the\n> question that we think pg_walfile_name() ought to be answering. If we\n> don't know that, then we really can't say what TLI it should be using.\n> It's not hard to make the function return SOME answer using SOME TLI,\n> but then it's not clear that the answer is the right one for any\n> particular purpose. And in that case the function is more dangerous\n> than useful, because people will write code that uses it to do stuff,\n> and then that stuff won't actually work correctly under all\n> circumstances.\n\nAgreed.\n--\nMichael",
"msg_date": "Fri, 8 Apr 2022 08:59:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 9:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 7, 2022 at 9:32 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I spent some time today to allow pg_walfile_{name, name_offset} run in\n> > recovery. Timeline ID is computed while in recovery as follows - WAL\n> > receiver's last received and flushed WAL record's TLI if it's\n> > streaming, otherwise the last replayed WAL record's TLI. This way,\n> > these functions can be used on standby or PITR server or even in crash\n> > recovery if the server opens up for read-only connections.\n>\n> I don't think this is a good definition. Suppose I ask for\n> pg_walfile_name() using an older LSN. With this approach, we're going\n> to get a filename based on the idea that the TLI that was in effect\n> back then is the same one as the TLI that is in effect now, which\n> might not be true. For example, suppose that the current TLI is 2 and\n> it branched off of timeline 1 at 10/0. If I ask for\n> pg_walfile_name('F/0'), it's going to give me the name of a WAL file\n> that has never existed. That seems bad.\n>\n> It's also worth noting that there's a bit of a definitional problem\n> here. If in the same situation, I ask for pg_walfile_name('11/0'),\n> it's going to give me a filename based on TLI 2, but there's also a\n> WAL file for that LSN with TLI 1. How do we know which one the user\n> wants? Perhaps one idea would be to say that the relevant TLI is the\n> one which was in effect at the time that LSN was replayed. If we do\n> that, what about future LSNs? We could assume that for future LSNs,\n> the TLI should be the same as the current TLI, but maybe that's also\n> misleading, because recovery_target_timeline could be set.\n\nFundamental question - should the pg_walfile_{name, name_offset} check\nwhether the file with the computed WAL file name exists on the server\nright now or ever existed earlier? 
Right now, they don't do that, see\n[1].\n\nI think we can make the functions more robust:\npg_walfile_{name, name_offset}(lsn, check_if_file_exists = false, tli\n= invalid_timelineid) - when check_if_file_exists is true, they check\nthat the computed WAL file exists, and when a valid tli is provided,\nthey use it in computing the WAL file name. When tli isn't provided,\nthey continue to use the insert TLI on the primary, and in recovery\nthey use the TLI as proposed in my patch. Perhaps they can also do\nthis (as Michael suggested): if check_if_file_exists is true, tli\nisn't provided and there's timeline history, they can look through all\nthe timelines and check whether the file exists with the name computed\nfrom each history TLI.\n\n> I think it's really important to start by being precise about the\n> question that we think pg_walfile_name() ought to be answering. If we\n> don't know that, then we really can't say what TLI it should be using.\n> It's not hard to make the function return SOME answer using SOME TLI,\n> but then it's not clear that the answer is the right one for any\n> particular purpose. And in that case the function is more dangerous\n> than useful, because people will write code that uses it to do stuff,\n> and then that stuff won't actually work correctly under all\n> circumstances.\n\nYes, once we agree on the semantics of these functions, having better\ndocumentation will help.\n\nThoughts?\n\n[1]\npostgres=# select * from pg_walfile_name('50000/dfdf');\n pg_walfile_name\n--------------------------\n 000000010005000000000000\n(1 row)\npostgres=# select * from pg_walfile_name_offset('50000/dfdf');\n file_name | file_offset\n--------------------------+-------------\n 000000010005000000000000 | 57311\n(1 row)\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 8 Apr 2022 19:01:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 9:31 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Fundamental question - should the pg_walfile_{name, name_offset} check\n> whether the file with the computed WAL file name exists on the server\n> right now or ever existed earlier? Right now, they don't do that, see\n> [1].\n\nI don't think that checking whether the file exists is the right\napproach. However, I do think that it's important to be precise about\nwhich TLI is going to be used. I think it would be reasonable to\nredefine this function (on both the primary and the standby) so that\nthe TLI that is used is the one that was in effect at the time record\nat the given LSN was either written or replayed. Then, you could\npotentially use this function to figure out whether you still have the\nWAL files that are needed to replay up to some previous point in the\nWAL stream. However, what about the segments where we switched from\none TLI to the next in the middle of the segment? There, you probably\nneed both the old and the new segments, or maybe if you're trying to\nstream them you only need the new one because we have some weird\nspecial case that will send the segment from the new timeline when the\nsegment from the old timeline is requested. So you couldn't just call\nthis function on one LSN per segment and call it good, and it wouldn't\nnecessarily be the case that the filenames you got back were exactly\nthe ones you needed.\n\nSo I'm not entirely sure this proposal is good enough, but it at least\nwould have the advantage of meaning that the filename you get back is\none that existed at some point in time and somebody used it for\nsomething.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 09:57:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 7:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 8, 2022 at 9:31 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Fundamental question - should the pg_walfile_{name, name_offset} check\n> > whether the file with the computed WAL file name exists on the server\n> > right now or ever existed earlier? Right now, they don't do that, see\n> > [1].\n>\n> I don't think that checking whether the file exists is the right\n> approach. However, I do think that it's important to be precise about\n> which TLI is going to be used. I think it would be reasonable to\n> redefine this function (on both the primary and the standby) so that\n> the TLI that is used is the one that was in effect at the time record\n> at the given LSN was either written or replayed. Then, you could\n> potentially use this function to figure out whether you still have the\n> WAL files that are needed to replay up to some previous point in the\n> WAL stream. However, what about the segments where we switched from\n> one TLI to the next in the middle of the segment? There, you probably\n> need both the old and the new segments, or maybe if you're trying to\n> stream them you only need the new one because we have some weird\n> special case that will send the segment from the new timeline when the\n> segment from the old timeline is requested. So you couldn't just call\n> this function on one LSN per segment and call it good, and it wouldn't\n> necessarily be the case that the filenames you got back were exactly\n> the ones you needed.\n>\n> So I'm not entirely sure this proposal is good enough, but it at least\n> would have the advantage of meaning that the filename you get back is\n> one that existed at some point in time and somebody used it for\n> something.\n\nUsing insert tli when not in recovery and using tli of the last WAL\nreplayed record in crash/archive/standby recovery, seems a reasonable\nchoice to me. 
I've also added a note in the docs.\n\nAttaching v2 with the above change. Please review it further.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 9 Apr 2022 19:00:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "\n\n> On 9 Apr 2022, at 18:30, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> Using insert tli when not in recovery and using tli of the last WAL\n> replayed record in crash/archive/standby recovery, seems a reasonable\n> choice to me. \n\nPlease excuse me if I'm not attentive enough. I've read this thread. And I could not find what is the problem that you are solving. What is the purpose of the WAL file name you want to obtain?\n\npg_walfile_name() is a formatting function, with TLI as a hidden argument. If we want it to work on a standby, we should just convert it to a pure formatting function without access to the DB state, and pass TLI as an argument.\nMaking implicit TLI computation with certain expectations is not a good idea IMV.\n\npg_walfile_name() could just read the .history file, determine which TLI contains the given LSN and format the name. And there are still tricky segments during a TLI switch.\n\nEither way we can rename the function to pg_walfile_name_as_if_on_timeline_of_last_wal_replayed().\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 9 Apr 2022 21:25:01 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Sat, Apr 9, 2022 at 12:25 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Please excuse me if I'm not attentive enough. I've read this thread. And I could not find what is the problem that you are solving. What is the purpose of the WAL file name you want to obtain?\n\nYeah, I'd also like to know this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 9 Apr 2022 12:51:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "On Sat, Apr 9, 2022 at 10:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Apr 9, 2022 at 12:25 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > Please excuse me if I'm not attentive enough. I've read this thread. And I could not find what is the problem that you are solving. What is the purpose of the WAL file name you want to obtain?\n>\n> Yeah, I'd also like to know this.\n\nIMO, there are plenty of uses for pg_walfile_{name, name_offset}. Say, I have\nLSNs (say, flush, insert, replayed or WAL receiver latest received)\nand I would like to know the WAL file name and offset in an app\nconnecting to postgres or a control plane either for doing some\nreporting or figuring out whether a WAL file exists given an LSN or\nfor some other reason. With these functions restricted when the server\nis in recovery mode, the apps or control plane code can't use them and\nthey have to do if (!pg_is_in_recovery()) {select * from\npg_walfile_{name, name_offset}.\n\nAm I missing any other important use-cases?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 22 Apr 2022 19:45:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
},
{
"msg_contents": "\n\n> On 22 Apr 2022, at 19:15, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Sat, Apr 9, 2022 at 10:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> On Sat, Apr 9, 2022 at 12:25 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>> Please excuse me if I'm not attentive enough. I've read this thread. And I could not find what is the problem that you are solving. What is the purpose of the WAL file name you want to obtain?\n>> \n>> Yeah, I'd also like to know this.\n> \n> IMO, uses of pg_walfile_{name, name_offset} are plenty. Say, I have\n> LSNs (say, flush, insert, replayed or WAL receiver latest received)\nAFAIK flush, receive and replay LSNs may be on 3 different timelines, rendering two of the names incorrect. Actually, this proves that pg_wal_filename() should not be called on a standby with the present function prototype.\n\n> and I would like to know the WAL file name and offset in an app\n> connecting to postgres or a control plane either for doing some\n> reporting\nWhat kind of reporting?\n\n> or figuring out whether a WAL file exists given an LSN or\n> for some other reason.\nThere might be many WAL files for the same LSN. Please specify a more detailed scenario for using the WAL file name.\n\n> With these functions restricted when the server\n> is in recovery mode, the apps or control plane code can't use them and\n> they have to do if (!pg_is_in_recovery()) {select * from\n> pg_walfile_{name, name_offset}.\n> \n> Am I missing any other important use-cases?\n\nI do not see a correct use-case among these. You justify the necessity of running pg_wal_filename() on a standby by having an LSN (not a problem), by doing some kind of reporting (too broad a problem) and by checking the existence of some WAL file (more details needed). What is the problem leading to checking the existence of the file?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 25 Apr 2022 11:18:24 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: why pg_walfile_name() cannot be executed during recovery?"
}
] |
[
{
"msg_contents": "Hi\r\n\r\nI checked a query from the\r\nhttps://www.depesz.com/2021/04/01/waiting-for-postgresql-14-add-unistr-function/\r\narticle.\r\n\r\npostgres=# SELECT U&'\\+01F603';\r\n┌──────────┐\r\n│ ?column? │\r\n╞══════════╡\r\n│ 😃 │\r\n└──────────┘\r\n(1 row)\r\n\r\n\r\nThe result is not correct. Emoji has width 2 chars, but psql calculates\r\nwith just one char.\r\n\r\nRegards\r\n\r\nPavel\r\n",
"msg_date": "Fri, 2 Apr 2021 10:45:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "badly calculated width of emoji in psql"
},
{
"msg_contents": "On Fri, 2021-04-02 at 10:45 +0200, Pavel Stehule wrote:\n> I am checked an query from https://www.depesz.com/2021/04/01/waiting-for-postgresql-14-add-unistr-function/ article.\n> \n> postgres=# SELECT U&'\\+01F603';\n> ┌──────────┐\n> │ ?column? │\n> ╞══════════╡\n> │ 😃 │\n> └──────────┘\n> (1 row)\n> \n> \n> The result is not correct. Emoji has width 2 chars, but psql calculates with just one char.\n\nHow about this:\n\ndiff --git a/src/common/wchar.c b/src/common/wchar.c\nindex 6e7d731e02..e2d0d9691c 100644\n--- a/src/common/wchar.c\n+++ b/src/common/wchar.c\n@@ -673,7 +673,8 @@ ucs_wcwidth(pg_wchar ucs)\n \t\t (ucs >= 0xfe30 && ucs <= 0xfe6f) ||\t/* CJK Compatibility Forms */\n \t\t (ucs >= 0xff00 && ucs <= 0xff5f) ||\t/* Fullwidth Forms */\n \t\t (ucs >= 0xffe0 && ucs <= 0xffe6) ||\n-\t\t (ucs >= 0x20000 && ucs <= 0x2ffff)));\n+\t\t (ucs >= 0x20000 && ucs <= 0x2ffff) ||\n+\t\t (ucs >= 0x1f300 && ucs <= 0x1faff)));\t/* symbols and emojis */\n }\n \n /*\n\nThis is guesswork based on the unicode entries that look like symbols.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 02 Apr 2021 11:37:39 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 11:37 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Fri, 2021-04-02 at 10:45 +0200, Pavel Stehule wrote:\n> > I am checked an query from\n> https://www.depesz.com/2021/04/01/waiting-for-postgresql-14-add-unistr-function/\n> article.\n> >\n> > postgres=# SELECT U&'\\+01F603';\n> > ┌──────────┐\n> > │ ?column? │\n> > ╞══════════╡\n> > │ 😃 │\n> > └──────────┘\n> > (1 row)\n> >\n> >\n> > The result is not correct. Emoji has width 2 chars, but psql calculates\n> with just one char.\n>\n> How about this:\n>\n> diff --git a/src/common/wchar.c b/src/common/wchar.c\n> index 6e7d731e02..e2d0d9691c 100644\n> --- a/src/common/wchar.c\n> +++ b/src/common/wchar.c\n> @@ -673,7 +673,8 @@ ucs_wcwidth(pg_wchar ucs)\n> (ucs >= 0xfe30 && ucs <= 0xfe6f) || /* CJK\n> Compatibility Forms */\n> (ucs >= 0xff00 && ucs <= 0xff5f) || /* Fullwidth Forms\n> */\n> (ucs >= 0xffe0 && ucs <= 0xffe6) ||\n> - (ucs >= 0x20000 && ucs <= 0x2ffff)));\n> + (ucs >= 0x20000 && ucs <= 0x2ffff) ||\n> + (ucs >= 0x1f300 && ucs <= 0x1faff))); /* symbols and\n> emojis */\n> }\n>\n> /*\n>\n> This is guesswork based on the unicode entries that look like symbols.\n>\n\nit helps\n\nwith this patch, the formatting is correct\n\nPavel\n\n>\n> Yours,\n> Laurenz Albe\n>\n",
"msg_date": "Fri, 2 Apr 2021 11:51:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "At Fri, 2 Apr 2021 11:51:26 +0200, Pavel Stehule <pavel.stehule@gmail.com> wrote in \n> with this patch, the formatting is correct\n\nI think the hardest point of this issue is that we don't have a\nreasonable authoritative source that determines character width. And\nthe presentation is heavily dependent on the environment.\n\nUnicode 9 and/or 10 defines the character properties \"Emoji\" and\n\"Emoji_Presentation\", and tr51[1] says that\n\n> Emoji are generally presented with a square aspect ratio, which\n> presents a problem for flags.\n...\n> Current practice is for emoji to have a square aspect ratio, deriving\n> from their origin in Japanese. For interoperability, it is recommended\n> that this practice be continued with current and future emoji. They\n> will typically have about the same vertical placement and advance\n> width as CJK ideographs. For example:\n\nOk, even putting aside flags, the first table in [2] asserts that \"#\",\n\"*\", \"0-9\" are emoji characters. But we, and I think everyone else,\nnever present them in two columns. And the table has many mysterious\nholes I haven't looked into.\n\nWe could use Emoji_Presentation=yes for the purpose, but for example,\nU+23E9 (BLACK RIGHT-POINTING DOUBLE TRIANGLE) has the property\nEmoji_Presentation=yes but U+23ED (BLACK RIGHT-POINTING DOUBLE TRIANGLE\nWITH VERTICAL BAR) does not, for a reason uncertain to me. It doesn't\nlook like anything other than some kind of mistake.\n\nAbout the environment, for example, U+23E9 is an emoji with\nEmoji_Presentation=yes, but it is shown in one column on my\nxterm. (I'm not sure what font I am using..)\n\n[1] http://www.unicode.org/reports/tr51/\n[2] https://unicode.org/Public/13.0.0/ucd/emoji/emoji-data.txt\n\nA possible compromise is that we treat all Emoji=yes characters\nexcluding ASCII characters as double-width and manually merge the\nfragmented regions into reasonably larger chunks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Apr 2021 14:07:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 7:07 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Fri, 2 Apr 2021 11:51:26 +0200, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote in\n> > with this patch, the formatting is correct\n>\n> I think the hardest point of this issue is that we don't have a\n> reasonable authoritative source that determines character width. And\n> that the presentation is heavily dependent on environment.\n>\n> Unicode 9 and/or 10 defines the character properties \"Emoji\" and\n> \"Emoji_Presentation\", and tr51[1] says that\n>\n> > Emoji are generally presented with a square aspect ratio, which\n> > presents a problem for flags.\n> ...\n> > Current practice is for emoji to have a square aspect ratio, deriving\n> > from their origin in Japanese. For interoperability, it is recommended\n> > that this practice be continued with current and future emoji. They\n> > will typically have about the same vertical placement and advance\n> > width as CJK ideographs. For example:\n>\n> Ok, even putting aside flags, the first table in [2] asserts that \"#\",\n> \"*\", \"0-9\" are emoji characters. But we and I think no-one never\n> present them in two-columns. And the table has many mysterious holes\n> I haven't looked into.\n>\n> We could Emoji_Presentation=yes for the purpose, but for example,\n> U+23E9(BLACK RIGHT-POINTING DOUBLE TRIANGLE) has the property\n> Emoji_Presentation=yes but U+23E9(BLACK RIGHT-POINTING DOUBLE TRIANGLE\n> WITH VERTICAL BAR) does not for a reason uncertaion to me. It doesn't\n> look like other than some kind of mistake.\n>\n> About environment, for example, U+23E9 is an emoji, and\n> Emoji_Presentation=yes, but it is shown in one column on my\n> xterm. (I'm not sure what font am I using..)\n>\n> [1] http://www.unicode.org/reports/tr51/\n> [2] https://unicode.org/Public/13.0.0/ucd/emoji/emoji-data.txt\n>\n> A possible compromise is that we treat all Emoji=yes characters\n> excluding ASCII characters as double-width and manually merge the\n> fragmented regions into reasonably larger chunks.\n>\n\nok\n\nIt should be fixed in glibc,\n\nhttps://sourceware.org/bugzilla/show_bug.cgi?id=20313\n\nso we can check it\n\nRegards\n\nPavel\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n",
"msg_date": "Mon, 5 Apr 2021 15:13:28 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Mon, 2021-04-05 at 14:07 +0900, Kyotaro Horiguchi wrote:\r\n> At Fri, 2 Apr 2021 11:51:26 +0200, Pavel Stehule <pavel.stehule@gmail.com> wrote in \r\n> > with this patch, the formatting is correct\r\n> \r\n> I think the hardest point of this issue is that we don't have a\r\n> reasonable authoritative source that determines character width. And\r\n> that the presentation is heavily dependent on environment.\r\n\r\n> Unicode 9 and/or 10 defines the character properties \"Emoji\" and\r\n> \"Emoji_Presentation\", and tr51[1] says that\r\n> \r\n> > Emoji are generally presented with a square aspect ratio, which\r\n> > presents a problem for flags.\r\n> ...\r\n> > Current practice is for emoji to have a square aspect ratio, deriving\r\n> > from their origin in Japanese. For interoperability, it is recommended\r\n> > that this practice be continued with current and future emoji. They\r\n> > will typically have about the same vertical placement and advance\r\n> > width as CJK ideographs. For example:\r\n> \r\n> Ok, even putting aside flags, the first table in [2] asserts that \"#\",\r\n> \"*\", \"0-9\" are emoji characters. But we and I think no-one never\r\n> present them in two-columns. And the table has many mysterious holes\r\n> I haven't looked into.\r\n\r\nI think that's why Emoji_Presentation is false for those characters --\r\nthey _could_ be presented as emoji if the UI should choose to do so, or\r\nif an emoji presentation selector is used, but by default a text\r\npresentation would be expected.\r\n\r\n> We could Emoji_Presentation=yes for the purpose, but for example,\r\n> U+23E9(BLACK RIGHT-POINTING DOUBLE TRIANGLE) has the property\r\n> Emoji_Presentation=yes but U+23E9(BLACK RIGHT-POINTING DOUBLE TRIANGLE\r\n> WITH VERTICAL BAR) does not for a reason uncertaion to me. 
It doesn't\r\n> look like other than some kind of mistake.\r\n\r\nThat is strange.\r\n\r\n> About environment, for example, U+23E9 is an emoji, and\r\n> Emoji_Presentation=yes, but it is shown in one column on my\r\n> xterm. (I'm not sure what font am I using..)\r\n\r\nI would guess that's the key issue here. If we choose a particular\r\nwidth for emoji characters, is there anything keeping a terminal's font\r\nfrom doing something different anyway?\r\n\r\nFurthermore, if the stream contains an emoji presentation selector\r\nafter a code point that would normally be text, shouldn't we change\r\nthat glyph to have an emoji \"expected width\"?\r\n\r\nI'm wondering if the most correct solution would be to have the user\r\ntell the client what width to use, using .psqlrc or something.\r\n\r\n> A possible compromise is that we treat all Emoji=yes characters\r\n> excluding ASCII characters as double-width and manually merge the\r\n> fragmented regions into reasonably larger chunks.\r\n\r\nWe could also keep the fragments as-is and generate a full interval\r\ntable, like common/unicode_combining_table.h. It looks like there's\r\nroughly double the number of emoji intervals as combining intervals, so\r\nhopefully adding a second binary search wouldn't be noticeably slower.\r\n\r\n--\r\n\r\nIn your opinion, would the current one-line patch proposal make things\r\nstrictly better than they are today, or would it have mixed results?\r\nI'm wondering how to help this patch move forward for the current\r\ncommitfest, or if we should maybe return with feedback for now.\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 7 Jul 2021 18:03:34 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 8:03 PM Jacob Champion <pchampion@vmware.com>\nwrote:\n\n> On Mon, 2021-04-05 at 14:07 +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 2 Apr 2021 11:51:26 +0200, Pavel Stehule <\n> pavel.stehule@gmail.com> wrote in\n> > > with this patch, the formatting is correct\n> >\n> > I think the hardest point of this issue is that we don't have a\n> > reasonable authoritative source that determines character width. And\n> > that the presentation is heavily dependent on environment.\n>\n> > Unicode 9 and/or 10 defines the character properties \"Emoji\" and\n> > \"Emoji_Presentation\", and tr51[1] says that\n> >\n> > > Emoji are generally presented with a square aspect ratio, which\n> > > presents a problem for flags.\n> > ...\n> > > Current practice is for emoji to have a square aspect ratio, deriving\n> > > from their origin in Japanese. For interoperability, it is recommended\n> > > that this practice be continued with current and future emoji. They\n> > > will typically have about the same vertical placement and advance\n> > > width as CJK ideographs. For example:\n> >\n> > Ok, even putting aside flags, the first table in [2] asserts that \"#\",\n> > \"*\", \"0-9\" are emoji characters. But we and I think no-one never\n> > present them in two-columns. And the table has many mysterious holes\n> > I haven't looked into.\n>\n> I think that's why Emoji_Presentation is false for those characters --\n> they _could_ be presented as emoji if the UI should choose to do so, or\n> if an emoji presentation selector is used, but by default a text\n> presentation would be expected.\n>\n> > We could Emoji_Presentation=yes for the purpose, but for example,\n> > U+23E9(BLACK RIGHT-POINTING DOUBLE TRIANGLE) has the property\n> > Emoji_Presentation=yes but U+23E9(BLACK RIGHT-POINTING DOUBLE TRIANGLE\n> > WITH VERTICAL BAR) does not for a reason uncertaion to me. 
It doesn't\n> > look like other than some kind of mistake.\n>\n> That is strange.\n>\n> > About environment, for example, U+23E9 is an emoji, and\n> > Emoji_Presentation=yes, but it is shown in one column on my\n> > xterm. (I'm not sure what font am I using..)\n>\n> I would guess that's the key issue here. If we choose a particular\n> width for emoji characters, is there anything keeping a terminal's font\n> from doing something different anyway?\n>\n> Furthermore, if the stream contains an emoji presentation selector\n> after a code point that would normally be text, shouldn't we change\n> that glyph to have an emoji \"expected width\"?\n>\n> I'm wondering if the most correct solution would be to have the user\n> tell the client what width to use, using .psqlrc or something.\n>\n\nGnome terminal does it - VTE does it - there is option how to display chars\nwith not well specified width.\n\n\n> > A possible compromise is that we treat all Emoji=yes characters\n> > excluding ASCII characters as double-width and manually merge the\n> > fragmented regions into reasonably larger chunks.\n>\n> We could also keep the fragments as-is and generate a full interval\n> table, like common/unicode_combining_table.h. It looks like there's\n> roughly double the number of emoji intervals as combining intervals, so\n> hopefully adding a second binary search wouldn't be noticeably slower.\n>\n> --\n>\n> In your opinion, would the current one-line patch proposal make things\n> strictly better than they are today, or would it have mixed results?\n> I'm wondering how to help this patch move forward for the current\n> commitfest, or if we should maybe return with feedback for now.\n>\n\nWe can check how these chars are printed in most common terminals in modern\nversions. I am afraid that it can be problematic to find a solution that\nworks everywhere, because some libraries on some platforms are pretty\nobsolete.\n\nRegards\n\nPavel\n\n\n> --Jacob\n>\n",
"msg_date": "Wed, 7 Jul 2021 20:19:34 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Wed, Jul 07, 2021 at 06:03:34PM +0000, Jacob Champion wrote:\n> I would guess that's the key issue here. If we choose a particular\n> width for emoji characters, is there anything keeping a terminal's font\n> from doing something different anyway?\n\nI'd say that we are doing our best in guessing what it should be,\nthen. One cannot predict how fonts are designed.\n\n> We could also keep the fragments as-is and generate a full interval\n> table, like common/unicode_combining_table.h. It looks like there's\n> roughly double the number of emoji intervals as combining intervals, so\n> hopefully adding a second binary search wouldn't be noticeably slower.\n\nHmm. Such things have a cost, and this one sounds costly with a\nlimited impact. What do we gain except a better visibility with psql?\n\n> In your opinion, would the current one-line patch proposal make things\n> strictly better than they are today, or would it have mixed results?\n> I'm wondering how to help this patch move forward for the current\n> commitfest, or if we should maybe return with feedback for now.\n\nBased on the following list, it seems to me that [u+1f300,u+0x1faff]\nwon't capture everything, like the country flags:\nhttp://www.unicode.org/emoji/charts/full-emoji-list.html\n--\nMichael",
"msg_date": "Mon, 19 Jul 2021 16:46:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "po 19. 7. 2021 v 9:46 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Wed, Jul 07, 2021 at 06:03:34PM +0000, Jacob Champion wrote:\n> > I would guess that's the key issue here. If we choose a particular\n> > width for emoji characters, is there anything keeping a terminal's font\n> > from doing something different anyway?\n>\n> I'd say that we are doing our best in guessing what it should be,\n> then. One cannot predict how fonts are designed.\n>\n> > We could also keep the fragments as-is and generate a full interval\n> > table, like common/unicode_combining_table.h. It looks like there's\n> > roughly double the number of emoji intervals as combining intervals, so\n> > hopefully adding a second binary search wouldn't be noticeably slower.\n>\n> Hmm. Such things have a cost, and this one sounds costly with a\n> limited impact. What do we gain except a better visibility with psql?\n>\n\nThe benefit is correct displaying. I checked impact on server side, and\nucs_wcwidth is used just for calculation of error position. Any other usage\nis only in psql.\n\nMoreover, I checked unicode ranges, and I think so for common languages the\nperformance impact should be zero (because typically use ucs < 0x1100). The\npossible (but very low) impact can be for some historic languages or\nspecial symbols. It has not any impact for ranges that currently return\ndisplay width 2, because the new range is at the end of list.\n\nI am not sure how wide usage of PQdsplen is outside psql, but I have no\nreason to think so, so developers will prefer this function over built\nfunctionality in any developing environment that supports unicode. 
So in\nthis case I have a strong opinion to prefer correctness of result against\ncurrent speed (note: I have an experience from pspg development, where this\noperation is really on critical path, and I tried do some micro\noptimization without strong effect - on very big unusual result (very wide,\nvery long (100K rows) the difference was about 500 ms (on pager side, it\ndoes nothing else than string operations in this moment)).\n\nRegards\n\nPavel\n\n>\n> > In your opinion, would the current one-line patch proposal make things\n> > strictly better than they are today, or would it have mixed results?\n> > I'm wondering how to help this patch move forward for the current\n> > commitfest, or if we should maybe return with feedback for now.\n>\n> Based on the following list, it seems to me that [u+1f300,u+0x1faff]\n> won't capture everything, like the country flags:\n> http://www.unicode.org/emoji/charts/full-emoji-list.html\n> --\n> Michael\n>\n",
"msg_date": "Mon, 19 Jul 2021 12:03:35 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Mon, 2021-07-19 at 16:46 +0900, Michael Paquier wrote:\n> > In your opinion, would the current one-line patch proposal make things\n> > strictly better than they are today, or would it have mixed results?\n> > I'm wondering how to help this patch move forward for the current\n> > commitfest, or if we should maybe return with feedback for now.\n> \n> Based on the following list, it seems to me that [u+1f300,u+0x1faff]\n> won't capture everything, like the country flags:\n> http://www.unicode.org/emoji/charts/full-emoji-list.html\n\nThat could be adapted; the question is if the approach as such is\ndesirable or not. This is necessarily a moving target, at the rate\nthat emojis are created and added to Unicode.\n\nMy personal feeling is that something simple and perhaps imperfect\nas my one-liner that may miss some corner cases would be ok, but\nanything that saps more performance or is complicated would not\nbe worth the effort.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 19 Jul 2021 13:13:57 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Mon, 2021-07-19 at 13:13 +0200, Laurenz Albe wrote:\r\n> On Mon, 2021-07-19 at 16:46 +0900, Michael Paquier wrote:\r\n> > > In your opinion, would the current one-line patch proposal make things\r\n> > > strictly better than they are today, or would it have mixed results?\r\n> > > I'm wondering how to help this patch move forward for the current\r\n> > > commitfest, or if we should maybe return with feedback for now.\r\n> > \r\n> > Based on the following list, it seems to me that [u+1f300,u+0x1faff]\r\n> > won't capture everything, like the country flags:\r\n> > http://www.unicode.org/emoji/charts/full-emoji-list.html\r\n\r\nOn my machine, the regional indicator codes take up one column each\r\n(even though they display as a wide uppercase letter), so making them\r\nwide would break alignment. This seems to match up with Annex #11 [1]:\r\n\r\nED4. East Asian Wide (W): All other characters that are always\r\n wide. [...] This category includes characters that have explicit\r\n halfwidth counterparts, along with characters that have the [UTS51]\r\n property Emoji_Presentation, with the exception of characters that\r\n have the [UCD] property Regional_Indicator\r\n\r\nSo for whatever reason, those indicator codes aren't considered East\r\nAsian Wide by Unicode (and therefore glibc), even though they are\r\nEmoji_Presentation. And glibc appears to be using East Asian Wide as\r\nthe flag for a 2-column character.\r\n\r\nglibc 2.31 is based on Unicode 12.1, I think. 
So if Postgres is built\r\nagainst a Unicode database that's different from the system's,\r\nobviously you'll see odd results no matter what we do here.\r\n\r\nAnd _all_ of that completely ignores the actual country-flag-combining\r\nbehavior, which my terminal doesn't do and I assume would be part of a\r\nseparate conversation entirely, along with things like ZWJ sequences.\r\n\r\n> That could be adapted; the question is if the approach as such is\r\n> desirable or not. This is necessarily a moving target, at the rate\r\n> that emojis are created and added to Unicode.\r\n\r\nSure. We already have code in the tree that deals with that moving\r\ntarget, though, by parsing apart pieces of the Unicode database. So the\r\nadded maintenance cost should be pretty low.\r\n\r\n> My personal feeling is that something simple and perhaps imperfect\r\n> as my one-liner that may miss some corner cases would be ok, but\r\n> anything that saps more performance or is complicated would not\r\n> be worth the effort.\r\n\r\nAnother data point: on my machine (Ubuntu 20.04, glibc 2.31) that\r\nadditional range not only misses a large number of emoji (e.g. in the\r\n2xxx codepoint range), it incorrectly treats some narrow codepoints as\r\nwide (e.g. many in the 1F32x range have Emoji_Presentation set to\r\nfalse).\r\n\r\nI note that the doc comment for ucs_wcwidth()...\r\n\r\n> *\t - Spacing characters in the East Asian Wide (W) or East Asian\r\n> *\t\tFullWidth (F) category as defined in Unicode Technical\r\n> *\t\tReport #11 have a column width of 2.\r\n\r\n...doesn't match reality anymore. The East Asian width handling was\r\nlast updated in 2006, it looks like? So I wonder whether fixing the\r\ncode to match the comment would not only fix the emoji problem but also\r\na bunch of other non-emoji characters.\r\n\r\n--Jacob\r\n\r\n[1] http://www.unicode.org/reports/tr11/\r\n",
"msg_date": "Wed, 21 Jul 2021 00:08:24 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Wed, 2021-07-21 at 00:08 +0000, Jacob Champion wrote:\r\n> On Mon, 2021-07-19 at 13:13 +0200, Laurenz Albe wrote:\r\n> > That could be adapted; the question is if the approach as such is\r\n> > desirable or not. This is necessarily a moving target, at the rate\r\n> > that emojis are created and added to Unicode.\r\n> \r\n> Sure. We already have code in the tree that deals with that moving\r\n> target, though, by parsing apart pieces of the Unicode database. So the\r\n> added maintenance cost should be pretty low.\r\n\r\n(I am working on such a patch today and will report back.)\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 21 Jul 2021 16:05:11 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Wed, 2021-07-21 at 00:08 +0000, Jacob Champion wrote:\r\n> I note that the doc comment for ucs_wcwidth()...\r\n> \r\n> > *\t - Spacing characters in the East Asian Wide (W) or East Asian\r\n> > *\t\tFullWidth (F) category as defined in Unicode Technical\r\n> > *\t\tReport #11 have a column width of 2.\r\n> \r\n> ...doesn't match reality anymore. The East Asian width handling was\r\n> last updated in 2006, it looks like? So I wonder whether fixing the\r\n> code to match the comment would not only fix the emoji problem but also\r\n> a bunch of other non-emoji characters.\r\n\r\nAttached is my attempt at that. This adds a second interval table,\r\nhandling not only the emoji range in the original patch but also\r\ncorrecting several non-emoji character ranges which are included in the\r\n13.0 East Asian Wide/Fullwidth sets. Try for example\r\n\r\n- U+2329 LEFT POINTING ANGLE BRACKET\r\n- U+16FE0 TANGUT ITERATION MARK\r\n- U+18000 KATAKANA LETTER ARCHAIC E\r\n\r\nThis should work reasonably well for terminals that depend on modern\r\nversions of Unicode's EastAsianWidth.txt to figure out character width.\r\nI don't know how it behaves on BSD libc or Windows.\r\n\r\nThe new binary search isn't free, but my naive attempt at measuring the\r\nperformance hit made it look worse than it actually is. Since the\r\nmeasurement function was previously returning an incorrect (too short)\r\nwidth, we used to get a free performance boost by not printing the\r\ncorrect number of alignment/border characters. I'm still trying to\r\nfigure out how best to isolate the performance changes due to this\r\npatch.\r\n\r\nPavel, I'd be interested to see what your benchmarks find with this\r\ncode. Does this fix the original issue for you?\r\n\r\n--Jacob",
"msg_date": "Wed, 21 Jul 2021 22:12:51 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "Hi\n\nčt 22. 7. 2021 v 0:12 odesílatel Jacob Champion <pchampion@vmware.com>\nnapsal:\n\n> On Wed, 2021-07-21 at 00:08 +0000, Jacob Champion wrote:\n> > I note that the doc comment for ucs_wcwidth()...\n> >\n> > > * - Spacing characters in the East Asian Wide (W) or East Asian\n> > > * FullWidth (F) category as defined in Unicode Technical\n> > > * Report #11 have a column width of 2.\n> >\n> > ...doesn't match reality anymore. The East Asian width handling was\n> > last updated in 2006, it looks like? So I wonder whether fixing the\n> > code to match the comment would not only fix the emoji problem but also\n> > a bunch of other non-emoji characters.\n>\n> Attached is my attempt at that. This adds a second interval table,\n> handling not only the emoji range in the original patch but also\n> correcting several non-emoji character ranges which are included in the\n> 13.0 East Asian Wide/Fullwidth sets. Try for example\n>\n> - U+2329 LEFT POINTING ANGLE BRACKET\n> - U+16FE0 TANGUT ITERATION MARK\n> - U+18000 KATAKANA LETTER ARCHAIC E\n>\n> This should work reasonably well for terminals that depend on modern\n> versions of Unicode's EastAsianWidth.txt to figure out character width.\n> I don't know how it behaves on BSD libc or Windows.\n>\n> The new binary search isn't free, but my naive attempt at measuring the\n> performance hit made it look worse than it actually is. Since the\n> measurement function was previously returning an incorrect (too short)\n> width, we used to get a free performance boost by not printing the\n> correct number of alignment/border characters. I'm still trying to\n> figure out how best to isolate the performance changes due to this\n> patch.\n>\n> Pavel, I'd be interested to see what your benchmarks find with this\n> code. Does this fix the original issue for you?\n>\n\nI can confirm that the original issue is fixed.\n\nI tested performance\n\nI had three data sets\n\n1. 
typical data - mix ascii and utf characters typical for czech language -\n25K lines - there is very small slowdown 2ms from 24 to 26ms (stored file\nof this result has 3MB)\n\n2. the worst case - this reports has only emoji 1000 chars * 10K rows -\nthere is more significant slowdown - from 160 ms to 220 ms (stored file has\n39MB)\n\n3. a little bit of obscure datasets generated by \\x and select * from\npg_proc - it has 99K lines - and there are a lot of unicode decorations\n(borders). The line has 17K chars. (the stored file has 1.7GB)\nIn this dataset I see a slowdown from 4300 to 4700 ms.\n\nIn all cases, the data are in memory (in filesystem cache). I tested load\nto pspg.\n\n9% looks too high, but in absolute time it is 400ms for 99K lines and very\nuntypical data, or 2ms for more typical results., 2ms are nothing (for\ninteractive work). More - this is from a pspg perspective. In psql there\ncan be overhead of network, protocol processing, formatting, and more and\nmore, and psql doesn't need to calculate display width of decorations\n(borders), what is the reason for slowdowns in pspg.\n\nPavel\n\n\n\n\n> --Jacob\n>\n",
"msg_date": "Fri, 23 Jul 2021 17:42:20 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Fri, 2021-07-23 at 17:42 +0200, Pavel Stehule wrote:\r\n> čt 22. 7. 2021 v 0:12 odesílatel Jacob Champion <pchampion@vmware.com> napsal:\r\n> > \r\n> > Pavel, I'd be interested to see what your benchmarks find with this\r\n> > code. Does this fix the original issue for you?\r\n> \r\n> I can confirm that the original issue is fixed. \r\n\r\nGreat!\r\n\r\n> I tested performance\r\n> \r\n> I had three data sets\r\n> \r\n> 1. typical data - mix ascii and utf characters typical for czech\r\n> language - 25K lines - there is very small slowdown 2ms from 24 to\r\n> 26ms (stored file of this result has 3MB)\r\n> \r\n> 2. the worst case - this reports has only emoji 1000 chars * 10K rows\r\n> - there is more significant slowdown - from 160 ms to 220 ms (stored\r\n> file has 39MB)\r\n\r\nI assume the stored file size has grown with this patch, since we're\r\nnow printing the correct number of spacing/border characters?\r\n\r\n> 3. a little bit of obscure datasets generated by \\x and select * from\r\n> pg_proc - it has 99K lines - and there are a lot of unicode\r\n> decorations (borders). The line has 17K chars. (the stored file has\r\n> 1.7GB)\r\n> In this dataset I see a slowdown from 4300 to 4700 ms.\r\n\r\nThese results are lining up fairly well with my profiling. To isolate\r\nthe effects of the new algorithm (as opposed to printing time) I\r\nredirected to /dev/null:\r\n\r\n psql postgres -c \"select repeat(unistr('\\u115D'), 10000000);\" > /dev/null\r\n\r\nThis is what I expect to be the worst case for the new patch: a huge\r\nstring consisting of nothing but \\u115D, which is in the first interval\r\nand therefore takes the maximum number of iterations during the binary\r\nsearch.\r\n\r\nFor that command, the wall time slowdown with this patch was about\r\n240ms (from 1.128s to 1.366s, or 21% slower). 
Callgrind shows an\r\nincrease of 18% in the number of instructions executed with the\r\ninterval table patch, all of it coming from PQdsplen (no surprise).\r\nPQdsplen itself has a 36% increase in instruction count for that run.\r\n\r\nI also did a microbenchmark of PQdsplen (code attached, requires Google\r\nBenchmark [1]). The three cases I tested were standard ASCII\r\ncharacters, a smiley-face emoji, and the worst-case \\u115F character.\r\n\r\nWithout the patch:\r\n\r\n------------------------------------------------------------\r\nBenchmark Time CPU Iterations\r\n------------------------------------------------------------\r\n...\r\nBM_Ascii_mean 4.97 ns 4.97 ns 5\r\nBM_Ascii_median 4.97 ns 4.97 ns 5\r\nBM_Ascii_stddev 0.035 ns 0.035 ns 5\r\n...\r\nBM_Emoji_mean 6.30 ns 6.30 ns 5\r\nBM_Emoji_median 6.30 ns 6.30 ns 5\r\nBM_Emoji_stddev 0.045 ns 0.045 ns 5\r\n...\r\nBM_Worst_mean 12.4 ns 12.4 ns 5\r\nBM_Worst_median 12.4 ns 12.4 ns 5\r\nBM_Worst_stddev 0.038 ns 0.038 ns 5\r\n\r\nWith the patch:\r\n\r\n------------------------------------------------------------\r\nBenchmark Time CPU Iterations\r\n------------------------------------------------------------\r\n...\r\nBM_Ascii_mean 4.59 ns 4.59 ns 5\r\nBM_Ascii_median 4.60 ns 4.60 ns 5\r\nBM_Ascii_stddev 0.069 ns 0.068 ns 5\r\n...\r\nBM_Emoji_mean 11.8 ns 11.8 ns 5\r\nBM_Emoji_median 11.8 ns 11.8 ns 5\r\nBM_Emoji_stddev 0.059 ns 0.059 ns 5\r\n...\r\nBM_Worst_mean 18.5 ns 18.5 ns 5\r\nBM_Worst_median 18.5 ns 18.5 ns 5\r\nBM_Worst_stddev 0.077 ns 0.077 ns 5\r\n\r\nSo an incredibly tiny improvement in the ASCII case, which is\r\nreproducible across multiple runs and not just a fluke (I assume\r\nbecause the code is smaller now and has better cache line\r\ncharacteristics?). A ~90% slowdown for the emoji case, and a ~50%\r\nslowdown for the worst-performing characters. 
That seems perfectly\r\nreasonable considering we're talking about dozens of nanoseconds.\r\n\r\n> 9% looks too high, but in absolute time it is 400ms for 99K lines and\r\n> very untypical data, or 2ms for more typical results., 2ms are\r\n> nothing (for interactive work). More - this is from a pspg\r\n> perspective. In psql there can be overhead of network, protocol\r\n> processing, formatting, and more and more, and psql doesn't need to\r\n> calculate display width of decorations (borders), what is the reason\r\n> for slowdowns in pspg.\r\n\r\nYeah. Considering the alignment code is for user display, the absolute\r\nperformance is going to dominate, and I don't see any red flags so far.\r\nIf you're regularly dealing with unbelievably huge amounts of emoji, I\r\nthink the amount of extra time we're seeing here is unlikely to be a\r\nproblem. If it is, you can always turn alignment off. (Do you rely on\r\nhorizontal alignment for lines with millions of characters, anyway?)\r\n\r\nLaurenz, Michael, what do you think?\r\n\r\n--Jacob\r\n\r\n[1] https://github.com/google/benchmark",
"msg_date": "Mon, 26 Jul 2021 17:27:24 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "čt 22. 7. 2021 v 0:12 odesílatel Jacob Champion <pchampion@vmware.com>\nnapsal:\n\n> On Wed, 2021-07-21 at 00:08 +0000, Jacob Champion wrote:\n> > I note that the doc comment for ucs_wcwidth()...\n> >\n> > > * - Spacing characters in the East Asian Wide (W) or East Asian\n> > > * FullWidth (F) category as defined in Unicode Technical\n> > > * Report #11 have a column width of 2.\n> >\n> > ...doesn't match reality anymore. The East Asian width handling was\n> > last updated in 2006, it looks like? So I wonder whether fixing the\n> > code to match the comment would not only fix the emoji problem but also\n> > a bunch of other non-emoji characters.\n>\n> Attached is my attempt at that. This adds a second interval table,\n> handling not only the emoji range in the original patch but also\n> correcting several non-emoji character ranges which are included in the\n> 13.0 East Asian Wide/Fullwidth sets. Try for example\n>\n> - U+2329 LEFT POINTING ANGLE BRACKET\n> - U+16FE0 TANGUT ITERATION MARK\n> - U+18000 KATAKANA LETTER ARCHAIC E\n>\n> This should work reasonably well for terminals that depend on modern\n> versions of Unicode's EastAsianWidth.txt to figure out character width.\n> I don't know how it behaves on BSD libc or Windows.\n>\n> The new binary search isn't free, but my naive attempt at measuring the\n> performance hit made it look worse than it actually is. Since the\n> measurement function was previously returning an incorrect (too short)\n> width, we used to get a free performance boost by not printing the\n> correct number of alignment/border characters. I'm still trying to\n> figure out how best to isolate the performance changes due to this\n> patch.\n>\n> Pavel, I'd be interested to see what your benchmarks find with this\n> code. 
Does this fix the original issue for you?\n>\n\nThis patch fixed the badly formatted tables with emoji.\n\nI checked this patch, and it is correct and a step forward, because it\ndynamically sets the intervals of double-wide characters, and the code is more\nreadable.\n\nI also checked performance, and although there is a measurable slowdown, it is\nnegligible in absolute terms. The previous code was a little bit faster - it\nchecked fewer ranges, but was not fully correct and up to date.\n\nThe patch applied without problems.\nThere are no regression tests, but I am not sure they are necessary for\nthis case.\nmake check-world passed without problems.\n\nI'll mark this patch as ready for committer.\n\nRegards\n\nPavel\n\n>\n> --Jacob\n>",
"msg_date": "Thu, 12 Aug 2021 08:41:48 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "I tried this patch on and MacOS11/iterm2 and RHEL 7 (ssh'd from the Mac, in\ncase that matters) and the example shown at the top of the thread shows no\ndifference:\n\njohn.naylor=# \\pset border 2\nBorder style is 2.\njohn.naylor=# SELECT U&'\\+01F603';\n+----------+\n| ?column? |\n+----------+\n| 😃 |\n+----------+\n(1 row)\n\n(In case it doesn't render locally, the right bar in the result cell is\nstill shifted to the right.\n\nWhat is the expected context to show a behavior change? Does one need some\nspecific terminal or setting?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nI tried this patch on and MacOS11/iterm2 and RHEL 7 (ssh'd from the Mac, in case that matters) and the example shown at the top of the thread shows no difference:john.naylor=# \\pset border 2Border style is 2.john.naylor=# SELECT U&'\\+01F603';+----------+| ?column? |+----------+| 😃 |+----------+(1 row)(In case it doesn't render locally, the right bar in the result cell is still shifted to the right.What is the expected context to show a behavior change? Does one need some specific terminal or setting?-- John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Aug 2021 12:36:08 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "čt 12. 8. 2021 v 18:36 odesílatel John Naylor <john.naylor@enterprisedb.com>\nnapsal:\n\n> I tried this patch on and MacOS11/iterm2 and RHEL 7 (ssh'd from the Mac,\n> in case that matters) and the example shown at the top of the thread shows\n> no difference:\n>\n> john.naylor=# \\pset border 2\n> Border style is 2.\n> john.naylor=# SELECT U&'\\+01F603';\n> +----------+\n> | ?column? |\n> +----------+\n> | 😃 |\n> +----------+\n> (1 row)\n>\n\ndid you run make clean?\n\nwhen I executed just patch & make, it didn't work\n\n\n> (In case it doesn't render locally, the right bar in the result cell is\n> still shifted to the right.\n>\n> What is the expected context to show a behavior change? Does one need some\n> specific terminal or setting?\n>\n\nI assigned screenshots\n\n\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Thu, 12 Aug 2021 18:46:11 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Thu, 2021-08-12 at 12:36 -0400, John Naylor wrote:\r\n> I tried this patch on and MacOS11/iterm2 and RHEL 7 (ssh'd from the Mac, in case that matters) and the example shown at the top of the thread shows no difference:\r\n> \r\n> john.naylor=# \\pset border 2\r\n> Border style is 2.\r\n> john.naylor=# SELECT U&'\\+01F603';\r\n> +----------+\r\n> | ?column? |\r\n> +----------+\r\n> | 😃 |\r\n> +----------+\r\n> (1 row)\r\n> \r\n> (In case it doesn't render locally, the right bar in the result cell is still shifted to the right.\r\n> \r\n> What is the expected context to show a behavior change?\r\n\r\nThere shouldn't be anything special. (If your terminal was set up to\r\ndisplay emoji in single columns, that would cause alignment issues, but\r\nin the opposite direction to the one you're seeing.)\r\n\r\n> Does one need some specific terminal or setting?\r\n\r\nIn your case, an incorrect number of spaces are being printed, so it\r\nshouldn't have anything to do with your terminal settings.\r\n\r\nWas this a clean build? Perhaps I've introduced (or exacerbated) a\r\ndependency bug in the Makefile? The patch doing nothing is a surprising\r\nresult given the code change.\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 12 Aug 2021 16:54:02 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 12:46 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> did you run make clean?\n>\n> when I executed just patch & make, it didn't work\n\nI did not, but I always have --enable-depend on. I tried again with make\nclean, and ccache -C just in case, and it works now.\n\nOn Thu, Aug 12, 2021 at 12:54 PM Jacob Champion <pchampion@vmware.com>\nwrote:\n\n> Was this a clean build? Perhaps I've introduced (or exacerbated) a\n> dependency bug in the Makefile? The patch doing nothing is a surprising\n> result given the code change.\n\nYeah, given that Pavel had the same issue, that's a possibility. I don't\nrecall that happening with other unicode patches I've tested.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 12, 2021 at 12:46 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:> did you run make clean?>> when I executed just patch & make, it didn't workI did not, but I always have --enable-depend on. I tried again with make clean, and ccache -C just in case, and it works now.On Thu, Aug 12, 2021 at 12:54 PM Jacob Champion <pchampion@vmware.com> wrote:> Was this a clean build? Perhaps I've introduced (or exacerbated) a> dependency bug in the Makefile? The patch doing nothing is a surprising> result given the code change.Yeah, given that Pavel had the same issue, that's a possibility. I don't recall that happening with other unicode patches I've tested. --John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Aug 2021 14:16:25 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "The patch looks pretty good to me. I just have a stylistic suggestion which\nI've attached as a text file. There are also some outdated comments that\nare not the responsibility of this patch, but I kind of want to fix them\nnow:\n\n * - Hangul Jamo medial vowels and final consonants (U+1160-U+11FF)\n * have a column width of 0.\n\nWe got rid of this range in d8594d123c1, which is correct.\n\n * - Other format characters (general category code Cf in the Unicode\n * database) and ZERO WIDTH SPACE (U+200B) have a column width of 0.\n\nWe don't treat Cf the same as Me or Mn, and I believe that's deliberate. We\nalso no longer have the exception for zero-width space.\n\nIt seems the consensus so far is that performance is not an issue, and I'm\ninclined to agree.\n\nI'm a bit concerned about the build dependencies not working right, but\nit's not clear it's even due to the patch. I'll spend some time\ninvestigating next week.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Aug 2021 17:13:31 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Thu, 2021-08-12 at 17:13 -0400, John Naylor wrote:\r\n> The patch looks pretty good to me. I just have a stylistic suggestion\r\n> which I've attached as a text file.\r\n\r\nGetting rid of the \"clever addition\" looks much better to me, thanks. I\r\nhaven't verified the changes to the doc comment, but your description\r\nseems reasonable.\r\n\r\n> I'm a bit concerned about the build dependencies not working right,\r\n> but it's not clear it's even due to the patch. I'll spend some time\r\n> investigating next week.\r\n\r\nIf I vandalize src/common/wchar.c on HEAD, say by deleting the contents\r\nof pg_wchar_table, and then run `make install`, then libpq doesn't get\r\nrebuilt and there's no effect on the frontend. The postgres executable\r\ndoes get rebuilt for the backend.\r\n\r\nIt looks like src/interfaces/libpq/Makefile doesn't have a dependency\r\non libpgcommon (or libpgport, for that matter). For comparison,\r\nsrc/backend/Makefile has this:\r\n\r\n OBJS = \\\r\n $(LOCALOBJS) \\\r\n $(SUBDIROBJS) \\\r\n $(top_builddir)/src/common/libpgcommon_srv.a \\\r\n $(top_builddir)/src/port/libpgport_srv.a\r\n\r\nSo I think that's a bug that needs to be fixed independently, whether\r\nthis patch goes in or not.\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 12 Aug 2021 22:34:57 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 05:13:31PM -0400, John Naylor wrote:\n> I'm a bit concerned about the build dependencies not working right, but\n> it's not clear it's even due to the patch. I'll spend some time\n> investigating next week.\n\nHow large do libpgcommon deliverables get with this patch? Skimming\nthrough the patch, that just looks like a couple of bytes, still.\n--\nMichael",
"msg_date": "Mon, 16 Aug 2021 11:44:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Sun, Aug 15, 2021 at 10:45 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Thu, Aug 12, 2021 at 05:13:31PM -0400, John Naylor wrote:\n> > I'm a bit concerned about the build dependencies not working right, but\n> > it's not clear it's even due to the patch. I'll spend some time\n> > investigating next week.\n>\n> How large do libpgcommon deliverables get with this patch? Skimming\n> through the patch, that just looks like a couple of bytes, still.\n\nMore like a couple thousand bytes. That's because the width of mbinterval\ndoubled. If this is not desirable, we could teach the scripts to adjust the\nwidth of the interval type depending on the largest character they saw.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sun, Aug 15, 2021 at 10:45 PM Michael Paquier <michael@paquier.xyz> wrote:>> On Thu, Aug 12, 2021 at 05:13:31PM -0400, John Naylor wrote:> > I'm a bit concerned about the build dependencies not working right, but> > it's not clear it's even due to the patch. I'll spend some time> > investigating next week.>> How large do libpgcommon deliverables get with this patch? Skimming> through the patch, that just looks like a couple of bytes, still.More like a couple thousand bytes. That's because the width of mbinterval doubled. If this is not desirable, we could teach the scripts to adjust the width of the interval type depending on the largest character they saw.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Aug 2021 11:24:33 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Mon, 2021-08-16 at 11:24 -0400, John Naylor wrote:\r\n> \r\n> On Sun, Aug 15, 2021 at 10:45 PM Michael Paquier <michael@paquier.xyz> wrote:\r\n> \r\n> > How large do libpgcommon deliverables get with this patch? Skimming\r\n> > through the patch, that just looks like a couple of bytes, still.\r\n> \r\n> More like a couple thousand bytes. That's because the width\r\n> of mbinterval doubled. If this is not desirable, we could teach the\r\n> scripts to adjust the width of the interval type depending on the\r\n> largest character they saw.\r\n\r\nTrue. Note that the combining character table currently excludes\r\ncodepoints outside of the BMP, so if someone wants combinations in\r\nhigher planes to be handled correctly in the future, the mbinterval for\r\nthat table may have to be widened anyway.\r\n\r\n--Jacob\r\n",
"msg_date": "Mon, 16 Aug 2021 17:04:32 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 1:04 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Mon, 2021-08-16 at 11:24 -0400, John Naylor wrote:\n> >\n> > On Sun, Aug 15, 2021 at 10:45 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n> >\n> > > How large do libpgcommon deliverables get with this patch? Skimming\n> > > through the patch, that just looks like a couple of bytes, still.\n> >\n> > More like a couple thousand bytes. That's because the width\n> > of mbinterval doubled. If this is not desirable, we could teach the\n> > scripts to adjust the width of the interval type depending on the\n> > largest character they saw.\n>\n> True. Note that the combining character table currently excludes\n> codepoints outside of the BMP, so if someone wants combinations in\n> higher planes to be handled correctly in the future, the mbinterval for\n> that table may have to be widened anyway.\n\nHmm, somehow it escaped my attention that the combining character table\nscript explicitly excludes those. There's no comment about it. Maybe best\nto ask Peter E. (CC'd)\n\nPeter, does the combining char table exclude values > 0xFFFF for size\nreasons, correctness, or some other consideration?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 16, 2021 at 1:04 PM Jacob Champion <pchampion@vmware.com> wrote:>> On Mon, 2021-08-16 at 11:24 -0400, John Naylor wrote:> >> > On Sun, Aug 15, 2021 at 10:45 PM Michael Paquier <michael@paquier.xyz> wrote:> >> > > How large do libpgcommon deliverables get with this patch? Skimming> > > through the patch, that just looks like a couple of bytes, still.> >> > More like a couple thousand bytes. That's because the width> > of mbinterval doubled. If this is not desirable, we could teach the> > scripts to adjust the width of the interval type depending on the> > largest character they saw.>> True. 
Note that the combining character table currently excludes> codepoints outside of the BMP, so if someone wants combinations in> higher planes to be handled correctly in the future, the mbinterval for> that table may have to be widened anyway.Hmm, somehow it escaped my attention that the combining character table script explicitly excludes those. There's no comment about it. Maybe best to ask Peter E. (CC'd)Peter, does the combining char table exclude values > 0xFFFF for size reasons, correctness, or some other consideration?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Aug 2021 16:06:10 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "I wrote:\n\n> On Sun, Aug 15, 2021 at 10:45 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n> >\n> > On Thu, Aug 12, 2021 at 05:13:31PM -0400, John Naylor wrote:\n> > > I'm a bit concerned about the build dependencies not working right,\nbut\n> > > it's not clear it's even due to the patch. I'll spend some time\n> > > investigating next week.\n> >\n> > How large do libpgcommon deliverables get with this patch? Skimming\n> > through the patch, that just looks like a couple of bytes, still.\n>\n> More like a couple thousand bytes. That's because the width of mbinterval\ndoubled. If this is not desirable, we could teach the scripts to adjust the\nwidth of the interval type depending on the largest character they saw.\n\nFor src/common/libpgcommon.a, in a non-assert / non-debug build:\nmaster: 254912\npatch: 256432\n\nAnd if I go further and remove the limit on the largest character in the\ncombining table, I get 257248, which is still a relatively small difference.\n\nI had a couple further thoughts:\n\n1. The coding historically used normal comparison and branching for\neverything, but recently it only does that to detect control characters,\nand then goes through a binary search (and with this patch, two binary\nsearches) for everything else. Although the performance regression of the\ncurrent patch seems negligible, we could use almost the same branches to\nfast-path printable ascii text, like this:\n\n+ /* fast path for printable ASCII characters */\n+ if (ucs >= 0x20 || ucs < 0x7f)\n+ return 1;\n+\n /* test for 8-bit control characters */\n if (ucs == 0)\n return 0;\n\n- if (ucs < 0x20 || (ucs >= 0x7f && ucs < 0xa0) || ucs > 0x0010ffff)\n+ if (ucs < 0xa0 || ucs > 0x0010ffff)\n return -1;\n\n2. As written, the patch adds a script that's very close to an existing\none, and emits a new file that has the same type of contents as an existing\none, both of which are #included in one place. 
I wonder if we should\nconsider having just one script that ingests both files and emits one file.\nAll we need is for mbinterval to encode the character width, but we can\nprobably do that with a bitfield like the normprops table to save space.\nThen, we only do one binary search. Opinions?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 19 Aug 2021 13:49:27 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "\nOn 16.08.21 22:06, John Naylor wrote:\n> Peter, does the combining char table exclude values > 0xFFFF for size \n> reasons, correctness, or some other consideration?\n\nI don't remember a reason, other than perhaps making the generated table \nmatch the previous manual table in scope. IIRC, the previous table was \nancient, so perhaps from the days before higher Unicode values were \nuniversally supported in the code.\n\n\n",
"msg_date": "Thu, 19 Aug 2021 20:12:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Thu, 2021-08-19 at 13:49 -0400, John Naylor wrote:\r\n> I had a couple further thoughts:\r\n> \r\n> 1. The coding historically used normal comparison and branching for\r\n> everything, but recently it only does that to detect control\r\n> characters, and then goes through a binary search (and with this\r\n> patch, two binary searches) for everything else. Although the\r\n> performance regression of the current patch seems negligible,\r\n\r\nIf I'm reading the code correctly, ASCII characters don't go through\r\nthe binary searches; they're already short-circuited at the beginning\r\nof mbbisearch(). On my machine that's enough for the patch to be a\r\nperformance _improvement_ for ASCII, not a regression.\r\n\r\nDoes adding another short-circuit at the top improve the\r\nmicrobenchmarks noticeably? I assumed the compiler had pretty well\r\noptimized all that already.\r\n\r\n> we could use almost the same branches to fast-path printable ascii\r\n> text, like this:\r\n> \r\n> + /* fast path for printable ASCII characters */\r\n> + if (ucs >= 0x20 || ucs < 0x7f)\r\n> + return 1;\r\n\r\nShould be && instead of ||, I think.\r\n\r\n> +\r\n> /* test for 8-bit control characters */\r\n> if (ucs == 0)\r\n> return 0;\r\n> \r\n> - if (ucs < 0x20 || (ucs >= 0x7f && ucs < 0xa0) || ucs > 0x0010ffff)\r\n> + if (ucs < 0xa0 || ucs > 0x0010ffff)\r\n> return -1;\r\n> \r\n> 2. As written, the patch adds a script that's very close to an\r\n> existing one, and emits a new file that has the same type of contents\r\n> as an existing one, both of which are #included in one place. I\r\n> wonder if we should consider having just one script that ingests both\r\n> files and emits one file. All we need is for mbinterval to encode the\r\n> character width, but we can probably do that with a bitfield like the\r\n> normprops table to save space. 
Then, we only do one binary search.\r\n> Opinions?\r\n\r\nI guess it just depends on what the end result looks/performs like.\r\nWe'd save seven hops or so in the worst case?\r\n\r\n--Jacob\r\n",
"msg_date": "Fri, 20 Aug 2021 00:05:19 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 8:05 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Thu, 2021-08-19 at 13:49 -0400, John Naylor wrote:\n> > I had a couple further thoughts:\n> >\n> > 1. The coding historically used normal comparison and branching for\n> > everything, but recently it only does that to detect control\n> > characters, and then goes through a binary search (and with this\n> > patch, two binary searches) for everything else. Although the\n> > performance regression of the current patch seems negligible,\n>\n> If I'm reading the code correctly, ASCII characters don't go through\n> the binary searches; they're already short-circuited at the beginning\n> of mbbisearch(). On my machine that's enough for the patch to be a\n> performance _improvement_ for ASCII, not a regression.\n\nI had assumed that there would be a function call, but looking at the\nassembly, it's inlined, so you're right.\n\n> Should be && instead of ||, I think.\n\nYes, you're quite right. Clearly I didn't test it. :-) But given the\nprevious, I won't pursue this further.\n\n> > 2. As written, the patch adds a script that's very close to an\n> > existing one, and emits a new file that has the same type of contents\n> > as an existing one, both of which are #included in one place. I\n> > wonder if we should consider having just one script that ingests both\n> > files and emits one file. All we need is for mbinterval to encode the\n> > character width, but we can probably do that with a bitfield like the\n> > normprops table to save space. Then, we only do one binary search.\n> > Opinions?\n>\n> I guess it just depends on what the end result looks/performs like.\n> We'd save seven hops or so in the worst case?\n\nSomething like that. 
Attached is what I had in mind (using real patches to\nsee what the CF bot thinks):\n\n0001 is a simple renaming\n0002 puts the char width inside the mbinterval so we can put arbitrary\nvalues there\n0003 is Jacob's patch adjusted to use the same binary search as for\ncombining characters\n0004 removes the ancient limit on combining characters, so the following\nworks now:\n\nSELECT U&'\\+0102E1\\+0102E0';\n+----------+\n| ?column? |\n+----------+\n| 𐋡𐋠 |\n+----------+\n(1 row)\n\nI think the adjustments to 0003 result in a cleaner and more extensible\ndesign, but a case could be made otherwise. The former combining table\nscript is a bit more complex than the sum of its former self and Jacob's\nproposed new script, but just slightly.\n\nAlso, I checked the behavior of this comment that I proposed to remove\nupthread:\n\n- * - Other format characters (general category code Cf in the Unicode\n- * database) and ZERO WIDTH SPACE (U+200B) have a column width of 0.\n\nWe don't handle the latter in our current setup:\n\nSELECT U&'foo\\200Bbar';\n+----------+\n| ?column? |\n+----------+\n| foobar |\n+----------+\n(1 row)\n\nNot sure if we should do anything about this. It was an explicit exception\nyears ago in our vendored manual table, but is not labeled as such in the\nofficial Unicode files.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 20 Aug 2021 13:05:49 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "I plan to commit my proposed v2 this week unless I hear reservations about\nmy adjustments, or bikeshedding. I'm thinking of squashing 0001 and 0002.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nI plan to commit my proposed v2 this week unless I hear reservations about my adjustments, or bikeshedding. I'm thinking of squashing 0001 and 0002.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 Aug 2021 12:05:28 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Fri, 2021-08-20 at 13:05 -0400, John Naylor wrote:\r\n> On Thu, Aug 19, 2021 at 8:05 PM Jacob Champion <pchampion@vmware.com> wrote:\r\n> > I guess it just depends on what the end result looks/performs like.\r\n> > We'd save seven hops or so in the worst case?\r\n> \r\n> Something like that. Attached is what I had in mind (using real\r\n> patches to see what the CF bot thinks):\r\n> \r\n> 0001 is a simple renaming\r\n> 0002 puts the char width inside the mbinterval so we can put arbitrary values there\r\n\r\n0002 introduces a mixed declarations/statements warning for\r\nucs_wcwidth(). Other than that, LGTM overall.\r\n\r\n> --- a/src/common/wchar.c\r\n> +++ b/src/common/wchar.c\r\n> @@ -583,9 +583,9 @@ pg_utf_mblen(const unsigned char *s)\r\n> \r\n> struct mbinterval\r\n> {\r\n> - unsigned short first;\r\n> - unsigned short last;\r\n> - signed short width;\r\n> + unsigned int first;\r\n> + unsigned int last:21;\r\n> + signed int width:4;\r\n> };\r\n\r\nOh, right -- my patch moved mbinterval from short to int, but should I\r\nhave used uint32 instead? It would only matter in theory for the\r\n`first` member now that the bitfields are there.\r\n\r\n> I think the adjustments to 0003 result in a cleaner and more\r\n> extensible design, but a case could be made otherwise. The former\r\n> combining table script is a bit more complex than the sum of its\r\n> former self and Jacob's proposed new script, but just slightly.\r\n\r\nThe microbenchmark says it's also more performant, so +1 to your\r\nversion.\r\n\r\nDoes there need to be any sanity check for overlapping ranges between\r\nthe combining and fullwidth sets? 
The Unicode data on a dev's machine\r\nwould have to be broken somehow for that to happen, but it could\r\npotentially go undetected for a while if it did.\r\n\r\n> Also, I checked the behavior of this comment that I proposed to remove upthread:\r\n> \r\n> - * - Other format characters (general category code Cf in the Unicode\r\n> - * database) and ZERO WIDTH SPACE (U+200B) have a column width of 0.\r\n> \r\n> We don't handle the latter in our current setup:\r\n> \r\n> SELECT U&'foo\\200Bbar';\r\n> +----------+\r\n> | ?column? |\r\n> +----------+\r\n> | foobar |\r\n> +----------+\r\n> (1 row)\r\n> \r\n> Not sure if we should do anything about this. It was an explicit\r\n> exception years ago in our vendored manual table, but is not labeled\r\n> as such in the official Unicode files.\r\n\r\nI'm wary of changing too many things at once, but it does seem like we\r\nshould be giving that codepoint a width of 0.\r\n\r\nOn Tue, 2021-08-24 at 12:05 -0400, John Naylor wrote:\r\n> I plan to commit my proposed v2 this week unless I hear reservations\r\n> about my adjustments, or bikeshedding. I'm thinking of squashing 0001\r\n> and 0002.\r\n\r\n+1\r\n\r\nThanks!\r\n--Jacob\r\n",
"msg_date": "Tue, 24 Aug 2021 17:50:50 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Tue, Aug 24, 2021 at 1:50 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Fri, 2021-08-20 at 13:05 -0400, John Naylor wrote:\n> > On Thu, Aug 19, 2021 at 8:05 PM Jacob Champion <pchampion@vmware.com>\nwrote:\n> > > I guess it just depends on what the end result looks/performs like.\n> > > We'd save seven hops or so in the worst case?\n> >\n> > Something like that. Attached is what I had in mind (using real\n> > patches to see what the CF bot thinks):\n> >\n> > 0001 is a simple renaming\n> > 0002 puts the char width inside the mbinterval so we can put arbitrary\nvalues there\n>\n> 0002 introduces a mixed declarations/statements warning for\n> ucs_wcwidth(). Other than that, LGTM overall.\n\nI didn't see that warning with clang 12, either with or without assertions,\nbut I do see it on gcc 11. Fixed, and pushed 0001 and 0002. I decided\nagainst squashing them, since my original instinct was correct -- the\nheader changes too much for git to consider it the same file, which may\nmake archeology harder.\n\n> > --- a/src/common/wchar.c\n> > +++ b/src/common/wchar.c\n> > @@ -583,9 +583,9 @@ pg_utf_mblen(const unsigned char *s)\n> >\n> > struct mbinterval\n> > {\n> > - unsigned short first;\n> > - unsigned short last;\n> > - signed short width;\n> > + unsigned int first;\n> > + unsigned int last:21;\n> > + signed int width:4;\n> > };\n>\n> Oh, right -- my patch moved mbinterval from short to int, but should I\n> have used uint32 instead? It would only matter in theory for the\n> `first` member now that the bitfields are there.\n\nI'm not sure it would matter, but the usual type for codepoints is unsigned.\n\n> > I think the adjustments to 0003 result in a cleaner and more\n> > extensible design, but a case could be made otherwise. 
The former\n> > combining table script is a bit more complex than the sum of its\n> > former self and Jacob's proposed new script, but just slightly.\n>\n> The microbenchmark says it's also more performant, so +1 to your\n> version.\n>\n> Does there need to be any sanity check for overlapping ranges between\n> the combining and fullwidth sets? The Unicode data on a dev's machine\n> would have to be broken somehow for that to happen, but it could\n> potentially go undetected for a while if it did.\n\nThanks for testing again! The sanity check sounds like a good idea, so I'll\nwork on that and push soon.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 24, 2021 at 1:50 PM Jacob Champion <pchampion@vmware.com> wrote:>> On Fri, 2021-08-20 at 13:05 -0400, John Naylor wrote:> > On Thu, Aug 19, 2021 at 8:05 PM Jacob Champion <pchampion@vmware.com> wrote:> > > I guess it just depends on what the end result looks/performs like.> > > We'd save seven hops or so in the worst case?> >> > Something like that. Attached is what I had in mind (using real> > patches to see what the CF bot thinks):> >> > 0001 is a simple renaming> > 0002 puts the char width inside the mbinterval so we can put arbitrary values there>> 0002 introduces a mixed declarations/statements warning for> ucs_wcwidth(). Other than that, LGTM overall.I didn't see that warning with clang 12, either with or without assertions, but I do see it on gcc 11. Fixed, and pushed 0001 and 0002. 
I decided against squashing them, since my original instinct was correct -- the header changes too much for git to consider it the same file, which may make archeology harder.> > --- a/src/common/wchar.c> > +++ b/src/common/wchar.c> > @@ -583,9 +583,9 @@ pg_utf_mblen(const unsigned char *s)> >> > struct mbinterval> > {> > - unsigned short first;> > - unsigned short last;> > - signed short width;> > + unsigned int first;> > + unsigned int last:21;> > + signed int width:4;> > };>> Oh, right -- my patch moved mbinterval from short to int, but should I> have used uint32 instead? It would only matter in theory for the> `first` member now that the bitfields are there.I'm not sure it would matter, but the usual type for codepoints is unsigned.> > I think the adjustments to 0003 result in a cleaner and more> > extensible design, but a case could be made otherwise. The former> > combining table script is a bit more complex than the sum of its> > former self and Jacob's proposed new script, but just slightly.>> The microbenchmark says it's also more performant, so +1 to your> version.>> Does there need to be any sanity check for overlapping ranges between> the combining and fullwidth sets? The Unicode data on a dev's machine> would have to be broken somehow for that to happen, but it could> potentially go undetected for a while if it did.Thanks for testing again! The sanity check sounds like a good idea, so I'll work on that and push soon.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 Aug 2021 13:13:08 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Tue, Aug 24, 2021 at 1:50 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> Does there need to be any sanity check for overlapping ranges between\n> the combining and fullwidth sets? The Unicode data on a dev's machine\n> would have to be broken somehow for that to happen, but it could\n> potentially go undetected for a while if it did.\n\nIt turns out I should have done that to begin with. In the Unicode data, it\napparently happens that a character can be both combining and wide, and\nthat will cause ranges to overlap in my scheme:\n\n302A..302D;W # Mn [4] IDEOGRAPHIC LEVEL TONE MARK..IDEOGRAPHIC\nENTERING TONE MARK\n\n{0x3000, 0x303E, 2},\n{0x302A, 0x302D, 0},\n\n3099..309A;W # Mn [2] COMBINING KATAKANA-HIRAGANA VOICED SOUND\nMARK..COMBINING KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK\n\n{0x3099, 0x309A, 0},\n{0x3099, 0x30FF, 2},\n\nGoing by the above, Jacob's patch from July 21 just happened to be correct\nby chance since the combining character search happened first.\n\nIt seems the logical thing to do is revert my 0001 and 0002 and go back to\nsomething much closer to Jacob's patch, plus a big comment explaining that\nthe order in which the searches happen matters.\n\nThe EastAsianWidth.txt does have combining property \"Mn\" in the comment\nabove, so it's tempting to just read that (plus we could read just one file\nfor these properties). However, it seems risky to rely on comments, since\ntheir presence and format is probably less stable than the data format.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 Aug 2021 16:15:34 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "I wrote:\n> On Tue, Aug 24, 2021 at 1:50 PM Jacob Champion <pchampion@vmware.com>\nwrote:\n\n> It seems the logical thing to do is revert my 0001 and 0002 and go back\nto something much closer to Jacob's patch, plus a big comment explaining\nthat the order in which the searches happen matters.\n\nI pushed Jacob's patch with the addendum I shared upthread, plus a comment\nabout search order. I also took the liberty of changing the author in the\nCF app to Jacob. Later I'll push detecting non-spacing characters beyond\nthe BMP.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Aug 2021 11:12:58 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "On Wed, 2021-08-25 at 16:15 -0400, John Naylor wrote:\r\n> On Tue, Aug 24, 2021 at 1:50 PM Jacob Champion <pchampion@vmware.com> wrote:\r\n> >\r\n> > Does there need to be any sanity check for overlapping ranges between\r\n> > the combining and fullwidth sets? The Unicode data on a dev's machine\r\n> > would have to be broken somehow for that to happen, but it could\r\n> > potentially go undetected for a while if it did.\r\n> \r\n> It turns out I should have done that to begin with. In the Unicode\r\n> data, it apparently happens that a character can be both combining\r\n> and wide, and that will cause ranges to overlap in my scheme:\r\n\r\nI was looking for overlaps in my review, but I skipped right over that,\r\nsorry...\r\n\r\nOn Thu, 2021-08-26 at 11:12 -0400, John Naylor wrote:\r\n> I pushed Jacob's patch with the addendum I shared upthread, plus a\r\n> comment about search order. I also took the liberty of changing the\r\n> author in the CF app to Jacob. Later I'll push detecting non-spacing\r\n> characters beyond the BMP.\r\n\r\nThanks!\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 26 Aug 2021 15:25:22 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: badly calculated width of emoji in psql"
},
{
"msg_contents": "čt 26. 8. 2021 v 17:25 odesílatel Jacob Champion <pchampion@vmware.com>\nnapsal:\n\n> On Wed, 2021-08-25 at 16:15 -0400, John Naylor wrote:\n> > On Tue, Aug 24, 2021 at 1:50 PM Jacob Champion <pchampion@vmware.com>\n> wrote:\n> > >\n> > > Does there need to be any sanity check for overlapping ranges between\n> > > the combining and fullwidth sets? The Unicode data on a dev's machine\n> > > would have to be broken somehow for that to happen, but it could\n> > > potentially go undetected for a while if it did.\n> >\n> > It turns out I should have done that to begin with. In the Unicode\n> > data, it apparently happens that a character can be both combining\n> > and wide, and that will cause ranges to overlap in my scheme:\n>\n> I was looking for overlaps in my review, but I skipped right over that,\n> sorry...\n>\n> On Thu, 2021-08-26 at 11:12 -0400, John Naylor wrote:\n> > I pushed Jacob's patch with the addendum I shared upthread, plus a\n> > comment about search order. I also took the liberty of changing the\n> > author in the CF app to Jacob. Later I'll push detecting non-spacing\n> > characters beyond the BMP.\n>\n> Thanks!\n>\n\nGreat, thanks\n\nPavel\n\n\n> --Jacob\n>",
"msg_date": "Thu, 26 Aug 2021 17:47:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: badly calculated width of emoji in psql"
}
] |
[
{
"msg_contents": "Hi, all\n\n I want to know why call pgstat_reset_all function during recovery process, under what circumstances the data will be invalid after recovery?\n \n Thanks & Best Regard",
"msg_date": "Fri, 02 Apr 2021 17:41:33 +0800",
"msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "Why reset pgstat during recovery"
}
] |
[
{
"msg_contents": "While reviewing Pavel Borisov's patch to enable INCLUDE columns in\nSP-GiST, I found some things that seem like pre-existing bugs.\nThese only accidentally fail to cause any problems in the existing\nSP-GiST opclasses:\n\n1. The attType passed to an opclass's config method is documented as\n\n Oid attType; /* Data type to be indexed */\n\nNow, I would read that as meaning the type of the underlying heap\ncolumn; the documentation and code about when a \"compress\" method\nis required certainly seem to think so too. What is actually being\npassed, though, is the data type of the index column, that is the\n\"opckeytype\" of the index opclass. We've failed to notice this because\n(1) for most of the core SP-GiST opclasses, the two types are the same,\nand (2) none of the core opclasses bother to examine attType anyway.\n\n2. When performing an index-only scan on an SP-GiST index, what we\npass back as the tuple descriptor of the output tuples is just the\nindex relation's own tupdesc, cf spgbeginscan:\n\n /* Set up indexTupDesc and xs_hitupdesc in case it's an index-only scan */\n so->indexTupDesc = scan->xs_hitupdesc = RelationGetDescr(rel);\n\nAgain, what this is going to report is the opckeytype, not the\nreconstructed heap column datatype. That's just flat out wrong.\nWe've failed to notice because the only core opclass for which\nthose types are different is poly_ops, which does not support\nreconstructing the polygons for index-only scan.\n\nWe need to do something about this because the INCLUDE patch needs\nthe relation descriptor of an SP-GiST index to match the reality\nof what is stored in the leaf tuples. Right now, as far as I can tell,\nthere isn't really any necessary connection between the atttype\nclaimed by the relation descriptor and the leaf type that's physically\nstored. 
They're accidentally the same in all existing opclasses,\nbut only accidentally.\n\nI propose changing things so that\n\n(A) attType really is the input (heap) data type.\n\n(B) We enforce that leafType agrees with the opclass opckeytype,\nensuring the index tupdesc can be used for leaf tuples.\n\n(C) The tupdesc passed back for an index-only scan reports the\ninput (heap) data type.\n\nThis might be too much change for the back branches. Given the\nlack of complaints to date, I think we can just fix it in HEAD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Apr 2021 12:37:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "SP-GiST confusion: indexed column's type vs. index column type"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 9:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I propose changing things so that\n>\n> (A) attType really is the input (heap) data type.\n>\n> (B) We enforce that leafType agrees with the opclass opckeytype,\n> ensuring the index tupdesc can be used for leaf tuples.\n>\n> (C) The tupdesc passed back for an index-only scan reports the\n> input (heap) data type.\n>\n> This might be too much change for the back branches. Given the\n> lack of complaints to date, I think we can just fix it in HEAD.\n\n+1 to fixing it on HEAD only.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Apr 2021 09:46:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: SP-GiST confusion: indexed column's type vs. index column type"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Apr 2, 2021 at 9:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I propose changing things so that\n>> (A) attType really is the input (heap) data type.\n>> (B) We enforce that leafType agrees with the opclass opckeytype,\n>> ensuring the index tupdesc can be used for leaf tuples.\n>> (C) The tupdesc passed back for an index-only scan reports the\n>> input (heap) data type.\n>> \n>> This might be too much change for the back branches. Given the\n>> lack of complaints to date, I think we can just fix it in HEAD.\n\n> +1 to fixing it on HEAD only.\n\nHere's a draft patch for that, in case anyone wants to look it\nover.\n\nThe confusion went even deeper than I thought, as some of the code\nmistakenly thought that reconstructed \"leafValue\" values were of the\nleaf data type rather than the input attribute type. (Which is not\ntoo surprising, given that that's such a misleading name, but the\ndocs are clear and correct on the point.)\n\nAlso, both the code and docs thought that the \"reconstructedValue\"\ndatums that are passed down the tree during a search should be of\nthe leaf data type. This seems to me to be arrant nonsense.\nAs an example, if you made an opclass that indexes 1-D arrays\nby labeling each inner node with successive array elements,\nright down to the leaf which is the last array element, it will\nabsolutely not work for the reconstructedValues to be of the\nleaf type --- they have to be of the array type. 
(As I said\nin commit 1ebdec8c0, this'd be a fairly poorly-chosen opclass\ndesign, but it seems like it ought to physically work.)\n\nGiven the amount of confusion here, I don't have a lot of confidence\nthat an opclass that wants to reconstruct values while having\nleafType different from input type will work even with this patch.\nI'm strongly tempted to make a src/test/modules module that\nimplements exactly the silly design given above, just so we have\nsome coverage for this scenario.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 02 Apr 2021 19:24:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SP-GiST confusion: indexed column's type vs. index column type"
},
{
"msg_contents": "I wrote:\n> Also, both the code and docs thought that the \"reconstructedValue\"\n> datums that are passed down the tree during a search should be of\n> the leaf data type. This seems to me to be arrant nonsense.\n> As an example, if you made an opclass that indexes 1-D arrays\n> by labeling each inner node with successive array elements,\n> right down to the leaf which is the last array element, it will\n> absolutely not work for the reconstructedValues to be of the\n> leaf type --- they have to be of the array type. (As I said\n> in commit 1ebdec8c0, this'd be a fairly poorly-chosen opclass\n> design, but it seems like it ought to physically work.)\n\nSo after trying to build an opclass that does that, I have a clearer\nunderstanding of why opclasses that'd break the existing code are\nso thin on the ground. You can't do the above, because the opclass\ncannot force the AM to add inner nodes that it doesn't want to.\nFor example, the first few index entries will simply be dumped into\nthe root page as undifferentiated leaf tuples. This means that,\nif you'd like to be able to return reconstructed index entries, the\nleaf data type *must* be able to hold all the data that is in an\ninput value. In principle you could represent it in some other\nformat, but the path of least resistance is definitely to make the\nleaf type the same as the input.\n\nI still want to make an opclass in which those types are different,\nif only for testing purposes, but I'm having a hard time coming up\nwith a plan that's not totally lame. Best idea I can think of is\nto wrap the input in a bytea, which just begs the question \"why\nwould you do that?\". Anybody have a less lame thought?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Apr 2021 22:05:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SP-GiST confusion: indexed column's type vs. index column type"
},
{
"msg_contents": "I wrote:\n> I still want to make an opclass in which those types are different,\n> if only for testing purposes, but I'm having a hard time coming up\n> with a plan that's not totally lame. Best idea I can think of is\n> to wrap the input in a bytea, which just begs the question \"why\n> would you do that?\". Anybody have a less lame thought?\n\nI thought of a plan that's at least simple to code: make an opclass\nthat takes \"name\" but does all the internal storage as \"text\". Then\nall the code can be stolen from spgtextproc.c with very minor changes.\nI'd been too fixated on finding an example in which attType and\nleafType differ as to pass-by-ref vs pass-by-value, but actually a\ntest case with positive typlen vs. varlena typlen will do just as well\nfor finding wrong-type references.\n\nAnd, having coded that up, my first test result is\n\nregression=# create extension spgist_name_ops ;\nERROR: storage type cannot be different from data type for access method \"spgist\"\n\nevidently because SPGiST doesn't set amroutine->amstorage.\n\nThat's silly on its face because we have built-in opclasses in which\nthose types are different, but it probably helps explain why there are\nno field reports of trouble with these bugs ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Apr 2021 13:16:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SP-GiST confusion: indexed column's type vs. index column type"
},
{
"msg_contents": "Here's a patch that, in addition to what I mentioned upthread,\nrescinds the limitation that user-defined SPGIST opclasses can't\nset the STORAGE parameter, and cleans up some residual confusion\nabout whether values are of the indexed type (attType) or the\nstorage type (leafType). Once I'd wrapped my head around the idea\nthat indeed intermediate-level \"reconstructed\" values ought to be\nof the leafType, there were fewer bugs of that sort than I thought\nyesterday ... but still a nonzero number.\n\nI've also attached a test module that exercises reconstruction\nduring index-only scan with leafType being meaningfully different\nfrom attType. I'm not quite sure whether this is worth\ncommitting, but I'm leaning towards doing so.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 03 Apr 2021 16:06:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SP-GiST confusion: indexed column's type vs. index column type"
},
{
"msg_contents": "I wrote:\n> I propose changing things so that\n> (B) We enforce that leafType agrees with the opclass opckeytype,\n> ensuring the index tupdesc can be used for leaf tuples.\n\nAfter looking at PostGIS I realized that a hard restriction of this\nsort won't fly, because it'd make upgrades impossible for them.\nThey have some lossy SPGiST opclasses, in which leafType is returned\nas different from the original input datatype. Since up to now\nwe've disallowed the STORAGE clause for user-defined SPGiST\nopclasses, they are unable to declare these opclasses honestly in\nexisting releases, but it didn't matter. If we enforce that STORAGE\nmatches leafType then their existing opclass definitions won't work\nin v14, but they can't fix them before upgrading either.\n\nSo I backed off the complaint about that to be just an amvalidate\nwarning, and pushed it.\n\nThis means the INCLUDE patch will still have to account for the\npossibility that the index tupdesc is an inaccurate representation\nof the actual leaf tuple contents, but we can treat that case less\nefficiently without feeling bad about it. So we should be able to\ndo something similar for the leaf tupdesc as for the index-only-scan\noutput tupdesc, that is use the relcache's tupdesc if it's got the\nright first column type, otherwise copy-and-modify that tupdesc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Apr 2021 14:40:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SP-GiST confusion: indexed column's type vs. index column type"
}
] |
[
{
"msg_contents": "Dear Sirs,\n\nMy name is Magzum Assanbayev, I am a Master Student at KIMEP University in\nKazakhstan, expected to graduate in Spring 2022.\n\nHaving made some research into your organization I have deduced that my\ncurrent skill set might be suitable to your needs.\n\nOut of what I can offer, I have been practicing data analytics for 2.5\nyears at PwC Competency Centre in Audit and Assurance department dealing\nwith audit automation with an Alteryx data analytics software. The software\nallows seamless big data manipulation and output, and has its own community\nsharing the ideas among others, please see link:\nhttps://community.alteryx.com/?category.id=external\n\nAs a track record, I can state that the workflows developed under my\nsupervision have cut significant hours of repetitive work for audit teams,\nranging from 5-20% per audit engagement with positive user feedback. The\nrange of work done varies from automating mathematical accuracy check of\nconsolidation reports to disclosure recompilation as well as journal entry\nanalysis based on a predefined audit criteria. The workflows developed used\nAlteryx macros and RegEx to solve the problems in case.\n\nThe software is popular among largest corporate brands in the world which\nproves its value for cost and actuality in a competitive market of data\nanalytics software. The evident users of the software are Big 4 audit\ncompanies, Google itself, Coca-Cola, Deutsche Bank, etc.\n\nIn addition, I have an entry-level acquaintance with Python and Excel VBA,\nhaving completed 'Crash Course on Python' and 'Excel/VBA for Creative\nProblem Solving, Part 1' courses on Coursera (see certificates attached).\n\nPlease note that both courses were completed during the busy season under\nheavy audit workload being full-time employed at PwC. My eagerness and\nability to learn is backed up by my Bachelor GPA of 4.31/4.33 at KIMEP,\nwhere I studied Finance and Accounting. 
I was also awarded scholarships and\nstipends for academic achievement for several years.\n\nIf this letter has caught your eye and made you interested, I am happy to\nbrainstorm any potential projects that we can do together during the\nupcoming summer!\n\nPlease let me know by replying to this email.\n\nThank you!",
"msg_date": "Fri, 2 Apr 2021 23:14:47 +0600",
"msg_from": "Magzum Assanbayev <magzum.assanbayev@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC 2021 - Student looking for a mentor - Magzum Assanbayev"
},
{
"msg_contents": "On 03/04/2021 06:14, Magzum Assanbayev wrote:\n> Dear Sirs,\n\nNote that there are some females that hack pg!\n\n\n>\n> My name is Magzum Assanbayev, I am a Master Student at KIMEP \n> University in Kazakhstan, expected to graduate in Spring 2022.\n>\n> Having made some research into your organization I have deduced that \n> my current skill set might be suitable to your needs.\n[...]\n> Please let me know by replying to this email.\n>\n> Thank you!\n\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 07:46:19 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2021 - Student looking for a mentor - Magzum Assanbayev"
},
{
"msg_contents": "Hello Magzum,\n\nThank you for the email! I am really glad you are interested in working with Postgres. \n\nHave you tried looking at the project ideas? You can find them here: https://wiki.postgresql.org/wiki/GSoC_2021\n\nIf you have any preference, you are more than welcome to ask questions and clarifications in this mailing list. On the other side, if you have any specific idea about a project you could do which isn’t listed there, we can discuss it. However, it is a bit late to come up with projects now, since the deadline for applications is approaching, so I would recommend you to try one of the proposed ones. \n\nIlaria\n\n> Am 06.04.2021 um 19:59 schrieb Magzum Assanbayev <magzum.assanbayev@gmail.com>:\n> \n> \n> Dear Sirs,\n> \n> My name is Magzum Assanbayev, I am a Master Student at KIMEP University in Kazakhstan, expected to graduate in Spring 2022.\n> \n> Having made some research into your organization I have deduced that my current skill set might be suitable to your needs.\n> \n> Out of what I can offer, I have been practicing data analytics for 2.5 years at PwC Competency Centre in Audit and Assurance department dealing with audit automation with an Alteryx data analytics software. The software allows seamless big data manipulation and output, and has its own community sharing the ideas among others, please see link: https://community.alteryx.com/?category.id=external\n> \n> As a track record, I can state that the workflows developed under my supervision have cut significant hours of repetitive work for audit teams, ranging from 5-20% per audit engagement with positive user feedback. The range of work done varies from automating mathematical accuracy check of consolidation reports to disclosure recompilation as well as journal entry analysis based on a predefined audit criteria. 
The workflows developed used Alteryx macros and RegEx to solve the problems in case.\n> \n> The software is popular among largest corporate brands in the world which proves its value for cost and actuality in a competitive market of data analytics software. The evident users of the software are Big 4 audit companies, Google itself, Coca-Cola, Deutsche Bank, etc.\n> \n> In addition, I have an entry-level acquaintance with Python and Excel VBA, having completed 'Crash Course on Python' and 'Excel/VBA for Creative Problem Solving, Part 1' courses on Coursera (see certificates attached). \n> \n> Please note that both courses were completed during the busy season under heavy audit workload being full-time employed at PwC. My eagerness and ability to learn is backed up by my Bachelor GPA of 4.31/4.33 at KIMEP, where I studied Finance and Accounting. I was also awarded scholarships and stipends for academic achievement for several years.\n> \n> If this letter has caught your eye and made you interested, I am happy to brainstorm any potential projects that we can do together during the upcoming summer!\n> \n> Please let me know by replying to this email.\n> \n> Thank you!\n> <Magzum Assanbayev - Crash Course on Python certificate.pdf>\n> <Magzum Assanbayev - ExcelVBA for Creative Problem Solving, Part 1 certificate.pdf>",
"msg_date": "Tue, 6 Apr 2021 20:07:48 +0200",
"msg_from": "Ilaria <ilaria.battiston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2021 - Student looking for a mentor - Magzum Assanbayev"
}
] |
[
{
"msg_contents": "Hi,\n\nThis grew out of my patch to split the waits event code out of\npgstat.[ch], which in turn grew out of the shared memory stats patch\nseries.\n\n\npgstat_report_wait_start() and pgstat_report_wait_end() currently check\npgstat_track_activities before assigning to MyProc->wait_event_info.\nGiven the small cost of the assignment, and that pgstat_track_activities\nis almost always used, I'm doubtful that that's the right tradeoff.\n\nNormally I would say that branch prediction will take care of this cost\n- but because pgstat_report_wait_{start,end} are inlined, that has to\nhappen in each of the calling locations.\n\nThe code works out to be something like the following (this is from\nbasebackup_read_file, the simplest caller I could quickly find, I\nremoved interspersed code from it):\n\n267\t\tif (!pgstat_track_activities || !proc)\n 0x0000000000430e4d <+13>:\tcmpb $0x1,0x4882e1(%rip) # 0x8b9135 <pgstat_track_activities>\n\n265\t\tvolatile PGPROC *proc = MyProc;\n 0x0000000000430e54 <+20>:\tmov 0x48c52d(%rip),%rax # 0x8bd388 <MyProc>\n\n266\n267\t\tif (!pgstat_track_activities || !proc)\n 0x0000000000430e5b <+27>:\tjne 0x430e6c <basebackup_read_file+44>\n 0x0000000000430e5d <+29>:\ttest %rax,%rax\n 0x0000000000430e60 <+32>:\tje 0x430e6c <basebackup_read_file+44>\n\n268\t\t\treturn;\n269\n270\t\t/*\n271\t\t * Since this is a four-byte field which is always read and written as\n272\t\t * four-bytes, updates are atomic.\n273\t\t */\n274\t\tproc->wait_event_info = wait_event_info;\n 0x0000000000430e62 <+34>:\tmovl $0xa000000,0x2c8(%rax)\n\n/home/andres/src/postgresql/src/backend/replication/basebackup.c:\n2014\t\trc = pg_pread(fd, buf, nbytes, offset);\n 0x0000000000430e6c <+44>:\tcall 0xc4790 <pread@plt>\n\nstripping the source:\n 0x0000000000430e4d <+13>:\tcmpb $0x1,0x4882e1(%rip) # 0x8b9135 <pgstat_track_activities>\n 0x0000000000430e54 <+20>:\tmov 0x48c52d(%rip),%rax # 0x8bd388 <MyProc>\n 0x0000000000430e5b <+27>:\tjne 0x430e6c 
<basebackup_read_file+44>\n 0x0000000000430e5d <+29>:\ttest %rax,%rax\n 0x0000000000430e60 <+32>:\tje 0x430e6c <basebackup_read_file+44>\n 0x0000000000430e62 <+34>:\tmovl $0xa000000,0x2c8(%rax)\n 0x0000000000430e6c <+44>:\tcall 0xc4790 <pread@plt>\n\n\njust removing the pgstat_track_activities check turns that into\n\n 0x0000000000430d8d <+13>:\tmov 0x48c5f4(%rip),%rax # 0x8bd388 <MyProc>\n 0x0000000000430d94 <+20>:\ttest %rax,%rax\n 0x0000000000430d97 <+23>:\tje 0x430da3 <basebackup_read_file+35>\n 0x0000000000430d99 <+25>:\tmovl $0xa000000,0x2c8(%rax)\n 0x0000000000430da3 <+35>:\tcall 0xc4790 <pread@plt>\n\nwhich does seem (a bit) nicer.\n\nHowever, we can improve this further, entirely eliminating branches, by\nintroducing something like \"my_wait_event_info\" that initially just\npoints to a local variable and is switched to shared once MyProc is\nassigned.\n\nObviously incorrect, for comparison: Just removing the MyProc != NULL\ncheck yields:\n 0x0000000000430bcd <+13>:\tmov 0x48c7b4(%rip),%rax # 0x8bd388 <MyProc>\n 0x0000000000430bd4 <+20>:\tmovl $0xa000000,0x2c8(%rax)\n 0x0000000000430bde <+30>:\tcall 0xc47d0 <pread@plt>\n\nusing a uint32 *my_wait_event_info yields:\n 0x0000000000430b4d <+13>:\tmov 0x47615c(%rip),%rax # 0x8a6cb0 <my_wait_event_info>\n 0x0000000000430b54 <+20>:\tmovl $0xa000000,(%rax)\n 0x0000000000430b5a <+26>:\tcall 0xc47d0 <pread@plt>\n\nNote how the lack of offset addressing in the my_wait_event_info version\nmakes the instruction smaller (call is at 26 instead of 30).\n\n\nNow, perhaps all of this isn't worth optimizing, most of the things done\nwithin pgstat_report_wait_start()/end() are expensive-ish. And forward\nbranches are statically predicted to be not taken on several\nplatforms. 
I have seen these instructions show up in profiles in\nworkloads with contended lwlocks at least...\n\nThere's also a small win in code size:\n text\t data\t bss\t dec\t hex\tfilename\n8932095\t 192160\t 204656\t9328911\t 8e590f\tsrc/backend/postgres\n8928544\t 192160\t 204656\t9325360\t 8e4b30\tsrc/backend/postgres_my_wait_event_info\n\n\nIf we went for the my_wait_event_info approach there is one further\nadvantage, after my change to move the wait event code into a separate\nfile: wait_event.h does not need to include proc.h anymore, which seems\narchitecturally nice for things like fd.c.\n\n\nAttached is a patch series doing this.\n\n\nI'm inclined to separately change the comment format for\nwait_event.[ch], there's no reason to stick with the current style:\n\n/* ----------\n * pgstat_report_wait_start() -\n *\n *\tCalled from places where server process needs to wait. This is called\n *\tto report wait event information. The wait information is stored\n *\tas 4-bytes where first byte represents the wait event class (type of\n *\twait, for different types of wait, refer WaitClass) and the next\n *\t3-bytes represent the actual wait event. Currently 2-bytes are used\n *\tfor wait event which is sufficient for current usage, 1-byte is\n *\treserved for future usage.\n *\n * NB: this *must* be able to survive being called before MyProc has been\n * initialized.\n * ----------\n */\n\nI.e. I'd like to remove the ----- framing, the repetition of the\nfunction name, and the varying indentation in the comment.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 2 Apr 2021 12:44:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Making wait events a bit more efficient"
},
{
"msg_contents": "Hi,\n\n+extern PGDLLIMPORT uint32 *my_wait_event_info;\n\nIt seems volatile should be added to the above declaration. Since later:\n\n+ *(volatile uint32 *) my_wait_event_info = wait_event_info;\n\nCheers\n\nOn Fri, Apr 2, 2021 at 12:45 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> This grew out of my patch to split the waits event code out of\n> pgstat.[ch], which in turn grew out of the shared memory stats patch\n> series.\n>\n>\n> pgstat_report_wait_start() and pgstat_report_wait_end() currently check\n> pgstat_track_activities before assigning to MyProc->wait_event_info.\n> Given the small cost of the assignment, and that pgstat_track_activities\n> is almost always used, I'm doubtful that that's the right tradeoff.\n>\n> Normally I would say that branch prediction will take care of this cost\n> - but because pgstat_report_wait_{start,end} are inlined, that has to\n> happen in each of the calling locations.\n>\n> The code works out to be something like the following (this is from\n> basebackup_read_file, the simplest caller I could quickly find, I\n> removed interspersed code from it):\n>\n> 267 if (!pgstat_track_activities || !proc)\n> 0x0000000000430e4d <+13>: cmpb $0x1,0x4882e1(%rip) #\n> 0x8b9135 <pgstat_track_activities>\n>\n> 265 volatile PGPROC *proc = MyProc;\n> 0x0000000000430e54 <+20>: mov 0x48c52d(%rip),%rax #\n> 0x8bd388 <MyProc>\n>\n> 266\n> 267 if (!pgstat_track_activities || !proc)\n> 0x0000000000430e5b <+27>: jne 0x430e6c <basebackup_read_file+44>\n> 0x0000000000430e5d <+29>: test %rax,%rax\n> 0x0000000000430e60 <+32>: je 0x430e6c <basebackup_read_file+44>\n>\n> 268 return;\n> 269\n> 270 /*\n> 271 * Since this is a four-byte field which is always read\n> and written as\n> 272 * four-bytes, updates are atomic.\n> 273 */\n> 274 proc->wait_event_info = wait_event_info;\n> 0x0000000000430e62 <+34>: movl $0xa000000,0x2c8(%rax)\n>\n> /home/andres/src/postgresql/src/backend/replication/basebackup.c:\n> 2014 rc = 
pg_pread(fd, buf, nbytes, offset);\n> 0x0000000000430e6c <+44>: call 0xc4790 <pread@plt>\n>\n> stripping the source:\n> 0x0000000000430e4d <+13>: cmpb $0x1,0x4882e1(%rip) #\n> 0x8b9135 <pgstat_track_activities>\n> 0x0000000000430e54 <+20>: mov 0x48c52d(%rip),%rax #\n> 0x8bd388 <MyProc>\n> 0x0000000000430e5b <+27>: jne 0x430e6c <basebackup_read_file+44>\n> 0x0000000000430e5d <+29>: test %rax,%rax\n> 0x0000000000430e60 <+32>: je 0x430e6c <basebackup_read_file+44>\n> 0x0000000000430e62 <+34>: movl $0xa000000,0x2c8(%rax)\n> 0x0000000000430e6c <+44>: call 0xc4790 <pread@plt>\n>\n>\n> just removing the pgstat_track_activities check turns that into\n>\n> 0x0000000000430d8d <+13>: mov 0x48c5f4(%rip),%rax #\n> 0x8bd388 <MyProc>\n> 0x0000000000430d94 <+20>: test %rax,%rax\n> 0x0000000000430d97 <+23>: je 0x430da3 <basebackup_read_file+35>\n> 0x0000000000430d99 <+25>: movl $0xa000000,0x2c8(%rax)\n> 0x0000000000430da3 <+35>: call 0xc4790 <pread@plt>\n>\n> which does seem (a bit) nicer.\n>\n> However, we can improve this further, entirely eliminating branches, by\n> introducing something like \"my_wait_event_info\" that initially just\n> points to a local variable and is switched to shared once MyProc is\n> assigned.\n>\n> Obviously incorrect, for comparison: Just removing the MyProc != NULL\n> check yields:\n> 0x0000000000430bcd <+13>: mov 0x48c7b4(%rip),%rax #\n> 0x8bd388 <MyProc>\n> 0x0000000000430bd4 <+20>: movl $0xa000000,0x2c8(%rax)\n> 0x0000000000430bde <+30>: call 0xc47d0 <pread@plt>\n>\n> using a uint32 *my_wait_event_info yields:\n> 0x0000000000430b4d <+13>: mov 0x47615c(%rip),%rax #\n> 0x8a6cb0 <my_wait_event_info>\n> 0x0000000000430b54 <+20>: movl $0xa000000,(%rax)\n> 0x0000000000430b5a <+26>: call 0xc47d0 <pread@plt>\n>\n> Note how the lack of offset addressing in the my_wait_event_info version\n> makes the instruction smaller (call is at 26 instead of 30).\n>\n>\n> Now, perhaps all of this isn't worth optimizing, most of the things done\n> within 
pgstat_report_wait_start()/end() are expensive-ish. And forward\n> branches are statically predicted to be not taken on several\n> platforms. I have seen these instructions show up in profiles in\n> workloads with contended lwlocks at least...\n>\n> There's also a small win in code size:\n> text data bss dec hex filename\n> 8932095 192160 204656 9328911 8e590f src/backend/postgres\n> 8928544 192160 204656 9325360 8e4b30\n> src/backend/postgres_my_wait_event_info\n>\n>\n> If we went for the my_wait_event_info approach there is one further\n> advantage, after my change to move the wait event code into a separate\n> file: wait_event.h does not need to include proc.h anymore, which seems\n> architecturally nice for things like fd.c.\n>\n>\n> Attached is a patch series doing this.\n>\n>\n> I'm inclined to separately change the comment format for\n> wait_event.[ch], there's no reason to stick with the current style:\n>\n> /* ----------\n> * pgstat_report_wait_start() -\n> *\n> * Called from places where server process needs to wait. This is\n> called\n> * to report wait event information. The wait information is stored\n> * as 4-bytes where first byte represents the wait event class (type\n> of\n> * wait, for different types of wait, refer WaitClass) and the next\n> * 3-bytes represent the actual wait event. Currently 2-bytes are\n> used\n> * for wait event which is sufficient for current usage, 1-byte is\n> * reserved for future usage.\n> *\n> * NB: this *must* be able to survive being called before MyProc has been\n> * initialized.\n> * ----------\n> */\n>\n> I.e. I'd like to remove the ----- framing, the repetition of the\n> function name, and the varying indentation in the comment.\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Fri, 2 Apr 2021 13:06:35 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Making wait events a bit more efficient"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-02 13:06:35 -0700, Zhihong Yu wrote:\n> +extern PGDLLIMPORT uint32 *my_wait_event_info;\n> \n> It seems volatile should be added to the above declaration. Since later:\n> \n> + *(volatile uint32 *) my_wait_event_info = wait_event_info;\n\nWhy? We really just want to make the store volatile, nothing else. I\nthink it's much better to annotate that we want individual stores to\nhappen regardless of compiler optimizations, rather than all\ninteractions with a variable.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Apr 2021 13:10:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Making wait events a bit more efficient"
},
{
"msg_contents": "Hi,\nMaybe I am not familiar with your patch.\n\nI don't see where my_wait_event_info is read (there is no getter method in\nthe patch).\n\nIn that case, it is fine omitting volatile in the declaration.\n\nCheers\n\nOn Fri, Apr 2, 2021 at 1:10 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-04-02 13:06:35 -0700, Zhihong Yu wrote:\n> > +extern PGDLLIMPORT uint32 *my_wait_event_info;\n> >\n> > It seems volatile should be added to the above declaration. Since later:\n> >\n> > + *(volatile uint32 *) my_wait_event_info = wait_event_info;\n>\n> Why? We really just want to make the store volatile, nothing else. I\n> think it's much better to annotate that we want individual stores to\n> happen regardless of compiler optimizations, rather than all\n> interactions with a variable.\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Fri, 2 Apr 2021 13:42:42 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Making wait events a bit more efficient"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-02 13:42:42 -0700, Zhihong Yu wrote:\n> I don't see where my_wait_event_info is read (there is no getter method in\n> the patch).\n\nThere are no reads via my_wait_event_info. Once connected to shared\nmemory, InitProcess() calls pgstat_set_wait_event_storage() to point it\nto &MyProc->wait_event_info.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Apr 2021 13:44:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Making wait events a bit more efficient"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-02 12:44:58 -0700, Andres Freund wrote:\n> If we went for the my_wait_event_info approach there is one further\n> advantage, after my change to move the wait event code into a separate\n> file: wait_event.h does not need to include proc.h anymore, which seems\n> architecturally nice for things like fd.c.\n\nThat part turns out to make one aspect of the shared memory stats patch\ncleaner, so I am planning to push this commit fairly soon, unless\nsomebody sees a reason not to do so?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Apr 2021 19:55:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Making wait events a bit more efficient"
},
{
"msg_contents": "On 2021-04-02 19:55:16 -0700, Andres Freund wrote:\n> On 2021-04-02 12:44:58 -0700, Andres Freund wrote:\n> > If we went for the my_wait_event_info approach there is one further\n> > advantage, after my change to move the wait event code into a separate\n> > file: wait_event.h does not need to include proc.h anymore, which seems\n> > architecturally nice for things like fd.c.\n> \n> That part turns out to make one aspect of the shared memory stats patch\n> cleaner, so I am planning to push this commit fairly soon, unless\n> somebody sees a reason not to do so?\n\nDone.\n\n\n",
"msg_date": "Sat, 3 Apr 2021 12:08:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Making wait events a bit more efficient"
}
] |
[
{
"msg_contents": "Dear fellow hackers,\n\nThis patch is one day late, my apologies for missing the deadline this year.\n\nPostgreSQL has since long been suffering from the lack of a proper UNIX style motd (message of the day).\n\nDBAs have no ways of conveying important information to users,\nhaving to rely on external protocols, such as HTTPS and \"websites\" to provide such information.\n\nBy adding a motd configuration parameter, the DBA can set this to a text string,\nwhich will be automatically presented to the user as a NOTICE when logging on to the server.\n\nWhile at it, fix escape_single_quotes_ascii() to properly escape newlines,\nso that such can be used in ALTER SYSTEM values.\nThis makes sense, since parsing \\n in config values works just fine.\n\nTo demonstrate the usefulness of this feature,\nI've setup an open public PostgreSQL server at \"pit.org\",\nto which anyone can connect without a password.\n\nYou need to know the username though,\nwhich will hopefully make problems for bots.\n\n$ psql -U brad -h pit.org motd\nNOTICE:\n ____ ______ ___\n / )/ /\n( / __ _ )\n (/ o) ( o) )\n _ (_ ) ) /\n /_/ )_/\n / //| |\\\n v | | v\n __/\n\nThis was accomplished by setting the \"motd\",\nwhich requires superuser privileges:\n\n$ psql motd\npsql (14devel)\nType \"help\" for help.\n\nmotd=# ALTER SYSTEM SET motd TO E'\\u001B[94m'\n'\\n ____ ______ ___ '\n'\\n / )/ \\/ \\ '\n'\\n ( / __ _\\ )'\n'\\n \\ (/ o) ( o) )'\n'\\n \\_ (_ ) \\ ) / '\n'\\n \\ /\\_/ \\)_/ '\n'\\n \\/ //| |\\\\ '\n'\\n v | | v '\n'\\n \\__/ '\n'\\u001b[0m';\nALTER SYSTEM\nmotd=# SELECT pg_reload_conf();\npg_reload_conf\n----------------\nt\n(1 row)\n\nmotd=# \\q\n\nAscii elephant in example by Michael Paquier [1], with ANSI colors added by me.\n\n[1] https://www.postgresql.org/message-id/CAB7nPqRaacwcaANOYY3Hxd3T0No5RdZXyqM5HB6fta%2BCoDLOEg%40mail.gmail.com\n\nHappy Easter!\n\n/Joel",
"msg_date": "Fri, 02 Apr 2021 22:46:16 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 10:46:16PM +0200, Joel Jacobson wrote:\n> Dear fellow hackers,\n> \n> This patch is one day late, my apologies for missing the deadline this year.\n\nUh, the deadline for the last commitfest was March 1, 2021, not April\n1.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 16:51:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On 04/02/21 16:46, Joel Jacobson wrote:\n\n> ____ ______ ___\n> / )/ /\n> ( / __ _ )\n> (/ o) ( o) )\n> _ (_ ) ) /\n> /_/ )_/\n> / //| |\\\n> v | | v\n> __/\n\n\nSlonik's backslashes are falling off. Eww.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 2 Apr 2021 16:59:22 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "Hi Joel\n\nOn Fri, Apr 2, 2021 at 11:47 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> PostgreSQL has since long been suffering from the lack of a proper UNIX\n> style motd (message of the day).\n>\n\nFirst of all, thanks for your work on this! I think this is an important\nfeature to have, but I would love to see a way to have a set of strings\nfrom which you choose a random one to display. That way you could brighten\nyour day with random positive messages.\n\n\n-marko",
"msg_date": "Sat, 3 Apr 2021 00:09:09 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 10:46:16PM +0200, Joel Jacobson wrote:\n> Ascii elephant in example by Michael Paquier [1], with ANSI colors added by me.\n> \n> [1] https://www.postgresql.org/message-id/CAB7nPqRaacwcaANOYY3Hxd3T0No5RdZXyqM5HB6fta%2BCoDLOEg%40mail.gmail.com\n\nThe credit here goes to Charles Clavadetscher, not me.\n--\nMichael",
"msg_date": "Sat, 3 Apr 2021 11:47:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 2, 2021, at 22:51, Bruce Momjian wrote:\n> On Fri, Apr 2, 2021 at 10:46:16PM +0200, Joel Jacobson wrote:\n> > Dear fellow hackers,\n> > \n> > This patch is one day late, my apologies for missing the deadline this year.\n> \n> Uh, the deadline for the last commitfest was March 1, 2021, not April\n> 1.\n\nOh, I see. I'll make sure to submit it to next year's 1 April commitfest.\n\n/Joel",
"msg_date": "Sat, 03 Apr 2021 07:13:34 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sat, Apr 3, 2021, at 04:47, Michael Paquier wrote:\n> On Fri, Apr 02, 2021 at 10:46:16PM +0200, Joel Jacobson wrote:\n> > Ascii elephant in example by Michael Paquier [1], with ANSI colors added by me.\n> > \n> > [1] https://www.postgresql.org/message-id/CAB7nPqRaacwcaANOYY3Hxd3T0No5RdZXyqM5HB6fta%2BCoDLOEg%40mail.gmail.com\n> \n> The credit here goes to Charles Clavadetscher, not me.\n> --\n> Michael\n\nRight! Sorry about that. The initial \">\" in front of the ascii art confused me, didn't understand it was part of the reply, since all text around it was yours.\n\nMany thanks Charles for the beautiful ascii art!\n\n/Joel",
"msg_date": "Sat, 03 Apr 2021 07:16:05 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 2, 2021, at 22:59, Chapman Flack wrote:\n> Slonik's backslashes are falling off. Eww.\n> \n> Regards,\n> -Chap\n\nThanks for the bug report.\n\nFixed by properly escaping backslashes:\n\nALTER SYSTEM SET motd TO E'\\u001B[94m'\n'\\n ____ ______ ___ '\n'\\n / )/ \\\\/ \\\\ '\n'\\n ( / __ _\\\\ )'\n'\\n \\\\ (/ o) ( o) )'\n'\\n \\\\_ (_ ) \\\\ ) / '\n'\\n \\\\ /\\\\_/ \\\\)_/ '\n'\\n \\\\/ //| |\\\\\\\\ '\n'\\n v | | v '\n'\\n \\\\__/ '\n'\\u001b[0m';\n\nI've deployed the fix to production:\n\n$ psql -U brad -h pit.org motd\n\nNOTICE:\n ____ ______ ___\n / )/ \\/ \\\n( / __ _\\ )\n \\ (/ o) ( o) )\n \\_ (_ ) \\ ) /\n \\ /\\_/ \\)_/\n \\/ //| |\\\\\n v | | v\n \\__/\npsql (14devel)\nSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)\nType \"help\" for help.\n\nmotd=> \\q\n\n/Joel",
"msg_date": "Sat, 03 Apr 2021 07:20:23 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 2, 2021, at 23:09, Marko Tiikkaja wrote:\n> Hi Joel\n> \n> On Fri, Apr 2, 2021 at 11:47 PM Joel Jacobson <joel@compiler.org> wrote:\n>> PostgreSQL has since long been suffering from the lack of a proper UNIX style motd (message of the day).\n> \n> First of all, thanks for your work on this! I think this is an important feature to have, but I would love to see a way to have a set of strings from which you choose a random one to display. That way you could brighten your day with random positive messages.\n> \n> \n> -marko\n\nFun idea! I implemented it as a Perl script using the fortune command.\n\nThere are quite a lot of elephant jokes in the fortune database actually.\n\n$ sudo apt install fortune-mod\n\n$ crontab -l\n0 0 * * * bash -c \"/usr/local/bin/fortune_slony.pl | psql\"\n\n$ psql\nNOTICE:\n ____ ______ ___\n / )/ \\/ \\\n( / __ _\\ )\n \\ (/ o) ( o) )\n \\_ (_ ) \\ ) /\n \\ /\\_/ \\)_/\n \\/ //| |\\\\\n v | | v\n \\__/\nQ: Know what the difference between your latest project\nand putting wings on an elephant is?\nA: Who knows? The elephant *might* fly, heh, heh...\n\n/Joel",
"msg_date": "Sat, 03 Apr 2021 08:50:54 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "Hi\n\nOn 2021-04-03 07:16, Joel Jacobson wrote:\n> On Sat, Apr 3, 2021, at 04:47, Michael Paquier wrote:\n> \n>> On Fri, Apr 02, 2021 at 10:46:16PM +0200, Joel Jacobson wrote:\n>> \n>>> Ascii elephant in example by Michael Paquier [1], with ANSI colors\n>> added by me.\n>> \n>>> \n>> \n>>> [1]\n>> \n> https://www.postgresql.org/message-id/CAB7nPqRaacwcaANOYY3Hxd3T0No5RdZXyqM5HB6fta%2BCoDLOEg%40mail.gmail.com\n>> \n>> The credit here goes to Charles Clavadetscher, not me.\n>> \n>> --\n>> \n>> Michael\n> \n> Right! Sorry about that. The initial \">\" in front of the ascii art\n> confused me, didn't understand it was part of the reply, since all\n> text around it was your.\n> \n> Many thanks Charles for the beautiful ascii art!\n> \n> /Joel\n\nYou are welcome. There were some discussions until it came to the final \nform (that you can see below).\nYou may also want to have a look at this:\n\nhttps://www.swisspug.org/wiki/index.php/Promotion\n\nIt includes a DB-Function that can be used to create the ASCII image \nwith texts.\n\nRegards\nCharles\n\n-- \nCharles Clavadetscher\nSwiss PostgreSQL Users Group\nTreasurer\nSpitzackerstrasse 9\nCH - 8057 Zürich\n\nhttp://www.swisspug.org\n\n+---------------------------+\n| ____ ______ ___ |\n| / )/ \\/ \\ |\n| ( / __ _\\ ) |\n| \\ (/ o) ( o) ) |\n| \\_ (_ ) \\ ) _/ |\n| \\ /\\_/ \\)/ |\n| \\/ <//| |\\\\> |\n| _| | |\n| \\|_/ |\n| |\n| Swiss PostgreSQL |\n| Users Group |\n| |\n+---------------------------+\n\n\n",
"msg_date": "Sat, 03 Apr 2021 09:08:13 +0200",
"msg_from": "Charles Clavadetscher <clavadetscher@swisspug.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "Hello Joel,\n\n> This patch is one day late, my apologies for missing the deadline this year.\n>\n> PostgreSQL has since long been suffering from the lack of a proper UNIX style motd (message of the day).\n\nMy 0.02€: apart from the Fool's day joke dimension, I'd admit that I would \nnot mind actually having such a fun feature in pg, possibly disabled by \ndefault.\n\n-- \nFabien.",
"msg_date": "Sat, 3 Apr 2021 10:14:21 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sat, Apr 3, 2021, at 10:14, Fabien COELHO wrote:\n> \n> Hello Joel,\n> \n> > This patch is one day late, my apologies for missing the deadline this year.\n> >\n> > PostgreSQL has since long been suffering from the lack of a proper UNIX style motd (message of the day).\n> \n> My 0.02€: apart from the Fool's day joke dimension, I'd admit that I would \n> not mind actually having such a fun feature in pg, possibly disabled by \n> default.\n\nFun to hear you find it useful.\nI'm actually using it myself in production for something, to display instructions to users when they login.\n\nWhen implementing this I stumbled upon newlines can't be used in ALTER SYSTEM parameter values.\n\nI see they were disallowed in commit 99f3b5613bd1f145b5dbbe86000337bbe37fb094\n\nHowever, reading escaped newlines seems to be working just fine.\nThe commit message from 2016 seems to imply otherwise:\n\n\"the configuration-file parser doesn't support embedded newlines in string literals\"\n\nThe first patch, 0001-quote-newlines.patch, fixes the part of escaping newlines\nbefore they are written to the configuration file.\n\nPerhaps the configuration-file parser has been fixed since to support embedded newlines?\nIf so, then maybe it would actually be an idea to support newlines by escaping them?\nEspecially since newlines are supported by set_config().\n\n/Joel",
"msg_date": "Sat, 03 Apr 2021 11:54:56 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On 04/03/21 01:20, Joel Jacobson wrote:\n> I've deployed the fix to production:\n> \n> $ psql -U brad -h pit.org motd\n> \n> NOTICE:\n> ____ ______ ___\n> / )/ \\/ \\\n> ( / __ _\\ )\n> \\ (/ o) ( o) )\n> \\_ (_ ) \\ ) /\n> \\ /\\_/ \\)_/\n> \\/ //| |\\\\\n> v | | v\n> \\__/\n\nNow there's some kind of Max Headroom thing going on with the second row,\nand this time I'm not sure how to explain it. (I knew the backslashes were\nbecause they weren't doubled.)\n\nI have done 'view source' in my mail client to make sure it's not just\nsome display artifact on my end. Something has eaten a space before that\nleft paren. What would do that?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sat, 3 Apr 2021 09:43:56 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "\n\n> Perhaps the configuration-file parser has been fixed since to support \n> embedded newlines? If so, then maybe it would actually be an idea to \n> support newlines by escaping them?\n\nDunno.\n\nIf such a feature gets considered, I'm not sure I'd like to actually edit \npg configuration file to change the message.\n\nThe actual source looks pretty straightforward. I'm wondering whether pg \nstyle would suggest to write motd != NULL instead of just motd.\n\nI'm wondering whether it should be possible to designate (1) a file the \ncontent of which would be shown, or (2) a command, the output of which \nwould be shown [ok, there might be security implications on this one].\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 3 Apr 2021 17:50:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sat, Apr 3, 2021, at 15:43, Chapman Flack wrote:\n> Now there's some kind of Max Headroom thing going on with the second row,\n> and this time I'm not sure how to explain it. (I knew the backslashes were\n> because they weren't doubled.)\n> \n> I have done 'view source' in my mail client to make sure it's not just\n> some display artifact on my end. Something has eaten a space before that\n> left paren. What would do that?\n\nThanks for noticing.\nI've updated the ascii art now using the version from swisspug.org,\ndoes it look correct now to you?\n\n$ psql -U brad -h pit.org motd\n\nNOTICE:\n ____ ______ ___\n/ )/ \\/ \\\n( / __ _\\ )\n\\ (/ o) ( o) )\n \\_ (_ ) \\ ) _/\n \\ /\\_/ \\)/\n \\/ <//| |\\\\>\n _| |\n \\|_/\nTo be or not to be.\n-- Shakespeare\nTo do is to be.\n-- Nietzsche\nTo be is to do.\n-- Sartre\nDo be do be do.\n-- Sinatra\n\n/Joel",
"msg_date": "Sat, 03 Apr 2021 20:24:10 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On 04/03/21 14:24, Joel Jacobson wrote:\n> Thanks for noticing.\n> I've updated the ascii art now using the version from swisspug.org,\n> does it look correct now to you?\n> \n> $ psql -U brad -h pit.org motd\n> \n> NOTICE:\n> ____ ______ ___\n> / )/ \\/ \\\n> ( / __ _\\ )\n> \\ (/ o) ( o) )\n> \\_ (_ ) \\ ) _/\n> \\ /\\_/ \\)/\n> \\/ <//| |\\\\>\n> _| |\n> \\|_/\n\nIn the email as I received it (including in view-source, so it is\nnot a display artifact), rows 2 and 4 are now missing an initial space.\n\nI've pulled up the version from the list archive also[1], and see the\nsame issue there.\n\nI'm not /only/ trying to be funny here ... I'm wondering if there could\nbe something relevant to be learned from finding out where the initial\nspace is being dropped, and why.\n\nRegards,\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/b8855527-88f5-4613-a258-8523cbded8be%40www.fastmail.com\n\n\n",
"msg_date": "Sat, 3 Apr 2021 14:48:53 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On 2021-Apr-03, Joel Jacobson wrote:\n\n> I'm actually using it myself in production for something, to display\n> instructions to users when they login.\n\nYeah, such as\n\n\"If your CREATE sentences don't work, please run\nCREATE SCHEMA AUTHORIZATION CURRENT_USER\"\n\nfor systems where the PUBLIC schema has been dropped.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n",
"msg_date": "Sat, 3 Apr 2021 16:16:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sat, Apr 3, 2021, at 17:50, Fabien COELHO wrote:\n> > Perhaps the configuration-file parser has been fixed since to support \n> > embedded newlines? If so, then maybe it would actually be an idea to \n> > support newlines by escaping them?\n> \n> Dunno.\n> \n> If such a feature gets considered, I'm not sure I'd like to actually edit \n> pg configuration file to change the message.\n\nFor the ALTER SYSTEM case, the value would be written to postgresql.auto.conf,\nand that file we shouldn't edit manually anyway, right?\n\n> \n> The actual source looks pretty straightforward. I'm wondering whether pg \n> style would suggest to write motd != NULL instead of just motd.\n\nThat's what I had originally, but when reviewing my code verifying code style,\nI noticed the other case it more common:\n\nif \\([a-z]* != NULL &&\n119 results in 72 files\n\nif \\([a-z]* &&\n936 results in 311 files\n\n> \n> I'm wondering whether it should be possible to designate (1) a file the \n> content of which would be shown, or (2) a command, the output of which \n> would be shown [ok, there might be security implications on this one].\n\nCan't we just do that via plpgsql and EXECUTE somehow?\n\n/Joel",
"msg_date": "Sun, 04 Apr 2021 07:55:16 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sun, Apr 4, 2021, at 07:55, Joel Jacobson wrote:\n> On Sat, Apr 3, 2021, at 17:50, Fabien COELHO wrote:\n>> I'm wondering whether it should be possible to designate (1) a file the \n>> content of which would be shown, or (2) a command, the output of which \n>> would be shown [ok, there might be security implications on this one].\n> \n> Can't we just do that via plpgsql and EXECUTE somehow?\n\nRight, we can't since\n\nERROR: ALTER SYSTEM cannot be executed from a function\n\nI wrongly thought a PROCEDURE would work, but it gives the same error.\n\n/Joel",
"msg_date": "Sun, 04 Apr 2021 08:23:56 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "Hello Joel,\n\nMy 0.02€:\n\n>> If such a feature gets considered, I'm not sure I'd like to actually edit\n>> pg configuration file to change the message.\n>\n> For the ALTER SYSTEM case, the value would be written to postgresql.auto.conf,\n> and that file we shouldn't edit manually anyway, right?\n\nSure. I meant change the configuration in any way, through manual editing, \nalter system, whatever.\n\n>> The actual source looks pretty straightforward. I'm wondering whether pg\n>> style would suggest to write motd != NULL instead of just motd.\n>\n> That's what I had originally, but when reviewing my code verifying code style,\n> I noticed the other case it more common:\n\nIf other cases are indeed pointers. For pgbench, all direct \"if (xxx &&\" \ncases are simple booleans or integers, pointers seem to have \"!= NULL\". \nWhile looking quickly at the grep output, ISTM that most obvious pointers \nhave \"!= NULL\" and other cases often look like booleans:\n\n catalog/pg_operator.c: if (isDelete && t->oprcom == baseId)\n catalog/toasting.c: if (check && lockmode != AccessExclusiveLock)\n commands/async.c: if (amRegisteredListener && listenChannels == NIL)\n commands/explain.c: if (es->analyze && es->timing)\n …\n\nI'm sure there are exceptions, but ISTM that the local style is \"!= NULL\".\n\n>> I'm wondering whether it should be possible to designate (1) a file the\n>> content of which would be shown, or (2) a command, the output of which\n>> would be shown [ok, there might be security implications on this one].\n>\n> Can't we just do that via plpgsql and EXECUTE somehow?\n\nHmmm.\n\nShould we want to execute forcibly some PL/pgSQL on any new connection? \nNot sure this is really desirable. I was thinking of something more \ntrivial, like the \"motd\" directeve could designate a file instead of the \nmessage itself.\n\nThere could be a hook system to execute some user code on new connections \nand other events. 
It could be a new kind of event trigger, eg with \nconnection_start, connection_end… That could be nice for other purposes,\ni.e. auditing. Now, event triggers are not global, they work inside a \ndatabase, which would suggest that if extended a new connection event \nwould be fired per database connection, not just once per connection. Not \nsure it would be a bad thing.\n\n-- \nFabien.",
"msg_date": "Sun, 4 Apr 2021 09:16:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sun, Apr 4, 2021, at 09:16, Fabien COELHO wrote:\n> If other cases are indeed pointers. For pgbench, all direct \"if (xxx &&\" \n> cases are simple booleans or integers, pointers seem to have \"!= NULL\". \n> While looking quickly at the grep output, ISTM that most obvious pointers \n> have \"!= NULL\" and other cases often look like booleans:\n> \n> catalog/pg_operator.c: if (isDelete && t->oprcom == baseId)\n> catalog/toasting.c: if (check && lockmode != AccessExclusiveLock)\n> commands/async.c: if (amRegisteredListener && listenChannels == NIL)\n> commands/explain.c: if (es->analyze && es->timing)\n> …\n> \n> I'm sure there are exceptions, but ISTM that the local style is \"!= NULL\".\n\nMany thanks for explaining.\n\n> \n> >> I'm wondering whether it should be possible to designate (1) a file the\n> >> content of which would be shown, or (2) a command, the output of which\n> >> would be shown [ok, there might be security implications on this one].\n> >\n> > Can't we just do that via plpgsql and EXECUTE somehow?\n> \n> Hmmm.\n> \n> Should we want to execute forcibly some PL/pgSQL on any new connection? \n\nOh, of course, you want the command to be execute for each new connection.\n\nMy idea was to use PL/pgSQL to execute only when you wanted to update the stored motd value,\nbut of course, if you want a new value from the command for each new connection,\nthen that doesn't work (and it doesn't work anyway due to not being able to execute ALTER SYSTEM from functions).\n\n> Not sure this is really desirable. I was thinking of something more \n> trivial, like the \"motd\" directeve could designate a file instead of the \n> message itself.\n> \n> There could be a hook system to execute some user code on new connections \n> and other events. It could be a new kind of event trigger, eg with \n> connection_start, connection_end… That could be nice for other purposes,\n> i.e. auditing. Now, event triggers are not global, they work inside a \n> database, which would suggest that if extended a new connection event \n> would be fired per database connection, not just once per connection. Not \n> sure it would be a bad thing.\n\nSuch a hook sounds like a good idea.\nIf we would have such a hook, then another possibility would be to implement motd as an extension, right?\n\n/Joel",
"msg_date": "Sun, 04 Apr 2021 09:25:31 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n\n>>> The actual source looks pretty straightforward. I'm wondering whether pg\n>>> style would suggest to write motd != NULL instead of just motd.\n>>\n>> That's what I had originally, but when reviewing my code verifying code style,\n>> I noticed the other case it more common:\n>>\n>> if \\([a-z]* != NULL &&\n>> 119 results in 72 files\n>>\n>> if \\([a-z]* &&\n>> 936 results in 311 files\n>\n> If other cases are indeed pointers. For pgbench, all direct \"if (xxx &&\"\n> cases are simple booleans or integers, pointers seem to have \"!=\n> NULL\". While looking quickly at the grep output, ISTM that most obvious\n> pointers have \"!= NULL\" and other cases often look like booleans:\n>\n> catalog/pg_operator.c: if (isDelete && t->oprcom == baseId)\n> catalog/toasting.c: if (check && lockmode != AccessExclusiveLock)\n> commands/async.c: if (amRegisteredListener && listenChannels == NIL)\n> commands/explain.c: if (es->analyze && es->timing)\n> …\n>\n> I'm sure there are exceptions, but ISTM that the local style is \"!= NULL\".\n\nLooking specifically at code checking an expression before dereferencing\nit, we get:\n\n$ ag '(?:if|Assert)\\s*\\(\\s*(\\S+)\\s*&&\\s*\\1->\\w+' | wc -l\n247\n\n$ ag '(?:if|Assert)\\s*\\(\\s*(\\S+)\\s*!=\\s*NULL\\s*&&\\s*\\1->\\w+' | wc -l\n74\n\nSo the shorter 'foo && foo->bar' form (which I personally prefer) is\nconsiderably more common than the longer 'foo != NULL && foo->bar' form.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n",
"msg_date": "Sun, 04 Apr 2021 19:42:30 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
},
{
"msg_contents": "On Sun, Apr 4, 2021, at 20:42, Dagfinn Ilmari Mannsåker wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr <mailto:coelho%40cri.ensmp.fr>> writes:\n> \n> >>> The actual source looks pretty straightforward. I'm wondering whether pg\n> >>> style would suggest to write motd != NULL instead of just motd.\n> >>\n> >> That's what I had originally, but when reviewing my code verifying code style,\n> >> I noticed the other case it more common:\n> >>\n> >> if \\([a-z]* != NULL &&\n> >> 119 results in 72 files\n> >>\n> >> if \\([a-z]* &&\n> >> 936 results in 311 files\n> >\n> > If other cases are indeed pointers. For pgbench, all direct \"if (xxx &&\"\n> > cases are simple booleans or integers, pointers seem to have \"!=\n> > NULL\". While looking quickly at the grep output, ISTM that most obvious\n> > pointers have \"!= NULL\" and other cases often look like booleans:\n> >\n> > catalog/pg_operator.c: if (isDelete && t->oprcom == baseId)\n> > catalog/toasting.c: if (check && lockmode != AccessExclusiveLock)\n> > commands/async.c: if (amRegisteredListener && listenChannels == NIL)\n> > commands/explain.c: if (es->analyze && es->timing)\n> > …\n> >\n> > I'm sure there are exceptions, but ISTM that the local style is \"!= NULL\".\n> \n> Looking specifically at code checking an expression before dereferencing\n> it, we get:\n> \n> $ ag '(?:if|Assert)\\s*\\(\\s*(\\S+)\\s*&&\\s*\\1->\\w+' | wc -l\n> 247\n> \n> $ ag '(?:if|Assert)\\s*\\(\\s*(\\S+)\\s*!=\\s*NULL\\s*&&\\s*\\1->\\w+' | wc -l\n> 74\n> \n> So the shorter 'foo && foo->bar' form (which I personally prefer) is\n> considerably more common than the longer 'foo != NULL && foo->bar' form.\n\nOh, I see. This gets more and more interesting.\n\nMore of the most popular variant like a good rule to follow,\nexcept when a new improved pattern is invented and new code\nwritten in a new way, but all old code written in the old way remains,\nso less experienced developers following such a rule,\nwill continue to write code in the old way.\n\nI sometimes do \"git log -p\" grepping for recent code changes,\nto see how new code is written.\n\nIt would be nice if there would be a grep similar to \"ag\" that could\nalso dig the git repo and show date/time when such code lines\nwere added.\n\nI was looking for some PostgreSQL coding convention document,\nand found https://www.postgresql.org/docs/current/source-conventions.html\n\nMaybe \"foo != NULL && foo->bar\" XOR \"foo && foo->bar\" should be added to such document?\n\nIs it an ambition to normalize the entire code base, to use just one of the two?\n\nIf so, maybe we could use some C compiler to get the AST\nfor all the C files and search it for occurrences, and then after normalizing\ncompiling again to verify the AST is unchanged (or changed in the desired way)?\n\n/Joel",
"msg_date": "Mon, 05 Apr 2021 08:19:27 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement motd for PostgreSQL"
}
] |
[
{
"msg_contents": "Hello,\n\nAttached is a small but confusing mistake in the json documentation (a @@ instead of @?) that has been there since version 12. (It took me quite some time to figure that out while testing with the recent SQL/JSON patches -- which I initially blamed).\n \nTo be applied from 12, 13, and master.\n\nThanks,\n\nErik Rijkers",
"msg_date": "Sat, 3 Apr 2021 14:01:38 +0200 (CEST)",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "fix old confusing JSON example"
},
{
"msg_contents": "> On 2021.04.03. 14:01 Erik Rijkers <er@xs4all.nl> wrote:\n> \n> Hello,\n> \n> Attached is a small but confusing mistake in the json documentation (a @@ instead of @?) that has been there since version 12. (It took me quite some time to figure that out while testing with the recent SQL/JSON patches -- which I initially blamed).\n> \n> To be applied from 12, 13, and master.\n\nOops, sent to wrong list.\n\nLet me add some arguments for the change:\n\nThe original text is:\n--------------------------\nAlso, GIN index supports @@ and @? operators, which perform jsonpath matching.\n\n SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] ? (@ == \"qui\")';\n\n--------------------------\nSo, that gives information on two operators, and then gives one example query for each. Clearly, the second example was meant to illustrate a where-clause with the @? operator.\n\nSmall change to prevent great confusion (I'll admit it took me far too long to understand this).\n\nthanks,\n\nErik Rijkers\n\n> \n> Thanks,\n> \n> Erik Rijkers",
"msg_date": "Sat, 3 Apr 2021 14:28:38 +0200 (CEST)",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: fix old confusing JSON example"
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 02:01:38PM +0200, Erik Rijkers wrote:\n> Attached is a small but confusing mistake in the json documentation\n> (a @@ instead of @?) that has been there since version 12. (It took\n> me quite some time to figure that out while testing with the recent\n> SQL/JSON patches -- which I initially blamed).\n\nPlease note that pgsql-committers is the mailing list with emails\ngenerated automatically for each commit done in the main repository.\nFor anything related to the docs, pgsql-docs is more adapted, so I am\nredirecting this thread there.\n--\nMichael",
"msg_date": "Sat, 3 Apr 2021 21:32:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix old confusing JSON example"
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 02:28:38PM +0200, Erik Rijkers wrote:\n> So, that gives information on two operators, and then gives one\n> example query for each. Clearly, the second example was meant to\n> illustrate a where-clause with the @? operator. \n> \n> Small change to prevent great confusion (I'll admit it took me far\n> too long to understand this). \n\nOnce one guesses the definition of the table to use with the sample\ndata at disposal in the docs, it is easy to see that both queries\nshould return the same result, but the second one misses the shot and\nis corrected as you say. So, applied.\n\nMy apologies for the delay.\n--\nMichael",
"msg_date": "Fri, 16 Apr 2021 17:00:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix old confusing JSON example"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 11:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Apr 03, 2021 at 02:28:38PM +0200, Erik Rijkers wrote:\n> > So, that gives information on two operators, and then gives one\n> > example query for each. Clearly, the second example was meant to\n> > illustrate a where-clause with the @? operator.\n> >\n> > Small change to prevent great confusion (I'll admit it took me far\n> > too long to understand this).\n>\n> Once one guesses the definition of the table to use with the sample\n> data at disposal in the docs, it is easy to see that both queries\n> should return the same result, but the second one misses the shot and\n> is corrected as you say. So, applied.\n>\n> My apologies for the delay.\n\nMy apologies for missing this. And thank you for taking care!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 16 Apr 2021 17:25:18 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix old confusing JSON example"
},
{
"msg_contents": "> On 2021.04.16. 10:00 Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Apr 03, 2021 at 02:28:38PM +0200, Erik Rijkers wrote:\n> > So, that gives information on two operators, and then gives one\n> > example query for each. Clearly, the second example was meant to\n> > illustrate a where-clause with the @? operator. \n> > \n> > Small change to prevent great confusion (I'll admit it took me far\n> > too long to understand this). \n> \n> Once one guesses the definition of the table to use with the sample\n> data at disposal in the docs, it is easy to see that both queries\n> should return the same result, but the second one misses the shot and\n> is corrected as you say. So, applied.\n\nGreat, thank you.\n\nI just happened to use the website-documentation and noticed that there the change is not done: it still has the erroneous line, in the docs of 13 (current), and 12; the docs of 14devel are apparently updated.\n\nThat makes me wonder: is there a regular html-docs-update (dayly? weekly?) of doc-bugs of this kind in the website-docs of current and earlier releases?\n\nTo be clear, I am talking about the lines below:\n 'GIN index supports @@ and @? operators'\n\non pages\n https://www.postgresql.org/docs/13/datatype-json.html\n https://www.postgresql.org/docs/12/datatype-json.html\n\nwhere the change that was pushed was to correct the second example from @@ to @?\n\nthanks,\n\nErik Rijkers\n\n\n> \n> My apologies for the delay.\n> --\n> Michael\n\n\n",
"msg_date": "Tue, 20 Apr 2021 21:07:52 +0200 (CEST)",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: fix old confusing JSON example"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 09:07:52PM +0200, Erik Rijkers wrote:\n> I just happened to use the website-documentation and noticed that there the change is not done: it still has the erroneous line, in the docs of 13 (current), and 12; the docs of 14devel are apparently updated.\n> \n> That makes me wonder: is there a regular html-docs-update (dayly? weekly?) of doc-bugs of this kind in the website-docs of current and earlier releases?\n> \n> To be clear, I am talking about the lines below:\n> 'GIN index supports @@ and @? operators'\n> \n> on pages\n> https://www.postgresql.org/docs/13/datatype-json.html\n> https://www.postgresql.org/docs/12/datatype-json.html\n> \n> where the change that was pushed was to correct the second example from @@ to @?\n\nLooking at the doc \"HOME\", it says:\nhttps://www.postgresql.org/docs/13/index.html\n| PostgreSQL 13.2 Documentation\n\nSo this seems to be updated for minor releases.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 20 Apr 2021 14:44:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix old confusing JSON example"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Tue, Apr 20, 2021 at 09:07:52PM +0200, Erik Rijkers wrote:\n>> I just happened to use the website-documentation and noticed that there the change is not done: it still has the erroneous line, in the docs of 13 (current), and 12; the docs of 14devel are apparently updated.\n>> \n>> That makes me wonder: is there a regular html-docs-update (dayly? weekly?) of doc-bugs of this kind in the website-docs of current and earlier releases?\n\n> Looking at the doc \"HOME\", it says:\n> https://www.postgresql.org/docs/13/index.html\n> | PostgreSQL 13.2 Documentation\n> So this seems to be updated for minor releases.\n\nYeah. The website's copy of the devel version of the docs is refreshed\nquickly (within a few hours of commit, usually) but released branches\nare only updated when there's a release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Apr 2021 17:08:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fix old confusing JSON example"
}
] |
[
{
"msg_contents": "Hi,\nWe migrated our Oracle Databases to PostgreSQL. One of the simple select\nquery that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL.\nCould you please advise. Please find query and query plans below. Gather\ncost seems high. Will increasing max_parallel_worker_per_gather help?\n\nexplain analyse SELECT bom.address_key dom2137,bom.address_type_key\ndom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key\ndom1955,bom.address_role_key dom1711,bom.delivery_point_created\ndom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name\ndom1186,bom.premises_number_1 dom1777,bom.premises_number_2\ndom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2\ndom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box\ndom653,bom.apartment_number dom1732,bom.apartment_letter\ndom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key\ndom1272,bom.address_family_id dom1796,bom.cur_address_key\ndom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time\ndom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE\naddress_key = 6113763\n\n[\n{\n\"Plan\": {\n\"Node Type\": \"Gather\",\n\"Parallel Aware\": false,\n\"Actual Rows\": 1,\n\"Actual Loops\": 1,\n\"Workers Planned\": 1,\n\"Workers Launched\": 1,\n\"Single Copy\": true,\n\"Plans\": [\n{\n\"Node Type\": \"Index Scan\",\n\"Parent Relationship\": \"Outer\",\n\"Parallel Aware\": false,\n\"Scan Direction\": \"Forward\",\n\"Index Name\": \"address1_i7\",\n\"Relation Name\": \"address\",\n\"Alias\": \"dom\",\n\"Actual Rows\": 1,\n\"Actual Loops\": 1,\n\"Index Cond\": \"(address_key = 6113763)\",\n\"Rows Removed by Index Recheck\": 0\n}\n]\n},\n\"Triggers\": []\n}\n]\n\n\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\ntime=174.318..198.539 rows=1 loops=1)\"\n\" Workers Planned: 1\"\n\" Workers Launched: 1\"\n\" Single Copy: true\"\n\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 
rows=1\nwidth=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n\" Index Cond: (address_key = 6113763)\"\n\"Planning Time: 0.221 ms\"\n\"Execution Time: 198.601 ms\"\n\n\n\nRegards,\nAditya.\n\nHi,We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. Will increasing max_parallel_worker_per_gather help?explain analyse SELECT bom.address_key dom2137,bom.address_type_key dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key dom1955,bom.address_role_key dom1711,bom.delivery_point_created dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name dom1186,bom.premises_number_1 dom1777,bom.premises_number_2 dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2 dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box dom653,bom.apartment_number dom1732,bom.apartment_letter dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key dom1272,bom.address_family_id dom1796,bom.cur_address_key dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE address_key = 6113763[{\"Plan\": {\"Node Type\": \"Gather\",\"Parallel Aware\": false,\"Actual Rows\": 1,\"Actual Loops\": 1,\"Workers Planned\": 1,\"Workers Launched\": 1,\"Single Copy\": true,\"Plans\": [{\"Node Type\": \"Index Scan\",\"Parent Relationship\": \"Outer\",\"Parallel Aware\": false,\"Scan Direction\": \"Forward\",\"Index Name\": \"address1_i7\",\"Relation Name\": \"address\",\"Alias\": \"dom\",\"Actual Rows\": 1,\"Actual Loops\": 1,\"Index Cond\": \"(address_key = 6113763)\",\"Rows Removed by Index Recheck\": 0}]},\"Triggers\": []}]\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual time=174.318..198.539 rows=1 loops=1)\"\" Workers Planned: 1\"\" Workers Launched: 1\"\" Single Copy: 
true\"\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1 width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\" Index Cond: (address_key = 6113763)\"\"Planning Time: 0.221 ms\"\"Execution Time: 198.601 ms\"Regards,Aditya.",
"msg_date": "Sat, 3 Apr 2021 19:08:22 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on Oracle\n after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 7:08 PM aditya desai <admad123@gmail.com> wrote:\n>\n> Hi,\n> We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. Will increasing max_parallel_worker_per_gather help?\n\nNo it doesn't. For small tables, parallelism might not help since it\ndoesn't come for free. Try setting max_parallel_worker_per_gather to 0\ni.e. without parallel query.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 3 Apr 2021 19:17:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com> napsal:\n\n> Hi,\n> We migrated our Oracle Databases to PostgreSQL. One of the simple select\n> query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL.\n> Could you please advise. Please find query and query plans below. Gather\n> cost seems high. Will increasing max_parallel_worker_per_gather help?\n>\n> explain analyse SELECT bom.address_key dom2137,bom.address_type_key\n> dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key\n> dom1955,bom.address_role_key dom1711,bom.delivery_point_created\n> dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name\n> dom1186,bom.premises_number_1 dom1777,bom.premises_number_2\n> dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2\n> dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box\n> dom653,bom.apartment_number dom1732,bom.apartment_letter\n> dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key\n> dom1272,bom.address_family_id dom1796,bom.cur_address_key\n> dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time\n> dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE\n> address_key = 6113763\n>\n> [\n> {\n> \"Plan\": {\n> \"Node Type\": \"Gather\",\n> \"Parallel Aware\": false,\n> \"Actual Rows\": 1,\n> \"Actual Loops\": 1,\n> \"Workers Planned\": 1,\n> \"Workers Launched\": 1,\n> \"Single Copy\": true,\n> \"Plans\": [\n> {\n> \"Node Type\": \"Index Scan\",\n> \"Parent Relationship\": \"Outer\",\n> \"Parallel Aware\": false,\n> \"Scan Direction\": \"Forward\",\n> \"Index Name\": \"address1_i7\",\n> \"Relation Name\": \"address\",\n> \"Alias\": \"dom\",\n> \"Actual Rows\": 1,\n> \"Actual Loops\": 1,\n> \"Index Cond\": \"(address_key = 6113763)\",\n> \"Rows Removed by Index Recheck\": 0\n> }\n> ]\n> },\n> \"Triggers\": []\n> }\n> ]\n>\n> \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> time=174.318..198.539 
rows=1 loops=1)\"\n> \" Workers Planned: 1\"\n> \" Workers Launched: 1\"\n> \" Single Copy: true\"\n> \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1\n> width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> \" Index Cond: (address_key = 6113763)\"\n> \"Planning Time: 0.221 ms\"\n> \"Execution Time: 198.601 ms\"\n>\n\nYou should have broken configuration - there is not any reason to start\nparallelism - probably some option in postgresql.conf has very bad value.\nSecond - it's crazy to see 200 ms just on interprocess communication -\nmaybe your CPU is overutilized.\n\nRegards\n\nPavel\n\n\n\n\n>\n>\n> Regards,\n> Aditya.\n>\n\nso 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com> napsal:Hi,We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. Will increasing max_parallel_worker_per_gather help?explain analyse SELECT bom.address_key dom2137,bom.address_type_key dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key dom1955,bom.address_role_key dom1711,bom.delivery_point_created dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name dom1186,bom.premises_number_1 dom1777,bom.premises_number_2 dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2 dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box dom653,bom.apartment_number dom1732,bom.apartment_letter dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key dom1272,bom.address_family_id dom1796,bom.cur_address_key dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE address_key = 6113763[{\"Plan\": {\"Node Type\": \"Gather\",\"Parallel Aware\": false,\"Actual Rows\": 1,\"Actual Loops\": 1,\"Workers Planned\": 1,\"Workers Launched\": 
1,\"Single Copy\": true,\"Plans\": [{\"Node Type\": \"Index Scan\",\"Parent Relationship\": \"Outer\",\"Parallel Aware\": false,\"Scan Direction\": \"Forward\",\"Index Name\": \"address1_i7\",\"Relation Name\": \"address\",\"Alias\": \"dom\",\"Actual Rows\": 1,\"Actual Loops\": 1,\"Index Cond\": \"(address_key = 6113763)\",\"Rows Removed by Index Recheck\": 0}]},\"Triggers\": []}]\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual time=174.318..198.539 rows=1 loops=1)\"\" Workers Planned: 1\"\" Workers Launched: 1\"\" Single Copy: true\"\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1 width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\" Index Cond: (address_key = 6113763)\"\"Planning Time: 0.221 ms\"\"Execution Time: 198.601 ms\"You should have broken configuration - there is not any reason to start parallelism - probably some option in postgresql.conf has very bad value. Second - it's crazy to see 200 ms just on interprocess communication - maybe your CPU is overutilized.RegardsPavelRegards,Aditya.",
"msg_date": "Sat, 3 Apr 2021 16:08:01 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "It seems like something is missing. Is this table partitioned? How long ago\nwas migration done? Has vacuum freeze and analyze of tables been done? Was\nindex created after populating data or reindexed after perhaps? What\nversion are you using?\n\nIt seems like something is missing. Is this table partitioned? How long ago was migration done? Has vacuum freeze and analyze of tables been done? Was index created after populating data or reindexed after perhaps? What version are you using?",
"msg_date": "Sat, 3 Apr 2021 08:10:23 -0600",
"msg_from": "Michael Lewis <mlewis@entrata.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com> napsal:\n> > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > time=174.318..198.539 rows=1 loops=1)\"\n> > \" Workers Planned: 1\"\n> > \" Workers Launched: 1\"\n> > \" Single Copy: true\"\n> > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1\n> > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > \" Index Cond: (address_key = 6113763)\"\n> > \"Planning Time: 0.221 ms\"\n> > \"Execution Time: 198.601 ms\"\n> \n> You should have broken configuration - there is not any reason to start\n> parallelism - probably some option in postgresql.conf has very bad value.\n> Second - it's crazy to see 200 ms just on interprocess communication -\n> maybe your CPU is overutilized.\n\nIt seems like force_parallel_mode is set, which is for debugging and not for\n\"forcing things to go faster\". Maybe we should rename the parameter, like\nparallel_mode_testing=on.\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 3 Apr 2021 09:16:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Hi Michael,\nThanks for your response.\nIs this table partitioned? - No\nHow long ago was migration done? - 27th March 2021\nHas vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\n Was index created after populating data or reindexed after perhaps? -\nIndex was created after data load and reindex was executed on all tables\nyesterday.\n Version is PostgreSQL-11\n\nRegards,\nAditya.\n\n\nOn Sat, Apr 3, 2021 at 7:40 PM Michael Lewis <mlewis@entrata.com> wrote:\n\n> It seems like something is missing. Is this table partitioned? How long\n> ago was migration done? Has vacuum freeze and analyze of tables been done?\n> Was index created after populating data or reindexed after perhaps? What\n> version are you using?\n>\n\nHi Michael,Thanks for your response.Is this table partitioned? - NoHow long ago was migration done? - 27th March 2021Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze. Was index created after populating data or reindexed after perhaps? - Index was created after data load and reindex was executed on all tables yesterday. Version is PostgreSQL-11Regards,Aditya.On Sat, Apr 3, 2021 at 7:40 PM Michael Lewis <mlewis@entrata.com> wrote:It seems like something is missing. Is this table partitioned? How long ago was migration done? Has vacuum freeze and analyze of tables been done? Was index created after populating data or reindexed after perhaps? What version are you using?",
"msg_date": "Sat, 3 Apr 2021 20:29:22 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\n> Hi Michael,\n> Thanks for your response.\n> Is this table partitioned? - No\n> How long ago was migration done? - 27th March 2021\n> Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\n> Was index created after populating data or reindexed after perhaps? - Index\n> was created after data load and reindex was executed on all tables yesterday.\n> Version is PostgreSQL-11\n\nFYI, the output of these queries will show u what changes have been made\nto the configuration file:\n\n\tSELECT version();\n\t\n\tSELECT name, current_setting(name), source\n\tFROM pg_settings\n\tWHERE source NOT IN ('default', 'override');\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:04:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Hi Justin,\nYes, force_parallel_mode is on. Should we set it off?\n\nRegards,\nAditya.\n\nOn Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com>\n> napsal:\n> > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > time=174.318..198.539 rows=1 loops=1)\"\n> > > \" Workers Planned: 1\"\n> > > \" Workers Launched: 1\"\n> > > \" Single Copy: true\"\n> > > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65\n> rows=1\n> > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > \" Index Cond: (address_key = 6113763)\"\n> > > \"Planning Time: 0.221 ms\"\n> > > \"Execution Time: 198.601 ms\"\n> >\n> > You should have broken configuration - there is not any reason to start\n> > parallelism - probably some option in postgresql.conf has very bad\n> value.\n> > Second - it's crazy to see 200 ms just on interprocess communication -\n> > maybe your CPU is overutilized.\n>\n> It seems like force_parallel_mode is set, which is for debugging and not\n> for\n> \"forcing things to go faster\". Maybe we should rename the parameter, like\n> parallel_mode_testing=on.\n>\n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n>\n> --\n> Justin\n>\n\nHi Justin,Yes, force_parallel_mode is on. Should we set it off?Regards,Aditya.On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> so 3. 4. 
2021 v 15:38 odesílatel aditya desai <admad123@gmail.com> napsal:\n> > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > time=174.318..198.539 rows=1 loops=1)\"\n> > \" Workers Planned: 1\"\n> > \" Workers Launched: 1\"\n> > \" Single Copy: true\"\n> > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1\n> > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > \" Index Cond: (address_key = 6113763)\"\n> > \"Planning Time: 0.221 ms\"\n> > \"Execution Time: 198.601 ms\"\n> \n> You should have broken configuration - there is not any reason to start\n> parallelism - probably some option in postgresql.conf has very bad value.\n> Second - it's crazy to see 200 ms just on interprocess communication -\n> maybe your CPU is overutilized.\n\nIt seems like force_parallel_mode is set, which is for debugging and not for\n\"forcing things to go faster\". Maybe we should rename the parameter, like\nparallel_mode_testing=on.\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 20:38:18 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> Hi Justin,\n> Yes, force_parallel_mode is on. Should we set it off?\n\nYes. I bet someone set it without reading our docs:\n\n\thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n-->\tAllows the use of parallel queries for testing purposes even in cases\n-->\twhere no performance benefit is expected.\n\nWe might need to clarify this sentence to be clearer it is _only_ for\ntesting. Also, I suggest you review _all_ changes that have been made\nto the server since I am worried other unwise changes might also have\nbeen made.\n\n---------------------------------------------------------------------------\n\n> \n> Regards,\n> Aditya.\n> \n> On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com>\n> napsal:\n> > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > time=174.318..198.539 rows=1 loops=1)\"\n> > > \" Workers Planned: 1\"\n> > > \" Workers Launched: 1\"\n> > > \" Single Copy: true\"\n> > > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows\n> =1\n> > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > \" Index Cond: (address_key = 6113763)\"\n> > > \"Planning Time: 0.221 ms\"\n> > > \"Execution Time: 198.601 ms\"\n> >\n> > You should have broken configuration - there is not any reason to start\n> > parallelism - probably some option in postgresql.conf has very bad\n> value.\n> > Second - it's crazy to see 200 ms just on interprocess communication -\n> > maybe your CPU is overutilized.\n> \n> It seems like force_parallel_mode is set, which is for debugging and not\n> for\n> \"forcing things to go faster\". Maybe we should rename the parameter, like\n> parallel_mode_testing=on.\n> \n> 
http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n> \n> --\n> Justin\n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:12:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Thanks Bruce!! Will set it off and retry.\n\nOn Sat, Apr 3, 2021 at 8:42 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > Hi Justin,\n> > Yes, force_parallel_mode is on. Should we set it off?\n>\n> Yes. I bet someone set it without reading our docs:\n>\n>\n> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>\n> --> Allows the use of parallel queries for testing purposes even in\n> cases\n> --> where no performance benefit is expected.\n>\n> We might need to clarify this sentence to be clearer it is _only_ for\n> testing. Also, I suggest you review _all_ changes that have been made\n> to the server since I am worried other unwise changes might also have\n> been made.\n>\n> ---------------------------------------------------------------------------\n>\n> >\n> > Regards,\n> > Aditya.\n> >\n> > On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >\n> > On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > > so 3. 4. 
2021 v 15:38 odesílatel aditya desai <admad123@gmail.com>\n> > napsal:\n> > > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > > time=174.318..198.539 rows=1 loops=1)\"\n> > > > \" Workers Planned: 1\"\n> > > > \" Workers Launched: 1\"\n> > > > \" Single Copy: true\"\n> > > > \" -> Index Scan using address1_i7 on address1 dom\n> (cost=0.43..2.65 rows\n> > =1\n> > > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > > \" Index Cond: (address_key = 6113763)\"\n> > > > \"Planning Time: 0.221 ms\"\n> > > > \"Execution Time: 198.601 ms\"\n> > >\n> > > You should have broken configuration - there is not any reason to\n> start\n> > > parallelism - probably some option in postgresql.conf has very bad\n> > value.\n> > > Second - it's crazy to see 200 ms just on interprocess\n> communication -\n> > > maybe your CPU is overutilized.\n> >\n> > It seems like force_parallel_mode is set, which is for debugging and\n> not\n> > for\n> > \"forcing things to go faster\". Maybe we should rename the\n> parameter, like\n> > parallel_mode_testing=on.\n> >\n> >\n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n> >\n> > --\n> > Justin\n> >\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nThanks Bruce!! Will set it off and retry.On Sat, Apr 3, 2021 at 8:42 PM Bruce Momjian <bruce@momjian.us> wrote:On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> Hi Justin,\n> Yes, force_parallel_mode is on. Should we set it off?\n\nYes. I bet someone set it without reading our docs:\n\n https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n--> Allows the use of parallel queries for testing purposes even in cases\n--> where no performance benefit is expected.\n\nWe might need to clarify this sentence to be clearer it is _only_ for\ntesting. 
Also, I suggest you review _all_ changes that have been made\nto the server since I am worried other unwise changes might also have\nbeen made.\n\n---------------------------------------------------------------------------\n\n> \n> Regards,\n> Aditya.\n> \n> On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com>\n> napsal:\n> > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > time=174.318..198.539 rows=1 loops=1)\"\n> > > \" Workers Planned: 1\"\n> > > \" Workers Launched: 1\"\n> > > \" Single Copy: true\"\n> > > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows\n> =1\n> > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > \" Index Cond: (address_key = 6113763)\"\n> > > \"Planning Time: 0.221 ms\"\n> > > \"Execution Time: 198.601 ms\"\n> >\n> > You should have broken configuration - there is not any reason to start\n> > parallelism - probably some option in postgresql.conf has very bad\n> value.\n> > Second - it's crazy to see 200 ms just on interprocess communication -\n> > maybe your CPU is overutilized.\n> \n> It seems like force_parallel_mode is set, which is for debugging and not\n> for\n> \"forcing things to go faster\". Maybe we should rename the parameter, like\n> parallel_mode_testing=on.\n> \n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n> \n> --\n> Justin\n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 3 Apr 2021 20:52:25 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 11:12:01AM -0400, Bruce Momjian wrote:\n> On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > Hi Justin,\n> > Yes, force_parallel_mode is on. Should we set it off?\n> \n> Yes. I bet someone set it without reading our docs:\n> \n> \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> -->\tAllows the use of parallel queries for testing purposes even in cases\n> -->\twhere no performance benefit is expected.\n> \n> We might need to clarify this sentence to be clearer it is _only_ for\n> testing. Also, I suggest you review _all_ changes that have been made\n> to the server since I am worried other unwise changes might also have\n> been made.\n\nThis brings up an issue we see occasionally. You can either leave\neverything as default, get advice on which defaults to change, or study\neach setting and then change defaults. Changing defaults without study\noften leads to poor configurations, as we are seeing here.\n\nThe lucky thing is that you noticed a slow query and found the\nmisconfiguration, but I am sure there are many servers where\nmisconfiguration is never detected. I wish I knew how to improve this\nsituation, but user education seems to be all we can do.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:24:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "adding the group.\n\n aad_log_min_messages | warning\n | configuration file\n application_name | psql\n | client\n archive_command |\nc:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" |\nconfiguration file\n archive_mode | on\n | configuration file\n archive_timeout | 15min\n | configuration file\n authentication_timeout | 30s\n | configuration file\n autovacuum_analyze_scale_factor | 0.05\n | configuration file\n autovacuum_naptime | 15s\n | configuration file\n autovacuum_vacuum_scale_factor | 0.05\n | configuration file\n bgwriter_delay | 20ms\n | configuration file\n bgwriter_flush_after | 512kB\n | configuration file\n bgwriter_lru_maxpages | 100\n | configuration file\n checkpoint_completion_target | 0.9\n | configuration file\n checkpoint_flush_after | 256kB\n | configuration file\n checkpoint_timeout | 5min\n | configuration file\n client_encoding | UTF8\n | client\n connection_ID |\n5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n connection_PeerIP |\nfd40:4d4a:11:5067:6d11:500:a07:5144 | client\n connection_Vnet | on\n | client\n constraint_exclusion | partition\n | configuration file\n data_sync_retry | on\n | configuration file\n DateStyle | ISO, MDY\n | configuration file\n default_text_search_config | pg_catalog.english\n | configuration file\n dynamic_shared_memory_type | windows\n | configuration file\n effective_cache_size | 160GB\n | configuration file\n enable_seqscan | off\n | configuration file\n force_parallel_mode | off\n | configuration file\n from_collapse_limit | 15\n | configuration file\n full_page_writes | off\n | configuration file\n hot_standby | on\n | configuration file\n hot_standby_feedback | on\n | configuration file\n join_collapse_limit | 15\n | configuration file\n lc_messages | English_United States.1252\n | configuration file\n lc_monetary | English_United States.1252\n | configuration file\n lc_numeric | English_United States.1252\n | configuration file\n lc_time | English_United 
States.1252\n | configuration file\n listen_addresses | *\n | configuration file\n log_checkpoints | on\n | configuration file\n log_connections | on\n | configuration file\n log_destination | stderr\n | configuration file\n log_file_mode | 0640\n | configuration file\n log_line_prefix | %t-%c-\n | configuration file\n log_min_messages_internal | info\n | configuration file\n log_rotation_age | 1h\n | configuration file\n log_rotation_size | 100MB\n | configuration file\n log_timezone | UTC\n | configuration file\n logging_collector | on\n | configuration file\n maintenance_work_mem | 1GB\n | configuration file\n max_connections | 1900\n | configuration file\n max_parallel_workers_per_gather | 16\n | configuration file\n max_replication_slots | 10\n | configuration file\n max_stack_depth | 2MB\n | environment variable\n max_wal_senders | 10\n | configuration file\n max_wal_size | 26931MB\n | configuration file\n min_wal_size | 4GB\n | configuration file\n pg_qs.query_capture_mode | top\n | configuration file\n pgms_wait_sampling.query_capture_mode | all\n | configuration file\n pgstat_udp_port | 20224\n | command line\n port | 20224\n | command line\n random_page_cost | 1.1\n | configuration file\n shared_buffers | 64GB\n | configuration file\n ssl | on\n | configuration file\n ssl_ca_file | root.crt\n | configuration file\n superuser_reserved_connections | 5\n | configuration file\n TimeZone | EET\n | configuration file\n track_io_timing | on\n | configuration file\n wal_buffers | 128MB\n | configuration file\n wal_keep_segments | 25\n | configuration file\n wal_level | replica\n | configuration file\n work_mem | 16MB\n | configuration file\n\n\nOn Sat, Apr 3, 2021 at 8:59 PM aditya desai <admad123@gmail.com> wrote:\n\n> Hi Bruce,\n> Please find the below output.force_parallel_mode if off now.\n>\n> aad_log_min_messages | warning\n> | configuration file\n> application_name | psql\n> | client\n> archive_command |\n> c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive 
blob \"%f\" \"%p\" |\n> configuration file\n> archive_mode | on\n> | configuration file\n> archive_timeout | 15min\n> | configuration file\n> authentication_timeout | 30s\n> | configuration file\n> autovacuum_analyze_scale_factor | 0.05\n> | configuration file\n> autovacuum_naptime | 15s\n> | configuration file\n> autovacuum_vacuum_scale_factor | 0.05\n> | configuration file\n> bgwriter_delay | 20ms\n> | configuration file\n> bgwriter_flush_after | 512kB\n> | configuration file\n> bgwriter_lru_maxpages | 100\n> | configuration file\n> checkpoint_completion_target | 0.9\n> | configuration file\n> checkpoint_flush_after | 256kB\n> | configuration file\n> checkpoint_timeout | 5min\n> | configuration file\n> client_encoding | UTF8\n> | client\n> connection_ID |\n> 5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n> connection_PeerIP |\n> fd40:4d4a:11:5067:6d11:500:a07:5144 | client\n> connection_Vnet | on\n> | client\n> constraint_exclusion | partition\n> | configuration file\n> data_sync_retry | on\n> | configuration file\n> DateStyle | ISO, MDY\n> | configuration file\n> default_text_search_config | pg_catalog.english\n> | configuration file\n> dynamic_shared_memory_type | windows\n> | configuration file\n> effective_cache_size | 160GB\n> | configuration file\n> enable_seqscan | off\n> | configuration file\n> force_parallel_mode | off\n> | configuration file\n> from_collapse_limit | 15\n> | configuration file\n> full_page_writes | off\n> | configuration file\n> hot_standby | on\n> | configuration file\n> hot_standby_feedback | on\n> | configuration file\n> join_collapse_limit | 15\n> | configuration file\n> lc_messages | English_United States.1252\n> | configuration file\n> lc_monetary | English_United States.1252\n> | configuration file\n> lc_numeric | English_United States.1252\n> | configuration file\n> lc_time | English_United States.1252\n> | configuration file\n> listen_addresses | *\n> | configuration file\n> log_checkpoints | on\n> | configuration file\n> 
log_connections | on\n> | configuration file\n> log_destination | stderr\n> | configuration file\n> log_file_mode | 0640\n> | configuration file\n> log_line_prefix | %t-%c-\n> | configuration file\n> log_min_messages_internal | info\n> | configuration file\n> log_rotation_age | 1h\n> | configuration file\n> log_rotation_size | 100MB\n> | configuration file\n> log_timezone | UTC\n> | configuration file\n> logging_collector | on\n> | configuration file\n> maintenance_work_mem | 1GB\n> | configuration file\n> max_connections | 1900\n> | configuration file\n> max_parallel_workers_per_gather | 16\n> | configuration file\n> max_replication_slots | 10\n> | configuration file\n> max_stack_depth | 2MB\n> | environment variable\n> max_wal_senders | 10\n> | configuration file\n> max_wal_size | 26931MB\n> | configuration file\n> min_wal_size | 4GB\n> | configuration file\n> pg_qs.query_capture_mode | top\n> | configuration file\n> pgms_wait_sampling.query_capture_mode | all\n> | configuration file\n> pgstat_udp_port | 20224\n> | command line\n> port | 20224\n> | command line\n> random_page_cost | 1.1\n> | configuration file\n> shared_buffers | 64GB\n> | configuration file\n> ssl | on\n> | configuration file\n> ssl_ca_file | root.crt\n> | configuration file\n> superuser_reserved_connections | 5\n> | configuration file\n> TimeZone | EET\n> | configuration file\n> track_io_timing | on\n> | configuration file\n> wal_buffers | 128MB\n> | configuration file\n> wal_keep_segments | 25\n> | configuration file\n> wal_level | replica\n> | configuration file\n> work_mem | 16MB\n> | configuration file\n>\n>\n> Regards,\n> Aditya.\n>\n>\n>\n> On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\n>> > Hi Michael,\n>> > Thanks for your response.\n>> > Is this table partitioned? - No\n>> > How long ago was migration done? - 27th March 2021\n>> > Has vacuum freeze and analyze of tables been done? 
- We ran vacuum\n>> analyze.\n>> > Was index created after populating data or reindexed after perhaps? -\n>> Index\n>> > was created after data load and reindex was executed on all tables\n>> yesterday.\n>> > Version is PostgreSQL-11\n>>\n>> FYI, the output of these queries will show u what changes have been made\n>> to the configuration file:\n>>\n>> SELECT version();\n>>\n>> SELECT name, current_setting(name), source\n>> FROM pg_settings\n>> WHERE source NOT IN ('default', 'override');\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>>\n\nadding the group. aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan 
| off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file 
work_mem | 16MB | configuration fileOn Sat, Apr 3, 2021 at 8:59 PM aditya desai <admad123@gmail.com> wrote:Hi Bruce,Please find the below output.force_parallel_mode if off now. aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan | off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United 
States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file work_mem | 16MB | configuration fileRegards,Aditya.On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <bruce@momjian.us> wrote:On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\r\n> Hi Michael,\r\n> Thanks for your response.\r\n> Is this table partitioned? - No\r\n> How long ago was migration done? - 27th March 2021\r\n> Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\r\n> Was index created after populating data or reindexed after perhaps? 
- Index\r\n> was created after data load and reindex was executed on all tables yesterday.\r\n> Version is PostgreSQL-11\n\r\nFYI, the output of these queries will show u what changes have been made\r\nto the configuration file:\n\r\n SELECT version();\n\r\n SELECT name, current_setting(name), source\r\n FROM pg_settings\r\n WHERE source NOT IN ('default', 'override');\n\r\n-- \r\n Bruce Momjian <bruce@momjian.us> https://momjian.us\r\n EDB https://enterprisedb.com\n\r\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 3 Apr 2021 21:00:24 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "I will gather all information and get back to you\n\nOn Sat, Apr 3, 2021 at 9:00 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> so 3. 4. 2021 v 17:15 odesílatel aditya desai <admad123@gmail.com> napsal:\n>\n>> Hi Pavel,\n>> Thanks for response. Please see below.\n>> work_mem=16MB\n>> maintenance_work_mem=1GB\n>> effective_cache_size=160GB\n>> shared_buffers=64GB\n>> force_parallel_mode=ON\n>>\n>\n> force_parallel_mode is very bad idea. efective_cache_size=160GB can be too\n> much too. work_mem 16 MB is maybe too low. The configuration looks a little\n> bit chaotic :)\n>\n> How much has RAM your server? How much CPU cores are there? What is\n> max_connections?\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Regards,\n>> Aditya.\n>>\n>>\n>> On Sat, Apr 3, 2021 at 7:38 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com>\n>>> napsal:\n>>>\n>>>> Hi,\n>>>> We migrated our Oracle Databases to PostgreSQL. One of the simple\n>>>> select query that takes 4 ms on Oracle is taking around 200 ms on\n>>>> PostgreSQL. Could you please advise. Please find query and query plans\n>>>> below. Gather cost seems high. 
Will increasing\n>>>> max_parallel_worker_per_gather help?\n>>>>\n>>>> explain analyse SELECT bom.address_key dom2137,bom.address_type_key\n>>>> dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key\n>>>> dom1955,bom.address_role_key dom1711,bom.delivery_point_created\n>>>> dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name\n>>>> dom1186,bom.premises_number_1 dom1777,bom.premises_number_2\n>>>> dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2\n>>>> dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box\n>>>> dom653,bom.apartment_number dom1732,bom.apartment_letter\n>>>> dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key\n>>>> dom1272,bom.address_family_id dom1796,bom.cur_address_key\n>>>> dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time\n>>>> dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE\n>>>> address_key = 6113763\n>>>>\n>>>> [\n>>>> {\n>>>> \"Plan\": {\n>>>> \"Node Type\": \"Gather\",\n>>>> \"Parallel Aware\": false,\n>>>> \"Actual Rows\": 1,\n>>>> \"Actual Loops\": 1,\n>>>> \"Workers Planned\": 1,\n>>>> \"Workers Launched\": 1,\n>>>> \"Single Copy\": true,\n>>>> \"Plans\": [\n>>>> {\n>>>> \"Node Type\": \"Index Scan\",\n>>>> \"Parent Relationship\": \"Outer\",\n>>>> \"Parallel Aware\": false,\n>>>> \"Scan Direction\": \"Forward\",\n>>>> \"Index Name\": \"address1_i7\",\n>>>> \"Relation Name\": \"address\",\n>>>> \"Alias\": \"dom\",\n>>>> \"Actual Rows\": 1,\n>>>> \"Actual Loops\": 1,\n>>>> \"Index Cond\": \"(address_key = 6113763)\",\n>>>> \"Rows Removed by Index Recheck\": 0\n>>>> }\n>>>> ]\n>>>> },\n>>>> \"Triggers\": []\n>>>> }\n>>>> ]\n>>>>\n>>>> \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n>>>> time=174.318..198.539 rows=1 loops=1)\"\n>>>> \" Workers Planned: 1\"\n>>>> \" Workers Launched: 1\"\n>>>> \" Single Copy: true\"\n>>>> \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65\n>>>> rows=1 
width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n>>>> \" Index Cond: (address_key = 6113763)\"\n>>>> \"Planning Time: 0.221 ms\"\n>>>> \"Execution Time: 198.601 ms\"\n>>>>\n>>>\n>>> You should have broken configuration - there is not any reason to start\n>>> parallelism - probably some option in postgresql.conf has very bad value.\n>>> Second - it's crazy to see 200 ms just on interprocess communication -\n>>> maybe your CPU is overutilized.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>\n>>>\n>>>>\n>>>>\n>>>> Regards,\n>>>> Aditya.\n>>>>\n>>>\n\nI will gather all information and get back to youOn Sat, Apr 3, 2021 at 9:00 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:so 3. 4. 2021 v 17:15 odesílatel aditya desai <admad123@gmail.com> napsal:Hi Pavel,Thanks for response. Please see below.work_mem=16MBmaintenance_work_mem=1GBeffective_cache_size=160GBshared_buffers=64GBforce_parallel_mode=ONforce_parallel_mode is very bad idea. efective_cache_size=160GB can be too much too. work_mem 16 MB is maybe too low. The configuration looks a little bit chaotic :)How much has RAM your server? How much CPU cores are there? What is max_connections? RegardsPavel Regards,Aditya.On Sat, Apr 3, 2021 at 7:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:so 3. 4. 2021 v 15:38 odesílatel aditya desai <admad123@gmail.com> napsal:Hi,We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. 
Will increasing max_parallel_worker_per_gather help?explain analyse SELECT bom.address_key dom2137,bom.address_type_key dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key dom1955,bom.address_role_key dom1711,bom.delivery_point_created dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name dom1186,bom.premises_number_1 dom1777,bom.premises_number_2 dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2 dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box dom653,bom.apartment_number dom1732,bom.apartment_letter dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key dom1272,bom.address_family_id dom1796,bom.cur_address_key dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE address_key = 6113763[{\"Plan\": {\"Node Type\": \"Gather\",\"Parallel Aware\": false,\"Actual Rows\": 1,\"Actual Loops\": 1,\"Workers Planned\": 1,\"Workers Launched\": 1,\"Single Copy\": true,\"Plans\": [{\"Node Type\": \"Index Scan\",\"Parent Relationship\": \"Outer\",\"Parallel Aware\": false,\"Scan Direction\": \"Forward\",\"Index Name\": \"address1_i7\",\"Relation Name\": \"address\",\"Alias\": \"dom\",\"Actual Rows\": 1,\"Actual Loops\": 1,\"Index Cond\": \"(address_key = 6113763)\",\"Rows Removed by Index Recheck\": 0}]},\"Triggers\": []}]\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual time=174.318..198.539 rows=1 loops=1)\"\" Workers Planned: 1\"\" Workers Launched: 1\"\" Single Copy: true\"\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1 width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\" Index Cond: (address_key = 6113763)\"\"Planning Time: 0.221 ms\"\"Execution Time: 198.601 ms\"You should have broken configuration - there is not any reason to start parallelism - probably some option in postgresql.conf has very bad value. 
Second - it's crazy to see 200 ms just on interprocess communication - maybe your CPU is overutilized.RegardsPavelRegards,Aditya.",
"msg_date": "Sat, 3 Apr 2021 21:03:42 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 17:30 odesílatel aditya desai <admad123@gmail.com> napsal:\n\n> adding the group.\n>\n> aad_log_min_messages | warning\n> | configuration file\n> application_name | psql\n> | client\n> archive_command |\n> c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" |\n> configuration file\n> archive_mode | on\n> | configuration file\n> archive_timeout | 15min\n> | configuration file\n> authentication_timeout | 30s\n> | configuration file\n> autovacuum_analyze_scale_factor | 0.05\n> | configuration file\n> autovacuum_naptime | 15s\n> | configuration file\n> autovacuum_vacuum_scale_factor | 0.05\n> | configuration file\n> bgwriter_delay | 20ms\n> | configuration file\n> bgwriter_flush_after | 512kB\n> | configuration file\n> bgwriter_lru_maxpages | 100\n> | configuration file\n> checkpoint_completion_target | 0.9\n> | configuration file\n> checkpoint_flush_after | 256kB\n> | configuration file\n> checkpoint_timeout | 5min\n> | configuration file\n> client_encoding | UTF8\n> | client\n> connection_ID |\n> 5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n> connection_PeerIP |\n> fd40:4d4a:11:5067:6d11:500:a07:5144 | client\n> connection_Vnet | on\n> | client\n> constraint_exclusion | partition\n> | configuration file\n> data_sync_retry | on\n> | configuration file\n> DateStyle | ISO, MDY\n> | configuration file\n> default_text_search_config | pg_catalog.english\n> | configuration file\n> dynamic_shared_memory_type | windows\n> | configuration file\n> effective_cache_size | 160GB\n> | configuration file\n> enable_seqscan | off\n> | configuration file\n> force_parallel_mode | off\n> | configuration file\n> from_collapse_limit | 15\n> | configuration file\n> full_page_writes | off\n> | configuration file\n> hot_standby | on\n> | configuration file\n> hot_standby_feedback | on\n> | configuration file\n> join_collapse_limit | 15\n> | configuration file\n> lc_messages | English_United States.1252\n> | configuration file\n> lc_monetary 
| English_United States.1252\n> | configuration file\n> lc_numeric | English_United States.1252\n> | configuration file\n> lc_time | English_United States.1252\n> | configuration file\n> listen_addresses | *\n> | configuration file\n> log_checkpoints | on\n> | configuration file\n> log_connections | on\n> | configuration file\n> log_destination | stderr\n> | configuration file\n> log_file_mode | 0640\n> | configuration file\n> log_line_prefix | %t-%c-\n> | configuration file\n> log_min_messages_internal | info\n> | configuration file\n> log_rotation_age | 1h\n> | configuration file\n> log_rotation_size | 100MB\n> | configuration file\n> log_timezone | UTC\n> | configuration file\n> logging_collector | on\n> | configuration file\n> maintenance_work_mem | 1GB\n> | configuration file\n> max_connections | 1900\n> | configuration file\n> max_parallel_workers_per_gather | 16\n> | configuration file\n> max_replication_slots | 10\n> | configuration file\n> max_stack_depth | 2MB\n> | environment variable\n> max_wal_senders | 10\n> | configuration file\n> max_wal_size | 26931MB\n> | configuration file\n> min_wal_size | 4GB\n> | configuration file\n> pg_qs.query_capture_mode | top\n> | configuration file\n> pgms_wait_sampling.query_capture_mode | all\n> | configuration file\n> pgstat_udp_port | 20224\n> | command line\n> port | 20224\n> | command line\n> random_page_cost | 1.1\n> | configuration file\n> shared_buffers | 64GB\n> | configuration file\n> ssl | on\n> | configuration file\n> ssl_ca_file | root.crt\n> | configuration file\n> superuser_reserved_connections | 5\n> | configuration file\n> TimeZone | EET\n> | configuration file\n> track_io_timing | on\n> | configuration file\n> wal_buffers | 128MB\n> | configuration file\n> wal_keep_segments | 25\n> | configuration file\n> wal_level | replica\n> | configuration file\n> work_mem | 16MB\n> | configuration file\n>\n>\nmax_connections | 1900\n\nit is really not good - there can be very high CPU overloading with a lot\nof 
others issues.\n\n\n\n> On Sat, Apr 3, 2021 at 8:59 PM aditya desai <admad123@gmail.com> wrote:\n>\n>> Hi Bruce,\n>> Please find the below output.force_parallel_mode if off now.\n>>\n>> aad_log_min_messages | warning\n>> | configuration file\n>> application_name | psql\n>> | client\n>> archive_command |\n>> c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" |\n>> configuration file\n>> archive_mode | on\n>> | configuration file\n>> archive_timeout | 15min\n>> | configuration file\n>> authentication_timeout | 30s\n>> | configuration file\n>> autovacuum_analyze_scale_factor | 0.05\n>> | configuration file\n>> autovacuum_naptime | 15s\n>> | configuration file\n>> autovacuum_vacuum_scale_factor | 0.05\n>> | configuration file\n>> bgwriter_delay | 20ms\n>> | configuration file\n>> bgwriter_flush_after | 512kB\n>> | configuration file\n>> bgwriter_lru_maxpages | 100\n>> | configuration file\n>> checkpoint_completion_target | 0.9\n>> | configuration file\n>> checkpoint_flush_after | 256kB\n>> | configuration file\n>> checkpoint_timeout | 5min\n>> | configuration file\n>> client_encoding | UTF8\n>> | client\n>> connection_ID |\n>> 5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n>> connection_PeerIP |\n>> fd40:4d4a:11:5067:6d11:500:a07:5144 | client\n>> connection_Vnet | on\n>> | client\n>> constraint_exclusion | partition\n>> | configuration file\n>> data_sync_retry | on\n>> | configuration file\n>> DateStyle | ISO, MDY\n>> | configuration file\n>> default_text_search_config | pg_catalog.english\n>> | configuration file\n>> dynamic_shared_memory_type | windows\n>> | configuration file\n>> effective_cache_size | 160GB\n>> | configuration file\n>> enable_seqscan | off\n>> | configuration file\n>> force_parallel_mode | off\n>> | configuration file\n>> from_collapse_limit | 15\n>> | configuration file\n>> full_page_writes | off\n>> | configuration file\n>> hot_standby | on\n>> | configuration file\n>> hot_standby_feedback | on\n>> | configuration file\n>> 
join_collapse_limit | 15\n>> | configuration file\n>> lc_messages | English_United States.1252\n>> | configuration file\n>> lc_monetary | English_United States.1252\n>> | configuration file\n>> lc_numeric | English_United States.1252\n>> | configuration file\n>> lc_time | English_United States.1252\n>> | configuration file\n>> listen_addresses | *\n>> | configuration file\n>> log_checkpoints | on\n>> | configuration file\n>> log_connections | on\n>> | configuration file\n>> log_destination | stderr\n>> | configuration file\n>> log_file_mode | 0640\n>> | configuration file\n>> log_line_prefix | %t-%c-\n>> | configuration file\n>> log_min_messages_internal | info\n>> | configuration file\n>> log_rotation_age | 1h\n>> | configuration file\n>> log_rotation_size | 100MB\n>> | configuration file\n>> log_timezone | UTC\n>> | configuration file\n>> logging_collector | on\n>> | configuration file\n>> maintenance_work_mem | 1GB\n>> | configuration file\n>> max_connections | 1900\n>> | configuration file\n>> max_parallel_workers_per_gather | 16\n>> | configuration file\n>> max_replication_slots | 10\n>> | configuration file\n>> max_stack_depth | 2MB\n>> | environment variable\n>> max_wal_senders | 10\n>> | configuration file\n>> max_wal_size | 26931MB\n>> | configuration file\n>> min_wal_size | 4GB\n>> | configuration file\n>> pg_qs.query_capture_mode | top\n>> | configuration file\n>> pgms_wait_sampling.query_capture_mode | all\n>> | configuration file\n>> pgstat_udp_port | 20224\n>> | command line\n>> port | 20224\n>> | command line\n>> random_page_cost | 1.1\n>> | configuration file\n>> shared_buffers | 64GB\n>> | configuration file\n>> ssl | on\n>> | configuration file\n>> ssl_ca_file | root.crt\n>> | configuration file\n>> superuser_reserved_connections | 5\n>> | configuration file\n>> TimeZone | EET\n>> | configuration file\n>> track_io_timing | on\n>> | configuration file\n>> wal_buffers | 128MB\n>> | configuration file\n>> wal_keep_segments | 25\n>> | configuration 
file\n>> wal_level | replica\n>> | configuration file\n>> work_mem | 16MB\n>> | configuration file\n>>\n>>\n>> Regards,\n>> Aditya.\n>>\n>>\n>>\n>> On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>>> On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\n>>> > Hi Michael,\n>>> > Thanks for your response.\n>>> > Is this table partitioned? - No\n>>> > How long ago was migration done? - 27th March 2021\n>>> > Has vacuum freeze and analyze of tables been done? - We ran vacuum\n>>> analyze.\n>>> > Was index created after populating data or reindexed after perhaps? -\n>>> Index\n>>> > was created after data load and reindex was executed on all tables\n>>> yesterday.\n>>> > Version is PostgreSQL-11\n>>>\n>>> FYI, the output of these queries will show u what changes have been made\n>>> to the configuration file:\n>>>\n>>> SELECT version();\n>>>\n>>> SELECT name, current_setting(name), source\n>>> FROM pg_settings\n>>> WHERE source NOT IN ('default', 'override');\n>>>\n>>> --\n>>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>>> EDB https://enterprisedb.com\n>>>\n>>> If only the physical world exists, free will is an illusion.\n>>>\n>>>\n\nso 3. 4. 2021 v 17:30 odesílatel aditya desai <admad123@gmail.com> napsal:adding the group. 
aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan | off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | 
configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file work_mem | 16MB | configuration filemax_connections | 1900 it is really not good - there can be very high CPU overloading with a lot of others issues.On Sat, Apr 3, 2021 at 8:59 PM aditya desai <admad123@gmail.com> wrote:Hi Bruce,Please find the below output.force_parallel_mode if off now. 
aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan | off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | 
configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file work_mem | 16MB | configuration fileRegards,Aditya.On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <bruce@momjian.us> wrote:On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\r\n> Hi Michael,\r\n> Thanks for your response.\r\n> Is this table partitioned? - No\r\n> How long ago was migration done? - 27th March 2021\r\n> Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\r\n> Was index created after populating data or reindexed after perhaps? 
- Index\r\n> was created after data load and reindex was executed on all tables yesterday.\r\n> Version is PostgreSQL-11\n\r\nFYI, the output of these queries will show u what changes have been made\r\nto the configuration file:\n\r\n SELECT version();\n\r\n SELECT name, current_setting(name), source\r\n FROM pg_settings\r\n WHERE source NOT IN ('default', 'override');\n\r\n-- \r\n Bruce Momjian <bruce@momjian.us> https://momjian.us\r\n EDB https://enterprisedb.com\n\r\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 3 Apr 2021 17:35:36 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 09:00:24PM +0530, aditya desai wrote:\n> adding the group.\n\nPerfect. That is a lot of non-default settings, so I would be concerned\nthere are other misconfigurations in there --- the group here might have\nsome tips.\n\n> �aad_log_min_messages� � � � � � � � � | warning� � � � � � � � � � � � � � � �\n> � � � � � � � � � � � | configuration file\n\nThe above is not a PG config variable.\n\n> �connection_ID� � � � � � � � � � � � �| 5b59f092-444c-49df-b5d6-a7a0028a7855�\n> � � � � � � � � � � � �| client\n> �connection_PeerIP� � � � � � � � � � �| fd40:4d4a:11:5067:6d11:500:a07:5144� �\n> � � � � � � � � � � � | client\n> �connection_Vnet� � � � � � � � � � � �| on� � � � � � � � � � � � � � � � � �\n\nUh, these are not a PG settings. You need to show us the output of\nversion() because this is not standard Postgres. A quick search\nsuggests this is a Microsoft version of Postgres. I will stop\ncommenting.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:38:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>> Yes, force_parallel_mode is on. Should we set it off?\n\n> Yes. I bet someone set it without reading our docs:\n\n> \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n> -->\tAllows the use of parallel queries for testing purposes even in cases\n> -->\twhere no performance benefit is expected.\n\n> We might need to clarify this sentence to be clearer it is _only_ for\n> testing.\n\nI wonder why it is listed under planner options at all, and not under\ndeveloper options.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Apr 2021 11:39:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > -->\tAllows the use of parallel queries for testing purposes even in cases\n> > -->\twhere no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 3 Apr 2021 10:41:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > -->\tAllows the use of parallel queries for testing purposes even in cases\n> > -->\twhere no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nI was kind of surprised by that myself since I was working on a blog\nentry about from_collapse_limit and join_collapse_limit. I think moving\nit makes sense.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:42:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 10:41:14AM -0500, Justin Pryzby wrote:\n> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > >> Yes, force_parallel_mode is on. Should we set it off?\n> > \n> > > Yes. I bet someone set it without reading our docs:\n> > \n> > > \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> > \n> > > -->\tAllows the use of parallel queries for testing purposes even in cases\n> > > -->\twhere no performance benefit is expected.\n> > \n> > > We might need to clarify this sentence to be clearer it is _only_ for\n> > > testing.\n> > \n> > I wonder why it is listed under planner options at all, and not under\n> > developer options.\n> \n> Because it's there to help DBAs catch errors in functions incorrectly marked as\n> parallel safe.\n\nUh, isn't that developer/debugging?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:43:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Thanks Justin. Will review all parameters and get back to you.\n\nOn Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > >> Yes, force_parallel_mode is on. Should we set it off?\n> >\n> > > Yes. I bet someone set it without reading our docs:\n> >\n> > >\n> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> >\n> > > --> Allows the use of parallel queries for testing purposes even in\n> cases\n> > > --> where no performance benefit is expected.\n> >\n> > > We might need to clarify this sentence to be clearer it is _only_ for\n> > > testing.\n> >\n> > I wonder why it is listed under planner options at all, and not under\n> > developer options.\n>\n> Because it's there to help DBAs catch errors in functions incorrectly\n> marked as\n> parallel safe.\n>\n> --\n> Justin\n>\n\nThanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. 
I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 21:14:33 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Hi Justin/Bruce/Pavel,\nThanks for your inputs. After setting force_parallel_mode=off Execution\ntime of same query was reduced to 1ms from 200 ms. Worked like a charm. We\nalso increased work_mem to 80=MB. Thanks again.\n\nRegards,\nAditya.\n\nOn Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:\n\n> Thanks Justin. Will review all parameters and get back to you.\n>\n> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>> > Bruce Momjian <bruce@momjian.us> writes:\n>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>> >\n>> > > Yes. I bet someone set it without reading our docs:\n>> >\n>> > >\n>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>> >\n>> > > --> Allows the use of parallel queries for testing purposes even in\n>> cases\n>> > > --> where no performance benefit is expected.\n>> >\n>> > > We might need to clarify this sentence to be clearer it is _only_ for\n>> > > testing.\n>> >\n>> > I wonder why it is listed under planner options at all, and not under\n>> > developer options.\n>>\n>> Because it's there to help DBAs catch errors in functions incorrectly\n>> marked as\n>> parallel safe.\n>>\n>> --\n>> Justin\n>>\n>\n\nHi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:Thanks Justin. 
Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 23:06:57 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com> napsal:\n\n> Hi Justin/Bruce/Pavel,\n> Thanks for your inputs. After setting force_parallel_mode=off Execution\n> time of same query was reduced to 1ms from 200 ms. Worked like a charm. We\n> also increased work_mem to 80=MB. Thanks\n>\n\nsuper.\n\nThe too big max_connection can cause a lot of problems. You should install\nand use pgbouncer or pgpool II.\n\nhttps://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n\nRegards\n\nPavel\n\n\n\n\n> again.\n>\n> Regards,\n> Aditya.\n>\n> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:\n>\n>> Thanks Justin. Will review all parameters and get back to you.\n>>\n>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com>\n>> wrote:\n>>\n>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>> > Bruce Momjian <bruce@momjian.us> writes:\n>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>> >\n>>> > > Yes. I bet someone set it without reading our docs:\n>>> >\n>>> > >\n>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>> >\n>>> > > --> Allows the use of parallel queries for testing purposes even in\n>>> cases\n>>> > > --> where no performance benefit is expected.\n>>> >\n>>> > > We might need to clarify this sentence to be clearer it is _only_ for\n>>> > > testing.\n>>> >\n>>> > I wonder why it is listed under planner options at all, and not under\n>>> > developer options.\n>>>\n>>> Because it's there to help DBAs catch errors in functions incorrectly\n>>> marked as\n>>> parallel safe.\n>>>\n>>> --\n>>> Justin\n>>>\n>>\n\nso 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. 
Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:Thanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 19:41:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Yes. I have made suggestions on connection pooling as well. Currently it is\nbeing done from Application side.\n\nOn Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com> napsal:\n>\n>> Hi Justin/Bruce/Pavel,\n>> Thanks for your inputs. After setting force_parallel_mode=off Execution\n>> time of same query was reduced to 1ms from 200 ms. Worked like a charm. We\n>> also increased work_mem to 80=MB. Thanks\n>>\n>\n> super.\n>\n> The too big max_connection can cause a lot of problems. You should install\n> and use pgbouncer or pgpool II.\n>\n>\n> https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>> again.\n>>\n>> Regards,\n>> Aditya.\n>>\n>> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:\n>>\n>>> Thanks Justin. Will review all parameters and get back to you.\n>>>\n>>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com>\n>>> wrote:\n>>>\n>>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>>> > Bruce Momjian <bruce@momjian.us> writes:\n>>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>>> >\n>>>> > > Yes. 
I bet someone set it without reading our docs:\n>>>> >\n>>>> > >\n>>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>>> >\n>>>> > > --> Allows the use of parallel queries for testing purposes even in\n>>>> cases\n>>>> > > --> where no performance benefit is expected.\n>>>> >\n>>>> > > We might need to clarify this sentence to be clearer it is _only_\n>>>> for\n>>>> > > testing.\n>>>> >\n>>>> > I wonder why it is listed under planner options at all, and not under\n>>>> > developer options.\n>>>>\n>>>> Because it's there to help DBAs catch errors in functions incorrectly\n>>>> marked as\n>>>> parallel safe.\n>>>>\n>>>> --\n>>>> Justin\n>>>>\n>>>\n\nYes. I have made suggestions on connection pooling as well. Currently it is being done from Application side.On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:Thanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. 
I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 23:15:47 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 19:45 odesílatel aditya desai <admad123@gmail.com> napsal:\n\n> Yes. I have made suggestions on connection pooling as well. Currently it\n> is being done from Application side.\n>\n\nIt is usual - but the application side pooling doesn't solve well\noverloading. The behaviour of the database is not linear. Usually opened\nconnections are not active. But any non active connection can be changed to\nan active connection (there is not any limit for active connections), and\nthen the performance can be very very slow. Good pooling and good setting\nof max_connections is protection against overloading. max_connection should\nbe 10-20 x CPU cores (for OLTP)\n\nRegards\n\nPavel\n\n\n\n\n> On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com>\n>> napsal:\n>>\n>>> Hi Justin/Bruce/Pavel,\n>>> Thanks for your inputs. After setting force_parallel_mode=off Execution\n>>> time of same query was reduced to 1ms from 200 ms. Worked like a charm. We\n>>> also increased work_mem to 80=MB. Thanks\n>>>\n>>\n>> super.\n>>\n>> The too big max_connection can cause a lot of problems. You should\n>> install and use pgbouncer or pgpool II.\n>>\n>>\n>> https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>> again.\n>>>\n>>> Regards,\n>>> Aditya.\n>>>\n>>> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:\n>>>\n>>>> Thanks Justin. Will review all parameters and get back to you.\n>>>>\n>>>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com>\n>>>> wrote:\n>>>>\n>>>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>>>> > Bruce Momjian <bruce@momjian.us> writes:\n>>>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>>>> >\n>>>>> > > Yes. 
I bet someone set it without reading our docs:\n>>>>> >\n>>>>> > >\n>>>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>>>> >\n>>>>> > > --> Allows the use of parallel queries for testing purposes even\n>>>>> in cases\n>>>>> > > --> where no performance benefit is expected.\n>>>>> >\n>>>>> > > We might need to clarify this sentence to be clearer it is _only_\n>>>>> for\n>>>>> > > testing.\n>>>>> >\n>>>>> > I wonder why it is listed under planner options at all, and not under\n>>>>> > developer options.\n>>>>>\n>>>>> Because it's there to help DBAs catch errors in functions incorrectly\n>>>>> marked as\n>>>>> parallel safe.\n>>>>>\n>>>>> --\n>>>>> Justin\n>>>>>\n>>>>\n\nso 3. 4. 2021 v 19:45 odesílatel aditya desai <admad123@gmail.com> napsal:Yes. I have made suggestions on connection pooling as well. Currently it is being done from Application side.It is usual - but the application side pooling doesn't solve well overloading. The behaviour of the database is not linear. Usually opened connections are not active. But any non active connection can be changed to an active connection (there is not any limit for active connections), and then the performance can be very very slow. Good pooling and good setting of max_connections is protection against overloading. max_connection should be 10-20 x CPU cores (for OLTP)RegardsPavel On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. 
https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:Thanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 19:50:38 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Forking this thread\nhttps://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n\nOn Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > > >> Yes, force_parallel_mode is on. Should we set it off?\n\nBruce Momjian <bruce@momjian.us> writes:\n> > > > Yes. I bet someone set it without reading our docs:\n...\n> > > > We might need to clarify this sentence to be clearer it is _only_ for\n> > > > testing.\n\nOn Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> > > I wonder why it is listed under planner options at all, and not under\n> > > developer options.\n\nOn Sat, Apr 3, 2021 at 10:41:14AM -0500, Justin Pryzby wrote:\n> > Because it's there to help DBAs catch errors in functions incorrectly marked as\n> > parallel safe.\n\nOn Sat, Apr 03, 2021 at 11:43:36AM -0400, Bruce Momjian wrote:\n> Uh, isn't that developer/debugging?\n\nI understood \"developer\" to mean someone who's debugging postgres itself, not\n(say) a function written using pl/pgsql. Like backtrace_functions,\npost_auth_delay, jit_profiling_support.\n\nBut I see that some \"dev\" options are more user-facing (for a sufficiently\nadvanced user):\nignore_checksum_failure, ignore_invalid_pages, zero_damaged_pages.\n\nAlso, I understood this to mean the \"category\" in pg_settings, but I guess\nwhat's important here is the absence of the GUC in the sample/template config\nfile. pg_settings.category and the sample headings it appears are intended to\nbe synchronized, but a few of them are out of sync. See attached.\n\n+1 to move this to \"developer\" options and remove it from the sample config:\n\n# - Other Planner Options -\n#force_parallel_mode = off\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 20:25:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "[PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Noted thanks!!\n\nOn Sun, Apr 4, 2021 at 4:19 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> ne 4. 4. 2021 v 12:39 odesílatel aditya desai <admad123@gmail.com> napsal:\n>\n>> Hi Pavel,\n>> Notes thanks. We have 64 core cpu and 320 GB RAM.\n>>\n>\n> ok - this is probably good for max thousand connections, maybe less (about\n> 6 hundred). Postgres doesn't perform well, when there are too many active\n> queries. Other databases have limits for active queries, and then use an\n> internal queue. But Postgres has nothing similar.\n>\n>\n>\n>\n>\n>\n>\n>> Regards,\n>> Aditya.\n>>\n>> On Sat, Apr 3, 2021 at 11:21 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> so 3. 4. 2021 v 19:45 odesílatel aditya desai <admad123@gmail.com>\n>>> napsal:\n>>>\n>>>> Yes. I have made suggestions on connection pooling as well. Currently\n>>>> it is being done from Application side.\n>>>>\n>>>\n>>> It is usual - but the application side pooling doesn't solve well\n>>> overloading. The behaviour of the database is not linear. Usually opened\n>>> connections are not active. But any non active connection can be changed to\n>>> an active connection (there is not any limit for active connections), and\n>>> then the performance can be very very slow. Good pooling and good setting\n>>> of max_connections is protection against overloading. max_connection should\n>>> be 10-20 x CPU cores (for OLTP)\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>\n>>>\n>>>> On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <pavel.stehule@gmail.com>\n>>>> wrote:\n>>>>\n>>>>>\n>>>>>\n>>>>> so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com>\n>>>>> napsal:\n>>>>>\n>>>>>> Hi Justin/Bruce/Pavel,\n>>>>>> Thanks for your inputs. After setting force_parallel_mode=off\n>>>>>> Execution time of same query was reduced to 1ms from 200 ms. Worked like a\n>>>>>> charm. We also increased work_mem to 80=MB. 
Thanks\n>>>>>>\n>>>>>\n>>>>> super.\n>>>>>\n>>>>> The too big max_connection can cause a lot of problems. You should\n>>>>> install and use pgbouncer or pgpool II.\n>>>>>\n>>>>>\n>>>>> https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n>>>>>\n>>>>> Regards\n>>>>>\n>>>>> Pavel\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>> again.\n>>>>>>\n>>>>>> Regards,\n>>>>>> Aditya.\n>>>>>>\n>>>>>> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com>\n>>>>>> wrote:\n>>>>>>\n>>>>>>> Thanks Justin. Will review all parameters and get back to you.\n>>>>>>>\n>>>>>>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com>\n>>>>>>> wrote:\n>>>>>>>\n>>>>>>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>>>>>>> > Bruce Momjian <bruce@momjian.us> writes:\n>>>>>>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>>>>>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>>>>>>> >\n>>>>>>>> > > Yes. I bet someone set it without reading our docs:\n>>>>>>>> >\n>>>>>>>> > >\n>>>>>>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>>>>>>> >\n>>>>>>>> > > --> Allows the use of parallel queries for testing purposes\n>>>>>>>> even in cases\n>>>>>>>> > > --> where no performance benefit is expected.\n>>>>>>>> >\n>>>>>>>> > > We might need to clarify this sentence to be clearer it is\n>>>>>>>> _only_ for\n>>>>>>>> > > testing.\n>>>>>>>> >\n>>>>>>>> > I wonder why it is listed under planner options at all, and not\n>>>>>>>> under\n>>>>>>>> > developer options.\n>>>>>>>>\n>>>>>>>> Because it's there to help DBAs catch errors in functions\n>>>>>>>> incorrectly marked as\n>>>>>>>> parallel safe.\n>>>>>>>>\n>>>>>>>> --\n>>>>>>>> Justin\n>>>>>>>>\n>>>>>>>\n\nNoted thanks!!On Sun, Apr 4, 2021 at 4:19 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:ne 4. 4. 2021 v 12:39 odesílatel aditya desai <admad123@gmail.com> napsal:Hi Pavel,Notes thanks. 
We have 64 core cpu and 320 GB RAM.ok - this is probably good for max thousand connections, maybe less (about 6 hundred). Postgres doesn't perform well, when there are too many active queries. Other databases have limits for active queries, and then use an internal queue. But Postgres has nothing similar. Regards,Aditya.On Sat, Apr 3, 2021 at 11:21 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:so 3. 4. 2021 v 19:45 odesílatel aditya desai <admad123@gmail.com> napsal:Yes. I have made suggestions on connection pooling as well. Currently it is being done from Application side.It is usual - but the application side pooling doesn't solve well overloading. The behaviour of the database is not linear. Usually opened connections are not active. But any non active connection can be changed to an active connection (there is not any limit for active connections), and then the performance can be very very slow. Good pooling and good setting of max_connections is protection against overloading. max_connection should be 10-20 x CPU cores (for OLTP)RegardsPavel On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:so 3. 4. 2021 v 19:37 odesílatel aditya desai <admad123@gmail.com> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <admad123@gmail.com> wrote:Thanks Justin. 
Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sun, 4 Apr 2021 16:40:33 +0530",
"msg_from": "aditya desai <admad123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "The previous patches accidentally included some unrelated changes.\n\n-- \nJustin",
"msg_date": "Thu, 8 Apr 2021 16:38:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 08:25:46PM -0500, Justin Pryzby wrote:\n> Forking this thread\n> https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n\nDidn't see this one, thanks for forking.\n\n> I understood \"developer\" to mean someone who's debugging postgres itself, not\n> (say) a function written using pl/pgsql. Like backtrace_functions,\n> post_auth_delay, jit_profiling_support.\n> \n> But I see that some \"dev\" options are more user-facing (for a sufficiently\n> advanced user):\n> ignore_checksum_failure, ignore_invalid_pages, zero_damaged_pages.\n> \n> Also, I understood this to mean the \"category\" in pg_settings, but I guess\n> what's important here is the absense of the GUC in the sample/template config\n> file. pg_settings.category and the sample headings it appears are intended to\n> be synchronized, but a few of them are out of sync. See attached.\n> \n> +1 to move this to \"developer\" options and remove it from the sample config:\n> \n> # - Other Planner Options -\n> #force_parallel_mode = off\n\n0001 has some changes to pg_config_manual.h related to valgrind and\nmemory randomization. You may want to remove that before posting a\npatch.\n\n- {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION,\n+ {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION_SENDING,\nI can get behind this change for clarity where it gets actively used.\n\n- {\"track_activity_query_size\", PGC_POSTMASTER, RESOURCES_MEM,\n+ {\"track_activity_query_size\", PGC_POSTMASTER, STATS_COLLECTOR,\nBut not this one, because it is a memory setting.\n\n- {\"force_parallel_mode\", PGC_USERSET, QUERY_TUNING_OTHER,\n+ {\"force_parallel_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\nAnd not this one either, as it is mainly a planner thing, like the\nother parameters in the same area.\n\nThe last change is related to log_autovacuum_min_duration, and I can\nget behind the argument you are making to group all log activity\nparameters together. 
Now, about this part:\n+#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and\n+ # their durations, > 0 logs only\n+ # actions running at least this number\n+ # of milliseconds.\nI think that we should clarify in the description that this is an\nautovacuum-only thing, say by appending a small sentence about the\nfact that it logs autovacuum activities, in a similar fashion to\nlog_temp_files. Moving the parameter out of the autovacuum section\nmakes it lose a bit of context.\n\n@@ -6903,6 +6903,7 @@ fetch_more_data_begin(AsyncRequest *areq)\n char sql[64];\n\n Assert(!fsstate->conn_state->pendingAreq);\n+ Assert(fsstate->conn);\nWhat's this diff doing here? \n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 10:50:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 10:50:53AM +0900, Michael Paquier wrote:\n> On Sat, Apr 03, 2021 at 08:25:46PM -0500, Justin Pryzby wrote:\n> > Forking this thread\n> > https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n> \n> Didn't see this one, thanks for forking.\n> \n> - {\"force_parallel_mode\", PGC_USERSET, QUERY_TUNING_OTHER,\n> + {\"force_parallel_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> And not this one either, as it is mainly a planner thing, like the\n> other parameters in the same area.\n\nThis is the main motive behind the patch.\n\nDeveloper options aren't shown in postgresql.conf.sample, which it seems like\nsometimes people read through quickly, setting a whole bunch of options that\nsound good, sometimes including this one. And in the best case they then ask\non -performance why their queries are slow and we tell them to turn it back off\nto fix their issues. This changes to no longer put it in .sample, and calling\nit a \"dev\" option seems to be the classification and mechanism by which to do\nthat.\n\n-- \nJustin\n\nps, Maybe you saw that I'd already resent without including the accidental junk\nhunks.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 22:17:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 10:17:18PM -0500, Justin Pryzby wrote:\n> On Fri, Apr 09, 2021 at 10:50:53AM +0900, Michael Paquier wrote:\n> > On Sat, Apr 03, 2021 at 08:25:46PM -0500, Justin Pryzby wrote:\n> > > Forking this thread\n> > > https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n> > \n> > Didn't see this one, thanks for forking.\n> > \n> > - {\"force_parallel_mode\", PGC_USERSET, QUERY_TUNING_OTHER,\n> > + {\"force_parallel_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > And not this one either, as it is mainly a planner thing, like the\n> > other parameters in the same area.\n> \n> This is the main motive behind the patch.\n> \n> Developer options aren't shown in postgresql.conf.sample, which it seems like\n> sometimes people read through quickly, setting a whole bunch of options that\n> sound good, sometimes including this one. And in the best case they then ask\n> on -performance why their queries are slow and we tell them to turn it back off\n> to fix their issues. This changes to no longer put it in .sample, and calling\n> it a \"dev\" option seems to be the classification and mechanism by which to do\n> that.\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 9 Apr 2021 07:39:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 07:39:28AM -0400, Bruce Momjian wrote:\n> On Thu, Apr 8, 2021 at 10:17:18PM -0500, Justin Pryzby wrote:\n>> This is the main motive behind the patch.\n>> \n>> Developer options aren't shown in postgresql.conf.sample, which it seems like\n>> sometimes people read through quickly, setting a whole bunch of options that\n>> sound good, sometimes including this one. And in the best case they then ask\n>> on -performance why their queries are slow and we tell them to turn it back off\n>> to fix their issues. This changes to no longer put it in .sample, and calling\n>> it a \"dev\" option seems to be the classification and mechanism by which to do\n>> that.\n> \n> +1\n\nHm. I can see the point you are making based on the bug report that\nhas led to this thread:\nhttps://www.postgresql.org/message-id/CAN0SRDFV=Fv0zXHCGbh7gh=MTfw05Xd1x7gjJrZs5qn-TEphOw@mail.gmail.com\n\nHowever, I'd like to think that we can do better than what's proposed\nin the patch. There are a couple of things to consider here:\n- Should the parameter be renamed to reflect that it should only be\nused for testing purposes?\n- Should we make more general the description of the developer options\nin the docs?\n\nI have applied the patch for log_autovacuum_min_duration for now, as\nthis one is clearly wrong.\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 14:01:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 09, 2021 at 07:39:28AM -0400, Bruce Momjian wrote:\n> > On Thu, Apr 8, 2021 at 10:17:18PM -0500, Justin Pryzby wrote:\n> >> This is the main motive behind the patch.\n> >>\n> >> Developer options aren't shown in postgresql.conf.sample, which it seems like\n> >> sometimes people read through quickly, setting a whole bunch of options that\n> >> sound good, sometimes including this one. And in the best case they then ask\n> >> on -performance why their queries are slow and we tell them to turn it back off\n> >> to fix their issues. This changes to no longer put it in .sample, and calling\n> >> it a \"dev\" option seems to be the classification and mechanism by which to do\n> >> that.\n> >\n> > +1\n>\n> Hm. I can see the point you are making based on the bug report that\n> has led to this thread:\n> https://www.postgresql.org/message-id/CAN0SRDFV=Fv0zXHCGbh7gh=MTfw05Xd1x7gjJrZs5qn-TEphOw@mail.gmail.com\n>\n> However, I'd like to think that we can do better than what's proposed\n> in the patch. There are a couple of things to consider here:\n> - Should the parameter be renamed to reflect that it should only be\n> used for testing purposes?\n> - Should we make more general the description of the developer options\n> in the docs?\n\nIMO, categorizing force_parallel_mode to DEVELOPER_OPTIONS and moving\nit to the \"Developer Options\" section in config.sgml looks\nappropriate. So, the v2-0004 patch proposed by Justin at [1] looks\ngood to me. If there are any other GUCs that are not meant to be used\nin production, IMO we could follow the same.\n\n[1] https://www.postgresql.org/message-id/20210408213812.GA18734%40telsasoft.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Apr 2021 10:58:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> However, I'd like to think that we can do better than what's proposed\n> in the patch. There are a couple of things to consider here:\n> - Should the parameter be renamed to reflect that it should only be\n> used for testing purposes?\n\n-1 to that part, because it would break a bunch of buildfarm animals'\nconfigurations. I doubt that any gain in clarity would be worth it.\n\n> - Should we make more general the description of the developer options\n> in the docs?\n\nPerhaps ... what did you have in mind?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 01:40:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 01:40:52AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> However, I'd like to think that we can do better than what's proposed\n>> in the patch. There are a couple of things to consider here:\n>> - Should the parameter be renamed to reflect that it should only be\n>> used for testing purposes?\n> \n> -1 to that part, because it would break a bunch of buildfarm animals'\n> configurations. I doubt that any gain in clarity would be worth it.\n\nOkay.\n\n>> - Should we make more general the description of the developer options\n>> in the docs?\n> \n> Perhaps ... what did you have in mind?\n\nThe first sentence of the page now says that:\n\"The following parameters are intended for work on the PostgreSQL\nsource code, and in some cases to assist with recovery of severely\ndamaged databases.\"\n\nThat does not stick with force_parallel_mode IMO. Maybe:\n\"The following parameters are intended for development work related to\nPostgreSQL. Some of them work on the PostgreSQL source code, some of\nthem can be used to control the run-time behavior of the server, and\nin some cases they can be used to assist with the recovery of severely\ndamaged databases.\"\n--\nMichael",
"msg_date": "Tue, 13 Apr 2021 16:34:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 04:34:23PM +0900, Michael Paquier wrote:\n> On Mon, Apr 12, 2021 at 01:40:52AM -0400, Tom Lane wrote:\n> >> - Should we make more general the description of the developer options\n> >> in the docs?\n> > \n> > Perhaps ... what did you have in mind?\n> \n> The first sentence of the page now says that:\n> \"The following parameters are intended for work on the PostgreSQL\n> source code, and in some cases to assist with recovery of severely\n> damaged databases.\"\n> \n> That does not stick with force_parallel_mode IMO. Maybe:\n\nGood point.\n\n-- \nJustin",
"msg_date": "Tue, 13 Apr 2021 07:31:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Apr 12, 2021 at 01:40:52AM -0400, Tom Lane wrote:\n>> Perhaps ... what did you have in mind?\n\n> The first sentence of the page now says that:\n> \"The following parameters are intended for work on the PostgreSQL\n> source code, and in some cases to assist with recovery of severely\n> damaged databases.\"\n\n> That does not stick with force_parallel_mode IMO. Maybe:\n> \"The following parameters are intended for development work related to\n> PostgreSQL. Some of them work on the PostgreSQL source code, some of\n> them can be used to control the run-time behavior of the server, and\n> in some cases they can be used to assist with the recovery of severely\n> damaged databases.\"\n\nI think that's overly wordy. Maybe\n\nThe following parameters are intended for developer testing, and\nshould never be enabled for production work. However, some of\nthem can be used to assist with the recovery of severely\ndamaged databases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 10:12:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 10:12:35AM -0400, Tom Lane wrote:\n> The following parameters are intended for developer testing, and\n> should never be enabled for production work. However, some of\n> them can be used to assist with the recovery of severely\n> damaged databases.\n\nOkay, that's fine by me.\n--\nMichael",
"msg_date": "Wed, 14 Apr 2021 13:54:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 07:31:39AM -0500, Justin Pryzby wrote:\n> Good point.\n\nThanks. I have used the wording that Tom has proposed upthread, added\none GUC_NOT_IN_SAMPLE that you forgot, and applied the\nforce_parallel_mode patch.\n--\nMichael",
"msg_date": "Wed, 14 Apr 2021 15:57:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 03:57:21PM +0900, Michael Paquier wrote:\n> On Tue, Apr 13, 2021 at 07:31:39AM -0500, Justin Pryzby wrote:\n> > Good point.\n> \n> Thanks. I have used the wording that Tom has proposed upthread, added\n> one GUC_NOT_IN_SAMPLE that you forgot, and applied the\n> force_parallel_mode patch.\n\nThanks. It just occurred to me to ask if we should backpatch it.\nThe goal is to avoid someone trying to use this as a performance option.\n\nIt's to their benefit and ours if they don't do that on v10-13 for the next 5\nyears, not just v14-17.\n\nThe patch seems to apply cleanly on v12 but cherry-pick needs help for other\nbranches...\n\n-- \nJustin",
"msg_date": "Fri, 23 Apr 2021 13:23:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 01:23:26PM -0500, Justin Pryzby wrote:\n> The patch seems to apply cleanly on v12 but cherry-pick needs help for other\n> branches...\n\nFWIW, this did not seem bad enough to me to require a back-patch.\nThis parameter got introduced in 2016 and this was the only report\nrelated to it for the last 5 years.\n--\nMichael",
"msg_date": "Sat, 24 Apr 2021 10:50:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Sat, Apr 24, 2021 at 10:50:21AM +0900, Michael Paquier wrote:\n> On Fri, Apr 23, 2021 at 01:23:26PM -0500, Justin Pryzby wrote:\n> > The patch seems to apply cleanly on v12 but cherry-pick needs help for other\n> > branches...\n> \n> FWIW, this did not seem bad enough to me to require a back-patch.\n> This parameter got introduced in 2016 and this was the only report\n> related to it for the last 5 years.\n\nNo, it's not the first report - although I'm surprised I wasn't able to find\nmore than these.\n\nhttps://www.postgresql.org/message-id/20190102164525.GU25379@telsasoft.com\nhttps://www.postgresql.org/message-id/CAKJS1f_Qi0iboCos3wu6QiAbdF-9FoK57wxzKbe2-WcesN4rFA%40mail.gmail.com\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 23 Apr 2021 21:57:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 10:50:53AM +0900, Michael Paquier wrote:\n> - {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION,\n> + {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION_SENDING,\n> I can get behind this change for clarity where it gets actively used.\n\nI'm not sure what you meant?\n\n...but, I realized just now that *zero* other GUCs use \"REPLICATION\".\nAnd the documentation puts it in 20.6.1. Sending Servers,\nso it still seems to me that this is correct to move this, too.\n\nhttps://www.postgresql.org/docs/devel/runtime-config-replication.html\n\nThen, I wonder if REPLICATION should be removed from guc_tables.h...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Apr 2021 23:24:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> ...but, I realized just now that *zero* other GUCs use \"REPLICATION\".\n> And the documentation puts it in 20.6.1. Sending Servers,\n> so it still seems to me that this is correct to move this, too.\n> https://www.postgresql.org/docs/devel/runtime-config-replication.html\n> Then, I wonder if REPLICATION should be removed from guc_tables.h...\n\nFor the archives' sake --- these things are now committed as part of\na55a98477. I'd forgotten this thread, and then rediscovered the same\ninconsistencies as Justin had while reviewing Bharath Rupireddy's patch\nfor bug #16997 [1].\n\nI think this thread can now be closed off as done. However, there\nare some open issues mentioned in the other thread, if anyone here\nwants to comment.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16997-ff16127f6e0d1390%40postgresql.org\n\n\n",
"msg_date": "Sat, 08 May 2021 12:39:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
}
] |
[
{
"msg_contents": "Hi,\n\nRight now dsm_create() has the following assertion:\n\t/* Unsafe in postmaster (and pointless in a stand-alone backend). */\n\tAssert(IsUnderPostmaster);\n\nI agree with the \"unsafe in postmaster\" bit. But I'm not convinced by\nthe \"pointless in a stand-alone backend\" part.\n\nWe're starting to build building blocks of the system using DSM now, and\nseveral of those seem like they should work the same whether in single\nuser mode or not.\n\nI just hit this when testing whether the shared memory stats support\nworks in single user mode: It does, as long as only a few stats exist,\nafter that this assertion is hit, removing the assertion solves that.\n\nToday the stats system doesn't work in single user mode, in a weird way:\n2021-04-03 16:01:39.872 PDT [3698737][not initialized][1/3:0] LOG: using stale statistics instead of current ones because stats collector is not responding\n2021-04-03 16:01:39.872 PDT [3698737][not initialized][1/3:0] STATEMENT: select * from pg_stat_all_tables;\nthen proceeding to return a lot of 0s and NULLs.\n\nI think that's not great: E.g. when hitting wraparound issues, checking\nsomething like pg_stat_all_tables.last_vacuum seems like an entirely\nreasonable thing to do.\n\nObviously not something we'd fix with the current stats collector\napproach, but I don't think it's something we should cargo cult forward\neither.\n\nTherefore I propose replacing the assertion with something along the\nlines of\nAssert(IsUnderPostmaster || !IsPostmasterEnvironment);\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 3 Apr 2021 16:29:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Allowing dsm allocations in single user mode"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at AttrDefaultFetch and saw that the variable found is never\nread.\n\nI think it can be removed. See attached patch.\n\nCheers",
"msg_date": "Sat, 3 Apr 2021 19:47:03 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "On Sun, Apr 4, 2021 at 8:14 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> I was looking at AttrDefaultFetch and saw that the variable found is never read.\n>\n> I think it can be removed. See attached patch.\n\n+1 to remove it and the patch LGTM. For reference, below is the commit\nthat removed last usage of \"found\" variable:\n\ncommit 16828d5c0273b4fe5f10f42588005f16b415b2d8\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Wed Mar 28 10:43:52 2018 +1030\n\n Fast ALTER TABLE ADD COLUMN with a non-NULL default\n\n Currently adding a column to a table with a non-NULL default results in\n a rewrite of the table. For large tables this can be both expensive and\n disruptive. This patch removes the need for the rewrite as long as the\n\n@@ -4063,10 +4125,6 @@ AttrDefaultFetch(Relation relation)\n\n systable_endscan(adscan);\n heap_close(adrel, AccessShareLock);\n-\n- if (found != ndef)\n- elog(WARNING, \"%d attrdef record(s) missing for rel %s\",\n- ndef - found, RelationGetRelationName(relation));\n }\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 4 Apr 2021 10:13:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "On Sun, Apr 04, 2021 at 10:13:26AM +0530, Bharath Rupireddy wrote:\n> +1 to remove it and the patch LGTM.\n\nIndeed, there is no point to keep that around. I'll go clean up that\nas you propose.\n--\nMichael",
"msg_date": "Sun, 4 Apr 2021 13:53:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Apr 04, 2021 at 10:13:26AM +0530, Bharath Rupireddy wrote:\n>> +1 to remove it and the patch LGTM.\n\n> Indeed, there is no point to keep that around. I'll go clean up that\n> as you propose.\n\nWhat Andrew was suggesting in the other thread might well result in\nputting it back. I'd hold off till that decision is made.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Apr 2021 01:00:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "Andrew:\nCan you chime in which direction to go ?\n\nOnce consensus is reached, I can provide a new patch, if needed.\n\nCheers\n\nOn Sat, Apr 3, 2021 at 9:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Apr 04, 2021 at 10:13:26AM +0530, Bharath Rupireddy wrote:\n> > +1 to remove it and the patch LGTM.\n>\n> Indeed, there is no point to keep that around. I'll go clean up that\n> as you propose.\n> --\n> Michael\n>\n\nAndrew:Can you chime in which direction to go ?Once consensus is reached, I can provide a new patch, if needed.CheersOn Sat, Apr 3, 2021 at 9:54 PM Michael Paquier <michael@paquier.xyz> wrote:On Sun, Apr 04, 2021 at 10:13:26AM +0530, Bharath Rupireddy wrote:\n> +1 to remove it and the patch LGTM.\n\nIndeed, there is no point to keep that around. I'll go clean up that\nas you propose.\n--\nMichael",
"msg_date": "Sun, 4 Apr 2021 06:39:44 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "\nOn 4/4/21 9:39 AM, Zhihong Yu wrote:\n> Andrew:\n> Can you chime in which direction to go ?\n>\n> Once consensus is reached, I can provide a new patch, if needed.\n>\n> Cheers\n>\n>\n\n[ please don't top-post ]\n\n\nI don't think we need a new patch. We'll clean this up one way or\nanother as part of the cleanup on the other thread.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 4 Apr 2021 11:13:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "Andrew:\nCan you let me know which thread you were referring to?\n\nI navigated the thread mentioned in your commit. It has been more than 3\nyears since the last response:\n\nhttps://www.postgresql.org/message-id/CAA8%3DA7-OPsGeazXxiojQNMus51odNZVn8EVNSoWZ2y9yRL%2BBvQ%40mail.gmail.com\n\nCan you let me know the current plan ?\n\nCheers\n\nOn Sun, Apr 4, 2021 at 8:13 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 4/4/21 9:39 AM, Zhihong Yu wrote:\n> > Andrew:\n> > Can you chime in which direction to go ?\n> >\n> > Once consensus is reached, I can provide a new patch, if needed.\n> >\n> > Cheers\n> >\n> >\n>\n> [ please don't top-post ]\n>\n>\n> I don't think we need a new patch. We'll clean this up one way or\n> another as part of the cleanup on the other thread.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nAndrew:Can you let me know which thread you were referring to?I navigated the thread mentioned in your commit. It has been more than 3 years since the last response:https://www.postgresql.org/message-id/CAA8%3DA7-OPsGeazXxiojQNMus51odNZVn8EVNSoWZ2y9yRL%2BBvQ%40mail.gmail.comCan you let me know the current plan ?CheersOn Sun, Apr 4, 2021 at 8:13 AM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 4/4/21 9:39 AM, Zhihong Yu wrote:\n> Andrew:\n> Can you chime in which direction to go ?\n>\n> Once consensus is reached, I can provide a new patch, if needed.\n>\n> Cheers\n>\n>\n\n[ please don't top-post ]\n\n\nI don't think we need a new patch. We'll clean this up one way or\nanother as part of the cleanup on the other thread.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 4 Apr 2021 08:47:43 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "I found the recent thread under 'ALTER TABLE ADD COLUMN fast default' which\nhasn't appeared in the message chain yet.\n\nI will watch that thread.\n\nCheers\n\n\n\nOn Sun, Apr 4, 2021 at 8:47 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Andrew:\n> Can you let me know which thread you were referring to?\n>\n> I navigated the thread mentioned in your commit. It has been more than 3\n> years since the last response:\n>\n>\n> https://www.postgresql.org/message-id/CAA8%3DA7-OPsGeazXxiojQNMus51odNZVn8EVNSoWZ2y9yRL%2BBvQ%40mail.gmail.com\n>\n> Can you let me know the current plan ?\n>\n> Cheers\n>\n> On Sun, Apr 4, 2021 at 8:13 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>>\n>> On 4/4/21 9:39 AM, Zhihong Yu wrote:\n>> > Andrew:\n>> > Can you chime in which direction to go ?\n>> >\n>> > Once consensus is reached, I can provide a new patch, if needed.\n>> >\n>> > Cheers\n>> >\n>> >\n>>\n>> [ please don't top-post ]\n>>\n>>\n>> I don't think we need a new patch. We'll clean this up one way or\n>> another as part of the cleanup on the other thread.\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>> --\n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n>>\n>>\n\nI found the recent thread under 'ALTER TABLE ADD COLUMN fast default' which hasn't appeared in the message chain yet.I will watch that thread.CheersOn Sun, Apr 4, 2021 at 8:47 AM Zhihong Yu <zyu@yugabyte.com> wrote:Andrew:Can you let me know which thread you were referring to?I navigated the thread mentioned in your commit. 
It has been more than 3 years since the last response:https://www.postgresql.org/message-id/CAA8%3DA7-OPsGeazXxiojQNMus51odNZVn8EVNSoWZ2y9yRL%2BBvQ%40mail.gmail.comCan you let me know the current plan ?CheersOn Sun, Apr 4, 2021 at 8:13 AM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 4/4/21 9:39 AM, Zhihong Yu wrote:\n> Andrew:\n> Can you chime in which direction to go ?\n>\n> Once consensus is reached, I can provide a new patch, if needed.\n>\n> Cheers\n>\n>\n\n[ please don't top-post ]\n\n\nI don't think we need a new patch. We'll clean this up one way or\nanother as part of the cleanup on the other thread.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 4 Apr 2021 09:00:24 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> Andrew:\n> Can you let me know which thread you were referring to?\n\nI assume he meant\nhttps://www.postgresql.org/message-id/flat/31e2e921-7002-4c27-59f5-51f08404c858%402ndQuadrant.com\n\nwhih was last added to just moments ago.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Apr 2021 12:07:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused variable found in AttrDefaultFetch"
}
] |
[
{
"msg_contents": "Hello,\n\njust a quick patch for a single-letter typo in a comment\nin src/backend/commands/collationcmds.c\n...\n * set of language+region combinations, whereas the latter only returns\n- * language+region combinations of they are distinct from the language's\n+ * language+region combinations if they are distinct from the language's\n * base collation. So there might not be a de-DE or en-GB, which \nwould be\n...\n(please see the attached patch).\n\n-- \nAnton Voloshin\nPostgres Professional: https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Sun, 4 Apr 2021 15:49:35 +0300",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "[PATCH] typo fix in collationcmds.c: \"if they are distinct\""
},
{
"msg_contents": "On Sun, Apr 04, 2021 at 03:49:35PM +0300, Anton Voloshin wrote:\n> just a quick patch for a single-letter typo in a comment\n> in src/backend/commands/collationcmds.c\n> ...\n\nThanks, fixed. This came from 51e225d.\n--\nMichael",
"msg_date": "Mon, 5 Apr 2021 11:20:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] typo fix in collationcmds.c: \"if they are distinct\""
}
] |
[
{
"msg_contents": "Hello,\n\nin src/backend/utils/adt/formatting.c, in icu_convert_case() I see:\n if (status == U_BUFFER_OVERFLOW_ERROR)\n {\n /* try again with adjusted length */\n pfree(*buff_dest);\n *buff_dest = palloc(len_dest * sizeof(**buff_dest));\n ...\n\nIs there any reason why this should not be repalloc()?\n\nIn case it should be, I've attached a corresponding patch.\n\n-- \nAnton Voloshin\nPostgres Professional: https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Sun, 4 Apr 2021 18:34:47 +0300",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "possible repalloc() in icu_convert_case()"
},
{
"msg_contents": "Anton Voloshin <a.voloshin@postgrespro.ru> writes:\n> in src/backend/utils/adt/formatting.c, in icu_convert_case() I see:\n> if (status == U_BUFFER_OVERFLOW_ERROR)\n> {\n> /* try again with adjusted length */\n> pfree(*buff_dest);\n> *buff_dest = palloc(len_dest * sizeof(**buff_dest));\n> ...\n\n> Is there any reason why this should not be repalloc()?\n\nrepalloc is likely to be more expensive, since it implies copying\ndata which isn't helpful here. I think this code is fine as-is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Apr 2021 12:20:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: possible repalloc() in icu_convert_case()"
},
{
"msg_contents": "On 04.04.2021 19:20, Tom Lane wrote:\n> repalloc is likely to be more expensive, since it implies copying\n> data which isn't helpful here. I think this code is fine as-is.\n\nOh, you are right, thanks. I did not think properly about copying in \nrepalloc.\n\n-- \nAnton Voloshin\nPostgres Professional: https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Sun, 4 Apr 2021 21:09:33 +0300",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: possible repalloc() in icu_convert_case()"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen AV worker items where introduced 4 years ago, i was suggested that\nit could be used for other things like cleaning the pending list of GIN\nindex when it reaches gin_pending_list_limit instead of making user\nvisible operation pay the price.\n\nThat never happened though. So, here is a little patch for that.\n\nShould I add an entry for this on next commitfest?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Mon, 5 Apr 2021 01:31:17 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:\n> When AV worker items where introduced 4 years ago, i was suggested that\n> it could be used for other things like cleaning the pending list of GIN\n> index when it reaches gin_pending_list_limit instead of making user\n> visible operation pay the price.\n> \n> That never happened though. So, here is a little patch for that.\n> \n> Should I add an entry for this on next commitfest?\n+1. It slipped through the cracks along the years. It is even suggested in the\ncurrent docs since the fast update support.\n\nhttps://www.postgresql.org/docs/current/gin-tips.html\n\n> To avoid fluctuations in observed response time, it's desirable to have\n> pending-list cleanup occur in the background (i.e., via autovacuum).\n\nCould you provide a link from the previous discussion?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:When AV worker items where introduced 4 years ago, i was suggested thatit could be used for other things like cleaning the pending list of GINindex when it reaches gin_pending_list_limit instead of making uservisible operation pay the price.That never happened though. So, here is a little patch for that.Should I add an entry for this on next commitfest?+1. It slipped through the cracks along the years. It is even suggested in thecurrent docs since the fast update support.https://www.postgresql.org/docs/current/gin-tips.html> To avoid fluctuations in observed response time, it's desirable to have> pending-list cleanup occur in the background (i.e., via autovacuum).Could you provide a link from the previous discussion?--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 05 Apr 2021 10:41:22 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Mon, Apr 05, 2021 at 10:41:22AM -0300, Euler Taveira wrote:\n> On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:\n> > When AV worker items where introduced 4 years ago, i was suggested that\n> > it could be used for other things like cleaning the pending list of GIN\n> > index when it reaches gin_pending_list_limit instead of making user\n> > visible operation pay the price.\n> > \n> > That never happened though. So, here is a little patch for that.\n> > \n> > Should I add an entry for this on next commitfest?\n> +1. It slipped through the cracks along the years. It is even suggested in the\n> current docs since the fast update support.\n> \n> https://www.postgresql.org/docs/current/gin-tips.html\n> \n\nInteresting, that comment maybe needs to be rewritten. I would go for\nremove completely the first paragraph under gin_pending_list_limit entry\n\n> \n> Could you provide a link from the previous discussion?\n> \n\nIt happened here:\nhttps://www.postgresql.org/message-id/flat/20170301045823.vneqdqkmsd4as4ds%40alvherre.pgsql\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Mon, 5 Apr 2021 09:47:41 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Mon, Apr 5, 2021, at 16:47, Jaime Casanova wrote:\n> On Mon, Apr 05, 2021 at 10:41:22AM -0300, Euler Taveira wrote:\n> > On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:\n> > > When AV worker items where introduced 4 years ago, i was suggested that\n> > > it could be used for other things like cleaning the pending list of GIN\n> > > index when it reaches gin_pending_list_limit instead of making user\n> > > visible operation pay the price.\n> > > \n> > > That never happened though. So, here is a little patch for that.\n> > > \n> > > Should I add an entry for this on next commitfest?\n> > +1. It slipped through the cracks along the years. It is even suggested in the\n> > current docs since the fast update support.\n> > \n> > https://www.postgresql.org/docs/current/gin-tips.html\n> > \n> \n> Interesting, that comment maybe needs to be rewritten. I would go for\n> remove completely the first paragraph under gin_pending_list_limit entry\n\nThanks for working on this patch.\n\nI found this thread searching for \"gin_pending_list_limit\" in pg hackers after reading an interesting article found via the front page of Hacker News: \"Debugging random slow writes in PostgreSQL\" (https://iamsafts.com/posts/postgres-gin-performance/).\n\nI thought it could be interesting to read about a real user story where this patch would be helpful.\n\nHacker News discussion: https://news.ycombinator.com/item?id=27152507\n\n/Joel\n\nOn Mon, Apr 5, 2021, at 16:47, Jaime Casanova wrote:On Mon, Apr 05, 2021 at 10:41:22AM -0300, Euler Taveira wrote:> On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:> > When AV worker items where introduced 4 years ago, i was suggested that> > it could be used for other things like cleaning the pending list of GIN> > index when it reaches gin_pending_list_limit instead of making user> > visible operation pay the price.> > > > That never happened though. 
So, here is a little patch for that.> > > > Should I add an entry for this on next commitfest?> +1. It slipped through the cracks along the years. It is even suggested in the> current docs since the fast update support.> > https://www.postgresql.org/docs/current/gin-tips.html> Interesting, that comment maybe needs to be rewritten. I would go forremove completely the first paragraph under gin_pending_list_limit entryThanks for working on this patch.I found this thread searching for \"gin_pending_list_limit\" in pg hackers after reading an interesting article found via the front page of Hacker News: \"Debugging random slow writes in PostgreSQL\" (https://iamsafts.com/posts/postgres-gin-performance/).I thought it could be interesting to read about a real user story where this patch would be helpful.Hacker News discussion: https://news.ycombinator.com/item?id=27152507/Joel",
"msg_date": "Sat, 15 May 2021 08:12:51 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Sat, May 15, 2021 at 08:12:51AM +0200, Joel Jacobson wrote:\n> On Mon, Apr 5, 2021, at 16:47, Jaime Casanova wrote:\n> > On Mon, Apr 05, 2021 at 10:41:22AM -0300, Euler Taveira wrote:\n> > > On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:\n> > > > When AV worker items where introduced 4 years ago, i was suggested that\n> > > > it could be used for other things like cleaning the pending list of GIN\n> > > > index when it reaches gin_pending_list_limit instead of making user\n> > > > visible operation pay the price.\n> > > > \n> > > > That never happened though. So, here is a little patch for that.\n> > > > \n> > > > Should I add an entry for this on next commitfest?\n> > > +1. It slipped through the cracks along the years. It is even suggested in the\n> > > current docs since the fast update support.\n> > > \n> > > https://www.postgresql.org/docs/current/gin-tips.html\n> > > \n> > \n> > Interesting, that comment maybe needs to be rewritten. I would go for\n> > remove completely the first paragraph under gin_pending_list_limit entry\n> \n> Thanks for working on this patch.\n> \n> I found this thread searching for \"gin_pending_list_limit\" in pg hackers after reading an interesting article found via the front page of Hacker News: \"Debugging random slow writes in PostgreSQL\" (https://iamsafts.com/posts/postgres-gin-performance/).\n> \n> I thought it could be interesting to read about a real user story where this patch would be helpful.\n> \n\nA customer here has 20+ GIN indexes in a big heavily used table and\nevery time one of the indexes reaches gin_pending_list_limit (because of\nan insert or update) a user feels the impact.\n\nSo, currently we have a cronjob running periodically and checking\npending list sizes to process the index before the limit get fired by an\nuser operation. 
While the index still is processed and locked the fact\nthat doesn't happen in the user face make the process less notorious and\nin the mind of users faster.\n\nThis will provide the same facility, the process will happen \"in the\nbackground\".\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Sat, 15 May 2021 01:42:12 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Sat, May 15, 2021, at 08:42, Jaime Casanova wrote:\n> A customer here has 20+ GIN indexes in a big heavily used table and\n> every time one of the indexes reaches gin_pending_list_limit (because of\n> an insert or update) a user feels the impact.\n> \n> So, currently we have a cronjob running periodically and checking\n> pending list sizes to process the index before the limit get fired by an\n> user operation. While the index still is processed and locked the fact\n> that doesn't happen in the user face make the process less notorious and\n> in the mind of users faster.\n> \n> This will provide the same facility, the process will happen \"in the\n> background\".\n\nSounds like a great improvement, many thanks.\n\n/Joel\n\nOn Sat, May 15, 2021, at 08:42, Jaime Casanova wrote:A customer here has 20+ GIN indexes in a big heavily used table andevery time one of the indexes reaches gin_pending_list_limit (because ofan insert or update) a user feels the impact.So, currently we have a cronjob running periodically and checkingpending list sizes to process the index before the limit get fired by anuser operation. While the index still is processed and locked the factthat doesn't happen in the user face make the process less notorious andin the mind of users faster.This will provide the same facility, the process will happen \"in thebackground\".Sounds like a great improvement, many thanks./Joel",
"msg_date": "Sat, 15 May 2021 08:51:46 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 3:31 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> Hi,\n>\n> When AV worker items where introduced 4 years ago, i was suggested that\n> it could be used for other things like cleaning the pending list of GIN\n> index when it reaches gin_pending_list_limit instead of making user\n> visible operation pay the price.\n>\n> That never happened though. So, here is a little patch for that.\n\nThank you for working on this.\n\nI like the idea of cleaning the GIN pending list using by autovacuum\nwork item. But with the patch, we request and skip the pending list\ncleanup if the pending list size exceeds gin_pending_list_limit during\ninsertion. But autovacuum work items are executed after an autovacuum\nruns. So if many insertions happen before executing the autovacuum\nwork item, we will end up greatly exceeding the threshold\n(gin_pending_list_limit) and registering the same work item again and\nagain. Maybe we need something like a soft limit and a hard limit?\nThat is, if the pending list size exceeds the soft limit, we request\nthe work item. OTOH, if it exceeds the hard limit\n(gin_pending_list_limit) we cleanup the pending list before insertion.\nWe might also need to have autovacuum work items ignore the work item\nif the same work item with the same arguments is already registered.\nIn addition to that, I think we should avoid the work item for\ncleaning the pending list from being executed if an autovacuum runs on\nthe gin index before executing the work item.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 17 May 2021 13:46:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
},
{
"msg_contents": "On Mon, May 17, 2021 at 01:46:37PM +0900, Masahiko Sawada wrote:\n> On Mon, Apr 5, 2021 at 3:31 PM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> >\n> > Hi,\n> >\n> > When AV worker items where introduced 4 years ago, i was suggested that\n> > it could be used for other things like cleaning the pending list of GIN\n> > index when it reaches gin_pending_list_limit instead of making user\n> > visible operation pay the price.\n> >\n> > That never happened though. So, here is a little patch for that.\n> \n> Thank you for working on this.\n> \n> I like the idea of cleaning the GIN pending list using by autovacuum\n> work item. But with the patch, we request and skip the pending list\n> cleanup if the pending list size exceeds gin_pending_list_limit during\n> insertion. But autovacuum work items are executed after an autovacuum\n> runs. So if many insertions happen before executing the autovacuum\n> work item, we will end up greatly exceeding the threshold\n> (gin_pending_list_limit) and registering the same work item again and\n> again. Maybe we need something like a soft limit and a hard limit?\n> That is, if the pending list size exceeds the soft limit, we request\n> the work item. OTOH, if it exceeds the hard limit\n> (gin_pending_list_limit) we cleanup the pending list before insertion.\n> We might also need to have autovacuum work items ignore the work item\n> if the same work item with the same arguments is already registered.\n> In addition to that, I think we should avoid the work item for\n> cleaning the pending list from being executed if an autovacuum runs on\n> the gin index before executing the work item.\n> \n\nThanks for your comments on this. I have been working on a rebased\nversion, but ENOTIME right now. \n\nWill mark this one as \"Returned with feedback\" and resubmit for\nnovember.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 8 Sep 2021 09:07:11 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: use AV worker items infrastructure for GIN pending list's cleanup"
}
] |
[
{
"msg_contents": "\n\nOn 2021/04/02 18:41, 蔡梦娟(玊于) wrote:\n> \n> Hi, all\n> \n> I want to know why call pgstat_reset_all function during recovery\n> process, under what circumstances the data will be invalid after recovery?\n\nIf my understanding is right, PITR is the case. Now, the stats files are\ngenerated as a one-time snapshot. This means that the stats counters \nsaved at last may not be valid for the specific point in time.\n\nFWIW, there was a related discussion([1]) although the behavior is not\nchanged yet.\n\n[1] https://www.postgresql.org/message-id/1416.1479760254%40sss.pgh.pa.us\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 19:20:20 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Why reset pgstat during recovery"
}
] |
[
{
"msg_contents": "Greetings,\n\nI'm Mohamed Mansour, a Data Engineer at IBM and a Master's degree student\nin the Computer Engineering Department - Faculty of Engineering - Cairo\nUniversity.\n\nI would like to apply to google summer of code to work on the following\nproject:\n\nDatabase Load Stress Benchmark\n\nKindly find my attached CV and tell me if there is a place for me related\nto this project or if you see another project that fits me better, then I\nwill build the proposal as soon as possible\n\nThanks in advance\n\n\n\n*Eng. Mohamed Mansour(EG+) 20 112 003 3329(EG+) 20 106 352 6328Data\nEngineer*\n\nGreetings,I'm Mohamed Mansour, a Data Engineer at IBM and a Master's degree student in the Computer Engineering Department - Faculty of Engineering - Cairo University.I would like to apply to google summer of code to work on the following project:Database Load Stress BenchmarkKindly find my attached CV and tell me if there is a place for me related to this project or if you see another project that fits me better, then I will build the proposal as soon as possibleThanks in advanceEng. Mohamed Mansour(EG+) 20 112 003 3329(EG+) 20 106 352 6328Data Engineer",
"msg_date": "Mon, 5 Apr 2021 23:46:36 +0200",
"msg_from": "Mohamed Mansour <mohamedmansour.mm317@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoc Applicant"
},
{
"msg_contents": "Kindly find the attached CV\n\n\n\n*Eng. Mohamed Mansour(EG+) 20 112 003 3329(EG+) 20 106 352 6328Data\nEngineer*\n\n\n---------- Forwarded message ---------\nFrom: Mohamed Mansour <mohamedmansour.mm317@gmail.com>\nDate: Mon, Apr 5, 2021 at 11:46 PM\nSubject: GSoc Applicant\nTo: <pgsql-hackers@lists.postgresql.org>\n\n\nGreetings,\n\nI'm Mohamed Mansour, a Data Engineer at IBM and a Master's degree student\nin the Computer Engineering Department - Faculty of Engineering - Cairo\nUniversity.\n\nI would like to apply to google summer of code to work on the following\nproject:\n\nDatabase Load Stress Benchmark\n\nKindly find my attached CV and tell me if there is a place for me related\nto this project or if you see another project that fits me better, then I\nwill build the proposal as soon as possible\n\nThanks in advance\n\n\n\n*Eng. Mohamed Mansour(EG+) 20 112 003 3329(EG+) 20 106 352 6328Data\nEngineer*",
"msg_date": "Mon, 5 Apr 2021 23:52:28 +0200",
"msg_from": "Mohamed Mansour <mohamedmansour.mm317@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fwd: GSoc Applicant"
},
{
"msg_contents": "Hello,\n\nOn Mon, Apr 05, 2021 at 11:46:36PM +0200, Mohamed Mansour wrote:\n> Greetings,\n> \n> I'm Mohamed Mansour, a Data Engineer at IBM and a Master's degree student\n> in the Computer Engineering Department - Faculty of Engineering - Cairo\n> University.\n> \n> I would like to apply to google summer of code to work on the following\n> project:\n> \n> Database Load Stress Benchmark\n> \n> Kindly find my attached CV and tell me if there is a place for me related\n> to this project or if you see another project that fits me better, then I\n> will build the proposal as soon as possible\n\nI don't see anything in your CV that suggest you couldn't be successful\nin this project, but we'd like you to put together a proposal for the\nprojects are you interested in.\n\nIf this is the only project that is most interesting to you, then please\ngo ahead and submit a draft for the mentors to review and we will offer\nour comments/suggestios on your proposal.\n\nRegards,\nMark\n\n\n",
"msg_date": "Tue, 6 Apr 2021 17:39:48 +0000",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GSoc Applicant"
},
{
"msg_contents": "Thank you, I will do that.\n\n\n\n\n*Eng. Mohamed Mansour(EG+) 20 112 003 3329(EG+) 20 106 352 6328Data\nEngineer*\n\n\nOn Tue, Apr 6, 2021 at 7:39 PM Mark Wong <markwkm@gmail.com> wrote:\n\n> Hello,\n>\n> On Mon, Apr 05, 2021 at 11:46:36PM +0200, Mohamed Mansour wrote:\n> > Greetings,\n> >\n> > I'm Mohamed Mansour, a Data Engineer at IBM and a Master's degree student\n> > in the Computer Engineering Department - Faculty of Engineering - Cairo\n> > University.\n> >\n> > I would like to apply to google summer of code to work on the following\n> > project:\n> >\n> > Database Load Stress Benchmark\n> >\n> > Kindly find my attached CV and tell me if there is a place for me related\n> > to this project or if you see another project that fits me better, then I\n> > will build the proposal as soon as possible\n>\n> I don't see anything in your CV that suggest you couldn't be successful\n> in this project, but we'd like you to put together a proposal for the\n> projects are you interested in.\n>\n> If this is the only project that is most interesting to you, then please\n> go ahead and submit a draft for the mentors to review and we will offer\n> our comments/suggestios on your proposal.\n>\n> Regards,\n> Mark\n>\n\nThank you, I will do that. Eng. 
Mohamed Mansour(EG+) 20 112 003 3329(EG+) 20 106 352 6328Data EngineerOn Tue, Apr 6, 2021 at 7:39 PM Mark Wong <markwkm@gmail.com> wrote:Hello,\n\nOn Mon, Apr 05, 2021 at 11:46:36PM +0200, Mohamed Mansour wrote:\n> Greetings,\n> \n> I'm Mohamed Mansour, a Data Engineer at IBM and a Master's degree student\n> in the Computer Engineering Department - Faculty of Engineering - Cairo\n> University.\n> \n> I would like to apply to google summer of code to work on the following\n> project:\n> \n> Database Load Stress Benchmark\n> \n> Kindly find my attached CV and tell me if there is a place for me related\n> to this project or if you see another project that fits me better, then I\n> will build the proposal as soon as possible\n\nI don't see anything in your CV that suggest you couldn't be successful\nin this project, but we'd like you to put together a proposal for the\nprojects are you interested in.\n\nIf this is the only project that is most interesting to you, then please\ngo ahead and submit a draft for the mentors to review and we will offer\nour comments/suggestios on your proposal.\n\nRegards,\nMark",
"msg_date": "Tue, 6 Apr 2021 20:02:51 +0200",
"msg_from": "Mohamed Mansour <mohamedmansour.mm317@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GSoc Applicant"
}
]
[
{
"msg_contents": "Hi\n\nI met a problem about trigger in logical replication.\n\nI created a trigger after inserting data at subscriber, but there is a warning in the log of subscriber when the trigger fired:\nWARNING: relcache reference leak: relation \"xxx\" not closed.\n\nExample of the procedure:\n------publisher------\ncreate table test (a int primary key);\ncreate publication pub for table test;\n\n------subscriber------\ncreate table test (a int primary key);\ncreate subscription sub connection 'dbname=postgres' publication pub;\ncreate function funcA() returns trigger as $$ begin return null; end; $$ language plpgsql;\ncreate trigger my_trig after insert or update or delete on test for each row execute procedure funcA();\nalter table test enable replica trigger my_trig;\n\n------publisher------\ninsert into test values (6);\n\nIt seems an issue about reference leak. Anyone can fix this?\n\nRegards,\nTang\n\n\n",
"msg_date": "Tue, 6 Apr 2021 01:04:51 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Table refer leak in logical replication"
},
{
"msg_contents": "> WARNING: relcache reference leak: relation \"xxx\" not closed.\n> \n> Example of the procedure:\n> ------publisher------\n> create table test (a int primary key);\n> create publication pub for table test;\n> \n> ------subscriber------\n> create table test (a int primary key);\n> create subscription sub connection 'dbname=postgres' publication pub;\n> create function funcA() returns trigger as $$ begin return null; end; $$ language\n> plpgsql; create trigger my_trig after insert or update or delete on test for each\n> row execute procedure funcA(); alter table test enable replica trigger my_trig;\n> \n> ------publisher------\n> insert into test values (6);\n> \n> It seems an issue about reference leak. Anyone can fix this?\n\nIt seems ExecGetTriggerResultRel will reopen the target table because it cannot find an existing one.\nStoring the opened table in estate->es_opened_result_relations seems solves the problem.\n\nAttaching a patch that fix this.\nBTW, it seems better to add a testcase for this ?\n\nBest regards,\nhouzj",
"msg_date": "Tue, 6 Apr 2021 01:15:33 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Table refer leak in logical replication"
},
{
"msg_contents": "> BTW, it seems better to add a testcase for this ?\n\nI think the test for it can be added in src/test/subscription/t/003_constraints.pl, which is like what in my patch.\n\nRegards,\nShi yu",
"msg_date": "Tue, 6 Apr 2021 01:49:15 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 10:15 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > WARNING: relcache reference leak: relation \"xxx\" not closed.\n> >\n> > Example of the procedure:\n> > ------publisher------\n> > create table test (a int primary key);\n> > create publication pub for table test;\n> >\n> > ------subscriber------\n> > create table test (a int primary key);\n> > create subscription sub connection 'dbname=postgres' publication pub;\n> > create function funcA() returns trigger as $$ begin return null; end; $$ language\n> > plpgsql; create trigger my_trig after insert or update or delete on test for each\n> > row execute procedure funcA(); alter table test enable replica trigger my_trig;\n> >\n> > ------publisher------\n> > insert into test values (6);\n> >\n> > It seems an issue about reference leak. Anyone can fix this?\n>\n> It seems ExecGetTriggerResultRel will reopen the target table because it cannot find an existing one.\n> Storing the opened table in estate->es_opened_result_relations seems solves the problem.\n\nIt seems like commit 1375422c is related to this bug. The commit\nintroduced a new function ExecInitResultRelation() that sets both\nestate->es_result_relations and estate->es_opened_result_relations. I\nthink it's better to use ExecInitResultRelation() rather than directly\nsetting estate->es_opened_result_relations. It might be better to do\nthat in create_estate_for_relation() though. Please find an attached\npatch.\n\nSince this issue happens on only HEAD and it seems an oversight of\ncommit 1375422c, I don't think regression tests for this are\nessential.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 6 Apr 2021 12:23:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "> > > insert into test values (6);\r\n> > >\r\n> > > It seems an issue about reference leak. Anyone can fix this?\r\n> >\r\n> > It seems ExecGetTriggerResultRel will reopen the target table because it\r\n> cannot find an existing one.\r\n> > Storing the opened table in estate->es_opened_result_relations seems\r\n> solves the problem.\r\n> \r\n> It seems like commit 1375422c is related to this bug. The commit introduced a\r\n> new function ExecInitResultRelation() that sets both\r\n> estate->es_result_relations and estate->es_opened_result_relations. I\r\n> think it's better to use ExecInitResultRelation() rather than directly setting\r\n> estate->es_opened_result_relations. It might be better to do that in\r\n> create_estate_for_relation() though. Please find an attached patch.\r\n> \r\n> Since this issue happens on only HEAD and it seems an oversight of commit\r\n> 1375422c, I don't think regression tests for this are essential.\r\n\r\nIt seems we can not only use ExecInitResultRelation.\r\nIn function ExecInitResultRelation, it will use ExecGetRangeTableRelation which\r\nwill also open the target table and store the rel in \"Estate->es_relations\".\r\nWe should call ExecCloseRangeTableRelations at the end of apply_handle_xxx to\r\nclose the rel in \"Estate->es_relations\".\r\n\r\nAttaching the patch with this change.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Tue, 6 Apr 2021 04:01:13 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 1:01 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> > > > insert into test values (6);\n> > > >\n> > > > It seems an issue about reference leak. Anyone can fix this?\n> > >\n> > > It seems ExecGetTriggerResultRel will reopen the target table because it\n> > cannot find an existing one.\n> > > Storing the opened table in estate->es_opened_result_relations seems\n> > solves the problem.\n> >\n> > It seems like commit 1375422c is related to this bug.\n\nRight, thanks for pointing this out.\n\n> The commit introduced a\n> > new function ExecInitResultRelation() that sets both\n> > estate->es_result_relations and estate->es_opened_result_relations. I\n> > think it's better to use ExecInitResultRelation() rather than directly setting\n> > estate->es_opened_result_relations. It might be better to do that in\n> > create_estate_for_relation() though. Please find an attached patch.\n\nAgree that ExecInitResultRelations() would be better.\n\n> > Since this issue happens on only HEAD and it seems an oversight of commit\n> > 1375422c, I don't think regression tests for this are essential.\n>\n> It seems we can not only use ExecInitResultRelation.\n> In function ExecInitResultRelation, it will use ExecGetRangeTableRelation which\n> will also open the target table and store the rel in \"Estate->es_relations\".\n> We should call ExecCloseRangeTableRelations at the end of apply_handle_xxx to\n> close the rel in \"Estate->es_relations\".\n\nRight, ExecCloseRangeTableRelations() was missing.\n\nI think it may be better to create a sibling function to\ncreate_estate_for_relation(), say, close_estate(EState *), that\nperforms the cleanup actions, including the firing of any AFTER\ntriggers. See attached updated patch to see what I mean.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Apr 2021 13:15:26 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 1:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Apr 6, 2021 at 1:01 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > > > > insert into test values (6);\n> > > > >\n> > > > > It seems an issue about reference leak. Anyone can fix this?\n> > > >\n> > > > It seems ExecGetTriggerResultRel will reopen the target table because it\n> > > cannot find an existing one.\n> > > > Storing the opened table in estate->es_opened_result_relations seems\n> > > solves the problem.\n> > >\n> > > It seems like commit 1375422c is related to this bug.\n>\n> Right, thanks for pointing this out.\n>\n> > The commit introduced a\n> > > new function ExecInitResultRelation() that sets both\n> > > estate->es_result_relations and estate->es_opened_result_relations. I\n> > > think it's better to use ExecInitResultRelation() rather than directly setting\n> > > estate->es_opened_result_relations. It might be better to do that in\n> > > create_estate_for_relation() though. Please find an attached patch.\n>\n> Agree that ExecInitResultRelations() would be better.\n>\n> > > Since this issue happens on only HEAD and it seems an oversight of commit\n> > > 1375422c, I don't think regression tests for this are essential.\n> >\n> > It seems we can not only use ExecInitResultRelation.\n> > In function ExecInitResultRelation, it will use ExecGetRangeTableRelation which\n> > will also open the target table and store the rel in \"Estate->es_relations\".\n> > We should call ExecCloseRangeTableRelations at the end of apply_handle_xxx to\n> > close the rel in \"Estate->es_relations\".\n>\n> Right, ExecCloseRangeTableRelations() was missing.\n\nYeah, I had missed it. Thank you for pointing out it.\n\n>\n> I think it may be better to create a sibling function to\n> create_estate_for_relation(), say, close_estate(EState *), that\n> performs the cleanup actions, including the firing of any AFTER\n> triggers. 
See attached updated patch to see what I mean.\n\nLooks good to me.\n\nBTW I found the following comments in create_estate_for_relation():\n\n/*\n * Executor state preparation for evaluation of constraint expressions,\n * indexes and triggers.\n *\n * This is based on similar code in copy.c\n */\nstatic EState *\ncreate_estate_for_relation(LogicalRepRelMapEntry *rel)\n\nIt seems like the comments meant the code around CopyFrom() and\nDoCopy() but it would no longer be true since copy.c has been split\ninto some files and I don't find similar code in copy.c. I think it’s\nbetter to remove the sentence rather than update the file name as this\ncomment doesn’t really informative and hard to track the updates. What\ndo you think?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 6 Apr 2021 13:56:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 1:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Tue, Apr 6, 2021 at 1:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Apr 6, 2021 at 1:01 PM houzj.fnst@fujitsu.com\n> > > The commit introduced a\n> > > > new function ExecInitResultRelation() that sets both\n> > > > estate->es_result_relations and estate->es_opened_result_relations. I\n> > > > think it's better to use ExecInitResultRelation() rather than directly setting\n> > > > estate->es_opened_result_relations. It might be better to do that in\n> > > > create_estate_for_relation() though. Please find an attached patch.\n> >\n> > Agree that ExecInitResultRelations() would be better.\n> >\n> > > > Since this issue happens on only HEAD and it seems an oversight of commit\n> > > > 1375422c, I don't think regression tests for this are essential.\n> > >\n> > > It seems we can not only use ExecInitResultRelation.\n> > > In function ExecInitResultRelation, it will use ExecGetRangeTableRelation which\n> > > will also open the target table and store the rel in \"Estate->es_relations\".\n> > > We should call ExecCloseRangeTableRelations at the end of apply_handle_xxx to\n> > > close the rel in \"Estate->es_relations\".\n> >\n> > Right, ExecCloseRangeTableRelations() was missing.\n>\n> Yeah, I had missed it. Thank you for pointing out it.\n> >\n> > I think it may be better to create a sibling function to\n> > create_estate_for_relation(), say, close_estate(EState *), that\n> > performs the cleanup actions, including the firing of any AFTER\n> > triggers. 
See attached updated patch to see what I mean.\n>\n> Looks good to me.\n>\n> BTW I found the following comments in create_estate_for_relation():\n>\n> /*\n> * Executor state preparation for evaluation of constraint expressions,\n> * indexes and triggers.\n> *\n> * This is based on similar code in copy.c\n> */\n> static EState *\n> create_estate_for_relation(LogicalRepRelMapEntry *rel)\n>\n> It seems like the comments meant the code around CopyFrom() and\n> DoCopy() but it would no longer be true since copy.c has been split\n> into some files and I don't find similar code in copy.c. I think it’s\n> better to remove the sentence rather than update the file name as this\n> comment doesn’t really informative and hard to track the updates. What\n> do you think?\n\nYeah, agree with simply removing that comment.\n\nWhile updating the patch to do so, it occurred to me that maybe we\ncould move the ExecInitResultRelation() call into\ncreate_estate_for_relation() too, in the spirit of removing\nduplicative code. See attached updated patch. Actually I remember\nproposing that as part of the commit you shared in your earlier email,\nbut for some reason it didn't end up in the commit. I now think maybe\nwe should do that after all.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Apr 2021 14:25:05 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tuesday, April 6, 2021 2:25 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n>While updating the patch to do so, it occurred to me that maybe we\r\n>could move the ExecInitResultRelation() call into\r\n>create_estate_for_relation() too, in the spirit of removing\r\n>duplicative code. See attached updated patch.\r\n\r\nThanks for your fixing. The code LGTM.\r\nMade a confirmation right now, the problem has been solved after applying your patch.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Tue, 6 Apr 2021 06:03:42 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Table refer leak in logical replication"
},
{
"msg_contents": "I added this as an Open Item.\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&type=revision&diff=35895&oldid=35890\nhttps://www.postgresql.org/message-id/flat/OS0PR01MB6113BA0A760C9964A4A0C5C2FB769%40OS0PR01MB6113.jpnprd01.prod.outlook.com#2fc410dff5cd27eea357ffc17fc174f2\n\nOn Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n> On Tue, Apr 6, 2021 at 1:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Tue, Apr 6, 2021 at 1:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Tue, Apr 6, 2021 at 1:01 PM houzj.fnst@fujitsu.com\n> > > > The commit introduced a\n> > > > > new function ExecInitResultRelation() that sets both\n> > > > > estate->es_result_relations and estate->es_opened_result_relations. I\n> > > > > think it's better to use ExecInitResultRelation() rather than directly setting\n> > > > > estate->es_opened_result_relations. It might be better to do that in\n> > > > > create_estate_for_relation() though. Please find an attached patch.\n> > >\n> > > Agree that ExecInitResultRelations() would be better.\n> > >\n> > > > > Since this issue happens on only HEAD and it seems an oversight of commit\n> > > > > 1375422c, I don't think regression tests for this are essential.\n> > > >\n> > > > It seems we can not only use ExecInitResultRelation.\n> > > > In function ExecInitResultRelation, it will use ExecGetRangeTableRelation which\n> > > > will also open the target table and store the rel in \"Estate->es_relations\".\n> > > > We should call ExecCloseRangeTableRelations at the end of apply_handle_xxx to\n> > > > close the rel in \"Estate->es_relations\".\n> > >\n> > > Right, ExecCloseRangeTableRelations() was missing.\n> >\n> > Yeah, I had missed it. 
Thank you for pointing out it.\n> > >\n> > > I think it may be better to create a sibling function to\n> > > create_estate_for_relation(), say, close_estate(EState *), that\n> > > performs the cleanup actions, including the firing of any AFTER\n> > > triggers. See attached updated patch to see what I mean.\n> >\n> > Looks good to me.\n> >\n> > BTW I found the following comments in create_estate_for_relation():\n> >\n> > /*\n> > * Executor state preparation for evaluation of constraint expressions,\n> > * indexes and triggers.\n> > *\n> > * This is based on similar code in copy.c\n> > */\n> > static EState *\n> > create_estate_for_relation(LogicalRepRelMapEntry *rel)\n> >\n> > It seems like the comments meant the code around CopyFrom() and\n> > DoCopy() but it would no longer be true since copy.c has been split\n> > into some files and I don't find similar code in copy.c. I think it’s\n> > better to remove the sentence rather than update the file name as this\n> > comment doesn’t really informative and hard to track the updates. What\n> > do you think?\n> \n> Yeah, agree with simply removing that comment.\n> \n> While updating the patch to do so, it occurred to me that maybe we\n> could move the ExecInitResultRelation() call into\n> create_estate_for_relation() too, in the spirit of removing\n> duplicative code. See attached updated patch. Actually I remember\n> proposing that as part of the commit you shared in your earlier email,\n> but for some reason it didn't end up in the commit. I now think maybe\n> we should do that after all.\n\n\n",
"msg_date": "Fri, 9 Apr 2021 20:39:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Sat, Apr 10, 2021 at 10:39 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I added this as an Open Item.\n\nThanks, Justin.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Apr 2021 10:29:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n> While updating the patch to do so, it occurred to me that maybe we\n> could move the ExecInitResultRelation() call into\n> create_estate_for_relation() too, in the spirit of removing\n> duplicative code. See attached updated patch. Actually I remember\n> proposing that as part of the commit you shared in your earlier email,\n> but for some reason it didn't end up in the commit. I now think maybe\n> we should do that after all.\n\nSo, I have been studying 1375422c and this thread. Using\nExecCloseRangeTableRelations() when cleaning up the executor state is\nreasonable as of the equivalent call to ExecInitRangeTable(). Now,\nthere is something that itched me quite a lot while reviewing the\nproposed patch: ExecCloseResultRelations() is missing, but other\ncode paths make an effort to mirror any calls of ExecInitRangeTable()\nwith it. Looking closer, I think that the worker code is actually\nconfused with the handling of the opening and closing of the indexes\nneeded by a result relation once you use that, because some code paths\nrelated to tuple routing for partitions may, or may not, need to do\nthat. However, once the code integrates with ExecInitRangeTable() and\nExecCloseResultRelations(), the index handlings could get much simpler\nto understand as the internal apply routines for INSERT/UPDATE/DELETE\nhave no need to think about the opening or closing them. Well,\nmostly, to be honest.\n\nThere are two exceptions when it comes the tuple routing for\npartitioned tables, one for INSERT/DELETE as the result relation found\nat the top of apply_handle_tuple_routing() can be used, and a second\nfor the UPDATE case as it is necessary to re-route the tuple to the\nnew partition, as it becomes necessary to open and close the indexes\nof the new partition relation where a tuple is sent to. 
I think that\nthere is a lot of room for a much better integration in terms of\nestate handling for this stuff with the executor, but that would be\ntoo invasive for v14 post feature freeze, and I am not sure what a\ngood design would be.\n\nRelated to that, I found confusing that the patch was passing down a\nresult relation from create_estate_for_relation() for something that's\njust stored in the executor state. Having a \"close\" routine that maps\nto the \"create\" routine gives a good vibe, though \"close\" is usually\nused in parallel of \"open\" in the PG code, and instead of \"free\" I\nhave found \"finish\" a better term.\n\nAnother thing, and I think that it is a good change, is that it is\nnecessary to push a snapshot in the worker process before creating the\nexecutor state as any index predicates of the result relation are\ngoing to need that when opened. My impression of the code of worker.c\nis that the amount of code duplication is quite high between the three\nDML code paths, with the update tuple routing logic being a space of\nimprovements on its own, and that it could gain in clarity with more\nrefactoring work around the executor states, but I am fine to leave\nthat as future work. That's too late for v14.\n\nAttached is v5 that I am finishing with. Much more could be done but\nI don't want to do something too invasive at this stage of the game.\nThere are a couple of extra relations in terms of relations opened for\na partitioned table within create_estate_for_relation() when\nredirecting to the tuple routing, but my guess is that this would be\nbetter in the long-term. We could bypass doing that when working on a\npartitioned table, but I really don't think that this code should be\nmade more confusing and that we had better apply\nExecCloseResultRelations() for all the relations faced. That's\nsimpler to reason about IMO.\n--\nMichael",
"msg_date": "Fri, 16 Apr 2021 15:24:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 11:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n>\n> Attached is v5 that I am finishing with. Much more could be done but\n> I don't want to do something too invasive at this stage of the game.\n> There are a couple of extra relations in terms of relations opened for\n> a partitioned table within create_estate_for_relation() when\n> redirecting to the tuple routing, but my guess is that this would be\n> better in the long-term.\n>\n\nHmm, I am not sure if it is a good idea to open indexes needlessly\nespecially when it is not done in the previous code.\n\n@@ -1766,8 +1771,11 @@ apply_handle_tuple_routing(ResultRelInfo *relinfo,\n slot_getallattrs(remoteslot);\n }\n MemoryContextSwitchTo(oldctx);\n+\n+ ExecOpenIndices(partrelinfo_new, false);\n apply_handle_insert_internal(partrelinfo_new, estate,\n remoteslot_part);\n+ ExecCloseIndices(partrelinfo_new);\n }\n\nIt seems you forgot to call open indexes before apply_handle_delete_internal.\n\nI am not sure if it is a good idea to do the refactoring related to\nindexes or other things to fix a minor bug in commit 1375422c. It\nmight be better to add a simple fix like what Hou-San has originally\nproposed [1] because even using ExecInitResultRelation might not be\nthe best thing as it is again trying to open a range table which we\nhave already opened in logicalrep_rel_open. OTOH, using\nExecInitResultRelation might encapsulate the assignment we are doing\noutside. In general, it seems doing bigger refactoring or changes\nmight lead to some bugs or unintended consequences, so if possible, we\ncan try such refactoring as a separate patch. 
One argument against the\nproposed refactoring could be that with the previous code, we were\ntrying to open the indexes just before it is required but with the new\npatch in some cases, they will be opened during the initial phase and\nfor other cases, they are opened just before they are required. It\nmight not necessarily be a bad idea to rearrange code like that but\nmaybe there is a better way to do that.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB571686F75FBDC219FF3DFF0D94769%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 17 Apr 2021 19:02:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Sat, Apr 17, 2021 at 07:02:00PM +0530, Amit Kapila wrote:\n> Hmm, I am not sure if it is a good idea to open indexes needlessly\n> especially when it is not done in the previous code.\n\nStudying the history of this code, I think that f1ac27b is to blame\nhere for making the code of the apply worker much messier than it was\nbefore. Before that, we were at a point where we had to rely on one\nsingle ResultRelInfo with all its indexes opened and closed before\ndoing the DML. After f1ac27b, the code becomes shaped so as the\noriginal ResultRelInfo may or may not be used depending on if this\ncode is working on a partitioned table or not. With an UPDATE, not\none but *two* ResultRelInfo may be used if a tuple is moved to a\ndifferent partition. I think that in the long term, and if we want to\nmake use of ExecInitResultRelation() in this area, we are going to\nneed to split the apply code in two parts, roughly (easier to say in\nwords than actually doing it, still):\n- Find out first which relations it is necessary to work on, and\ncreate a set of ResultRelInfo assigned to an executor state by\nExecInitResultRelation(), doing all the relation openings that are\nnecessary. 
The gets funky when attempting to do an update across\npartitions.\n- Do the actual DML, with all the relations already opened and ready\nfor use.\n\nOn top of that, I am really getting scared by the following, done in\nnot one but now two places:\n\t/*\n\t * The tuple to be updated could not be found.\n\t *\n\t * TODO what to do here, change the log level to LOG perhaps?\n\t */\n\telog(DEBUG1,\n\t\t \"logical replication did not find row for update \"\n\t\t \"in replication target relation \\\"%s\\\"\",\n\t\t RelationGetRelationName(localrel));\nThis already existed in once place before f1ac27b, but this got\nduplicated in a second area when applying the first update to a\npartition table.\n\nThe refactoring change done in 1375422c in worker.c without the tuple\nrouting pieces would be a piece of cake in terms of relations that\nrequire to be opened and closed, including the timings of each call\nbecause they could be unified in single code paths, and I also guess\nthat we would avoid leak issues really easily. If the tuple routing\ncode actually does not consider the case of moving a tuple across\npartitions, the level of difficulty to do an integration with\nExecInitResultRelation() is much more reduced, though it makes the\nfeature much less appealing as it becomes much more difficult to do\nsome data redistribution across a different set of partitions with\nlogical changes.\n\n> I am not sure if it is a good idea to do the refactoring related to\n> indexes or other things to fix a minor bug in commit 1375422c. It\n> might be better to add a simple fix like what Hou-San has originally\n> proposed [1] because even using ExecInitResultRelation might not be\n> the best thing as it is again trying to open a range table which we\n> have already opened in logicalrep_rel_open. OTOH, using\n> ExecInitResultRelation might encapsulate the assignment we are doing\n> outside.\n\nYeah, that would be nice to just rely on that. 
copyfrom.c does\nbasically what I guess we should try to copy a maximum here. With a\nproper cleanup of the executor state using ExecCloseResultRelations()\nonce we are done with the tuple apply.\n\n> In general, it seems doing bigger refactoring or changes\n> might lead to some bugs or unintended consequences, so if possible, we\n> can try such refactoring as a separate patch. One argument against the\n> proposed refactoring could be that with the previous code, we were\n> trying to open the indexes just before it is required but with the new\n> patch in some cases, they will be opened during the initial phase and\n> for other cases, they are opened just before they are required. It\n> might not necessarily be a bad idea to rearrange code like that but\n> maybe there is a better way to do that.\n> \n> [1] - https://www.postgresql.org/message-id/OS0PR01MB571686F75FBDC219FF3DFF0D94769%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nThis feels like piling one extra hack on top of what looks like an\nabuse of the executor calls to me, and the apply code is already full\nof it. True that we do that in ExecuteTruncateGuts() for allow\ntriggers to be fired, but I think that it would be better to avoid\nspread that to consolidate the trigger and execution code. FWIW, I\nwould be tempted to send back f1ac27b to the blackboard, then refactor\nthe code of the apply worker to use ExecInitResultRelation() so as we\nget more consistency with resource releases, simplifying the business\nwith indexes. Once the code is in a cleaner state, we could come back\ninto making an integration with partitioned tables into this code.\n--\nMichael",
"msg_date": "Mon, 19 Apr 2021 15:08:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 03:08:41PM +0900, Michael Paquier wrote:\n> FWIW, I\n> would be tempted to send back f1ac27b to the blackboard, then refactor\n> the code of the apply worker to use ExecInitResultRelation() so as we\n> get more consistency with resource releases, simplifying the business\n> with indexes. Once the code is in a cleaner state, we could come back\n> into making an integration with partitioned tables into this code.\n\nBut you cannot do that either as f1ac27bf got into 13..\n--\nMichael",
"msg_date": "Mon, 19 Apr 2021 15:12:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Sat, Apr 17, 2021 at 10:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Apr 16, 2021 at 11:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n> >\n> > Attached is v5 that I am finishing with. Much more could be done but\n> > I don't want to do something too invasive at this stage of the game.\n> > There are a couple of extra relations in terms of relations opened for\n> > a partitioned table within create_estate_for_relation() when\n> > redirecting to the tuple routing, but my guess is that this would be\n> > better in the long-term.\n> >\n>\n> Hmm, I am not sure if it is a good idea to open indexes needlessly\n> especially when it is not done in the previous code.\n>\n> @@ -1766,8 +1771,11 @@ apply_handle_tuple_routing(ResultRelInfo *relinfo,\n> slot_getallattrs(remoteslot);\n> }\n> MemoryContextSwitchTo(oldctx);\n> +\n> + ExecOpenIndices(partrelinfo_new, false);\n> apply_handle_insert_internal(partrelinfo_new, estate,\n> remoteslot_part);\n> + ExecCloseIndices(partrelinfo_new);\n> }\n>\n> It seems you forgot to call open indexes before apply_handle_delete_internal.\n>\n> I am not sure if it is a good idea to do the refactoring related to\n> indexes or other things to fix a minor bug in commit 1375422c. It\n> might be better to add a simple fix like what Hou-San has originally\n> proposed [1] because even using ExecInitResultRelation might not be\n> the best thing as it is again trying to open a range table which we\n> have already opened in logicalrep_rel_open.\n\nFWIW, I agree with fixing this bug of 1375422c in as least scary\nmanner as possible. Hou-san proposed that we add the ResultRelInfo\nthat apply_handle_{insert|update|delete} initialize themselves to\nes_opened_result_relations. I would prefer that only\nExecInitResultRelation() add anything to es_opened_result_relations()\nto avoid future maintenance problems. 
Instead, a fix as simple as the\nHou-san's proposed fix would be to add a ExecCloseResultRelations()\ncall at the end of each of apply_handle_{insert|update|delete}. That\nwould fix the originally reported leak, because\nExecCloseResultRelations() has this:\n\n /* Close any relations that have been opened by\nExecGetTriggerResultRel(). */\n foreach(l, estate->es_trig_target_relations)\n {\n\nWe do end up with the reality though that trigger.c now opens the\nreplication target relation on its own (adding it to\nes_trig_target_relations) to fire its triggers.\n\nI am also not opposed to reviewing the changes of 1375422c in light of\nthese findings while we still have time. For example, it might\nperhaps be nice for ExecInitResultRelation to accept a Relation\npointer that the callers from copyfrom.c, worker.c can use to pass\ntheir locally opened Relation. In that case, ExecInitResultRelation()\nwould perform InitResultRelInfo() with that Relation pointer, instead\nof getting one using ExecGetRangeTableRelation() which is what causes\nthe Relation to be opened again.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:02:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 3:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n> > While updating the patch to do so, it occurred to me that maybe we\n> > could move the ExecInitResultRelation() call into\n> > create_estate_for_relation() too, in the spirit of removing\n> > duplicative code. See attached updated patch. Actually I remember\n> > proposing that as part of the commit you shared in your earlier email,\n> > but for some reason it didn't end up in the commit. I now think maybe\n> > we should do that after all.\n>\n> So, I have been studying 1375422c and this thread. Using\n> ExecCloseRangeTableRelations() when cleaning up the executor state is\n> reasonable as of the equivalent call to ExecInitRangeTable(). Now,\n> there is something that itched me quite a lot while reviewing the\n> proposed patch: ExecCloseResultRelations() is missing, but other\n> code paths make an effort to mirror any calls of ExecInitRangeTable()\n> with it. Looking closer, I think that the worker code is actually\n> confused with the handling of the opening and closing of the indexes\n> needed by a result relation once you use that, because some code paths\n> related to tuple routing for partitions may, or may not, need to do\n> that. However, once the code integrates with ExecInitRangeTable() and\n> ExecCloseResultRelations(), the index handlings could get much simpler\n> to understand as the internal apply routines for INSERT/UPDATE/DELETE\n> have no need to think about the opening or closing them. 
Well,\n> mostly, to be honest.\n\nTo bring up another point, if we were to follow the spirit of the\nrecent c5b7ba4e67a, whereby we moved ExecOpenIndices() from\nExecInitModifyTable() into ExecInsert() and ExecUpdate(), that is,\nfrom during the initialization phase of an INSERT/UPDATE to its actual\nexecution, we could even consider performing Exec{Open|Close}Indices()\nfor a replication target relation in\nExecSimpleRelation{Insert|Update}. The ExecOpenIndices() performed in\nworker.c is pointless in some cases, for example, when\nExecSimpleRelation{Insert|Update} end up skipping the insert/update of\na tuple due to BR triggers.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:21:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 1:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Apr 16, 2021 at 3:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n> > > While updating the patch to do so, it occurred to me that maybe we\n> > > could move the ExecInitResultRelation() call into\n> > > create_estate_for_relation() too, in the spirit of removing\n> > > duplicative code. See attached updated patch. Actually I remember\n> > > proposing that as part of the commit you shared in your earlier email,\n> > > but for some reason it didn't end up in the commit. I now think maybe\n> > > we should do that after all.\n> >\n> > So, I have been studying 1375422c and this thread. Using\n> > ExecCloseRangeTableRelations() when cleaning up the executor state is\n> > reasonable as of the equivalent call to ExecInitRangeTable(). Now,\n> > there is something that itched me quite a lot while reviewing the\n> > proposed patch: ExecCloseResultRelations() is missing, but other\n> > code paths make an effort to mirror any calls of ExecInitRangeTable()\n> > with it. Looking closer, I think that the worker code is actually\n> > confused with the handling of the opening and closing of the indexes\n> > needed by a result relation once you use that, because some code paths\n> > related to tuple routing for partitions may, or may not, need to do\n> > that. However, once the code integrates with ExecInitRangeTable() and\n> > ExecCloseResultRelations(), the index handlings could get much simpler\n> > to understand as the internal apply routines for INSERT/UPDATE/DELETE\n> > have no need to think about the opening or closing them. 
Well,\n> > mostly, to be honest.\n>\n> To bring up another point, if we were to follow the spirit of the\n> recent c5b7ba4e67a, whereby we moved ExecOpenIndices() from\n> ExecInitModifyTable() into ExecInsert() and ExecUpdate(), that is,\n> from during the initialization phase of an INSERT/UPDATE to its actual\n> execution, we could even consider performing Exec{Open|Close}Indices()\n> for a replication target relation in\n> ExecSimpleRelation{Insert|Update}. The ExecOpenIndices() performed in\n> worker.c is pointless in some cases, for example, when\n> ExecSimpleRelation{Insert|Update} end up skipping the insert/update of\n> a tuple due to BR triggers.\n>\n\nYeah, that is also worth considering and sounds like a good idea. But,\nas I mentioned before it might be better to consider this separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 14:02:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Sat, Apr 17, 2021 at 10:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Apr 16, 2021 at 11:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > On Tue, Apr 06, 2021 at 02:25:05PM +0900, Amit Langote wrote:\n> > >\n> > > Attached is v5 that I am finishing with. Much more could be done but\n> > > I don't want to do something too invasive at this stage of the game.\n> > > There are a couple of extra relations in terms of relations opened for\n> > > a partitioned table within create_estate_for_relation() when\n> > > redirecting to the tuple routing, but my guess is that this would be\n> > > better in the long-term.\n> > >\n> >\n> > Hmm, I am not sure if it is a good idea to open indexes needlessly\n> > especially when it is not done in the previous code.\n> >\n> > @@ -1766,8 +1771,11 @@ apply_handle_tuple_routing(ResultRelInfo *relinfo,\n> > slot_getallattrs(remoteslot);\n> > }\n> > MemoryContextSwitchTo(oldctx);\n> > +\n> > + ExecOpenIndices(partrelinfo_new, false);\n> > apply_handle_insert_internal(partrelinfo_new, estate,\n> > remoteslot_part);\n> > + ExecCloseIndices(partrelinfo_new);\n> > }\n> >\n> > It seems you forgot to call open indexes before apply_handle_delete_internal.\n> >\n> > I am not sure if it is a good idea to do the refactoring related to\n> > indexes or other things to fix a minor bug in commit 1375422c. It\n> > might be better to add a simple fix like what Hou-San has originally\n> > proposed [1] because even using ExecInitResultRelation might not be\n> > the best thing as it is again trying to open a range table which we\n> > have already opened in logicalrep_rel_open.\n>\n> FWIW, I agree with fixing this bug of 1375422c in as least scary\n> manner as possible. Hou-san proposed that we add the ResultRelInfo\n> that apply_handle_{insert|update|delete} initialize themselves to\n> es_opened_result_relations. 
I would prefer that only\n> ExecInitResultRelation() add anything to es_opened_result_relations()\n> to avoid future maintenance problems. Instead, a fix as simple as the\n> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> call at the end of each of apply_handle_{insert|update|delete}.\n>\n\nYeah, that will work too but might look a bit strange. BTW, how that\nis taken care of for ExecuteTruncateGuts? I mean we do add rels there\nlike Hou-San's patch without calling ExecCloseResultRelations, the\nrels are probably closed when we close the relation in worker.c but\nwhat about memory for the list?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 14:33:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 6:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Apr 17, 2021 at 10:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Fri, Apr 16, 2021 at 11:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > Attached is v5 that I am finishing with. Much more could be done but\n> > > > I don't want to do something too invasive at this stage of the game.\n> > > > There are a couple of extra relations in terms of relations opened for\n> > > > a partitioned table within create_estate_for_relation() when\n> > > > redirecting to the tuple routing, but my guess is that this would be\n> > > > better in the long-term.\n> > > >\n> > >\n> > > Hmm, I am not sure if it is a good idea to open indexes needlessly\n> > > especially when it is not done in the previous code.\n> > >\n> > > @@ -1766,8 +1771,11 @@ apply_handle_tuple_routing(ResultRelInfo *relinfo,\n> > > slot_getallattrs(remoteslot);\n> > > }\n> > > MemoryContextSwitchTo(oldctx);\n> > > +\n> > > + ExecOpenIndices(partrelinfo_new, false);\n> > > apply_handle_insert_internal(partrelinfo_new, estate,\n> > > remoteslot_part);\n> > > + ExecCloseIndices(partrelinfo_new);\n> > > }\n> > >\n> > > It seems you forgot to call open indexes before apply_handle_delete_internal.\n> > >\n> > > I am not sure if it is a good idea to do the refactoring related to\n> > > indexes or other things to fix a minor bug in commit 1375422c. It\n> > > might be better to add a simple fix like what Hou-San has originally\n> > > proposed [1] because even using ExecInitResultRelation might not be\n> > > the best thing as it is again trying to open a range table which we\n> > > have already opened in logicalrep_rel_open.\n> >\n> > FWIW, I agree with fixing this bug of 1375422c in as least scary\n> > manner as possible. 
Hou-san proposed that we add the ResultRelInfo\n> > that apply_handle_{insert|update|delete} initialize themselves to\n> > es_opened_result_relations. I would prefer that only\n> > ExecInitResultRelation() add anything to es_opened_result_relations()\n> > to avoid future maintenance problems. Instead, a fix as simple as the\n> > Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> > call at the end of each of apply_handle_{insert|update|delete}.\n> >\n>\n> Yeah, that will work too but might look a bit strange. BTW, how that\n> is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> like Hou-San's patch without calling ExecCloseResultRelations, the\n> rels are probably closed when we close the relation in worker.c but\n> what about memory for the list?\n\nIt seems I had forgotten the code I had written myself. The following\nis from ExecuteTruncateGuts():\n\n /*\n * To fire triggers, we'll need an EState as well as a ResultRelInfo for\n * each relation. We don't need to call ExecOpenIndices, though.\n *\n * We put the ResultRelInfos in the es_opened_result_relations list, even\n * though we don't have a range table and don't populate the\n * es_result_relations array. That's a bit bogus, but it's enough to make\n * ExecGetTriggerResultRel() find them.\n */\n estate = CreateExecutorState();\n resultRelInfos = (ResultRelInfo *)\n palloc(list_length(rels) * sizeof(ResultRelInfo));\n resultRelInfo = resultRelInfos;\n foreach(cell, rels)\n {\n Relation rel = (Relation) lfirst(cell);\n\n InitResultRelInfo(resultRelInfo,\n rel,\n 0, /* dummy rangetable index */\n NULL,\n 0);\n estate->es_opened_result_relations =\n lappend(estate->es_opened_result_relations, resultRelInfo);\n resultRelInfo++;\n }\n\nSo, that is exactly what Hou-san's patch did. 
Although, the comment\ndoes admit that doing this is a bit bogus and maybe written (by Heikki\nIIRC) as a caution against repeating the pattern.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 18:27:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 02:33:10PM +0530, Amit Kapila wrote:\n> On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> FWIW, I agree with fixing this bug of 1375422c in as least scary\n>> manner as possible. Hou-san proposed that we add the ResultRelInfo\n>> that apply_handle_{insert|update|delete} initialize themselves to\n>> es_opened_result_relations. I would prefer that only\n>> ExecInitResultRelation() add anything to es_opened_result_relations()\n>> to avoid future maintenance problems. Instead, a fix as simple as the\n>> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n>> call at the end of each of apply_handle_{insert|update|delete}.\n> \n> Yeah, that will work too but might look a bit strange. BTW, how that\n> is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> like Hou-San's patch without calling ExecCloseResultRelations, the\n> rels are probably closed when we close the relation in worker.c but\n> what about memory for the list?\n\nTRUNCATE relies on FreeExecutorState() for that, no? FWIW, I'd rather\nagree to use what has been proposed with es_opened_result_relations\nlike TRUNCATE does rather than attempt to use ExecInitResultRelation()\ncombined with potentially asymmetric calls to\nExecCloseResultRelations().\n--\nMichael",
"msg_date": "Mon, 19 Apr 2021 18:32:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 3:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Apr 19, 2021 at 02:33:10PM +0530, Amit Kapila wrote:\n> > On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> FWIW, I agree with fixing this bug of 1375422c in as least scary\n> >> manner as possible. Hou-san proposed that we add the ResultRelInfo\n> >> that apply_handle_{insert|update|delete} initialize themselves to\n> >> es_opened_result_relations. I would prefer that only\n> >> ExecInitResultRelation() add anything to es_opened_result_relations()\n> >> to avoid future maintenance problems. Instead, a fix as simple as the\n> >> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> >> call at the end of each of apply_handle_{insert|update|delete}.\n> >\n> > Yeah, that will work too but might look a bit strange. BTW, how that\n> > is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> > like Hou-San's patch without calling ExecCloseResultRelations, the\n> > rels are probably closed when we close the relation in worker.c but\n> > what about memory for the list?\n>\n> TRUNCATE relies on FreeExecutorState() for that, no?\n>\n\nI am not sure about that because it doesn't seem to be allocated in\nes_query_cxt. Note, we switch to oldcontext in the\nCreateExecutorState.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 15:12:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 3:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 02:33:10PM +0530, Amit Kapila wrote:\n> > > On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >> FWIW, I agree with fixing this bug of 1375422c in as least scary\n> > >> manner as possible. Hou-san proposed that we add the ResultRelInfo\n> > >> that apply_handle_{insert|update|delete} initialize themselves to\n> > >> es_opened_result_relations. I would prefer that only\n> > >> ExecInitResultRelation() add anything to es_opened_result_relations()\n> > >> to avoid future maintenance problems. Instead, a fix as simple as the\n> > >> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> > >> call at the end of each of apply_handle_{insert|update|delete}.\n> > >\n> > > Yeah, that will work too but might look a bit strange. BTW, how that\n> > > is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> > > like Hou-San's patch without calling ExecCloseResultRelations, the\n> > > rels are probably closed when we close the relation in worker.c but\n> > > what about memory for the list?\n> >\n> > TRUNCATE relies on FreeExecutorState() for that, no?\n> >\n>\n> I am not sure about that because it doesn't seem to be allocated in\n> es_query_cxt. Note, we switch to oldcontext in the\n> CreateExecutorState.\n>\n\nI have just checked that the memory for the list is allocated in\nApplyMessageContext. So, it appears a memory leak to me unless I am\nmissing something.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 15:25:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 3:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Apr 19, 2021 at 02:33:10PM +0530, Amit Kapila wrote:\n> > > > On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > >> FWIW, I agree with fixing this bug of 1375422c in as least scary\n> > > >> manner as possible. Hou-san proposed that we add the ResultRelInfo\n> > > >> that apply_handle_{insert|update|delete} initialize themselves to\n> > > >> es_opened_result_relations. I would prefer that only\n> > > >> ExecInitResultRelation() add anything to es_opened_result_relations()\n> > > >> to avoid future maintenance problems. Instead, a fix as simple as the\n> > > >> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> > > >> call at the end of each of apply_handle_{insert|update|delete}.\n> > > >\n> > > > Yeah, that will work too but might look a bit strange. BTW, how that\n> > > > is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> > > > like Hou-San's patch without calling ExecCloseResultRelations, the\n> > > > rels are probably closed when we close the relation in worker.c but\n> > > > what about memory for the list?\n> > >\n> > > TRUNCATE relies on FreeExecutorState() for that, no?\n> > >\n> >\n> > I am not sure about that because it doesn't seem to be allocated in\n> > es_query_cxt. Note, we switch to oldcontext in the\n> > CreateExecutorState.\n> >\n>\n> I have just checked that the memory for the list is allocated in\n> ApplyMessageContext. 
So, it appears a memory leak to me unless I am\n> missing something.\n>\n\nIt seems like the memory will be freed after we apply the truncate\nbecause we reset the ApplyMessageContext after applying each message,\nso maybe we don't need to bother about it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 15:29:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Apr 19, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Apr 19, 2021 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Mon, Apr 19, 2021 at 3:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > On Mon, Apr 19, 2021 at 02:33:10PM +0530, Amit Kapila wrote:\n> > > > > On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > >> FWIW, I agree with fixing this bug of 1375422c in as least scary\n> > > > >> manner as possible. Hou-san proposed that we add the ResultRelInfo\n> > > > >> that apply_handle_{insert|update|delete} initialize themselves to\n> > > > >> es_opened_result_relations. I would prefer that only\n> > > > >> ExecInitResultRelation() add anything to es_opened_result_relations()\n> > > > >> to avoid future maintenance problems. Instead, a fix as simple as the\n> > > > >> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> > > > >> call at the end of each of apply_handle_{insert|update|delete}.\n> > > > >\n> > > > > Yeah, that will work too but might look a bit strange. BTW, how that\n> > > > > is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> > > > > like Hou-San's patch without calling ExecCloseResultRelations, the\n> > > > > rels are probably closed when we close the relation in worker.c but\n> > > > > what about memory for the list?\n> > > >\n> > > > TRUNCATE relies on FreeExecutorState() for that, no?\n> > > >\n> > >\n> > > I am not sure about that because it doesn't seem to be allocated in\n> > > es_query_cxt. Note, we switch to oldcontext in the\n> > > CreateExecutorState.\n> > >\n> >\n> > I have just checked that the memory for the list is allocated in\n> > ApplyMessageContext. 
So, it appears a memory leak to me unless I am\n> > missing something.\n> >\n>\n> It seems like the memory will be freed after we apply the truncate\n> because we reset the ApplyMessageContext after applying each message,\n> so maybe we don't need to bother about it.\n\nYes, ApplyMessageContext seems short-lived enough for this not to\nmatter. In the case of ExecuteTruncateGuts(), it's PortalContext, but\nwe don't seem to bother about leaking into that one too.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 20:09:05 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 08:09:05PM +0900, Amit Langote wrote:\n> On Mon, Apr 19, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> It seems like the memory will be freed after we apply the truncate\n>> because we reset the ApplyMessageContext after applying each message,\n>> so maybe we don't need to bother about it.\n> \n> Yes, ApplyMessageContext seems short-lived enough for this not to\n> matter. In the case of ExecuteTruncateGuts(), it's PortalContext, but\n> we don't seem to bother about leaking into that one too.\n\nSorry for the dumb question because I have not studied this part of\nthe code in any extensive way, but how many changes at maximum can be\napplied within a single ApplyMessageContext? I am wondering if we\ncould run into problems depending on the number of relations touched\nwithin a single message, or if there are any patches that could run\ninto problems because of this limitation, meaning that we may want to\nadd a proper set of comments within this area to document the\nlimitations attached to a DML operation applied.\n--\nMichael",
"msg_date": "Mon, 19 Apr 2021 20:50:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 5:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Apr 19, 2021 at 08:09:05PM +0900, Amit Langote wrote:\n> > On Mon, Apr 19, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> It seems like the memory will be freed after we apply the truncate\n> >> because we reset the ApplyMessageContext after applying each message,\n> >> so maybe we don't need to bother about it.\n> >\n> > Yes, ApplyMessageContext seems short-lived enough for this not to\n> > matter. In the case of ExecuteTruncateGuts(), it's PortalContext, but\n> > we don't seem to bother about leaking into that one too.\n>\n> Sorry for the dump question because I have not studied this part of\n> the code in any extensive way, but how many changes at maximum can be\n> applied within a single ApplyMessageContext?\n>\n\nIt is one change for Insert/Update/Delete. See apply_dispatch() for\ndifferent change messages.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:27:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 6:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 19, 2021 at 02:33:10PM +0530, Amit Kapila wrote:\n> > On Mon, Apr 19, 2021 at 12:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> FWIW, I agree with fixing this bug of 1375422c in as least scary\n> >> manner as possible. Hou-san proposed that we add the ResultRelInfo\n> >> that apply_handle_{insert|update|delete} initialize themselves to\n> >> es_opened_result_relations. I would prefer that only\n> >> ExecInitResultRelation() add anything to es_opened_result_relations()\n> >> to avoid future maintenance problems. Instead, a fix as simple as the\n> >> Hou-san's proposed fix would be to add a ExecCloseResultRelations()\n> >> call at the end of each of apply_handle_{insert|update|delete}.\n> >\n> > Yeah, that will work too but might look a bit strange. BTW, how that\n> > is taken care of for ExecuteTruncateGuts? I mean we do add rels there\n> > like Hou-San's patch without calling ExecCloseResultRelations, the\n> > rels are probably closed when we close the relation in worker.c but\n> > what about memory for the list?\n>\n> ... FWIW, I'd rather\n> agree to use what has been proposed with es_opened_result_relations\n> like TRUNCATE does rather than attempt to use ExecInitResultRelation()\n> combined with potentially asymmetric calls to\n> ExecCloseResultRelations().\n\nOkay, how about the attached then? I decided to go with just\nfinish_estate(), because we no longer have to do anything relation\nspecific there.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Apr 2021 21:44:05 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 09:44:05PM +0900, Amit Langote wrote:\n> Okay, how about the attached then?\n\ncreate_estate_for_relation() returns an extra resultRelInfo that's\nalso saved within es_opened_result_relations. Wouldn't it be simpler\nto take the first element from es_opened_result_relations instead?\nOkay, that's a nit and you are documenting things in a sufficient way,\nbut that just seemed duplicated to me.\n\n> I decided to go with just\n> finish_estate(), because we no longer have to do anything relation\n> specific there.\n\nFine by me.\n--\nMichael",
"msg_date": "Tue, 20 Apr 2021 14:09:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Tue, Apr 20, 2021 at 2:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 19, 2021 at 09:44:05PM +0900, Amit Langote wrote:\n> > Okay, how about the attached then?\n>\n> create_estate_for_relation() returns an extra resultRelInfo that's\n> also saved within es_opened_result_relations. Wouldn't is be simpler\n> to take the first element from es_opened_result_relations instead?\n> Okay, that's a nit and you are documenting things in a sufficient way,\n> but that just seemed duplicated to me.\n\nManipulating the contents of es_opened_result_relations directly in\nworker.c is admittedly a \"hack\", which I am reluctant to have other\nplaces participating in. As originally designed, that list is to\nspeed up ExecCloseResultRelations(), not as a place to access result\nrelations from. The result relations targeted over the course of\nexecution of a query (update/delete) or a (possibly multi-tuple in the\nfuture) replication apply operation will not be guaranteed to be added\nto the list in any particular order, so assuming where a result\nrelation of interest can be found in the list is bound to be unstable.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Apr 2021 14:48:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 02:48:35PM +0900, Amit Langote wrote:\n> Manipulating the contents of es_opened_result_relations directly in\n> worker.c is admittedly a \"hack\", which I am reluctant to have other\n> places participating in. As originally designed, that list is to\n> speed up ExecCloseResultRelations(), not as a place to access result\n> relations from. The result relations targeted over the course of\n> execution of a query (update/delete) or a (possibly multi-tuple in the\n> future) replication apply operation will not be guaranteed to be added\n> to the list in any particular order, so assuming where a result\n> relation of interest can be found in the list is bound to be unstable.\n\nI really hope that this code gets heavily reorganized before\nconsidering more features or more manipulations of dependencies within\nthe relations used for any apply operations. From what I can\nunderstand, I think that it would be really nicer and less bug prone\nto have a logic like COPY FROM, where we'd rely on a set of \nExecInitResultRelation() with one final ExecCloseResultRelations(),\nand as bonus it should be possible to not have to do any business with\nExecOpenIndices() or ExecCloseIndices() as part of worker.c. Anyway,\nI also understand that we do with what we have now in this code, so I\nam fine to live with this patch as of 14.\n--\nMichael",
"msg_date": "Tue, 20 Apr 2021 16:21:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 4:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 20, 2021 at 02:48:35PM +0900, Amit Langote wrote:\n> > Manipulating the contents of es_opened_result_relations directly in\n> > worker.c is admittedly a \"hack\", which I am reluctant to have other\n> > places participating in. As originally designed, that list is to\n> > speed up ExecCloseResultRelations(), not as a place to access result\n> > relations from. The result relations targeted over the course of\n> > execution of a query (update/delete) or a (possibly multi-tuple in the\n> > future) replication apply operation will not be guaranteed to be added\n> > to the list in any particular order, so assuming where a result\n> > relation of interest can be found in the list is bound to be unstable.\n>\n> I really hope that this code gets heavily reorganized before\n> considering more features or more manipulations of dependencies within\n> the relations used for any apply operations. From what I can\n> understand, I think that it would be really nicer and less bug prone\n> to have a logic like COPY FROM, where we'd rely on a set of\n> ExecInitResultRelation() with one final ExecCloseResultRelations(),\n> and as bonus it should be possible to not have to do any business with\n> ExecOpenIndices() or ExecCloseIndices() as part of worker.c.\n\nAs pointed out by Amit K, a problem with using\nExecInitResultRelation() in both copyfrom.c and worker.c is that it\neffectively ignores the Relation pointer that's already been acquired\nby other parts of the code. 
Upthread [1], I proposed that we add a\nRelation pointer argument to ExecInitResultRelation() so that the\ncallers that are not interested in setting up es_range_table, but only\nes_result_relations, can do so.\n\nBTW, I tend to agree that ExecCloseIndices() is better only done in\nExecCloseResultRelations(), but...\n\n> Anyway,\n> I also understand that we do with what we have now in this code, so I\n> am fine to live with this patch as of 14.\n\n...IIUC, Amit K prefers starting another thread for other improvements\non top of 1375422c.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqF%2Bq3MyGqLvGdC%2BJk5Xx%3DJpwpR-m5moXN%2Baf-LC-RMvdw%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 20 Apr 2021 17:51:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "> > ... FWIW, I'd rather\r\n> > agree to use what has been proposed with es_opened_result_relations\r\n> > like TRUNCATE does rather than attempt to use ExecInitResultRelation()\r\n> > combined with potentially asymmetric calls to\r\n> > ExecCloseResultRelations().\r\n> \r\n> Okay, how about the attached then? I decided to go with just finish_estate(),\r\n> because we no longer have to do anything relation specific there.\r\n> \r\n\r\nI think the patch looks good.\r\nBut I noticed that there seems to be no testcase to test the [aftertrigger in subscriber] when using logical replication.\r\nAs we seem to have planned to do some further refactoring in the future, is it better to add one testcase to cover this code ?\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Tue, 20 Apr 2021 11:29:43 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 4:59 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > > ... FWIW, I'd rather\n> > > agree to use what has been proposed with es_opened_result_relations\n> > > like TRUNCATE does rather than attempt to use ExecInitResultRelation()\n> > > combined with potentially asymmetric calls to\n> > > ExecCloseResultRelations().\n> >\n> > Okay, how about the attached then? I decided to go with just finish_estate(),\n> > because we no longer have to do anything relation specific there.\n> >\n>\n> I think the patch looks good.\n> But I noticed that there seems to be no testcase to test the [aftertrigger in subscriber] when using logical replication.\n> As we seem to have planned to do some further refactoring in the future, is it better to add one testcase to cover this code ?\n>\n\n+1. I think it makes sense to add a test case especially because we\ndon't have any existing test in this area.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Apr 2021 18:20:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 06:20:03PM +0530, Amit Kapila wrote:\n> +1. I think it makes sense to add a test case especially because we\n> don't have any existing test in this area.\n\nYes, let's add something into 013_partition.pl within both\nsubscriber1 and subscriber2. This will not catch the relation\nleak, but it is better to make sure that the trigger is fired as we'd\nlike to expect. This will become helpful if this code gets refactored\nor changed in the future. What about adding an extra table inserted\ninto by the trigger itself? If I were to design that, I would insert\nthe following information that gets checked by a simple psql call once\nthe changes are applied in the subscriber: relation name, TG_WHEN,\nTG_OP and TG_LEVEL. So such a table would need at least 4 columns. \n--\nMichael",
"msg_date": "Wed, 21 Apr 2021 09:31:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 9:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 20, 2021 at 06:20:03PM +0530, Amit Kapila wrote:\n> > +1. I think it makes sense to add a test case especially because we\n> > don't have any existing test in this area.\n>\n> Yes, let's add something into 013_partition.pl within both\n> subscriber1 and subscriber2. This will not catch the relation\n> leak, but it is better to make sure that the trigger is fired as we'd\n> like to expect. This will become helpful if this code gets refactored\n> or changed in the future. What about adding an extra table inserted\n> into by the trigger itself? If I were to design that, I would insert\n> the following information that gets checked by a simple psql call once\n> the changes are applied in the subscriber: relation name, TG_WHEN,\n> TG_OP and TG_LEVEL. So such a table would need at least 4 columns.\n\nAgree about adding tests along these lines. Will post in a bit.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Apr 2021 11:13:06 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 11:13:06AM +0900, Amit Langote wrote:\n> Agree about adding tests along these lines. Will post in a bit.\n\nThanks!\n--\nMichael",
"msg_date": "Wed, 21 Apr 2021 12:18:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 11:13 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Apr 21, 2021 at 9:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Apr 20, 2021 at 06:20:03PM +0530, Amit Kapila wrote:\n> > > +1. I think it makes sense to add a test case especially because we\n> > > don't have any existing test in this area.\n> >\n> > Yes, let's add something into 013_partition.pl within both\n> > subscriber1 and subscriber2. This will not catch the relation\n> > leak, but it is better to make sure that the trigger is fired as we'd\n> > like to expect. This will become helpful if this code gets refactored\n> > or changed in the future. What about adding an extra table inserted\n> > into by the trigger itself? If I were to design that, I would insert\n> > the following information that gets checked by a simple psql call once\n> > the changes are applied in the subscriber: relation name, TG_WHEN,\n> > TG_OP and TG_LEVEL. So such a table would need at least 4 columns.\n>\n> Agree about adding tests along these lines. Will post in a bit.\n\nHere you go.\n\nSo I had started last night by adding some tests for this in\n003_constraints.pl because there are already some replica BR trigger\ntests there. I like your suggestion to have some tests around\npartitions, so added some in 013_partition.pl too. Let me know what\nyou think.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Apr 2021 16:21:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 04:21:52PM +0900, Amit Langote wrote:\n> So I had started last night by adding some tests for this in\n> 003_constraints.pl because there are already some replica BR trigger\n> tests there. I like your suggestion to have some tests around\n> partitions, so added some in 013_partition.pl too. Let me know what\n> you think.\n\nThanks, cool!\n\n+ IF (NEW.bid = 4 AND NEW.id = OLD.id) THEN\n+ RETURN NEW;\n+ ELSE\n+ RETURN NULL;\n+ END IF;\nNit: the indentation is a bit off here.\n\n+CREATE FUNCTION log_tab_fk_ref_upd() RETURNS TRIGGER AS \\$\\$\n+BEGIN\n+ CREATE TABLE IF NOT EXISTS public.tab_fk_ref_op_log (tgtab text,\ntgop text, tgwhen text, tglevel text, oldbid int, newbid int);\n+ INSERT INTO public.tab_fk_ref_op_log SELECT TG_RELNAME, TG_OP,\nTG_WHEN, TG_LEVEL, OLD.bid, NEW.bid;\n+ RETURN NULL;\n+END;\nLet's use only one function here, then you can just do something like\nthat and use NEW and OLD as you wish conditionally:\nIF (TG_OP = 'INSERT') THEN\n INSERT INTO tab_fk_ref_op_log blah;\nELSIF (TG_OP = 'UPDATE') THEN\n INSERT INTO tab_fk_ref_op_log blah_;\nEND IF;\n\nThe same remark applies to the two files where the tests have been\nintroduced.\n\nWhy don't you create the table beforehand rather than making it part\nof the trigger function?\n\n+CREATE TRIGGER tab_fk_ref_log_ins_after_trg\n[...]\n+CREATE TRIGGER tab_fk_ref_log_upd_after_trg\nNo need for two triggers either once there is only one function.\n\n+ \"SELECT * FROM tab_fk_ref_op_log ORDER BY tgop, newbid;\");\nI would add tgtab and tgwhen to the ORDER BY here, just to be on the\nsafe side, and apply the same rule everywhere. Your patch is already\nconsistent regarding that and help future debugging, that's good.\n--\nMichael",
"msg_date": "Wed, 21 Apr 2021 19:38:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 7:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Apr 21, 2021 at 04:21:52PM +0900, Amit Langote wrote:\n> > So I had started last night by adding some tests for this in\n> > 003_constraints.pl because there are already some replica BR trigger\n> > tests there. I like your suggestion to have some tests around\n> > partitions, so added some in 013_partition.pl too. Let me know what\n> > you think.\n>\n> Thanks, cool!\n\nThanks for looking.\n\n> + IF (NEW.bid = 4 AND NEW.id = OLD.id) THEN\n> + RETURN NEW;\n> + ELSE\n> + RETURN NULL;\n> + END IF;\n> Nit: the indentation is a bit off here.\n\nHmm, I checked that I used 4 spaces for indenting, but maybe you're\nconcerned that the whole thing is indented unnecessarily relative to\nthe parent ELSIF block?\n\n> +CREATE FUNCTION log_tab_fk_ref_upd() RETURNS TRIGGER AS \\$\\$\n> +BEGIN\n> + CREATE TABLE IF NOT EXISTS public.tab_fk_ref_op_log (tgtab text,\n> tgop text, tgwhen text, tglevel text, oldbid int, newbid int);\n> + INSERT INTO public.tab_fk_ref_op_log SELECT TG_RELNAME, TG_OP,\n> TG_WHEN, TG_LEVEL, OLD.bid, NEW.bid;\n> + RETURN NULL;\n> +END;\n> Let's use only one function here, then you can just do something like\n> that and use NEW and OLD as you wish conditionally:\n> IF (TG_OP = 'INSERT') THEN\n> INSERT INTO tab_fk_ref_op_log blah;\n> ELSIF (TG_OP = 'UPDATE') THEN\n> INSERT INTO tab_fk_ref_op_log blah_;\n> END IF;\n>\n> The same remark applies to the two files where the tests have been\n> introduced.\n\nThat's certainly better with fewer lines.\n\n> Why don't you create the table beforehand rather than making it part\n> of the trigger function?\n\nMakes sense too.\n\n> +CREATE TRIGGER tab_fk_ref_log_ins_after_trg\n> [...]\n> +CREATE TRIGGER tab_fk_ref_log_upd_after_trg\n> No need for two triggers either once there is only one function.\n\nRight.\n\n> + \"SELECT * FROM tab_fk_ref_op_log ORDER BY tgop, newbid;\");\n> I would add tgtab and tgwhen to the ORDER 
BY here, just to be on the\n> safe side, and apply the same rule everywhere. Your patch is already\n> consistent regarding that and help future debugging, that's good.\n\nOkay, done.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Apr 2021 21:58:10 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 09:58:10PM +0900, Amit Langote wrote:\n> Okay, done.\n\nSo, I have been working on that today, and tried to apply the full set\nbefore realizing when writing the commit message that this was a set\nof bullet points, and that this was too much for a single commit. The\ntests are a nice thing to have to improve the coverage related to\ntuple routing, but that these are not really mandatory for the sake of\nthe fix discussed here. So for now I have applied the main fix as of\nf3b141c to close the open item.\n\nNow.. Coming back to the tests.\n\n- RETURN NULL;\n+ IF (NEW.bid = 4 AND NEW.id = OLD.id) THEN\n+ RETURN NEW;\n+ ELSE\n+ RETURN NULL;\n+ END IF\nThis part added in test 003 is subtle. This is a tweak to make sure\nthat the second trigger, AFTER trigger added in this patch, that would\nbe fired after the already-existing BEFORE entry, gets its hands on\nthe NEW tuple values. I think that this makes the test more confusing\nthan it should, and that could cause unnecessary pain to understand\nwhat's going on here for a future reader. Anyway, what's actually\nthe coverage we gain with this extra trigger in 003? The tests of\nHEAD already make sure whether a trigger fires or not, so that seems\nsufficient in itself. 
I guess that we could replace the existing\nBEFORE trigger with something like what's proposed in this set to\ntrack precisely which operation happens, and when, on a relation with\na NEW and/or OLD set of tuples saved into this extra table, but the\ninterest looks limited for single relations.\n\nOn the other hand, the tests for partitions have much more value IMO,\nbut looking closely I think that we can do better:\n- Create triggers on more relations of the partition tree,\nparticularly to also check when a trigger does not fire.\n- Use a more generic name for tab1_2_op_log and its function\nlog_tab1_2_op(), say subs{1,2}_log_trigger_activity.\n- Create some extra BEFORE triggers perhaps?\n\nBy the way, I had an idea of a trick we could use to check if relations\ndo not leak: we could scan the logs for such patterns,\nsimilarly to what issues_sql_like() or connect_{fails,ok}() do\nalready, but that would mean increasing the log level and we don't do\nthat to ease the load of the nodes.\n--\nMichael",
"msg_date": "Thu, 22 Apr 2021 13:45:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 1:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Apr 21, 2021 at 09:58:10PM +0900, Amit Langote wrote:\n> > Okay, done.\n>\n> So, I have been working on that today, and tried to apply the full set\n> before realizing when writing the commit message that this was a set\n> of bullet points, and that this was too much for a single commit. The\n> tests are a nice thing to have to improve the coverage related to\n> tuple routing, but that these are not really mandatory for the sake of\n> the fix discussed here. So for now I have applied the main fix as of\n> f3b141c to close the open item.\n\nThanks for that.\n\n> Now.. Coming back to the tests.\n>\n> - RETURN NULL;\n> + IF (NEW.bid = 4 AND NEW.id = OLD.id) THEN\n> + RETURN NEW;\n> + ELSE\n> + RETURN NULL;\n> + END IF\n> This part added in test 003 is subtle. This is a tweak to make sure\n> that the second trigger, AFTER trigger added in this patch, that would\n> be fired after the already-existing BEFORE entry, gets its hands on\n> the NEW tuple values. I think that this makes the test more confusing\n> than it should, and that could cause unnecessary pain to understand\n> what's going on here for a future reader. Anyway, what's actually\n> the coverage we gain with this extra trigger in 003?\n\nNot much maybe. 
I am fine with dropping the changes made to 003 if\nthey are confusing, which I agree they can be.\n\n> On the other hand, the tests for partitions have much more value IMO,\n> but looking closely I think that we can do better:\n> - Create triggers on more relations of the partition tree,\n> particularly to also check when a trigger does not fire.\n\nIt seems you're suggesting to adopt 003's trigger firing tests for\npartitions in 013, but would we gain much by that?\n\n> - Use a more generic name for tab1_2_op_log and its function\n> log_tab1_2_op(), say subs{1,2}_log_trigger_activity.\n\nSure, done.\n\n> - Create some extra BEFORE triggers perhaps?\n\nAgain, that seems separate from what we're trying to do here. AIUI,\nour aim here is to expand coverage for after triggers, and not really\nthat of the trigger functionality proper, because nothing seems broken\nabout it, but that of the trigger target relation opening/closing.\nISTM that's what you're talking about below...\n\n> By the way, I had an idea of a trick we could use to check if relations\n> do not leak: we could scan the logs for such patterns,\n\nIt would be interesting to be able to do something like that, but....\n\n> similarly to what issues_sql_like() or connect_{fails,ok}() do\n> already, but that would mean increasing the log level and we don't do\n> that to ease the load of the nodes.\n\n...sorry, I am not very familiar with our Perl testing infra. Is\nthere some script that already does something like this? For example,\nchecking in the logs generated by a \"node\" that no instance of a\ncertain WARNING is logged?\n\nMeanwhile, attached is the updated version containing some of the\nchanges mentioned above.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 23 Apr 2021 21:38:01 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 09:38:01PM +0900, Amit Langote wrote:\n> On Thu, Apr 22, 2021 at 1:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On the other hand, the tests for partitions have much more value IMO,\n>> but looking closely I think that we can do better:\n>> - Create triggers on more relations of the partition tree,\n>> particularly to also check when a trigger does not fire.\n> \n> It seems you're suggesting to adopt 003's trigger firing tests for\n> partitions in 013, but would we gain much by that?\n\nI was suggesting the opposite, aka apply the trigger design that you\nare introducing in 013 within 003. But that may not be necessary\neither :)\n\n>> - Use a more generic name for tab1_2_op_log and its function\n>> log_tab1_2_op(), say subs{1,2}_log_trigger_activity.\n> \n> Sure, done.\n\nAt the end I have used something simpler, as of\nsub{1,2}_trigger_activity and sub{1,2}_trigger_activity_func.\n\n>> - Create some extra BEFORE triggers perhaps?\n> \n> Again, that seems separate from what we're trying to do here. AIUI,\n> our aim here is to expand coverage for after triggers, and not really\n> that of the trigger functionality proper, because nothing seems broken\n> about it, but that of the trigger target relation opening/closing.\n> ISTM that's what you're talking about below...\n\nExactly. My review of the worker code is make me feeling that\nreworking this code could easily lead to some incorrect behavior, so\nI'd rather be careful with a couple of extra triggers created across\nthe partition tree, down to the partitions on which the triggers are\nfired actually.\n\n>> similarly to what issues_sql_like() or connect_{fails,ok}() do\n>> already, but that would mean increasing the log level and we don't do\n>> that to ease the load of the nodes.\n> \n> ...sorry, I am not very familiar with our Perl testing infra. Is\n> there some script that already does something like this? 
For example,\n> checking in the logs generated by a \"node\" that no instance of a\n> certain WARNING is logged?\n\nSee for example how we test for SQL patterns with the backend logs\nusing issues_sql_like(), or the more recent connect_ok() and\nconnect_fails(). These functions match regexps with the logs of the\nbackend for patterns. I am not sure if that's worth the extra cycles\nthough. I also feel that we may want a more centralized place as well\nto check such things, with more matching patterns, like at the end of\none run on a set of log files?\n\nSo, after a set of edits related to the format of the SQL queries, the\nobject names and some indentation (including a perltidy run), I have\napplied this patch to close the loop.\n--\nMichael",
"msg_date": "Mon, 26 Apr 2021 15:27:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 3:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Apr 23, 2021 at 09:38:01PM +0900, Amit Langote wrote:\n> > On Thu, Apr 22, 2021 at 1:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> On the other hand, the tests for partitions have much more value IMO,\n> >> but looking closely I think that we can do better:\n> >> - Create triggers on more relations of the partition tree,\n> >> particularly to also check when a trigger does not fire.\n> >\n> > It seems you're suggesting to adopt 003's trigger firing tests for\n> > partitions in 013, but would we gain much by that?\n>\n> I was suggesting the opposite, aka apply the trigger design that you\n> are introducing in 013 within 003. But that may not be necessary\n> either :)\n\nYou mentioned adding \"triggers on more relations of the partition\ntrees\", so I thought you were talking about 013; 003 doesn't test\npartitioning at all at the moment.\n\n> >> - Create some extra BEFORE triggers perhaps?\n> >\n> > Again, that seems separate from what we're trying to do here. AIUI,\n> > our aim here is to expand coverage for after triggers, and not really\n> > that of the trigger functionality proper, because nothing seems broken\n> > about it, but that of the trigger target relation opening/closing.\n> > ISTM that's what you're talking about below...\n>\n> Exactly. My review of the worker code is make me feeling that\n> reworking this code could easily lead to some incorrect behavior, so\n> I'd rather be careful with a couple of extra triggers created across\n> the partition tree, down to the partitions on which the triggers are\n> fired actually.\n\nAh, okay. You are talking about improving the coverage in general,\nNOT in the context of the fix committed in f3b141c482552.\n\nHowever, note that BEFORE triggers work the same no matter whether the\ntarget relation is directly mentioned in the apply message or found as\na result of tuple routing. 
That's because the routines in\nexecReplication.c (like ExecSimpleRelationInsert) and in\nnodeModifyTable.c (like ExecInsert) pass the ResultRelInfo *directly*\nto the BR trigger.c routines. So, there's no need for the complexity\nof the code and data structures for looking up trigger target\nrelations, such as what AFTER triggers need --\nExecGetTargetResultRel(). Given that, it's understandable to have\nmore coverage for the AFTER trigger case like that added by the patch\nyou just committed.\n\n> >> similarly to what issues_sql_like() or connect_{fails,ok}() do\n> >> already, but that would mean increasing the log level and we don't do\n> >> that to ease the load of the nodes.\n> >\n> > ...sorry, I am not very familiar with our Perl testing infra. Is\n> > there some script that already does something like this? For example,\n> > checking in the logs generated by a \"node\" that no instance of a\n> > certain WARNING is logged?\n>\n> See for example how we test for SQL patterns with the backend logs\n> using issues_sql_like(), or the more recent connect_ok() and\n> connect_fails(). These functions match regexps with the logs of the\n> backend for patterns.\n\nSo I assume those pattern-matching functions would catch, for example,\nrelation leak warnings in case they get introduced later, right? If\nso, I can see the merit of trying the idea.\n\n> I am not sure if that's worth the extra cycles\n> though. I also feel that we may want a more centralized place as well\n> to check such things, with more matching patterns, like at the end of\n> one run on a set of log files?\n\nI guess that makes sense.\n\n> So, after a set of edits related to the format of the SQL queries, the\n> object names and some indentation (including a perltidy run), I have\n> applied this patch to close the loop.\n\nThanks a lot.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Apr 2021 17:45:04 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table refer leak in logical replication"
}
] |
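The logging-trigger pattern the thread converged on (an audit table filled with TG_RELNAME, TG_OP, TG_WHEN and TG_LEVEL, plus a trigger enabled for the apply worker) can be sketched in SQL. This is a minimal illustrative sketch, not the exact objects committed to 013_partition.pl; the table, function, and trigger names are assumptions.

```sql
-- Audit table capturing which trigger fired, and how, on the subscriber.
CREATE TABLE sub1_trigger_activity (
    tgtab   text,   -- relation the trigger fired on
    tgop    text,   -- INSERT / UPDATE / DELETE
    tgwhen  text,   -- BEFORE / AFTER
    tglevel text    -- ROW / STATEMENT
);

CREATE FUNCTION sub1_trigger_activity_func() RETURNS trigger AS $$
BEGIN
    INSERT INTO sub1_trigger_activity
        SELECT TG_RELNAME, TG_OP, TG_WHEN, TG_LEVEL;
    RETURN NULL;  -- result is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sub1_trigger_activity_trg
    AFTER INSERT OR UPDATE ON tab1_2
    FOR EACH ROW EXECUTE FUNCTION sub1_trigger_activity_func();

-- Triggers do not fire during replication apply unless enabled for it:
ALTER TABLE tab1_2 ENABLE ALWAYS TRIGGER sub1_trigger_activity_trg;
```

A TAP test can then replicate some rows and check the audit table with a single psql call, e.g. `SELECT * FROM sub1_trigger_activity ORDER BY tgtab, tgwhen, tgop;`, which exercises the AFTER-trigger target-relation lookup that the fix discussed above touched.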
[
{
"msg_contents": "Hi everyone,\n\nJust noted that the default value of the autosummarize reloption for brin\nindexes is not documented, or at least not well documented.\n\nI added the default value in create_index.sgml where other options\nmention their own defaults, also made a little change in brin.sgml to \nmake it more clear that it is disabled by default (at least the way it \nwas written made no sense for me, but it could be that my english is \nnot that good).\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Mon, 5 Apr 2021 23:02:54 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "document that brin's autosummarize parameter is off by default"
},
{
"msg_contents": "Ten months ago, Jaime Casanova wrote:\n> Hi everyone,\n> \n> Just noted that the default value of the autosummarize reloption for brin\n> indexes is not documented, or at least not well documented.\n> \n> I added the default value in create_index.sgml where other options\n> mention their own defaults, also made a little change in brin.sgml to \n> make it more clear that it is disabled by default (at least the way it \n> was written made no sense for me, but it could be that my english is \n> not that good).\n\nIt seems like \"This last trigger\" in the current text is intended to mean \"The\nsecond condition\". Your change improves that.\n\nShould we also consider enabling autosummarize by default ?\nIt was added in v10, after BRIN was added in v9.5. For us, brin wasn't usable\nwithout autosummarize.\n\nAlso, note that vacuums are now triggered by insertions, since v13, so it might\nbe that autosummarize is needed much less.\n\n-- \nJustin\n\nPS. I hope there's a faster response to your pg_upgrade patch.\n\n\n",
"msg_date": "Thu, 24 Feb 2022 13:35:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: document that brin's autosummarize parameter is off by default"
}
] |
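The reloption under discussion can be exercised directly. A minimal SQL sketch (the table and index names are illustrative assumptions):

```sql
-- autosummarize defaults to off; it can be enabled per index:
CREATE TABLE brin_example (ts timestamptz, val int);
CREATE INDEX brin_example_ts_idx ON brin_example
    USING brin (ts) WITH (pages_per_range = 32, autosummarize = on);

-- With autosummarize = off (the default), newly filled page ranges are
-- only summarized by VACUUM, or by an explicit call such as:
SELECT brin_summarize_new_values('brin_example_ts_idx');
```

With `autosummarize = on`, insertions into a completed range instead queue a summarization request that autovacuum processes, which is why the default matters for insert-heavy tables that are rarely vacuumed.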
[
{
"msg_contents": "Hi,\n\nIn a recent thread ([1]) I found a performance regression of the\nfollowing statement\nDO $do$\n BEGIN FOR i IN 1 .. 10000 LOOP\n BEGIN\n EXECUTE $cf$CREATE OR REPLACE FUNCTION foo() RETURNS VOID LANGUAGE plpgsql AS $f$BEGIN frakbar; END;$f$;$cf$;\n EXCEPTION WHEN others THEN\n END;\n END LOOP;\nEND;$do$;\n\n13: 1617.798\n14-dev: 34088.505\n\nThe time in 14 is spent mostly below:\n- 94.58% 0.01% postgres postgres [.] CreateFunction\n - 94.57% CreateFunction\n - 94.49% ProcedureCreate\n - 90.95% record_object_address_dependencies\n - 90.93% recordMultipleDependencies\n - 89.65% isObjectPinned\n - 89.12% systable_getnext\n - 89.06% index_getnext_slot\n - 56.13% index_fetch_heap\n - 54.82% table_index_fetch_tuple\n + 53.79% heapam_index_fetch_tuple\n 0.07% heap_hot_search_buffer\n 0.01% ReleaseAndReadBuffer\n 0.01% LockBuffer\n 0.08% heapam_index_fetch_tuple\n\n\nAfter a bit of debugging I figured out that the direct failure lies with\n623a9ba79b. The problem is that subtransaction abort does not increment\nShmemVariableCache->xactCompletionCount. That's trivial to remedy (we\nalready lock ProcArrayLock during XidCacheRemoveRunningXids).\n\nWhat happens is that heap_hot_search_buffer()-> correctly recognizes the\naborted subtransaction's rows as dead, but they are not recognized as\n\"surely dead\". Which then leads to O(iterations^2) index->heap lookups,\nbecause the rows from previous iterations are never recognized as dead.\n\nI initially was a bit worried that this could be a correctness issue as\nwell. The more I thought about it the more confused I got. A\ntransaction's subtransaction abort should not actually change the\ncontents of a snapshot, right?\n\nSnapshot\nGetSnapshotData(Snapshot snapshot)\n...\n /*\n * We don't include our own XIDs (if any) in the snapshot. 
It\n * needs to be includeded in the xmin computation, but we did so\n * outside the loop.\n */\n if (pgxactoff == mypgxactoff)\n continue;\n\nThe sole reason for the behavioural difference is that the cached\nsnapshot's xmax is *lower* than a new snapshot's would be, because\nGetSnapshotData() initializes xmax as\nShmemVariableCache->latestCompletedXid - which\nXidCacheRemoveRunningXids() incremented, without incrementing\nShmemVariableCache->xactCompletionCount.\n\nWhich then causes HeapTupleSatisfiesMVCC to go down\n if (!HeapTupleHeaderXminCommitted(tuple))\n...\n else if (XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot))\n return false;\n else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple)))\n SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED,\n HeapTupleHeaderGetRawXmin(tuple));\n else\n {\n /* it must have aborted or crashed */\n SetHintBits(tuple, buffer, HEAP_XMIN_INVALID,\n InvalidTransactionId);\n return false;\n }\n\nthe \"return false\" for XidInMVCCSnapshot() rather than the\nSetHintBits(HEAP_XMIN_INVALID) path. Which then in turn causes\nHeapTupleIsSurelyDead() to not recognize the rows as surely dead.\n\nbool\nXidInMVCCSnapshot(TransactionId xid, Snapshot snapshot)\n{\n uint32 i;\n\n /*\n * Make a quick range check to eliminate most XIDs without looking at the\n * xip arrays. Note that this is OK even if we convert a subxact XID to\n * its parent below, because a subxact with XID < xmin has surely also got\n * a parent with XID < xmin, while one with XID >= xmax must belong to a\n * parent that was not yet committed at the time of this snapshot.\n */\n\n /* Any xid < xmin is not in-progress */\n if (TransactionIdPrecedes(xid, snapshot->xmin))\n return false;\n /* Any xid >= xmax is in-progress */\n if (TransactionIdFollowsOrEquals(xid, snapshot->xmax))\n return true;\n\n\nI *think* this issue doesn't lead to actually wrong query results. 
For\nHeapTupleSatisfiesMVCC purposes there's not much of a difference between\nan aborted transaction and one that's \"in progress\" according to the\nsnapshot (that's required - we don't check for aborts for xids in the\nsnapshot).\n\nIt is a bit disappointing that there - as far as I could find - are no\ntests for kill_prior_tuple actually working. I guess that lack, and that\nthere's no difference in query results, explains why nobody noticed the\nissue in the last ~9 months.\n\nSee the attached fix. I did include a test that verifies that the\nkill_prior_tuples optimization actually prevents the index from growing,\nwhen subtransactions are involved. I think it should be stable, even\nwith concurrent activity. But I'd welcome a look.\n\n\nI don't think that's why the issue exists, but I very much hate the\nXidCache* name. Makes it sound much less important than it is...\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20210317055718.v6qs3ltzrformqoa%40alap3.anarazel.de",
"msg_date": "Mon, 5 Apr 2021 21:35:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "subtransaction performance regression [kind of] due to snapshot\n caching"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The time in 14 is spent mostly below:\n> - 94.58% 0.01% postgres postgres [.] CreateFunction\n> - 94.57% CreateFunction\n> - 94.49% ProcedureCreate\n> - 90.95% record_object_address_dependencies\n> - 90.93% recordMultipleDependencies\n> - 89.65% isObjectPinned\n> - 89.12% systable_getnext\n> - 89.06% index_getnext_slot\n> - 56.13% index_fetch_heap\n> - 54.82% table_index_fetch_tuple\n> + 53.79% heapam_index_fetch_tuple\n> 0.07% heap_hot_search_buffer\n> 0.01% ReleaseAndReadBuffer\n> 0.01% LockBuffer\n> 0.08% heapam_index_fetch_tuple\n\nNot wanting to distract from your point about xactCompletionCount,\nbut ... I wonder if we could get away with defining \"isObjectPinned\"\nas \"is the OID <= 9999\" (and, in consequence, dropping explicit pin\nentries from pg_depend). I had not previously seen a case where the\ncost of looking into pg_depend for this info was this much of the\ntotal query runtime.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Apr 2021 00:47:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: subtransaction performance regression [kind of] due to snapshot\n caching"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-06 00:47:13 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The time in 14 is spent mostly below:\n> > - 94.58% 0.01% postgres postgres [.] CreateFunction\n> > - 94.57% CreateFunction\n> > - 94.49% ProcedureCreate\n> > - 90.95% record_object_address_dependencies\n> > - 90.93% recordMultipleDependencies\n> > - 89.65% isObjectPinned\n> > - 89.12% systable_getnext\n> > - 89.06% index_getnext_slot\n> > - 56.13% index_fetch_heap\n> > - 54.82% table_index_fetch_tuple\n> > + 53.79% heapam_index_fetch_tuple\n> > 0.07% heap_hot_search_buffer\n> > 0.01% ReleaseAndReadBuffer\n> > 0.01% LockBuffer\n> > 0.08% heapam_index_fetch_tuple\n> \n> Not wanting to distract from your point about xactCompletionCount,\n> but ... I wonder if we could get away with defining \"isObjectPinned\"\n> as \"is the OID <= 9999\" (and, in consequence, dropping explicit pin\n> entries from pg_depend). I had not previously seen a case where the\n> cost of looking into pg_depend for this info was this much of the\n> total query runtime.\n\nI had the same thought, and yes, I do think we should do that. 
I've seen\nit show up in non-buggy workloads too (not to that degree though).\n\nThe <= 9999 pg_depend entries are also just a substantial proportion of\nthe size of an empty database, making it worthwhile to remove <= 9999 entries:\n\nfreshly initdb'd empty cluster:\n\nVACUUM FULL pg_depend;\ndropme[926131][1]=# SELECT oid::regclass, pg_relation_size(oid) FROM pg_class WHERE relfilenode <> 0 ORDER BY 2 DESC LIMIT 10;\n┌─────────────────────────────────┬──────────────────┐\n│ oid │ pg_relation_size │\n├─────────────────────────────────┼──────────────────┤\n│ pg_depend │ 532480 │\n│ pg_toast.pg_toast_2618 │ 516096 │\n│ pg_collation │ 360448 │\n│ pg_description │ 352256 │\n│ pg_depend_depender_index │ 294912 │\n│ pg_depend_reference_index │ 294912 │\n│ pg_description_o_c_o_index │ 221184 │\n│ pg_statistic │ 155648 │\n│ pg_operator │ 114688 │\n│ pg_collation_name_enc_nsp_index │ 106496 │\n└─────────────────────────────────┴──────────────────┘\n(10 rows)\n\nDELETE FROM pg_depend WHERE deptype = 'p' AND refobjid <> 0 AND refobjid < 10000;\nVACUUM FULL pg_depend;\n\ndropme[926131][1]=# SELECT oid::regclass, pg_relation_size(oid) FROM pg_class WHERE relfilenode <> 0 ORDER BY 2 DESC LIMIT 10;\n┌─────────────────────────────────┬──────────────────┐\n│ oid │ pg_relation_size │\n├─────────────────────────────────┼──────────────────┤\n│ pg_toast.pg_toast_2618 │ 516096 │\n│ pg_collation │ 360448 │\n│ pg_description │ 352256 │\n│ pg_depend │ 253952 │\n│ pg_description_o_c_o_index │ 221184 │\n│ pg_statistic │ 155648 │\n│ pg_depend_depender_index │ 147456 │\n│ pg_depend_reference_index │ 147456 │\n│ pg_operator │ 114688 │\n│ pg_collation_name_enc_nsp_index │ 106496 │\n└─────────────────────────────────┴──────────────────┘\n(10 rows)\n\nA reduction from 8407kB to 7863kB of the size of the \"dropme\" database\njust by deleting the \"implicitly pinned\" entries seems like a good deal.\n\nTo save people the time to look it up: pg_toast.pg_toast_2618 is\npg_description...\n\n\nCouldn't we 
also treat FirstGenbkiObjectId to FirstBootstrapObjectId as\npinned? That'd be another 400kB of database size...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Apr 2021 22:23:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: subtransaction performance regression [kind of] due to snapshot\n caching"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-06 00:47:13 -0400, Tom Lane wrote:\n>> Not wanting to distract from your point about xactCompletionCount,\n>> but ... I wonder if we could get away with defining \"isObjectPinned\"\n>> as \"is the OID <= 9999\" (and, in consequence, dropping explicit pin\n>> entries from pg_depend). I had not previously seen a case where the\n>> cost of looking into pg_depend for this info was this much of the\n>> total query runtime.\n\n> Couldn't we also treat FirstGenbkiObjectId to FirstBootstrapObjectId as\n> pinned? That'd be another 400kB of database size...\n\nYeah, it'd require some close study of exactly what we want to pin\nor not pin. Certainly everything with hand-assigned OIDs should\nbe pinned, but I think there's a lot of critical stuff like index\nopclasses that don't get hand-assigned OIDs. On the other hand,\nit's intentional that nothing in information_schema is pinned.\n\nWe might have to rejigger initdb so that there's a clearer\ndistinction between the OID ranges we want to pin or not.\nMaybe we'd even get initdb to record the cutoff OID in\npg_control or someplace.\n\nAnyway, just idle late-night speculation for now ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Apr 2021 01:34:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: subtransaction performance regression [kind of] due to snapshot\n caching"
},
{
"msg_contents": "On 2021-04-06 01:34:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-04-06 00:47:13 -0400, Tom Lane wrote:\n> >> Not wanting to distract from your point about xactCompletionCount,\n> >> but ... I wonder if we could get away with defining \"isObjectPinned\"\n> >> as \"is the OID <= 9999\" (and, in consequence, dropping explicit pin\n> >> entries from pg_depend). I had not previously seen a case where the\n> >> cost of looking into pg_depend for this info was this much of the\n> >> total query runtime.\n> \n> > Couldn't we also treat FirstGenbkiObjectId to FirstBootstrapObjectId as\n> > pinned? That'd be another 400kB of database size...\n> \n> Yeah, it'd require some close study of exactly what we want to pin\n> or not pin.\n\nOne interesting bit is:\n\npostgres[947554][1]=# SELECT classid::regclass, objid, refclassid::regclass, refobjid, deptype, refobjversion FROM pg_depend WHERE refobjid < 13000 AND deptype <> 'p';\n┌───────────────┬───────┬──────────────┬──────────┬─────────┬───────────────┐\n│ classid │ objid │ refclassid │ refobjid │ deptype │ refobjversion │\n├───────────────┼───────┼──────────────┼──────────┼─────────┼───────────────┤\n│ pg_constraint │ 15062 │ pg_collation │ 100 │ n │ 2.31 │\n└───────────────┴───────┴──────────────┴──────────┴─────────┴───────────────┘\n(1 row)\n\n\n\n> Certainly everything with hand-assigned OIDs should\n> be pinned, but I think there's a lot of critical stuff like index\n> opclasses that don't get hand-assigned OIDs. On the other hand,\n> it's intentional that nothing in information_schema is pinned.\n\nIsn't that pretty much the difference between FirstGenbkiObjectId and\nFirstBootstrapObjectId? 
Genbki will have assigned things like opclasses,\nbut not things like information_schema?\n\n\n> We might have to rejigger initdb so that there's a clearer\n> distinction between the OID ranges we want to pin or not.\n> Maybe we'd even get initdb to record the cutoff OID in\n> pg_control or someplace.\n\nThe only non-pinned pg_depend entry below FirstBootstrapObjectId is the\ncollation versioning one above. The only pinned entries above\nFirstBootstrapObjectId are the ones created via\nsystem_constraints.sql. So it seems we \"just\" would need to resolve the\nconstraint versioning stuff? And that could probably just be handled as\na hardcoded special case for now...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Apr 2021 22:59:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: subtransaction performance regression [kind of] due to snapshot\n caching"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-05 21:35:21 -0700, Andres Freund wrote:\n> See the attached fix. I did include a test that verifies that the\n> kill_prior_tuples optimization actually prevents the index from growing,\n> when subtransactions are involved. I think it should be stable, even\n> with concurrent activity. But I'd welcome a look.\n\nPushed that now, after trying and failing to make the test spuriously\nfail due to concurrent activity.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Apr 2021 09:28:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: subtransaction performance regression [kind of] due to snapshot\n caching"
}
] |
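The OID-range shortcut for isObjectPinned discussed in the thread above can be sketched as a small executable model. This is illustrative Python, not PostgreSQL source: the constant names mirror src/include/access/transam.h, but the exact cutoff values and the decision to pin everything below FirstBootstrapObjectId are assumptions of this sketch; the thread itself notes edge cases (the collation-versioning dependency, the system_constraints.sql pins above FirstBootstrapObjectId) that a real implementation would have to handle.

```python
# Illustrative model of replacing pg_depend pin ('p') entries with a pure
# OID-range test, as discussed above. Constant names mirror
# src/include/access/transam.h; the exact values are assumptions here.
FIRST_GENBKI_OBJECT_ID = 10000     # OIDs assigned by genbki.pl start here
FIRST_BOOTSTRAP_OBJECT_ID = 12000  # OIDs assigned during initdb start here
FIRST_NORMAL_OBJECT_ID = 16384     # user-created objects start here


def is_object_pinned(oid: int) -> bool:
    """Range-based stand-in for the pg_depend lookup in isObjectPinned().

    Hand-assigned OIDs (below FIRST_GENBKI_OBJECT_ID) and genbki-assigned
    OIDs such as index opclasses are treated as pinned; objects created
    later by initdb (e.g. information_schema) and user objects are not.
    """
    return 0 < oid < FIRST_BOOTSTRAP_OBJECT_ID
```

With a rule like this, the explicit pin rows in pg_depend become redundant, which is where the database-size savings measured above come from.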
[
{
"msg_contents": "Hi,\n\nWhile reviewing replication statistics I found a small typo. Attached\npatch for a typo in:\nsrc/backend/postmaster/pgstat.c\n................\n /*\n- * Check if the slot exits with the given name. It is\npossible that by\n+ * Check if the slot exists with the given name. It is\npossible that by\n * the time this message is executed the slot is\ndropped but at least\n * this check will ensure that the given name is for a\nvalid slot.\n */\n................\n\nRegards,\nVignesh",
"msg_date": "Tue, 6 Apr 2021 10:27:31 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "typo fix in pgstat.c: \"exits should be exists\""
},
{
"msg_contents": "\n\nOn 2021/04/06 13:57, vignesh C wrote:\n> Hi,\n> \n> While reviewing replication statistics I found a small typo. Attached\n> patch for a typo in:\n> src/backend/postmaster/pgstat.c\n\nThanks for the report and patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 6 Apr 2021 14:11:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: typo fix in pgstat.c: \"exits should be exists\""
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 10:41 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/06 13:57, vignesh C wrote:\n> > Hi,\n> >\n> > While reviewing replication statistics I found a small typo. Attached\n> > patch for a typo in:\n> > src/backend/postmaster/pgstat.c\n>\n> Thanks for the report and patch! Pushed.\n>\n\nThanks for pushing the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 6 Apr 2021 10:54:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: typo fix in pgstat.c: \"exits should be exists\""
}
] |
[
{
"msg_contents": "Hi All ,\n\n I found the below reference leak on master.\n\nSteps to reproduce the issue :\n--1) create type\ncreate type float_array_typ as ( i float8);\n\n--2) create anonymous block\npostgres=# do $$\n declare\n a float_array_typ[];\n begin\n a[1].i := 11;\n commit;\n end\n$$;\nWARNING: TupleDesc reference leak: TupleDesc 0x7ff7673b15f0 (16386,-1)\nstill referenced\nERROR: tupdesc reference 0x7ff7673b15f0 is not owned by resource owner\nTopTransaction\npostgres=#\n\n*Regards,*\nRohit",
"msg_date": "Tue, 6 Apr 2021 11:09:13 +0530",
"msg_from": "Rohit Bhogate <rohit.bhogate@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Reference Leak with type"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 11:09:13AM +0530, Rohit Bhogate wrote:\n> I found the below reference leak on master.\n\nThanks for the report. This is indeed a new problem as of HEAD,\ncoming from c9d52984 as far as I can see, and 13 does not support this\ngrammar. From what I can see, there seems to be an issue with the\nreference count of the TupleDesc here: your test case increments the\nTupleDesc for this custom type two times in a portal, and tries to\ndecrement it three times, causing what looks like a leak.\n--\nMichael",
"msg_date": "Tue, 6 Apr 2021 21:19:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reference Leak with type"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 06, 2021 at 11:09:13AM +0530, Rohit Bhogate wrote:\n>> I found the below reference leak on master.\n\n> Thanks for the report. This is indeed a new problem as of HEAD,\n\nJust for the record, it's not new. The issue is (I think) that\nthe tupledesc refcount created by get_cached_rowtype is being\nlogged in the wrong ResourceOwner. Other cases that use\nget_cached_rowtype, such as IS NOT NULL on a composite value,\nreproduce the same type of failure back to v11:\n\ncreate type float_rec_typ as (i float8);\n\ndo $$\n declare\n f float_rec_typ := row(42);\n r bool;\n begin\n r := f is not null;\n commit;\n end\n$$;\n\nWARNING: TupleDesc reference leak: TupleDesc 0x7f5f549809d8 (53719,-1) still referenced\nERROR: tupdesc reference 0x7f5f549809d8 is not owned by resource owner TopTransaction\n\nStill poking at a suitable fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Apr 2021 13:30:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reference Leak with type"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 10:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Tue, Apr 06, 2021 at 11:09:13AM +0530, Rohit Bhogate wrote:\n> >> I found the below reference leak on master.\n>\n> > Thanks for the report. This is indeed a new problem as of HEAD,\n>\n> Just for the record, it's not new. The issue is (I think) that\n> the tupledesc refcount created by get_cached_rowtype is being\n> logged in the wrong ResourceOwner. Other cases that use\n> get_cached_rowtype, such as IS NOT NULL on a composite value,\n> reproduce the same type of failure back to v11:\n>\n> create type float_rec_typ as (i float8);\n>\n> do $$\n> declare\n> f float_rec_typ := row(42);\n> r bool;\n> begin\n> r := f is not null;\n> commit;\n> end\n> $$;\n>\n> WARNING: TupleDesc reference leak: TupleDesc 0x7f5f549809d8 (53719,-1)\n> still referenced\n> ERROR: tupdesc reference 0x7f5f549809d8 is not owned by resource owner\n> TopTransaction\n>\n> Still poking at a suitable fix.\n>\n> regards, tom lane\n>\n>\n> Hi,\nI think I have some idea about the cause for the 'resource owner' error.\n\nWhen commit results in calling exec_stmt_commit(), the ResourceOwner\nswitches to a new one.\nLater, when ResourceOwnerForgetTupleDesc() is called, we get the error\nsince owner->tupdescarr doesn't carry the tuple Desc to be forgotten.\n\nOne potential fix is to add the following to resowner.c\n/*\n * Transfer resources from resarr1 to resarr2\n */\nstatic void\nResourceArrayTransfer(ResourceArray *resarr1, ResourceArray *resarr2)\n{\n}\n\nIn exec_stmt_commit(), we save reference to the old ResourceOwner before\ncalling SPI_commit() (line 4824).\nThen after the return from SPI_start_transaction(), ResourceArrayTransfer()\nis called to transfer remaining items in tupdescarr from old ResourceOwner\nto the current ResourceOwner.\n\nI want to get some opinion on the feasibility of this route.\n\nIt seems ResourceOwner is opaque inside 
exec_stmt_commit(). And\nno ResourceArrayXX call exists in pl_exec.c\nSo I am still looking for the proper structure of the solution.\n\nCheers",
"msg_date": "Sat, 10 Apr 2021 14:12:32 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Reference Leak with type"
},
{
"msg_contents": "Here's a proposed patch for this problem.\n\nThe core problem in this test case is that the refcount is logged in the\nPortal resowner, which is a child of the initial transaction's resowner,\nso it goes away in the COMMIT (after warning of a resource leak); but\nthe expression tree is still there and still thinks it has a refcount.\nBy chance a new ResourceOwner is created in the same place where the old\none was, so that when the expression tree is finally destroyed at the\nend of the DO block, we see an error about \"this refcount isn't logged\nhere\" rather than a crash. Unrelated-looking code changes could turn\nthat into a real crash, of course.\n\nI spent quite a bit of time fruitlessly trying to fix it by manipulating\nwhich resowner the tupledesc refcount is logged in, specifically by\nrunning plpgsql \"simple expressions\" with the simple_eval_resowner as\nCurrentResourceOwner. But this just causes other problems to appear,\nbecause then that resowner becomes responsible for more stuff than\njust the plancache refcounts that plpgsql is expecting it to hold.\nSome of that stuff needs to be released at subtransaction abort,\nwhich is problematic because most of what plpgsql wants it to deal\nin needs to survive until end of main transaction --- in particular,\nthe plancache refcounts need to live that long, and so do the tupdesc\nrefcounts we're concerned with here, because those are associated with\n\"simple expression\" trees that are supposed to have that lifespan.\nIt's possible that we could make this approach work, but at minimum\nit'd require creating and destroying an additional resowner per\nsubtransaction; and maybe we'd have to give up on sharing \"simple\nexpression\" trees across subtransactions. 
So the potential performance\nhit is pretty bad, and I'm not even 100% sure it'd work at all.\n\nSo the alternative proposed in the attached is to give up on associating\na long-lived tupdesc refcount with these expression nodes at all.\nInstead, we can use a method that plpgsql has been using for a few\nyears, which is to rely on the fact that typcache entries never go\naway once made, and just save a pointer into the typcache. We can\ndetect possible changes in the cache entry by watching for changes\nin its tupDesc_identifier counter.\n\nThis infrastructure exists as far back as v11, so using it doesn't\npresent any problems for back-patchability. It is slightly\nnervous-making that we have to change some fields in struct ExprEvalStep\n--- but the overall struct size isn't changing, and I can't really\nsee a reason why extensions would be interested in the contents of\nthese particular subfield types.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 10 Apr 2021 17:57:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reference Leak with type"
}
] |
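The refcount-bookkeeping failure described in this thread (a TupleDesc refcount logged in the Portal's ResourceOwner, which goes away at the intraprocedural COMMIT while the expression tree still thinks it holds the reference) can be modeled with a short sketch. This is hypothetical Python for illustration only; PostgreSQL's actual resowner.c is C, and the class and method names below are invented, not the server's API.

```python
# Toy model of ResourceOwner bookkeeping, illustrating the failure mode in
# this thread: a tupdesc reference remembered in the portal's owner is
# orphaned when COMMIT destroys that owner, so the later "forget" against
# the new owner fails. All names here are illustrative, not PostgreSQL code.
class ResourceOwner:
    def __init__(self, name):
        self.name = name
        self.tupdescs = []   # remembered tupdesc references

    def remember_tupdesc(self, td):
        self.tupdescs.append(td)

    def forget_tupdesc(self, td):
        if td not in self.tupdescs:
            raise RuntimeError(
                f"tupdesc reference {td!r} is not owned by "
                f"resource owner {self.name}")
        self.tupdescs.remove(td)

    def release(self):
        # Mirrors the "TupleDesc reference leak" WARNING at owner teardown:
        # anything still remembered here was never properly forgotten.
        leaked = list(self.tupdescs)
        self.tupdescs.clear()
        return leaked


portal_owner = ResourceOwner("Portal")
portal_owner.remember_tupdesc("float_rec_typ")  # e.g. get_cached_rowtype()
leaked = portal_owner.release()                 # COMMIT destroys this owner

# The expression tree later tries to drop its refcount against the new owner:
new_owner = ResourceOwner("TopTransaction")
try:
    new_owner.forget_tupdesc("float_rec_typ")
except RuntimeError as e:
    print(e)
```

Running the model reproduces the shape of the failure in the bug report: release() at COMMIT reports the leaked reference, and the subsequent forget against the new owner raises, matching the WARNING/ERROR pair in the test case.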
[
{
"msg_contents": "Hi,\n\nWhile working on one of the issues, I have noticed the below unexpected behavior\nwith \"PREPARE TRANSACTION\".\n\nWe are getting this unexpected behavior with PREPARE TRANSACTION when it is\nmixed with Temporary Objects. Please consider the below setup and SQL block.\n\nSet max_prepared_transactions to 1 (or any non-zero value); this is to\nenable the “prepare transaction”.\n\nNow please try to run the below set of statements.\n[BLOCK-1:]\npostgres=# create temp table fullname (first text, last text);\nCREATE TABLE\npostgres=# BEGIN;\nBEGIN\npostgres=*# create function longname(fullname) returns text language sql\npostgres-*# as $$select $1.first || ' ' || $1.last$$;\nCREATE FUNCTION\npostgres=*# prepare transaction 'mytran';\nERROR: cannot PREPARE a transaction that has operated on temporary objects\n\nThe above error is expected.\n\nThe problem arises if we again try to create the same function in the “PREPARE\nTRANSACTION” as below.\n\n[BLOCK-2:]\npostgres=# BEGIN;\nBEGIN\npostgres=*# create function longname(fullname) returns text language sql\nas $$select $1.first || ' ' || $1.last$$;\nCREATE FUNCTION\npostgres=*# PREPARE transaction 'mytran';\nPREPARE TRANSACTION\n\nNow, this time we succeed and do not get the above error (“ERROR: cannot\nPREPARE a transaction that has operated on temporary objects”) the way\nwe were getting it with BLOCK-1.\n\nThis is happening because we set MyXactFlags in the relation_open function\ncall, and here relation_open is getting called from load_typcache_tupdesc,\nbut the second run of “create function…” in the above #2 block will not\ncall load_typcache_tupdesc because of the below condition (typentry->tupDesc\n== NULL) in lookup_type_cache().\n\n /*\n * If it's a composite type (row type), get tupdesc if requested\n */\n if ((flags & TYPECACHE_TUPDESC) &&\n typentry->tupDesc == NULL &&\n typentry->typtype == TYPTYPE_COMPOSITE)\n {\n load_typcache_tupdesc(typentry);\n }\n\nWe set typentry->tupDesc to non-NULL (and 
populates it with the proper tuple\ndescriptor in the cache) during our first call to “create function…”\nin BLOCK-1.\nWe have logic in file xact.c::PrepareTransaction() to simply error out if\nwe have accessed any temporary object in the current transaction, but\nbecause of the above-described issue of not setting\nXACT_FLAGS_ACCESSEDTEMPNAMESPACE in MyXactFlags, the second run of “create\nfunction…” works and PREPARE TRANSACTION succeeds (but it should fail).\n\nPlease find attached the proposed patch to FIX this issue.\n\nThoughts?\n\nThanks,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 6 Apr 2021 20:17:57 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] PREPARE TRANSACTION unexpected behavior with TEMP TABLE"
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 8:18 PM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on one of the issue, I have noticed below unexpected behavior with \"PREPARE TRANSACTION\".\n>\n> We are getting this unexpected behavior with PREPARE TRANSACTION when it is mixed with Temporary Objects. Please consider the below setup and SQL block.\n>\n> set max_prepared_transactions to 1 (or any non zero value), this is to enable the “prepare transaction”.\n>\n> Now please try to run the below set of statements.\n> [BLOCK-1:]\n> postgres=# create temp table fullname (first text, last text);\n> CREATE TABLE\n> postgres=# BEGIN;\n> BEGIN\n> postgres=*# create function longname(fullname) returns text language sql\n> postgres-*# as $$select $1.first || ' ' || $1.last$$;\n> CREATE FUNCTION\n> postgres=*# prepare transaction 'mytran';\n> ERROR: cannot PREPARE a transaction that has operated on temporary objects\n>\n> Above error is expected.\n>\n> The problem is if we again try to create the same function in the “PREPARE TRANSACTION” as below.\n>\n> [BLOCK-2:]\n> postgres=# BEGIN;\n> BEGIN\n> postgres=*# create function longname(fullname) returns text language sql\n> as $$select $1.first || ' ' || $1.last$$;\n> CREATE FUNCTION\n> postgres=*# PREPARE transaction 'mytran';\n> PREPARE TRANSACTION\n>\n> Now, this time we succeed and not getting the above error (”ERROR: cannot PREPARE a transaction that has operated on temporary objects), like the way we were getting with BLOCK-1\n>\n> This is happening because we set MyXactFlags in relation_open function call, and here relation_open is getting called from load_typcache_tupdesc, but in the second run of “create function…” in the above #2 block will not call load_typcache_tupdesc because of the below condition(typentry->tupDesc == NULL) in lookup_type_cache().\n>\n> /*\n> * If it's a composite type (row type), get tupdesc if requested\n> */\n> if ((flags & TYPECACHE_TUPDESC) &&\n> 
typentry->tupDesc == NULL &&\n> typentry->typtype == TYPTYPE_COMPOSITE)\n> {\n> load_typcache_tupdesc(typentry);\n> }\n>\n> We set typentry->tupDesc to non NULL(and populates it with proper tuple descriptor in the cache) value during our first call to “create function…” in BLOCK-1.\n> We have logic in file xact.c::PrepareTransaction() to simply error out if we have accessed any temporary object in the current transaction but because of the above-described issue of not setting XACT_FLAGS_ACCESSEDTEMPNAMESPACE in MyXactFlags second run of “create function..” Works and PREPARE TRANSACTION succeeds(but it should fail).\n>\n> Please find attached the proposed patch to FIX this issue.\n\nI was able to reproduce the issue with your patch and your patch fixes\nthe issue.\n\nFew comments:\n1) We can drop the table after this test.\n+CREATE TEMP TABLE temp_tbl (first TEXT, last TEXT);\n+BEGIN;\n+CREATE FUNCTION longname(temp_tbl) RETURNS TEXT LANGUAGE SQL\n+AS $$SELECT $1.first || ' ' || $1.last$$;\n+PREPARE TRANSACTION 'temp_tbl_access';\n+\n+BEGIN;\n+CREATE FUNCTION longname(temp_tbl) RETURNS TEXT LANGUAGE SQL\n+AS $$SELECT $1.first || ' ' || $1.last$$;\n+PREPARE TRANSACTION 'temp_tbl_access';\n\n2) +-- Test for accessing Temporary table\n+-- in prepare transaction.\ncan be changed to\n-- Test for accessing cached temporary table in a prepared transaction.\n\n3) +-- These cases must fail and generate errors about Temporary objects.\ncan be changed to\n-- These cases should fail with cannot access temporary object error.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 7 Apr 2021 16:10:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] PREPARE TRANSACTION unexpected behavior with TEMP TABLE"
},
{
"msg_contents": "Hi Vignesh,\nThanks for sharing the review comments. Please find my response below.\n\n> 1) We can drop the table after this test.\n>\nDone.\n\n> 2) +-- Test for accessing Temporary table\n> +-- in prepare transaction.\n> can be changed to\n> -- Test for accessing cached temporary table in a prepared transaction.\n>\nComment is now modified as above.\n\n> 3) +-- These cases must fail and generate errors about Temporary objects.\n> can be changed to\n> -- These cases should fail with cannot access temporary object error.\n>\nThe error is not about accessing the temporary object; rather, it's about\ndisallowing PREPARE TRANSACTION as it is referring to the temporary objects.\nI have changed it with the exact error we get in those cases.\n\nPlease find attached the V2 patch.\n\nThanks,\nHimanshu\n\nOn Wed, Apr 7, 2021 at 4:11 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Tue, Apr 6, 2021 at 8:18 PM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While working on one of the issue, I have noticed below unexpected\n> behavior with \"PREPARE TRANSACTION\".\n> >\n> > We are getting this unexpected behavior with PREPARE TRANSACTION when it\n> is mixed with Temporary Objects. 
Please consider the below setup and SQL\n> block.\n> >\n> > set max_prepared_transactions to 1 (or any non zero value), this is to\n> enable the “prepare transaction”.\n> >\n> > Now please try to run the below set of statements.\n> > [BLOCK-1:]\n> > postgres=# create temp table fullname (first text, last text);\n> > CREATE TABLE\n> > postgres=# BEGIN;\n> > BEGIN\n> > postgres=*# create function longname(fullname) returns text language sql\n> > postgres-*# as $$select $1.first || ' ' || $1.last$$;\n> > CREATE FUNCTION\n> > postgres=*# prepare transaction 'mytran';\n> > ERROR: cannot PREPARE a transaction that has operated on temporary\n> objects\n> >\n> > Above error is expected.\n> >\n> > The problem is if we again try to create the same function in the\n> “PREPARE TRANSACTION” as below.\n> >\n> > [BLOCK-2:]\n> > postgres=# BEGIN;\n> > BEGIN\n> > postgres=*# create function longname(fullname) returns text language sql\n> > as $$select $1.first || ' ' || $1.last$$;\n> > CREATE FUNCTION\n> > postgres=*# PREPARE transaction 'mytran';\n> > PREPARE TRANSACTION\n> >\n> > Now, this time we succeed and not getting the above error (”ERROR:\n> cannot PREPARE a transaction that has operated on temporary objects), like\n> the way we were getting with BLOCK-1\n> >\n> > This is happening because we set MyXactFlags in relation_open function\n> call, and here relation_open is getting called from load_typcache_tupdesc,\n> but in the second run of “create function…” in the above #2 block will not\n> call load_typcache_tupdesc because of the below condition(typentry->tupDesc\n> == NULL) in lookup_type_cache().\n> >\n> > /*\n> > * If it's a composite type (row type), get tupdesc if requested\n> > */\n> > if ((flags & TYPECACHE_TUPDESC) &&\n> > typentry->tupDesc == NULL &&\n> > typentry->typtype == TYPTYPE_COMPOSITE)\n> > {\n> > load_typcache_tupdesc(typentry);\n> > }\n> >\n> > We set typentry->tupDesc to non NULL(and populates it with proper tuple\n> descriptor in the cache) value 
during our first call to “create function…”\n> in BLOCK-1.\n> > We have logic in file xact.c::PrepareTransaction() to simply error out\n> if we have accessed any temporary object in the current transaction but\n> because of the above-described issue of not setting\n> XACT_FLAGS_ACCESSEDTEMPNAMESPACE in MyXactFlags second run of “create\n> function..” Works and PREPARE TRANSACTION succeeds(but it should fail).\n> >\n> > Please find attached the proposed patch to FIX this issue.\n>\n> I was able to reproduce the issue with your patch and your patch fixes\n> the issue.\n>\n> Few comments:\n> 1) We can drop the table after this test.\n> +CREATE TEMP TABLE temp_tbl (first TEXT, last TEXT);\n> +BEGIN;\n> +CREATE FUNCTION longname(temp_tbl) RETURNS TEXT LANGUAGE SQL\n> +AS $$SELECT $1.first || ' ' || $1.last$$;\n> +PREPARE TRANSACTION 'temp_tbl_access';\n> +\n> +BEGIN;\n> +CREATE FUNCTION longname(temp_tbl) RETURNS TEXT LANGUAGE SQL\n> +AS $$SELECT $1.first || ' ' || $1.last$$;\n> +PREPARE TRANSACTION 'temp_tbl_access';\n>\n> 2) +-- Test for accessing Temporary table\n> +-- in prepare transaction.\n> can be changed to\n> -- Test for accessing cached temporary table in a prepared transaction.\n>\n> 3) +-- These cases must fail and generate errors about Temporary objects.\n> can be changed to\n> -- These cases should fail with cannot access temporary object error.\n>\n> Regards,\n> Vignesh\n>",
"msg_date": "Wed, 7 Apr 2021 21:49:19 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] PREPARE TRANSACTION unexpected behavior with TEMP TABLE"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 12:19 PM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Please find attached the V2 patch.\n\nHi,\n\nThis patch is essentially taking the position that calling\nload_typcache_tupdesc before using that tupdesc for anything is\nrequired for correctness. I'm not sure whether that's a good\narchitectural decision: to me, it looks like whoever wrote this code\noriginally - I think it was Tom - had the idea that it would be OK to\nskip calling that function whenever we already have the value.\nChanging that has some small performance cost, and it also just looks\nkind of weird, because you don't expect a function called\nload_typcache_tupdesc() to have the side effect of preventing some\nkind of bad thing from happening. You just expect it to be loading\nstuff. The comments in this code are not exactly stellar as things\nstand, but the patch also doesn't update them in a meaningful way.\nSure, it corrects a few comments that would be flat-out wrong\notherwise, but it doesn't add any kind of explanation that would help\nthe next person who looks at this code understand why they shouldn't\njust put back the exact same performance optimization you're proposing\nto rip out.\n\nAn alternative design would be to find some new place to set\nXACT_FLAGS_ACCESSEDTEMPNAMESPACE. For example, we could set a flag in\nthe TypeCacheEntry indicating whether this flag ought to be set when\nsomebody looks up the entry.\n\nBut, before we get too deeply into what the design should be, I think\nwe need to be clear about what problem we're trying to fix. I agree\nthat the behavior you demonstrate in your example looks inconsistent,\nbut that doesn't necessarily mean that the code is wrong. What exactly\nis the code trying to prohibit here, and does this test case really\nshow that principle being violated? 
The comments in\nPrepareTransaction() justify this prohibition by saying that \"Having\nthe prepared xact hold locks on another backend's temp table seems a\nbad idea --- for instance it would prevent the backend from exiting.\nThere are other problems too, such as how to clean up the source\nbackend's local buffers and ON COMMIT state if the prepared xact\nincludes a DROP of a temp table.\" But, in this case, none of that\nstuff actually happens. If I run your test case without the patch, the\nbackend has no problem exiting, and the prepared xact holds no lock on\nthe temp table, and the prepared xact does not include a DROP of a\ntemp table. That's not to say that everything is great, because after\nstarting a new session and committing mytran, this happens:\n\nrhaas=# \\df longname\nERROR: cache lookup failed for type 16432\n\nBut the patch doesn't actually prevent that from happening, because\neven with the patch applied I can still recreate the same failure\nusing a different sequence of steps:\n\n[ session 1 ]\nrhaas=# create temp table fullname (first text, last text);\nCREATE TABLE\n\n[ session 2 ]\nrhaas=# select oid::regclass from pg_class where relname = 'fullname';\n oid\n--------------------\n pg_temp_3.fullname\n(1 row)\n\nrhaas=# begin;\nBEGIN\nrhaas=*# create function longname(pg_temp_3.fullname) returns text\nlanguage sql as $$select $1.first || ' ' || $1.last$$;\nCREATE FUNCTION\n\n[ session 1 ]\nrhaas=# \\q\n\n[ session 2 ]\nrhaas=*# commit;\nCOMMIT\nrhaas=# \\df longname\nERROR: cache lookup failed for type 16448\n\nTo really fix this, you'd need CREATE FUNCTION to take a lock on the\ncontaining namespace, whether permanent or temporary. You'd also need\nevery other CREATE statement that creates a schema-qualified object to\ndo the same thing. 
Maybe that's a good idea, but we've been reluctant\nto go that far in the past due to performance consequences, and it's\nnot clear whether any of those problems are related to the issue that\nprompted you to submit the patch. So, I'm kind of left wondering what\nexactly you're trying to solve here. Can you clarify?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:13:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] PREPARE TRANSACTION unexpected behavior with TEMP TABLE"
},
{
"msg_contents": "Hi Robert,\n\nThanks for sharing your thoughts.\nThe purpose of this FIX is to mainly focus on getting consistent behavior\nwith PREPARE TRANSACTION. With the case that I had mentioned\npreviously, my expectation was either both PREPARE TRANSACTION should fail\nor both should succeed but here second same \"PREPARE TRANSACTION\" was\nsuccessful however first one was failing with an error, which is kind of\nweird to me.\n\n\nI have also tried to reproduce the behavior.\n\n[session:1]\npostgres=# create temp table fullname (first text, last text);\nCREATE TABLE\n\n[session:2]\npostgres=# select oid::regclass from pg_class where relname = 'fullname';\n oid\n--------------------\n pg_temp_3.fullname\n\npostgres=# BEGIN;\ncreate function longname1(pg_temp_3.fullname) returns text language sql\nas $$select $1.first || ' ' || $1.last$$;\nBEGIN\nCREATE FUNCTION\npostgres=*# prepare transaction 'mytran2';\nERROR: cannot PREPARE a transaction that has operated on temporary objects\npostgres=# BEGIN;\ncreate function longname1(pg_temp_3.fullname) returns text language sql\nas $$select $1.first || ' ' || $1.last$$;\nBEGIN\nCREATE FUNCTION\n\n[session:1]\npostgres=# \\q // no problem in exiting\n\n[session:2]\n\npostgres=*# prepare transaction 'mytran2';\nPREPARE TRANSACTION\npostgres=# \\df\nERROR: cache lookup failed for type 16429\n\nlooking at the comment in the code [session:1] should hang while exiting\nbut\nI don't see a problem here, you have already explained that in your reply.\nEven then I feel that behavior should be consistent when we mix temporary\nobjects in PREPARE TRANSACTION.\n\nThe comments in\n> PrepareTransaction() justify this prohibition by saying that \"Having\n> the prepared xact hold locks on another backend's temp table seems\n> a bad idea --- for instance it would prevent the backend from exiting.\n> There are other problems too, such as how to clean up the source\n> backend's local buffers and ON COMMIT state if the prepared xact\n> 
includes a DROP of a temp table.\"\n>\nI can see from the above experiment that there is no problem with the\nlock in the above case but not sure if there is any issue with \"clean up\nthe source backend's local buffers\", if not then we don't even need this\nERROR (ERROR: cannot PREPARE a transaction that has operated\non temporary objects) in PREPARE TRANSACTION?\n\nTo really fix this, you'd need CREATE FUNCTION to take a lock on the\n> containing namespace, whether permanent or temporary. You'd also need\n> every other CREATE statement that creates a schema-qualified object to\n> do the same thing. Maybe that's a good idea, but we've been reluctant\n> to go that far in the past due to performance consequences, and it's\n> not clear whether any of those problems are related to the issue that\n> prompted you to submit the patch.\n\nYes, the purpose of this patch is to actually have a valid value in\nXACT_FLAGS_ACCESSEDTEMPNAMESPACE, having said that it should\nalways be true if we access temporary object else false.\nEven if we do changes to have lock in case of \"CREATE FUNCTION\", we\nalso need to have this FIX in place so that \"PREPARE TRANSACTION\"\nmixed with TEMPORARY OBJECT will always be restricted and will not\ncause any hang issue(which we will start observing once we implement\nthese \"CREATE STATEMENT\" changes) as mentioned in the comment\nin the PrepareTransaction().\n\nJust thinking if it's acceptable to FIX this and make it consistent by\nproperly\nsetting XACT_FLAGS_ACCESSEDTEMPNAMESPACE, so that it should always\nfail if we access the temporary object, I also agree here that it will not\nactually cause\nany issue because of xact lock but then from user perspective it seems\nweird\nwhen the same PREPARE TRANSACTION is working second time onwards, thoughts?\n\nThanks,\nHimanshu\n\n\nOn Thu, Apr 8, 2021 at 7:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Apr 7, 2021 at 12:19 PM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> 
wrote:\n> > Please find attached the V2 patch.\n>\n> Hi,\n>\n> This patch is essentially taking the position that calling\n> load_typcache_tupdesc before using that tupdesc for anything is\n> required for correctness. I'm not sure whether that's a good\n> architectural decision: to me, it looks like whoever wrote this code\n> originally - I think it was Tom - had the idea that it would be OK to\n> skip calling that function whenever we already have the value.\n> Changing that has some small performance cost, and it also just looks\n> kind of weird, because you don't expect a function called\n> load_typcache_tupdesc() to have the side effect of preventing some\n> kind of bad thing from happening. You just expect it to be loading\n> stuff. The comments in this code are not exactly stellar as things\n> stand, but the patch also doesn't update them in a meaningful way.\n> Sure, it corrects a few comments that would be flat-out wrong\n> otherwise, but it doesn't add any kind of explanation that would help\n> the next person who looks at this code understand why they shouldn't\n> just put back the exact same performance optimization you're proposing\n> to rip out.\n>\n> An alternative design would be to find some new place to set\n> XACT_FLAGS_ACCESSEDTEMPNAMESPACE. For example, we could set a flag in\n> the TypeCacheEntry indicating whether this flag ought to be set when\n> somebody looks up the entry.\n>\n> But, before we get too deeply into what the design should be, I think\n> we need to be clear about what problem we're trying to fix. I agree\n> that the behavior you demonstrate in your example looks inconsistent,\n> but that doesn't necessarily mean that the code is wrong. What exactly\n> is the code trying to prohibit here, and does this test case really\n> show that principle being violated? 
The comments in\n> PrepareTransaction() justify this prohibition by saying that \"Having\n> the prepared xact hold locks on another backend's temp table seems a\n> bad idea --- for instance it would prevent the backend from exiting.\n> There are other problems too, such as how to clean up the source\n> backend's local buffers and ON COMMIT state if the prepared xact\n> includes a DROP of a temp table.\" But, in this case, none of that\n> stuff actually happens. If I run your test case without the patch, the\n> backend has no problem exiting, and the prepared xact holds no lock on\n> the temp table, and the prepared xact does not include a DROP of a\n> temp table. That's not to say that everything is great, because after\n> starting a new session and committing mytran, this happens:\n>\n> rhaas=# \\df longname\n> ERROR: cache lookup failed for type 16432\n>\n> But the patch doesn't actually prevent that from happening, because\n> even with the patch applied I can still recreate the same failure\n> using a different sequence of steps:\n>\n> [ session 1 ]\n> rhaas=# create temp table fullname (first text, last text);\n> CREATE TABLE\n>\n> [ session 2 ]\n> rhaas=# select oid::regclass from pg_class where relname = 'fullname';\n> oid\n> --------------------\n> pg_temp_3.fullname\n> (1 row)\n>\n> rhaas=# begin;\n> BEGIN\n> rhaas=*# create function longname(pg_temp_3.fullname) returns text\n> language sql as $$select $1.first || ' ' || $1.last$$;\n> CREATE FUNCTION\n>\n> [ session 1 ]\n> rhaas=# \\q\n>\n> [ session 2 ]\n> rhaas=*# commit;\n> COMMIT\n> rhaas=# \\df longname\n> ERROR: cache lookup failed for type 16448\n>\n> To really fix this, you'd need CREATE FUNCTION to take a lock on the\n> containing namespace, whether permanent or temporary. You'd also need\n> every other CREATE statement that creates a schema-qualified object to\n> do the same thing. 
Maybe that's a good idea, but we've been reluctant\n> to go that far in the past due to performance consequences, and it's\n> not clear whether any of those problems are related to the issue that\n> prompted you to submit the patch. So, I'm kind of left wondering what\n> exactly you're trying to solve here. Can you clarify?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Wed, 14 Apr 2021 16:15:27 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] PREPARE TRANSACTION unexpected behavior with TEMP TABLE"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 6:45 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> The purpose of this FIX is to mainly focus on getting consistent behavior\n> with PREPARE TRANSACTION. With the case that I had mentioned\n> previously, my expectation was either both PREPARE TRANSACTION should fail\n> or both should succeed but here second same \"PREPARE TRANSACTION\" was\n> successful however first one was failing with an error, which is kind of weird to me.\n\nI agree that it's weird, but that doesn't mean that your patch is an\nimprovement, and I don't think it is. If we could make this thing more\nconsistent without incurring any negatives, I'd be in favor of that.\nBut the patch does have some negatives, which in my opinion are more\nsubstantial than the problem you're trying to fix. Namely, possible\nperformance consequences, and undocumented and fragile assumptions\nthat, as it seems to me, may easily get broken in the future. I see\nthat you've repeatedly capitalized the word FIX in your reply, but\nit's just not that simple. If this had really bad consequences like\ncorrupting data or crashing the server then it would be essential to\ndo something about it, but so far the worst consequence you've\nindicated is that an obscure sequence of SQL commands that no real\nuser is likely to issue produces a slightly surprising result. That's\nnot great, but neither is it an emergency.\n\n> I have also tried to reproduce the behavior.\n\nYour test case isn't ideal for reproducing the problem that the\ncomment is worrying about. Normally, when we take a lock on a table,\nwe hold it until commit. But, that only applies when we run a query\nthat mentions the table, like a SELECT or an UPDATE. In your case, we\nonly end up opening the table to build a relcache entry for it, so\nthat we can look at the metadata. 
And, catalog scans used to build\nsyscache and relcache entries release locks immediately, rather than\nwaiting until the end of the transaction. So it might be that if we\nfailed to ever set XACT_FLAGS_ACCESSEDTEMPNAMESPACE in your test case,\neverything would be fine.\n\nThat doesn't seem to be true in general though. I tried changing\n\"cannot PREPARE a transaction that has operated on temporary objects\"\nfrom an ERROR to a NOTICE and then ran 'make check'. It hung. I think\nthis test case is the same problem as the regression tests hit; in any\ncase, it also hangs:\n\nrhaas=# begin;\nBEGIN\nrhaas=*# create temp table first ();\nCREATE TABLE\nrhaas=*# prepare transaction 'whatever';\nNOTICE: cannot PREPARE a transaction that has operated on temporary objects\nPREPARE TRANSACTION\nrhaas=# create temp table second ();\n[ hangs ]\n\nI haven't checked, but I think the problem here is that the first\ntransaction had to create this backend's pg_temp schema and the second\none can't see the results of the first one doing it so it wants to do\nthe same thing and that results in waiting for a lock the prepared\ntransaction already holds. I made a quick attempt to reproduce a hang\nat backend exit time, but couldn't find a case where that happened.\nThat doesn't mean there isn't one, though. There's a very good chance\nthat the person who wrote that comment knew that a real problem\nexisted, and just didn't describe it well enough for you or I to\nimmediately know what it is. It is also possible that they were\ncompletely wrong, or that things have changed since the comment was\nwritten, but we can't assume that without considerably more research\nand analysis than either of us has done so far.\n\nI think one point to take away here is that question of whether a\ntemporary relation has been \"accessed\" is not all black and white. If\nI ran a SELECT statement against a relation, I think it's clear that\nI've accessed it. 
But, if I just used the name of that relation as a\ntype name in some other SQL command, did I really access it? The\ncurrent code's answer to that is that if we had to open and lock the\nrelation to get its metadata, then we accessed it, and if we already\nhad the details that we needed in cache, then we did not access it.\nNow, again, I agree that looks a little weird from a user point of\nview, but looking at the implementation, you can kind of see why it\nends up like that. From a certain point of view, it would be more\nsurprising if we never opened or locked the relation and yet ended up\ndeciding that we'd accessed it. Now maybe we should further explore\ngoing the other direction and avoiding setting the flag at all in this\ncase, since I think we're neither retaining a lock nor touching any\nrelation buffers, but I think that needs more analysis. Even if we\ndecide that's safe, there's still the problem of finding a better\nimplementation that's not overly complex for what really feels like a\nvery minor issue.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 09:46:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] PREPARE TRANSACTION unexpected behavior with TEMP TABLE"
}
] |
[
{
"msg_contents": "Hello\n\nExcuse me in advance for my English, I'm improving :-)\n\nCould you tell me if it is possible that as well as the configuration that\nthe log presents the duration of the delayed queries, it can also present\nthe size of the result data? especially those who want to return a lot of\ninformation\n\n\n-- \nCordialmente,\n\nIng. Hellmuth I. Vargas S.",
"msg_date": "Tue, 6 Apr 2021 14:03:03 -0500",
"msg_from": "Hellmuth Vargas <hivs77@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL log query's result size"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 02:03:03PM -0500, Hellmuth Vargas wrote:\n> Could you tell me if it is possible that as well as the configuration that\n> the log presents the duration of the delayed queries, it can also present\n> the size of the result data? especially those who want to return a lot of\n> information\n\nI think you can get what you want with auto_explain.\nhttps://www.postgresql.org/docs/current/auto-explain.html\n\nYou can set:\nauto_explain.log_analyze\n\nAnd then the \"width\" and \"rows\" are logged:\n Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.004 rows=1 loops=1)\n\nPS, you should first ask on the pgsql-general list, rather than this\ndevelopment list.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 6 Apr 2021 14:15:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL log query's result size"
},
{
"msg_contents": "?? Well, the truth does not show the data that I request, what I request is\nthat by configuring some parameter, the size of the obtained records can be\nobtained from the execution of a query something similar to the\nlog_min_duration_statement parameter\n\nNow I think it is pertinent to write here because, according to I have\ncarefully reviewed the documentation, this functionality does not exist\n\nEl mar, 6 de abr. de 2021 a la(s) 14:15, Justin Pryzby (pryzby@telsasoft.com)\nescribió:\n\n> On Tue, Apr 06, 2021 at 02:03:03PM -0500, Hellmuth Vargas wrote:\n> > Could you tell me if it is possible that as well as the configuration\n> that\n> > the log presents the duration of the delayed queries, it can also present\n> > the size of the result data? especially those who want to return a lot of\n> > information\n>\n> I think you can get what you want with auto_explain.\n> https://www.postgresql.org/docs/current/auto-explain.html\n>\n> You can set:\n> auto_explain.log_analyze\n>\n> And then the \"width\" and \"rows\" are logged:\n> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.004 rows=1\n> loops=1)\n>\n> PS, you should first ask on the pgsql-general list, rather than this\n> development list.\n>\n> --\n> Justin\n>\n\n\n-- \nCordialmente,\n\nIng. Hellmuth I. Vargas S.\nEsp. Telemática y Negocios por Internet\nOracle Database 10g Administrator Certified Associate\nEnterpriseDB Certified PostgreSQL 9.3 Associate",
"msg_date": "Wed, 7 Apr 2021 09:12:47 -0500",
"msg_from": "Hellmuth Vargas <hivs77@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL log query's result size"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 7:13 AM Hellmuth Vargas <hivs77@gmail.com> wrote:\n\n>\n> ?? Well, the truth does not show the data that I request, what I request\n> is that by configuring some parameter, the size of the obtained records can\n> be obtained from the execution of a query something similar to the\n> log_min_duration_statement parameter\n>\n\n> Now I think it is pertinent to write here because, according to I have\n> carefully reviewed the documentation, this functionality does not exist\n>\n\nYou were provided the closest answer to what you wanted from what does\nexist today. As you say, your exact request is not available. Feature\nrequests belong on pgsql-general, not pgsql-hackers.\n\nAs this wasn't a question or request about active development work the\n-general list is the preferred location.\n\nDavid J.\n",
"msg_date": "Wed, 7 Apr 2021 07:20:16 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL log query's result size"
},
{
"msg_contents": "Thank you for the clarification.\n\nEl mié, 7 de abr. de 2021 a la(s) 09:20, David G. Johnston (\ndavid.g.johnston@gmail.com) escribió:\n\n> On Wed, Apr 7, 2021 at 7:13 AM Hellmuth Vargas <hivs77@gmail.com> wrote:\n>\n>>\n>> ?? Well, the truth does not show the data that I request, what I request\n>> is that by configuring some parameter, the size of the obtained records can\n>> be obtained from the execution of a query something similar to the\n>> log_min_duration_statement parameter\n>>\n>\n>> Now I think it is pertinent to write here because, according to I have\n>> carefully reviewed the documentation, this functionality does not exist\n>>\n>\n> You were provided the closest answer to what you wanted from what does\n> exist today. As you say, your exact request is not available. Feature\n> requests belong on pgsql-general, not psql-hackers.\n>\n> As this wasn't a question or request about active development work the\n> -general list is the preferred location.\n>\n> David J.\n>\n\n\n-- \nCordialmente,\n\nIng. Hellmuth I. Vargas S.\nEsp. Telemática y Negocios por Internet\nOracle Database 10g Administrator Certified Associate\nEnterpriseDB Certified PostgreSQL 9.3 Associate",
"msg_date": "Wed, 7 Apr 2021 09:48:12 -0500",
"msg_from": "Hellmuth Vargas <hivs77@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL log query's result size"
}
] |
[
{
"msg_contents": "Hi,\n\nBichir's been stuck for the past month and is unable to run regression\ntests since 6a2a70a02018d6362f9841cc2f499cc45405e86b.\n\nIt is interesting that that commit's a month old and probably no other\nclient has complained since, but diving in, I can see that it's been unable\nto even start regression tests after that commit went in.\n\nNote that Bichir is running on WSL1 (not WSL2) - i.e. Windows Subsystem for\nLinux inside Windows 10 - and so isn't really production use-case. The only\nrun that actually got submitted to Buildfarm was from a few days back when\nI killed it after a long wait - see [1].\n\nSince yesterday, I have another run that's again stuck on CREATE DATABASE\n(see outputs below) and although pstack not working may be a limitation of\nthe architecture / installation (unsure), a trace shows it is stuck at poll.\n\nTracing commits, it seems that the commit\n6a2a70a02018d6362f9841cc2f499cc45405e86b broke things and I can confirm\nthat 'make check' works if I rollback to the preceding commit (\n83709a0d5a46559db016c50ded1a95fd3b0d3be6 ).\n\nNot sure if many agree but 2 things stood out here:\n1) Buildfarm never got the message that a commit broke an instance. Ideally\nI'd have expected buildfarm to have an optimistic timeout that could have\nhelped - for e.g. 
right now, the CREATE DATABASE is still stuck since 18\nhrs.\n\n2) bichir is clearly not a production use-case (it takes 5 hrs to complete\na HEAD run!), so let me know if this change is intentional (I guess I'll\nstop maintaining it if so) but thought I'd still put this out in case\nit interests someone.\n\n-\nthanks\nrobins\n\nReference:\n1) Last run that I had to kill -\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-03-31%2012%3A00%3A05\n\n#####################################################\nThe current run is running since yesterday.\n\n\npostgres@WSLv1:/opt/postgres/bf/v11/buildroot/HEAD/bichir.lastrun-logs$\ntail -2 lastcommand.log\nrunning on port 5678 with PID 8715\n============== creating database \"regression\" ==============\n\n\npostgres@WSLv1:/opt/postgres/bf/v11/buildroot/HEAD/bichir.lastrun-logs$ date\nWed Apr 7 12:48:26 AEST 2021\n\n\npostgres@WSLv1:/opt/postgres/bf/v11/buildroot/HEAD/bichir.lastrun-logs$ ls\n-la\ntotal 840\ndrwxrwxr-x 1 postgres postgres 4096 Apr 6 09:00 .\ndrwxrwxr-x 1 postgres postgres 4096 Apr 6 08:55 ..\n-rw-rw-r-- 1 postgres postgres 1358 Apr 6 08:55 SCM-checkout.log\n-rw-rw-r-- 1 postgres postgres 91546 Apr 6 08:56 configure.log\n-rw-rw-r-- 1 postgres postgres 40 Apr 6 08:55 githead.log\n-rw-rw-r-- 1 postgres postgres 2890 Apr 6 09:01 lastcommand.log\n-rw-rw-r-- 1 postgres postgres 712306 Apr 6 09:00 make.log\n\n\nroot@WSLv1:~# pstack 8729\n8729: psql -X -c CREATE DATABASE \"regression\" TEMPLATE=template0\nLC_COLLATE='C' LC_CTYPE='C' postgres\npstack: Bad address\nfailed to read target.\n\n\nroot@WSLv1:~# gdb -batch -ex bt -p 8729\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n0x00007f41a8ea4c84 in __GI___poll (fds=fds@entry=0x7fffe13d7be8,\nnfds=nfds@entry=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29\n29 ../sysdeps/unix/sysv/linux/poll.c: No such file or directory.\n#0 0x00007f41a8ea4c84 in __GI___poll 
(fds=fds@entry=0x7fffe13d7be8,\nnfds=nfds@entry=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29\n#1 0x00007f41a9bc8eb1 in poll (__timeout=<optimized out>, __nfds=1,\n__fds=0x7fffe13d7be8) at /usr/include/x86_64-linux-gnu/bits/poll2.h:46\n#2 pqSocketPoll (end_time=-1, forWrite=0, forRead=1, sock=<optimized out>)\nat fe-misc.c:1133\n#3 pqSocketCheck (conn=0x7fffd979a0b0, forRead=1, forWrite=0, end_time=-1)\nat fe-misc.c:1075\n#4 0x00007f41a9bc8ff0 in pqWaitTimed (forRead=<optimized out>,\nforWrite=<optimized out>, conn=0x7fffd979a0b0, finish_time=<optimized out>)\nat fe-misc.c:1007\n#5 0x00007f41a9bc5ac9 in PQgetResult (conn=0x7fffd979a0b0) at\nfe-exec.c:1963\n#6 0x00007f41a9bc5ea3 in PQexecFinish (conn=0x7fffd979a0b0) at\nfe-exec.c:2306\n#7 0x00007f41a9bc5ef2 in PQexec (conn=<optimized out>,\nquery=query@entry=0x7fffd9799f70\n\"CREATE DATABASE \\\"regression\\\" TEMPLATE=template0 LC_COLLATE='C'\nLC_CTYPE='C'\") at fe-exec.c:2148\n#8 0x00007f41aa21e7a0 in SendQuery (query=0x7fffd9799f70 \"CREATE DATABASE\n\\\"regression\\\" TEMPLATE=template0 LC_COLLATE='C' LC_CTYPE='C'\") at\ncommon.c:1303\n#9 0x00007f41aa2160a6 in main (argc=<optimized out>, argv=<optimized out>)\nat startup.c:369\n\n\n\n#####################################################\n\n\n\nHere we can see that 83709a0d5a46559db016c50ded1a95fd3b0d3be6 goes past\n'CREATE DATABASE'\n=======================\nrobins@WSLv1:~/proj/postgres/postgres$ git checkout\n83709a0d5a46559db016c50ded1a95fd3b0d3be6\nPrevious HEAD position was 6a2a70a020 Use signalfd(2) for epoll latches.\nHEAD is now at 83709a0d5a Use SIGURG rather than SIGUSR1 for latches.\n\nrobins@WSLv1:~/proj/postgres/postgres$ cd src/test/regress/\n\nrobins@WSLv1:~/proj/postgres/postgres/src/test/regress$ make -j4\nNO_LOCALE=1 check\nmake -C ../../../src/backend generated-headers\nrm -rf ./testtablespace\nmake[1]: Entering directory\n'/home/robins/proj/postgres/postgres/src/backend'\nmake -C catalog distprep generated-header-symlinks\nmake -C 
utils distprep generated-header-symlinks\nmkdir ./testtablespace\nmake[2]: Entering directory\n'/home/robins/proj/postgres/postgres/src/backend/utils'\nmake[2]: Nothing to be done for 'distprep'.\nmake[2]: Nothing to be done for 'generated-header-symlinks'.\nmake[2]: Leaving directory\n'/home/robins/proj/postgres/postgres/src/backend/utils'\nmake[2]: Entering directory\n'/home/robins/proj/postgres/postgres/src/backend/catalog'\nmake[2]: Nothing to be done for 'distprep'.\nmake[2]: Nothing to be done for 'generated-header-symlinks'.\nmake[2]: Leaving directory\n'/home/robins/proj/postgres/postgres/src/backend/catalog'\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/src/backend'\nmake -C ../../../src/port all\nrm -rf '/home/robins/proj/postgres/postgres'/tmp_install\nmake[1]: Entering directory '/home/robins/proj/postgres/postgres/src/port'\nmake[1]: Nothing to be done for 'all'.\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/src/port'\nmake -C ../../../src/common all\nmake[1]: Entering directory '/home/robins/proj/postgres/postgres/src/common'\nmake[1]: Nothing to be done for 'all'.\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/src/common'\nmake -C ../../../contrib/spi\nmake[1]: Entering directory\n'/home/robins/proj/postgres/postgres/contrib/spi'\nmake[1]: Nothing to be done for 'all'.\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/contrib/spi'\n/bin/mkdir -p '/home/robins/proj/postgres/postgres'/tmp_install/log\nmake -C '../../..'\nDESTDIR='/home/robins/proj/postgres/postgres'/tmp_install install\n>'/home/robins/proj/postgres/postgres'/tmp_install/log/install.log 2>&1\nmake -j1 checkprep\n>>'/home/robins/proj/postgres/postgres'/tmp_install/log/install.log 2>&1\nPATH=\"/home/robins/proj/postgres/postgres/tmp_install/opt/postgres/master/bin:$PATH\"\nLD_LIBRARY_PATH=\"/home/robins/proj/postgres/postgres/tmp_install/opt/postgres/master/li\nb\" ../../../src/test/regress/pg_regress 
--temp-instance=./tmp_check\n--inputdir=. --bindir= --no-locale --dlpath=. --max-concurrent-tests=20\n --schedule=./parallel_sched ule\n============== removing existing temp instance ==============\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 58080 with PID 25879\n============== creating database \"regression\" ==============\nCREATE DATABASE\nALTER DATABASE\n============== running regression test queries ==============\ntest tablespace ... ok 1239 ms\nparallel group (20 tests): boolean char varchar name text int2 int4 int8\noid float4 float8 bit^CGNUmakefile:132: recipe for target 'check' failed\nmake: *** [check] Interrupt\n\n\n\nBut checking out 6a2a70a02018d6362f9841cc2f499cc45405e86b we can see that\nit hangs at 'CREATE DATABASE'\n=======================================\nrobins@WSLv1:~/proj/postgres/postgres/src/test/regress$ git checkout\n6a2a70a02018d6362f9841cc2f499cc45405e86b\nPrevious HEAD position was 83709a0d5a Use SIGURG rather than SIGUSR1 for\nlatches.\nHEAD is now at 6a2a70a020 Use signalfd(2) for epoll latches.\nrobins@WSLv1:~/proj/postgres/postgres/src/test/regress$ make -j4\nNO_LOCALE=1 check\nmake -C ../../../src/backend generated-headers\nrm -rf ./testtablespace\nmake[1]: Entering directory\n'/home/robins/proj/postgres/postgres/src/backend'\nmake -C catalog distprep generated-header-symlinks\nmake -C utils distprep generated-header-symlinks\nmkdir ./testtablespace\nmake[2]: Entering directory\n'/home/robins/proj/postgres/postgres/src/backend/utils'\nmake[2]: Nothing to be done for 'distprep'.\nmake[2]: Nothing to be done for 'generated-header-symlinks'.\nmake[2]: Leaving directory\n'/home/robins/proj/postgres/postgres/src/backend/utils'\nmake[2]: Entering directory\n'/home/robins/proj/postgres/postgres/src/backend/catalog'\nmake[2]: Nothing to be done for 'distprep'.\nmake[2]: Nothing to be done for 
'generated-header-symlinks'.\nmake[2]: Leaving directory\n'/home/robins/proj/postgres/postgres/src/backend/catalog'\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/src/backend'\nmake -C ../../../src/port all\nrm -rf '/home/robins/proj/postgres/postgres'/tmp_install\nmake[1]: Entering directory '/home/robins/proj/postgres/postgres/src/port'\nmake[1]: Nothing to be done for 'all'.\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/src/port'\nmake -C ../../../src/common all\nmake[1]: Entering directory '/home/robins/proj/postgres/postgres/src/common'\nmake[1]: Nothing to be done for 'all'.\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/src/common'\nmake -C ../../../contrib/spi\nmake[1]: Entering directory\n'/home/robins/proj/postgres/postgres/contrib/spi'\nmake[1]: Nothing to be done for 'all'.\nmake[1]: Leaving directory '/home/robins/proj/postgres/postgres/contrib/spi'\n/bin/mkdir -p '/home/robins/proj/postgres/postgres'/tmp_install/log\nmake -C '../../..'\nDESTDIR='/home/robins/proj/postgres/postgres'/tmp_install install\n>'/home/robins/proj/postgres/postgres'/tmp_install/log/install.log 2>&1\nmake -j1 checkprep\n>>'/home/robins/proj/postgres/postgres'/tmp_install/log/install.log 2>&1\nPATH=\"/home/robins/proj/postgres/postgres/tmp_install/opt/postgres/master/bin:$PATH\"\nLD_LIBRARY_PATH=\"/home/robins/proj/postgres/postgres/tmp_install/opt/postgres/master/lib\"\n ../../../src/test/regress/pg_regress --temp-instance=./tmp_check\n--inputdir=. --bindir= --no-locale --dlpath=. 
--max-concurrent-tests=20\n --schedule=./parallel_schedule\n============== removing existing temp instance ==============\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 58080 with PID 26702\n============== creating database \"regression\" ==============\nstuck here ^^^\n^CCancel request sent\nFATAL: terminating connection due to administrator command\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\ncommand failed: \"psql\" -X -c \"CREATE DATABASE \\\"regression\\\"\nTEMPLATE=template0 LC_COLLATE='C' LC_CTYPE='C'\" \"postgres\"\npg_ctl: PID file\n\"/home/robins/proj/postgres/postgres/src/test/regress/./tmp_check/data/postmaster.pid\"\ndoes not exist\nIs server running?\n\npg_regress: could not stop postmaster: exit code was 256\nGNUmakefile:132: recipe for target 'check' failed\nmake: *** [check] Interrupt",
"msg_date": "Wed, 7 Apr 2021 15:43:43 +1000",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "buildfarm instance bichir stuck"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 5:44 PM Robins Tharakan <tharakan@gmail.com> wrote:\n> Bichir's been stuck for the past month and is unable to run regression tests since 6a2a70a02018d6362f9841cc2f499cc45405e86b.\n\nHrmph. That's \"Use signalfd(2) for epoll latches.\" I had a similar\nreport from an illumos user (but it was intermittent). I have never\nseen such a failure on Linux. My first guess is that these two\nsystems that are doing Linux system call emulation have implemented\nsubtly different semantics, and something is going wrong like this: a\nSIGUSR1 arrives to tell you some important news about a procsignal and\nthe signal handler calls SetLatch(MyLatch) which does kill(MyProcPid,\nSIGURG), but somehow that fails to wake up the epoll() you are\nsleeping in which contains the signalfd that should receive the signal\nand report it by being readable, due to some internal race. Or\nsomething like that. But I haven't been able to verify that theory\nbecause I don't have any of those computers. If it is indeed\nsomething like that and not a bug in my code, then I was thinking that\nthe main tool available to deal with it would be to set WAIT_USE_POLL\nin the relevant template file, so that we don't use the combination of\nepoll + signalfd on illlumos, but then WSL1 thows a spanner in the\nworks because AFAIK it's masquerading as Ubuntu, running PostgreSQL\nfrom an Ubuntu package with a freaky kernel. Hmm.\n\n> It is interesting that that commit's a month old and probably no other client has complained since, but diving in, I can see that it's been unable to even start regression tests after that commit went in.\n\nOh, well at least it's easily reproducible then, that's something!\n\n> Note that Bichir is running on WSL1 (not WSL2) - i.e. Windows Subsystem for Linux inside Windows 10 - and so isn't really production use-case. 
The only run that actually got submitted to Buildfarm was from a few days back when I killed it after a long wait - see [1].\n>\n> Since yesterday, I have another run that's again stuck on CREATE DATABASE (see outputs below) and although pstack not working may be a limitation of the architecture / installation (unsure), a trace shows it is stuck at poll.\n\nThat's actually the client. I guess there is also a backend process\nstuck somewhere in epoll_wait()?\n\n\n",
"msg_date": "Wed, 7 Apr 2021 18:16:28 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "Hi Thomas,\n\nThanks for taking a look at this promptly.\n\n\nOn Wed, 7 Apr 2021 at 16:17, Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 7, 2021 at 5:44 PM Robins Tharakan <tharakan@gmail.com> wrote:\n> > It is interesting that that commit's a month old and probably no other\nclient has complained since, but diving in, I can see that it's been unable\nto even start regression tests after that commit went in.\n>\n> Oh, well at least it's easily reproducible then, that's something!\n\nCorrect. This is easily reproducible on this test-instance, so let me know\nif you want me to test a patch.\n\n\n>\n> That's actually the client. I guess there is also a backend process\n> stuck somewhere in epoll_wait()?\n\nYou're right (and yes my bad, I was looking at the client). The server\nprocess is stuck in epoll_wait(). Let me know if you need me to give any\nother info that may be helpful.\n\n\nroot@WSLv1:~# gdb -batch -ex bt -p 29887\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n0x00007fa087741a07 in epoll_wait (epfd=10, events=0x7fffcbcc5748,\nmaxevents=maxevents@entry=1, timeout=timeout@entry=-1) at\n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n30 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.\n#0 0x00007fa087741a07 in epoll_wait (epfd=10, events=0x7fffcbcc5748,\nmaxevents=maxevents@entry=1, timeout=timeout@entry=-1) at\n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 0x00007fa088c355dc in WaitEventSetWaitBlock (nevents=1,\noccurred_events=0x7fffd2d4c090, cur_timeout=-1, set=0x7fffcbcc56e8) at\nlatch.c:1428\n#2 WaitEventSetWait (set=0x7fffcbcc56e8, timeout=timeout@entry=-1,\noccurred_events=occurred_events@entry=0x7fffd2d4c090, nevents=nevents@entry=1,\nwait_event_info=wait_ev\n#3 0x00007fa088c35a14 in WaitLatch (latch=<optimized out>,\nwakeEvents=wakeEvents@entry=33, timeout=timeout@entry=-1,\nwait_event_info=wait_event_info@entry=134217733) 
at\n#4 0x00007fa088c43ed8 in ConditionVariableTimedSleep (cv=0x7fa0873cc498,\ntimeout=-1, wait_event_info=134217733) at condition_variable.c:163\n#5 0x00007fa088bba8bc in RequestCheckpoint (flags=flags@entry=44) at\ncheckpointer.c:1017\n#6 0x00007fa088a46315 in createdb (pstate=pstate@entry=0x7fffcbcebbc0,\nstmt=stmt@entry=0x7fffcbcca558) at dbcommands.c:711\n.\n.\n.\n\n-\nrobins",
"msg_date": "Wed, 7 Apr 2021 17:30:39 +1000",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "\nOn 4/7/21 2:16 AM, Thomas Munro wrote:\n> On Wed, Apr 7, 2021 at 5:44 PM Robins Tharakan <tharakan@gmail.com> wrote:\n>> Bichir's been stuck for the past month and is unable to run regression tests since 6a2a70a02018d6362f9841cc2f499cc45405e86b.\n> Hrmph. That's \"Use signalfd(2) for epoll latches.\" I had a similar\n> report from an illumos user (but it was intermittent). I have never\n> seen such a failure on Linux. My first guess is that these two\n> systems that are doing Linux system call emulation have implemented\n> subtly different semantics, and something is going wrong like this: a\n> SIGUSR1 arrives to tell you some important news about a procsignal and\n> the signal handler calls SetLatch(MyLatch) which does kill(MyProcPid,\n> SIGURG), but somehow that fails to wake up the epoll() you are\n> sleeping in which contains the signalfd that should receive the signal\n> and report it by being readable, due to some internal race. Or\n> something like that. But I haven't been able to verify that theory\n> because I don't have any of those computers. If it is indeed\n> something like that and not a bug in my code, then I was thinking that\n> the main tool available to deal with it would be to set WAIT_USE_POLL\n> in the relevant template file, so that we don't use the combination of\n> epoll + signalfd on illlumos, but then WSL1 thows a spanner in the\n> works because AFAIK it's masquerading as Ubuntu, running PostgreSQL\n> from an Ubuntu package with a freaky kernel. Hmm.\n>\n\nTo test this the OP could just add\n\n\n CPPFLAGS => '-DWAIT_USE_POLL',\n\n\nto his animal's config's config_env stanza.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 07:49:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "Thanks Andrew.\n\nThe build's still running but the CPPFLAGS hint does seem to have helped\n(see below).\n\nUnless advised otherwise, I intend to let that option be, so as to get\nbichir back online. If a future commit 'fixes' things, I could rollback\nthis flag to test things out (or try out other options if required).\n\n\nOn Wed, 7 Apr 2021 at 21:49, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 4/7/21 2:16 AM, Thomas Munro wrote:\n> > On Wed, Apr 7, 2021 at 5:44 PM Robins Tharakan <tharakan@gmail.com>\nwrote:\n> >> Bichir's been stuck for the past month and is unable to run regression\ntests since 6a2a70a02018d6362f9841cc2f499cc45405e86b.\n> > ...If it is indeed\n> > something like that and not a bug in my code, then I was thinking that\n> > the main tool available to deal with it would be to set WAIT_USE_POLL\n> > in the relevant template file, so that we don't use the combination of\n> > epoll + signalfd on illlumos, but then WSL1 thows a spanner in the\n> > works because AFAIK it's masquerading as Ubuntu, running PostgreSQL\n> > from an Ubuntu package with a freaky kernel. 
Hmm.\n> To test this the OP could just add\n> CPPFLAGS => '-DWAIT_USE_POLL',\n> to his animal's config's config_env stanza.\n\nThis did help in getting past the previous hurdle.\n\npostgres@WSLv1:/opt/postgres/bf/v11/buildroot/HEAD/bichir.lastrun-logs$\ngrep CPPFLAGS configure.log| grep using\nconfigure: using CPPFLAGS=-DWAIT_USE_POLL -D_GNU_SOURCE\n-I/usr/include/libxml2\nconfigure:19511: using CPPFLAGS=-DWAIT_USE_POLL -D_GNU_SOURCE\n-I/usr/include/libxml2\n\npostgres@WSLv1:/opt/postgres/bf/v11/buildroot/HEAD/bichir.lastrun-logs$\ngrep -A2 \"creating database\" lastcommand.log\n============== creating database \"regression\" ==============\nCREATE DATABASE\nALTER DATABASE\n\n-\nthanks\nrobins",
"msg_date": "Wed, 7 Apr 2021 22:55:53 +1000",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "Robins Tharakan <tharakan@gmail.com> writes:\n> Not sure if many agree but 2 things stood out here:\n> 1) Buildfarm never got the message that a commit broke an instance. Ideally\n> I'd have expected buildfarm to have an optimistic timeout that could have\n> helped - for e.g. right now, the CREATE DATABASE is still stuck since 18\n> hrs.\n\nAs far as that goes, you can set wait_timeout in the animal's config\nto something comfortably more than the longest run time you expect.\nIt doesn't default to enabled though, possibly because picking a\none-size-fits-all value would be impossible.\n\nI do use it on some of my flakier dinosaurs, and I've noticed that\nwhen it does kick in, the buildfarm run just stops dead and no report\nis sent to the BF server. That has advantages in not cluttering the\nBF status with run-failed-because-of-$weird_problem issues, but it\ndoesn't help from the standpoint of noticing when your animal is stuck.\nMaybe it'd be better to change that behavior.\n\n(I can also attest from personal experience that what had been a\ncomfortable amount of slop when you picked it tends to become less\nso over time. Consider yourself warned.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 13:07:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "\nOn 4/7/21 1:07 PM, Tom Lane wrote:\n> Robins Tharakan <tharakan@gmail.com> writes:\n>> Not sure if many agree but 2 things stood out here:\n>> 1) Buildfarm never got the message that a commit broke an instance. Ideally\n>> I'd have expected buildfarm to have an optimistic timeout that could have\n>> helped - for e.g. right now, the CREATE DATABASE is still stuck since 18\n>> hrs.\n> As far as that goes, you can set wait_timeout in the animal's config\n> to something comfortably more than the longest run time you expect.\n> It doesn't default to enabled though, possibly because picking a\n> one-size-fits-all value would be impossible.\n>\n> I do use it on some of my flakier dinosaurs, and I've noticed that\n> when it does kick in, the buildfarm run just stops dead and no report\n> is sent to the BF server. That has advantages in not cluttering the\n> BF status with run-failed-because-of-$weird_problem issues, but it\n> doesn't help from the standpoint of noticing when your animal is stuck.\n> Maybe it'd be better to change that behavior.\n>\n\nYeah, I'll have a look. It's not simple for a bunch of reasons.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 14:53:25 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/7/21 1:07 PM, Tom Lane wrote:\n>> I do use it on some of my flakier dinosaurs, and I've noticed that\n>> when it does kick in, the buildfarm run just stops dead and no report\n>> is sent to the BF server. That has advantages in not cluttering the\n>> BF status with run-failed-because-of-$weird_problem issues, but it\n>> doesn't help from the standpoint of noticing when your animal is stuck.\n>> Maybe it'd be better to change that behavior.\n\n> Yeah, I'll have a look. It's not simple for a bunch of reasons.\n\nOn further thought, that doesn't seem like the place to fix it.\nI'd rather be able to ask the buildfarm server to send me nagmail\nif my animal hasn't sent a report in N days (where N had better\nbe owner-configurable). This would catch not only animal-is-hung,\nbut also other classes of problems like whole-machine-is-hung or\nyou-broke-your-firewall-configuration-so-it-cant-contact-the-server.\nI've had issues of those sorts before ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 16:02:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "\nOn 4/7/21 4:02 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 4/7/21 1:07 PM, Tom Lane wrote:\n>>> I do use it on some of my flakier dinosaurs, and I've noticed that\n>>> when it does kick in, the buildfarm run just stops dead and no report\n>>> is sent to the BF server. That has advantages in not cluttering the\n>>> BF status with run-failed-because-of-$weird_problem issues, but it\n>>> doesn't help from the standpoint of noticing when your animal is stuck.\n>>> Maybe it'd be better to change that behavior.\n>> Yeah, I'll have a look. It's not simple for a bunch of reasons.\n> On further thought, that doesn't seem like the place to fix it.\n> I'd rather be able to ask the buildfarm server to send me nagmail\n> if my animal hasn't sent a report in N days (where N had better\n> be owner-configurable). This would catch not only animal-is-hung,\n> but also other classes of problems like whole-machine-is-hung or\n> you-broke-your-firewall-configuration-so-it-cant-contact-the-server.\n> I've had issues of those sorts before ...\n>\n> \t\t\t\n\n\n\nThat already exists, and has for a long time. See the 'alerts' stanza of\nyour config file.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 16:27:33 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/7/21 4:02 PM, Tom Lane wrote:\n>> On further thought, that doesn't seem like the place to fix it.\n>> I'd rather be able to ask the buildfarm server to send me nagmail\n>> if my animal hasn't sent a report in N days (where N had better\n>> be owner-configurable).\n\n> That already exists, and has for a long time. See the 'alerts' stanza of\n> your config file.\n\nOh! In that case, I don't think we need anything else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 16:30:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 7:31 PM Robins Tharakan <tharakan@gmail.com> wrote:\n> Correct. This is easily reproducible on this test-instance, so let me know if you want me to test a patch.\n\n From your description it sounds like signals are not arriving at all,\nrather than some more complicated race. Let's go back to basics...\nwhat does the attached program print for you? I see:\n\ntmunro@x1:~/junk$ cc test-signalfd.c\ntmunro@x1:~/junk$ ./a.out\nblocking SIGURG...\ncreating a signalfd to receive SIGURG...\ncreating an epoll set...\nadding signalfd to epoll set...\npolling the epoll set... 0\nsending a signal...\npolling the epoll set... 1",
"msg_date": "Fri, 9 Apr 2021 18:11:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "On Fri, 9 Apr 2021 at 16:12, Thomas Munro <thomas.munro@gmail.com> wrote:\n> From your description it sounds like signals are not arriving at all,\n> rather than some more complicated race. Let's go back to basics...\n> what does the attached program print for you? I see:\n>\n> tmunro@x1:~/junk$ cc test-signalfd.c\n> tmunro@x1:~/junk$ ./a.out\n> blocking SIGURG...\n> creating a signalfd to receive SIGURG...\n> creating an epoll set...\n> adding signalfd to epoll set...\n> polling the epoll set... 0\n> sending a signal...\n> polling the epoll set... 1\n\n\nI get pretty much the same. Some additional info below, although not sure\nif it'd be of any help here.\n\nrobins@WSLv1:~/proj/hackers$ cc test-signalfd.c\n\nrobins@WSLv1:~/proj/hackers$ ./a.out\nblocking SIGURG...\ncreating a signalfd to receive SIGURG...\ncreating an epoll set...\nadding signalfd to epoll set...\npolling the epoll set... 0\nsending a signal...\npolling the epoll set... 1\n\nrobins@WSLv1:~/proj/hackers$ cat /proc/cpuinfo | egrep 'flags|model' | sort\n-u\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx\npdpe1gb rdtscp lm pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3\nfma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt\ntsc_deadline_timer aes xsave osxsave avx f16c rdrand lahf_lm abm\n3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm\nmpx rdseed adx smap clflushopt intel_pt ibrs ibpb stibp ssbd\nmodel : 142\nmodel name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz\n\nrobins@WSLv1:~/proj/hackers$ uname -a\nLinux WSLv1 4.4.0-19041-Microsoft #488-Microsoft Mon Sep 01 13:43:00 PST\n2020 x86_64 x86_64 x86_64 GNU/Linux\n\nC:>wsl -l -v\n NAME STATE VERSION\n* Ubuntu-18.04 Running 1\n\n-\nrobins",
"msg_date": "Fri, 9 Apr 2021 16:44:54 +1000",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm instance bichir stuck"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 6:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 7, 2021 at 7:31 PM Robins Tharakan <tharakan@gmail.com> wrote:\n> > Correct. This is easily reproducible on this test-instance, so let me know if you want me to test a patch.\n>\n> From your description it sounds like signals are not arriving at all,\n> rather than some more complicated race. Let's go back to basics...\n\nI was looking into the portability of SIGURG and OOB socket data for\nsomething totally different (hallway track discussion from PGCon,\ncould we use that for query cancel, like FTP does, instead of opening\nanother socket?), and lo and behold, someone has figured out a\nworkaround for this latch problem:\n\nhttps://github.com/microsoft/WSL/issues/8619\n\nI don't really want to add code to scrape uname() output to detect\ndifferent kernels at runtime as shown there, but it doesn't seem to\nmake a difference on Linux if we just always do what was suggested. I\ndidn't look too hard into whether that is the right place to put the\ncall, or really understand *why* it works, and since I am not a\nWindows user and we don't have a WSL1 CI, I can't confirm that it\nworks or explore whether there is some other ordering of operations\nthat would be better but still work, but if that does the trick then\nmaybe we should just do something like the attached.\n\nThoughts?",
"msg_date": "Sun, 30 Jul 2023 11:33:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm instance bichir stuck"
}
] |
[
{
"msg_contents": "Hi\n\nI met a problem in synchronous logical replication. The client hangs when running TRUNCATE TABLE at the publisher.\n\nExample of the procedure:\n------publisher------\ncreate table test (a int primary key);\ncreate publication pub for table test;\n\n------subscriber------\ncreate table test (a int primary key);\ncreate subscription sub connection 'dbname=postgres' publication pub;\n\nThen, set synchronous_standby_names = 'sub' on the publisher, and reload the publisher.\n\n------publisher------\ntruncate test;\n\nThen the publisher's client will wait for a long time. A moment later, the publisher and subscriber will report the following errors.\nSubscriber log\n2021-04-07 12:13:07.700 CST [3542235] logical replication worker ERROR: terminating logical replication worker due to timeout\n2021-04-07 12:13:07.722 CST [3542217] postmaster LOG: background worker \"logical replication worker\" (PID 3542235) exited with exit code 1\n2021-04-07 12:13:07.723 CST [3542357] logical replication worker LOG: logical replication apply worker for subscription \"sub\" has started\n2021-04-07 12:13:07.745 CST [3542357] logical replication worker ERROR: could not start WAL streaming: ERROR: replication slot \"sub\" is active for PID 3542236\nPublisher log\n2021-04-07 12:13:07.745 CST [3542358] walsender ERROR: replication slot \"sub\" is active for PID 3542236\n2021-04-07 12:13:07.745 CST [3542358] walsender STATEMENT: START_REPLICATION SLOT \"sub\" LOGICAL 0/169ECE8 (proto_version '2', publication_names '\"pub\"')\n\nI checked the PG-DOC and found it says that “Replication of TRUNCATE commands is supported”[1], so maybe TRUNCATE is not supported in synchronous logical replication?\n\nIf my understanding is right, maybe PG-DOC can be modified like this. 
Any thoughts?\nReplication of TRUNCATE commands is supported\n->\nReplication of TRUNCATE commands is supported in asynchronous mode\n\n[1] https://www.postgresql.org/docs/devel/logical-replication-restrictions.html\n\nRegards,\nTang\n\n\n",
"msg_date": "Wed, 7 Apr 2021 06:56:15 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 12:26 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> Hi\n>\n> I met a problem in synchronous logical replication. The client hangs when TRUNCATE TABLE at publisher.\n>\n\nCan you please check if the behavior is the same for PG-13? This is\njust to ensure that we have not introduced any bug in PG-14.\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Apr 2021 13:57:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Wednesday, April 7, 2021 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote\r\n\r\n>Can you please check if the behavior is the same for PG-13? This is\r\n>just to ensure that we have not introduced any bug in PG-14.\r\n\r\nYes, the same failure happens on PG-13, too.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Wed, 7 Apr 2021 08:34:50 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Wed, 07 Apr 2021 at 16:34, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> On Wednesday, April 7, 2021 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote\n>\n>>Can you please check if the behavior is the same for PG-13? This is\n>>just to ensure that we have not introduced any bug in PG-14.\n>\n> Yes, same failure happens at PG-13, too.\n>\n\nI found that when we truncate a table in synchronous logical replication,\nLockAcquireExtended() [1] will try to take a lock via fast path and it\nfailed (FastPathStrongRelationLocks->count[fasthashcode] = 1).\nHowever, it can acquire the lock when in asynchronous logical replication.\nI'm not familiar with the locks, any suggestions? What the difference\nbetween sync and async logical replication for locks?\n\n[1]\n if (EligibleForRelationFastPath(locktag, lockmode) &&\n FastPathLocalUseCount < FP_LOCK_SLOTS_PER_BACKEND)\n {\n uint32 fasthashcode = FastPathStrongLockHashPartition(hashcode);\n bool acquired;\n\n /*\n * LWLockAcquire acts as a memory sequencing point, so it's safe to\n * assume that any strong locker whose increment to\n * FastPathStrongRelationLocks->counts becomes visible after we test\n * it has yet to begin to transfer fast-path locks.\n */\n LWLockAcquire(&MyProc->fpInfoLock, LW_EXCLUSIVE);\n if (FastPathStrongRelationLocks->count[fasthashcode] != 0)\n acquired = false;\n else\n acquired = FastPathGrantRelationLock(locktag->locktag_field2,\n lockmode);\n LWLockRelease(&MyProc->fpInfoLock);\n if (acquired)\n {\n /*\n * The locallock might contain stale pointers to some old shared\n * objects; we MUST reset these to null before considering the\n * lock to be acquired via fast-path.\n */\n locallock->lock = NULL;\n locallock->proclock = NULL;\n GrantLockLocal(locallock, owner);\n return LOCKACQUIRE_OK;\n }\n }\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 08 Apr 2021 19:20:48 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Thu, 08 Apr 2021 at 19:20, Japin Li <japinli@hotmail.com> wrote:\n> On Wed, 07 Apr 2021 at 16:34, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n>> On Wednesday, April 7, 2021 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote\n>>\n>>>Can you please check if the behavior is the same for PG-13? This is\n>>>just to ensure that we have not introduced any bug in PG-14.\n>>\n>> Yes, same failure happens at PG-13, too.\n>>\n>\n> I found that when we truncate a table in synchronous logical replication,\n> LockAcquireExtended() [1] will try to take a lock via fast path and it\n> failed (FastPathStrongRelationLocks->count[fasthashcode] = 1).\n> However, it can acquire the lock when in asynchronous logical replication.\n> I'm not familiar with the locks, any suggestions? What the difference\n> between sync and async logical replication for locks?\n>\n\nAfter some analysis, I found that when the TRUNCATE finishes, it will call\nSyncRepWaitForLSN(); for asynchronous logical replication, it will exit\nearly, and then it calls ResourceOwnerRelease(RESOURCE_RELEASE_LOCKS) to\nrelease the locks, so the walsender can acquire the lock.\n\nBut for synchronous logical replication, SyncRepWaitForLSN() will wait\nfor the specified LSN to be confirmed, so it cannot release the lock, and\nthe walsender tries to acquire the lock. Obviously, it cannot acquire the\nlock, because the lock is held by the process which performs the TRUNCATE\ncommand. This is why the TRUNCATE in synchronous logical replication is\nblocked.\n\n\nI don't know if it makes sense to fix this; if so, how do we fix it?\nThoughts?\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 10 Apr 2021 22:52:10 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "Hi\n\n\nOn Saturday, April 10, 2021 11:52 PM Japin Li <japinli@hotmail.com> wrote:\n> On Thu, 08 Apr 2021 at 19:20, Japin Li <japinli@hotmail.com> wrote:\n> > On Wed, 07 Apr 2021 at 16:34, tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >> On Wednesday, April 7, 2021 5:28 PM Amit Kapila\n> >> <amit.kapila16@gmail.com> wrote\n> >>\n> >>>Can you please check if the behavior is the same for PG-13? This is\n> >>>just to ensure that we have not introduced any bug in PG-14.\n> >>\n> >> Yes, same failure happens at PG-13, too.\n> >>\n> >\n> > I found that when we truncate a table in synchronous logical\n> > replication,\n> > LockAcquireExtended() [1] will try to take a lock via fast path and it\n> > failed (FastPathStrongRelationLocks->count[fasthashcode] = 1).\n> > However, it can acquire the lock when in asynchronous logical replication.\n> > I'm not familiar with the locks, any suggestions? What the difference\n> > between sync and async logical replication for locks?\n> >\n> \n> After some analyze, I find that when the TRUNCATE finish, it will call\n> SyncRepWaitForLSN(), for asynchronous logical replication, it will exit early,\n> and then it calls ResourceOwnerRelease(RESOURCE_RELEASE_LOCKS) to\n> release the locks, so the walsender can acquire the lock.\n> \n> But for synchronous logical replication, SyncRepWaitForLSN() will wait for\n> specified LSN to be confirmed, so it cannot release the lock, and the\n> walsender try to acquire the lock. 
Obviously, it cannot acquire the lock,\n> because the lock hold by the process which performs TRUNCATE command.\n> This is why the TRUNCATE in synchronous logical replication is blocked.\nYeah, the TRUNCATE waits in SyncRepWaitForLSN() while\nthe walsender is blocked by the AccessExclusiveLock taken by it,\nwhich means the subscriber cannot consume the change, leading to a sort of deadlock.\n\n\nOn Wednesday, April 7, 2021 3:56 PM tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> I checked the PG-DOC, found it says that “Replication of TRUNCATE\n> commands is supported”[1], so maybe TRUNCATE is not supported in\n> synchronous logical replication?\n> \n> If my understanding is right, maybe PG-DOC can be modified like this. Any\n> thought?\n> Replication of TRUNCATE commands is supported\n> ->\n> Replication of TRUNCATE commands is supported in asynchronous mode\nI'm not sure if this becomes the final solution,\nbut if we take a measure to fix the doc, we have to be careful about the description,\nbecause when we remove the primary keys of the 'test' tables in the scenario in [1], we don't have this issue.\nIt means TRUNCATE in synchronous logical replication is not always blocked.\n\nHaving the primary key on the pub only causes the hang.\nAlso, I can observe the same hang using REPLICA IDENTITY USING INDEX and without a primary key on the pub,\nwhile I cannot reproduce the problem with REPLICA IDENTITY FULL and without a primary key.\nThis difference comes from logicalrep_write_attrs(), which has a branch to call RelationGetIndexAttrBitmap().\nTherefore, strictly speaking, the description above is not correct, I think.\n\nI'll share my analysis when I get a better idea to address this.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB6113C2499C7DC70EE55ADB82FB759%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 04:33:23 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:03 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> > I checked the PG-DOC, found it says that “Replication of TRUNCATE\n> > commands is supported”[1], so maybe TRUNCATE is not supported in\n> > synchronous logical replication?\n> >\n> > If my understanding is right, maybe PG-DOC can be modified like this. Any\n> > thought?\n> > Replication of TRUNCATE commands is supported\n> > ->\n> > Replication of TRUNCATE commands is supported in asynchronous mode\n> I'm not sure if this becomes the final solution,\n>\n\nI think unless the solution is not possible or extremely complicated,\ngoing via this route doesn't seem advisable.\n\n> but if we take a measure to fix the doc, we have to be careful for the description,\n> because when we remove the primary keys of 'test' tables on the scenario in [1], we don't have this issue.\n> It means TRUNCATE in synchronous logical replication is not always blocked.\n>\n\nThe problem happens only when we try to fetch IDENTITY_KEY attributes\nbecause pgoutput uses RelationGetIndexAttrBitmap() to get that\ninformation, which locks the required indexes. Now, because TRUNCATE\nhas already acquired an exclusive lock on the index, it seems to\ncreate a sort of deadlock where the actual Truncate operation waits\nfor logical replication of the operation to complete and logical\nreplication waits for the actual Truncate operation to finish.\n\nDo we really need to use RelationGetIndexAttrBitmap() to build\nIDENTITY_KEY attributes? During decoding, we don't even lock the main\nrelation, we just scan the system table and build that information\nusing a historic snapshot. Can't we do something similar here?\n\nAdding Petr J. and Peter E. to get their views as this seems to be an\nold problem (since the decoding of the Truncate operation was introduced).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 12:28:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Monday, April 12, 2021 3:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Apr 12, 2021 at 10:03 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > but if we take a measure to fix the doc, we have to be careful for the\r\n> > description, because when we remove the primary keys of 'test' tables on the\r\n> scenario in [1], we don't have this issue.\r\n> > It means TRUNCATE in synchronous logical replication is not always\r\n> blocked.\r\n> >\r\n> \r\n> The problem happens only when we try to fetch IDENTITY_KEY attributes\r\n> because pgoutput uses RelationGetIndexAttrBitmap() to get that information\r\n> which locks the required indexes. Now, because TRUNCATE has already\r\n> acquired an exclusive lock on the index, it seems to create a sort of deadlock\r\n> where the actual Truncate operation waits for logical replication of operation to\r\n> complete and logical replication waits for actual Truncate operation to finish.\r\n> \r\n> Do we really need to use RelationGetIndexAttrBitmap() to build IDENTITY_KEY\r\n> attributes? During decoding, we don't even lock the main relation, we just scan\r\n> the system table and build that information using a historic snapshot. 
Can't we\r\n> do something similar here?\r\nI think we can build the IDENTITY_KEY attributes with NoLock\r\ninstead of calling RelationGetIndexAttrBitmap().\r\n\r\nWhen we trace back the caller side of logicalrep_write_attrs(),\r\ndoing the thing equivalent to RelationGetIndexAttrBitmap()\r\nfor INDEX_ATTR_BITMAP_IDENTITY_KEY impacts only pgoutput_truncate.\r\n\r\nOTOH, I can't find codes similar to RelationGetIndexAttrBitmap()\r\nin pgoutput_* functions and in the file of relcache.c.\r\nTherefore, I'd like to discuss how to address the hang.\r\n\r\nMy first idea is to extract some parts of RelationGetIndexAttrBitmap()\r\nonly for INDEX_ATTR_BITMAP_IDENTITY_KEY and implement those\r\neither in a logicalrep_write_attrs() or as a new function.\r\nRelationGetIndexAttrBitmap() has 'restart' label for goto statement\r\nin order to ensure to return up-to-date attribute bitmaps, so\r\nI prefer having a new function when we choose this direction.\r\nHaving that goto in logicalrep_write_attrs() makes it a little bit messy, I felt.\r\n\r\nThe other direction might be to extend RelationGetIndexAttrBitmap's function definition\r\nto accept lockmode to give NoLock from logicalrep_write_attrs().\r\nBut, this change impacts on other several callers so is not as good as the first direction above, I think.\r\n\r\nIf someone has any better idea, please let me know.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 13 Apr 2021 13:54:06 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\n> On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Mon, Apr 12, 2021 at 10:03 AM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n>> \n>>> I checked the PG-DOC, found it says that “Replication of TRUNCATE\n>>> commands is supported”[1], so maybe TRUNCATE is not supported in\n>>> synchronous logical replication?\n>>> \n>>> If my understanding is right, maybe PG-DOC can be modified like this. Any\n>>> thought?\n>>> Replication of TRUNCATE commands is supported\n>>> ->\n>>> Replication of TRUNCATE commands is supported in asynchronous mode\n>> I'm not sure if this becomes the final solution,\n>> \n> \n> I think unless the solution is not possible or extremely complicated\n> going via this route doesn't seem advisable.\n> \n>> but if we take a measure to fix the doc, we have to be careful for the description,\n>> because when we remove the primary keys of 'test' tables on the scenario in [1], we don't have this issue.\n>> It means TRUNCATE in synchronous logical replication is not always blocked.\n>> \n> \n> The problem happens only when we try to fetch IDENTITY_KEY attributes\n> because pgoutput uses RelationGetIndexAttrBitmap() to get that\n> information which locks the required indexes. Now, because TRUNCATE\n> has already acquired an exclusive lock on the index, it seems to\n> create a sort of deadlock where the actual Truncate operation waits\n> for logical replication of operation to complete and logical\n> replication waits for actual Truncate operation to finish.\n> \n> Do we really need to use RelationGetIndexAttrBitmap() to build\n> IDENTITY_KEY attributes? During decoding, we don't even lock the main\n> relation, we just scan the system table and build that information\n> using a historic snapshot. Can't we do something similar here?\n> \n> Adding Petr J. and Peter E. 
to know their views as this seems to be an\n> old problem (since the decoding of Truncate operation is introduced).\n\nWe used RelationGetIndexAttrBitmap because it already existed, no other reason. I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n\n--\nPetr\n\n\n\n",
"msg_date": "Tue, 13 Apr 2021 16:37:47 +0200",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Tue, 13 Apr 2021 at 21:54, osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\n> On Monday, April 12, 2021 3:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Mon, Apr 12, 2021 at 10:03 AM osumi.takamichi@fujitsu.com\n>> <osumi.takamichi@fujitsu.com> wrote:\n>> > but if we take a measure to fix the doc, we have to be careful for the\n>> > description, because when we remove the primary keys of 'test' tables on the\n>> scenario in [1], we don't have this issue.\n>> > It means TRUNCATE in synchronous logical replication is not always\n>> blocked.\n>> >\n>> \n>> The problem happens only when we try to fetch IDENTITY_KEY attributes\n>> because pgoutput uses RelationGetIndexAttrBitmap() to get that information\n>> which locks the required indexes. Now, because TRUNCATE has already\n>> acquired an exclusive lock on the index, it seems to create a sort of deadlock\n>> where the actual Truncate operation waits for logical replication of operation to\n>> complete and logical replication waits for actual Truncate operation to finish.\n>> \n>> Do we really need to use RelationGetIndexAttrBitmap() to build IDENTITY_KEY\n>> attributes? During decoding, we don't even lock the main relation, we just scan\n>> the system table and build that information using a historic snapshot. 
Can't we\n>> do something similar here?\n> I think we can build the IDENTITY_KEY attributes with NoLock\n> instead of calling RelationGetIndexAttrBitmap().\n>\n> When we trace back the caller side of logicalrep_write_attrs(),\n> doing the thing equivalent to RelationGetIndexAttrBitmap()\n> for INDEX_ATTR_BITMAP_IDENTITY_KEY impacts only pgoutput_truncate.\n>\n> OTOH, I can't find codes similar to RelationGetIndexAttrBitmap()\n> in pgoutput_* functions and in the file of relcache.c.\n> Therefore, I'd like to discuss how to address the hang.\n>\n> My first idea is to extract some parts of RelationGetIndexAttrBitmap()\n> only for INDEX_ATTR_BITMAP_IDENTITY_KEY and implement those\n> either in a logicalrep_write_attrs() or as a new function.\n> RelationGetIndexAttrBitmap() has 'restart' label for goto statement\n> in order to ensure to return up-to-date attribute bitmaps, so\n> I prefer having a new function when we choose this direction.\n> Having that goto in logicalrep_write_attrs() makes it a little bit messy, I felt.\n>\n> The other direction might be to extend RelationGetIndexAttrBitmap's function definition\n> to accept lockmode to give NoLock from logicalrep_write_attrs().\n> But, this change impacts on other several callers so is not as good as the first direction above, I think.\n>\n> If someone has any better idea, please let me know.\n>\n\nI think the first idea is better than the second. OTOH, can we release the\nlocks before SyncRepWaitForLSN(), since it already flush to local WAL files.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:38:13 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Tuesday, April 13, 2021 11:38 PM Petr Jelinek <petr.jelinek@enterprisedb.com> wrote:\r\n> > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > On Mon, Apr 12, 2021 at 10:03 AM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> >>\r\n> >>> I checked the PG-DOC, found it says that “Replication of TRUNCATE\r\n> >>> commands is supported”[1], so maybe TRUNCATE is not supported in\r\n> >>> synchronous logical replication?\r\n> >>>\r\n> >>> If my understanding is right, maybe PG-DOC can be modified like\r\n> >>> this. Any thought?\r\n> >>> Replication of TRUNCATE commands is supported\r\n> >>> ->\r\n> >>> Replication of TRUNCATE commands is supported in asynchronous\r\n> mode\r\n> >> I'm not sure if this becomes the final solution,\r\n> >>\r\n> >\r\n> > I think unless the solution is not possible or extremely complicated\r\n> > going via this route doesn't seem advisable.\r\n> >\r\n> >> but if we take a measure to fix the doc, we have to be careful for\r\n> >> the description, because when we remove the primary keys of 'test' tables\r\n> on the scenario in [1], we don't have this issue.\r\n> >> It means TRUNCATE in synchronous logical replication is not always\r\n> blocked.\r\n> >>\r\n> >\r\n> > The problem happens only when we try to fetch IDENTITY_KEY attributes\r\n> > because pgoutput uses RelationGetIndexAttrBitmap() to get that\r\n> > information which locks the required indexes. Now, because TRUNCATE\r\n> > has already acquired an exclusive lock on the index, it seems to\r\n> > create a sort of deadlock where the actual Truncate operation waits\r\n> > for logical replication of operation to complete and logical\r\n> > replication waits for actual Truncate operation to finish.\r\n> >\r\n> > Do we really need to use RelationGetIndexAttrBitmap() to build\r\n> > IDENTITY_KEY attributes? 
During decoding, we don't even lock the main\r\n> > relation, we just scan the system table and build that information\r\n> > using a historic snapshot. Can't we do something similar here?\r\n> >\r\n> > Adding Petr J. and Peter E. to know their views as this seems to be an\r\n> > old problem (since the decoding of Truncate operation is introduced).\r\n> \r\n> We used RelationGetIndexAttrBitmap because it already existed, no other\r\n> reason.I am not sure what exact locking we need but I would have guessed at\r\n> least AccessShareLock would be needed.\r\nThis was true.\r\n\r\nHaving a look at the comment of index_open(), there's a description of basic rule\r\nthat NoLock should be used if appropriate lock on the index is already taken.\r\nAnd, making the walsender use NoLock to build the attributes\r\nleads us to the Assert in the relation_open().\r\n\r\nPlease ignore the two ideas I suggested in another mail,\r\nwhich doesn't follow the basic and work.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 14 Apr 2021 07:48:16 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Wednesday, April 14, 2021 11:38 AM Japin Li <japinli@hotmail.com> wrote:\n> On Tue, 13 Apr 2021 at 21:54, osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> > On Monday, April 12, 2021 3:58 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> On Mon, Apr 12, 2021 at 10:03 AM osumi.takamichi@fujitsu.com\n> >> <osumi.takamichi@fujitsu.com> wrote:\n> >> > but if we take a measure to fix the doc, we have to be careful for\n> >> > the description, because when we remove the primary keys of 'test'\n> >> > tables on the\n> >> scenario in [1], we don't have this issue.\n> >> > It means TRUNCATE in synchronous logical replication is not always\n> >> blocked.\n> >> >\n> >>\n> >> The problem happens only when we try to fetch IDENTITY_KEY attributes\n> >> because pgoutput uses RelationGetIndexAttrBitmap() to get that\n> >> information which locks the required indexes. Now, because TRUNCATE\n> >> has already acquired an exclusive lock on the index, it seems to\n> >> create a sort of deadlock where the actual Truncate operation waits\n> >> for logical replication of operation to complete and logical replication waits\n> for actual Truncate operation to finish.\n> >>\n> >> Do we really need to use RelationGetIndexAttrBitmap() to build\n> >> IDENTITY_KEY attributes? During decoding, we don't even lock the main\n> >> relation, we just scan the system table and build that information\n> >> using a historic snapshot. 
Can't we do something similar here?\n> > I think we can build the IDENTITY_KEY attributes with NoLock instead\n> > of calling RelationGetIndexAttrBitmap().\n> >\n> > When we trace back the caller side of logicalrep_write_attrs(), doing\n> > the thing equivalent to RelationGetIndexAttrBitmap() for\n> > INDEX_ATTR_BITMAP_IDENTITY_KEY impacts only pgoutput_truncate.\n> >\n> > OTOH, I can't find codes similar to RelationGetIndexAttrBitmap() in\n> > pgoutput_* functions and in the file of relcache.c.\n> > Therefore, I'd like to discuss how to address the hang.\n> >\n> > My first idea is to extract some parts of RelationGetIndexAttrBitmap()\n> > only for INDEX_ATTR_BITMAP_IDENTITY_KEY and implement those either\n> in\n> > a logicalrep_write_attrs() or as a new function.\n> > RelationGetIndexAttrBitmap() has 'restart' label for goto statement in\n> > order to ensure to return up-to-date attribute bitmaps, so I prefer\n> > having a new function when we choose this direction.\n> > Having that goto in logicalrep_write_attrs() makes it a little bit messy, I felt.\n> >\n> > The other direction might be to extend RelationGetIndexAttrBitmap's\n> > function definition to accept lockmode to give NoLock from\n> logicalrep_write_attrs().\n> > But, this change impacts on other several callers so is not as good as the first\n> direction above, I think.\n> >\n> > If someone has any better idea, please let me know.\n> >\n> \n> I think the first idea is better than the second. OTOH, can we release the locks\n> before SyncRepWaitForLSN(), since it already flush to local WAL files.\nThank you for your comments.\nI didn't mean to change and touch TRUNCATE side to release the locks,\nbecause I expected that its AccessExclusiveLock\nprotects any other operation (e.g. DROP INDEX) to the table\nwhich affects IDENTITY KEY building. But, now as I said in another e-mail,\nboth ideas above can't work. Really sorry for making noises.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 14 Apr 2021 08:22:25 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The problem happens only when we try to fetch IDENTITY_KEY attributes\n> > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n> > information which locks the required indexes. Now, because TRUNCATE\n> > has already acquired an exclusive lock on the index, it seems to\n> > create a sort of deadlock where the actual Truncate operation waits\n> > for logical replication of operation to complete and logical\n> > replication waits for actual Truncate operation to finish.\n> >\n> > Do we really need to use RelationGetIndexAttrBitmap() to build\n> > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n> > relation, we just scan the system table and build that information\n> > using a historic snapshot. Can't we do something similar here?\n> >\n> > Adding Petr J. and Peter E. to know their views as this seems to be an\n> > old problem (since the decoding of Truncate operation is introduced).\n>\n> We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n>\n\nFair enough. But I think we should do something about it because using\nthe same (RelationGetIndexAttrBitmap) just breaks the synchronous\nlogical replication. I think this is broken since the logical\nreplication of Truncate is supported.\n\n> I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n>\n\nAre you telling that we need AccessShareLock on the index? If so, why\nis it different from how we access the relation during decoding\n(basically in ReorderBufferProcessTXN, we directly use\nRelationIdGetRelation() without any lock on the relation)? 
I think we\ndo it that way because we need it to process WAL entries and we need\nthe relation state based on the historic snapshot, so, even if the\nrelation is later changed/dropped, we are fine with the old state we\ngot. Isn't the same thing applies here in logicalrep_write_attrs? If\nthat is true then some equivalent of RelationGetIndexAttrBitmap where\nwe use RelationIdGetRelation instead of index_open should work? Am, I\nmissing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 15:31:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n> <petr.jelinek@enterprisedb.com> wrote:\n> >\n> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n> > > information which locks the required indexes. Now, because TRUNCATE\n> > > has already acquired an exclusive lock on the index, it seems to\n> > > create a sort of deadlock where the actual Truncate operation waits\n> > > for logical replication of operation to complete and logical\n> > > replication waits for actual Truncate operation to finish.\n> > >\n> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n> > > relation, we just scan the system table and build that information\n> > > using a historic snapshot. Can't we do something similar here?\n> > >\n> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n> > > old problem (since the decoding of Truncate operation is introduced).\n> >\n> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n> >\n>\n> Fair enough. But I think we should do something about it because using\n> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n> logical replication. I think this is broken since the logical\n> replication of Truncate is supported.\n>\n> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n> >\n>\n> Are you telling that we need AccessShareLock on the index? If so, why\n> is it different from how we access the relation during decoding\n> (basically in ReorderBufferProcessTXN, we directly use\n> RelationIdGetRelation() without any lock on the relation)? 
I think we\n> do it that way because we need it to process WAL entries and we need\n> the relation state based on the historic snapshot, so, even if the\n> relation is later changed/dropped, we are fine with the old state we\n> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n> that is true then some equivalent of RelationGetIndexAttrBitmap where\n> we use RelationIdGetRelation instead of index_open should work?\n>\n\nToday, again I have thought about this and don't see a problem with\nthe above idea. If the above understanding is correct, then I think\nfor our purpose in pgoutput, we just need to call RelationGetIndexList\nand then build the idattr list for relation->rd_replidindex. We can\nthen cache it in relation->rd_idattr. I am not sure if it is really\nrequired to do all the other work in RelationGetIndexAttrBitmap.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 15:52:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n>> <petr.jelinek@enterprisedb.com> wrote:\n>> >\n>> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > >\n>> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n>> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n>> > > information which locks the required indexes. Now, because TRUNCATE\n>> > > has already acquired an exclusive lock on the index, it seems to\n>> > > create a sort of deadlock where the actual Truncate operation waits\n>> > > for logical replication of operation to complete and logical\n>> > > replication waits for actual Truncate operation to finish.\n>> > >\n>> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n>> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n>> > > relation, we just scan the system table and build that information\n>> > > using a historic snapshot. Can't we do something similar here?\n>> > >\n>> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n>> > > old problem (since the decoding of Truncate operation is introduced).\n>> >\n>> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n>> >\n>>\n>> Fair enough. But I think we should do something about it because using\n>> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n>> logical replication. I think this is broken since the logical\n>> replication of Truncate is supported.\n>>\n>> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n>> >\n>>\n>> Are you telling that we need AccessShareLock on the index? 
If so, why\n>> is it different from how we access the relation during decoding\n>> (basically in ReorderBufferProcessTXN, we directly use\n>> RelationIdGetRelation() without any lock on the relation)? I think we\n>> do it that way because we need it to process WAL entries and we need\n>> the relation state based on the historic snapshot, so, even if the\n>> relation is later changed/dropped, we are fine with the old state we\n>> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n>> that is true then some equivalent of RelationGetIndexAttrBitmap where\n>> we use RelationIdGetRelation instead of index_open should work?\n>>\n>\n> Today, again I have thought about this and don't see a problem with\n> the above idea. If the above understanding is correct, then I think\n> for our purpose in pgoutput, we just need to call RelationGetIndexList\n> and then build the idattr list for relation->rd_replidindex.\n\nSorry, I don't know how can we build the idattr without open the index.\nIf we should open the index, then we should use NoLock, since the TRUNCATE\nside hold AccessExclusiveLock. As Osumi points out in [1], The NoLock mode\nassumes that the appropriate lock on the index is already taken.\n\nPlease let me know if I have misunderstood.\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB488834BDBD67BFF2FB048B3DED4E9%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 19:00:40 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 4:30 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n> >> <petr.jelinek@enterprisedb.com> wrote:\n> >> >\n> >> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > >\n> >> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n> >> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n> >> > > information which locks the required indexes. Now, because TRUNCATE\n> >> > > has already acquired an exclusive lock on the index, it seems to\n> >> > > create a sort of deadlock where the actual Truncate operation waits\n> >> > > for logical replication of operation to complete and logical\n> >> > > replication waits for actual Truncate operation to finish.\n> >> > >\n> >> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n> >> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n> >> > > relation, we just scan the system table and build that information\n> >> > > using a historic snapshot. Can't we do something similar here?\n> >> > >\n> >> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n> >> > > old problem (since the decoding of Truncate operation is introduced).\n> >> >\n> >> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n> >> >\n> >>\n> >> Fair enough. But I think we should do something about it because using\n> >> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n> >> logical replication. 
I think this is broken since the logical\n> >> replication of Truncate is supported.\n> >>\n> >> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n> >> >\n> >>\n> >> Are you telling that we need AccessShareLock on the index? If so, why\n> >> is it different from how we access the relation during decoding\n> >> (basically in ReorderBufferProcessTXN, we directly use\n> >> RelationIdGetRelation() without any lock on the relation)? I think we\n> >> do it that way because we need it to process WAL entries and we need\n> >> the relation state based on the historic snapshot, so, even if the\n> >> relation is later changed/dropped, we are fine with the old state we\n> >> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n> >> that is true then some equivalent of RelationGetIndexAttrBitmap where\n> >> we use RelationIdGetRelation instead of index_open should work?\n> >>\n> >\n> > Today, again I have thought about this and don't see a problem with\n> > the above idea. If the above understanding is correct, then I think\n> > for our purpose in pgoutput, we just need to call RelationGetIndexList\n> > and then build the idattr list for relation->rd_replidindex.\n>\n> Sorry, I don't know how can we build the idattr without open the index.\n> If we should open the index, then we should use NoLock, since the TRUNCATE\n> side hold AccessExclusiveLock. As Osumi points out in [1], The NoLock mode\n> assumes that the appropriate lock on the index is already taken.\n>\n\nWhy can't we use RelationIdGetRelation() by passing the required\nindexOid to it?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 16:55:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Thu, 15 Apr 2021 at 19:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Apr 15, 2021 at 4:30 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n>> >> <petr.jelinek@enterprisedb.com> wrote:\n>> >> >\n>> >> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> > >\n>> >> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n>> >> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n>> >> > > information which locks the required indexes. Now, because TRUNCATE\n>> >> > > has already acquired an exclusive lock on the index, it seems to\n>> >> > > create a sort of deadlock where the actual Truncate operation waits\n>> >> > > for logical replication of operation to complete and logical\n>> >> > > replication waits for actual Truncate operation to finish.\n>> >> > >\n>> >> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n>> >> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n>> >> > > relation, we just scan the system table and build that information\n>> >> > > using a historic snapshot. Can't we do something similar here?\n>> >> > >\n>> >> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n>> >> > > old problem (since the decoding of Truncate operation is introduced).\n>> >> >\n>> >> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n>> >> >\n>> >>\n>> >> Fair enough. But I think we should do something about it because using\n>> >> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n>> >> logical replication. 
I think this is broken since the logical\n>> >> replication of Truncate is supported.\n>> >>\n>> >> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n>> >> >\n>> >>\n>> >> Are you telling that we need AccessShareLock on the index? If so, why\n>> >> is it different from how we access the relation during decoding\n>> >> (basically in ReorderBufferProcessTXN, we directly use\n>> >> RelationIdGetRelation() without any lock on the relation)? I think we\n>> >> do it that way because we need it to process WAL entries and we need\n>> >> the relation state based on the historic snapshot, so, even if the\n>> >> relation is later changed/dropped, we are fine with the old state we\n>> >> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n>> >> that is true then some equivalent of RelationGetIndexAttrBitmap where\n>> >> we use RelationIdGetRelation instead of index_open should work?\n>> >>\n>> >\n>> > Today, again I have thought about this and don't see a problem with\n>> > the above idea. If the above understanding is correct, then I think\n>> > for our purpose in pgoutput, we just need to call RelationGetIndexList\n>> > and then build the idattr list for relation->rd_replidindex.\n>>\n>> Sorry, I don't know how can we build the idattr without open the index.\n>> If we should open the index, then we should use NoLock, since the TRUNCATE\n>> side hold AccessExclusiveLock. As Osumi points out in [1], The NoLock mode\n>> assumes that the appropriate lock on the index is already taken.\n>>\n>\n> Why can't we use RelationIdGetRelation() by passing the required\n> indexOid to it?\n\nThanks for your reminder. It might be a way to solve this problem.\nI'll try it later.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 10:02:25 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Thu, 15 Apr 2021 at 19:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Apr 15, 2021 at 4:30 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n>> >> <petr.jelinek@enterprisedb.com> wrote:\n>> >> >\n>> >> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> > >\n>> >> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n>> >> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n>> >> > > information which locks the required indexes. Now, because TRUNCATE\n>> >> > > has already acquired an exclusive lock on the index, it seems to\n>> >> > > create a sort of deadlock where the actual Truncate operation waits\n>> >> > > for logical replication of operation to complete and logical\n>> >> > > replication waits for actual Truncate operation to finish.\n>> >> > >\n>> >> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n>> >> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n>> >> > > relation, we just scan the system table and build that information\n>> >> > > using a historic snapshot. Can't we do something similar here?\n>> >> > >\n>> >> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n>> >> > > old problem (since the decoding of Truncate operation is introduced).\n>> >> >\n>> >> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n>> >> >\n>> >>\n>> >> Fair enough. But I think we should do something about it because using\n>> >> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n>> >> logical replication. I think this is broken since the logical\n>> >> replication of Truncate is supported.\n>> >>\n>> >> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n>> >> >\n>> >>\n>> >> Are you telling that we need AccessShareLock on the index? If so, why\n>> >> is it different from how we access the relation during decoding\n>> >> (basically in ReorderBufferProcessTXN, we directly use\n>> >> RelationIdGetRelation() without any lock on the relation)? I think we\n>> >> do it that way because we need it to process WAL entries and we need\n>> >> the relation state based on the historic snapshot, so, even if the\n>> >> relation is later changed/dropped, we are fine with the old state we\n>> >> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n>> >> that is true then some equivalent of RelationGetIndexAttrBitmap where\n>> >> we use RelationIdGetRelation instead of index_open should work?\n>> >>\n>> >\n>> > Today, again I have thought about this and don't see a problem with\n>> > the above idea. If the above understanding is correct, then I think\n>> > for our purpose in pgoutput, we just need to call RelationGetIndexList\n>> > and then build the idattr list for relation->rd_replidindex.\n>>\n>> Sorry, I don't know how can we build the idattr without open the index.\n>> If we should open the index, then we should use NoLock, since the TRUNCATE\n>> side hold AccessExclusiveLock. As Osumi points out in [1], The NoLock mode\n>> assumes that the appropriate lock on the index is already taken.\n>>\n>\n> Why can't we use RelationIdGetRelation() by passing the required\n> indexOid to it?\n\nHi Amit, as your suggested, I try to use RelationIdGetRelation() replace\nthe index_open() to avoid specify the AccessSharedLock, then the TRUNCATE\nwill not be blocked.\n\nAttached patch fixed it. Thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 16 Apr 2021 15:25:26 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "Hi\n\n\nOn Friday, April 16, 2021 11:02 AM Japin Li <japinli@hotmail.com>\n> On Thu, 15 Apr 2021 at 19:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Apr 15, 2021 at 4:30 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >>\n> >> On Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> > On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila\n> <amit.kapila16@gmail.com> wrote:\n> >> >>\n> >> >> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n> >> >> <petr.jelinek@enterprisedb.com> wrote:\n> >> >> >\n> >> >> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> >> > >\n> >> >> > > The problem happens only when we try to fetch IDENTITY_KEY\n> >> >> > > attributes because pgoutput uses RelationGetIndexAttrBitmap()\n> >> >> > > to get that information which locks the required indexes. Now,\n> >> >> > > because TRUNCATE has already acquired an exclusive lock on the\n> >> >> > > index, it seems to create a sort of deadlock where the actual\n> >> >> > > Truncate operation waits for logical replication of operation\n> >> >> > > to complete and logical replication waits for actual Truncate\n> operation to finish.\n> >> >> > >\n> >> >> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n> >> >> > > IDENTITY_KEY attributes? During decoding, we don't even lock\n> >> >> > > the main relation, we just scan the system table and build\n> >> >> > > that information using a historic snapshot. Can't we do something\n> similar here?\n> >> >> > >\n> >> >> > > Adding Petr J. and Peter E. to know their views as this seems\n> >> >> > > to be an old problem (since the decoding of Truncate operation is\n> introduced).\n> >> >> >\n> >> >> > We used RelationGetIndexAttrBitmap because it already existed, no\n> other reason.\n> >> >> >\n> >> >>\n> >> >> Fair enough. But I think we should do something about it because\n> >> >> using the same (RelationGetIndexAttrBitmap) just breaks the\n> >> >> synchronous logical replication. I think this is broken since the\n> >> >> logical replication of Truncate is supported.\n> >> >>\n> >> >> > I am not sure what exact locking we need but I would have guessed\n> at least AccessShareLock would be needed.\n> >> >> >\n> >> >>\n> >> >> Are you telling that we need AccessShareLock on the index? If so,\n> >> >> why is it different from how we access the relation during\n> >> >> decoding (basically in ReorderBufferProcessTXN, we directly use\n> >> >> RelationIdGetRelation() without any lock on the relation)? I think\n> >> >> we do it that way because we need it to process WAL entries and we\n> >> >> need the relation state based on the historic snapshot, so, even\n> >> >> if the relation is later changed/dropped, we are fine with the old\n> >> >> state we got. Isn't the same thing applies here in\n> >> >> logicalrep_write_attrs? If that is true then some equivalent of\n> >> >> RelationGetIndexAttrBitmap where we use RelationIdGetRelation\n> instead of index_open should work?\n> >> >>\n> >> >\n> >> > Today, again I have thought about this and don't see a problem with\n> >> > the above idea. If the above understanding is correct, then I think\n> >> > for our purpose in pgoutput, we just need to call\n> >> > RelationGetIndexList and then build the idattr list for\n> relation->rd_replidindex.\n> >>\n> >> Sorry, I don't know how can we build the idattr without open the index.\n> >> If we should open the index, then we should use NoLock, since the\n> >> TRUNCATE side hold AccessExclusiveLock. As Osumi points out in [1],\n> >> The NoLock mode assumes that the appropriate lock on the index is\n> already taken.\n> >>\n> >\n> > Why can't we use RelationIdGetRelation() by passing the required\n> > indexOid to it?\n> \n> Thanks for your reminder. It might be a way to solve this problem.\nYeah. I've made the 1st patch for this issue.\n\nIn my env, with the patch\nthe TRUNCATE in synchronous logical replication doesn't hang.\nIt's OK with make check-world as well.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Fri, 16 Apr 2021 07:26:25 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 12:56 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> > Thanks for your reminder. It might be a way to solve this problem.\n> Yeah. I've made the 1st patch for this issue.\n>\n> In my env, with the patch\n> the TRUNCATE in synchronous logical replication doesn't hang.\n>\n\nFew initial comments:\n=====================\n1.\n+ relreplindex = relation->rd_replidindex;\n+\n+ /*\n+ * build attributes to idindexattrs.\n+ */\n+ idindexattrs = NULL;\n+ foreach(l, indexoidlist)\n+ {\n+ Oid indexOid = lfirst_oid(l);\n+ Relation indexDesc;\n+ int i;\n+ bool isIDKey; /* replica identity index */\n+\n+ indexDesc = RelationIdGetRelation(indexOid);\n\nWhen you have oid of replica identity index (relreplindex) then what\nis the need to traverse all the indexes?\n\n2.\nIt is better to name the function as RelationGet...\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 14:20:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 12:55 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Thu, 15 Apr 2021 at 19:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Apr 15, 2021 at 4:30 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >>\n> >> On Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >>\n> >> >> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n> >> >> <petr.jelinek@enterprisedb.com> wrote:\n> >> >> >\n> >> >> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >> > >\n> >> >> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n> >> >> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n> >> >> > > information which locks the required indexes. Now, because TRUNCATE\n> >> >> > > has already acquired an exclusive lock on the index, it seems to\n> >> >> > > create a sort of deadlock where the actual Truncate operation waits\n> >> >> > > for logical replication of operation to complete and logical\n> >> >> > > replication waits for actual Truncate operation to finish.\n> >> >> > >\n> >> >> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n> >> >> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n> >> >> > > relation, we just scan the system table and build that information\n> >> >> > > using a historic snapshot. Can't we do something similar here?\n> >> >> > >\n> >> >> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n> >> >> > > old problem (since the decoding of Truncate operation is introduced).\n> >> >> >\n> >> >> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n> >> >> >\n> >> >>\n> >> >> Fair enough. But I think we should do something about it because using\n> >> >> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n> >> >> logical replication. I think this is broken since the logical\n> >> >> replication of Truncate is supported.\n> >> >>\n> >> >> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n> >> >> >\n> >> >>\n> >> >> Are you telling that we need AccessShareLock on the index? If so, why\n> >> >> is it different from how we access the relation during decoding\n> >> >> (basically in ReorderBufferProcessTXN, we directly use\n> >> >> RelationIdGetRelation() without any lock on the relation)? I think we\n> >> >> do it that way because we need it to process WAL entries and we need\n> >> >> the relation state based on the historic snapshot, so, even if the\n> >> >> relation is later changed/dropped, we are fine with the old state we\n> >> >> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n> >> >> that is true then some equivalent of RelationGetIndexAttrBitmap where\n> >> >> we use RelationIdGetRelation instead of index_open should work?\n> >> >>\n> >> >\n> >> > Today, again I have thought about this and don't see a problem with\n> >> > the above idea. If the above understanding is correct, then I think\n> >> > for our purpose in pgoutput, we just need to call RelationGetIndexList\n> >> > and then build the idattr list for relation->rd_replidindex.\n> >>\n> >> Sorry, I don't know how can we build the idattr without open the index.\n> >> If we should open the index, then we should use NoLock, since the TRUNCATE\n> >> side hold AccessExclusiveLock. As Osumi points out in [1], The NoLock mode\n> >> assumes that the appropriate lock on the index is already taken.\n> >>\n> >\n> > Why can't we use RelationIdGetRelation() by passing the required\n> > indexOid to it?\n>\n> Hi Amit, as your suggested, I try to use RelationIdGetRelation() replace\n> the index_open() to avoid specify the AccessSharedLock, then the TRUNCATE\n> will not be blocked.\n>\n\nIt is okay as POC but we can't change the existing function\nRelationGetIndexAttrBitmap. It is used from other places as well. It\nmight be better to write a separate function for this, something like\nwhat Osumi-San's patch is trying to do.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 14:22:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "Hi\r\n\r\n\r\nOn Friday, April 16, 2021 5:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Apr 16, 2021 at 12:56 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > > Thanks for your reminder. It might be a way to solve this problem.\r\n> > Yeah. I've made the 1st patch for this issue.\r\n> >\r\n> > In my env, with the patch\r\n> > the TRUNCATE in synchronous logical replication doesn't hang.\r\n> >\r\n> \r\n> Few initial comments:\r\n> =====================\r\n> 1.\r\n> + relreplindex = relation->rd_replidindex;\r\n> +\r\n> + /*\r\n> + * build attributes to idindexattrs.\r\n> + */\r\n> + idindexattrs = NULL;\r\n> + foreach(l, indexoidlist)\r\n> + {\r\n> + Oid indexOid = lfirst_oid(l);\r\n> + Relation indexDesc;\r\n> + int i;\r\n> + bool isIDKey; /* replica identity index */\r\n> +\r\n> + indexDesc = RelationIdGetRelation(indexOid);\r\n> \r\n> When you have oid of replica identity index (relreplindex) then what is the\r\n> need to traverse all the indexes?\r\nOk. No need to traverse all the indexes. Will fix this part.\r\n\r\n> 2.\r\n> It is better to name the function as RelationGet...\r\nYou are right. I'll modify this in my next version.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 16 Apr 2021 09:19:08 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Fri, 16 Apr 2021 at 16:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Apr 16, 2021 at 12:55 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> On Thu, 15 Apr 2021 at 19:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > On Thu, Apr 15, 2021 at 4:30 PM Japin Li <japinli@hotmail.com> wrote:\n>> >>\n>> >>\n>> >> On Thu, 15 Apr 2021 at 18:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> > On Wed, Apr 14, 2021 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> >>\n>> >> >> On Tue, Apr 13, 2021 at 8:07 PM Petr Jelinek\n>> >> >> <petr.jelinek@enterprisedb.com> wrote:\n>> >> >> >\n>> >> >> > > On 12 Apr 2021, at 08:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> >> > >\n>> >> >> > > The problem happens only when we try to fetch IDENTITY_KEY attributes\n>> >> >> > > because pgoutput uses RelationGetIndexAttrBitmap() to get that\n>> >> >> > > information which locks the required indexes. Now, because TRUNCATE\n>> >> >> > > has already acquired an exclusive lock on the index, it seems to\n>> >> >> > > create a sort of deadlock where the actual Truncate operation waits\n>> >> >> > > for logical replication of operation to complete and logical\n>> >> >> > > replication waits for actual Truncate operation to finish.\n>> >> >> > >\n>> >> >> > > Do we really need to use RelationGetIndexAttrBitmap() to build\n>> >> >> > > IDENTITY_KEY attributes? During decoding, we don't even lock the main\n>> >> >> > > relation, we just scan the system table and build that information\n>> >> >> > > using a historic snapshot. Can't we do something similar here?\n>> >> >> > >\n>> >> >> > > Adding Petr J. and Peter E. to know their views as this seems to be an\n>> >> >> > > old problem (since the decoding of Truncate operation is introduced).\n>> >> >> >\n>> >> >> > We used RelationGetIndexAttrBitmap because it already existed, no other reason.\n>> >> >> >\n>> >> >>\n>> >> >> Fair enough. But I think we should do something about it because using\n>> >> >> the same (RelationGetIndexAttrBitmap) just breaks the synchronous\n>> >> >> logical replication. I think this is broken since the logical\n>> >> >> replication of Truncate is supported.\n>> >> >>\n>> >> >> > I am not sure what exact locking we need but I would have guessed at least AccessShareLock would be needed.\n>> >> >> >\n>> >> >>\n>> >> >> Are you telling that we need AccessShareLock on the index? If so, why\n>> >> >> is it different from how we access the relation during decoding\n>> >> >> (basically in ReorderBufferProcessTXN, we directly use\n>> >> >> RelationIdGetRelation() without any lock on the relation)? I think we\n>> >> >> do it that way because we need it to process WAL entries and we need\n>> >> >> the relation state based on the historic snapshot, so, even if the\n>> >> >> relation is later changed/dropped, we are fine with the old state we\n>> >> >> got. Isn't the same thing applies here in logicalrep_write_attrs? If\n>> >> >> that is true then some equivalent of RelationGetIndexAttrBitmap where\n>> >> >> we use RelationIdGetRelation instead of index_open should work?\n>> >> >>\n>> >> >\n>> >> > Today, again I have thought about this and don't see a problem with\n>> >> > the above idea. If the above understanding is correct, then I think\n>> >> > for our purpose in pgoutput, we just need to call RelationGetIndexList\n>> >> > and then build the idattr list for relation->rd_replidindex.\n>> >>\n>> >> Sorry, I don't know how can we build the idattr without open the index.\n>> >> If we should open the index, then we should use NoLock, since the TRUNCATE\n>> >> side hold AccessExclusiveLock. As Osumi points out in [1], The NoLock mode\n>> >> assumes that the appropriate lock on the index is already taken.\n>> >>\n>> >\n>> > Why can't we use RelationIdGetRelation() by passing the required\n>> > indexOid to it?\n>>\n>> Hi Amit, as your suggested, I try to use RelationIdGetRelation() replace\n>> the index_open() to avoid specify the AccessSharedLock, then the TRUNCATE\n>> will not be blocked.\n>>\n>\n> It is okay as POC but we can't change the existing function\n> RelationGetIndexAttrBitmap. It is used from other places as well. It\n> might be better to write a separate function for this, something like\n> what Osumi-San's patch is trying to do.\n\nAgreed!\n\nHi Osumi-San, can you merge the test case in your next version?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 17:37:56 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 3:08 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Fri, 16 Apr 2021 at 16:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Apr 16, 2021 at 12:55 PM Japin Li <japinli@hotmail.com> wrote:\n> > It is okay as POC but we can't change the existing function\n> > RelationGetIndexAttrBitmap. It is used from other places as well. It\n> > might be better to write a separate function for this, something like\n> > what Osumi-San's patch is trying to do.\n>\n> Agreed!\n>\n> Hi Osumi-San, can you merge the test case in your next version?\n>\n\n+1. Your test looks reasonable to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 15:18:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Fri, 16 Apr 2021 at 17:19, osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\n> Hi\n>\n>\n> On Friday, April 16, 2021 5:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Fri, Apr 16, 2021 at 12:56 PM osumi.takamichi@fujitsu.com\n>> <osumi.takamichi@fujitsu.com> wrote:\n>> >\n>> > > Thanks for your reminder. It might be a way to solve this problem.\n>> > Yeah. I've made the 1st patch for this issue.\n>> >\n>> > In my env, with the patch\n>> > the TRUNCATE in synchronous logical replication doesn't hang.\n>> >\n>>\n>> Few initial comments:\n>> =====================\n>> 1.\n>> + relreplindex = relation->rd_replidindex;\n>> +\n>> + /*\n>> + * build attributes to idindexattrs.\n>> + */\n>> + idindexattrs = NULL;\n>> + foreach(l, indexoidlist)\n>> + {\n>> + Oid indexOid = lfirst_oid(l);\n>> + Relation indexDesc;\n>> + int i;\n>> + bool isIDKey; /* replica identity index */\n>> +\n>> + indexDesc = RelationIdGetRelation(indexOid);\n>>\n>> When you have oid of replica identity index (relreplindex) then what is the\n>> need to traverse all the indexes?\n> Ok. No need to traverse all the indexes. Will fix this part.\n>\n>> 2.\n>> It is better to name the function as RelationGet...\n> You are right. I'll modify this in my next version.\n>\n\nI took the liberty to address review comments and provide a v2 patch on top of\nyour's v1 patch, also merge the test case.\n\nSorry for attaching.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 16 Apr 2021 23:53:20 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Saturday, April 17, 2021 12:53 AM Japin Li <japinli@hotmail.com>\n> On Fri, 16 Apr 2021 at 17:19, osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> > On Friday, April 16, 2021 5:50 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> On Fri, Apr 16, 2021 at 12:56 PM osumi.takamichi@fujitsu.com\n> >> <osumi.takamichi@fujitsu.com> wrote:\n> >> >\n> >> > > Thanks for your reminder. It might be a way to solve this problem.\n> >> > Yeah. I've made the 1st patch for this issue.\n> >> >\n> >> > In my env, with the patch\n> >> > the TRUNCATE in synchronous logical replication doesn't hang.\n> >> >\n> >>\n> >> Few initial comments:\n> >> =====================\n> >> 1.\n> >> + relreplindex = relation->rd_replidindex;\n> >> +\n> >> + /*\n> >> + * build attributes to idindexattrs.\n> >> + */\n> >> + idindexattrs = NULL;\n> >> + foreach(l, indexoidlist)\n> >> + {\n> >> + Oid indexOid = lfirst_oid(l);\n> >> + Relation indexDesc;\n> >> + int i;\n> >> + bool isIDKey; /* replica identity index */\n> >> +\n> >> + indexDesc = RelationIdGetRelation(indexOid);\n> >>\n> >> When you have oid of replica identity index (relreplindex) then what\n> >> is the need to traverse all the indexes?\n> > Ok. No need to traverse all the indexes. Will fix this part.\n> >\n> >> 2.\n> >> It is better to name the function as RelationGet...\n> > You are right. I'll modify this in my next version.\n> >\n> \n> I took the liberty to address review comments and provide a v2 patch on top\n> of your's v1 patch, also merge the test case.\n> \n> Sorry for attaching.\nNo problem. Thank you for updating the patch.\nI've conducted some cosmetic changes. Could you please check this ?\nThat's already applied by pgindent.\n\nI executed RT for this and made no failure.\nJust in case, I executed 010_truncate.pl test 100 times in a tight loop,\nwhich also didn't fail.\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Sat, 17 Apr 2021 04:03:48 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Sat, 17 Apr 2021 at 12:03, osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\n> On Saturday, April 17, 2021 12:53 AM Japin Li <japinli@hotmail.com>\n>> On Fri, 16 Apr 2021 at 17:19, osumi.takamichi@fujitsu.com\n>> <osumi.takamichi@fujitsu.com> wrote:\n>> > On Friday, April 16, 2021 5:50 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> >> On Fri, Apr 16, 2021 at 12:56 PM osumi.takamichi@fujitsu.com\n>> >> <osumi.takamichi@fujitsu.com> wrote:\n>> >> >\n>> >> > > Thanks for your reminder. It might be a way to solve this problem.\n>> >> > Yeah. I've made the 1st patch for this issue.\n>> >> >\n>> >> > In my env, with the patch\n>> >> > the TRUNCATE in synchronous logical replication doesn't hang.\n>> >> >\n>> >>\n>> >> Few initial comments:\n>> >> =====================\n>> >> 1.\n>> >> + relreplindex = relation->rd_replidindex;\n>> >> +\n>> >> + /*\n>> >> + * build attributes to idindexattrs.\n>> >> + */\n>> >> + idindexattrs = NULL;\n>> >> + foreach(l, indexoidlist)\n>> >> + {\n>> >> + Oid indexOid = lfirst_oid(l);\n>> >> + Relation indexDesc;\n>> >> + int i;\n>> >> + bool isIDKey; /* replica identity index */\n>> >> +\n>> >> + indexDesc = RelationIdGetRelation(indexOid);\n>> >>\n>> >> When you have oid of replica identity index (relreplindex) then what\n>> >> is the need to traverse all the indexes?\n>> > Ok. No need to traverse all the indexes. Will fix this part.\n>> >\n>> >> 2.\n>> >> It is better to name the function as RelationGet...\n>> > You are right. I'll modify this in my next version.\n>> >\n>>\n>> I took the liberty to address review comments and provide a v2 patch on top\n>> of your's v1 patch, also merge the test case.\n>>\n>> Sorry for attaching.\n> No problem. Thank you for updating the patch.\n> I've conducted some cosmetic changes. Could you please check this ?\n> That's already applied by pgindent.\n>\n> I executed RT for this and made no failure.\n> Just in case, I executed 010_truncate.pl test 100 times in a tight loop,\n> which also didn't fail.\n>\n\nLGTM, I made an entry in the commitfest[1], so that the patches will get a\nchance to run on all the platforms.\n\n[1] - https://commitfest.postgresql.org/33/3081/\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 17 Apr 2021 15:35:17 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Sat, Apr 17, 2021 at 2:04 PM osumi.takamichi@fujitsu.com <\nosumi.takamichi@fujitsu.com> wrote:\n\n>\n> No problem. Thank you for updating the patch.\n> I've conducted some cosmetic changes. Could you please check this ?\n> That's already applied by pgindent.\n>\n> I executed RT for this and made no failure.\n> Just in case, I executed 010_truncate.pl test 100 times in a tight loop,\n> which also didn't fail.\n>\n>\nI reviewed the patch, ran make check, no issues. One minor comment:\n\nCould you add the comment similar to RelationGetIndexAttrBitmap() on why\nthe redo, it's not very obvious\nto someone reading the code, why we are refetching the index list here.\n\n+ /* Check if we need to redo */\n+ newindexoidlist = RelationGetIndexList(relation);\n\nthanks,\nAjin Cherian\nFujitsu Australia\n\nOn Sat, Apr 17, 2021 at 2:04 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\nNo problem. Thank you for updating the patch.\nI've conducted some cosmetic changes. Could you please check this ?\nThat's already applied by pgindent.\n\nI executed RT for this and made no failure.\nJust in case, I executed 010_truncate.pl test 100 times in a tight loop,\nwhich also didn't fail.I reviewed the patch, ran make check, no issues. One minor comment:Could you add the comment similar to RelationGetIndexAttrBitmap() on why the redo, it's not very obviousto someone reading the code, why we are refetching the index list here.+\t/* Check if we need to redo */+\tnewindexoidlist = RelationGetIndexList(relation); thanks,Ajin CherianFujitsu Australia",
"msg_date": "Tue, 20 Apr 2021 11:52:43 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Tuesday, April 20, 2021 10:53 AM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> On Sat, Apr 17, 2021 at 2:04 PM osumi.takamichi@fujitsu.com\r\n> <mailto:osumi.takamichi@fujitsu.com> <osumi.takamichi@fujitsu.com\r\n> <mailto:osumi.takamichi@fujitsu.com> > wrote:\r\n> \r\n> \tNo problem. Thank you for updating the patch.\r\n> \tI've conducted some cosmetic changes. Could you please check\r\n> this ?\r\n> \tThat's already applied by pgindent.\r\n> \r\n> \tI executed RT for this and made no failure.\r\n> \tJust in case, I executed 010_truncate.pl <http://010_truncate.pl>\r\n> test 100 times in a tight loop,\r\n> \twhich also didn't fail.\r\n> \r\n> I reviewed the patch, ran make check, no issues. One minor comment:\r\n> \r\n> Could you add the comment similar to RelationGetIndexAttrBitmap() on why\r\n> the redo, it's not very obvious to someone reading the code, why we are\r\n> refetching the index list here.\r\n> \r\n> + /* Check if we need to redo */\r\n> \r\n> + newindexoidlist = RelationGetIndexList(relation);\r\nYeah, makes sense. Fixed.\r\nIts indents seem a bit weird but came from pgindent.\r\nThank you for your review !\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 20 Apr 2021 03:30:34 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 9:00 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, April 20, 2021 10:53 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > I reviewed the patch, ran make check, no issues. One minor comment:\n> >\n> > Could you add the comment similar to RelationGetIndexAttrBitmap() on why\n> > the redo, it's not very obvious to someone reading the code, why we are\n> > refetching the index list here.\n> >\n> > + /* Check if we need to redo */\n> >\n> > + newindexoidlist = RelationGetIndexList(relation);\n> Yeah, makes sense. Fixed.\n\nI don't think here we need to restart to get a stable list of indexes\nas we do in RelationGetIndexAttrBitmap. The reason is here we build\nthe cache entry using a historic snapshot and all the later changes\nare absorbed while decoding WAL. I have updated that and modified few\ncomments in the attached patch. Can you please test this in\nclobber_cache_always mode? I think just testing\nsubscription/t/010_truncate.pl would be sufficient.\n\nNow, this bug exists in prior versions (>= v11) where we have\nintroduced decoding of Truncate. Do we want to backpatch this? As no\nuser has reported this till now apart from Tang who I think got it\nwhile doing some other patch testing, we might want to just push it in\nHEAD. I am fine either way. Petr, others, do you have any opinion on\nthis matter?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 22 Apr 2021 15:03:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "> I don't think here we need to restart to get a stable list of indexes\r\n> as we do in RelationGetIndexAttrBitmap. The reason is here we build\r\n> the cache entry using a historic snapshot and all the later changes\r\n> are absorbed while decoding WAL. I have updated that and modified few\r\n> comments in the attached patch. Can you please test this in\r\n> clobber_cache_always mode? I think just testing\r\n> subscription/t/010_truncate.pl would be sufficient.\r\n\r\nThanks for your patch. I tested your patch and it passes 'make check-world' and it works as expected.\r\nBy the way, I also tested in clobber_cache_always mode, it passed, too.(I only tested subscription/t/010_truncate.pl.)\r\n\r\nRegards,\r\nShi yu\r\n\r\n",
"msg_date": "Fri, 23 Apr 2021 03:41:14 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Thursday, April 22, 2021 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Apr 20, 2021 at 9:00 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, April 20, 2021 10:53 AM Ajin Cherian <itsajin@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > I reviewed the patch, ran make check, no issues. One minor comment:\r\n> > >\r\n> > > Could you add the comment similar to RelationGetIndexAttrBitmap() on\r\n> > > why the redo, it's not very obvious to someone reading the code, why\r\n> > > we are refetching the index list here.\r\n> > >\r\n> > > + /* Check if we need to redo */\r\n> > >\r\n> > > + newindexoidlist = RelationGetIndexList(relation);\r\n> > Yeah, makes sense. Fixed.\r\n> \r\n> I don't think here we need to restart to get a stable list of indexes as we do in\r\n> RelationGetIndexAttrBitmap. The reason is here we build the cache entry\r\n> using a historic snapshot and all the later changes are absorbed while\r\n> decoding WAL.\r\nI rechecked this and I agree with this.\r\nI don't see any problem to remove the redo check.\r\nBased on this direction, I'll share my several minor comments.\r\n\r\n[1] a typo of RelationGetIdentityKeyBitmap's comment\r\n\r\n+ * This is a special purpose function used during logical replication. Here,\r\n+ * unlike RelationGetIndexAttrBitmap(), we don't a acquire lock on the required\r\n\r\nWe have \"a\" in an unnatural place. It should be \"we don't acquire...\".\r\n\r\n[2] suggestion to fix RelationGetIdentityKeyBitmap's comment\r\n\r\n+ * later changes are absorbed while decoding WAL. Due to this reason, we don't\r\n+ * need to retry here in case of a change in the set of indexes.\r\n\r\nI think it's better to use \"Because of\" instead of \"Due to\".\r\nThis patch works to solve the deadlock.\r\n\r\n[3] wrong comment in RelationGetIdentityKeyBitmap\r\n\r\n+ /* Save some values to compare after building attributes */\r\n\r\nI wrote this comment for the redo check\r\nthat has been removed already. We can delete it.\r\n\r\n[4] suggestion to remove local variable relreplindex in RelationGetIdentityKeyBitmap\r\n\r\nI think we don't need relreplindex.\r\nWe can just pass relaton->rd_replidindex to RelationIdGetRelation().\r\nThere is no more reference of the variable.\r\n\r\n[5] suggestion to fix the place to release indexoidlist\r\n\r\nI thought we can do its list_free() ealier,\r\nafter checking if there is no indexes.\r\n\r\n[6] suggestion to remove period in one comment.\r\n\r\n+ /* Quick exit if we already computed the result. */\r\n\r\nThis comes from RelationGetIndexAttrBitmap (and my posted versions)\r\nbut I think we can remove the period to give better alignment shared with other comments in the function.\r\n\r\n> I have updated that and modified few comments in the\r\n> attached patch. Can you please test this in clobber_cache_always mode? I\r\n> think just testing subscription/t/010_truncate.pl would be sufficient.\r\nI did it. It didn't fail. No problem.\r\n\r\n> Now, this bug exists in prior versions (>= v11) where we have introduced\r\n> decoding of Truncate. Do we want to backpatch this? As no user has reported\r\n> this till now apart from Tang who I think got it while doing some other patch\r\n> testing, we might want to just push it in HEAD. I am fine either way. Petr,\r\n> others, do you have any opinion on this matter?\r\nI think we are fine with applying this patch to the HEAD only,\r\nsince nobody has complained about the hang issue.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 23 Apr 2021 06:34:12 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 12:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, April 22, 2021 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Apr 20, 2021 at 9:00 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, April 20, 2021 10:53 AM Ajin Cherian <itsajin@gmail.com>\n> > wrote:\n> > > >\n> > > > I reviewed the patch, ran make check, no issues. One minor comment:\n> > > >\n> > > > Could you add the comment similar to RelationGetIndexAttrBitmap() on\n> > > > why the redo, it's not very obvious to someone reading the code, why\n> > > > we are refetching the index list here.\n> > > >\n> > > > + /* Check if we need to redo */\n> > > >\n> > > > + newindexoidlist = RelationGetIndexList(relation);\n> > > Yeah, makes sense. Fixed.\n> >\n> > I don't think here we need to restart to get a stable list of indexes as we do in\n> > RelationGetIndexAttrBitmap. The reason is here we build the cache entry\n> > using a historic snapshot and all the later changes are absorbed while\n> > decoding WAL.\n> I rechecked this and I agree with this.\n> I don't see any problem to remove the redo check.\n> Based on this direction, I'll share my several minor comments.\n>\n> [1] a typo of RelationGetIdentityKeyBitmap's comment\n>\n> + * This is a special purpose function used during logical replication. Here,\n> + * unlike RelationGetIndexAttrBitmap(), we don't a acquire lock on the required\n>\n> We have \"a\" in an unnatural place. It should be \"we don't acquire...\".\n>\n> [2] suggestion to fix RelationGetIdentityKeyBitmap's comment\n>\n> + * later changes are absorbed while decoding WAL. Due to this reason, we don't\n> + * need to retry here in case of a change in the set of indexes.\n>\n> I think it's better to use \"Because of\" instead of \"Due to\".\n> This patch works to solve the deadlock.\n>\n\nI am not sure which one is better. 
I would like to keep it as it is\nunless you feel strongly about point 2.\n\n> [3] wrong comment in RelationGetIdentityKeyBitmap\n>\n> + /* Save some values to compare after building attributes */\n>\n> I wrote this comment for the redo check\n> that has been removed already. We can delete it.\n>\n> [4] suggestion to remove local variable relreplindex in RelationGetIdentityKeyBitmap\n>\n> I think we don't need relreplindex.\n> We can just pass relaton->rd_replidindex to RelationIdGetRelation().\n> There is no more reference of the variable.\n>\n> [5] suggestion to fix the place to release indexoidlist\n>\n> I thought we can do its list_free() ealier,\n> after checking if there is no indexes.\n>\n\nHmm, why? If there are no indexes then we wouldn't have allocated any memory.\n\n> [6] suggestion to remove period in one comment.\n>\n> + /* Quick exit if we already computed the result. */\n>\n> This comes from RelationGetIndexAttrBitmap (and my posted versions)\n> but I think we can remove the period to give better alignment shared with other comments in the function.\n>\n\nCan you please update the patch except for the two points to which I\nresponded above?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Apr 2021 12:12:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Friday, April 23, 2021 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Apr 23, 2021 at 12:04 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, April 22, 2021 6:33 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > On Tue, Apr 20, 2021 at 9:00 AM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Tuesday, April 20, 2021 10:53 AM Ajin Cherian\r\n> > > > <itsajin@gmail.com>\r\n> > > wrote:\r\n> > > > >\r\n> > > > > I reviewed the patch, ran make check, no issues. One minor\r\n> comment:\r\n> > > > >\r\n> > > > > Could you add the comment similar to\r\n> > > > > RelationGetIndexAttrBitmap() on why the redo, it's not very\r\n> > > > > obvious to someone reading the code, why we are refetching the\r\n> index list here.\r\n> > > > >\r\n> > > > > + /* Check if we need to redo */\r\n> > > > >\r\n> > > > > + newindexoidlist = RelationGetIndexList(relation);\r\n> > > > Yeah, makes sense. Fixed.\r\n> > >\r\n> > > I don't think here we need to restart to get a stable list of\r\n> > > indexes as we do in RelationGetIndexAttrBitmap. The reason is here\r\n> > > we build the cache entry using a historic snapshot and all the later\r\n> > > changes are absorbed while decoding WAL.\r\n> > I rechecked this and I agree with this.\r\n> > I don't see any problem to remove the redo check.\r\n> > Based on this direction, I'll share my several minor comments.\r\n> >\r\n> > [1] a typo of RelationGetIdentityKeyBitmap's comment\r\n> >\r\n> > + * This is a special purpose function used during logical\r\n> > + replication. Here,\r\n> > + * unlike RelationGetIndexAttrBitmap(), we don't a acquire lock on\r\n> > + the required\r\n> >\r\n> > We have \"a\" in an unnatural place. It should be \"we don't acquire...\".\r\n> >\r\n> > [2] suggestion to fix RelationGetIdentityKeyBitmap's comment\r\n> >\r\n> > + * later changes are absorbed while decoding WAL. 
Due to this reason,\r\n> > + we don't\r\n> > + * need to retry here in case of a change in the set of indexes.\r\n> >\r\n> > I think it's better to use \"Because of\" instead of \"Due to\".\r\n> > This patch works to solve the deadlock.\r\n> >\r\n> \r\n> I am not sure which one is better. I would like to keep it as it is unless you feel\r\n> strongly about point 2.\r\n> \r\n> > [3] wrong comment in RelationGetIdentityKeyBitmap\r\n> >\r\n> > + /* Save some values to compare after building attributes */\r\n> >\r\n> > I wrote this comment for the redo check that has been removed already.\r\n> > We can delete it.\r\n> >\r\n> > [4] suggestion to remove local variable relreplindex in\r\n> > RelationGetIdentityKeyBitmap\r\n> >\r\n> > I think we don't need relreplindex.\r\n> > We can just pass relaton->rd_replidindex to RelationIdGetRelation().\r\n> > There is no more reference of the variable.\r\n> >\r\n> > [5] suggestion to fix the place to release indexoidlist\r\n> >\r\n> > I thought we can do its list_free() ealier, after checking if there is\r\n> > no indexes.\r\n> >\r\n> \r\n> Hmm, why? If there are no indexes then we wouldn't have allocated any\r\n> memory.\r\n> \r\n> > [6] suggestion to remove period in one comment.\r\n> >\r\n> > + /* Quick exit if we already computed the result. */\r\n> >\r\n> > This comes from RelationGetIndexAttrBitmap (and my posted versions)\r\n> > but I think we can remove the period to give better alignment shared with\r\n> other comments in the function.\r\n> >\r\n> \r\n> Can you please update the patch except for the two points to which I\r\n> responded above?\r\nI've combined v5 with above accepted comments.\r\n\r\nJust in case, I've conducted make check-world, \r\nthe test of clobber_cache_always mode again for v6\r\nand tight loop test of 100 times for 010_truncate.pl. \r\nThe result is that all passed with no failure.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 23 Apr 2021 09:03:02 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Friday, April 23, 2021 6:03 PM I wrote:\r\n> I've combined v5 with above accepted comments.\r\n> \r\n> Just in case, I've conducted make check-world, the test of\r\n> clobber_cache_always mode again for v6 and tight loop test of 100 times for\r\n> 010_truncate.pl.\r\n> The result is that all passed with no failure.\r\nI'm sorry, I realized another minor thing which should be fixed.\r\n\r\nIn v6, I did below.\r\n+Bitmapset *\r\n+RelationGetIdentityKeyBitmap(Relation relation)\r\n+{\r\n+ Bitmapset *idindexattrs; /* columns in the replica identity */\r\n...\r\n+ /* Build attributes to idindexattrs */\r\n+ idindexattrs = NULL;\r\n\r\nBut, we removed the loop, so we can insert NULL\r\nat the beginning to declare idindexattrs.\r\nv7 is the version to update this part and\r\nrelated comments from v6.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 23 Apr 2021 13:48:37 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 7:18 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n\nThe latest patch looks good to me. I have made a minor modification\nand added a commit message in the attached. I would like to once again\nask whether anybody else thinks we should backpatch this? Just a\nsummary for anybody not following this thread:\n\nThis patch fixes the Logical Replication of Truncate in synchronous\ncommit mode. The Truncate operation acquires an exclusive lock on the\ntarget relation and indexes and waits for logical replication of the\noperation to finish at commit. Now because we are acquiring the shared\nlock on the target index to get index attributes in pgoutput while\nsending the changes for the Truncate operation, it leads to a\ndeadlock.\n\nActually, we don't need to acquire a lock on the target index as we\nbuild the cache entry using a historic snapshot and all the later\nchanges are absorbed while decoding WAL. So, we wrote a special\npurpose function for logical replication to get a bitmap of replica\nidentity attribute numbers where we get that information without\nlocking the target index.\n\nWe are planning not to backpatch this as there doesn't seem to be any\nfield complaint about this issue since it was introduced in commit\n5dfd1e5a in v11.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 26 Apr 2021 10:19:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Monday, April 26, 2021 1:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Apr 23, 2021 at 7:18 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> \r\n> The latest patch looks good to me. I have made a minor modification and\r\n> added a commit message in the attached.\r\nThank you for updating the patch.\r\n\r\nI think we need one space for \"targetindex\" in the commit message.\r\nFrom my side, there is no more additional comments !\r\n\r\n> I would like to once again ask\r\n> whether anybody else thinks we should backpatch this? Just a summary for\r\n> anybody not following this thread:\r\n> \r\n> This patch fixes the Logical Replication of Truncate in synchronous commit\r\n> mode. The Truncate operation acquires an exclusive lock on the target\r\n> relation and indexes and waits for logical replication of the operation to finish\r\n> at commit. Now because we are acquiring the shared lock on the target index\r\n> to get index attributes in pgoutput while sending the changes for the Truncate\r\n> operation, it leads to a deadlock.\r\n> \r\n> Actually, we don't need to acquire a lock on the target index as we build the\r\n> cache entry using a historic snapshot and all the later changes are absorbed\r\n> while decoding WAL. So, we wrote a special purpose function for logical\r\n> replication to get a bitmap of replica identity attribute numbers where we get\r\n> that information without locking the target index.\r\n> \r\n> We are planning not to backpatch this as there doesn't seem to be any field\r\n> complaint about this issue since it was introduced in commit 5dfd1e5a in v11.\r\nPlease anyone, share your opinion on this matter, when you have.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 26 Apr 2021 06:16:43 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "\nOn Mon, 26 Apr 2021 at 12:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Apr 23, 2021 at 7:18 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n>>\n>\n> The latest patch looks good to me. I have made a minor modification\n> and added a commit message in the attached. I would like to once again\n> ask whether anybody else thinks we should backpatch this? Just a\n> summary for anybody not following this thread:\n>\n> This patch fixes the Logical Replication of Truncate in synchronous\n> commit mode. The Truncate operation acquires an exclusive lock on the\n> target relation and indexes and waits for logical replication of the\n> operation to finish at commit. Now because we are acquiring the shared\n> lock on the target index to get index attributes in pgoutput while\n> sending the changes for the Truncate operation, it leads to a\n> deadlock.\n>\n> Actually, we don't need to acquire a lock on the target index as we\n> build the cache entry using a historic snapshot and all the later\n> changes are absorbed while decoding WAL. So, we wrote a special\n> purpose function for logical replication to get a bitmap of replica\n> identity attribute numbers where we get that information without\n> locking the target index.\n>\n> We are planning not to backpatch this as there doesn't seem to be any\n> field complaint about this issue since it was introduced in commit\n> 5dfd1e5a in v11.\n\n+1 for apply only on HEAD.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 26 Apr 2021 15:07:44 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 12:37 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Mon, 26 Apr 2021 at 12:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Apr 23, 2021 at 7:18 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> >>\n> >\n> > The latest patch looks good to me. I have made a minor modification\n> > and added a commit message in the attached. I would like to once again\n> > ask whether anybody else thinks we should backpatch this? Just a\n> > summary for anybody not following this thread:\n> >\n> > This patch fixes the Logical Replication of Truncate in synchronous\n> > commit mode. The Truncate operation acquires an exclusive lock on the\n> > target relation and indexes and waits for logical replication of the\n> > operation to finish at commit. Now because we are acquiring the shared\n> > lock on the target index to get index attributes in pgoutput while\n> > sending the changes for the Truncate operation, it leads to a\n> > deadlock.\n> >\n> > Actually, we don't need to acquire a lock on the target index as we\n> > build the cache entry using a historic snapshot and all the later\n> > changes are absorbed while decoding WAL. So, we wrote a special\n> > purpose function for logical replication to get a bitmap of replica\n> > identity attribute numbers where we get that information without\n> > locking the target index.\n> >\n> > We are planning not to backpatch this as there doesn't seem to be any\n> > field complaint about this issue since it was introduced in commit\n> > 5dfd1e5a in v11.\n>\n> +1 for apply only on HEAD.\n>\n\nSeeing no other suggestions, I have pushed this in HEAD only. Thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Apr 2021 09:47:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncate in synchronous logical replication failed"
},
{
"msg_contents": "On Tuesday, April 27, 2021 1:17 PM, Amit Kapila <amit.kapila16@gmail.com> wrote\r\n\r\n>Seeing no other suggestions, I have pushed this in HEAD only. Thanks!\r\n\r\nSorry for the later reply on this issue.\r\nI tested with the latest HEAD, the issue is fixed now. Thanks a lot.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Tue, 27 Apr 2021 06:31:47 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Truncate in synchronous logical replication failed"
}
] |
[
{
"msg_contents": "Hi,\n\nFound that some documentation hasn't been updated for the changes made as\npart of\nstreaming large in-progress transactions. Attached a patch that adds the\nmissing changes. Let me know if anything more needs to be added or any\ncomments on this change.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Wed, 7 Apr 2021 17:40:56 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": true,
"msg_subject": "missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 1:11 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Hi,\n>\n> Found that some documentation hasn't been updated for the changes made as part of\n> streaming large in-progress transactions. Attached a patch that adds the missing changes. Let me know if anything more needs to be added or any comments on this change.\n>\n\nThanks, this mostly looks good to me. I have slightly modified it.\nSee, what do you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 7 Apr 2021 17:45:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 1:11 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Found that some documentation hasn't been updated for the changes made as part of\n> > streaming large in-progress transactions. Attached a patch that adds the missing changes. Let me know if anything more needs to be added or any comments on this change.\n> >\n>\n> Thanks, this mostly looks good to me. I have slightly modified it.\n> See, what do you think of the attached?\n>\n\n\n1.\nI felt that this protocol documentation needs to include something\nlike a \"Since: 2\" notation (e.g. see how the javadoc API [1] does it)\notherwise with more message types and more protocol versions it is\nquickly going to become too complicated to know what message or\nmessage attribute belongs with what protocol.\n\n\n2.\nThere are inconsistent terms used for a transaction id.\ne.g.1 Sometimes called \" Transaction id.\"\ne.g.2 Sometimes called \"Xid of the transaction\"\nI think there should be consistent terminology used on this page\n\n\n3.\nThere is inconsistent wording for what seems to be the same condition.\ne.g.1 The existing documentation [2] says \"Xid of the transaction. The\nXID is sent only when user has requested streaming of in-progress\ntransactions.\"\ne.g.2 For a similar case the patch says \"Xid of the transaction (only\npresent for streamed transactions).\"\nI think there should be consistent wording used on this page where possible.\n\n\n------\n[1] https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/String.html\n[2] https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 8 Apr 2021 08:19:45 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 3:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 1:11 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> 3.\n> There is inconsistent wording for what seems to be the same condition.\n> e.g.1 The existing documentation [2] says \"Xid of the transaction. The\n> XID is sent only when user has requested streaming of in-progress\n> transactions.\"\n> e.g.2 For a similar case the patch says \"Xid of the transaction (only\n> present for streamed transactions).\"\n> I think there should be consistent wording used on this page where possible.\n>\n\nI think this is already modified as below in the patch. Is there any\nother place you are referring to?\n\n@@ -6457,8 +6462,7 @@ Message\n </term>\n <listitem>\n <para>\n- Xid of the transaction. The XID is sent only when user has\n- requested streaming of in-progress transactions.\n+ Xid of the transaction (only present for streamed\ntransactions).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 08:26:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 3:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 7, 2021 at 1:11 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > 3.\n> > There is inconsistent wording for what seems to be the same condition.\n> > e.g.1 The existing documentation [2] says \"Xid of the transaction. The\n> > XID is sent only when user has requested streaming of in-progress\n> > transactions.\"\n> > e.g.2 For a similar case the patch says \"Xid of the transaction (only\n> > present for streamed transactions).\"\n> > I think there should be consistent wording used on this page where possible.\n> >\n>\n> I think this is already modified as below in the patch. Is there any\n> other place you are referring to?\n\nNo. My mistake. Sorry for the false alarm.\n\n------\nKInd Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 8 Apr 2021 13:04:27 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 8:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> 1.\n> I felt that this protocol documentation needs to include something\n> like a \"Since: 2\" notation (e.g. see how the javadoc API [1] does it)\n> otherwise with more message types and more protocol versions it is\n> quickly going to become too complicated to know what message or\n> message attribute belongs with what protocol.\n>\n>\n> Updated.\n\n\n> 2.\n> There are inconsistent terms used for a transaction id.\n> e.g.1 Sometimes called \" Transaction id.\"\n> e.g.2 Sometimes called \"Xid of the transaction\"\n> I think there should be consistent terminology used on this page\n>\n\nUpdated.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Thu, 8 Apr 2021 17:25:33 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Thu, Apr 8, 2021, at 4:25 AM, Ajin Cherian wrote:\n> Updated.\n\n- Protocol version. Currently only version <literal>1</literal> is\n- supported.\n- </para>\n+ Protocol version. Currently versions <literal>1</literal> and\n+ <literal>2</literal> are supported. The version <literal>2</literal>\n+ is supported only for server versions 14 and above, and is used to allow\n+ streaming of large in-progress transactions.\n+ </para>\n\ns/server versions/server version/\n\nI suggest that the last part of the sentence might be \"and it allows streaming\nof large in-progress transactions\"\n\n+ Since: 2\n+</para>\n+<para>\n\nI didn't like this style because it is not descriptive enough. It is also not a\nstyle adopted by Postgres. I suggest to add something like \"This field was\nintroduced in version 2\" or \"This field is available since version 2\" after the\nfield description.\n\n+ Xid of the sub-transaction (will be same as xid of the transaction for top-level\n+ transactions).\n+</para>\n\nAlthough, sub-transaction is also used in the documentation, I suggest to use\nsubtransaction. Maybe change the other sub-transaction occurrences too.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Apr 8, 2021, at 4:25 AM, Ajin Cherian wrote:Updated.- Protocol version. Currently only version <literal>1</literal> is- supported.- </para>+ Protocol version. Currently versions <literal>1</literal> and+ <literal>2</literal> are supported. The version <literal>2</literal>+ is supported only for server versions 14 and above, and is used to allow+ streaming of large in-progress transactions.+ </para>s/server versions/server version/I suggest that the last part of the sentence might be \"and it allows streamingof large in-progress transactions\"+ Since: 2+</para>+<para>I didn't like this style because it is not descriptive enough. It is also not astyle adopted by Postgres. 
I suggest to add something like \"This field wasintroduced in version 2\" or \"This field is available since version 2\" after thefield description.+ Xid of the sub-transaction (will be same as xid of the transaction for top-level+ transactions).+</para>Although, sub-transaction is also used in the documentation, I suggest to usesubtransaction. Maybe change the other sub-transaction occurrences too.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 08 Apr 2021 21:23:02 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 10:23 AM Euler Taveira <euler@eulerto.com> wrote:\n\n>\n> I didn't like this style because it is not descriptive enough. It is also\n> not a\n> style adopted by Postgres. I suggest to add something like \"This field was\n> introduced in version 2\" or \"This field is available since version 2\"\n> after the\n> field description.\n>\n\nI have updated this to \"Since protocol version 2\"\n\n>\n> + Xid of the sub-transaction (will be same as xid of the\n> transaction for top-level\n> + transactions).\n> +</para>\n>\n> Although, sub-transaction is also used in the documentation, I suggest to\n> use\n> subtransaction. Maybe change the other sub-transaction occurrences too.\n>\n\nUpdated.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Fri, 9 Apr 2021 12:59:32 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 8:29 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, Apr 9, 2021 at 10:23 AM Euler Taveira <euler@eulerto.com> wrote:\n>>\n>>\n>> I didn't like this style because it is not descriptive enough. It is also not a\n>> style adopted by Postgres. I suggest to add something like \"This field was\n>> introduced in version 2\" or \"This field is available since version 2\" after the\n>> field description.\n>\n>\n> I have updated this to \"Since protocol version 2\"\n>>\n>>\n>> + Xid of the sub-transaction (will be same as xid of the transaction for top-level\n>> + transactions).\n>> +</para>\n>>\n>> Although, sub-transaction is also used in the documentation, I suggest to use\n>> subtransaction. Maybe change the other sub-transaction occurrences too.\n>\n>\n> Updated.\n>\n\nI don't like repeating the same thing for all new messages. So added\nseparate para for the same and few other changes. See what do you\nthink of the attached?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 9 Apr 2021 09:39:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 05:45:16PM +0530, Amit Kapila wrote:\n> On Wed, Apr 7, 2021 at 1:11 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > Found that some documentation hasn't been updated for the changes made as part of\n> > streaming large in-progress transactions. Attached a patch that adds the missing changes. Let me know if anything more needs to be added or any comments on this change.\n> >\n> \n> Thanks, this mostly looks good to me. I have slightly modified it.\n> See, what do you think of the attached?\n\n+ Protocol version. Currently versions <literal>1</literal> and\n+ <literal>2</literal> are supported. The version <literal>2</literal>\n+ is supported only for server versions 14 and above, and is used to allow\n+ streaming of large in-progress transactions.\n\nThe diff briefly confused me, since this is in protocol.sgml, and since the\nlibpq protocol version is 1/2/3, with 2 being removed in v14 (3174d69fb).\nI suggest to say \"replication protocol version 2\".\n\nI realize that the headings make this more clear when reading the .html, so\nmaybe there's no issue.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Apr 2021 23:28:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 9:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Apr 07, 2021 at 05:45:16PM +0530, Amit Kapila wrote:\n> > On Wed, Apr 7, 2021 at 1:11 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > Found that some documentation hasn't been updated for the changes made as part of\n> > > streaming large in-progress transactions. Attached a patch that adds the missing changes. Let me know if anything more needs to be added or any comments on this change.\n> > >\n> >\n> > Thanks, this mostly looks good to me. I have slightly modified it.\n> > See, what do you think of the attached?\n>\n> + Protocol version. Currently versions <literal>1</literal> and\n> + <literal>2</literal> are supported. The version <literal>2</literal>\n> + is supported only for server versions 14 and above, and is used to allow\n> + streaming of large in-progress transactions.\n>\n> The diff briefly confused me, since this is in protocol.sgml, and since the\n> libpq protocol version is 1/2/3, with 2 being removed in v14 (3174d69fb).\n> I suggest to say \"replication protocol version 2\".\n>\n> I realize that the headings make this more clear when reading the .html, so\n> maybe there's no issue.\n>\n\nYeah, this was the reason to not include replication. If one is\nreading the document or even *.sgml, there shouldn't be any confusion\nbut if you or others feel so, we can use 'replication' as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 9 Apr 2021 10:34:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 9:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> I don't like repeating the same thing for all new messages. So added\n> separate para for the same and few other changes. See what do you\n> think of the attached?\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 12:02:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing documentation for streaming in-progress transactions"
}
] |
[
{
"msg_contents": "During recent developments in the vacuum, it has been noticed [1] that\nparallel vacuum workers don't use any buffer access strategy. I think\nwe can fix it either by propagating the required information from the\nleader or just get the access strategy in each worker separately. The\npatches for both approaches for PG-13 are attached.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAH2-Wz%3Dgf6FXW-jPVRdeCZk0QjhduCqH_XD3QbES9wPmhircuA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 7 Apr 2021 15:30:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> During recent developments in the vacuum, it has been noticed [1] that\n> parallel vacuum workers don't use any buffer access strategy. I think\n> we can fix it either by propagating the required information from the\n> leader or just get the access strategy in each worker separately. The\n> patches for both approaches for PG-13 are attached.\n\nThank you for starting the new thread.\n\nI'd prefer to just have parallel vacuum workers get BAS_VACUUM buffer\naccess strategy. If we want to have set different buffer access\nstrategies or ring buffer sizes for the leader and worker processes,\nthe former approach would be better. But I think we're unlikely to\nwant to do that, especially in back branches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 7 Apr 2021 20:41:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> During recent developments in the vacuum, it has been noticed [1] that\n> parallel vacuum workers don't use any buffer access strategy. I think\n> we can fix it either by propagating the required information from the\n> leader or just get the access strategy in each worker separately. The\n> patches for both approaches for PG-13 are attached.\n>\n> Thoughts?\n>\n> [1] - https://www.postgresql.org/message-id/CAH2-Wz%3Dgf6FXW-jPVRdeCZk0QjhduCqH_XD3QbES9wPmhircuA%40mail.gmail.com\n\nNote: I have not followed the original discussion in [1].\n\nMy understanding of the approach #1 i.e. propagating the vacuum\nstrategy down to the parallel vacuum workers from the leader is that\nthe same ring buffer (of 256KB for vacuum) will be used by both leader\nand all the workers. In that case, I think we see more page flushes\n(thus more IO) because 256KB is now shared by all of them. Whereas\nwith approach #2 each worker gets its own ring buffer (of 256KB) thus\nless IO occurs compared to approach #1.\n\nAnd in case of parallel inserts (although they are not yet committed\nand in various levels discussions) we let each worker get its own ring\nbuffer (of size 16MB). Whatever the approach is chosen here, I think\nit should be consistent.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 19:11:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 7:12 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > During recent developments in the vacuum, it has been noticed [1] that\n> > parallel vacuum workers don't use any buffer access strategy. I think\n> > we can fix it either by propagating the required information from the\n> > leader or just get the access strategy in each worker separately. The\n> > patches for both approaches for PG-13 are attached.\n> >\n> > Thoughts?\n> >\n> > [1] - https://www.postgresql.org/message-id/CAH2-Wz%3Dgf6FXW-jPVRdeCZk0QjhduCqH_XD3QbES9wPmhircuA%40mail.gmail.com\n>\n> Note: I have not followed the original discussion in [1].\n>\n> My understanding of the approach #1 i.e. propagating the vacuum\n> strategy down to the parallel vacuum workers from the leader is that\n> the same ring buffer (of 256KB for vacuum) will be used by both leader\n> and all the workers.\n>\n\nNo that is not the intention, each worker will use its ring buffer.\nThe first approach just passes the relevant information to workers so\nthat they can use the same strategy as used by the leader but both\nwill use separate ring buffer.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 08:44:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 5:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > During recent developments in the vacuum, it has been noticed [1] that\n> > parallel vacuum workers don't use any buffer access strategy. I think\n> > we can fix it either by propagating the required information from the\n> > leader or just get the access strategy in each worker separately. The\n> > patches for both approaches for PG-13 are attached.\n>\n> Thank you for starting the new thread.\n>\n> I'd prefer to just have parallel vacuum workers get BAS_VACUUM buffer\n> access strategy. If we want to have set different buffer access\n> strategies or ring buffer sizes for the leader and worker processes,\n> the former approach would be better. But I think we're unlikely to\n> want to do that, especially in back branches.\n>\n\nFair enough. Just to be clear, you prefer to go with\nfix_access_strategy_workers_11.patch, right?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 08:47:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 12:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 5:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > During recent developments in the vacuum, it has been noticed [1] that\n> > > parallel vacuum workers don't use any buffer access strategy. I think\n> > > we can fix it either by propagating the required information from the\n> > > leader or just get the access strategy in each worker separately. The\n> > > patches for both approaches for PG-13 are attached.\n> >\n> > Thank you for starting the new thread.\n> >\n> > I'd prefer to just have parallel vacuum workers get BAS_VACUUM buffer\n> > access strategy. If we want to have set different buffer access\n> > strategies or ring buffer sizes for the leader and worker processes,\n> > the former approach would be better. But I think we're unlikely to\n> > want to do that, especially in back branches.\n> >\n>\n> Fair enough. Just to be clear, you prefer to go with\n> fix_access_strategy_workers_11.patch, right?\n\nThat's right.\n\nIn HEAD, we fixed it in that way in commit f6b8f19.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:22:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 12:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 5:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 7, 2021 at 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > During recent developments in the vacuum, it has been noticed [1] that\n> > > > parallel vacuum workers don't use any buffer access strategy. I think\n> > > > we can fix it either by propagating the required information from the\n> > > > leader or just get the access strategy in each worker separately. The\n> > > > patches for both approaches for PG-13 are attached.\n> > >\n> > > Thank you for starting the new thread.\n> > >\n> > > I'd prefer to just have parallel vacuum workers get BAS_VACUUM buffer\n> > > access strategy. If we want to have set different buffer access\n> > > strategies or ring buffer sizes for the leader and worker processes,\n> > > the former approach would be better. But I think we're unlikely to\n> > > want to do that, especially in back branches.\n> > >\n> >\n> > Fair enough. Just to be clear, you prefer to go with\n> > fix_access_strategy_workers_11.patch, right?\n>\n> That's right.\n>\n\nOkay, I'll wait for a day or two to see if anyone else has comments or\nsuggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 08:54:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 8:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 7:12 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > During recent developments in the vacuum, it has been noticed [1] that\n> > > parallel vacuum workers don't use any buffer access strategy. I think\n> > > we can fix it either by propagating the required information from the\n> > > leader or just get the access strategy in each worker separately. The\n> > > patches for both approaches for PG-13 are attached.\n> > >\n> > > Thoughts?\n> > >\n> > > [1] - https://www.postgresql.org/message-id/CAH2-Wz%3Dgf6FXW-jPVRdeCZk0QjhduCqH_XD3QbES9wPmhircuA%40mail.gmail.com\n> >\n> > Note: I have not followed the original discussion in [1].\n> >\n> > My understanding of the approach #1 i.e. propagating the vacuum\n> > strategy down to the parallel vacuum workers from the leader is that\n> > the same ring buffer (of 256KB for vacuum) will be used by both leader\n> > and all the workers.\n> >\n>\n> No that is not the intention, each worker will use its ring buffer.\n> The first approach just passes the relevant information to workers so\n> that they can use the same strategy as used by the leader but both\n> will use separate ring buffer.\n\nThanks for the clarification. I understood now.\n\nOn the patch fix_access_strategy_workers_11.patch: can we have the\nmore descriptive comment like \"/* Each parallel VACUUM worker gets its\nown access strategy */\" that's introduced by commit f6b8f19 instead of\njust saying \"/* Set up vacuum access strategy */\" which is quite\nobvious from the function name GetAccessStrategy?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 09:42:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 9:42 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 8:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 7:12 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 7, 2021 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > During recent developments in the vacuum, it has been noticed [1] that\n> > > > parallel vacuum workers don't use any buffer access strategy. I think\n> > > > we can fix it either by propagating the required information from the\n> > > > leader or just get the access strategy in each worker separately. The\n> > > > patches for both approaches for PG-13 are attached.\n> > > >\n> > > > Thoughts?\n> > > >\n> > > > [1] - https://www.postgresql.org/message-id/CAH2-Wz%3Dgf6FXW-jPVRdeCZk0QjhduCqH_XD3QbES9wPmhircuA%40mail.gmail.com\n> > >\n> > > Note: I have not followed the original discussion in [1].\n> > >\n> > > My understanding of the approach #1 i.e. propagating the vacuum\n> > > strategy down to the parallel vacuum workers from the leader is that\n> > > the same ring buffer (of 256KB for vacuum) will be used by both leader\n> > > and all the workers.\n> > >\n> >\n> > No that is not the intention, each worker will use its ring buffer.\n> > The first approach just passes the relevant information to workers so\n> > that they can use the same strategy as used by the leader but both\n> > will use separate ring buffer.\n>\n> Thanks for the clarification. I understood now.\n>\n> On the patch fix_access_strategy_workers_11.patch: can we have the\n> more descriptive comment like \"/* Each parallel VACUUM worker gets its\n> own access strategy */\" that's introduced by commit f6b8f19 instead of\n> just saying \"/* Set up vacuum access strategy */\" which is quite\n> obvious from the function name GetAccessStrategy?\n>\n\nYeah, I will change that before commit unless there are more suggestions.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:21:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Yeah, I will change that before commit unless there are more suggestions.\n\nI have no further comments on the patch\nfix_access_strategy_workers_11.patch, it LGTM.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:37:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 12:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Yeah, I will change that before commit unless there are more suggestions.\n>\n> I have no further comments on the patch\n> fix_access_strategy_workers_11.patch, it LGTM.\n>\n\nThanks, seeing no further comments, I have pushed\nfix_access_strategy_workers_11.patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 09:41:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Set access strategy for parallel vacuum workers"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like we do allow $subject which has following behaviour:\ncreate sequence myseq restart 200; --> sequence is starting from\nrestart value overriding start value\ncreate sequence myseq start 100 restart 200; --> sequence is starting\nfrom restart value overriding start value\ncreate sequence myseq start 100 restart; --> sequence is starting from\nstart value no overriding of start value occurs\ncreate sequence myseq restart; --> sequence is starting from default\nstart value no overriding of start value occurs\n\nWhile we have documented the \"restart\" option behaviour for ALTER\nSEQUENCE, we have no mention of it in the CREATE SEQUENCE docs page.\nDo we need to document the above behaviour for CREATE SEQUENCE?\nAlternatively, do we need to throw an error if the user is not\nsupposed to use the \"restart\" option with CREATE SEQUENCE?\n\nIMO, allowing the \"restart\" option for CREATE SEQUENCE doesn't make\nsense when we have the \"start\" option, so it's better to throw an\nerror.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 15:55:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 3:56 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> It looks like we do allow $subject which has following behaviour:\n> create sequence myseq restart 200; --> sequence is starting from\n> restart value overriding start value\n> create sequence myseq start 100 restart 200; --> sequence is starting\n> from restart value overriding start value\n> create sequence myseq start 100 restart; --> sequence is starting from\n> start value no overriding of start value occurs\n> create sequence myseq restart; --> sequence is starting from default\n> start value no overriding of start value occurs\n>\n> While we have documented the \"restart\" option behaviour for ALTER\n> SEQUENCE, we have no mention of it in the CREATE SEQUENCE docs page.\n> Do we need to document the above behaviour for CREATE SEQUENCE?\n> Alternatively, do we need to throw an error if the user is not\n> supposed to use the \"restart\" option with CREATE SEQUENCE?\n>\n> IMO, allowing the \"restart\" option for CREATE SEQUENCE doesn't make\n> sense when we have the \"start\" option, so it's better to throw an\n> error.\n\nUsing restart in CREATE SEQUENCE command looks, umm, funny. But\nlooking at the code it makes me wonder whether it's intentional.\n\n1567 /* RESTART [WITH] */\n1568 if (restart_value != NULL)\n1569 {\n1570 if (restart_value->arg != NULL)\n1571 seqdataform->last_value = defGetInt64(restart_value);\n1572 else\n1573 seqdataform->last_value = seqform->seqstart;\n1574 seqdataform->is_called = false;\n1575 seqdataform->log_cnt = 0;\n1576 }\n1577 else if (isInit)\n1578 {\n1579 seqdataform->last_value = seqform->seqstart;\n1580 seqdataform->is_called = false;\n1581 }\n\n\"restart\" as the name suggests \"restarts\" a sequence from a given\nvalue or start of sequence. \"start\" on the other hand specifies the\n\"start\" value of sequence and is also the value used to \"restart\" by\ndefault from.\n\nSo here's what will happen in each of the cases you mentioned\n\n> create sequence myseq restart 200; --> sequence is starting from\n> restart value overriding start value\n\nthe first time sequence will be used it will use value 200, but if\nsomeone does a \"restart\" it will start from default start of that\nsequence.\n\n> create sequence myseq start 100 restart 200; --> sequence is starting\n> from restart value overriding start value\n\nthe first time sequence will be used it will use value 200, but if\nsomeone does a \"restart\" it will start from 100\n\n> create sequence myseq start 100 restart; --> sequence is starting from\n> start value no overriding of start value occurs\n\nthe first time sequence will be used it will use value 100, and if\nsomeone does a \"restart\" it will start from 100\n\n> create sequence myseq restart; --> sequence is starting from default\n> start value no overriding of start value occurs\n\nthis is equivalent to \"create sequence myseq\"\n\nThis is the behaviour implied when we read\nhttps://www.postgresql.org/docs/13/sql-createsequence.html and\nhttps://www.postgresql.org/docs/13/sql-altersequence.html together.\n\nAt best CREATE SEQUENCE .... START ... RESTART ... can be a shorthand\nfor CREATE SEQUENCE ... START; ALTER SEQUENCE ... RESTART run back to\nback. So it looks useful but in rare cases.\n\nSaid all that I agree that if we are supporting CREATE SEQUENCE ...\nRESTART then we should document it, correctly. If that's not the\nintention, we should disallow RESTART with CREATE SEQUENCE.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 7 Apr 2021 18:04:28 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 6:04 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> At best CREATE SEQUENCE .... START ... RESTART ... can be a shorthand\n> for CREATE SEQUENCE ... START; ALTER SEQUENCE ... RESTART run back to\n> back. So it looks useful but in rare cases.\n\nI personally feel that let's not mix up START and RESTART in CREATE\nSEQUENCE. If required, users will run ALTER SEQUENCE RESTART\nseparately, that will be a clean way.\n\n> Said all that I agree that if we are supporting CREATE SEQUENCE ...\n> RESTART then we should document it, correctly. If that's not the\n> intention, we should disallow RESTART with CREATE SEQUENCE.\n\nAs I mentioned upthread, it's better to disallow (throw error) if\nRESTART is specified for CREATE SEQUENCE. Having said that, I would\nlike to hear from others.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 18:51:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 6:52 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 6:04 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > At best CREATE SEQUENCE .... START ... RESTART ... can be a shorthand\n> > for CREATE SEQUENCE ... START; ALTER SEQUENCE ... RESTART run back to\n> > back. So it looks useful but in rare cases.\n>\n> I personally feel that let's not mix up START and RESTART in CREATE\n> SEQUENCE. If required, users will run ALTER SEQUENCE RESTART\n> separately, that will be a clean way.\n>\n> > Said all that I agree that if we are supporting CREATE SEQUENCE ...\n> > RESTART then we should document it, correctly. If that's not the\n> > intention, we should disallow RESTART with CREATE SEQUENCE.\n>\n> As I mentioned upthread, it's better to disallow (throw error) if\n> RESTART is specified for CREATE SEQUENCE. Having said that, I would\n> like to hear from others.\n>\n\nFWIW, +1.\n\nThe RESTART clause in the CREATE SEQUENCE doesn't make sense\nto me, it should be restricted, IMO.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:08:35 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 10:09 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 6:52 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 6:04 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > At best CREATE SEQUENCE .... START ... RESTART ... can be a shorthand\n> > > for CREATE SEQUENCE ... START; ALTER SEQUENCE ... RESTART run back to\n> > > back. So it looks useful but in rare cases.\n> >\n> > I personally feel that let's not mix up START and RESTART in CREATE\n> > SEQUENCE. If required, users will run ALTER SEQUENCE RESTART\n> > separately, that will be a clean way.\n> >\n> > > Said all that I agree that if we are supporting CREATE SEQUENCE ...\n> > > RESTART then we should document it, correctly. If that's not the\n> > > intention, we should disallow RESTART with CREATE SEQUENCE.\n> >\n> > As I mentioned upthread, it's better to disallow (throw error) if\n> > RESTART is specified for CREATE SEQUENCE. Having said that, I would\n> > like to hear from others.\n> >\n>\n> FWIW, +1.\n>\n> The RESTART clause in the CREATE SEQUENCE doesn't make sense\n> to me, it should be restricted, IMO.\n\nThanks! Attaching a patch that throws an error if the RESTART option\nis specified with CREATE SEQUENCE. Please have a look and let me know\nif the error message wording is fine or not. Is it better to include\nthe reason as to why we disallow something like \"Because it may\noverride the START option.\" in err_detail along with the error\nmessage?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 8 Apr 2021 14:02:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 2:03 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n>\n> >\n> > The RESTART clause in the CREATE SEQUENCE doesn't make sense\n> > to me, it should be restricted, IMO.\n>\n\n+1\n\n\n>\n> Thanks! Attaching a patch that throws an error if the RESTART option\n> is specified with CREATE SEQUENCE. Please have a look and let me know\n> if the error message wording is fine or not. Is it better to include\n> the reason as to why we disallow something like \"Because it may\n> override the START option.\" in err_detail along with the error\n> message?\n>\n\nPatch looks good to me. Current error message looks ok to me.\nDo we need to add double quotes for RESTART word in the error message since\nit is an option?\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com\n\nOn Thu, Apr 8, 2021 at 2:03 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> The RESTART clause in the CREATE SEQUENCE doesn't make sense\n> to me, it should be restricted, IMO. +1 \n\nThanks! Attaching a patch that throws an error if the RESTART option\nis specified with CREATE SEQUENCE. Please have a look and let me know\nif the error message wording is fine or not. Is it better to include\nthe reason as to why we disallow something like \"Because it may\noverride the START option.\" in err_detail along with the error\nmessage? Patch looks good to me. Current error message looks ok to me.Do we need to add double quotes for RESTART word in the error message since it is an option?-- --Thanks & Regards, Suraj kharage, edbpostgres.com",
"msg_date": "Thu, 8 Apr 2021 15:16:26 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 3:16 PM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n>> > The RESTART clause in the CREATE SEQUENCE doesn't make sense\n>> > to me, it should be restricted, IMO.\n>\n> +1\n>\n>>\n>> Thanks! Attaching a patch that throws an error if the RESTART option\n>> is specified with CREATE SEQUENCE. Please have a look and let me know\n>> if the error message wording is fine or not. Is it better to include\n>> the reason as to why we disallow something like \"Because it may\n>> override the START option.\" in err_detail along with the error\n>> message?\n>\n>\n> Patch looks good to me. Current error message looks ok to me.\n> Do we need to add double quotes for RESTART word in the error message since it is an option?\n\nThanks for taking a look at the patch. Looks like the other options\nare used in the error message without quotes, see\n\"MINVALUE (%s) is out of range for sequence data type\n\"START value (%s) cannot be less than MINVALUE\n\"RESTART value (%s) cannot be less than MINVALUE\n\"CACHE (%s) must be greater than zero\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:04:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi\r\n\r\nI have applied and run your patch, which works fine in my environment. Regarding your comments in the patch:\r\n\r\n/*\r\n * Restarting a sequence while defining it doesn't make any sense\r\n * and it may override the START value. Allowing both START and\r\n * RESTART option for CREATE SEQUENCE may cause confusion to user.\r\n * Hence, we throw error for CREATE SEQUENCE if RESTART option is\r\n * specified. However, it can be used with ALTER SEQUENCE.\r\n */\r\n\r\nI would remove the first sentence, because it seems like a personal opinion to me. I am sure someone, somewhere may think it makes total sense :).\r\n\r\nI would rephrase like this:\r\n\r\n/* \r\n * Allowing both START and RESTART option for CREATE SEQUENCE \r\n * could override the START value and cause confusion to user. Hence, \r\n * we throw an error for CREATE SEQUENCE if RESTART option is\r\n * specified; it can only be used with ALTER SEQUENCE.\r\n */\r\n\r\njust a thought.\r\n\r\nthanks!\r\n\r\n-------------------------------------\r\nCary Huang\r\nHighGo Software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 23 Jul 2021 21:49:24 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 3:20 AM Cary Huang <cary.huang@highgo.ca> wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> Hi\n>\n> I have applied and run your patch, which works fine in my environment. Regarding your comments in the patch:\n\nThanks for the review.\n\n> /*\n> * Restarting a sequence while defining it doesn't make any sense\n> * and it may override the START value. Allowing both START and\n> * RESTART option for CREATE SEQUENCE may cause confusion to user.\n> * Hence, we throw error for CREATE SEQUENCE if RESTART option is\n> * specified. However, it can be used with ALTER SEQUENCE.\n> */\n>\n> I would remove the first sentence, because it seems like a personal opinion to me. I am sure someone, somewhere may think it makes total sense :).\n\nAgree.\n\n> I would rephrase like this:\n>\n> /*\n> * Allowing both START and RESTART option for CREATE SEQUENCE\n> * could override the START value and cause confusion to user. Hence,\n> * we throw an error for CREATE SEQUENCE if RESTART option is\n> * specified; it can only be used with ALTER SEQUENCE.\n> */\n>\n> just a thought.\n\nLGTM. PSA v2 patch.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 24 Jul 2021 21:56:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 09:56:40PM +0530, Bharath Rupireddy wrote:\n> LGTM. PSA v2 patch.\n\nFWIW, like Ashutosh upthread, my vote would be to do nothing here in\nterms of behavior changes as this is just breaking a behavior for the\nsake of breaking it, so there are chances that this is going to piss\nsome users that relied accidentally on the existing behavior.\n\nI think that we should document that RESTART is accepted within the\nset of options as a consequence of the set of options supported by\ngram.y, though.\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 16:57:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 04:57:53PM +0900, Michael Paquier wrote:\n> FWIW, like Ashutosh upthread, my vote would be to do nothing here in\n> terms of behavior changes as this is just breaking a behavior for the\n> sake of breaking it, so there are chances that this is going to piss\n> some users that relied accidentally on the existing behavior.\n\nIn short, I would be tempted with something like the attached, that\ndocuments RESTART in CREATE SEQUENCE, while describing its behavior\naccording to START. In terms of regression tests, there is already a\nlot in this area with ALTER SEQUENCE, but I think that having two\ntests makes sense for CREATE SEQUENCE: one for RESTART without a\nvalue and one with, where both explicitely set START.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 15:20:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 11:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 26, 2021 at 04:57:53PM +0900, Michael Paquier wrote:\n> > FWIW, like Ashutosh upthread, my vote would be to do nothing here in\n> > terms of behavior changes as this is just breaking a behavior for the\n> > sake of breaking it, so there are chances that this is going to piss\n> > some users that relied accidentally on the existing behavior.\n>\n> In short, I would be tempted with something like the attached, that\n> documents RESTART in CREATE SEQUENCE, while describing its behavior\n> according to START. In terms of regression tests, there is already a\n> lot in this area with ALTER SEQUENCE, but I think that having two\n> tests makes sense for CREATE SEQUENCE: one for RESTART without a\n> value and one with, where both explicitely set START.\n>\n> Thoughts?\n\n-1. IMHO, this is something creating more confusion to the user. We\nsay that we allow both START and RESTART that RESTART is accepted as a\nconsequence of our internal option handling in gram.y. Instead, I\nrecommend throwing errorConflictingDefElem or errmsg(\"START and\nRESTART are mutually exclusive options\"). We do throw these errors in\na lot of other places for various options. Others may have better\nthoughts though.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 28 Jul 2021 20:23:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "\n\nOn 2021/07/28 23:53, Bharath Rupireddy wrote:\n> On Wed, Jul 28, 2021 at 11:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Mon, Jul 26, 2021 at 04:57:53PM +0900, Michael Paquier wrote:\n>>> FWIW, like Ashutosh upthread, my vote would be to do nothing here in\n>>> terms of behavior changes as this is just breaking a behavior for the\n>>> sake of breaking it, so there are chances that this is going to piss\n>>> some users that relied accidentally on the existing behavior.\n>>\n>> In short, I would be tempted with something like the attached, that\n>> documents RESTART in CREATE SEQUENCE, while describing its behavior\n>> according to START. In terms of regression tests, there is already a\n>> lot in this area with ALTER SEQUENCE, but I think that having two\n>> tests makes sense for CREATE SEQUENCE: one for RESTART without a\n>> value and one with, where both explicitely set START.\n>>\n>> Thoughts?\n> \n> -1. IMHO, this is something creating more confusion to the user. We\n> say that we allow both START and RESTART that RESTART is accepted as a\n> consequence of our internal option handling in gram.y. Instead, I\n> recommend throwing errorConflictingDefElem or errmsg(\"START and\n> RESTART are mutually exclusive options\"). We do throw these errors in\n> a lot of other places for various options. Others may have better\n> thoughts though.\n\nPer docs, CREATE SEQUENCE conforms to the SQL standard, with some exceptions.\nSo I'd agree with Michael if CREATE SEQUENCE with RESTART also conforms to\nthe SQL standard, but I'd agree with Bharath otherwise.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 29 Jul 2021 01:58:41 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2021/07/28 23:53, Bharath Rupireddy wrote:\n>> -1. IMHO, this is something creating more confusion to the user. We\n>> say that we allow both START and RESTART that RESTART is accepted as a\n>> consequence of our internal option handling in gram.y. Instead, I\n>> recommend throwing errorConflictingDefElem or errmsg(\"START and\n>> RESTART are mutually exclusive options\"). We do throw these errors in\n>> a lot of other places for various options. Others may have better\n>> thoughts though.\n\n> Per docs, CREATE SEQUENCE conforms to the SQL standard, with some exceptions.\n> So I'd agree with Michael if CREATE SEQUENCE with RESTART also conforms to\n> the SQL standard, but I'd agree with Bharath otherwise.\n\nI do not see any RESTART option in SQL:2021 11.72 <sequence generator\ndefinition>. Since we don't document it either, there's really no\nexpectation that anyone would use it.\n\nI don't particularly think that we should document it, so I'm with Michael\nthat we don't need to do anything. This is hardly the only undocumented\ncorner case in PG.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:16:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 01:16:19PM -0400, Tom Lane wrote:\n> I do not see any RESTART option in SQL:2021 11.72 <sequence generator\n> definition>. Since we don't document it either, there's really no\n> expectation that anyone would use it.\n\nOkay, good point. I was not aware of that.\n\n> I don't particularly think that we should document it, so I'm with Michael\n> that we don't need to do anything. This is hardly the only undocumented\n> corner case in PG.\n\nMore regression tests for CREATE SEQUENCE may be interesting, but\nthat's not an issue either with the ones for ALTER SEQUENCE. Let's\ndrop the patch and move on. \n--\nMichael",
"msg_date": "Thu, 29 Jul 2021 08:49:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SEQUENCE with RESTART option"
}
] |
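An illustrative Python sketch of the two behaviors debated in the thread above: the current one, where a bare RESTART in CREATE SEQUENCE simply falls back to the START value while RESTART <n> overrides it, and the proposed rejection of the combination. The function name and the (name, value) tuple representation are invented for the sketch; this is not PostgreSQL's actual option handling, which walks a List of DefElem nodes in C.

```python
def resolve_initial_value(options, reject_restart=False):
    """options: (name, value) pairs as the grammar hands them over;
    value is None for a bare RESTART keyword (hypothetical representation)."""
    start = 1                 # default START for an ascending sequence
    restart = None
    seen_start = seen_restart = False
    for name, value in options:
        if name == "start":
            seen_start = True
            start = value
        elif name == "restart":
            seen_restart = True
            restart = value
    if reject_restart and seen_start and seen_restart:
        # the proposed alternative: treat the pair as conflicting options
        raise ValueError("START and RESTART are mutually exclusive options")
    if seen_restart:
        # current behavior: RESTART <n> overrides START, while a bare
        # RESTART re-reads the START value
        return restart if restart is not None else start
    return start
```

For example, `resolve_initial_value([("start", 5), ("restart", None)])` returns 5, matching the observation that a bare RESTART re-reads START, while calling with `reject_restart=True` raises instead of silently resolving.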
[
{
"msg_contents": "Hi,\n\nWe generally throw an error when create table options are specified\nmore than once, see below:\npostgres=# create table t1(a1 int) with (fillfactor = 10, fillfactor = 15);\nERROR: parameter \"fillfactor\" specified more than once\n\nAlthough \"with oids\" support is removed by the commit 578b229718 and\nwe do still support with (oids = false) as a no-op which may be for\nbackward compatibility. But, why do we need to allow specifying oids =\nfalse multiple times(see below)? Shouldn't we throw an error for\nconsistency with other options?\npostgres=# create table t1(a1 int) with (oids = false, oids = false,\noids = false);\nCREATE TABLE\n\nAnd also, the commit 578b229718 talks about removing \"with (oids =\nfalse)\" someday. Is it the time now to remove that and error out with\n\"unrecognized parameter \"oids\"\"?\n /*\n * This is not a great place for this test, but there's no other\n * convenient place to filter the option out. As WITH (oids =\n * false) will be removed someday, this seems like an acceptable\n * amount of ugly.\n */\npostgres=# create table t1(a1 int) with (oids = 10);\nERROR: unrecognized parameter \"oids\"\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 16:00:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why is specifying oids = false multiple times in create table is\n silently ignored?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 04:00:46PM +0530, Bharath Rupireddy wrote:\n> And also, the commit 578b229718 talks about removing \"with (oids =\n> false)\" someday. Is it the time now to remove that and error out with\n> \"unrecognized parameter \"oids\"\"?\n\nNope, and I think that it will remain around for some time. Keeping\naround the code necessary to silence WITH OIDS has no real maintenance\ncost, and removing it could easily break applications. So there is\nlittle gain in cleaning up that, and a lot of potential loss for\nusers.\n--\nMichael",
"msg_date": "Wed, 7 Apr 2021 19:50:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is specifying oids = false multiple times in create table is\n silently ignored?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 4:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 07, 2021 at 04:00:46PM +0530, Bharath Rupireddy wrote:\n> > And also, the commit 578b229718 talks about removing \"with (oids =\n> > false)\" someday. Is it the time now to remove that and error out with\n> > \"unrecognized parameter \"oids\"\"?\n>\n> Nope, and I think that it will remain around for some time. Keeping\n> around the code necessary to silence WITH OIDS has no real maintenance\n> cost, and removing it could easily break applications. So there is\n> little gain in cleaning up that, and a lot of potential loss for\n> users.\n\nI agree to not remove \"with (oids = false)\". At least shouldn't we fix\nthe \"create table ... with (oids = false, oids = false ....)\" case,\njust to be consistent with other options?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 18:55:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is specifying oids = false multiple times in create table is\n silently ignored?"
},
{
"msg_contents": "On Wed, Apr 7, 2021, at 10:25 AM, Bharath Rupireddy wrote:\n> On Wed, Apr 7, 2021 at 4:20 PM Michael Paquier <michael@paquier.xyz <mailto:michael%40paquier.xyz>> wrote:\n> >\n> > On Wed, Apr 07, 2021 at 04:00:46PM +0530, Bharath Rupireddy wrote:\n> > > And also, the commit 578b229718 talks about removing \"with (oids =\n> > > false)\" someday. Is it the time now to remove that and error out with\n> > > \"unrecognized parameter \"oids\"\"?\n> >\n> > Nope, and I think that it will remain around for some time. Keeping\n> > around the code necessary to silence WITH OIDS has no real maintenance\n> > cost, and removing it could easily break applications. So there is\n> > little gain in cleaning up that, and a lot of potential loss for\n> > users.\n> \n> I agree to not remove \"with (oids = false)\". At least shouldn't we fix\n> the \"create table ... with (oids = false, oids = false ....)\" case,\n> just to be consistent with other options?\nIt would be weird to error out while parsing a no-op option, no?\n\n> But, why do we need to allow specifying oids = false multiple times(see\n> below)? Shouldn't we throw an error for consistency with other options?\n>\nIf you look at transformReloptions(), the no-op code is just a hack. Such a\npatch should add 'oids' as a reloption to test for multiple occurrences.\nAlthough, CREATE TABLE says you can use 'oids=false', Storage Parameters\nsection does not mention it as a parameter. The code is fine as is.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Apr 7, 2021, at 10:25 AM, Bharath Rupireddy wrote:On Wed, Apr 7, 2021 at 4:20 PM Michael Paquier <michael@paquier.xyz> wrote:>> On Wed, Apr 07, 2021 at 04:00:46PM +0530, Bharath Rupireddy wrote:> > And also, the commit 578b229718 talks about removing \"with (oids => > false)\" someday. Is it the time now to remove that and error out with> > \"unrecognized parameter \"oids\"\"?>> Nope, and I think that it will remain around for some time. 
Keeping> around the code necessary to silence WITH OIDS has no real maintenance> cost, and removing it could easily break applications. So there is> little gain in cleaning up that, and a lot of potential loss for> users.I agree to not remove \"with (oids = false)\". At least shouldn't we fixthe \"create table ... with (oids = false, oids = false ....)\" case,just to be consistent with other options?It would be weird to error out while parsing a no-op option, no?> But, why do we need to allow specifying oids = false multiple times(see> below)? Shouldn't we throw an error for consistency with other options?>If you look at transformReloptions(), the no-op code is just a hack. Such apatch should add 'oids' as a reloption to test for multiple occurrences.Although, CREATE TABLE says you can use 'oids=false', Storage Parameterssection does not mention it as a parameter. The code is fine as is.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 07 Apr 2021 11:09:41 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re:_Why_is_specifying_oids_=3D_false_multiple_times_in_create_?=\n =?UTF-8?Q?table_is_silently_ignored=3F?="
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 11:09:41AM -0300, Euler Taveira wrote:\n> On Wed, Apr 7, 2021, at 10:25 AM, Bharath Rupireddy wrote:\n>> I agree to not remove \"with (oids = false)\". At least shouldn't we fix\n>> the \"create table ... with (oids = false, oids = false ....)\" case,\n>> just to be consistent with other options?\n>\n> It would be weird to error out while parsing a no-op option, no?\n\nThere is an argument to be made both ways here.\n\n>> But, why do we need to allow specifying oids = false multiple times(see\n>> below)? Shouldn't we throw an error for consistency with other options?\n>>\n>\n> If you look at transformReloptions(), the no-op code is just a hack. Such a\n> patch should add 'oids' as a reloption to test for multiple occurrences.\n> Although, CREATE TABLE says you can use 'oids=false', Storage Parameters\n> section does not mention it as a parameter. The code is fine as is.\n\nBut I agree with letting what we have here as it is, per the same\nargument of upthread that this could just break stuff for free, and\nthat's not a maintenance burden either.\n--\nMichael",
"msg_date": "Thu, 8 Apr 2021 09:17:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is specifying oids = false multiple times in create table is\n silently ignored?"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-08 09:17:42 +0900, Michael Paquier wrote:\n> On Wed, Apr 07, 2021 at 11:09:41AM -0300, Euler Taveira wrote:\n> > On Wed, Apr 7, 2021, at 10:25 AM, Bharath Rupireddy wrote:\n> >> I agree to not remove \"with (oids = false)\". At least shouldn't we fix\n> >> the \"create table ... with (oids = false, oids = false ....)\" case,\n> >> just to be consistent with other options?\n> >\n> > It would be weird to error out while parsing a no-op option, no?\n> \n> There is an argument to be made both ways here.\n\n> >> But, why do we need to allow specifying oids = false multiple times(see\n> >> below)? Shouldn't we throw an error for consistency with other options?\n> >>\n> >\n> > If you look at transformReloptions(), the no-op code is just a hack. Such a\n> > patch should add 'oids' as a reloption to test for multiple occurrences.\n> > Although, CREATE TABLE says you can use 'oids=false', Storage Parameters\n> > section does not mention it as a parameter. The code is fine as is.\n> \n> But I agree with letting what we have here as it is, per the same\n> argument of upthread that this could just break stuff for free, and\n> that's not a maintenance burden either.\n\nAgreed.\n\nGiven that this case didn't error out before the OIDs removal, it seems\nlike it'd be really strange to make it error out in the compat code...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Apr 2021 17:26:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is specifying oids = false multiple times in create table is\n silently ignored?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 5:56 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-08 09:17:42 +0900, Michael Paquier wrote:\n> > On Wed, Apr 07, 2021 at 11:09:41AM -0300, Euler Taveira wrote:\n> > > On Wed, Apr 7, 2021, at 10:25 AM, Bharath Rupireddy wrote:\n> > >> I agree to not remove \"with (oids = false)\". At least shouldn't we fix\n> > >> the \"create table ... with (oids = false, oids = false ....)\" case,\n> > >> just to be consistent with other options?\n> > >\n> > > It would be weird to error out while parsing a no-op option, no?\n> >\n> > There is an argument to be made both ways here.\n>\n> > >> But, why do we need to allow specifying oids = false multiple times(see\n> > >> below)? Shouldn't we throw an error for consistency with other options?\n> > >>\n> > >\n> > > If you look at transformReloptions(), the no-op code is just a hack. Such a\n> > > patch should add 'oids' as a reloption to test for multiple occurrences.\n> > > Although, CREATE TABLE says you can use 'oids=false', Storage Parameters\n> > > section does not mention it as a parameter. The code is fine as is.\n> >\n> > But I agree with letting what we have here as it is, per the same\n> > argument of upthread that this could just break stuff for free, and\n> > that's not a maintenance burden either.\n>\n> Agreed.\n>\n> Given that this case didn't error out before the OIDs removal, it seems\n> like it'd be really strange to make it error out in the compat code...\n\nAgreed to not error out for a no-op case i.e. with (oids = false, oids\n= false). Thank you all for providing thoughts. I'm ending the\ndiscussion here.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 07:39:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is specifying oids = false multiple times in create table is\n silently ignored?"
}
] |
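The no-op filtering Euler points to in transformRelOptions() can be mimicked with a short Python sketch (function name and error strings are modeled on the thread; this is not the C implementation): the oids entry is dropped before the duplicate check runs, which is exactly why repeating it never trips the \"specified more than once\" error.

```python
def transform_reloptions(defs):
    """defs: (name, value) pairs from WITH (...).  Mimics the oids no-op
    filtering plus the duplicate-option check discussed above."""
    seen = {}
    for name, value in defs:
        if name == "oids":
            # assumed message shape; the point is only that non-false
            # values are rejected while oids = false is a no-op
            if value is True:
                raise ValueError("tables declared WITH OIDS are not supported")
            if value is not False:
                raise ValueError('unrecognized parameter "oids"')
            # filtered out before the duplicate check below, so
            # repeating oids = false is silently accepted
            continue
        if name in seen:
            raise ValueError(f'parameter "{name}" specified more than once')
        seen[name] = value
    return seen
```

With this shape, `transform_reloptions([("oids", False), ("oids", False)])` succeeds with an empty result, while `[("fillfactor", 10), ("fillfactor", 15)]` raises the duplicate error, reproducing the asymmetry the thread discusses.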
[
{
"msg_contents": "Recently (last day or so), I get this warning from gcc 10.2:\n\n-----\nhba.c:3160:18: warning: comparison of unsigned enum expression < 0 is always false [-Wtautological-compare]\n if (auth_method < 0 || USER_AUTH_LAST < auth_method)\n ~~~~~~~~~~~ ^ ~\n1 warning generated.\n-----\n\nErik\n\n\n",
"msg_date": "Wed, 7 Apr 2021 13:00:48 +0200 (CEST)",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 1:01 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Recently (last day or so), I get this warning from gcc 10.2:\n>\n> -----\n> hba.c:3160:18: warning: comparison of unsigned enum expression < 0 is always false [-Wtautological-compare]\n> if (auth_method < 0 || USER_AUTH_LAST < auth_method)\n> ~~~~~~~~~~~ ^ ~\n> 1 warning generated.\n> -----\n\nThis one is from 9afffcb833d3c5e59a328a2af674fac7e7334fc1 (adding\nJacob and Michael to cc)\n\nAnd it makes sense to give warning on that. AuthMethod is an enum. It\ncan by definition not have a value that's not in the enum. That check\nsimply seems wrong/unnecessary.\n\nThe only other use fo USER_AUTH_LAST is in fill_hba_line() which also\ngets the name of the auth. That one uses :\n StaticAssertStmt(lengthof(UserAuthName) == USER_AUTH_LAST + 1,\n \"UserAuthName[] must match the UserAuth enum\");\n\nWhich seems like a more reasonable check.\n\nBut that also highlights -- shouldn't that function then also be made\nto use hba_authname(), and the assert moved into the function? That\nseems like the cleanest way?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 7 Apr 2021 13:24:01 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 1:24 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Wed, Apr 7, 2021 at 1:01 PM Erik Rijkers <er@xs4all.nl> wrote:\n> >\n> > Recently (last day or so), I get this warning from gcc 10.2:\n> >\n> > -----\n> > hba.c:3160:18: warning: comparison of unsigned enum expression < 0 is always false [-Wtautological-compare]\n> > if (auth_method < 0 || USER_AUTH_LAST < auth_method)\n> > ~~~~~~~~~~~ ^ ~\n> > 1 warning generated.\n> > -----\n>\n> This one is from 9afffcb833d3c5e59a328a2af674fac7e7334fc1 (adding\n> Jacob and Michael to cc)\n>\n> And it makes sense to give warning on that. AuthMethod is an enum. It\n> can by definition not have a value that's not in the enum. That check\n> simply seems wrong/unnecessary.\n>\n> The only other use fo USER_AUTH_LAST is in fill_hba_line() which also\n> gets the name of the auth. That one uses :\n> StaticAssertStmt(lengthof(UserAuthName) == USER_AUTH_LAST + 1,\n> \"UserAuthName[] must match the UserAuth enum\");\n>\n> Which seems like a more reasonable check.\n>\n> But that also highlights -- shouldn't that function then also be made\n> to use hba_authname(), and the assert moved into the function? That\n> seems like the cleanest way?\n\n\nSo to be clear, this is what I'm suggesting.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 7 Apr 2021 13:32:37 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 01:24:01PM +0200, Magnus Hagander wrote:\n> On Wed, Apr 7, 2021 at 1:01 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > Recently (last day or so), I get this warning from gcc 10.2:\n\nSame compiler version here, but I did not get warned. Are you using\nany particular flag?\n\n> But that also highlights -- shouldn't that function then also be made\n> to use hba_authname(), and the assert moved into the function? That\n> seems like the cleanest way?\n\nGood idea, that's much cleaner this way. Do you like the attached?\n--\nMichael",
"msg_date": "Wed, 7 Apr 2021 20:57:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 1:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 07, 2021 at 01:24:01PM +0200, Magnus Hagander wrote:\n> > On Wed, Apr 7, 2021 at 1:01 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > Recently (last day or so), I get this warning from gcc 10.2:\n>\n> Same compiler version here, but I did not get warned. Are you using\n> any particular flag?\n>\n> > But that also highlights -- shouldn't that function then also be made\n> > to use hba_authname(), and the assert moved into the function? That\n> > seems like the cleanest way?\n>\n> Good idea, that's much cleaner this way. Do you like the attached?\n\nThat's very close to mine (see one email later). Let's bikeshed about\nthe details. I think it's basically the same for current usecases, but\nthat taking the UserAuth as the parameter is cleaner and potentially\nmore useful for the future.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 7 Apr 2021 14:01:42 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 02:01:42PM +0200, Magnus Hagander wrote:\n> That's very close to mine (see one email later). Let's bikeshed about\n> the details. I think it's basically the same for current usecases, but\n> that taking the UserAuth as the parameter is cleaner and potentially\n> more useful for the future.\n\nMissed it, sorry about that. Using UserAuth as argument is fine by\nme. If you wish to apply that, please feel free. I am fine to do\nthat myself, but that will have to wait until tomorrow my time.\n--\nMichael",
"msg_date": "Wed, 7 Apr 2021 21:17:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "> On 2021.04.07. 13:57 Michael Paquier <michael@paquier.xyz> wrote:\n> \n> \n> On Wed, Apr 07, 2021 at 01:24:01PM +0200, Magnus Hagander wrote:\n> > On Wed, Apr 7, 2021 at 1:01 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > Recently (last day or so), I get this warning from gcc 10.2:\n\n> [gcc-hba-warning.patch]\n\nFWIW, this fixes the warning.\n\n(and no, I don't think I am using special gcc settings..)\n\nErik\n\n\n",
"msg_date": "Wed, 7 Apr 2021 14:18:13 +0200 (CEST)",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 2:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 07, 2021 at 02:01:42PM +0200, Magnus Hagander wrote:\n> > That's very close to mine (see one email later). Let's bikeshed about\n> > the details. I think it's basically the same for current usecases, but\n> > that taking the UserAuth as the parameter is cleaner and potentially\n> > more useful for the future.\n>\n> Missed it, sorry about that. Using UserAuth as argument is fine by\n> me. If you wish to apply that, please feel free. I am fine to do\n> that myself, but that will have to wait until tomorrow my time.\n\nOk, I'll go ahead and push it. Thanks for confirming the fix!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 7 Apr 2021 14:20:25 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 02:20:25PM +0200, Magnus Hagander wrote:\n> Ok, I'll go ahead and push it. Thanks for confirming the fix!\n\nCool. Thanks!\n--\nMichael",
"msg_date": "Wed, 7 Apr 2021 21:24:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: hba.c:3160:18: warning: comparison of unsigned enum expression"
}
] |
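A Python analogue of the arrangement the thread settles on: a name table whose length is checked against the last enum member (the counterpart of the StaticAssertStmt), with range validation happening inside the lookup function instead of the tautological `auth_method < 0` comparison. The enum members here are an assumed subset of the real UserAuth in hba.h; only the shape of the pattern mirrors the C code.

```python
from enum import IntEnum

class UserAuth(IntEnum):
    """Assumed subset of the real UserAuth enum, for illustration only."""
    REJECT = 0
    TRUST = 1
    PASSWORD = 2
    MD5 = 3
    LAST = MD5           # alias, mirroring USER_AUTH_LAST

# counterpart of the StaticAssertStmt: the name table must cover every member
USER_AUTH_NAMES = ["reject", "trust", "password", "md5"]
assert len(USER_AUTH_NAMES) == UserAuth.LAST + 1

def hba_authname(auth_method):
    # UserAuth(...) raises ValueError for an out-of-range value, so no
    # separate "auth_method < 0 || USER_AUTH_LAST < auth_method" check
    # is needed -- the validation lives inside the lookup function
    return USER_AUTH_NAMES[UserAuth(auth_method)]
```

Keeping the length assertion next to the name table means adding a member without a name fails immediately, which is the property the static assert gives the C code.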
[
{
"msg_contents": "I was looking at changes in Sp-Gist by\ncommit 4c0239cb7a7775e3183cb575e62703d71bf3302d\n(discussion\nhttps://postgr.es/m/CALj2ACViOo2qyaPT7krWm4LRyRTw9kOXt+g6PfNmYuGA=YHj9A@mail.gmail.com\n) and realized that during PageInit, both page header and page special are\nexpected to be maxaligned but in reality, their treatment is quite\ndifferent:\n1. page special size is silently enforced to be maxaligned by PageInit()\neven if caller-specified specialSize is not of a maxalign'ed size.\n2. page header size alignment is not checked at all (but we expect it\nmaxalign'ed, yes).\n\nI'd propose do both things in the same way: just Assert both sizes are\nmaxalign'ed during page init.\n\nI dived further and it appears that the only caller, who provides not\nproperly aligned page special is fill_seq_with_data() and corrected it.\n\nI am really convinced, that _callers_ should care about proper special\nsize. So now PageInit() just checks the right lengths of page special and\npage header with assert, not enforcing size change silently. PFA my small\npatch on this. I'd propose it to commit if in the HEAD only likewise the\ncommit 4c0239cb7a7775e3183cb575e62703d71bf3302d.\n\nWhat do you think?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 7 Apr 2021 16:02:18 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Improve treatment of page special and page header alignment\n during page init."
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 5:32 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> I was looking at changes in Sp-Gist by commit 4c0239cb7a7775e3183cb575e62703d71bf3302d\n> (discussion\n> https://postgr.es/m/CALj2ACViOo2qyaPT7krWm4LRyRTw9kOXt+g6PfNmYuGA=YHj9A@mail.gmail.com ) and realized that during PageInit, both page header and page special are expected to be maxaligned but in reality, their treatment is quite different:\n\nHow can we say that in PageInit the SizeOfPageHeaderData is expected\nto be max aligned? Am I missing something? There are lots of other\nplaces where SizeOfPageHeaderData is used, not\nMAXALIGN(SizeOfPageHeaderData).\n\n> 1. page special size is silently enforced to be maxaligned by PageInit() even if caller-specified specialSize is not of a maxalign'ed size.\n> 2. page header size alignment is not checked at all (but we expect it maxalign'ed, yes).\n>\n> I'd propose do both things in the same way: just Assert both sizes are maxalign'ed during page init.\n>\n> I dived further and it appears that the only caller, who provides not properly aligned page special is fill_seq_with_data() and corrected it.\n>\n> I am really convinced, that _callers_ should care about proper special size. So now PageInit() just checks the right lengths of page special and page header with assert, not enforcing size change silently. PFA my small patch on this. I'd propose it to commit if in the HEAD only likewise the commit 4c0239cb7a7775e3183cb575e62703d71bf3302d.\n>\n> What do you think?\n\nI still feel that for special size let callers call PageInit with\nsizeof(special_structure) and PageInit do the alignment. Others may\nhave different opinion.\n\nOn the patch itself, how can we say that other special sizes are max\naligned except sequence_magic structure?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 19:25:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve treatment of page special and page header\n alignment during page init."
},
{
"msg_contents": "ср, 7 апр. 2021 г. в 17:55, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com>:\n\n> On Wed, Apr 7, 2021 at 5:32 PM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> >\n> > I was looking at changes in Sp-Gist by commit\n> 4c0239cb7a7775e3183cb575e62703d71bf3302d\n> > (discussion\n> >\n> https://postgr.es/m/CALj2ACViOo2qyaPT7krWm4LRyRTw9kOXt+g6PfNmYuGA=YHj9A@mail.gmail.com\n> ) and realized that during PageInit, both page header and page special are\n> expected to be maxaligned but in reality, their treatment is quite\n> different:\n>\n> How can we say that in PageInit the SizeOfPageHeaderData is expected\n> to be max aligned? Am I missing something? There are lots of other\n> places where SizeOfPageHeaderData is used, not\n> MAXALIGN(SizeOfPageHeaderData).\n>\nIts maxalign is ensured by its size of 24bytes (which is maxalign'ed). I\nthink if we change this to not-maxalign'ed value bad things can happen. So\nI've added assert checking for this value. I think it is similar situation\nfor both page header and page special, I wonder why they've been treated\ndifferently in PageInit.\n\n\n> > 1. page special size is silently enforced to be maxaligned by PageInit()\n> even if caller-specified specialSize is not of a maxalign'ed size.\n> > 2. page header size alignment is not checked at all (but we expect it\n> maxalign'ed, yes).\n> >\n> > I'd propose do both things in the same way: just Assert both sizes are\n> maxalign'ed during page init.\n> >\n> > I dived further and it appears that the only caller, who provides not\n> properly aligned page special is fill_seq_with_data() and corrected it.\n> >\n> > I am really convinced, that _callers_ should care about proper special\n> size. So now PageInit() just checks the right lengths of page special and\n> page header with assert, not enforcing size change silently. PFA my small\n> patch on this. 
I'd propose it to commit if in the HEAD only likewise the\n> commit 4c0239cb7a7775e3183cb575e62703d71bf3302d.\n> >\n> > What do you think?\n>\n> I still feel that for special size let callers call PageInit with\n> sizeof(special_structure) and PageInit do the alignment. Others may\n> have different opinion.\n>\n> On the patch itself, how can we say that other special sizes are max\n> aligned except sequence_magic structure?\n>\nAlike for page header, it is ensured by the current size of page special in\nall access methods now (except the size of sequence_magic, which I've\ncorrected in the call). If someone wants to break this in the future, there\nis an added assert checking in PageInit.\n\nI think we should not maxalign both SizeOfPageHeaderData and specialSize\nmanually, just check they have the right (already maxalign'ed) length to be\nsafe in the future.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 7 Apr 2021 19:23:03 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve treatment of page special and page header\n alignment during page init."
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n>> How can we say that in PageInit the SizeOfPageHeaderData is expected\n>> to be max aligned? Am I missing something? There are lots of other\n>> places where SizeOfPageHeaderData is used, not\n>> MAXALIGN(SizeOfPageHeaderData).\n\n> Its maxalign is ensured by its size of 24bytes (which is maxalign'ed). I\n> think if we change this to not-maxalign'ed value bad things can happen. So\n> I've added assert checking for this value. I think it is similar situation\n> for both page header and page special, I wonder why they've been treated\n> differently in PageInit.\n\nNo, that's wrong. What follows the page header is the line pointer\narray, which is only int-aligned. We need to maxalign the special\nspace because tuples are stored working backwards from that, and\nwe want maxalignment for tuples.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 11:38:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve treatment of page special and page header\n alignment during page init."
},
{
"msg_contents": "> No, that's wrong. What follows the page header is the line pointer\n> array, which is only int-aligned. We need to maxalign the special\n> space because tuples are stored working backwards from that, and\n> we want maxalignment for tuples.\n>\nOk, I realized. Thanks!\nThen I'd call off the proposal.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 7 Apr 2021 20:08:37 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve treatment of page special and page header\n alignment during page init."
}
] |
[
{
"msg_contents": "Hi postgres community,\nI am willing to participate in GSoC to speed up the build of the gist index\nin postgis, which is based on postgresql.\nAnd I need to know *everything* about the GiST API.\nTo do so I need to acquire the necessary theory and concepts to start this\njourney.\nI do not have a computer science background, I have little knowledge about\nmachines and I code in Python (for scientific computation, data science and\nml).\nSo I am asking what I should learn to complete in an efficient way this\ntask at hand: speeding up the build of gist index, if possible how much\ntime is needed to accomplish each task.\nNow I am learning the C language.\nBest regards.\n-- \nFATIHI Ayoub",
"msg_date": "Wed, 7 Apr 2021 13:11:21 +0100",
"msg_from": "FATIHI Ayoub <ayoubfatihi1999@gmail.com>",
"msg_from_op": true,
"msg_subject": "Need help!"
},
{
"msg_contents": "On Wed, Apr 7, 2021, 09:29 FATIHI Ayoub <ayoubfatihi1999@gmail.com> wrote:\n\n> Hi postgres community,\n> I am willing to participate in GSoC to speed up the build of the gist\n> index in postgis, which is based on postgresql.\n>\n\nYou should mention and link to where you cross-posted this to Reddit.",
"msg_date": "Wed, 7 Apr 2021 09:34:56 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Need help!"
},
{
"msg_contents": "Hi Ayoub,\n\nIl giorno mer 7 apr 2021 alle ore 17:29 FATIHI Ayoub <\nayoubfatihi1999@gmail.com> ha scritto:\n\n> Hi postgres community,\n> I am willing to participate in GSoC to speed up the build of the gist\n> index in postgis, which is based on postgresql.\n> And I need to know *everything* about the GiST API.\n> To do so I need to acquire the necessary theory and concepts to start this\n> journey.\n> I do not have a computer science background, I have little knowledge about\n> machines and I code in Python (for scientific computation, data science and\n> ml).\n> So I am asking what I should learn to complete in an efficient way this\n> task at hand: speeding up the build of gist index, if possible how much\n> time is needed to accomplish each task.\n>\n\nThe main thing you have to know IMO is the concept of \"extensibility of\nindexes\" in PostgreSQL. More specifically about GiST, you can have a look\nhere:\n\nhttps://www.postgresql.org/docs/devel/gist-extensibility.html\n\nHere there's also a note about the new added method of the API sortsupport,\nwhich is what you need for your task.\n\nGiuseppe.",
"msg_date": "Wed, 7 Apr 2021 18:41:08 +0100",
"msg_from": "Giuseppe Broccolo <g.broccolo.7@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Need help!"
}
] |
[
{
"msg_contents": "Currently the cost model ignores the initial partition prune and run time\npartition prune totally. This impacts includes: 1). The cost of Nest Loop\npath\nis highly overrated. 2). And the rows estimator can be very wrong as well\nsome\ntime. We can use the following cases to demonstrate.\n\nCREATE TABLE p (c_type INT, v INT) partition by list(c_type);\nSELECT 'create table p_'|| i || ' partition of p for values in ('|| i|| ');'\nfrom generate_series(1, 100) i; \\gexec\nSELECT 'insert into p select ' || i ||', v from generate_series(1, 1000000)\nv;'\nfrom generate_series(1, 100) i; \\gexec\nANALYZE P;\nCREATE INDEX on p(v);\n\nCase 1:\n\nPREPARE s AS\nSELECT * FROM generate_series(1, 10) i JOIN p ON i = p.v;\nEXPLAIN execute s;\n\npostgres=# EXPLAIN execute s;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Nested Loop (cost=0.43..8457.60 rows=1029 width=12)\n -> Function Scan on generate_series i (cost=0.00..0.10 rows=10 width=4)\n -> Append (cost=0.42..844.75 rows=100 width=8)\n -> Index Scan using p_1_v_idx on p_1 (cost=0.42..8.44 rows=1\nwidth=8)\n Index Cond: (v = i.i)\n -> Index Scan using p_2_v_idx on p_2 (cost=0.42..8.44 rows=1\nwidth=8)\n Index Cond: (v = i.i)\n -> Index Scan using p_3_v_idx on p_3 (cost=0.42..8.44 rows=1\nwidth=8)\n Index Cond: (v = i.i)\n\n ...\n\nWe can see the cost/rows of Append Path is highly overrated. 
(the rows\nshould be 1\nrather than 100, cost should be 8.44 rather than 844).\n\nCase 2:\nPREPARE s2 AS\nSELECT * FROM p a JOIN p b ON a.v = b.v and a.c_type = $1 and a.v < 10;\nEXPLAIN execute s2(3);\n\npostgres=# EXPLAIN execute s2(3);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Gather (cost=1000.85..5329.91 rows=926 width=16)\n Workers Planned: 1\n -> Nested Loop (cost=0.85..4237.31 rows=545 width=16)\n -> Parallel Index Scan using p_3_v_idx on p_3 a (cost=0.42..8.56\nrows=5 width=8)\n Index Cond: (v < 10)\n Filter: (c_type = 3)\n -> Append (cost=0.42..844.75 rows=100 width=8)\n -> Index Scan using p_1_v_idx on p_1 b_1 (cost=0.42..8.44\nrows=1 width=8)\n Index Cond: (v = a.v)\n -> Index Scan using p_2_v_idx on p_2 b_2 (cost=0.42..8.44\nrows=1 width=8)\n Index Cond: (v = a.v)\n ...\n\nset plan_cache_mode = force_generic_plan;\nEXPLAIN ANALYZE execute s2(3);\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Gather (cost=1000.85..312162.57 rows=93085 width=16)\n Workers Planned: 2\n -> Nested Loop (cost=0.85..301854.07 rows=38785 width=16)\n -> Parallel Append (cost=0.42..857.94 rows=400 width=8)\n Subplans Removed: 99\n -> Parallel Index Scan using p_3_v_idx on p_3 a_1\n (cost=0.42..8.56 rows=5 width=8)\n Index Cond: (v < 10)\n Filter: (c_type = $1)\n -> Append (cost=0.42..751.49 rows=100 width=8)\n ...\n\nWe can see the rows for Gather node changed from 926 to 93085, while the\nactual\nrows is 900 rows. The reason for case 2 is because we adjust the\nrel->tuples for\nplan time partition prune, but we did nothing for initial partition prune.\nI would like to\naim to fix both of the issues.\n\nThe scope I want to cover at the first stage are\n1. Only support limited operators like '=', 'in', 'partkey = $1 OR partkey\n=\n $2'; which means the operators like '>', '<', 'BETWEEN .. AND ' are not\n supported.\n2. 
Only supporting all the partition keys are used in prune quals. for\nexample,\n if we have partkey (p1, p2). but user just have p1 = $1 in the quals.\n3. All the other cases should be supported.\n\nThe reason I put some limits above is because 1). they are not common. 2).\nthere\nare no way to guess the reasonable ratio.\n\nThe design principle are:\n1). Adjust the AppendPath's cost and rows for both initial partition prune\nand\nrun time partition prune in cost_append. The ratio is just 1/nparts for\nall the\nsupported case, even for partkey in ($1, $2, $3).\n\n2). Adjust rel->tuples for initial partition prune only.\n3). Use the adjusted AppendPath's cost/rows for sub-partitioned case,\naround the\ncases accumulate_append_subpath.\n\nI have implemented this for 1-level partition and 1 partition key only at\n[1],\nand I have tested it on my real user case, looks the algorithm works great.\nI am\nplanning to implement the full version recently. Any suggestion for the\ndesign/scope part?\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWpO4KegS6tw8UUnWA4GWr-Di%3DWBmuQnnyjxFGA0MhEHyA%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Wed, 7 Apr 2021 22:04:11 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Cost model improvement for run-time partition prune"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI noticed that in some situations involving the use of REVOKE ON SCHEMA,\npg_dump\ncan produce a dump that cannot be restored. This prevents successful\npg_restore (and by corollary, pg_upgrade).\n\nAn example shell script to recreate this problem is attached. The error\noutput appears at the end like this:\n\n<snippet>\n+ pg_restore -d postgres /tmp/foo.dmp\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 2748; 0 0 ACL TABLE\nmytable owneruser\npg_restore: [archiver (db)] could not execute query: ERROR: permission\ndenied for schema private\n Command was: GRANT SELECT ON TABLE private.mytable TO privileged WITH\nGRANT OPTION;\nSET SESSION AUTHORIZATION privileged;\nGRANT SELECT ON TABLE private.mytable TO enduser WITH GRANT OPTION;\nRESET SESSION AUTHORIZATION;\nWARNING: errors ignored on restore: 1\n-bash-4.2$\n</snippet>\n\nNote that `privileged` user needs to grant permissions to `enduser`, but\ncan no longer do so because `privileged` no longer has access to the\n`private` schema (it was revoked).\n\nHow might we fix up pg_dump to handle these sorts of situations? It seems\nlike pg_dump might need extra logic to GRANT the schema permissions to the\n`privileged` user and then REVOKE them later on?\n\nThanks for looking,\n--Richard",
"msg_date": "Wed, 7 Apr 2021 10:13:30 -0700",
"msg_from": "Richard Yen <richyen3@gmail.com>",
"msg_from_op": true,
"msg_subject": "dump cannot be restored if schema permissions revoked"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 10:13:30AM -0700, Richard Yen wrote:\n> I noticed that in some situations involving the use of REVOKE ON SCHEMA,\n> pg_dump\n> can produce a dump that cannot be restored. This prevents successful\n> pg_restore (and by corollary, pg_upgrade).\n> \n> An example shell script to recreate this problem is attached. The error\n> output appears at the end like this:\n> \n> <snippet>\n> + pg_restore -d postgres /tmp/foo.dmp\n> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n> pg_restore: [archiver (db)] Error from TOC entry 2748; 0 0 ACL TABLE\n> mytable owneruser\n> pg_restore: [archiver (db)] could not execute query: ERROR: permission\n> denied for schema private\n> Command was: GRANT SELECT ON TABLE private.mytable TO privileged WITH\n> GRANT OPTION;\n> SET SESSION AUTHORIZATION privileged;\n> GRANT SELECT ON TABLE private.mytable TO enduser WITH GRANT OPTION;\n> RESET SESSION AUTHORIZATION;\n> WARNING: errors ignored on restore: 1\n> -bash-4.2$\n> </snippet>\n> \n> Note that `privileged` user needs to grant permissions to `enduser`, but\n> can no longer do so because `privileged` no longer has access to the\n> `private` schema (it was revoked).\n> \n> How might we fix up pg_dump to handle these sorts of situations?\n\nI would approach this by allowing GRANT to take a grantor role name. Then,\nwe'd remove the SET SESSION AUTHORIZATION, and the user running the restore\nwould set the grantor. \"GRANT SELECT ON TABLE foo TO bob GRANTED BY alice;\"\nlooks reasonable to me, though one would need to check if SQL requires that to\nhave some different behavior.\n\n> It seems\n> like pg_dump might need extra logic to GRANT the schema permissions to the\n> `privileged` user and then REVOKE them later on?\n\nThat could work, but I would avoid it for a couple of reasons. 
In some\n\"pg_restore --use-list\" partial restores, the schema privilege may already\nexist, and this design may surprise the DBA by removing the existing\nprivilege. When running a restore as a non-superuser, the additional\nGRANT/REVOKE could be a source of permission denied failures.\n\n\n",
"msg_date": "Fri, 14 May 2021 01:50:30 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: dump cannot be restored if schema permissions revoked"
}
] |
[
{
"msg_contents": "Hi,\n\nI was looking at InvalidateObsoleteReplicationSlots() while reviewing /\npolishing the logical decoding on standby patch. Which lead me to notice that\nI think there's a race in InvalidateObsoleteReplicationSlots() (because\nResolveRecoveryConflictWithLogicalSlots has a closely related one).\n\nvoid\nInvalidateObsoleteReplicationSlots(XLogSegNo oldestSegno)\n{\n...\n LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n for (int i = 0; i < max_replication_slots; i++)\n {\n...\n if (XLogRecPtrIsInvalid(restart_lsn) || restart_lsn >= oldestLSN)\n continue;\n LWLockRelease(ReplicationSlotControlLock);\n...\n for (;;)\n {\n...\n wspid = ReplicationSlotAcquireInternal(s, NULL, SAB_Inquire);\n...\n SpinLockAcquire(&s->mutex);\n s->data.invalidated_at = s->data.restart_lsn;\n s->data.restart_lsn = InvalidXLogRecPtr;\n SpinLockRelease(&s->mutex);\n...\n\n\nAs far as I can tell there's no guarantee that the slot wasn't concurrently\ndropped and another replication slot created at the same offset in\nReplicationSlotCtl->replication_slots. Which we then promptly would\ninvalidate, regardless of the slot not actually needing to be invalidated.\n\nNote that this is different from the race mentioned in a comment:\n /*\n * Signal to terminate the process that owns the slot.\n *\n * There is the race condition where other process may own\n * the slot after the process using it was terminated and before\n * this process owns it. 
To handle this case, we signal again\n * if the PID of the owning process is changed than the last.\n *\n * XXX This logic assumes that the same PID is not reused\n * very quickly.\n */\n\nIt's one thing to terminate a connection erroneously - permanently breaking a\nreplica due to invalidating the wrong slot or such imo is different.\n\n\nInterestingly this problem seems to have been present both in\n\ncommit c6550776394e25c1620bc8258427c8f1d448080d\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2020-04-07 18:35:00 -0400\n\n Allow users to limit storage reserved by replication slots\n\ncommit f9e9704f09daf882f5a1cf1fbe3f5a3150ae2bb9\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: 2020-06-19 17:15:52 +0900\n\n Fix issues in invalidation of obsolete replication slots.\n\n\nI think this can be solved in two different ways:\n\n1) Hold ReplicationSlotAllocationLock with LW_SHARED across most of\n InvalidateObsoleteReplicationSlots(). That way nobody could re-create a new\n slot in the to-be-obsoleted-slot's place.\n\n2) Atomically check whether the slot needs to be invalidated and try to\n acquire if needed. Don't release ReplicationSlotControlLock between those\n two steps. Signal the owner to release the slot iff we couldn't acquire the\n slot. In the latter case wait and then recheck if the slot still needs to\n be dropped.\n\nTo me 2) seems better, because we then can also be sure that the slot still\nneeds to be obsoleted, rather than potentially doing so unnecessarily.\n\n\nIt looks to me like several of the problems here stem from trying to reuse\ncode from ReplicationSlotAcquireInternal() (which before this was just named\nReplicationSlotAcquire()). 
I don't think that makes sense, because cases like\nthis want to check if a condition is true, and acquire it only if so.\n\nIOW, I think this basically needs to look like ReplicationSlotsDropDBSlots(),\nexcept that a different condition is checked, and the if (active_pid) case\nneeds to prepare a condition variable, signal the owner and then wait on the\ncondition variable, to restart after.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Apr 2021 17:10:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-07 17:10:37 -0700, Andres Freund wrote:\n> I think this can be solved in two different ways:\n>\n> 1) Hold ReplicationSlotAllocationLock with LW_SHARED across most of\n> InvalidateObsoleteReplicationSlots(). That way nobody could re-create a new\n> slot in the to-be-obsoleted-slot's place.\n>\n> 2) Atomically check whether the slot needs to be invalidated and try to\n> acquire if needed. Don't release ReplicationSlotControlLock between those\n> two steps. Signal the owner to release the slot iff we couldn't acquire the\n> slot. In the latter case wait and then recheck if the slot still needs to\n> be dropped.\n>\n> To me 2) seems better, because we then can also be sure that the slot still\n> needs to be obsoleted, rather than potentially doing so unnecessarily.\n>\n>\n> It looks to me like several of the problems here stem from trying to reuse\n> code from ReplicationSlotAcquireInternal() (which before this was just named\n> ReplicationSlotAcquire()). I don't think that makes sense, because cases like\n> this want to check if a condition is true, and acquire it only if so.\n>\n> IOW, I think this basically needs to look like ReplicationSlotsDropDBSlots(),\n> except that a different condition is checked, and the if (active_pid) case\n> needs to prepare a condition variable, signal the owner and then wait on the\n> condition variable, to restart after.\n\nI'm also confused by the use of ConditionVariableTimedSleep(timeout =\n10). Why do we need a timed sleep here in the first place? And why with\nsuch a short sleep?\n\nI also noticed that the code is careful to use CHECK_FOR_INTERRUPTS(); -\nbut is aware it's running in checkpointer. I don't think CFI does much\nthere? If we are worried about needing to check for interrupts, more\nwork is needed.\n\n\nSketch for a fix attached. 
I did leave the odd\nConditionVariableTimedSleep(10ms) in, because I wasn't sure why it's\nthere...\n\nAfter this I don't see a reason to have SAB_Inquire - as far as I can\ntell it's practically impossible to use without race conditions? Except\nfor raising an error - which is \"builtin\"...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 7 Apr 2021 19:09:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 7:39 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-04-07 17:10:37 -0700, Andres Freund wrote:\n> > I think this can be solved in two different ways:\n> >\n> > 1) Hold ReplicationSlotAllocationLock with LW_SHARED across most of\n> > InvalidateObsoleteReplicationSlots(). That way nobody could re-create a new\n> > slot in the to-be-obsoleted-slot's place.\n> >\n> > 2) Atomically check whether the slot needs to be invalidated and try to\n> > acquire if needed. Don't release ReplicationSlotControlLock between those\n> > two steps. Signal the owner to release the slot iff we couldn't acquire the\n> > slot. In the latter case wait and then recheck if the slot still needs to\n> > be dropped.\n> >\n> > To me 2) seems better, because we then can also be sure that the slot still\n> > needs to be obsoleted, rather than potentially doing so unnecessarily.\n> >\n\n+1.\n\n> >\n> > It looks to me like several of the problems here stem from trying to reuse\n> > code from ReplicationSlotAcquireInternal() (which before this was just named\n> > ReplicationSlotAcquire()). I don't think that makes sense, because cases like\n> > this want to check if a condition is true, and acquire it only if so.\n> >\n> > IOW, I think this basically needs to look like ReplicationSlotsDropDBSlots(),\n> > except that a different condition is checked, and the if (active_pid) case\n> > needs to prepare a condition variable, signal the owner and then wait on the\n> > condition variable, to restart after.\n>\n> I'm also confused by the use of ConditionVariableTimedSleep(timeout =\n> 10). Why do we need a timed sleep here in the first place? And why with\n> such a short sleep?\n>\n> I also noticed that the code is careful to use CHECK_FOR_INTERRUPTS(); -\n> but is aware it's running in checkpointer. I don't think CFI does much\n> there? If we are worried about needing to check for interrupts, more\n> work is needed.\n>\n>\n> Sketch for a fix attached. 
I did leave the odd\n> ConditionVariableTimedSleep(10ms) in, because I wasn't sure why it's\n> there...\n>\n\nI haven't tested the patch but I couldn't spot any problems while\nreading it. A minor point, don't we need to use\nConditionVariableCancelSleep() at some point after doing\nConditionVariableTimedSleep?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:03:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Apr-07, Andres Freund wrote:\n\n> I'm also confused by the use of ConditionVariableTimedSleep(timeout =\n> 10). Why do we need a timed sleep here in the first place? And why with\n> such a short sleep?\n\nI was scared of the possibility that a process would not set the CV for\nwhatever reason, causing checkpointing to become stuck. Maybe that's\nmisguided thinking if CVs are reliable enough.\n\n> I also noticed that the code is careful to use CHECK_FOR_INTERRUPTS(); -\n> but is aware it's running in checkpointer. I don't think CFI does much\n> there? If we are worried about needing to check for interrupts, more\n> work is needed.\n\nHmm .. yeah, doing CFI seems pretty useless. I think that should just\nbe removed. If checkpointer gets USR2 (request for shutdown) it's not\ngoing to affect the behavior of CFI anyway.\n\nI attach a couple of changes to your 0001. It's all cosmetic; what\nlooks not so cosmetic is the change of \"continue\" to \"break\" in helper\nroutine; if !s->in_use, we'd loop infinitely. The other routine\nalready checks that before calling the helper; since you hold\nReplicationSlotControlLock at that point, it should not be possible to\ndrop it in between. Anyway, it's a trivial change to make, so it should\nbe correct.\n\nI also added a \"continue\" at the bottom of one block; currently that\ndoesn't change any behavior, but if we add code at the other block, it\nmight not be what's intended.\n\n> After this I don't see a reason to have SAB_Inquire - as far as I can\n> tell it's practically impossible to use without race conditions? Except\n> for raising an error - which is \"builtin\"...\n\nHmm, interesting ... If not needed, yeah let's get rid of that.\n\n\nAre you getting this set pushed, or would you like me to handle it?\n(There seems to be some minor conflict in 13)\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)",
"msg_date": "Thu, 29 Apr 2021 13:28:20 -0400",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-08 17:03:41 +0530, Amit Kapila wrote:\n> I haven't tested the patch but I couldn't spot any problems while\n> reading it. A minor point, don't we need to use\n> ConditionVariableCancelSleep() at some point after doing\n> ConditionVariableTimedSleep?\n\nIt's not really necessary - unless the CV could get deallocated as part\nof dynamic shared memory or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Apr 2021 08:57:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-29 13:28:20 -0400, Álvaro Herrera wrote:\n> On 2021-Apr-07, Andres Freund wrote:\n> \n> > I'm also confused by the use of ConditionVariableTimedSleep(timeout =\n> > 10). Why do we need a timed sleep here in the first place? And why with\n> > such a short sleep?\n> \n> I was scared of the possibility that a process would not set the CV for\n> whatever reason, causing checkpointing to become stuck. Maybe that's\n> misguided thinking if CVs are reliable enough.\n\nThey better be, or we have bigger problems. And if it's an escape hatch\nwe surely ought not to use 10ms as the timeout. That's an appropriate\ntime for something *not* using condition variables...\n\n\n> I attach a couple of changes to your 0001. It's all cosmetic; what\n> looks not so cosmetic is the change of \"continue\" to \"break\" in helper\n> routine; if !s->in_use, we'd loop infinitely. The other routine\n> already checks that before calling the helper; since you hold\n> ReplicationSlotControlLock at that point, it should not be possible to\n> drop it in between. Anyway, it's a trivial change to make, so it should\n> be correct.\n\n> I also added a \"continue\" at the bottom of one block; currently that\n> doesn't change any behavior, but if we add code at the other block, it\n> might not be what's intended.\n\nSeems sane.\n\n\n> Are you getting this set pushed, or would you like me to handle it?\n> (There seems to be some minor conflict in 13)\n\nI'd be quite happy for you to handle it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Apr 2021 08:59:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Here's a version that I feel is committable (0001). There was an issue\nwhen returning from the inner loop, if in a previous iteration we had\nreleased the lock. In that case we need to return with the lock not\nheld, so that the caller can acquire it again, but weren't. This seems\npretty hard to hit in practice (I suppose somebody needs to destroy the\nslot just as checkpointer killed the walsender, but before checkpointer\nmarks it as its own) ... but if it did happen, I think checkpointer\nwould block with no recourse. Also added some comments and slightly\nrestructured the code.\n\nThere are plenty of conflicts in pg13, but it's all easy to handle.\n\nI wrote a test (0002) to cover the case of signalling a walsender, which\nis currently not covered (we only deal with the case of a standby that's\nnot running). There are some sharp edges in this code -- I had to make\nit use background_psql() to send a CHECKPOINT, which hangs, because I\npreviously send a SIGSTOP to the walreceiver. Maybe there's a better\nway to achieve a walreceiver that remains connected but doesn't consume\ninput from the primary, but I don't know what it is. Anyway, the code\nbecomes covered with this. I would like to at least see it in master,\nto gather some reactions from buildfarm.\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.",
"msg_date": "Thu, 10 Jun 2021 18:02:58 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Jun-10, Álvaro Herrera wrote:\n\n> I wrote a test (0002) to cover the case of signalling a walsender, which\n> is currently not covered (we only deal with the case of a standby that's\n> not running). There are some sharp edges in this code -- I had to make\n> it use background_psql() to send a CHECKPOINT, which hangs, because I\n> previously send a SIGSTOP to the walreceiver. Maybe there's a better\n> way to achieve a walreceiver that remains connected but doesn't consume\n> input from the primary, but I don't know what it is. Anyway, the code\n> becomes covered with this. I would like to at least see it in master,\n> to gather some reactions from buildfarm.\n\nSmall fixup to the test one, so that skipping it on Windows works\ncorrectly.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)",
"msg_date": "Thu, 10 Jun 2021 20:58:17 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Jun-10, Álvaro Herrera wrote:\n\n> Here's a version that I feel is committable (0001). There was an issue\n> when returning from the inner loop, if in a previous iteration we had\n> released the lock. In that case we need to return with the lock not\n> held, so that the caller can acquire it again, but weren't. This seems\n> pretty hard to hit in practice (I suppose somebody needs to destroy the\n> slot just as checkpointer killed the walsender, but before checkpointer\n> marks it as its own) ... but if it did happen, I think checkpointer\n> would block with no recourse. Also added some comments and slightly\n> restructured the code.\n> \n> There are plenty of conflicts in pg13, but it's all easy to handle.\n\nPushed, with additional minor changes.\n\n> I wrote a test (0002) to cover the case of signalling a walsender, which\n> is currently not covered (we only deal with the case of a standby that's\n> not running). There are some sharp edges in this code -- I had to make\n> it use background_psql() to send a CHECKPOINT, which hangs, because I\n> previously send a SIGSTOP to the walreceiver. Maybe there's a better\n> way to achieve a walreceiver that remains connected but doesn't consume\n> input from the primary, but I don't know what it is. Anyway, the code\n> becomes covered with this. I would like to at least see it in master,\n> to gather some reactions from buildfarm.\n\nI tried hard to make this stable, but it just isn't (it works fine one\nthousand runs, then I grab some coffee and run it once more and that one\nfails. Why? that's not clear to me). Attached is the last one I have,\nin case somebody wants to make it better. Maybe there's some completely\ndifferent approach that works better, but I'm out of ideas for now.\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)",
"msg_date": "Fri, 11 Jun 2021 12:27:57 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Apr-07, Andres Freund wrote:\n\n> After this I don't see a reason to have SAB_Inquire - as far as I can\n> tell it's practically impossible to use without race conditions? Except\n> for raising an error - which is \"builtin\"...\n\nPushed 0002.\n\nThanks!\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n",
"msg_date": "Fri, 11 Jun 2021 15:52:21 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-06-11 15:52:21 -0400, Álvaro Herrera wrote:\n> On 2021-Apr-07, Andres Freund wrote:\n> \n> > After this I don't see a reason to have SAB_Inquire - as far as I can\n> > tell it's practically impossible to use without race conditions? Except\n> > for raising an error - which is \"builtin\"...\n> \n> Pushed 0002.\n> \n> Thanks!\n\nThank you for your work on this!\n\n\n",
"msg_date": "Fri, 11 Jun 2021 13:38:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Jun-11, Álvaro Herrera wrote:\n\n> I tried hard to make this stable, but it just isn't (it works fine one\n> thousand runs, then I grab some coffee and run it once more and that one\n> fails. Why? that's not clear to me). Attached is the last one I have,\n> in case somebody wants to make it better. Maybe there's some completely\n> different approach that works better, but I'm out of ideas for now.\n\nIt occurred to me that this could be made better by sigstopping both\nwalreceiver and walsender, then letting only the latter run; AFAICS this\nmakes the test stable. I'll register this on the upcoming commitfest to\nlet cfbot run it, and if it looks good there I'll get it pushed to\nmaster. If there's any problem I'll just remove it before beta2 is\nstamped.\n\n-- \nÁlvaro Herrera               Valdivia, Chile",
"msg_date": "Fri, 18 Jun 2021 16:59:00 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Apologies, I inadvertently sent the version before I added a maximum\nnumber of iterations in the final loop.\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)",
"msg_date": "Fri, 18 Jun 2021 17:05:45 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org> writes:\n> It occurred to me that this could be made better by sigstopping both\n> walreceiver and walsender, then letting only the latter run; AFAICS this\n> makes the test stable. I'll register this on the upcoming commitfest to\n> let cfbot run it, and if it looks good there I'll get it pushed to\n> master. If there's any problem I'll just remove it before beta2 is\n> stamped.\n\nHmm ... desmoxytes has failed this test once, out of four runs since\nit went in:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2021-06-19%2003%3A06%3A04\n\nNone of the other animals that have reported in so far are unhappy.\nStill, maybe that's not a track record we want to have for beta2?\n\nI've just launched a run on gaur, which given its dinosaur status\nmight be the most likely animal to have an actual portability problem\nwith this test technique. If you want to wait a few hours to see what\nit says, that'd be fine with me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 19 Jun 2021 15:16:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Hah, desmoxytes failed once:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2021-06-19%2003%3A06%3A04\nI'll revert it and investigate later. There have been no other\nfailures.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)\n\n\n",
"msg_date": "Sun, 20 Jun 2021 12:01:03 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "I wrote:\n> Hmm ... desmoxytes has failed this test once, out of four runs since\n> it went in:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2021-06-19%2003%3A06%3A04\n\nI studied this failure a bit more, and I think the test itself has\na race condition. It's doing\n\n# freeze walsender and walreceiver. Slot will still be active, but walreceiver\n# won't get anything anymore.\nkill 'STOP', $senderpid, $receiverpid;\n$logstart = get_log_size($node_primary3);\nadvance_wal($node_primary3, 4);\nok(find_in_log($node_primary3, \"to release replication slot\", $logstart),\n\t\"walreceiver termination logged\");\n\nThe string it's looking for does show up in node_primary3's log, but\nnot for another second or so; we can see instances of the following\npoll_query_until query before that happens. So the problem is that\nthere is no interlock to ensure that the walreceiver terminates\nbefore this find_in_log check looks for it.\n\nYou should be able to fix this by adding a retry loop around the\nfind_in_log check (which would likely mean that you don't need\nto do multiple advance_wal iterations here).\n\nHowever, I agree with reverting the test for now and then trying\nagain after beta2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Jun 2021 13:19:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "I wrote:\n> I studied this failure a bit more, and I think the test itself has\n> a race condition. It's doing\n>\n> # freeze walsender and walreceiver. Slot will still be active, but walreceiver\n> # won't get anything anymore.\n> kill 'STOP', $senderpid, $receiverpid;\n> $logstart = get_log_size($node_primary3);\n> advance_wal($node_primary3, 4);\n> ok(find_in_log($node_primary3, \"to release replication slot\", $logstart),\n> \t\"walreceiver termination logged\");\n\nActually ... isn't there a second race, in the opposite direction?\nIIUC, the point of this is that once we force some WAL to be sent\nto the frozen sender/receiver, they'll be killed for failure to\nrespond. But the advance_wal call is not the only possible cause\nof that; a background autovacuum for example could emit some WAL.\nSo I fear it's possible for the 'to release replication slot'\nmessage to come out before we capture $logstart. I think you\nneed to capture that value before the kill not after.\n\nI also suggest that it wouldn't be a bad idea to make the\nfind_in_log check more specific, by including the expected PID\nand perhaps the expected slot name in the string. The full\nmessage in primary3's log looks like\n\n2021-06-19 05:24:36.221 CEST [60cd636f.362648:12] LOG: terminating process 3548959 to release replication slot \"rep3\"\n\nand I don't understand why we wouldn't match on the whole\nmessage text. (I think doing so will also reveal that what\nwe are looking for here is the walsender pid, not the walreceiver\npid, and thus that the description in the ok() call is backwards.\nOr maybe we do want to check the walreceiver side, in which case\nwe are searching the wrong postmaster's log?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Jun 2021 14:37:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Jun-20, Tom Lane wrote:\n\n> Actually ... isn't there a second race, in the opposite direction?\n> IIUC, the point of this is that once we force some WAL to be sent\n> to the frozen sender/receiver, they'll be killed for failure to\n> respond. But the advance_wal call is not the only possible cause\n> of that; a background autovacuum for example could emit some WAL.\n> So I fear it's possible for the 'to release replication slot'\n> message to come out before we capture $logstart. I think you\n> need to capture that value before the kill not after.\n\nI accounted for all those things and pushed again.\n\n-- \nÁlvaro Herrera               Valdivia, Chile\n\"I can see support will not be a problem. 10 out of 10.\" (Simon Wittber)\n      (http://archives.postgresql.org/pgsql-general/2004-12/msg00159.php)\n\n\n",
"msg_date": "Wed, 23 Jun 2021 10:02:12 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On Wed, Jun 23, 2021 at 7:32 PM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-20, Tom Lane wrote:\n>\n> > Actually ... isn't there a second race, in the opposite direction?\n> > IIUC, the point of this is that once we force some WAL to be sent\n> > to the frozen sender/receiver, they'll be killed for failure to\n> > respond. But the advance_wal call is not the only possible cause\n> > of that; a background autovacuum for example could emit some WAL.\n> > So I fear it's possible for the 'to release replication slot'\n> > message to come out before we capture $logstart. I think you\n> > need to capture that value before the kill not after.\n>\n> I accounted for all those things and pushed again.\n\nI saw that this patch is pushed. If there is no pending work left for\nthis, can we change the commitfest entry to Committed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 5 Jul 2021 22:17:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On 2021-Jul-05, vignesh C wrote:\n\n> On Wed, Jun 23, 2021 at 7:32 PM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jun-20, Tom Lane wrote:\n> >\n> > > Actually ... isn't there a second race, in the opposite direction?\n> > > IIUC, the point of this is that once we force some WAL to be sent\n> > > to the frozen sender/receiver, they'll be killed for failure to\n> > > respond. But the advance_wal call is not the only possible cause\n> > > of that; a background autovacuum for example could emit some WAL.\n> > > So I fear it's possible for the 'to release replication slot'\n> > > message to come out before we capture $logstart. I think you\n> > > need to capture that value before the kill not after.\n> >\n> > I accounted for all those things and pushed again.\n> \n> I saw that this patch is pushed. If there is no pending work left for\n> this, can we change the commitfest entry to Committed.\n\nThere is none that I'm aware of, please mark it committed. Thanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n",
"msg_date": "Mon, 5 Jul 2021 13:00:38 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 10:30 PM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-05, vignesh C wrote:\n>\n> > On Wed, Jun 23, 2021 at 7:32 PM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Jun-20, Tom Lane wrote:\n> > >\n> > > > Actually ... isn't there a second race, in the opposite direction?\n> > > > IIUC, the point of this is that once we force some WAL to be sent\n> > > > to the frozen sender/receiver, they'll be killed for failure to\n> > > > respond. But the advance_wal call is not the only possible cause\n> > > > of that; a background autovacuum for example could emit some WAL.\n> > > > So I fear it's possible for the 'to release replication slot'\n> > > > message to come out before we capture $logstart. I think you\n> > > > need to capture that value before the kill not after.\n> > >\n> > > I accounted for all those things and pushed again.\n> >\n> > I saw that this patch is pushed. If there is no pending work left for\n> > this, can we change the commitfest entry to Committed.\n>\n> There is none that I'm aware of, please mark it committed. Thanks\n\nThanks for confirming, I have marked it as committed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 5 Jul 2021 22:40:24 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Hi,\n\nOn 2021-06-11 12:27:57 -0400, Álvaro Herrera wrote:\n> On 2021-Jun-10, Álvaro Herrera wrote:\n> \n> > Here's a version that I feel is committable (0001). There was an issue\n> > when returning from the inner loop, if in a previous iteration we had\n> > released the lock. In that case we need to return with the lock not\n> > held, so that the caller can acquire it again, but weren't. This seems\n> > pretty hard to hit in practice (I suppose somebody needs to destroy the\n> > slot just as checkpointer killed the walsender, but before checkpointer\n> > marks it as its own) ... but if it did happen, I think checkpointer\n> > would block with no recourse. Also added some comments and slightly\n> > restructured the code.\n> > \n> > There are plenty of conflicts in pg13, but it's all easy to handle.\n> \n> Pushed, with additional minor changes.\n\nI stared at this code, due to [1], and I think I found a bug. I think it's not\nthe cause of the failures in that thread, but we probably should still do\nsomething about it.\n\nI think the minor changes might unfortunately have introduced a race? Before\nthe patch just used ConditionVariableSleep(), but now it also has a\nConditionVariablePrepareToSleep(). Without re-checking the sleep condition\nuntil\n /* Wait until the slot is released. */\n ConditionVariableSleep(&s->active_cv,\n WAIT_EVENT_REPLICATION_SLOT_DROP);\n\nwhich directly violates what ConditionVariablePrepareToSleep() documents:\n\n * This can optionally be called before entering a test/sleep loop.\n * Doing so is more efficient if we'll need to sleep at least once.\n * However, if the first test of the exit condition is likely to succeed,\n * it's more efficient to omit the ConditionVariablePrepareToSleep call.\n * See comments in ConditionVariableSleep for more detail.\n *\n * Caution: \"before entering the loop\" means you *must* test the exit\n * condition between calling ConditionVariablePrepareToSleep and calling\n * ConditionVariableSleep. If that is inconvenient, omit calling\n * ConditionVariablePrepareToSleep.\n\n\nAfaics this means we can potentially sleep forever if the prior owner of the\nslot releases it before the ConditionVariablePrepareToSleep().\n\nThere's a comment that's mentioning this danger:\n\n /*\n * Prepare the sleep on the slot's condition variable before\n * releasing the lock, to close a possible race condition if the\n * slot is released before the sleep below.\n */\n\t\t\tConditionVariablePrepareToSleep(&s->active_cv);\n\n\t\t\tLWLockRelease(ReplicationSlotControlLock);\n\nbut afaics that is bogus, because releasing a slot doesn't take\nReplicationSlotControlLock. That just protects against the slot being dropped,\nnot against it being released.\n\nWe can ConditionVariablePrepareToSleep() here, but we'd have to it earlier,\nbefore the checks at the start of the while loop.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20220218231415.c4plkp4i3reqcwip%40alap3.anarazel.de\n\n\n",
"msg_date": "Tue, 22 Feb 2022 17:48:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-22 17:48:55 -0800, Andres Freund wrote:\n> I think the minor changes might unfortunately have introduced a race? Before\n> the patch just used ConditionVariableSleep(), but now it also has a\n> ConditionVariablePrepareToSleep(). Without re-checking the sleep condition\n> until\n> /* Wait until the slot is released. */\n> ConditionVariableSleep(&s->active_cv,\n> WAIT_EVENT_REPLICATION_SLOT_DROP);\n> \n> which directly violates what ConditionVariablePrepareToSleep() documents:\n> \n> * This can optionally be called before entering a test/sleep loop.\n> * Doing so is more efficient if we'll need to sleep at least once.\n> * However, if the first test of the exit condition is likely to succeed,\n> * it's more efficient to omit the ConditionVariablePrepareToSleep call.\n> * See comments in ConditionVariableSleep for more detail.\n> *\n> * Caution: \"before entering the loop\" means you *must* test the exit\n> * condition between calling ConditionVariablePrepareToSleep and calling\n> * ConditionVariableSleep. If that is inconvenient, omit calling\n> * ConditionVariablePrepareToSleep.\n> \n> \n> Afaics this means we can potentially sleep forever if the prior owner of the\n> slot releases it before the ConditionVariablePrepareToSleep().\n> \n> There's a comment that's mentioning this danger:\n> \n> /*\n> * Prepare the sleep on the slot's condition variable before\n> * releasing the lock, to close a possible race condition if the\n> * slot is released before the sleep below.\n> */\n> \t\t\tConditionVariablePrepareToSleep(&s->active_cv);\n> \n> \t\t\tLWLockRelease(ReplicationSlotControlLock);\n> \n> but afaics that is bogus, because releasing a slot doesn't take\n> ReplicationSlotControlLock. That just protects against the slot being dropped,\n> not against it being released.\n> \n> We can ConditionVariablePrepareToSleep() here, but we'd have to it earlier,\n> before the checks at the start of the while loop.\n\nNot at the start of the while loop, outside of the while loop. Doing it in the\nloop body doesn't make sense, even if it's at the top. Each\nConditionVariablePrepareToSleep() will unregister itself:\n\n /*\n * If some other sleep is already prepared, cancel it; this is necessary\n * because we have just one static variable tracking the prepared sleep,\n * and also only one cvWaitLink in our PGPROC. It's okay to do this\n * because whenever control does return to the other test-and-sleep loop,\n * its ConditionVariableSleep call will just re-establish that sleep as\n * the prepared one.\n */\n if (cv_sleep_target != NULL)\n ConditionVariableCancelSleep();\n\nThe intended use is documented in this comment:\n\n * This should be called in a predicate loop that tests for a specific exit\n * condition and otherwise sleeps, like so:\n *\n *\t ConditionVariablePrepareToSleep(cv); // optional\n *\t while (condition for which we are waiting is not true)\n *\t\t ConditionVariableSleep(cv, wait_event_info);\n *\t ConditionVariableCancelSleep();\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Feb 2022 17:56:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Race condition in InvalidateObsoleteReplicationSlots()"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a typo in jsonfuncs.c, probably.\n s/an an/an/\nPlease find attached patch.\n\nThanks,\nTatsuro Yamada",
"msg_date": "Thu, 08 Apr 2021 10:06:56 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Typo in jsonfuncs.c"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 10:06:56AM +0900, Tatsuro Yamada wrote:\n> Hi,\n> \n> I found a typo in jsonfuncs.c, probably.\n> s/an an/an/\n> Please find attached patch.\n\nFor the archives' sake, this has been pushed as of 8ffb003591.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:33:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in jsonfuncs.c"
},
{
"msg_contents": "Hi Julien and Amit Kapila,\n\nOn 2021/04/08 17:33, Julien Rouhaud wrote:\n> On Thu, Apr 08, 2021 at 10:06:56AM +0900, Tatsuro Yamada wrote:\n>> Hi,\n>>\n>> I found a typo in jsonfuncs.c, probably.\n>> s/an an/an/\n>> Please find attached patch.\n> \n> For the archives' sake, this has been pushed as of 8ffb003591.\n\n\nJulien, thanks for the info! :-D\nAlso, thanks for taking your time to push this, Amit.\n \nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Fri, 09 Apr 2021 08:18:14 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Typo in jsonfuncs.c"
}
] |
[
{
"msg_contents": "autovacuum: handle analyze for partitioned tables\n\nPreviously, autovacuum would completely ignore partitioned tables, which\nis not good regarding analyze -- failing to analyze those tables means\npoor plans may be chosen. Make autovacuum aware of those tables by\npropagating \"changes since analyze\" counts from the leaf partitions up\nthe partitioning hierarchy.\n\nThis also introduces necessary reloptions support for partitioned tables\n(autovacuum_enabled, autovacuum_analyze_scale_factor,\nautovacuum_analyze_threshold). It's unclear how best to document this\naspect.\n\nAuthor: Yuzuko Hosoya <yuzukohosoya@gmail.com>\nReviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nReviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>\nReviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\nDiscussion: https://postgr.es/m/CAKkQ508_PwVgwJyBY=0Lmkz90j8CmWNPUxgHvCUwGhMrouz6UA@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0827e8af70f4653ba17ed773f123a60eadd9f9c9\n\nModified Files\n--------------\nsrc/backend/access/common/reloptions.c | 15 +++--\nsrc/backend/catalog/system_views.sql | 4 +-\nsrc/backend/commands/analyze.c | 40 ++++++++----\nsrc/backend/postmaster/autovacuum.c | 105 +++++++++++++++++++++++++++++---\nsrc/backend/postmaster/pgstat.c | 108 ++++++++++++++++++++++++++++++---\nsrc/include/pgstat.h | 25 +++++++-\nsrc/test/regress/expected/rules.out | 4 +-\n7 files changed, 257 insertions(+), 44 deletions(-)",
"msg_date": "Thu, 08 Apr 2021 05:20:39 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> autovacuum: handle analyze for partitioned tables\n\nLooks like this has issues under EXEC_BACKEND:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2021-04-08%2005%3A50%3A08\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 02:16:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-08, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > autovacuum: handle analyze for partitioned tables\n> \n> Looks like this has issues under EXEC_BACKEND:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2021-04-08%2005%3A50%3A08\n\nHmm, I couldn't reproduce this under EXEC_BACKEND or otherwise, but I\nthink this is unrelated to that, but rather a race condition.\n\nThe backtrace saved by buildfarm is:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 relation_needs_vacanalyze (relid=relid@entry=43057, relopts=relopts@entry=0x0, classForm=classForm@entry=0x7e000501eef0, tabentry=0x5611ec71b030, effective_multixact_freeze_max_age=effective_multixact_freeze_max_age@entry=400000000, dovacuum=dovacuum@entry=0x7ffd78cc4ee0, doanalyze=0x7ffd78cc4ee1, wraparound=0x7ffd78cc4ee2) at /mnt/resource/andres/bf/culicidae/HEAD/pgsql.build/../pgsql/src/backend/postmaster/autovacuum.c:3237\n3237\t\t\t\t\tchildclass = (Form_pg_class) GETSTRUCT(childtuple);\n#0 relation_needs_vacanalyze (relid=relid@entry=43057, relopts=relopts@entry=0x0, classForm=classForm@entry=0x7e000501eef0, tabentry=0x5611ec71b030, effective_multixact_freeze_max_age=effective_multixact_freeze_max_age@entry=400000000, dovacuum=dovacuum@entry=0x7ffd78cc4ee0, doanalyze=0x7ffd78cc4ee1, wraparound=0x7ffd78cc4ee2) at /mnt/resource/andres/bf/culicidae/HEAD/pgsql.build/../pgsql/src/backend/postmaster/autovacuum.c:3237\n#1 0x00005611eb09fc91 in do_autovacuum () at /mnt/resource/andres/bf/culicidae/HEAD/pgsql.build/../pgsql/src/backend/postmaster/autovacuum.c:2168\n#2 0x00005611eb0a0f8b in AutoVacWorkerMain (argc=argc@entry=1, argv=argv@entry=0x5611ec61f1e0) at /mnt/resource/andres/bf/culicidae/HEAD/pgsql.build/../pgsql/src/backend/postmaster/autovacuum.c:1715\n\nthe code in question is:\n\n\t\t\tchildren = find_all_inheritors(relid, AccessShareLock, NULL);\n\n\t\t\tforeach(lc, 
children)\n\t\t\t{\n\t\t\t\tOid\t\t\tchildOID = lfirst_oid(lc);\n\t\t\t\tHeapTuple\tchildtuple;\n\t\t\t\tForm_pg_class childclass;\n\n\t\t\t\tchildtuple = SearchSysCache1(RELOID, ObjectIdGetDatum(childOID));\n\t\t\t\tchildclass = (Form_pg_class) GETSTRUCT(childtuple);\n\nEvidently SearchSysCache must be returning NULL, but how come that\nhappens, when we have acquired lock on the rel during\nfind_all_inheritors?\n\nI would suggest that we do not take lock here at all, and just skip the\nrel if SearchSysCache returns empty, as in the attached. Still, I am\nbaffled about this crash.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)",
"msg_date": "Thu, 8 Apr 2021 11:19:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Apr-08, Tom Lane wrote:\n>> Looks like this has issues under EXEC_BACKEND:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2021-04-08%2005%3A50%3A08\n\n> Hmm, I couldn't reproduce this under EXEC_BACKEND or otherwise, but I\n> think this is unrelated to that, but rather a race condition.\n\nYeah. I hit this on another machine that isn't using EXEC_BACKEND,\nand I concur it looks more like a race condition. I think the problem\nis that autovacuum is calling find_all_inheritors() on a relation it\nhas no lock on, contrary to that function's API spec. find_all_inheritors\nassumes the OID it's given is valid and locked, and adds it to the\nresult list automatically. Then it looks for children, and won't find\nany in the race case where somebody else just dropped the table.\nSo we come back to relation_needs_vacanalyze with a list of just the\noriginal rel OID, and since this loop believes that every syscache fetch\nit does will succeed, kaboom.\n\nI do not think it is sane to do find_all_inheritors() with no lock,\nso I'd counsel doing something about that rather than band-aiding the\nsymptom. On the other hand, it's also not really okay not to have\nan if-test-and-elog after the SearchSysCache call. \"A cache lookup\ncannot fail\" is not an acceptable assumption in my book.\n\nBTW, another thing that looks like a race condition is the\nextract_autovac_opts() call that is done a little bit earlier,\nalso without lock. I think this is actually safe, but it's ONLY\nsafe because we resisted the calls by certain people to add a\ntoast table to pg_class. Otherwise, fetching reloptions could\nhave involved a toast pointer dereference, and it would then be\nracy whether the toasted data was still there. As-is, even if\nthe pg_class row we're looking at has been deleted, we can safely\ndisassemble its reloptions. 
I think this matter is deserving\nof a comment at least.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 13:57:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-08, Tom Lane wrote:\n\n> Yeah. I hit this on another machine that isn't using EXEC_BACKEND,\n> and I concur it looks more like a race condition. I think the problem\n> is that autovacuum is calling find_all_inheritors() on a relation it\n> has no lock on, contrary to that function's API spec. find_all_inheritors\n> assumes the OID it's given is valid and locked, and adds it to the\n> result list automatically. Then it looks for children, and won't find\n> any in the race case where somebody else just dropped the table.\n\nHmm. Autovacuum tries hard to avoid grabbing locks on relations until\nreally needed (at vacuum/analyze time), which is why all these tests\nonly use data that can be found in the pg_class rows and pgstat entries.\nSo I tend to think that my initial instinct was the better direction: we\nshould not be doing any find_all_inheritors() here at all, but instead\nrely on pg_class.reltuples to be set for the partitioned table.\n\nI'll give that another look. Most places already assume that reltuples\nisn't set for a partitioned table, so they shouldn't care. I wonder,\nthough, whether we should set relpages to some value other than 0 or -1.\n(I'm inclined not to, since autovacuum does not use it.)\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 8 Apr 2021 14:35:51 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Apr-08, Tom Lane wrote:\n>> Yeah. I hit this on another machine that isn't using EXEC_BACKEND,\n>> and I concur it looks more like a race condition. I think the problem\n>> is that autovacuum is calling find_all_inheritors() on a relation it\n>> has no lock on, contrary to that function's API spec.\n\n> Hmm. Autovacuum tries hard to avoid grabbing locks on relations until\n> really needed (at vacuum/analyze time), which is why all these tests\n> only use data that can be found in the pg_class rows and pgstat entries.\n\nYeah, I was worried about that.\n\n> So I tend to think that my initial instinct was the better direction: we\n> should not be doing any find_all_inheritors() here at all, but instead\n> rely on pg_class.reltuples to be set for the partitioned table.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 14:47:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-08, Tom Lane wrote:\n\n> > So I tend to think that my initial instinct was the better direction: we\n> > should not be doing any find_all_inheritors() here at all, but instead\n> > rely on pg_class.reltuples to be set for the partitioned table.\n> \n> +1\n\nThis patch does that.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"I dream about dreams about dreams\", sang the nightingale\nunder the pale moon (Sandman)",
"msg_date": "Thu, 8 Apr 2021 16:11:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 1:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Apr-08, Tom Lane wrote:\n>\n> > > So I tend to think that my initial instinct was the better direction:\n> we\n> > > should not be doing any find_all_inheritors() here at all, but instead\n> > > rely on pg_class.reltuples to be set for the partitioned table.\n> >\n> > +1\n>\n> This patch does that.\n>\n> --\n> Álvaro Herrera 39°49'30\"S 73°17'W\n> \"I dream about dreams about dreams\", sang the nightingale\n> under the pale moon (Sandman)\n>\n\nHi,\nWithin truncate_update_partedrel_stats(), dirty is declared within the loop.\n+ if (rd_rel->reltuples != 0)\n+ {\n...\n+ if (dirty)\n\nThe two if blocks can be merged. The variable dirty can be dropped.\n\nCheers\n\nOn Thu, Apr 8, 2021 at 1:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2021-Apr-08, Tom Lane wrote:\n\n> > So I tend to think that my initial instinct was the better direction: we\n> > should not be doing any find_all_inheritors() here at all, but instead\n> > rely on pg_class.reltuples to be set for the partitioned table.\n> \n> +1\n\nThis patch does that.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"I dream about dreams about dreams\", sang the nightingale\nunder the pale moon (Sandman)Hi,Within truncate_update_partedrel_stats(), dirty is declared within the loop.+ if (rd_rel->reltuples != 0)+ {...+ if (dirty)The two if blocks can be merged. The variable dirty can be dropped.Cheers",
"msg_date": "Thu, 8 Apr 2021 13:25:24 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-08, Zhihong Yu wrote:\n\n> Hi,\n> Within truncate_update_partedrel_stats(), dirty is declared within the loop.\n> + if (rd_rel->reltuples != 0)\n> + {\n> ...\n> + if (dirty)\n> \n> The two if blocks can be merged. The variable dirty can be dropped.\n\nHi, thanks for reviewing. Yes, evidently I copied vac_update_relstats\ntoo closely -- that boolean is not necessary.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:29:51 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Could we get this pushed sooner rather than later? The buildfarm\nis showing a wide variety of intermittent failures on HEAD, and it's\nhard to tell how many of them trace to this one bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Apr 2021 07:53:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-09, Tom Lane wrote:\n\n> Could we get this pushed sooner rather than later? The buildfarm\n> is showing a wide variety of intermittent failures on HEAD, and it's\n> hard to tell how many of them trace to this one bug.\n\nPushed now, thanks.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Digital and video cameras have this adjustment and film cameras don't for the\nsame reason dogs and cats lick themselves: because they can.\" (Ken Rockwell)\n\n\n",
"msg_date": "Fri, 9 Apr 2021 11:54:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 11:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Apr-09, Tom Lane wrote:\n> > Could we get this pushed sooner rather than later? The buildfarm\n> > is showing a wide variety of intermittent failures on HEAD, and it's\n> > hard to tell how many of them trace to this one bug.\n>\n> Pushed now, thanks.\n\nDoes this need to worry about new partitions getting attached to a\npartitioned table, or old ones getting detached? (Maybe it does\nalready, not sure.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:39:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-09, Robert Haas wrote:\n\n> Does this need to worry about new partitions getting attached to a\n> partitioned table, or old ones getting detached? (Maybe it does\n> already, not sure.)\n\nGood question. It does not.\n\nI suppose you could just let that happen automatically -- I mean, next\ntime the partitioned table is analyzed, it'll scan all attached\npartitions. But if no tuples are modified afterwards in existing\npartitions (a common scenario), and the newly attached partition\ncontains lots of rows, then only future rows in the newly attached\npartition would affect the stats of the partitioned table, and it could\nbe a long time before that causes an analyze on the partitioned table to\noccur.\n\nMaybe a way to attack this is to send a the \"anl_ancestors\" message to\nthe collector on attach and detach, adding a new flag (\"is\nattach/detach\"), which indicates to add not only \"changes_since_analyze\n- changes_since_analyze_reported\", but also \"n_live_tuples\".\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n\n\n",
"msg_date": "Fri, 9 Apr 2021 17:31:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 05:31:55PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-09, Robert Haas wrote:\n> \n> > Does this need to worry about new partitions getting attached to a\n> > partitioned table, or old ones getting detached? (Maybe it does\n> > already, not sure.)\n> \n> Good question. It does not.\n\nI think there's probably cases where this is desirable, and cases where it's\nundesirable, so I don't think it's necessarily a problem.\n\nOne data point: we do DETACH/ATTACH tables during normal operation, before\ntype-promoting ALTERs, to avoid worst-case disk use, and to avoid locking the\ntable for a long time. It'd be undesirable (but maybe of no great consequence)\nto trigger an ALTER when we DETACH them, since we'll re-ATTACH it shortly\nafterwards.\n\nHowever, I think DROP should be handled ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:45:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "\n\nOn 4/9/21 11:45 PM, Justin Pryzby wrote:\n> On Fri, Apr 09, 2021 at 05:31:55PM -0400, Alvaro Herrera wrote:\n>> On 2021-Apr-09, Robert Haas wrote:\n>>\n>>> Does this need to worry about new partitions getting attached to a\n>>> partitioned table, or old ones getting detached? (Maybe it does\n>>> already, not sure.)\n>>\n>> Good question. It does not.\n> \n> I think there's probably cases where this is desirable, and cases where it's\n> undesirable, so I don't think it's necessarily a problem.\n> \n> One data point: we do DETACH/ATTACH tables during normal operation, before\n> type-promoting ALTERs, to avoid worst-case disk use, and to avoid locking the\n> table for a long time. It'd be undesirable (but maybe of no great consequence)\n> to trigger an ALTER when we DETACH them, since we'll re-ATTACH it shortly\n> afterwards.\n> \n> However, I think DROP should be handled ?\n> \n\nIMHO we should prefer the default behavior which favors having updated\nstatistics, and maybe have a way to override it for individual commands.\nSo ATTACH would update changes_since_analyze by default, but it would be\npossible to disable that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Apr 2021 23:53:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-09, Justin Pryzby wrote:\n\n> One data point: we do DETACH/ATTACH tables during normal operation, before\n> type-promoting ALTERs, to avoid worst-case disk use, and to avoid locking the\n> table for a long time. It'd be undesirable (but maybe of no great consequence)\n> to trigger an ALTER when we DETACH them, since we'll re-ATTACH it shortly\n> afterwards.\n\nYou mean to trigger an ANALYZE, not to trigger an ALTER, right?\n\nI think I agree with Tomas: we should do it by default, and offer some\nway to turn that off. I suppose a new reloptions, solely for\npartitioned tables, would be the way to do it.\n\n> However, I think DROP should be handled ?\n\nDROP of a partition? ... I would think it should do the same as DETACH,\nright? Inform that however many rows the partition had, are now changed\nin ancestors.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Aprender sin pensar es in�til; pensar sin aprender, peligroso\" (Confucio)\n\n\n",
"msg_date": "Fri, 9 Apr 2021 18:16:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 06:16:59PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-09, Justin Pryzby wrote:\n> \n> > One data point: we do DETACH/ATTACH tables during normal operation, before\n> > type-promoting ALTERs, to avoid worst-case disk use, and to avoid locking the\n> > table for a long time. It'd be undesirable (but maybe of no great consequence)\n> > to trigger an ALTER when we DETACH them, since we'll re-ATTACH it shortly\n> > afterwards.\n> \n> You mean to trigger an ANALYZE, not to trigger an ALTER, right?\n\nOops, right. It's slightly undesirable for a DETACH to cause an ANALYZE.\n\n> I think I agree with Tomas: we should do it by default, and offer some\n> way to turn that off. I suppose a new reloptions, solely for\n> partitioned tables, would be the way to do it.\n> \n> > However, I think DROP should be handled ?\n> \n> DROP of a partition? ... I would think it should do the same as DETACH,\n> right? Inform that however many rows the partition had, are now changed\n> in ancestors.\n\nYes, drop of an (attached) partition. The case for DROP is clear, since it\nwas clearly meant to go away forever. The case for DETACH seems somewhat less\nclear.\n\nThe current behavior of pg_dump/restore (since 33a53130a) is to CREATE+ATTACH,\nso there's an argument that if DROPping the partition counts towards the\nparent's analyze, then so should CREATE+ATTACH.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Apr 2021 17:29:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-09 11:54:30 -0400, Alvaro Herrera wrote:\n> On 2021-Apr-09, Tom Lane wrote:\n>\n> > Could we get this pushed sooner rather than later? The buildfarm\n> > is showing a wide variety of intermittent failures on HEAD, and it's\n> > hard to tell how many of them trace to this one bug.\n>\n> Pushed now, thanks.\n\nI assume this is also the likely explanation for / fix for:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-04-08%2016%3A03%3A03\n\n==3500389== VALGRINDERROR-BEGIN\n==3500389== Invalid read of size 8\n==3500389== at 0x4EC4B8: relation_needs_vacanalyze (autovacuum.c:3237)\n==3500389== by 0x4EE0AF: do_autovacuum (autovacuum.c:2168)\n==3500389== by 0x4EEEA8: AutoVacWorkerMain (autovacuum.c:1715)\n==3500389== by 0x4EEF7F: StartAutoVacWorker (autovacuum.c:1500)\n==3500389== by 0x4FD2E4: StartAutovacuumWorker (postmaster.c:5539)\n==3500389== by 0x4FE50A: sigusr1_handler (postmaster.c:5243)\n==3500389== by 0x4A6513F: ??? (in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so)\n==3500389== by 0x4DCA865: select (select.c:41)\n==3500389== by 0x4FEB75: ServerLoop (postmaster.c:1701)\n==3500389== by 0x4FFE52: PostmasterMain (postmaster.c:1409)\n==3500389== by 0x442563: main (main.c:209)\n==3500389== Address 0x10 is not stack'd, malloc'd or (recently) free'd\n==3500389==\n==3500389== VALGRINDERROR-END\n==3500389==\n==3500389== Process terminating with default action of signal 11 (SIGSEGV): dumping core\n==3500389== Access not within mapped region at address 0x10\n==3500389== at 0x4EC4B8: relation_needs_vacanalyze (autovacuum.c:3237)\n==3500389== by 0x4EE0AF: do_autovacuum (autovacuum.c:2168)\n==3500389== by 0x4EEEA8: AutoVacWorkerMain (autovacuum.c:1715)\n==3500389== by 0x4EEF7F: StartAutoVacWorker (autovacuum.c:1500)\n==3500389== by 0x4FD2E4: StartAutovacuumWorker (postmaster.c:5539)\n==3500389== by 0x4FE50A: sigusr1_handler (postmaster.c:5243)\n==3500389== by 0x4A6513F: ??? 
(in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so)\n==3500389== by 0x4DCA865: select (select.c:41)\n==3500389== by 0x4FEB75: ServerLoop (postmaster.c:1701)\n==3500389== by 0x4FFE52: PostmasterMain (postmaster.c:1409)\n==3500389== by 0x442563: main (main.c:209)\n==3500389== If you believe this happened as a result of a stack\n==3500389== overflow in your program's main thread (unlikely but\n==3500389== possible), you can try to increase the size of the\n==3500389== main thread stack using the --main-stacksize= flag.\n==3500389== The main thread stack size used in this run was 8388608.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:47:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Hello\n\nOn 2021-Apr-09, Andres Freund wrote:\n\n> I assume this is also the likely explanation for / fix for:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-04-08%2016%3A03%3A03\n> \n> ==3500389== VALGRINDERROR-BEGIN\n> ==3500389== Invalid read of size 8\n> ==3500389== at 0x4EC4B8: relation_needs_vacanalyze (autovacuum.c:3237)\n> ==3500389== by 0x4EE0AF: do_autovacuum (autovacuum.c:2168)\n> ==3500389== by 0x4EEEA8: AutoVacWorkerMain (autovacuum.c:1715)\n> ==3500389== by 0x4EEF7F: StartAutoVacWorker (autovacuum.c:1500)\n> ==3500389== by 0x4FD2E4: StartAutovacuumWorker (postmaster.c:5539)\n\nHmm, I didn't try to reproduce this, but yeah it sounds quite likely\nthat it's the same issue -- line 3237 is the GETSTRUCT call where the\nother one was crashing, which is now gone.\n\nThanks for pointing it out,\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Fri, 9 Apr 2021 20:01:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-09 11:54:30 -0400, Alvaro Herrera wrote:\n>> Pushed now, thanks.\n\n> I assume this is also the likely explanation for / fix for:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-04-08%2016%3A03%3A03\n\n> ==3500389== VALGRINDERROR-BEGIN\n> ==3500389== Invalid read of size 8\n> ==3500389== at 0x4EC4B8: relation_needs_vacanalyze (autovacuum.c:3237)\n\nYeah, looks like the same thing to me; it's the same line that was\ncrashing in the non-valgrind reports.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Apr 2021 22:52:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 4/10/21 12:29 AM, Justin Pryzby wrote:\n> On Fri, Apr 09, 2021 at 06:16:59PM -0400, Alvaro Herrera wrote:\n>> On 2021-Apr-09, Justin Pryzby wrote:\n>>\n>>> One data point: we do DETACH/ATTACH tables during normal operation, before\n>>> type-promoting ALTERs, to avoid worst-case disk use, and to avoid locking the\n>>> table for a long time. It'd be undesirable (but maybe of no great consequence)\n>>> to trigger an ALTER when we DETACH them, since we'll re-ATTACH it shortly\n>>> afterwards.\n>>\n>> You mean to trigger an ANALYZE, not to trigger an ALTER, right?\n> \n> Oops, right. It's slightly undesirable for a DETACH to cause an ANALYZE.\n> \n>> I think I agree with Tomas: we should do it by default, and offer some\n>> way to turn that off. I suppose a new reloptions, solely for\n>> partitioned tables, would be the way to do it.\n>>\n>>> However, I think DROP should be handled ?\n>>\n>> DROP of a partition? ... I would think it should do the same as DETACH,\n>> right? Inform that however many rows the partition had, are now changed\n>> in ancestors.\n> \n> Yes, drop of an (attached) partition. The case for DROP is clear, since it\n> was clearly meant to go away forever. The case for DETACH seems somewhat less\n> clear.\n> \n> The current behavior of pg_dump/restore (since 33a53130a) is to CREATE+ATTACH,\n> so there's an argument that if DROPping the partition counts towards the\n> parent's analyze, then so should CREATE+ATTACH.\n> \n\nI think it's tricky to \"optimize\" the behavior after ATTACH/DETACH. I'd\nargue that in principle, we should aim to keep accurate statistics, so\nATTACH should be treated as insert of all rows, and DETACH should be\ntreated as delete of all rows. Se for the purpose of ANALYZE, we should\npropagate reltuples as changes_since_analyze after ATTACH/DETACH.\n\nYes, it may result in more frequent ANALYZE on the parent, but I think\nthat's necessary. 
Repeated attach/detach of the same partition may bloat\nthe value, but I guess that's an example of \"If it hurts don't do it.\"\n\nWhat I think we might do is offer some light-weight analyze variant,\ne.g. based on the merging of statistics (I've posted a PoC patch a\ncouple of days ago). That would make the ANALYZEs on parent much cheaper,\nso those \"unnecessary\" analyzes would not be an issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 10 Apr 2021 23:22:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 04:11:49PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-08, Tom Lane wrote:\n> \n> > > So I tend to think that my initial instinct was the better direction: we\n> > > should not be doing any find_all_inheritors() here at all, but instead\n> > > rely on pg_class.reltuples to be set for the partitioned table.\n> > \n> > +1\n> \n> This patch does that.\n\n|commit 0e69f705cc1a3df273b38c9883fb5765991e04fe\n|Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n|Date: Fri Apr 9 11:29:08 2021 -0400\n|\n| Set pg_class.reltuples for partitioned tables\n| \n| When commit 0827e8af70f4 added auto-analyze support for partitioned\n| tables, it included code to obtain reltuples for the partitioned table\n| as a number of catalog accesses to read pg_class.reltuples for each\n| partition. That's not only very inefficient, but also problematic\n| because autovacuum doesn't hold any locks on any of those tables -- and\n| doesn't want to. Replace that code with a read of pg_class.reltuples\n| for the partitioned table, and make sure ANALYZE and TRUNCATE properly\n| maintain that value.\n| \n| I found no code that would be affected by the change of relpages from\n| zero to non-zero for partitioned tables, and no other code that should\n| be maintaining it, but if there is, hopefully it'll be an easy fix.\n\n+ else if (onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ {\n+ /*\n+ * Partitioned tables don't have storage, so we don't set any fields in\n+ * their pg_class entries except for relpages, which is necessary for\n+ * auto-analyze to work properly.\n+ */\n+ vac_update_relstats(onerel, -1, totalrows,\n+ 0, false, InvalidTransactionId,\n+ InvalidMultiXactId,\n+ in_outer_xact);\n+ }\n\nThis refers to \"relpages\", but I think it means \"reltuples\".\n\nsrc/include/commands/vacuum.h:extern void vac_update_relstats(Relation relation,\nsrc/include/commands/vacuum.h- BlockNumber num_pages,\nsrc/include/commands/vacuum.h- double 
num_tuples,\nsrc/include/commands/vacuum.h- BlockNumber num_all_visible_pages,\n\nI'm adding it for the next round of \"v14docs\" patch if you don't want to make a\nseparate commit for that.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 10 Apr 2021 20:55:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-08, Tom Lane wrote:\n\n> BTW, another thing that looks like a race condition is the\n> extract_autovac_opts() call that is done a little bit earlier,\n> also without lock. I think this is actually safe, but it's ONLY\n> safe because we resisted the calls by certain people to add a\n> toast table to pg_class. Otherwise, fetching reloptions could\n> have involved a toast pointer dereference, and it would then be\n> racy whether the toasted data was still there. As-is, even if\n> the pg_class row we're looking at has been deleted, we can safely\n> disassemble its reloptions. I think this matter is deserving\n> of a comment at least.\n\nTrue. I added a comment there.\n\nThanks,\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n",
"msg_date": "Wed, 21 Apr 2021 18:40:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-09, Robert Haas wrote:\n\n> Does this need to worry about new partitions getting attached to a\n> partitioned table, or old ones getting detached? (Maybe it does\n> already, not sure.)\n\nI was pinged because this is listed as an open item. I don't think it\nis one. Handling ATTACH/DETACH/DROP is important for overall\nconsistency, of course, so we should do it eventually, but the fact that\nautovacuum runs analyze *at all* for partitioned tables is an enormous\nstep forward from it not doing so. I think we should treat ATTACH/\nDETACH/DROP handling as a further feature to be added in a future\nrelease, not an open item to be fixed in the current one.\n\nNow, if somebody sees a very trivial way to handle it, let's discuss it,\nbut *I* don't see it.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n",
"msg_date": "Wed, 21 Apr 2021 19:06:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Hi,\n\nThank you for discussing this item.\n\n> I think we should treat ATTACH/\n> DETACH/DROP handling as a further feature to be added in a future\n> release, not an open item to be fixed in the current one.\n>\nI agree with your opinion.\n\n> Now, if somebody sees a very trivial way to handle it, let's discuss it,\n> but *I* don't see it.\n>\nI started thinking about the way to handle ATTACH/DETACH/DROP,\nbut I haven't created patches. If no one has done it yet, I'll keep working.\n\n--\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Apr 2021 08:30:52 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 07:06:49PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-09, Robert Haas wrote:\n> > Does this need to worry about new partitions getting attached to a\n> > partitioned table, or old ones getting detached? (Maybe it does\n> > already, not sure.)\n> \n> I was pinged because this is listed as an open item. I don't think it\n> is one. Handling ATTACH/DETACH/DROP is important for overall\n> consistency, of course, so we should do it eventually, but the fact that\n> autovacuum runs analyze *at all* for partitioned tables is an enormous\n> step forward from it not doing so. I think we should treat ATTACH/\n> DETACH/DROP handling as a further feature to be added in a future\n> release, not an open item to be fixed in the current one.\n\nI think this is okay, with the caveat that we'd be changing the behavior\n(again) in a future release, rather than doing it all in v14.\n\nMaybe the behavior should be documented, though. Actually, I thought the\npre-existing (non)behavior of autoanalyze would've been documented, and we'd\nnow update that. All I can find is this:\n\nhttps://www.postgresql.org/docs/current/sql-analyze.html\n|The autovacuum daemon, however, will only consider inserts or updates on the\n|parent table itself when deciding whether to trigger an automatic analyze for\n|that table\n\nI think that should probably have been written down somewhere other than for\nthe manual ANALYZE command, but in any case it seems to be outdated now.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Apr 2021 12:43:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 12:43:46PM -0500, Justin Pryzby wrote:\n> Maybe the behavior should be documented, though. Actually, I thought the\n> pre-existing (non)behavior of autoanalyze would've been documented, and we'd\n> now update that. All I can find is this:\n> \n> https://www.postgresql.org/docs/current/sql-analyze.html\n> |The autovacuum daemon, however, will only consider inserts or updates on the\n> |parent table itself when deciding whether to trigger an automatic analyze for\n> |that table\n> \n> I think that should probably have been written down somewhere other than for\n> the manual ANALYZE command, but in any case it seems to be outdated now.\n\nStarting with this \n\n From a7ae56a879b6bacc4fc22cbd769851713be89840 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Fri, 23 Apr 2021 09:15:58 -0500\nSubject: [PATCH] WIP: Add docs for autovacuum processing of partitioned tables\n\n---\n doc/src/sgml/perform.sgml | 3 ++-\n doc/src/sgml/ref/analyze.sgml | 4 +++-\n doc/src/sgml/ref/pg_restore.sgml | 6 ++++--\n 3 files changed, 9 insertions(+), 4 deletions(-)\n\ndiff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml\nindex 89ff58338e..814c3cffbe 100644\n--- a/doc/src/sgml/perform.sgml\n+++ b/doc/src/sgml/perform.sgml\n@@ -1767,7 +1767,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;\n <para>\n Whenever you have significantly altered the distribution of data\n within a table, running <link linkend=\"sql-analyze\"><command>ANALYZE</command></link> is strongly recommended. This\n- includes bulk loading large amounts of data into the table. Running\n+ includes bulk loading large amounts of data into the table,\n+ or attaching/detaching partitions. Running\n <command>ANALYZE</command> (or <command>VACUUM ANALYZE</command>)\n ensures that the planner has up-to-date statistics about the\n table. 
With no statistics or obsolete statistics, the planner might\ndiff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml\nindex c8fcebc161..179ae3555d 100644\n--- a/doc/src/sgml/ref/analyze.sgml\n+++ b/doc/src/sgml/ref/analyze.sgml\n@@ -255,11 +255,13 @@ ANALYZE [ VERBOSE ] [ <replaceable class=\"parameter\">table_and_columns</replacea\n rows of the parent table only, and a second time on the rows of the\n parent table with all of its children. This second set of statistics\n is needed when planning queries that traverse the entire inheritance\n- tree. The autovacuum daemon, however, will only consider inserts or\n+ tree. For legacy inheritence, the autovacuum daemon, only considers inserts or\n updates on the parent table itself when deciding whether to trigger an\n automatic analyze for that table. If that table is rarely inserted into\n or updated, the inheritance statistics will not be up to date unless you\n run <command>ANALYZE</command> manually.\n+ For partitioned tables, inserts and updates on the partitions are counted\n+ towards auto-analyze on the parent.\n </para>\n \n <para>\ndiff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml\nindex 93ea937ac8..260bf0feb7 100644\n--- a/doc/src/sgml/ref/pg_restore.sgml\n+++ b/doc/src/sgml/ref/pg_restore.sgml\n@@ -922,8 +922,10 @@ CREATE DATABASE foo WITH TEMPLATE template0;\n \n <para>\n Once restored, it is wise to run <command>ANALYZE</command> on each\n- restored table so the optimizer has useful statistics; see\n- <xref linkend=\"vacuum-for-statistics\"/> and\n+ restored table so the optimizer has useful statistics.\n+ If the table is a partition or an inheritence child, it may also be useful\n+ to analyze the parent table.\n+ See <xref linkend=\"vacuum-for-statistics\"/> and\n <xref linkend=\"autovacuum\"/> for more information.\n </para>\n \n-- \n2.17.0\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 13:01:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 07:06:49PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-09, Robert Haas wrote:\n>> Does this need to worry about new partitions getting attached to a\n>> partitioned table, or old ones getting detached? (Maybe it does\n>> already, not sure.)\n> \n> I was pinged because this is listed as an open item. I don't think it\n> is one. Handling ATTACH/DETACH/DROP is important for overall\n> consistency, of course, so we should do it eventually, but the fact that\n> autovacuum runs analyze *at all* for partitioned tables is an enormous\n> step forward from it not doing so. I think we should treat ATTACH/\n> DETACH/DROP handling as a further feature to be added in a future\n> release, not an open item to be fixed in the current one.\n\nYeah, I'd agree that this could be done as some future work so it\nlooks fine to move it to the section for \"won't fix\" items, but that\nsounds rather tricky to me as there are dependencies across the\npartitions.\n\nNow, I don't think that we are completely done either, as one\ndocumentation patch has been sent here:\nhttps://www.postgresql.org/message-id/20210423180152.GA17270@telsasoft.com\n\nAlvaro, could you look at that?\n--\nMichael",
"msg_date": "Tue, 11 May 2021 17:08:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On 2021-Apr-23, Justin Pryzby wrote:\n\n> On Thu, Apr 22, 2021 at 12:43:46PM -0500, Justin Pryzby wrote:\n> > \n> > I think that should probably have been written down somewhere other than for\n> > the manual ANALYZE command, but in any case it seems to be outdated now.\n> \n> Starting with this \n\nAgreed, we need some more docs here. I lightly edited yours and ended\nup with this -- mostly I think partitioned tables should not be in the\nsame paragraph as legacy inheritance because the behavior is different\nenough (partitioned tables are not analyzed twice).\n\nI'll give a deeper look tomorrow to see if other places also need edits.\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile",
"msg_date": "Tue, 11 May 2021 17:56:16 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "New version, a bit more ambitious. I think it's better to describe\nbehavior for partitioned tables ahead of inheritance. Also, in the\nANALYZE reference page I split the topic in two: in one single paragraph\nwe now describe what happens with manual analyze for partitioned tables\nand inheritance hierarchies; we describe the behavior of autovacuum in\none separate paragraph for each type of hierarchy, since the differences\nare stark.\n\nI noticed that difference while verifying the behavior that I was to\ndocument. If you look at ANALYZE VERBOSE output, it seems a bit\nwasteful:\n\ncreate table part (a int) partition by list (a);\ncreate table part0 partition of part for values in (0);\ncreate table part1 partition of part for values in (1);\ncreate table part23 partition of part for values in (2, 3) partition by list (a);\ncreate table part2 partition of part23 for values in (2);\ncreate table part3 partition of part23 for values in (3);\ninsert into part select g%4 from generate_series(1, 50000000) g;\n\nanalyze verbose part;\n\nINFO: analyzing \"public.part\" inheritance tree\nINFO: \"part1\": scanned 7500 of 55310 pages, containing 1695000 live rows and 0 dead rows; 7500 rows in sample, 12500060 estimated total rows\nINFO: \"part2\": scanned 7500 of 55310 pages, containing 1695000 live rows and 0 dead rows; 7500 rows in sample, 12500060 estimated total rows\nINFO: \"part3\": scanned 7500 of 55310 pages, containing 1695000 live rows and 0 dead rows; 7500 rows in sample, 12500060 estimated total rows\nINFO: \"part4\": scanned 7500 of 55310 pages, containing 1695000 live rows and 0 dead rows; 7500 rows in sample, 12500060 estimated total rows\nINFO: analyzing \"public.part1\"\nINFO: \"part1\": scanned 30000 of 55310 pages, containing 6779940 live rows and 0 dead rows; 30000 rows in sample, 12499949 estimated total rows\nINFO: analyzing \"public.part2\"\nINFO: \"part2\": scanned 30000 of 55310 pages, containing 6779940 live rows and 0 dead rows; 
30000 rows in sample, 12499949 estimated total rows\nINFO: analyzing \"public.part34\" inheritance tree\nINFO: \"part3\": scanned 15000 of 55310 pages, containing 3390000 live rows and 0 dead rows; 15000 rows in sample, 12500060 estimated total rows\nINFO: \"part4\": scanned 15000 of 55310 pages, containing 3389940 live rows and 0 dead rows; 15000 rows in sample, 12499839 estimated total rows\nINFO: analyzing \"public.part3\"\nINFO: \"part3\": scanned 30000 of 55310 pages, containing 6780000 live rows and 0 dead rows; 30000 rows in sample, 12500060 estimated total rows\nINFO: analyzing \"public.part4\"\nINFO: \"part4\": scanned 30000 of 55310 pages, containing 6780000 live rows and 0 dead rows; 30000 rows in sample, 12500060 estimated total rows\nANALYZE\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)",
"msg_date": "Thu, 13 May 2021 17:33:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "With English fixes from Bruce.\n\nI think the note about autovacuum in the reference page for ANALYZE is a\nbit out of place, but not enough to make me move the whole paragraph\nelsewhere. Maybe we should try to do that sometime.\n\n-- \nÁlvaro Herrera Valdivia, Chile\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")",
"msg_date": "Thu, 13 May 2021 19:02:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "On Thu, May 13, 2021 at 05:33:33PM -0400, Alvaro Herrera wrote:\n> +++ b/doc/src/sgml/maintenance.sgml\n> @@ -817,6 +817,11 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu\n> </programlisting>\n> is compared to the total number of tuples inserted, updated, or deleted\n> since the last <command>ANALYZE</command>.\n> + For partitioned tables, inserts and updates on partitions are counted\n> + towards this threshold; however partition meta-operations such as\n> + attachment, detachment or drop are not, so running a manual\n> + <command>ANALYZE</command> is recommended if the partition added or\n> + removed contains a statistically significant volume of data.\n\nI suggest: \"Inserts, updates and deletes on partitions of a partitioned table\nare counted towards this threshold; however DDL operations such as ATTACH,\nDETACH and DROP are not, ...\n\n> + and in addition it will analyze each individual partition separately.\n\nremove \"and\" and say in addition COMMA\nOr:\n\"it will also recursive into each partition and update its statistics.\"\n\n> + By constrast, if the table being analyzed has inheritance children,\n> + <command>ANALYZE</command> will gather statistics for that table twice:\n> + once on the rows of the parent table only, and a second time on the\n> + rows of the parent table with all of its children. This second set of\n> + statistics is needed when planning queries that traverse the entire\n> + inheritance tree. The children tables are not individually analyzed\n> + in this case.\n\nsay \"The child tables themselves..\"\n\n> + <para>\n> + For tables with inheritance children, the autovacuum daemon only\n> + counts inserts and deletes in the parent table itself when deciding\n> + whether to trigger an automatic analyze for that table. 
If that table\n> + is rarely inserted into or updated, the inheritance statistics will\n> + not be up to date unless you run <command>ANALYZE</command> manually.\n> + </para>\n\nThis should be emphasized:\nTuples changed in inheritance children do not count towards analyze on the\nparent table. If the parent table is empty or rarely changed, it may never \nbe processed by autovacuum. It's necessary to periodically run a manual\nANALYZE to keep the statistics of the table hierarchy up to date.\n\nI don't know why it says \"inserted or updated\" but doesn't say \"or deleted\" -\nthat seems like a back-patchable fix.\n\n> +++ b/doc/src/sgml/ref/pg_restore.sgml\n> @@ -922,8 +922,10 @@ CREATE DATABASE foo WITH TEMPLATE template0;\n> \n> <para>\n> Once restored, it is wise to run <command>ANALYZE</command> on each\n> - restored table so the optimizer has useful statistics; see\n> - <xref linkend=\"vacuum-for-statistics\"/> and\n> + restored table so the optimizer has useful statistics.\n> + If the table is a partition or an inheritance child, it may also be useful\n> + to analyze the parent table.\n> + See <xref linkend=\"vacuum-for-statistics\"/> and\n> <xref linkend=\"autovacuum\"/> for more information.\n\nmaybe say: \"analyze the parent to update statistics for the table hierarchy\".\n\n\n",
"msg_date": "Thu, 13 May 2021 18:25:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
},
{
"msg_contents": "Thanks for these corrections -- I applied them and a few minor changes\nfrom myself, and pushed. Another set of eyes over the result would be\nmost welcome.\n\nI hope we can close this now :-)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Those who use electric razors are infidels destined to burn in hell while\nwe drink from rivers of beer, download free vids and mingle with naked\nwell shaved babes.\" (http://slashdot.org/comments.pl?sid=44793&cid=4647152)\n\n\n",
"msg_date": "Fri, 14 May 2021 13:13:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: autovacuum: handle analyze for partitioned tables"
}
] |
[
{
"msg_contents": "Hello,\n\nI'd like to propose adding `--drop-cascade` option for pg_dump/restore\n\n\nUsecase:\n\nI'd like to be able to restore an old custom format database dump as a\nsingle transaction ( so the current data won't lose if restore fails). The\ndatabase has added some new constraints after backup so a CASCADE DROP is\nneeded.\n\n\n This allows for restoring an old backup after adding new constraints,\n\n at the risk of losing new data.\n\n\nThere're already some requests for supporting cascade drop:\n\n -\n https://dba.stackexchange.com/questions/281384/pg-restore-clean-not-working-because-cascade-drop\n -\n https://www.postgresql.org/message-id/flat/Pine.LNX.4.33.0308281409440.6957-100000%40dev2.int.journyx.com\n -\n https://www.postgresql.org/message-id/flat/50EC9574.9060500%40encs.concordia.ca\n\n\nDesign & Implementation\n\n\nBasically I'm following the changes in adding `--if-exists` patch:\nhttps://github.com/postgres/postgres/commit/9067310cc5dd590e36c2c3219dbf3961d7c9f8cb\n. pg_dump/restore will inject a CASCADE clause to each DROP command.\n\n\nThe attached patch has been tested on our old backup. I'm happy to get some\nfeedback.",
"msg_date": "Thu, 8 Apr 2021 14:24:42 +0800",
"msg_from": "Haotian Wu <whtsky@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "Overall the patch looks good, but I did notice a few small things:\r\n\r\n1. In pg_dumpall.c, the section /* Add long options to the pg_dump argument list */, we are now \r\npassing along the --drop-cascade option. However, --clean is not passed in, so \r\nany call to pg_dumpall using --drop-cascade fails at the pg_dump step. You'll note \r\nthat --if-exists is not passed along either; because we are dropping the whole database, we don't \r\nneed to have pg_dump worry about dropping objects at all. So I think that \r\n--drop-cascade should NOT be passed along from pg_dumpall to pg_dump.\r\n\r\n2. I'm not even sure if --drop-cascade makes sense for pg_dumpall, as you cannot cascade global things like databases and roles.\r\n\r\n3. In the file pg_backup_archiver.c, the patch does a \r\nstmtEnd = strstr(mark + strlen(buffer), \";\");\" and then spits \r\nout things \"past\" the semicolon as the final %s in the appendPQExpBuffer line. \r\nI'm not clear why: are we expecting more things to appear after the semi-colon? \r\nWhy not just append a \"\\n\" manually as part of the previous %s?\r\n\r\nCheers,\r\nGreg\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Fri, 28 May 2021 18:39:09 +0000",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "Hi,\n\nI agree that --drop-cascade does not make sense for pg_dumpall, so I removed them.\n\n> are we expecting more things to appear after the semi-colon? \n\nNo, I was just trying to “reuse” the original statement as much as possible. Appending “\\n” manually should also do the job, and I’ve updated the patch as you suggest.\n\n\n\n\n\n> On May 29, 2021, at 2:39 AM, Greg Sabino Mullane <htamfids@gmail.com> wrote:\n> \n> Overall the patch looks good, but I did notice a few small things:\n> \n> 1. In pg_dumpall.c, the section /* Add long options to the pg_dump argument list */, we are now \n> passing along the --drop-cascade option. However, --clean is not passed in, so \n> any call to pg_dumpall using --drop-cascade fails at the pg_dump step. You'll note \n> that --if-exists is not passed along either; because we are dropping the whole database, we don't \n> need to have pg_dump worry about dropping objects at all. So I think that \n> --drop-cascade should NOT be passed along from pg_dumpall to pg_dump.\n> \n> 2. I'm not even sure if --drop-cascade makes sense for pg_dumpall, as you cannot cascade global things like databases and roles.\n> \n> 3. In the file pg_backup_archiver.c, the patch does a \n> stmtEnd = strstr(mark + strlen(buffer), \";\");\" and then spits \n> out things \"past\" the semicolon as the final %s in the appendPQExpBuffer line. \n> I'm not clear why: are we expecting more things to appear after the semi-colon? \n> Why not just append a \"\\n\" manually as part of the previous %s?\n> \n> Cheers,\n> Greg\n> \n> The new status of this patch is: Waiting on Author",
"msg_date": "Fri, 2 Jul 2021 14:40:52 +0800",
"msg_from": "Haotian Wu <whtsky@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "On Fri, Jul 2, 2021 at 12:11 PM Haotian Wu <whtsky@gmail.com> wrote:\n>\n> Hi,\n>\n> I agree that —drop-cascade does not make sense for pg_dumpall, so I removed them.\n>\n> > are we expecting more things to appear after the semi-colon?\n>\n> No, I was just trying to “reuse” original statement as much as possible. Append “\\n” manually should also do the job, and I’ve updated the patch as you suggests.\n\n1) This change is not required as it is not supported for pg_dumpall\n+++ b/doc/src/sgml/ref/pg_dumpall.sgml\n@@ -289,6 +289,16 @@ PostgreSQL documentation\n </listitem>\n </varlistentry>\n\n+ <varlistentry>\n+ <term><option>--drop-cascade</option></term>\n+ <listitem>\n+ <para>\n+ Use <literal>CASCADE</literal> to drop database objects.\n+ This option is not valid unless <option>--clean</option> is\nalso specified.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n\n2) I felt pg_dump will include the cascade option for plain format and\npg_restore will include the cascade option from pg_restore for other\nformats. If my understanding is correct, should we document this?\n\n3) This change is not required\n\ndestroyPQExpBuffer(ftStmt);\n pg_free(dropStmtOrig);\n }\n+\n }\n\n4) Is it possible to add a few tests for this?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 13 Jul 2021 19:52:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "> 2) I felt pg_dump will include the cascade option for plain format and\n> pg_restore will include the cascade option from pg_restore for other\n> formats. If my understanding is correct, should we document this?\n\nI may not understand it correctly, are you saying\npg_dump will include the cascade option only for plain format, or\npg_dump will enable the cascade option for plain by default?\n\n> 4) Is it possible to add a few tests for this?\n\nLooks like tests should be added to\n`src/bin/pg_dump/t/002_pg_dump.pl`, I'll try to add some.\n\nOn Tue, Jul 13, 2021 at 10:23 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 12:11 PM Haotian Wu <whtsky@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I agree that --drop-cascade does not make sense for pg_dumpall, so I removed them.\n> >\n> > > are we expecting more things to appear after the semi-colon?\n> >\n> > No, I was just trying to “reuse” the original statement as much as possible. Appending “\\n” manually should also do the job, and I’ve updated the patch as you suggest.\n>\n> 1) This change is not required as it is not supported for pg_dumpall\n> +++ b/doc/src/sgml/ref/pg_dumpall.sgml\n> @@ -289,6 +289,16 @@ PostgreSQL documentation\n> </listitem>\n> </varlistentry>\n>\n> + <varlistentry>\n> + <term><option>--drop-cascade</option></term>\n> + <listitem>\n> + <para>\n> + Use <literal>CASCADE</literal> to drop database objects.\n> + This option is not valid unless <option>--clean</option> is\n> also specified.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n>\n> 2) I felt pg_dump will include the cascade option for plain format and\n> pg_restore will include the cascade option from pg_restore for other\n> formats. If my understanding is correct, should we document this?\n>\n> 3) This change is not required\n>\n> destroyPQExpBuffer(ftStmt);\n> pg_free(dropStmtOrig);\n> }\n> +\n> }\n>\n> 4) Is it possible to add a few tests for this?\n>\n> Regards,\n> Vignesh",
"msg_date": "Tue, 13 Jul 2021 23:45:49 +0800",
"msg_from": "Wu Haotian <whtsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 9:16 PM Wu Haotian <whtsky@gmail.com> wrote:\n>\n> > 2) I felt pg_dump will include the cascade option for plain format and\n> > pg_restore will include the cascade option from pg_restore for other\n> > formats. If my understanding is correct, should we document this?\n>\n> I may not understand it correctly, are you saying\n> pg_dump will include the cascade option only for plain format, or\n> pg_dump will enable the cascade option for plain by default?\n\npg_dump support plain, custom, tar and directory format, I think,\ncascade option will be added by pg_dump only for plain format and for\nthe other format pg_restore will include the cascade option. Should we\ndocument this somewhere?\n\n> > 4) Is it possible to add a few tests for this?\n>\n> Looks like tests should be added to\n> `src/bin/pg_dump/t/002_pg_dump.pl`, I'll try to add some.\n\nYes, that should be the right place.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 16 Jul 2021 18:39:07 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Tue, Jul 13, 2021 at 9:16 PM Wu Haotian <whtsky@gmail.com> wrote:\n>> I may not understand it correctly, are you saying\n>> pg_dump will include the cascade option only for plain format, or\n>> pg_dump will enable the cascade option for plain by default?\n\n> pg_dump support plain, custom, tar and directory format, I think,\n> cascade option will be added by pg_dump only for plain format and for\n> the other format pg_restore will include the cascade option. Should we\n> document this somewhere?\n\nThat would require pg_restore to try to edit the DROP commands during\nrestore, which sounds horribly fragile. I'm inclined to think that\nsupporting this option only during initial dump is safer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Jul 2021 09:40:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "\nOn 7/16/21 9:40 AM, Tom Lane wrote:\n> vignesh C <vignesh21@gmail.com> writes:\n>> On Tue, Jul 13, 2021 at 9:16 PM Wu Haotian <whtsky@gmail.com> wrote:\n>>> I may not understand it correctly, are you saying\n>>> pg_dump will include the cascade option only for plain format, or\n>>> pg_dump will enable the cascade option for plain by default?\n>> pg_dump support plain, custom, tar and directory format, I think,\n>> cascade option will be added by pg_dump only for plain format and for\n>> the other format pg_restore will include the cascade option. Should we\n>> document this somewhere?\n> That would require pg_restore to try to edit the DROP commands during\n> restore, which sounds horribly fragile. I'm inclined to think that\n> supporting this option only during initial dump is safer.\n>\n> \t\t\t\n\n\n\nMaybe, but that would push back the time when you would need to decide\nyou needed this quite a lot. We could also have pg_dump stash a copy of\nthe CASCADE variant in the TOC that could be used by pg_restore if\nrequired. I'm not sure if it's worth the trouble and extra space though.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 16 Jul 2021 10:00:02 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> That would require pg_restore to try to edit the DROP commands during\n> restore, which sounds horribly fragile. I'm inclined to think that\n> supporting this option only during initial dump is safer.\n>\n\nSafer, but not nearly as useful. Maybe see what the OP (Wu Haotian) can\ncome up with as a first implementation?\n\nCheers,\nGreg",
"msg_date": "Tue, 10 Aug 2021 10:57:19 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 10:57 PM Greg Sabino Mullane <htamfids@gmail.com> wrote:\n>\n> On Fri, Jul 16, 2021 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> That would require pg_restore to try to edit the DROP commands during\n>> restore, which sounds horribly fragile. I'm inclined to think that\n>> supporting this option only during initial dump is safer.\n>\n>\n> Safer, but not nearly as useful. Maybe see what the OP (Wu Haotian) can come up with as a first implementation?\n>\n> Cheers,\n> Greg\n>\n\npg_restore already tries to edit the DROP commands during restore in\norder to support `--if-exists`.\n\n> supporting this option only during initial dump is safer.\n\npg_dump & pg_restore use the same function to inject `IF EXISTS`\n(and `DROP .. CASCADE` in this patch).\nSupporting this option only during pg_dump may not make it safer, as\nthe logic is the same.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 11:15:00 +0800",
"msg_from": "Wu Haotian <whtsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "Hi,\n\nI've updated the patch to remove unnecessary changes and added tests.\n\nOn Fri, Jul 16, 2021 at 9:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> pg_dump support plain, custom, tar and directory format, I think,\n> cascade option will be added by pg_dump only for plain format and for\n> the other format pg_restore will include the cascade option. Should we\n> document this somewhere?\n\nYes, cascade option relies on `--clean` which only works for plain\nformat in pg_dump.\nMaybe we can add checks like \"option --clean requires plain text format\"?\nIf so, should I start a new mail thread for this?",
"msg_date": "Thu, 12 Aug 2021 10:53:36 +0800",
"msg_from": "Wu Haotian <whtsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:53 PM Wu Haotian <whtsky@gmail.com> wrote:\n\n> Maybe we can add checks like \"option --clean requires plain text format\"?\n> If so, should I start a new mail thread for this?\n>\n\nShrug. To me, that seems related enough it could go into the existing\npatch/thread.\n\nCheers,\nGreg",
"msg_date": "Thu, 12 Aug 2021 11:45:53 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "There are already documents for \"--clean only works with plain text output\",\nso adding checks for --clean seems like a breaking change to me.\n\nI've updated the docs to indicate --drop-cascade and --if-exists only\nworks with plain text output.",
"msg_date": "Mon, 16 Aug 2021 14:35:06 +0800",
"msg_from": "Wu Haotian <whtsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "> On 16 Aug 2021, at 08:35, Wu Haotian <whtsky@gmail.com> wrote:\n> \n> There are already documents for \"--clean only works with plain text output\",\n> so adding checks for --clean seems like a breaking change to me.\n> \n> I've updated the docs to indicate --drop-cascade and --if-exists only\n> works with plain text output.\n\nThis patch fails to apply after recent changes to the pg_dump TAP tests.\nPlease submit a rebased version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 1 Sep 2021 11:05:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "Hi,\nhere's the rebased patch.",
"msg_date": "Wed, 8 Sep 2021 22:41:28 +0800",
"msg_from": "Wu Haotian <whtsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "Wu Haotian <whtsky@gmail.com> writes:\n> here's the rebased patch.\n\nLooks like it needs rebasing again, probably as a result of our recent\nrenaming of our Perl test modules.\n\nFWIW, I'd strongly recommend that it's time to pull all that SQL code\nhacking out of RestoreArchive and put it in its own function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Nov 2021 15:03:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
},
{
"msg_contents": "> On 3 Nov 2021, at 20:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Wu Haotian <whtsky@gmail.com> writes:\n>> here's the rebased patch.\n> \n> Looks like it needs rebasing again, probably as a result of our recent\n> renaming of our Perl test modules.\n\nAs this patch hasn't been updated, I'm marking this entry Returned with\nFeedback. Please feel free to open a new entry when a rebased patch is\navailable.\n\n> FWIW, I'd strongly recommend that it's time to pull all that SQL code\n> hacking out of RestoreArchive and put it in its own function.\n\n+1\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 1 Dec 2021 11:45:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add option --drop-cascade for pg_dump/restore"
}
] |
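For readers skimming the thread above, the behaviour being debated is easiest to see in the SQL that a plain-format dump emits. The sketch below is illustrative only (the object names are invented, and the `--drop-cascade` output is hypothetical, since the patch was returned with feedback); it contrasts today's `--clean --if-exists` output with what the proposed option would add:

```sql
-- Today: pg_dump -Fp --clean --if-exists emits plain drops,
-- in reverse dependency order (dependents first):
DROP VIEW IF EXISTS public.v_orders;
DROP TABLE IF EXISTS public.orders;

-- With the proposed --drop-cascade, each drop would also take out
-- dependent objects, making the restore less sensitive to drop order:
DROP VIEW IF EXISTS public.v_orders CASCADE;
DROP TABLE IF EXISTS public.orders CASCADE;
```

As the thread notes, pg_restore already injects `IF EXISTS` by editing the archived DROP commands at restore time, which is why the same code path could in principle inject `CASCADE` for non-plain formats as well.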
[
{
"msg_contents": "Hi everyone,\n\nWhen testing brin bloom indexes I noted that we need to reduce the\nPAGES_PER_RANGE parameter of the index to allow more columns on it.\n\nSadly, this could be a problem if you create the index before the table\ngrows, once it reaches some number of rows (i see the error as early as\n1000 rows) it starts error out.\n\n\tcreate table t1(i int, j int);\n\t\n\t-- uses default PAGES_PER_RANGE=128\n\tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n\t\n\tinsert into t1 \n\t\tselect random()*1000, random()*1000 from generate_series(1, 1000);\n\tERROR: index row size 8968 exceeds maximum 8152 for index \"t1_i_j_idx\"\n\nif instead you create the index with a minor PAGES_PER_RANGE it goes\nfine, in this case it works once you reduce it to at least 116\n\n\tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) \n\t\twith (pages_per_range=116);\n\n\nso, for having:\ntwo int columns - PAGES_PER_RANGE should be max 116\nthree int columns - PAGES_PER_RANGE should be max 77\none int and one timestamp - PAGES_PER_RANGE should be max 121 \n\nand so on\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 8 Apr 2021 02:08:52 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "maximum columns for brin bloom indexes"
},
{
"msg_contents": "On 4/8/21 9:08 AM, Jaime Casanova wrote:\n> Hi everyone,\n> \n> When testing brin bloom indexes I noted that we need to reduce the\n> PAGES_PER_RANGE parameter of the index to allow more columns on it.\n> \n> Sadly, this could be a problem if you create the index before the table\n> grows, once it reaches some number of rows (i see the error as early as\n> 1000 rows) it starts error out.\n> \n> \tcreate table t1(i int, j int);\n> \t\n> \t-- uses default PAGES_PER_RANGE=128\n> \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n> \t\n> \tinsert into t1 \n> \t\tselect random()*1000, random()*1000 from generate_series(1, 1000);\n> \tERROR: index row size 8968 exceeds maximum 8152 for index \"t1_i_j_idx\"\n> \n> if instead you create the index with a minor PAGES_PER_RANGE it goes\n> fine, in this case it works once you reduce it to at least 116\n> \n> \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) \n> \t\twith (pages_per_range=116);\n> \n> \n> so, for having:\n> two int columns - PAGES_PER_RANGE should be max 116\n> three int columns - PAGES_PER_RANGE should be max 77\n> one int and one timestamp - PAGES_PER_RANGE should be max 121 \n> \n> and so on\n> \n\nNo, because this very much depends on the number if distinct values in\nthe page page range, which determines how well the bloom filter\ncompresses. You used 1000, but that's just an arbitrary value and the\nactual data might have any other value. And it's unlikely that all three\ncolumns will have the same number of distinct values.\n\nOf course, this also depends on the false positive rate.\n\nFWIW I doubt people are using multi-column BRIN indexes very often.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:18:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: maximum columns for brin bloom indexes"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 12:18:36PM +0200, Tomas Vondra wrote:\n> On 4/8/21 9:08 AM, Jaime Casanova wrote:\n> > Hi everyone,\n> > \n> > When testing brin bloom indexes I noted that we need to reduce the\n> > PAGES_PER_RANGE parameter of the index to allow more columns on it.\n> > \n> > Sadly, this could be a problem if you create the index before the table\n> > grows, once it reaches some number of rows (i see the error as early as\n> > 1000 rows) it starts error out.\n> > \n> > \tcreate table t1(i int, j int);\n> > \t\n> > \t-- uses default PAGES_PER_RANGE=128\n> > \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n> > \t\n> > \tinsert into t1 \n> > \t\tselect random()*1000, random()*1000 from generate_series(1, 1000);\n> > \tERROR: index row size 8968 exceeds maximum 8152 for index \"t1_i_j_idx\"\n> > \n> > if instead you create the index with a minor PAGES_PER_RANGE it goes\n> > fine, in this case it works once you reduce it to at least 116\n> > \n> > \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) \n> > \t\twith (pages_per_range=116);\n> > \n> > \n> > so, for having:\n> > two int columns - PAGES_PER_RANGE should be max 116\n> > three int columns - PAGES_PER_RANGE should be max 77\n> > one int and one timestamp - PAGES_PER_RANGE should be max 121 \n> > \n> > and so on\n> > \n> \n> No, because this very much depends on the number if distinct values in\n> the page page range, which determines how well the bloom filter\n> compresses. You used 1000, but that's just an arbitrary value and the\n> actual data might have any other value. And it's unlikely that all three\n> columns will have the same number of distinct values.\n>\n\nOk, that makes sense. 
Still I see a few odd things: \n\n\t\"\"\"\n\tdrop table if exists t1;\n\tcreate table t1(i int, j int);\n\tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n\n\t-- This one will succeed, I guess because it has less different\n\t-- values\n\tinsert into t1\n\tselect random()*20, random()*100 from generate_series(1, 1000);\n\n\t-- succeed\n\tinsert into t1\n\tselect random()*20, random()*100 from generate_series(1, 100000);\n\n\t-- succeed\n\tinsert into t1\n\tselect random()*200, random()*1000 from generate_series(1, 1000);\n\n\t-- succeed\n\tinsert into t1\n\tselect random()*200, random()*1000 from generate_series(1, 1000);\n\n\t-- succeed? This is the case it has been causing problems before\n\tinsert into t1\n\tselect random()*1000, random()*1000 from generate_series(1, 1000);\n\t\"\"\"\n\nMaybe this makes sense, but it looks random to me. If it makes sense\nthis is something we should document better. \n\nLet's try another combination:\n\n\t\"\"\"\n\tdrop table if exists t1;\n\tcreate table t1(i int, j int);\n\tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n\n\t-- this fails again\n\tinsert into t1\n\tselect random()*1000, random()*1000 from generate_series(1, 1000);\n\n\t-- and this starts to fail now, but this worked before\n\tinsert into t1\n\tselect random()*20, random()*100 from generate_series(1, 1000);\n\t\"\"\"\n\n> Of course, this also depends on the false positive rate.\n> \n\nHow the false positive rate work?\n\n> FWIW I doubt people are using multi-column BRIN indexes very often.\n> \n\ntrue. \n\nAnother question, should we allow to create a brin multi column index\nthat uses different opclasses?\n\nCREATE INDEX ON t1 USING brin (i int4_bloom_ops, j int4_minmax_ops);\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 8 Apr 2021 09:49:18 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: maximum columns for brin bloom indexes"
},
{
"msg_contents": "\n\nOn 4/8/21 4:49 PM, Jaime Casanova wrote:\n> On Thu, Apr 08, 2021 at 12:18:36PM +0200, Tomas Vondra wrote:\n>> On 4/8/21 9:08 AM, Jaime Casanova wrote:\n>>> Hi everyone,\n>>>\n>>> When testing brin bloom indexes I noted that we need to reduce the\n>>> PAGES_PER_RANGE parameter of the index to allow more columns on it.\n>>>\n>>> Sadly, this could be a problem if you create the index before the table\n>>> grows, once it reaches some number of rows (i see the error as early as\n>>> 1000 rows) it starts error out.\n>>>\n>>> \tcreate table t1(i int, j int);\n>>> \t\n>>> \t-- uses default PAGES_PER_RANGE=128\n>>> \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n>>> \t\n>>> \tinsert into t1 \n>>> \t\tselect random()*1000, random()*1000 from generate_series(1, 1000);\n>>> \tERROR: index row size 8968 exceeds maximum 8152 for index \"t1_i_j_idx\"\n>>>\n>>> if instead you create the index with a minor PAGES_PER_RANGE it goes\n>>> fine, in this case it works once you reduce it to at least 116\n>>>\n>>> \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) \n>>> \t\twith (pages_per_range=116);\n>>>\n>>>\n>>> so, for having:\n>>> two int columns - PAGES_PER_RANGE should be max 116\n>>> three int columns - PAGES_PER_RANGE should be max 77\n>>> one int and one timestamp - PAGES_PER_RANGE should be max 121 \n>>>\n>>> and so on\n>>>\n>>\n>> No, because this very much depends on the number if distinct values in\n>> the page page range, which determines how well the bloom filter\n>> compresses. You used 1000, but that's just an arbitrary value and the\n>> actual data might have any other value. And it's unlikely that all three\n>> columns will have the same number of distinct values.\n>>\n> \n> Ok, that makes sense. 
Still I see a few odd things: \n> \n> \t\"\"\"\n> \tdrop table if exists t1;\n> \tcreate table t1(i int, j int);\n> \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n> \n> \t-- This one will succeed, I guess because it has less different\n> \t-- values\n> \tinsert into t1\n> \tselect random()*20, random()*100 from generate_series(1, 1000);\n> \n> \t-- succeed\n> \tinsert into t1\n> \tselect random()*20, random()*100 from generate_series(1, 100000);\n> \n> \t-- succeed\n> \tinsert into t1\n> \tselect random()*200, random()*1000 from generate_series(1, 1000);\n> \n> \t-- succeed\n> \tinsert into t1\n> \tselect random()*200, random()*1000 from generate_series(1, 1000);\n> \n> \t-- succeed? This is the case it has been causing problems before\n> \tinsert into t1\n> \tselect random()*1000, random()*1000 from generate_series(1, 1000);\n> \t\"\"\"\n> \n> Maybe this makes sense, but it looks random to me. If it makes sense\n> this is something we should document better. \n> \n\nPresumably it's about where exactly are the new rows added, and when we\nsummarize the page range.\n\n> Let's try another combination:\n> \n> \t\"\"\"\n> \tdrop table if exists t1;\n> \tcreate table t1(i int, j int);\n> \tcreate index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;\n> \n> \t-- this fails again\n> \tinsert into t1\n> \tselect random()*1000, random()*1000 from generate_series(1, 1000);\n> \n> \t-- and this starts to fail now, but this worked before\n> \tinsert into t1\n> \tselect random()*20, random()*100 from generate_series(1, 1000);\n> \t\"\"\"\n> \n>> Of course, this also depends on the false positive rate.\n>>\n> \n> How the false positive rate work?\n> \n\nThe lower the false positive rate, the more \"accurate\" the index is,\nbecause fewer page ranges not containing the value will be added to the\nbitmap. The bloom filter however has to be larger.\n\n>> FWIW I doubt people are using multi-column BRIN indexes very often.\n>>\n> \n> true. 
\n> \n> Another question, should we allow to create a brin multi column index\n> that uses different opclasses?\n> \n> CREATE INDEX ON t1 USING brin (i int4_bloom_ops, j int4_minmax_ops);\n> \n\nWhy not? Without that you couldn't create index on (int, bigint) because\nthose are in principle different opclasses too. I don't see what would\nthis restriction give us.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:57:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: maximum columns for brin bloom indexes"
}
] |
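A practical takeaway from the exchange above: the index-row-size failure depends on how large each per-range bloom filter grows, and the bloom opclasses let you bound that directly through opclass parameters (`n_distinct_per_range` and `false_positive_rate`, as described in the PostgreSQL 14 documentation). A sketch, with the parameter values chosen arbitrarily for illustration:

```sql
create table t1 (i int, j int);

-- Workaround from the thread: shrink the range so each bloom filter
-- covers fewer distinct values:
create index t1_brin_small on t1
    using brin (i int4_bloom_ops, j int4_bloom_ops)
    with (pages_per_range = 116);

-- Alternative: keep the default range but size the filters explicitly,
-- trading lookup accuracy for index-row space; mixing opclasses per
-- column is allowed, as discussed above:
create index t1_brin_tuned on t1
    using brin (i int4_bloom_ops(n_distinct_per_range = 500,
                                 false_positive_rate = 0.05),
                j int4_minmax_ops);
```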
[
{
"msg_contents": "Looking at https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-04-08%2009%3A43%3A13\nwhich broke with the patch to add pg_wait_backend_termination().\n\nAFAICT the change is that the order of rows coming back from \"SELECT\nroutine_name, sequence_name FROM\ninformation_schema.routine_sequence_usage\" has changed. This test was\nadded in f40c6969d0e (\"Routine usage information schema tables\"),\n\nIt does not change consistently, as it works fine on my machine and\nhas also passed on other buildfarm animals (including other archs and\ncompilers).\n\nMy guess is that maybe the query plan is different, ending up with a\ndifferent order, since there is no explicit ORDER BY in the query.\n\nIs there a particular thing we want to check on it that requires it to\nrun without ORDER BY, or should we add one to solve the problem? Or,\nof course, am I completely misunderstanding it? :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:04:23 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Order dependency in function test"
},
{
"msg_contents": "On 08.04.21 12:04, Magnus Hagander wrote:\n> Looking at https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-04-08%2009%3A43%3A13\n> which broke with the patch to add pg_wait_backend_termination().\n> \n> AFAICT the change is that the order of rows coming back from \"SELECT\n> routine_name, sequence_name FROM\n> information_schema.routine_sequence_usage\" has changed. This test was\n> added in f40c6969d0e (\"Routine usage information schema tables\"),\n> \n> It does not change consistently, as it works fine on my machine and\n> has also passed on other buildfarm animals (including other archs and\n> compilers).\n> \n> My guess is that maybe the query plan is different, ending up with a\n> different order, since there is no explicit ORDER BY in the query.\n> \n> Is there a particular thing we want to check on it that requires it to\n> run without ORDER BY, or should we add one to solve the problem? Or,\n> of course, am I completely misunderstanding it? :)\n\nI added some ORDER BY clauses to fix this.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:22:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Order dependency in function test"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 12:22 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 08.04.21 12:04, Magnus Hagander wrote:\n> > Looking at https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-04-08%2009%3A43%3A13\n> > which broke with the patch to add pg_wait_backend_termination().\n> >\n> > AFAICT the change is that the order of rows coming back from \"SELECT\n> > routine_name, sequence_name FROM\n> > information_schema.routine_sequence_usage\" has changed. This test was\n> > added in f40c6969d0e (\"Routine usage information schema tables\"),\n> >\n> > It does not change consistently, as it works fine on my machine and\n> > has also passed on other buildfarm animals (including other archs and\n> > compilers).\n> >\n> > My guess is that maybe the query plan is different, ending up with a\n> > different order, since there is no explicit ORDER BY in the query.\n> >\n> > Is there a particular thing we want to check on it that requires it to\n> > run without ORDER BY, or should we add one to solve the problem? Or,\n> > of course, am I completely misunderstanding it? :)\n>\n> I added some ORDER BY clauses to fix this.\n\nThanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:22:22 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Order dependency in function test"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 3:34 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> Looking at https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-04-08%2009%3A43%3A13\n> which broke with the patch to add pg_wait_backend_termination().\n>\n> AFAICT the change is that the order of rows coming back from \"SELECT\n> routine_name, sequence_name FROM\n> information_schema.routine_sequence_usage\" has changed. This test was\n> added in f40c6969d0e (\"Routine usage information schema tables\"),\n>\n> It does not change consistently, as it works fine on my machine and\n> has also passed on other buildfarm animals (including other archs and\n> compilers).\n>\n> My guess is that maybe the query plan is different, ending up with a\n> different order, since there is no explicit ORDER BY in the query.\n>\n> Is there a particular thing we want to check on it that requires it to\n> run without ORDER BY, or should we add one to solve the problem? Or,\n> of course, am I completely misunderstanding it? :)\n\nThe buildfarm failure is due to lack of ORDER BY clause. Upon\nsearching in that file, I found below statements are returning more\nthan one row but doesn't have ORDER BY clause which can make output\nquite unstable.\n\nSELECT routine_name, sequence_name FROM\ninformation_schema.routine_sequence_usage;\nSELECT routine_name, table_name, column_name FROM\ninformation_schema.routine_column_usage;\nSELECT routine_name, table_name FROM information_schema.routine_table_usage;\nSELECT * FROM functest_sri1();\nSELECT * FROM functest_sri2();\nTABLE sometable;\n\nI added a ORDER BY 1 clause for each of the above statements and\nreplaced TABLE sometable; with SELECT * FROM sometable ORDER BY 1;\n\nHere's the patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 8 Apr 2021 15:53:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Order dependency in function test"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 3:53 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 3:34 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > Looking at https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-04-08%2009%3A43%3A13\n> > which broke with the patch to add pg_wait_backend_termination().\n> >\n> > AFAICT the change is that the order of rows coming back from \"SELECT\n> > routine_name, sequence_name FROM\n> > information_schema.routine_sequence_usage\" has changed. This test was\n> > added in f40c6969d0e (\"Routine usage information schema tables\"),\n> >\n> > It does not change consistently, as it works fine on my machine and\n> > has also passed on other buildfarm animals (including other archs and\n> > compilers).\n> >\n> > My guess is that maybe the query plan is different, ending up with a\n> > different order, since there is no explicit ORDER BY in the query.\n> >\n> > Is there a particular thing we want to check on it that requires it to\n> > run without ORDER BY, or should we add one to solve the problem? Or,\n> > of course, am I completely misunderstanding it? :)\n>\n> The buildfarm failure is due to lack of ORDER BY clause. Upon\n> searching in that file, I found below statements are returning more\n> than one row but doesn't have ORDER BY clause which can make output\n> quite unstable.\n>\n> SELECT routine_name, sequence_name FROM\n> information_schema.routine_sequence_usage;\n> SELECT routine_name, table_name, column_name FROM\n> information_schema.routine_column_usage;\n> SELECT routine_name, table_name FROM information_schema.routine_table_usage;\n> SELECT * FROM functest_sri1();\n> SELECT * FROM functest_sri2();\n> TABLE sometable;\n>\n> I added a ORDER BY 1 clause for each of the above statements and\n> replaced TABLE sometable; with SELECT * FROM sometable ORDER BY 1;\n>\n> Here's the patch.\n\nI realized that the ORDER BY is added. 
Isn't it good if we add ORDER\nBY for SELECT * FROM functest_sri2();, SELECT * FROM functest_sri1();\nand replace TABLE sometable; with SELECT * FROM sometable ORDER BY 1;\n? Otherwise they might become unstable at some other time?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 15:56:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Order dependency in function test"
}
] |
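The fix applied in the thread above is mechanical: any regression-test query that can return more than one row gets explicit sort keys, so the expected output no longer depends on the plan the optimizer happens to choose. For example:

```sql
-- Unstable: row order can change with the query plan
SELECT routine_name, sequence_name
  FROM information_schema.routine_sequence_usage;

-- Stable: the expected output is now plan-independent
SELECT routine_name, sequence_name
  FROM information_schema.routine_sequence_usage
  ORDER BY 1, 2;
```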
[
{
"msg_contents": "Hi,\n\nWith the recent commit aaf0432572 which introduced a waiting/timeout\ncapability for pg_teriminate_backend function, I would like to do\n$subject. Attaching a patch, please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 8 Apr 2021 16:55:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 04:55:22PM +0530, Bharath Rupireddy wrote:\n> With the recent commit aaf0432572 which introduced a waiting/timeout\n> capability for pg_teriminate_backend function, I would like to do\n> $subject. Attaching a patch, please have a look.\n\n+-- Terminate the remote backend having the specified application_name and wait\n+-- for the termination to complete. 10 seconds timeout here is chosen randomly,\n+-- we will see a warning if the process doesn't go away within that time.\n+SELECT pg_terminate_backend(pid, 10000) FROM pg_stat_activity\n+ WHERE application_name = 'fdw_retry_check';\n\nI think that you are making the tests less stable by doing that. A\ncouple of buildfarm machines are very slow, and 10 seconds would not\nbe enough. So it seems to me that this patch is trading what is a\nstable solution for a solution that may finish by randomly bite.\n--\nMichael",
"msg_date": "Thu, 8 Apr 2021 20:58:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 5:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 08, 2021 at 04:55:22PM +0530, Bharath Rupireddy wrote:\n> > With the recent commit aaf0432572 which introduced a waiting/timeout\n> > capability for pg_teriminate_backend function, I would like to do\n> > $subject. Attaching a patch, please have a look.\n>\n> +-- Terminate the remote backend having the specified application_name and wait\n> +-- for the termination to complete. 10 seconds timeout here is chosen randomly,\n> +-- we will see a warning if the process doesn't go away within that time.\n> +SELECT pg_terminate_backend(pid, 10000) FROM pg_stat_activity\n> + WHERE application_name = 'fdw_retry_check';\n>\n> I think that you are making the tests less stable by doing that. A\n> couple of buildfarm machines are very slow, and 10 seconds would not\n> be enough. So it seems to me that this patch is trading what is a\n> stable solution for a solution that may finish by randomly bite.\n\nAgree. Please see the attached patch, I removed a fixed waiting time.\nInstead of relying on pg_stat_activity, pg_sleep and\npg_stat_clear_snapshot, now it depends on pg_terminate_backend and\npg_wait_for_backend_termination. This way we could reduce the\nfunctions that the procedure terminate_backend_and_wait uses and also\nthe new functions pg_terminate_backend and\npg_wait_for_backend_termination gets a test coverage.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 8 Apr 2021 18:27:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 06:27:56PM +0530, Bharath Rupireddy wrote:\n> Agree. Please see the attached patch, I removed a fixed waiting time.\n> Instead of relying on pg_stat_activity, pg_sleep and\n> pg_stat_clear_snapshot, now it depends on pg_terminate_backend and\n> pg_wait_for_backend_termination. This way we could reduce the\n> functions that the procedure terminate_backend_and_wait uses and also\n> the new functions pg_terminate_backend and\n> pg_wait_for_backend_termination gets a test coverage.\n\n+ EXIT WHEN is_terminated;\n+ SELECT * INTO is_terminated FROM pg_wait_for_backend_termination(pid_v, 1000);\nThis is still a regression if the termination takes more than 1s,\nno? In such a case terminate_backend_and_wait() would issue a WARNING\nand pollute the regression test output. I can see the point of what\nyou are achieving here, and that's a good idea, but from the point of\nview of the buildfarm the WARNING introduced by aaf0432 is a no-go. I\nhonestly don't quite get the benefit in issuing a WARNING in this case\nanyway, as the code already returns false on timeout so as caller\nwould know the status of the operation.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 09:21:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 5:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 08, 2021 at 06:27:56PM +0530, Bharath Rupireddy wrote:\n> > Agree. Please see the attached patch, I removed a fixed waiting time.\n> > Instead of relying on pg_stat_activity, pg_sleep and\n> > pg_stat_clear_snapshot, now it depends on pg_terminate_backend and\n> > pg_wait_for_backend_termination. This way we could reduce the\n> > functions that the procedure terminate_backend_and_wait uses and also\n> > the new functions pg_terminate_backend and\n> > pg_wait_for_backend_termination gets a test coverage.\n>\n> + EXIT WHEN is_terminated;\n> + SELECT * INTO is_terminated FROM pg_wait_for_backend_termination(pid_v, 1000);\n> This is still a regression if the termination takes more than 1s,\n> no? In such a case terminate_backend_and_wait() would issue a WARNING\n> and pollute the regression test output. I can see the point of what\n> you are achieving here, and that's a good idea, but from the point of\n> view of the buildfarm the WARNING introduced by aaf0432 is a no-go.\n\nI didn't think of the warning cases, my bad. How about using SET\nclient_min_messages = 'ERROR'; before we call\npg_wait_for_backend_termination? We can only depend on the return\nvalue of pg_wait_for_backend_termination, when true we can exit. This\nway the buildfarm will not see warnings. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 06:53:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 06:53:21AM +0530, Bharath Rupireddy wrote:\n> I didn't think of the warning cases, my bad. How about using SET\n> client_min_messages = 'ERROR'; before we call\n> pg_wait_for_backend_termination? We can only depend on the return\n> value of pg_wait_for_backend_termination, when true we can exit. This\n> way the buildfarm will not see warnings. Thoughts?\n\nYou could do that, but I would also bet that this is going to get\nforgotten in the future if this gets extended in more SQL tests that\nare output-sensitive, in or out of core. Honestly, I can get behind a\nwarning in pg_wait_for_backend_termination() to inform that the\nprocess poked at is not a PostgreSQL one, because it offers new and\nuseful information to the user. But, and my apologies for sounding a\nbit noisy, I really don't get why pg_wait_until_termination() has any\nneed to do that. From what I can see, it provides the following\ninformation:\n- A PID, that we already know from the caller or just from\npg_stat_activity.\n- A timeout, already known as well.\n- The fact that the process did not terminate, information given by\nthe \"false\" status, only used in this case.\n\nSo there is no new information here to the user, only a duplicate of\nwhat's already known to the caller of this function. I see more\nadvantages in removing this WARNING rather than keeping it.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 10:59:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "At Fri, 9 Apr 2021 10:59:44 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Apr 09, 2021 at 06:53:21AM +0530, Bharath Rupireddy wrote:\n> > I didn't think of the warning cases, my bad. How about using SET\n> > client_min_messages = 'ERROR'; before we call\n> > pg_wait_for_backend_termination? We can only depend on the return\n> > value of pg_wait_for_backend_termination, when true we can exit. This\n> > way the buildfarm will not see warnings. Thoughts?\n> \n> You could do that, but I would also bet that this is going to get\n> forgotten in the future if this gets extended in more SQL tests that\n> are output-sensitive, in or out of core. Honestly, I can get behind a\n> warning in pg_wait_for_backend_termination() to inform that the\n> process poked at is not a PostgreSQL one, because it offers new and\n> useful information to the user. But, and my apologies for sounding a\n> bit noisy, I really don't get why pg_wait_until_termination() has any\n> need to do that. From what I can see, it provides the following\n> information:\n> - A PID, that we already know from the caller or just from\n> pg_stat_activity.\n> - A timeout, already known as well.\n> - The fact that the process did not terminate, information given by\n> the \"false\" status, only used in this case.\n> \n> So there is no new information here to the user, only a duplicate of\n> what's already known to the caller of this function. I see more\n> advantages in removing this WARNING rather than keeping it.\n\nFWIW I agree to Michael. I faintly remember that I thought the same\nwhile reviewing but it seems that I forgot to write a comment like\nthat. It's a work of the caller, concretely the existing callers and\nany possible script that calls the function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Apr 2021 11:31:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 7:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 09, 2021 at 06:53:21AM +0530, Bharath Rupireddy wrote:\n> > I didn't think of the warning cases, my bad. How about using SET\n> > client_min_messages = 'ERROR'; before we call\n> > pg_wait_for_backend_termination? We can only depend on the return\n> > value of pg_wait_for_backend_termination, when true we can exit. This\n> > way the buildfarm will not see warnings. Thoughts?\n>\n> You could do that, but I would also bet that this is going to get\n> forgotten in the future if this gets extended in more SQL tests that\n> are output-sensitive, in or out of core. Honestly, I can get behind a\n> warning in pg_wait_for_backend_termination() to inform that the\n> process poked at is not a PostgreSQL one, because it offers new and\n> useful information to the user. But, and my apologies for sounding a\n> bit noisy, I really don't get why pg_wait_until_termination() has any\n> need to do that. From what I can see, it provides the following\n> information:\n> - A PID, that we already know from the caller or just from\n> pg_stat_activity.\n> - A timeout, already known as well.\n> - The fact that the process did not terminate, information given by\n> the \"false\" status, only used in this case.\n>\n> So there is no new information here to the user, only a duplicate of\n> what's already known to the caller of this function. I see more\n> advantages in removing this WARNING rather than keeping it.\n\nIMO it does make sense to provide a warning for a bool-returning\nfunction if there are multiple situations in which the function\nreturns false. This gives clear information as to why false is\nreturned.\n\npg_terminate_backend: false is returned 1) when the process with the given\npid is not a backend (warning \"PID %d is not a PostgreSQL server\nprocess\") 2) if the kill() fails (warning \"could not send signal to\nprocess %d: %m\") 3) if the timeout is specified and the backend is not\nterminated within it (warning \"backend with PID %d did not terminate\nwithin %lld milliseconds\").\npg_cancel_backend: false is returned 1) when the process with the\ngiven pid is not a backend 2) if the kill() fails.\npg_wait_for_backend_termination: false is returned 1) when the process\nwith the given pid is not a backend 2) when the backend is not terminated\nwithin the timeout.\n\nIf we ensure that all the above functions return false in only one\nsituation and error out in all other situations, then removing the warnings\nmakes sense.\n\nHaving said that, there seems to be a reason for issuing a warning\nand returning false instead of an error, which is that callers can just\ncall these functions in a loop until they return true. See the below\ncomments:\n /*\n * This is just a warning so a loop-through-resultset will not abort\n * if one backend terminated on its own during the run.\n */\n /* Again, just a warning to allow loops */\n\nI would like to keep the behaviour of these functions as-is.\n\n> You could do that, but I would also bet that this is going to get\n> forgotten in the future if this gets extended in more SQL tests that\n> are output-sensitive, in or out of core\n\nAs for the concern that hackers wanting to use these functions in more\nSQL tests might forget that pg_terminate_backend,\npg_cancel_backend and pg_wait_for_backend_termination issue a warning in\nsome cases, which could pollute the tests if used without suppressing\nthose warnings, I feel that is best left to the patch implementers and\nthe reviewers. On our part, we documented that the functions\npg_terminate_backend and pg_wait_for_backend_termination emit a\nwarning on timeout: \"On timeout a warning is emitted and\n<literal>false</literal> is returned\".\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 10:24:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
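The "warning instead of error, to allow loops" convention discussed in the message above can be illustrated with a small, purely hypothetical Python simulation. Everything here (`live_backends`, the PIDs, the `terminate_backend` helper) is invented for the sketch; the real functions act on actual server backend PIDs:

```python
import warnings

# Hypothetical stand-in for the server's process table; the real
# pg_terminate_backend() acts on actual backend PIDs.
live_backends = {101, 102}

def terminate_backend(pid):
    # Mimic the convention discussed above: a missing backend yields a
    # warning and False rather than an error, so a caller looping over a
    # (possibly stale) result set is not aborted mid-run.
    if pid not in live_backends:
        warnings.warn(f"PID {pid} is not a PostgreSQL server process")
        return False
    live_backends.discard(pid)
    return True

# A stale snapshot may still list a backend (103) that already exited on
# its own; the loop completes anyway because the miss is only a warning.
snapshot = [101, 102, 103]
results = [terminate_backend(pid) for pid in snapshot]
```

Turning the warning into an error would abort the whole loop at the first backend that went away on its own, which is exactly what the code comments quoted above are guarding against.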
{
"msg_contents": "On Thu, Apr 8, 2021 at 6:27 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 5:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Apr 08, 2021 at 04:55:22PM +0530, Bharath Rupireddy wrote:\n> > > With the recent commit aaf0432572 which introduced a waiting/timeout\n> > > capability for pg_terminate_backend function, I would like to do\n> > > $subject. Attaching a patch, please have a look.\n> >\n> > +-- Terminate the remote backend having the specified application_name and wait\n> > +-- for the termination to complete. 10 seconds timeout here is chosen randomly,\n> > +-- we will see a warning if the process doesn't go away within that time.\n> > +SELECT pg_terminate_backend(pid, 10000) FROM pg_stat_activity\n> > + WHERE application_name = 'fdw_retry_check';\n> >\n> > I think that you are making the tests less stable by doing that. A\n> > couple of buildfarm machines are very slow, and 10 seconds would not\n> > be enough. So it seems to me that this patch is trading what is a\n> > stable solution for a solution that may finish by randomly bite.\n>\n> Agree. Please see the attached patch, I removed a fixed waiting time.\n> Instead of relying on pg_stat_activity, pg_sleep and\n> pg_stat_clear_snapshot, now it depends on pg_terminate_backend and\n> pg_wait_for_backend_termination. This way we could reduce the\n> functions that the procedure terminate_backend_and_wait uses and also\n> the new functions pg_terminate_backend and\n> pg_wait_for_backend_termination gets a test coverage.\n>\n> Thoughts?\n\nI realized that the usage below in the v2 patch is completely wrong,\nbecause pg_terminate_backend without a timeout will return true and the\nloop exits without calling pg_wait_for_backend_termination. Sorry for\nnot realizing this earlier.\n SELECT * INTO is_terminated FROM pg_terminate_backend(pid_v);\n LOOP\n EXIT WHEN is_terminated;\n SELECT * INTO is_terminated FROM\npg_wait_for_backend_termination(pid_v, 1000);\n END LOOP;\n\nI feel that we can provide a high timeout value (It can be 1hr on the\nsimilar lines of using pg_sleep(3600) for crash tests in\n013_crash_restart.pl with the assumption that the backend running that\ncommand will get killed with SIGQUIT) and make pg_terminate_backend\nwait. This unreasonably high timeout looks okay because of the\nassumption that the servers in the build farm will not take that much\ntime to remove the backend from the system processes, so the function\nwill return much earlier than that. If at all there's a server (which\nis impractical IMO) that doesn't remove the backend process even\nwithin 1hr, then that will anyway fail with the warning.\n\nWith the attached patch, we could remove the procedure\nterminate_backend_and_wait altogether. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Apr 2021 16:53:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
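The terminate-then-wait pattern being discussed (signal first, then poll for the PID to disappear under a generous timeout) can be sketched in Python. This mirrors only the shape of the server-side wait loop; the helper name and the 100ms poll interval are assumptions made for illustration, not the server's actual C implementation:

```python
import os
import signal
import subprocess
import time

def wait_until_termination(pid, timeout_ms, poll_ms=100):
    # Poll for process existence until it disappears or the deadline passes.
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        try:
            os.kill(pid, 0)      # signal 0: existence check only, nothing delivered
        except ProcessLookupError:
            return True          # process is gone
        if time.monotonic() >= deadline:
            return False         # timed out; the caller decides how to report it
        time.sleep(poll_ms / 1000.0)

proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGTERM)  # ask the process to terminate...
proc.wait()                       # reap it: a child still "exists" as a zombie until waited on
ok = wait_until_termination(proc.pid, 5 * 60 * 1000)  # generous timeout, returns quickly
```

With a generous timeout, the call returns as soon as the PID vanishes; only a genuinely stuck machine would ever reach the timeout and the warning path.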
{
"msg_contents": "On Fri, Apr 09, 2021 at 04:53:01PM +0530, Bharath Rupireddy wrote:\n> I feel that we can provide a high timeout value (It can be 1hr on the\n> similar lines of using pg_sleep(3600) for crash tests in\n> 013_crash_restart.pl with the assumption that the backend running that\n> command will get killed with SIGQUIT) and make pg_terminate_backend\n> wait. This unreasonably high timeout looks okay because of the\n> assumption that the servers in the build farm will not take that much\n> time to remove the backend from the system processes, so the function\n> will return much earlier than that. If at all there's a server (which\n> is impractical IMO) that doesn't remove the backend process even\n> within 1hr, then that will anyway fail with the warning.\n\nYou may not need a value as large as 1h for that :)\n \nLooking at the TAP tests, some areas have been living with timeouts of\nup to 180s. It is a matter of balance here, a timeout too long would\nbe annoying as it would make the detection of a problem longer for\nmachines that are stuck, and a too short value generates false\npositives. 5 minutes gives some balance, but there is really no\nperfect value.\n\n> With the attached patch, we could remove the procedure\n> terminate_backend_and_wait altogether. Thoughts?\n\nThat's clearly better, and logically it would work. As those tests\nare new in 14, it may be a good idea to cleanup all that so as all the\nbranches have the same set of tests. Would people object to that?\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 14:48:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 11:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 09, 2021 at 04:53:01PM +0530, Bharath Rupireddy wrote:\n> > I feel that we can provide a high timeout value (It can be 1hr on the\n> > similar lines of using pg_sleep(3600) for crash tests in\n> > 013_crash_restart.pl with the assumption that the backend running that\n> > command will get killed with SIGQUIT) and make pg_terminate_backend\n> > wait. This unreasonably high timeout looks okay because of the\n> > assumption that the servers in the build farm will not take that much\n> > time to remove the backend from the system processes, so the function\n> > will return much earlier than that. If at all there's a server (which\n> > is impractical IMO) that doesn't remove the backend process even\n> > within 1hr, then that will anyway fail with the warning.\n>\n> You may not need a value as large as 1h for that :)\n>\n> Looking at the TAP tests, some areas have been living with timeouts of\n> up to 180s. It is a matter of balance here, a timeout too long would\n> be annoying as it would make the detection of a problem longer for\n> machines that are stuck, and a too short value generates false\n> positives. 5 minutes gives some balance, but there is really no\n> perfect value.\n\nI changed it to 5min. If any server takes more than 5min to remove a\nprocess from the system process list, it will see a warning on timeout.\n\n> > With the attached patch, we could remove the procedure\n> > terminate_backend_and_wait altogether. Thoughts?\n>\n> That's clearly better, and logically it would work. As those tests\n> are new in 14, it may be a good idea to cleanup all that so as all the\n> branches have the same set of tests. Would people object to that?\n\nYes, these tests were introduced in v14; +1 to applying this cleanup to\nv14 as well, along with master.\n\nAttaching v4, please review further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 12 Apr 2021 11:29:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 11:29:28AM +0530, Bharath Rupireddy wrote:\n> I changed to 5min. If at all there's any server that would take more\n> than 5min to remove a process from the system processes list, then it\n> would see a warning on timeout.\n\nLooks fine to me. Let's wait a bit first to see if Fujii-san has any\nobjections to this cleanup as that's his code originally, from\n32a9c0bd.\n--\nMichael",
"msg_date": "Tue, 13 Apr 2021 16:39:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 04:39:58PM +0900, Michael Paquier wrote:\n> Looks fine to me. Let's wait a bit first to see if Fujii-san has any\n> objections to this cleanup as that's his code originally, from\n> 32a9c0bd.\n\nAnd hearing nothing, done. The tests of postgres_fdw are getting much\nfaster for me now, from basically 6s to 4s (actually that's roughly\n1.8s of gain as pg_wait_until_termination waits at least 100ms,\ntwice), so that's a nice gain.\n--\nMichael",
"msg_date": "Wed, 14 Apr 2021 14:33:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 13, 2021 at 04:39:58PM +0900, Michael Paquier wrote:\n>> Looks fine to me. Let's wait a bit first to see if Fujii-san has any\n>> objections to this cleanup as that's his code originally, from\n>> 32a9c0bd.\n\n> And hearing nothing, done. The tests of postgres_fdw are getting much\n> faster for me now, from basically 6s to 4s (actually that's roughly\n> 1.8s of gain as pg_wait_until_termination waits at least 100ms,\n> twice), so that's a nice gain.\n\nThe buildfarm is showing that one of these test queries is not stable\nunder CLOBBER_CACHE_ALWAYS:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-05-01%2007%3A44%3A47\n\nof which the relevant part is:\n\ndiff -U3 /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out\n--- /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out\t2021-05-01 03:44:45.022300613 -0400\n+++ /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out\t2021-05-03 09:11:24.051379288 -0400\n@@ -9215,8 +9215,7 @@\n \tWHERE application_name = 'fdw_retry_check';\n pg_terminate_backend \n ----------------------\n- t\n-(1 row)\n+(0 rows)\n \n -- This query should detect the broken connection when starting new remote\n -- transaction, reestablish new connection, and then succeed.\n\nI can reproduce that locally by setting\n\nalter system set debug_invalidate_system_caches_always = 1;\n\nand running \"make installcheck\" in contrib/postgres_fdw.\n(It takes a good long time to run the whole test script\nthough, so you might want to see if running just these few\nqueries is enough.)\n\nThere's no evidence of distress in the postmaster log,\nso I suspect this might just be a timing instability,\ne.g. remote process already gone before local process\nlooks. If so, it's probably hopeless to make this\ntest stable as-is. Perhaps we should just take it out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 May 2021 18:42:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Tue, May 4, 2021 at 4:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The buildfarm is showing that one of these test queries is not stable\n> under CLOBBER_CACHE_ALWAYS:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-05-01%2007%3A44%3A47\n>\n> of which the relevant part is:\n>\n> diff -U3 /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out\n> --- /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/expected/postgres_fdw.out 2021-05-01 03:44:45.022300613 -0400\n> +++ /home/buildfarm/buildroot/HEAD/pgsql.build/contrib/postgres_fdw/results/postgres_fdw.out 2021-05-03 09:11:24.051379288 -0400\n> @@ -9215,8 +9215,7 @@\n> WHERE application_name = 'fdw_retry_check';\n> pg_terminate_backend\n> ----------------------\n> - t\n> -(1 row)\n> +(0 rows)\n>\n> -- This query should detect the broken connection when starting new remote\n> -- transaction, reestablish new connection, and then succeed.\n\nThanks for the report.\n\n> I can reproduce that locally by setting\n>\n> alter system set debug_invalidate_system_caches_always = 1;\n>\n> and running \"make installcheck\" in contrib/postgres_fdw.\n> (It takes a good long time to run the whole test script\n> though, so you might want to see if running just these few\n> queries is enough.)\n\nI can reproduce the issue with the failing case. Issue is that the\nbackend pid will be null in the pg_stat_activity because of the cache\ninvalidation that happens at the beginning of the query and hence\npg_terminate_backend returns null on null input.\n\n> There's no evidence of distress in the postmaster log,\n> so I suspect this might just be a timing instability,\n> e.g. remote process already gone before local process\n> looks. If so, it's probably hopeless to make this\n> test stable as-is. Perhaps we should just take it out.\n\nActually, that test case covers retry code, so removing it worries me.\nInstead, I can do as attached, i.e. ignore the pg_terminate_backend\noutput using PERFORM, as the function signals the backend if the given\npid is a valid backend pid and returns on success. If the function\ndoes return false, it emits a warning, so it will be caught\nin the tests.\n\nAnd having a retry test case with clobber cache enabled doesn't make\nsense because all the cache entries are removed/invalidated for each\nquery, but the test case covers the code on non-clobber cache\nplatforms, so I would like to keep it.\n\nPlease see the attached, it passes with \"alter system set\ndebug_invalidate_system_caches_always = 1;\" on my system.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 4 May 2021 12:43:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Tue, May 04, 2021 at 12:43:53PM +0530, Bharath Rupireddy wrote:\n> And having a retry test case with clobber cache enabled doesn't make\n> sense because all the cache entries are removed/invalidated for each\n> query, but the test case covers the code on non-clobber cache\n> platforms, so I would like to keep it.\n\nYeah, I'd rather keep this test around as it is specific to connection\ncaches, and it is not time-consuming on fast machines in its new shape\neither. Another trick we could use here would be an aggregate\nchecking for the number of rows returned, say:\nSELECT count(pg_terminate_backend(pid, 180000)) >= 0\n FROM pg_stat_activity\n WHERE application_name = 'fdw_retry_check';\n\nBut using CALL as you are suggesting is much cleaner.\n\n(Worth noting that I am out this week for Golden Week, so if this can\nwait until Monday, that would be nice. I am not willing to take my\nchances with the buildfarm now :p)\n--\nMichael",
"msg_date": "Tue, 4 May 2021 17:22:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> (Worth noting that I am out this week for Golden Week, so if this can\n> wait until Monday, that would be nice. I am not willing to take my\n> chances with the buildfarm now :p)\n\nI will see to it. I think it's important to get a fix in in the next\ncouple of days, because hyrax has not had a clean run in six weeks.\nThat animal takes almost a week per test cycle, so the next HEAD run\nit starts (two or three days from now) is about our last chance to\nget it to go green before beta1 wrap. I feel it's fairly urgent to\ntry to do that, because who knows if any other cache-clobber issues\nsnuck in just before feature freeze.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 May 2021 09:35:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Tue, May 4, 2021 at 7:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > (Worth noting that I am out this week for Golden Week, so if this can\n> > wait until Monday, that would be nice. I am not willing to take my\n> > chances with the buildfarm now :p)\n>\n> I will see to it. I think it's important to get a fix in in the next\n> couple of days, because hyrax has not had a clean run in six weeks.\n> That animal takes almost a week per test cycle, so the next HEAD run\n> it starts (two or three days from now) is about our last chance to\n> get it to go green before beta1 wrap. I feel it's fairly urgent to\n> try to do that, because who knows if any other cache-clobber issues\n> snuck in just before feature freeze.\n\nThanks! Can we then take the patch proposed at [1]?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWqh2nHzPyzP-bAY%2BCaAAbtQRO55AQ_4ppGiU_w8iOvTg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 May 2021 19:20:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, May 4, 2021 at 4:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The buildfarm is showing that one of these test queries is not stable\n>> under CLOBBER_CACHE_ALWAYS:\n\n> I can reproduce the issue with the failing case. Issue is that the\n> backend pid will be null in the pg_stat_activity because of the cache\n> invalidation that happens at the beginning of the query and hence\n> pg_terminate_backend returns null on null input.\n\nNo, that's nonsense: if it were happening that way, the query would\nreturn one row with a NULL result, but actually it's returning no\nrows. What's actually happening, it seems, is that because\npgfdw_inval_callback is constantly getting called due to cache\nflushes, we invariably drop remote connections immediately during\ntransaction termination (cf pgfdw_xact_callback). Thus, by the time\nwe inspect pg_stat_activity, there is no remote session to terminate.\n\nI don't like your patch because what it effectively does is mask\nwhether termination happened or not; if there were a bug there\ncausing that not to happen, the test would still appear to pass.\n\nI think the most expedient fix, if we want to keep this test, is\njust to transiently disable debug_invalidate_system_caches_always.\n(That option wasn't available before v14, but fortunately we\ndon't need a fix for the back branches.)\n\nI believe the attached will do the trick, but I'm running the test\nwith debug_invalidate_system_caches_always turned on to verify\nthat. Should be done in an hour or so...\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 04 May 2021 11:38:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Tue, May 4, 2021 at 9:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Tue, May 4, 2021 at 4:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The buildfarm is showing that one of these test queries is not stable\n> >> under CLOBBER_CACHE_ALWAYS:\n>\n> > I can reproduce the issue with the failing case. Issue is that the\n> > backend pid will be null in the pg_stat_activity because of the cache\n> > invalidation that happens at the beginning of the query and hence\n> > pg_terminate_backend returns null on null input.\n>\n> No, that's nonsense: if it were happening that way, the query would\n> return one row with a NULL result, but actually it's returning no\n> rows. What's actually happening, it seems, is that because\n> pgfdw_inval_callback is constantly getting called due to cache\n> flushes, we invariably drop remote connections immediately during\n> transaction termination (cf pgfdw_xact_callback). Thus, by the time\n> we inspect pg_stat_activity, there is no remote session to terminate.\n>\n> I don't like your patch because what it effectively does is mask\n> whether termination happened or not; if there were a bug there\n> causing that not to happen, the test would still appear to pass.\n>\n> I think the most expedient fix, if we want to keep this test, is\n> just to transiently disable debug_invalidate_system_caches_always.\n> (That option wasn't available before v14, but fortunately we\n> don't need a fix for the back branches.)\n>\n> I believe the attached will do the trick, but I'm running the test\n> with debug_invalidate_system_caches_always turned on to verify\n> that. Should be done in an hour or so...\n\nThanks for pushing this change.\n\nIf debug_invalidate_system_caches_always is allowed to be used for\ncache-sensitive test cases, I see an opportunity to make the tests\nthat were adjusted by commit f77717b29 as meaningful as they were\nbefore that commit. That commit changed the way the output of the\nfunctions below shows up in the tests:\nSELECT 1 FROM postgres_fdw_disconnect_all();\nSELECT server_name FROM postgres_fdw_get_connections() ORDER BY 1;\n\nIf okay, I can work on it (not for PG14 of course). It can be\ndiscussed in a separate thread though.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 May 2021 11:11:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
},
{
"msg_contents": "On Tue, May 04, 2021 at 11:38:09AM -0400, Tom Lane wrote:\n> I believe the attached will do the trick, but I'm running the test\n> with debug_invalidate_system_caches_always turned on to verify\n> that. Should be done in an hour or so...\n\nThanks for taking care of that!\n--\nMichael",
"msg_date": "Sat, 8 May 2021 17:46:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify backend terminate and wait logic in postgres_fdw test"
}
] |
[
{
"msg_contents": "Hi,\n\nThis started out as a reply to https://postgr.es/m/20210408170802.GA9392%40alvherre.pgsql\nbut it's independent enough to just start a new thread...\n\nOn 2021-04-08 13:08:02 -0400, Alvaro Herrera wrote:\n> Yes, coverage.pg.org runs \"make check-world\".\n>\n> Maybe it would make sense to change that script, so that it runs the\n> buildfarm's run_build.pl script instead of \"make check-world\". That\n> would make coverage.pg.org report what the buildfarm actually tests ...\n> it would have made this problem a bit more obvious.\n\nWe desperately need to unify the different test run environments we\nhave. I did spend some time trying to do that, and ended up with it\nbeing hard to do in a good way in the make / msvc environment. Not sure\nthat I took the right path, but I ended up doing an experimental port of\nthe build system to meson - which has a builtin test runner (working on\nall platforms...).\n\n andres@awork3:/tmp$ ccache --clear\n andres@awork3:/tmp$ ~/src/meson/meson.py setup ~/src/postgresql /tmp/pg-meson --prefix=/tmp/pg-meson-install\n The Meson build system\n Version: 0.57.999\n Source dir: /home/andres/src/postgresql\n Build dir: /tmp/pg-meson\n Build type: native build\n Project name: postgresql\n Project version: 14devel\n ...\n Header <unistd.h> has symbol \"fdatasync\" : YES\n Header <fcntl.h> has symbol \"F_FULLSYNC\" : NO\n Checking for alignment of \"short\" : 2\n Checking for alignment of \"int\" : 4\n ...\n Configuring pg_config_ext.h using configuration\n Configuring pg_config.h using configuration\n Configuring pg_config_paths.h using configuration\n Program sed found: YES (/usr/bin/sed)\n Build targets in project: 116\n\n Found ninja-1.10.1 at /usr/bin/ninja\n ...\n\n andres@awork3:/tmp/pg-meson$ time ninja\n [10/1235] Generating snowball_create with a custom command\n Generating tsearch script...............................\n [41/1235] Generating generated_catalog_headers with a custom command\n [1235/1235] Linking target contrib/test_decoding/test_decoding.so\n\n real\t0m10.752s\n user\t3m47.020s\n sys\t0m50.281s\n\n ...\n andres@awork3:/tmp/pg-meson$ time ninja\n [1/1] Generating test clean with a custom command\n\n real\t0m0.085s\n user\t0m0.068s\n sys\t0m0.016s\n ...\n\n andres@awork3:/tmp/pg-meson$ time ~/src/meson/meson.py install --quiet\n ninja: Entering directory `.'\n\n real\t0m0.541s\n user\t0m0.412s\n sys\t0m0.130s\n\n ...\n\n andres@awork3:/tmp/pg-meson$ ninja test\n [1/2] Running all tests.\n 1/74 postgresql:setup / temp_install OK 0.52s\n 2/74 postgresql:setup / cleanup_old OK 0.01s\n 3/74 postgresql:tap+pg_archivecleanup / pg_archivecleanup/t/010_pg_archivecleanup.pl OK 0.29s 42 subtests passed\n 4/74 postgresql:tap+pg_checksums / pg_checksums/t/001_basic.pl OK 0.27s 8 subtests passed\n 5/74 postgresql:tap+pg_config / pg_config/t/001_pg_config.pl OK 0.26s 20 subtests passed\n ...\n 68/74 postgresql:tap+pg_dump / pg_dump/t/002_pg_dump.pl OK 28.26s 6408 subtests passed\n ...\n 74/74 postgresql:isolation / pg_isolation_regress OK 114.91s\n\n\n Ok: 74\n Expected Fail: 0\n Fail: 0\n Unexpected Pass: 0\n Skipped: 0\n Timeout: 0\n\n Full log written to /tmp/pg-meson/meson-logs/testlog.txt\n\n\nAnd in cases of failures it'll show the failure when it happens\n(including the command to rerun just that test, without the harness in\nbetween), and then a summary at the end:\n\n 61/74 postgresql:tap+pg_verifybackup / pg_verifybackup/t/003_corruption.pl OK 10.65s 44 subtests passed\n 49/74 postgresql:tap+recovery / recovery/t/019_replslot_limit.pl ERROR 7.53s exit status 1\n >>> MALLOC_PERTURB_=16 PATH=/tmp/pg-meson/tmp_install///usr/local/bin:/home/andres/bin/pg:/home/andres/bin/bin:/usr/sbin:/sbin:/home/andres/bin/pg:/home/andres/bin/bin:/usr/sbin:/sbin:/home/andres/bin/pg:/home/andres/bin/bin:/usr/sbin:/sbin:/home/andres/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/snap/bin PG_REGRESS=/tmp/pg-meson/src/test/regress/pg_regress REGRESS_SHLIB=/tmp/pg-meson/src/test/regress/regress.so LD_LIBRARY_PATH=/tmp/pg-meson/tmp_install///usr/local/lib/x86_64-linux-gnu /home/andres/src/postgresql/src/tools/testwrap /tmp/pg-meson recovery t/019_replslot_limit.pl perl -I /home/andres/src/postgresql/src/test/perl -I /home/andres/src/postgresql/src/test/recovery /home/andres/src/postgresql/src/test/recovery/t/019_replslot_limit.pl\n ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n stderr:\n # Failed test 'check that required WAL segments are still available'\n # at /home/andres/src/postgresql/src/test/recovery/t/019_replslot_limit.pl line 168.\n # Looks like you failed 1 test of 14.\n\n (test program exited with status code 1)\n ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n\n 62/74 postgresql:tap+pg_basebackup / pg_basebackup/t/010_dump_connstr.pl OK 11.59s 14 subtests passed\n ...\n 74/74 postgresql:isolation / pg_isolation_regress OK 112.26s\n\n Summary of Failures:\n\n 49/74 postgresql:tap+recovery / recovery/t/019_replslot_limit.pl ERROR 7.53s exit status 1\n\n\n Ok: 73\n Expected Fail: 0\n Fail: 1\n Unexpected Pass: 0\n Skipped: 0\n Timeout: 0\n\n Full log written to /tmp/pg-meson/meson-logs/testlog.txt\n FAILED: meson-test\n /usr/bin/python3 -u /home/andres/src/meson/meson.py test --no-rebuild --print-errorlogs\n ninja: build stopped: subcommand failed.\n\nIt's hard to convey just how much nicer it is to see a progress report\nduring the test, see the failing tests at the end, without needing to\nwade through reams of log output. The output of the individual tests is\nin testlog.txt referenced above.\n\nOne can get a list of tests and then also just run subsets of them:\n\n andres@awork3:/tmp/pg-meson$ ~/src/meson/meson.py test --list\n postgresql:setup / temp_install\n postgresql:setup / cleanup_old\n postgresql:isolation / pg_isolation_regress\n postgresql:regress / pg_regress\n postgresql:tap+initdb / initdb/t/001_initdb.pl\n\nCan run \"suites\" of tests:\n ~/src/meson/meson.py test --suite setup --suite recovery\n\nCan run individual tests:\n ~/src/meson/meson.py test recovery/t/008_fsm_truncation.pl\n\n\nObviously all very far from being ready, but this seemed like a good\nenough excuse to mention it ;)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:50:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "test runner (was Re: SQL-standard function body)"
},
{
"msg_contents": "On 2021-04-08 10:50:39 -0700, Andres Freund wrote:\n> It's hard to convey just how much nicer it is to see a progress report\n> during the test, see the failing tests at the end, without needing to\n> wade through reams of log output. The output of the individual tests is\n> in testlog.txt referenced above.\n\nhttps://anarazel.de/public/t/pg-meson-test-screencap-2021-04-08_10.58.26.mkv\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:04:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: test runner (was Re: SQL-standard function body)"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 10:50:39AM -0700, Andres Freund wrote:\n> Obviously all very far from being ready, but this seemed like a good\n> enough excuse to mention it ;)\n\nThis is nice. Are there any parallelism capabilities?\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 08:39:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: test runner (was Re: SQL-standard function body)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-09 08:39:46 +0900, Michael Paquier wrote:\n> On Thu, Apr 08, 2021 at 10:50:39AM -0700, Andres Freund wrote:\n> > Obviously all very far from being ready, but this seemed like a good\n> > enough excuse to mention it ;)\n> \n> This is nice. Are there any parallelism capabilities?\n\nYes. It defaults to number-of-cores processes, but obviously can also be\nspecified explicitly. One very nice part about it is that it'd work\nlargely the same on windows (which has practically unusable testing\nright now). It probably doesn't yet, because I just tried to get it\nbuild and run tests at all, but it shouldn't be a lot of additional\nwork.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Apr 2021 19:52:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: test runner (was Re: SQL-standard function body)"
},
{
"msg_contents": ">\n> > This is nice. Are there any parallelism capabilities?\n>\n> Yes. It defaults to number-of-cores processes, but obviously can also be\n> specified explicitly. One very nice part about it is that it'd work\n> largely the same on windows (which has practically unusable testing\n> right now). It probably doesn't yet, because I just tried to get it\n> build and run tests at all, but it shouldn't be a lot of additional\n> work.\n>\n\nThe pidgin developers speak very highly of meson, for the same reasons\nalready mentioned in this thread.\n\n> This is nice. Are there any parallelism capabilities?\n\nYes. It defaults to number-of-cores processes, but obviously can also be\nspecified explicitly. One very nice part about it is that it'd work\nlargely the same on windows (which has practically unusable testing\nright now). It probably doesn't yet, because I just tried to get it\nbuild and run tests at all, but it shouldn't be a lot of additional\nwork.The pidgin developers speak very highly of meson, for the same reasons already mentioned in this thread.",
"msg_date": "Sun, 11 Apr 2021 16:54:17 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test runner (was Re: SQL-standard function body)"
}
] |
[
{
"msg_contents": "Consider the following snippet\n\ncreate table data as select generate_series(1,1000000) s;\n\ndo $d$\nbegin\n PERFORM * FROM dblink_connect('test','');\n\n PERFORM * from dblink_send_query('test', 'SELECT * FROM data');\n\n LOOP\n if dblink_is_busy('test') = 0\n THEN\n PERFORM * FROM dblink_get_result('test') AS R(V int);\n PERFORM * FROM dblink_get_result('test') AS R(V int);\n RETURN;\n END IF;\n\n PERFORM pg_sleep(.001);\n END LOOP;\n\n PERFORM * FROM dblink_disconnect('test');\nEND;\n$d$;\n\nWhat's interesting here is that, when I vary the sleep parameter, I get:\n0: .4 seconds (per top, this is busywait), same as running synchronous.\n0.000001: 1.4 seconds\n0.001: 2.4 seconds\n0.01: 10.6 seconds\n0.1: does not terminate\n\nThis effect is only noticeable when the remote query is returning\nvolumes of data. My question is, is there any way to sleep loop\nclient side without giving up 3x performance penalty? Why is that\nthat when more local sleep queries are executed, performance improves?\n\nmerlin\n\n\n",
"msg_date": "Thu, 8 Apr 2021 13:05:36 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": true,
"msg_subject": "weird interaction between asynchronous queries and pg_sleep"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 1:05 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n> This effect is only noticeable when the remote query is returning\n> volumes of data. My question is, is there any way to sleep loop\n> client side without giving up 3x performance penalty? Why is that\n> that when more local sleep queries are executed, performance improves?\n\n\nLooking at this more, it looks like that when sleeping with pg_sleep,\nlibpq does not receive the data. I think for this type of pattern to\nwork correctly, dblink would need a custom sleep function wrapping\npoll (or epoll) that consumes input on the socket when signalled read\nready.\n\nmerlin\n\n\n",
"msg_date": "Thu, 8 Apr 2021 18:18:09 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: weird interaction between asynchronous queries and pg_sleep"
}
] |
[
{
"msg_contents": "Buildfarm member curculio, which doesn't usually produce\nuninitialized-variable warnings, is showing one here:\n\nnbtinsert.c: In function '_bt_doinsert':\nnbtinsert.c:411: warning: 'curitemid' may be used uninitialized in this function\nnbtinsert.c:411: note: 'curitemid' was declared here\n\nI can see its point: curitemid is set only if !inposting.\nWhile the first two uses of the value are clearly reached\nonly if !inposting, it's FAR from clear that it's impossible\nto reach \"ItemIdMarkDead(curitemid);\" without a valid value.\nCould you clean that up?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 15:19:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Dubious coding in nbtinsert.c"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Buildfarm member curculio, which doesn't usually produce\n> uninitialized-variable warnings, is showing one here:\n>\n> nbtinsert.c: In function '_bt_doinsert':\n> nbtinsert.c:411: warning: 'curitemid' may be used uninitialized in this function\n> nbtinsert.c:411: note: 'curitemid' was declared here\n>\n> I can see its point: curitemid is set only if !inposting.\n> While the first two uses of the value are clearly reached\n> only if !inposting, it's FAR from clear that it's impossible\n> to reach \"ItemIdMarkDead(curitemid);\" without a valid value.\n> Could you clean that up?\n\nI'll take care of it shortly.\n\nYou had a near-identical complaint about a compiler warning that led\nto my commit d64f1cdf2f4 -- that one involved _bt_check_unique()'s\ncuritup, while this one is about curitemid. While I have no problem\nsilencing this compiler warning now, I don't see any reason to not\njust do the same thing again. Which is to initialize the pointer to\nNULL.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:46:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Dubious coding in nbtinsert.c"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> You had a near-identical complaint about a compiler warning that led\n> to my commit d64f1cdf2f4 -- that one involved _bt_check_unique()'s\n> curitup, while this one is about curitemid. While I have no problem\n> silencing this compiler warning now, I don't see any reason to not\n> just do the same thing again. Which is to initialize the pointer to\n> NULL.\n\nWorks for me; if there is any bug in the logic, we'll get a core dump\nand can investigate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 16:57:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dubious coding in nbtinsert.c"
}
] |
[
{
"msg_contents": "I noticed that nodeFuncs.c appears to have some pretty sloppy work\ndone in many of the comments. Many look like they've just not been\nupdated from a copy/paste/edit from another node function.\n\nThe attached aims to clean these up.\n\nI plan to push this a later today unless anyone has anything they'd\nlike to say about it first.\n\nDavid",
"msg_date": "Fri, 9 Apr 2021 10:04:25 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I noticed that nodeFuncs.c appears to have some pretty sloppy work\n> done in many of the comments. Many look like they've just not been\n> updated from a copy/paste/edit from another node function.\n> The attached aims to clean these up.\n\nI believe every one of these changes is wrong.\nFor instance:\n\n \t\tcase T_ScalarArrayOpExpr:\n-\t\t\tcoll = InvalidOid;\t/* result is always boolean */\n+\t\t\tcoll = InvalidOid;\t/* result is always InvalidOid */\n \t\t\tbreak;\n\nThe point here is that the result type of ScalarArrayOpExpr is boolean,\nwhich has no collation, therefore reporting its collation as InvalidOid\nis correct. Maybe there's a clearer way to say that, but your text is\nmore confusing not less so.\n\nLikewise, the point of the annotations in exprSetCollation is to not\nlet a collation be applied to a node that must have a noncollatable\nresult type.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 18:11:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "On Fri, 9 Apr 2021 at 10:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I noticed that nodeFuncs.c appears to have some pretty sloppy work\n> > done in many of the comments. Many look like they've just not been\n> > updated from a copy/paste/edit from another node function.\n> > The attached aims to clean these up.\n>\n> I believe every one of these changes is wrong.\n> For instance:\n>\n> case T_ScalarArrayOpExpr:\n> - coll = InvalidOid; /* result is always boolean */\n> + coll = InvalidOid; /* result is always InvalidOid */\n> break;\n>\n> The point here is that the result type of ScalarArrayOpExpr is boolean,\n> which has no collation, therefore reporting its collation as InvalidOid\n> is correct. Maybe there's a clearer way to say that, but your text is\n> more confusing not less so.\n\nhmm ok. I imagine there must be a better way to say that then since\nit confused at least 1 reader so far. My problem is that I assumed\n\"result\" meant the result of the function that the comment is written\nin, not the result of evaluating the given expression during\nexecution. If that was more clear then I'd not have been misled.\n\nDavid\n\n\n",
"msg_date": "Fri, 9 Apr 2021 11:25:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> hmm ok. I imagine there must be a better way to say that then since\n> it confused at least 1 reader so far. My problem is that I assumed\n> \"result\" meant the result of the function that the comment is written\n> in, not the result of evaluating the given expression during\n> execution. If that was more clear then I'd not have been misled.\n\nMaybe like\n\n case T_ScalarArrayOpExpr:\n /* ScalarArrayOpExpr's result is boolean ... */\n coll = InvalidOid; /* ... so it has no collation */\n break;\n\n?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 20:17:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "On 2021-Apr-08, Tom Lane wrote:\n\n> Maybe like\n> \n> case T_ScalarArrayOpExpr:\n> /* ScalarArrayOpExpr's result is boolean ... */\n> coll = InvalidOid; /* ... so it has no collation */\n> break;\n\nThis is much clearer, yeah.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 8 Apr 2021 21:21:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 09:21:30PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-08, Tom Lane wrote:\n>> Maybe like\n>> \n>> case T_ScalarArrayOpExpr:\n>> /* ScalarArrayOpExpr's result is boolean ... */\n>> coll = InvalidOid; /* ... so it has no collation */\n>> break;\n> \n> This is much clearer, yeah.\n\n+1.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 10:52:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "On Fri, 9 Apr 2021 at 13:52, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 08, 2021 at 09:21:30PM -0400, Alvaro Herrera wrote:\n> > On 2021-Apr-08, Tom Lane wrote:\n> >> Maybe like\n> >>\n> >> case T_ScalarArrayOpExpr:\n> >> /* ScalarArrayOpExpr's result is boolean ... */\n> >> coll = InvalidOid; /* ... so it has no collation */\n> >> break;\n> >\n> > This is much clearer, yeah.\n>\n> +1.\n\nYeah, that's much better.\n\nFor the exprSetCollation case, I ended up with:\n\n case T_ScalarArrayOpExpr:\n /* ScalarArrayOpExpr's result is boolean ... */\n Assert(!OidIsValid(collation)); /* ... so never\nset a collation */\n\nI wanted something more like /* ... so we must never set a collation\n*/ but that put the line longer than 80. I thought wrapping to a 2nd\nline was excessive, so I shortened it to that.\n\nDavid",
"msg_date": "Fri, 9 Apr 2021 16:29:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I wanted something more like /* ... so we must never set a collation\n> */ but that put the line longer than 80. I thought wrapping to a 2nd\n> line was excessive, so I shortened it to that.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Apr 2021 07:22:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
},
{
"msg_contents": "On Fri, 9 Apr 2021 at 23:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> LGTM.\n\nThanks. Pushed.\n\nDavid\n\n\n",
"msg_date": "Sat, 10 Apr 2021 19:20:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lots of incorrect comments in nodeFuncs.c"
}
] |
[
{
"msg_contents": "Dear Postgresql community:\nWe are wondeing if Postgresql 13 is supported on Solaris 11 O/S on SPARC hardware?\n\nThe latest version of Postgresql we can download for Solaris SPARC seems to be Postgresql 12:PostgreSQL: Solaris packages\n\n\n| \n| \n| | \nPostgreSQL: Solaris packages\n\n\n |\n\n |\n\n |\n\n\nAre we looking at the correct website?\nThank you very much,\n-Peter\n\nDear Postgresql community:We are wondeing if Postgresql 13 is supported on Solaris 11 O/S on SPARC hardware?The latest version of Postgresql we can download for Solaris SPARC seems to be Postgresql 12:PostgreSQL: Solaris packagesPostgreSQL: Solaris packagesAre we looking at the correct website?Thank you very much,-Peter",
"msg_date": "Fri, 9 Apr 2021 00:14:16 +0000 (UTC)",
"msg_from": "Peter Lee <peterlee3672@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Postgresql 13 supported on Solaris 11 O/S on SPARC hardware?"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 9:38 AM Peter Lee <peterlee3672@yahoo.com> wrote:\n\n> Dear Postgresql community:\n>\n> We are wondeing if Postgresql 13 is supported on Solaris 11 O/S on SPARC\n> hardware?\n>\n> The latest version of Postgresql we can download for Solaris SPARC seems\n> to be Postgresql 12:\n> PostgreSQL: Solaris packages\n> <https://www.postgresql.org/download/solaris/>\n>\n> PostgreSQL: Solaris packages\n>\n> <https://www.postgresql.org/download/solaris/>\n>\n> Are we looking at the correct website?\n>\n>\nThis is unfortunately correct.\n\nOracle no longer provides the means for our Solaris packagers to make any\nbuilds, so we are no longer able to provide binaries for Solaris.\n\nYou should still be able to build the product from source. However, you\nshould be aware that there are no continuous build testing done on the\ncombination of Solaris and SPARC anymore (see\nhttps://buildfarm.postgresql.org/). There are builds for linux on sparc,\nbut not solaris, as part of the testing (and solaris+sparc for older\nversions only).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 9, 2021 at 9:38 AM Peter Lee <peterlee3672@yahoo.com> wrote:Dear Postgresql community:We are wondeing if Postgresql 13 is supported on Solaris 11 O/S on SPARC hardware?The latest version of Postgresql we can download for Solaris SPARC seems to be Postgresql 12:PostgreSQL: Solaris packagesPostgreSQL: Solaris packagesAre we looking at the correct website?This is unfortunately correct.Oracle no longer provides the means for our Solaris packagers to make any builds, so we are no longer able to provide binaries for Solaris.You should still be able to build the product from source. However, you should be aware that there are no continuous build testing done on the combination of Solaris and SPARC anymore (see https://buildfarm.postgresql.org/). 
There are builds for linux on sparc, but not solaris, as part of the testing (and solaris+sparc for older versions only).-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 9 Apr 2021 10:04:53 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 13 supported on Solaris 11 O/S on SPARC hardware?"
}
] |
[
{
"msg_contents": "Hi,\n\nLooks like the running query is not getting cancelled even though I\nissue CTRL+C from psql or kill the backend with SIGINT. This only\nhappens with PG14 not in PG13. Am I missing something here? Is it a\nbug?\n\ncreate table t1(a1 int);\ninsert into t1 select * from generate_series(1,10000000000); --> I\nchose an intentionally long running query, now either issue CTRL+C or\nkill the backend with SIGINT, the query doesn't get cancelled. Note\nthat I don't even see \"Cancel request sent\" message on psql when I\nissue CTRL+C.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 08:24:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why is Query NOT getting cancelled with SIGINT in PG14?"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 08:24:51AM +0530, Bharath Rupireddy wrote:\n> Looks like the running query is not getting cancelled even though I\n> issue CTRL+C from psql or kill the backend with SIGINT. This only\n> happens with PG14 not in PG13. Am I missing something here? Is it a\n> bug?\n\nYes, see here:\nhttps://www.postgresql.org/message-id/flat/OSZPR01MB631017521EE6887ADC9492E8FD759%40OSZPR01MB6310.jpnprd01.prod.outlook.com#e9228ef1ae32315f8b0df3fa67a32e06\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Apr 2021 22:08:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is Query NOT getting cancelled with SIGINT in PG14?"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 8:38 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Apr 09, 2021 at 08:24:51AM +0530, Bharath Rupireddy wrote:\n> > Looks like the running query is not getting cancelled even though I\n> > issue CTRL+C from psql or kill the backend with SIGINT. This only\n> > happens with PG14 not in PG13. Am I missing something here? Is it a\n> > bug?\n>\n> Yes, see here:\n> https://www.postgresql.org/message-id/flat/OSZPR01MB631017521EE6887ADC9492E8FD759%40OSZPR01MB6310.jpnprd01.prod.outlook.com#e9228ef1ae32315f8b0df3fa67a32e06\n\nThanks. I missed to follow that thread. I will respond there.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 08:44:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is Query NOT getting cancelled with SIGINT in PG14?"
}
] |
[
{
"msg_contents": "Hi, hackers!\nI noticed the Peter's commit 7e3c54168d9ec058cb3c9d47f8105b1d32dc8ceb that\nstabilizes certain tests by adding ORDER BY clause in tests and remember\nthat I saw the same error in tablespaces test for creation of partitioned\nindex. It comes very rarely and test fails on inverted order of parent and\nchild.\n\nPFA small patch that stabilizes that test in the same style by adding ORDER\nBY.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 9 Apr 2021 12:00:26 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add ORDER BY to stabilize tablespace test for partitioned index"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 1:30 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Hi, hackers!\n> I noticed the Peter's commit 7e3c54168d9ec058cb3c9d47f8105b1d32dc8ceb that stabilizes certain tests by adding ORDER BY clause in tests and remember that I saw the same error in tablespaces test for creation of partitioned index. It comes very rarely and test fails on inverted order of parent and child.\n>\n> PFA small patch that stabilizes that test in the same style by adding ORDER BY.\n\n+1 and the patch looks good to me.\n\nI think we also need to add ORDER BY clauses to a few more tests(as\npointed in [1]) in create_function_3.sql which the commit 7e3c54168\nmissed to add. I will post the patch there in [1] and see if it gets\npicked up.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVb%2BFsKAhxAmVWSnTsPQwkvbMsxo4jGhw3uT-E036hvPA%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 13:45:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ORDER BY to stabilize tablespace test for partitioned index"
},
{
"msg_contents": ">\n> > I noticed the Peter's commit 7e3c54168d9ec058cb3c9d47f8105b1d32dc8ceb\n> that stabilizes certain tests by adding ORDER BY clause in tests and\n> remember that I saw the same error in tablespaces test for creation of\n> partitioned index. It comes very rarely and test fails on inverted order of\n> parent and child.\n> >\n> > PFA small patch that stabilizes that test in the same style by adding\n> ORDER BY.\n>\n> +1 and the patch looks good to me.\n>\n> I think we also need to add ORDER BY clauses to a few more tests(as\n> pointed in [1]) in create_function_3.sql which the commit 7e3c54168\n> missed to add. I will post the patch there in [1] and see if it gets\n> picked up.\n>\nThanks! I think the patch you mentioned in [1] is also good, and it's worth\nbeing committed as well.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n> I noticed the Peter's commit 7e3c54168d9ec058cb3c9d47f8105b1d32dc8ceb that stabilizes certain tests by adding ORDER BY clause in tests and remember that I saw the same error in tablespaces test for creation of partitioned index. It comes very rarely and test fails on inverted order of parent and child.\n>\n> PFA small patch that stabilizes that test in the same style by adding ORDER BY.\n\n+1 and the patch looks good to me.\n\nI think we also need to add ORDER BY clauses to a few more tests(as\npointed in [1]) in create_function_3.sql which the commit 7e3c54168\nmissed to add. I will post the patch there in [1] and see if it gets\npicked up.Thanks! I think the patch you mentioned in [1] is also good, and it's worth being committed as well. --Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 9 Apr 2021 13:29:03 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add ORDER BY to stabilize tablespace test for partitioned index"
}
] |
[
{
"msg_contents": "Hi,\n\nthere is a small typo in guc.c. Attached patch fixes this.\n\nRegards\nDaniel",
"msg_date": "Fri, 9 Apr 2021 09:13:04 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Small typo in guc.c"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 09:13:04AM +0000, Daniel Westermann (DWE) wrote:\n> there is a small typo in guc.c. Attached patch fixes this.\n\nIndeed, there is. I'll apply and backpatch if there are no\nobjections.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 19:41:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small typo in guc.c"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 11:13 AM Daniel Westermann (DWE)\n<daniel.westermann@dbi-services.com> wrote:\n>\n> Hi,\n>\n> there is a small typo in guc.c. Attached patch fixes this.\n\nApplied, thanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 9 Apr 2021 12:41:48 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Small typo in guc.c"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 12:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 09, 2021 at 09:13:04AM +0000, Daniel Westermann (DWE) wrote:\n> > there is a small typo in guc.c. Attached patch fixes this.\n>\n> Indeed, there is. I'll apply and backpatch if there are no\n> objections.\n\nWe seem to have started a trend of replying to the same emails at\nexactly the same time in the past couple of days :)\n\n(Already done)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 9 Apr 2021 13:12:33 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Small typo in guc.c"
}
] |
[
{
"msg_contents": "Hi,\n\nall \"short_desc\" end with a dot, except these:\n\n- Prefetch referenced blocks during recovery\n- Prefetch blocks that have full page images in the WAL\n\nAttached patch adds a dot to these as well.\n\nRegards\nDaniel",
"msg_date": "Fri, 9 Apr 2021 11:53:01 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Another small guc.c fix"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 11:53 PM Daniel Westermann (DWE)\n<daniel.westermann@dbi-services.com> wrote:\n> all \"short_desc\" end with a dot, except these:\n>\n> - Prefetch referenced blocks during recovery\n> - Prefetch blocks that have full page images in the WAL\n\nPushed, thanks.\n\n\n",
"msg_date": "Sat, 10 Apr 2021 08:44:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Another small guc.c fix"
}
] |
[
{
"msg_contents": "Hi,\n\ncheck_function_bodies has this description: \n\npostgres=# select short_desc from pg_settings where name = 'check_function_bodies';\n short_desc \n-----------------------------------------------\n Check function bodies during CREATE FUNCTION.\n(1 row)\n\nThis is not the whole truth since we have procedures, as this affects CREATE PROCEDURE as well:\n\npostgres=# create procedure p1 ( a int ) as $$ beginn null; end $$ language plpgsql;\nERROR: syntax error at or near \"beginn\"\nLINE 1: create procedure p1 ( a int ) as $$ beginn null; end $$ lang...\n ^\npostgres=# set check_function_bodies = false;\nSET\npostgres=# create procedure p1 ( a int ) as $$ beginn null; end $$ language plpgsql;\nCREATE PROCEDURE\npostgres=# \n\nAt least the description should mention procedures. Even the parameter name seems not to be correct anymore. Thoughts?\n\nRegards\nDaniel\n\n\n\n",
"msg_date": "Fri, 9 Apr 2021 12:11:35 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "check_function_bodies: At least the description seems wrong, since we\n have prodedures"
},
{
"msg_contents": "On 04/09/21 08:11, Daniel Westermann (DWE) wrote:\n> At least the description should mention procedures.\n> Even the parameter name seems not to be correct anymore. Thoughts?\n\nIt's possible the parameter name also appears in documentation for\nout-of-tree PLs, as each PL's validator function determines what\n\"check_function_bodies\" really means in that setting. For instance,\nit's documented in PL/Java that check_function_bodies really means\nthe (precompiled) class file is loaded and the presence of its\ndependencies and the target method confirmed.\n\nThat means that any change to the parameter name could result in\nsome documentation churn in the extension world.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 9 Apr 2021 09:21:42 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: check_function_bodies: At least the description seems wrong,\n since we have prodedures"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 04/09/21 08:11, Daniel Westermann (DWE) wrote:\n>> At least the description should mention procedures.\n>> Even the parameter name seems not to be correct anymore. Thoughts?\n\n> It's possible the parameter name also appears in documentation for\n> out-of-tree PLs, as each PL's validator function determines what\n> \"check_function_bodies\" really means in that setting.\n\nThat parameter is also set explicitly in pg_dump output, so we\ncan't rename it without breaking existing dump files.\n\nAdmittedly, guc.c does have provisions for substituting new names\nif we rename some parameter. But I'm not in a hurry to create\nmore instances of that behavior; the potential for confusion\nseems to outweigh any benefit.\n\n+1 for updating the description though. We could s/function/routine/\nwhere space is tight.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Apr 2021 12:17:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check_function_bodies: At least the description seems wrong,\n since we have prodedures"
},
{
"msg_contents": ">> It's possible the parameter name also appears in documentation for\n>> out-of-tree PLs, as each PL's validator function determines what\n>> \"check_function_bodies\" really means in that setting.\n\n>That parameter is also set explicitly in pg_dump output, so we\n>can't rename it without breaking existing dump files.\n\n>Admittedly, guc.c does have provisions for substituting new names\n>if we rename some parameter. But I'm not in a hurry to create\n>more instances of that behavior; the potential for confusion\n>seems to outweigh any benefit.\n\n>+1 for updating the description though. We could s/function/routine/\n>where space is tight.\n\nThanks for your inputs. Attached a proposal which updates the description.\n\nRegards\nDaniel",
"msg_date": "Sat, 10 Apr 2021 07:56:36 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: check_function_bodies: At least the description seems wrong,\n since we have procedures"
},
{
"msg_contents": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> writes:\n>> +1 for updating the description though. We could s/function/routine/\n>> where space is tight.\n\n> Thanks for your inputs. Attached a proposal which updates the description.\n\nI changed config.sgml's description similarly, and pushed this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Apr 2021 12:09:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check_function_bodies: At least the description seems wrong,\n since we have procedures"
}
] |
[
{
"msg_contents": "Good day, hackers.\n\nI've got HP ProBook 640g8 with i7-1165g7. I've installed Ubuntu 20.04 \nLTS on it\nand started to play with PostgreSQL sources.\n\nOccasionally I found I'm not able to `make check` old Postgresql \nversions.\nAt least 9.6 and 10. They fail at the initdb stage in the call to \npostgresql.\n\nRaw postgresql version 9.6.8 and 10.0 fails in bootstrap stage:\n\n running bootstrap script ... 2021-04-09 12:33:26.424 MSK [161121] \nFATAL: could not find tuple for opclass 1\n 2021-04-09 12:33:26.424 MSK [161121] PANIC: cannot abort \ntransaction 1, it was already committed\n Aborted (core dumped)\n child process exited with exit code 134\n\nOur modified custom version 9.6 fails inside of libc __strncmp_avx2 \nduring post-bootstrap\nwith segmentation fault:\n\n Program terminated with signal SIGSEGV, Segmentation fault.\n #0 __strncmp_avx2 ()\n #1 0x0000557168a7eeda in nameeq\n #2 0x0000557168b4c4a0 in FunctionCall2Coll\n #3 0x0000557168659555 in heapgettup_pagemode\n #4 0x000055716865a617 in heap_getnext\n #5 0x0000557168678cf1 in systable_getnext\n #6 0x0000557168b5651c in GetDatabaseTuple\n #7 0x0000557168b574a4 in InitPostgres\n #8 0x00005571689dcb7d in PostgresMain\n #9 0x00005571688844d5 in main\n\nI've bisected between REL_11_0 and \"Rename pg_rewind's \ncopy_file_range()\" and\nfound 372728b0d49552641f0ea83d9d2e08817de038fa\n> Replace our traditional initial-catalog-data format with a better \n> design.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=372728b0d49552641f0ea83d9d2e08817de038fa\n\nThis is the first commit where `make check` doesn't fail during initdb on my \nmachine.\nTherefore 02f3e558f21c0fbec9f94d5de9ad34f321eb0e57 is the last one where \n`make check` fails.\n\nI've tried with gcc9, gcc10 and clang10.\nI've configured either without parameters or with `CFLAGS=-O0 \n./configure --enable-debug`.\n\nThe thing doesn't happen on Intel CPUs of the 10th series (i7-10510U and \ni9-10900K).\nUnfortunately, I have 
no fellows or colleagues with Intel CPU 11 \nseries,\ntherefore I couldn't tell if this is a bug of the 11 series or of the concrete \nCPU installed\nin the notebook.\n\nIt will be great if someone with an i7-11* could try to make check and report\nif it also fails or not.\n\nWith regards,\nYura Sokolov\nPostgresPro\n\n\n",
"msg_date": "Fri, 09 Apr 2021 16:28:25 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Old Postgresql version on i7-1165g7"
},
{
"msg_contents": "Yura Sokolov wrote 2021-04-09 16:28:\n> Good day, hackers.\n> \n> I've got HP ProBook 640g8 with i7-1165g7. I've installed Ubuntu 20.04 \n> LTS on it\n> and started to play with PostgreSQL sources.\n> \n> Occasinally I found I'm not able to `make check` old Postgresql \n> versions.\n> At least 9.6 and 10. They are failed at the initdb stage in the call\n> to postgresql.\n> \n> Raw postgresql version 9.6.8 and 10.0 fails in boostrap stage:\n> \n> running bootstrap script ... 2021-04-09 12:33:26.424 MSK [161121]\n> FATAL: could not find tuple for opclass 1\n> 2021-04-09 12:33:26.424 MSK [161121] PANIC: cannot abort\n> transaction 1, it was already committed\n> Aborted (core dumped)\n> child process exited with exit code 134\n> \n> Our modified custom version 9.6 fails inside of libc __strncmp_avx2\n> during post-bootstrap\n> with segmentation fault:\n> \n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 __strncmp_avx2 ()\n> #1 0x0000557168a7eeda in nameeq\n> #2 0x0000557168b4c4a0 in FunctionCall2Coll\n> #3 0x0000557168659555 in heapgettup_pagemode\n> #4 0x000055716865a617 in heap_getnext\n> #5 0x0000557168678cf1 in systable_getnext\n> #6 0x0000557168b5651c in GetDatabaseTuple\n> #7 0x0000557168b574a4 in InitPostgres\n> #8 0x00005571689dcb7d in PostgresMain\n> #9 0x00005571688844d5 in main\n> \n> I've bisected between REL_11_0 and \"Rename pg_rewind's \n> copy_file_range()\" and\n> found 372728b0d49552641f0ea83d9d2e08817de038fa\n>> Replace our traditional initial-catalog-data format with a better \n>> design.\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=372728b0d49552641f0ea83d9d2e08817de038fa\n> \n> This is first commit where `make check` doesn't fail during initdb on\n> my machine.\n> Therefore 02f3e558f21c0fbec9f94d5de9ad34f321eb0e57 is the last one\n> where `make check` fails.\n> \n> I've tried with gcc9, gcc10 and clang10.\n> I've configured either without parameters or with `CFLAGS=-O0\n> ./configure 
--enable-debug`.\n> \n> Thing doesn't happen on Intel CPU of 10th series (i7-10510U and \n> i9-10900K).\n> Unfortunately, I have no fellows or colleagues with Intel CPU 11 \n> series,\n> therefore I couldn't tell if this bug of 11 series or bug of concrete\n> CPU installed\n> in the notebook.\n> \n> It will be great if some with i7-11* could try to make check and report\n> if it also fails or not.\n\nBTW, problem remains in Debian stable (10.4) inside docker on same \nmachine.\n\n> \n> With regards,\n> Yura Sokolov\n> PostgresPro\n\n\n",
"msg_date": "Tue, 13 Apr 2021 13:20:34 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Old Postgresql version on i7-1165g7"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 04:28:25PM +0300, Yura Sokolov wrote:\n> Good day, hackers.\n> \n> I've got HP ProBook 640g8 with i7-1165g7. I've installed Ubuntu 20.04 LTS on\n> it\n> and started to play with PostgreSQL sources.\n> \n> Occasinally I found I'm not able to `make check` old Postgresql versions.\n\nDo you mean that HEAD works consistently, but v9.6 and v10 sometimes work but\nsometimes fail ?\n\n> #5 0x0000557168678cf1 in systable_getnext\n> #6 0x0000557168b5651c in GetDatabaseTuple\n> #7 0x0000557168b574a4 in InitPostgres\n> #8 0x00005571689dcb7d in PostgresMain\n> #9 0x00005571688844d5 in main\n> \n> I've bisected between REL_11_0 and \"Rename pg_rewind's copy_file_range()\"\n> and\n> found 372728b0d49552641f0ea83d9d2e08817de038fa\n> > Replace our traditional initial-catalog-data format with a better\n> > design.\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=372728b0d49552641f0ea83d9d2e08817de038fa\n> \n> This is first commit where `make check` doesn't fail during initdb on my\n> machine. Therefore 02f3e558f21c0fbec9f94d5de9ad34f321eb0e57 is the last one where\n> `make check` fails.\n\nThis doesn't make much sense or help much, since 372728b doesn't actually\nchange the catalogs, or any .c file.\n\n> I've tried with gcc9, gcc10 and clang10.\n> I've configured either without parameters or with `CFLAGS=-O0 ./configure\n> --enable-debug`.\n\nYou used make clean too, right ?\n\nI would also use --with-cassert, since it might catch problems you'd otherwise\nmiss.\n\nIf that doesn't expose anything, maybe try to #define USE_VALGRIND in\nsrc/include/pg_config_manual.h, and run with valgrind --trace-children=yes\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 13 Apr 2021 06:58:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Old Postgresql version on i7-1165g7"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Apr 09, 2021 at 04:28:25PM +0300, Yura Sokolov wrote:\n>> Occasinally I found I'm not able to `make check` old Postgresql versions.\n\n>> I've bisected between REL_11_0 and \"Rename pg_rewind's copy_file_range()\"\n>> and\n>> found 372728b0d49552641f0ea83d9d2e08817de038fa\n>>> Replace our traditional initial-catalog-data format with a better\n>>> design.\n>> This is first commit where `make check` doesn't fail during initdb on my\n>> machine.\n\n> This doesn't make much sense or help much, since 372728b doesn't actually\n> change the catalogs, or any .c file.\n\nIt could make sense if some part of the toolchain that was previously\nused to generate postgres.bki doesn't work right on that machine.\nOverall though I'd have thought that 372728b would increase not\ndecrease our toolchain footprint. It also seems unlikely that a\nrecent Ubuntu release would contain toolchain bugs that we hadn't\nalready heard about.\n\n> You used make clean too, right ?\n\nReally, when bisecting, you need to use \"make distclean\" or even\n\"git clean -dfx\" between steps, or you may get bogus results,\nbecause our makefiles aren't that great about tracking dependencies,\nespecially when you move backwards in the history.\n\nSo perhaps a more plausible theory is that this bisection result\nis wrong because you weren't careful enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 10:45:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Old Postgresql version on i7-1165g7"
},
{
"msg_contents": "Tom Lane wrote 2021-04-13 17:45:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> On Fri, Apr 09, 2021 at 04:28:25PM +0300, Yura Sokolov wrote:\n>>> Occasinally I found I'm not able to `make check` old Postgresql \n>>> versions.\n> \n>>> I've bisected between REL_11_0 and \"Rename pg_rewind's \n>>> copy_file_range()\"\n>>> and\n>>> found 372728b0d49552641f0ea83d9d2e08817de038fa\n>>>> Replace our traditional initial-catalog-data format with a better\n>>>> design.\n>>> This is first commit where `make check` doesn't fail during initdb on \n>>> my\n>>> machine.\n> \n>> This doesn't make much sense or help much, since 372728b doesn't \n>> actually\n>> change the catalogs, or any .c file.\n> \n> It could make sense if some part of the toolchain that was previously\n> used to generate postgres.bki doesn't work right on that machine.\n> Overall though I'd have thought that 372728b would increase not\n> decrease our toolchain footprint. It also seems unlikely that a\n> recent Ubuntu release would contain toolchain bugs that we hadn't\n> already heard about.\n> \n>> You used make clean too, right ?\n> \n> Really, when bisecting, you need to use \"make distclean\" or even\n> \"git clean -dfx\" between steps, or you may get bogus results,\n> because our makefiles aren't that great about tracking dependencies,\n> especially when you move backwards in the history.\n> \n> So perhaps a more plausible theory is that this bisection result\n> is wrong because you weren't careful enough.\n> \n> \t\t\tregards, tom lane\n\nSorry for missing mail for a week.\n\nI believe I cleaned before each step since I'm building in external \ndirectory\nand cleanup is just `rm * -r`.\n\nBut I'll repeat bisecting tomorrow to be sure.\n\nI don't think it is really PostgreSQL or toolchain bug. I believe it is \nsome\ncorner case that were changed in new Intel CPU.\n\nWith regards,\nYura Sokolov.\n\n\n",
"msg_date": "Sun, 18 Apr 2021 23:29:03 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Old Postgresql version on i7-1165g7"
},
{
"msg_contents": "Yura Sokolov wrote 2021-04-18 23:29:\n> Tom Lane wrote 2021-04-13 17:45:\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> On Fri, Apr 09, 2021 at 04:28:25PM +0300, Yura Sokolov wrote:\n>>>> Occasinally I found I'm not able to `make check` old Postgresql \n>>>> versions.\n>> \n>>>> I've bisected between REL_11_0 and \"Rename pg_rewind's \n>>>> copy_file_range()\"\n>>>> and\n>>>> found 372728b0d49552641f0ea83d9d2e08817de038fa\n>>>>> Replace our traditional initial-catalog-data format with a better\n>>>>> design.\n>>>> This is first commit where `make check` doesn't fail during initdb \n>>>> on my\n>>>> machine.\n>> \n>>> This doesn't make much sense or help much, since 372728b doesn't \n>>> actually\n>>> change the catalogs, or any .c file.\n>> \n>> It could make sense if some part of the toolchain that was previously\n>> used to generate postgres.bki doesn't work right on that machine.\n>> Overall though I'd have thought that 372728b would increase not\n>> decrease our toolchain footprint. It also seems unlikely that a\n>> recent Ubuntu release would contain toolchain bugs that we hadn't\n>> already heard about.\n>> \n>>> You used make clean too, right ?\n>> \n>> Really, when bisecting, you need to use \"make distclean\" or even\n>> \"git clean -dfx\" between steps, or you may get bogus results,\n>> because our makefiles aren't that great about tracking dependencies,\n>> especially when you move backwards in the history.\n\nYep, \"git clean -dfx\" did the job. \"make distclean\" didn't, btw.\nI've had \"src/backend/catalog/schemapg.h\" file in source tree\ngenerated with \"make submake-generated-headers\" on REL_13_0.\nIt was not shown with \"git status\", therefore I didn't notice its\nexistence. It was not deleted with \"make distclean\", nor with\nthe \"git clean -dx\" I tried before. Only \"git clean -dfx\" deletes it.\n\nThank you for the suggestion, Tom. You've saved my sanity.\n\nRegards,\nYura Sokolov.\n\n\n",
"msg_date": "Mon, 19 Apr 2021 12:43:33 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Old Postgresql version on i7-1165g7"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile checking the ExecuteTruncate code for the FOREIGN TRUNCATE\nfeature, I saw that we filter out the duplicate relations specified in\nthe TRUNCATE command. But before skipping the duplicates, we are just\nopening the relation, then if it is present in the already seen\nrelids, then closing it and continuing further.\n\nI think we can just have the duplicate checking before table_open so\nthat in cases like TRUNCATE foo, foo, foo, foo; we could save costs of\ntable_open and table_close. Attaching a small patch. Thoughts?\n\nThis is just like what we already do for child tables, see following\nin ExecuteTruncate:\n foreach(child, children)\n {\n Oid childrelid = lfirst_oid(child);\n\n if (list_member_oid(relids, childrelid))\n continue;\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Apr 2021 20:51:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid unnecessary table open/close for TRUNCATE foo, foo, foo; kind\n of commands"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 8:51 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While checking the ExecuteTruncate code for the FOREIGN TRUNCATE\n> feature, I saw that we filter out the duplicate relations specified in\n> the TRUNCATE command. But before skipping the duplicates, we are just\n> opening the relation, then if it is present in the already seen\n> relids, then closing it and continuing further.\n>\n> I think we can just have the duplicate checking before table_open so\n> that in cases like TRUNCATE foo, foo, foo, foo; we could save costs of\n> table_open and table_close. Attaching a small patch. Thoughts?\n>\n> This is just like what we already do for child tables, see following\n> in ExecuteTruncate:\n> foreach(child, children)\n> {\n> Oid childrelid = lfirst_oid(child);\n>\n> if (list_member_oid(relids, childrelid))\n> continue;\n>\n\nWell yes, the patch looks pretty much reasonable to me.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 9 Apr 2021 21:09:49 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unnecessary table open/close for TRUNCATE foo, foo, foo;\n kind of commands"
},
{
"msg_contents": "\n\nOn 2021/04/10 0:39, Amul Sul wrote:\n> On Fri, Apr 9, 2021 at 8:51 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> While checking the ExecuteTruncate code for the FOREIGN TRUNCATE\n>> feature, I saw that we filter out the duplicate relations specified in\n>> the TRUNCATE command. But before skipping the duplicates, we are just\n>> opening the relation, then if it is present in the already seen\n>> relids, then closing it and continuing further.\n>>\n>> I think we can just have the duplicate checking before table_open so\n>> that in cases like TRUNCATE foo, foo, foo, foo; we could save costs of\n>> table_open and table_close. Attaching a small patch. Thoughts?\n>>\n>> This is just like what we already do for child tables, see following\n>> in ExecuteTruncate:\n>> foreach(child, children)\n>> {\n>> Oid childrelid = lfirst_oid(child);\n>>\n>> if (list_member_oid(relids, childrelid))\n>> continue;\n>>\n> \n> Well yes, the patch looks pretty much reasonable to be.\n\nLGTM, too. I will commit this patch.\nThough that code exists even in older version, I'm not thinking\nto back-patch that because it's not a bug.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 10 Apr 2021 00:53:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unnecessary table open/close for TRUNCATE foo, foo, foo;\n kind of commands"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 9:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/10 0:39, Amul Sul wrote:\n> > On Fri, Apr 9, 2021 at 8:51 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> While checking the ExecuteTruncate code for the FOREIGN TRUNCATE\n> >> feature, I saw that we filter out the duplicate relations specified in\n> >> the TRUNCATE command. But before skipping the duplicates, we are just\n> >> opening the relation, then if it is present in the already seen\n> >> relids, then closing it and continuing further.\n> >>\n> >> I think we can just have the duplicate checking before table_open so\n> >> that in cases like TRUNCATE foo, foo, foo, foo; we could save costs of\n> >> table_open and table_close. Attaching a small patch. Thoughts?\n> >>\n> >> This is just like what we already do for child tables, see following\n> >> in ExecuteTruncate:\n> >> foreach(child, children)\n> >> {\n> >> Oid childrelid = lfirst_oid(child);\n> >>\n> >> if (list_member_oid(relids, childrelid))\n> >> continue;\n> >>\n> >\n> > Well yes, the patch looks pretty much reasonable to be.\n>\n> LGTM, too. I will commit this patch.\n> Though that code exists even in older version, I'm not thinking\n> to back-patch that because it's not a bug.\n>\nAgree, thanks Fujii-San.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 9 Apr 2021 21:51:59 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unnecessary table open/close for TRUNCATE foo, foo, foo;\n kind of commands"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 9:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/04/10 0:39, Amul Sul wrote:\n> > On Fri, Apr 9, 2021 at 8:51 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> While checking the ExecuteTruncate code for the FOREIGN TRUNCATE\n> >> feature, I saw that we filter out the duplicate relations specified in\n> >> the TRUNCATE command. But before skipping the duplicates, we are just\n> >> opening the relation, then if it is present in the already seen\n> >> relids, then closing it and continuing further.\n> >>\n> >> I think we can just have the duplicate checking before table_open so\n> >> that in cases like TRUNCATE foo, foo, foo, foo; we could save costs of\n> >> table_open and table_close. Attaching a small patch. Thoughts?\n> >>\n> >> This is just like what we already do for child tables, see following\n> >> in ExecuteTruncate:\n> >> foreach(child, children)\n> >> {\n> >> Oid childrelid = lfirst_oid(child);\n> >>\n> >> if (list_member_oid(relids, childrelid))\n> >> continue;\n> >>\n> >\n> > Well yes, the patch looks pretty much reasonable to be.\n>\n> LGTM, too. I will commit this patch.\n> Though that code exists even in older version, I'm not thinking\n> to back-patch that because it's not a bug.\n\nThanks. +1 to not back-patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Apr 2021 08:02:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unnecessary table open/close for TRUNCATE foo, foo, foo;\n kind of commands"
},
{
"msg_contents": "\n\nOn 2021/04/10 11:32, Bharath Rupireddy wrote:\n> On Fri, Apr 9, 2021 at 9:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/04/10 0:39, Amul Sul wrote:\n>>> On Fri, Apr 9, 2021 at 8:51 PM Bharath Rupireddy\n>>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> While checking the ExecuteTruncate code for the FOREIGN TRUNCATE\n>>>> feature, I saw that we filter out the duplicate relations specified in\n>>>> the TRUNCATE command. But before skipping the duplicates, we are just\n>>>> opening the relation, then if it is present in the already seen\n>>>> relids, then closing it and continuing further.\n>>>>\n>>>> I think we can just have the duplicate checking before table_open so\n>>>> that in cases like TRUNCATE foo, foo, foo, foo; we could save costs of\n>>>> table_open and table_close. Attaching a small patch. Thoughts?\n>>>>\n>>>> This is just like what we already do for child tables, see following\n>>>> in ExecuteTruncate:\n>>>> foreach(child, children)\n>>>> {\n>>>> Oid childrelid = lfirst_oid(child);\n>>>>\n>>>> if (list_member_oid(relids, childrelid))\n>>>> continue;\n>>>>\n>>>\n>>> Well yes, the patch looks pretty much reasonable to be.\n>>\n>> LGTM, too. I will commit this patch.\n>> Though that code exists even in older version, I'm not thinking\n>> to back-patch that because it's not a bug.\n> \n> Thanks. +1 to not back-patch.\n\nPushed only to the master. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 12 Apr 2021 00:09:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unnecessary table open/close for TRUNCATE foo, foo, foo;\n kind of commands"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm Junduo Dong, a 22-year-old studying at China University of Geosciences.\n\nI would like to participate in the project \"pgagroal: Metrics and monitoring\"\non page \"https://wiki.postgresql.org/wiki/GSoC_2021\" for GSoC 2021.\n\nAttached is the proposal; please review.\n\nI hope that my proposal will be successfully accepted and that I will join\nthe PostgreSQL hacker community in the future.\n\nRegards,\nJunduo Dong",
"msg_date": "Fri, 9 Apr 2021 23:53:42 +0800",
"msg_from": "Junduo Dong <andj4cn@gmail.com>",
"msg_from_op": true,
"msg_subject": "[GSoC] Metrics and Monitoring for pgagroal"
},
{
"msg_contents": "Hi Junduo,\n\nOn 4/9/21 11:53 AM, Junduo Dong wrote:\n> I'm Junduo Dong, a 22-year-old studying at China University of Geosciences.\n>\n> I would like to participate in the project \"pgagroal: Metrics and monitoring\"\n> on page \"https://wiki.postgresql.org/wiki/GSoC_2021\" for GSoC 2021.\n>\n> Attachment is the proposal, please review.\n>\n> I hope that my proposal will be successfully accepted and that I will join\n> the PostgreSQL hacker community in the future.\n>\n\nThanks for your interest in Google Summer of Code, and the pgagroal \nproposal within the PostgreSQL umbrella.\n\n\nI'll contact you off-list, and we can get the process going to finalize \nyour submission to the GSoC program before the April 13 deadline.\n\n\nThanks !\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Fri, 9 Apr 2021 12:11:48 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC] Metrics and Monitoring for pgagroal"
}
] |
[
{
"msg_contents": "$SUBJECT is still a very loosely formed idea, so forgive lack of detail or\nthings I've likely missed, but I wanted to get it out there to see if it\nsounded at all intriguing to people.\n\nBackground: One of the big problems with non-local storage such as AWS EBS\nvolumes or a SAN is that in a large database (really, working set, where\nworking set includes reads) exceeds the size of buffer cache (and page\ncache) the cost of random page reads hitting the underlying disk system\ndominates. This is because networked disks have an order of magnitude\nhigher latency than a bunch of RAIDed SSDs (even more so with NVMe\nstorage). In some of our experiments on Aurora I've seen a 10x change\nversus pretty good physical hardware, and I'd assume RDS (since it's\nEBS-backed) is similar.\n\nA specific area where this is particularly painful is btree index reads.\nWalking the tree to leaf pages isn't naturally prefetchable, and so for\neach level you pay the random page cost. Of course higher levels in the\ntree will almost certainly exhibit emergent behavior such that they (just\nby fact of the LRU caching) will be in the buffer cache, but for a large\nindex lower levels likely won't be.\n\nIf we squint a bit, insertions look a whole lot like reads as well since we\nhave to walk the tree to find the leaf insertion page for a new tuple. This\nis particularly true for indexes where inserts are roughly randomly\ndistributed data, like a uuid.\n\nThe read-for-lookups problem is harder to solve, but the cost as it relates\nto table inserts is possibly more tractable. Tables typically have more\nthan one index to update, so the obvious approach is \"let's just\nparallelize the index insertions\". Of course we know that's difficult given\nthe multi-process approach Postgres uses for parallelism.\n\nAnother approach that at first glance seems like it fits better into\nPostgres (I'm not claiming it's easy or a small patch) would be to process\na batch of indexes at once. For example, if the index access methods were\nextended to allow being given a list of indexes that need to be walked,\nthen the btree code could process each layer in the walk as a group --\nissuing IO fetches for all of the first level blocks in the tree, and then\ncomputing all of the next level blocks needed and issuing those IO requests\nat a time, and so on.\n\nIn some workloads we've been testing I believe such an approach could\nplausibly improve table insert (and update) performance by multiple\nhundreds of percent.\n\nI don't have any code at the moment to show here, but I wanted to get the\nidea out there to see if there were any immediate reactions or other\nthoughts on the topic.\n\nThoughts?\n\nJames",
"msg_date": "Fri, 9 Apr 2021 13:33:31 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Processing btree walks as a batch to parallelize IO"
},
{
"msg_contents": "\n\nOn 4/9/21 7:33 PM, James Coleman wrote:\n> $SUBJECT is still a very loosely formed idea, so forgive lack of detail\n> or things I've likely missed, but I wanted to get it out there to see if\n> it sounded at all intriguing to people. \n> \n> Background: One of the big problems with non-local storage such as AWS\n> EBS volumes or a SAN is that in a large database (really, working set,\n> where working set includes reads) exceeds the size of buffer cache (and\n> page cache) the cost of random page reads hitting the underlying disk\n> system dominates. This is because networked disks have an order of\n> magnitude higher latency than a bunch of RAIDed SSDs (even more so with\n> NVMe storage). In some of our experiments on Aurora I've seen a 10x\n> change versus pretty good physical hardware, and I'd assume RDS (since\n> it's EBS-backed) is similar. \n> \n> A specific area where this is particularly painful is btree index reads.\n> Walking the tree to leaf pages isn't naturally prefetchable, and so for\n> each level you pay the random page cost. Of course higher levels in the\n> tree will almost certainly exhibit emergent behavior such that they\n> (just by fact of the LRU caching) will be in the buffer cache, but for a\n> large index lower levels likely won't be. \n> \n\nWhat do you consider a large index level?\n\nConsider a 1TB table, with just a single UUID column - that's ~25B rows,\ngive or take. Real tables will have more columns, so this seems like a\nreasonable model of the largest number of rows per relation. With ~32B\nper index tuple, that's about 100M leaf pages, and with ~256 branches\nper internal page, that's still only ~5 levels. I think it's quite rare\nto see indexes with more than 6 or 7 levels.\n\nAnd the internal pages are maybe 0.5% of the whole index (so ~4GB out of\n750GB). 
I think the usual expectation is that most of that will fit into\nRAM, but of course there may be more indexes competing for that.\n\nI think the index level is not really the crucial bit - it's more about\nthe total amount of indexes in the DB.\n\n> If we squint a bit, insertions look a whole lot like reads as well since\n> we have to walk the tree to find the leaf insertion page for a new\n> tuple. This is particularly true for indexes where inserts are roughly\n> randomly distributed data, like a uuid. \n> \n\nYep. We need to walk the index to the leaf pages in both cases, both for\nread and insert workloads.\n\n> The read-for-lookups problem is harder to solve, but the cost as it\n> relates to table inserts is possibly more tractable. Tables typically\n> have more than one index to update, so the obvious approach is \"let's\n> just parallelize the index insertions\". Of course we know that's\n> difficult given the multi-process approach Postgres uses for parallelism. \n> \n\nHmm. Not sure if reads are harder to deal with, but I think you're right\nthose two cases (reads and writes) may look similar at the level of a\nsingle index, but may need rather different approaches exactly because\ninserts have to deal with all indexes, while reads only really deal with\na single index.\n\nFWIW I think there are a couple options for improving reads, at least in\nsome cases.\n\n1) I wonder if e.g. _bt_readnextpage could prefetch at least one page\nahead. We can't look further ahead, but perhaps this would help.\n\n2) In some cases (e.g. nested loop with inner indexes scan) we could\ncollect an array of values and then look them up at once, which should\nallow us to do at least some of the I/O in parallel, I think. 
That's\nsimilar to what you propose for writes, except that it works against the\nsame index.\n\n\n> Another approach that at first glance seems like it fits better into\n> Postgres (I'm not claiming it's easy or a small patch) would be to\n> process a batch of indexes at once. For example, if the index access\n> methods were extended to allow being given a list of indexes that need\n> to be walked, then the btree code could process each layer in the walk\n> as a group -- issuing IO fetches for all of the first level blocks in\n> the tree, and then computing all of the next level blocks needed and\n> issuing those IO requests at a time, and so on. \n> \n\nYeah, I agree having a way to say \"prefetch all pages needed to insert\nthese keys into these indexes\" might be better than just parallelizing\nit in a \"naive\" way.\n\nNot sure how complex it would be - I think the API would need to allow\ntraversing the index with each step split into phases:\n\n1) determine the page needed for the next step, return it to caller\n\n2) the caller collects pages from all indexes, initiates prefetch\n\n3) instruct indexes to actually do the next step, stop if it's a leaf\npage (otherwise go to (1))\n\nAnd then we might just do index inserts in a serial way, just like we do\ntoday, hoping to hit the prefetched pages.\n\n\nFWIW while this probably helps saturate the I/O, it unfortunately does\nnothing to reduce the write amplification - we still need to modify the\nsame number of leaf pages in all indexes, produce the same amount of WAL\netc. I think there were some proposals to add small internal buffers,\nand instead of pushing the inserts all the way down to the leaf page,\njust add them to the internal buffer. 
And when the buffer gets full,\npropagate the contents to the next level of buffers.\n\nFor example, each internal page might have one \"buffer\" page, so the\nindex size would not really change (the internal pages would double, but\nit's still just ~1% of the total index size). Of course, this makes\nlookups more complex/expensive, because we need to check the internal\nbuffers. But it does reduce the write amplification, because it combines\nchanges to leaf pages.\n\n> In some workloads we've been testing I believe such an approach could\n> plausibly improve table insert (and update) performance by multiple\n> hundreds of percent. \n> \n> I don't have any code at the moment to show here, but I wanted to get\n> the idea out there to see if there were any immediate reactions or other\n> thoughts on the topic.\n> \n> Thoughts?\n> \n\nI think you're right indexes may be a serious bottleneck in some cases,\nso exploring ways to improve that seems useful. Ultimately I think we\nshould be looking for ways to reduce the amount of work we need to do,\nbut parallelizing it (i.e. doing the same amount of work but in multiple\nprocesses) is a valid approach too.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Apr 2021 22:57:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Processing btree walks as a batch to parallelize IO"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 4:57 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 4/9/21 7:33 PM, James Coleman wrote:\n> > $SUBJECT is still a very loosely formed idea, so forgive lack of detail\n> > or things I've likely missed, but I wanted to get it out there to see if\n> > it sounded at all intriguing to people.\n> >\n> > Background: One of the big problems with non-local storage such as AWS\n> > EBS volumes or a SAN is that in a large database (really, working set,\n> > where working set includes reads) exceeds the size of buffer cache (and\n> > page cache) the cost of random page reads hitting the underlying disk\n> > system dominates. This is because networked disks have an order of\n> > magnitude higher latency than a bunch of RAIDed SSDs (even more so with\n> > NVMe storage). In some of our experiments on Aurora I've seen a 10x\n> > change versus pretty good physical hardware, and I'd assume RDS (since\n> > it's EBS-backed) is similar.\n> >\n> > A specific area where this is particularly painful is btree index reads.\n> > Walking the tree to leaf pages isn't naturally prefetchable, and so for\n> > each level you pay the random page cost. Of course higher levels in the\n> > tree will almost certainly exhibit emergent behavior such that they\n> > (just by fact of the LRU caching) will be in the buffer cache, but for a\n> > large index lower levels likely won't be.\n> >\n>\n> What do you consider a large index level?\n\nIn general it's probably all levels but the leaves (though depends on\ncache and index size etc.)\n\n> Consider a 1TB table, with just a single UUID column - that's ~25B rows,\n> give or take. Real tables will have more columns, so this seems like a\n> reasonable model of the largest number of rows per relation. With ~32B\n> per index tuple, that's about 100M leaf pages, and with ~256 branches\n> per internal page, that's still only ~5 levels. 
I think it's quite rare\n> to see indexes with more than 6 or 7 levels.\n>\n> And the internal pages are maybe 0.5% of the whole index (so ~4GB out of\n> 750GB). I think the usual expectation is that most of that will fit into\n> RAM, but of course there may be more indexes competing for that.\n>\n> I think the index level is not really the crucial bit - it's more about\n> the total amount of indexes in the DB.\n\nI suppose? If the tables/indexes/etc. size is sufficiently large\nrelative to cache size the quantity won't matter.\n\n> > If we squint a bit, insertions look a whole lot like reads as well since\n> > we have to walk the tree to find the leaf insertion page for a new\n> > tuple. This is particularly true for indexes where inserts are roughly\n> > randomly distributed data, like a uuid.\n> >\n>\n> Yep. We need to walk the index to the leaf pages in both cases, both for\n> read and insert workloads.\n>\n> > The read-for-lookups problem is harder to solve, but the cost as it\n> > relates to table inserts is possibly more tractable. Tables typically\n> > have more than one index to update, so the obvious approach is \"let's\n> > just parallelize the index insertions\". Of course we know that's\n> > difficult given the multi-process approach Postgres uses for parallelism.\n> >\n>\n> Hmm. Not sure if reads are harder to deal with, but I think you're right\n> those two cases (reads and writes) may look similar at the level of a\n> single index, but may need rather different approaches exactly because\n> inserts have to deal with all indexes, while reads only really deal with\n> a single index.\n\nRight. In practice it's harder to deal with a single index scan\nbecause you don't have multiple such scans to parallelize.\n\n> FWIW I think there are a couple options for improving reads, at least in\n> some cases.\n>\n> 1) I wonder if e.g. _bt_readnextpage could prefetch at least one page\n> ahead. 
We can't look further ahead, but perhaps this would help.\n>\n> 2) In some cases (e.g. nested loop with inner index scan) we could\n> collect an array of values and then look them up at once, which should\n> allow us to do at least some of the I/O in parallel, I think. That's\n> similar to what you propose for writes, except that it works against the\n> same index.\n\nThe \"collect an array of values\" approach isn't one I'd considered,\nbut seems likely to be interesting.\n\n> > Another approach that at first glance seems like it fits better into\n> > Postgres (I'm not claiming it's easy or a small patch) would be to\n> > process a batch of indexes at once. For example, if the index access\n> > methods were extended to allow being given a list of indexes that need\n> > to be walked, then the btree code could process each layer in the walk\n> > as a group -- issuing IO fetches for all of the first level blocks in\n> > the tree, and then computing all of the next level blocks needed and\n> > issuing those IO requests at a time, and so on.\n> >\n>\n> Yeah, I agree having a way to say \"prefetch all pages needed to insert\n> these keys into these indexes\" might be better than just parallelizing\n> it in a \"naive\" way.\n>\n> Not sure how complex it would be - I think the API would need to allow\n> traversing the index with each step split into phases:\n>\n> 1) determine the page needed for the next step, return it to caller\n>\n> 2) the caller collects pages from all indexes, initiates prefetch\n>\n> 3) instruct indexes to actually do the next step, stop if it's a leaf\n> page (otherwise go to (1))\n>\n> And then we might just do index inserts in a serial way, just like we do\n> today, hoping to hit the prefetched pages.\n\nCorrect; this is roughly what I was envisioning.\n\n> FWIW while this probably helps saturate the I/O, it unfortunately does\n> nothing to reduce the write amplification - we still need to modify the\n> same number of leaf pages in all indexes, 
produce the same amount of WAL\n> etc. I think there were some proposals to add small internal buffers,\n> and instead of pushing the inserts all the way down to the leaf page,\n> just add them to the internal buffer. And when the buffer gets full,\n> propagate the contents to the next level of buffers.\n>\n> For example, each internal page might have one \"buffer\" page, so the\n> index size would not really change (the internal pages would double, but\n> it's still just ~1% of the total index size). Of course, this makes\n> lookups more complex/expensive, because we need to check the internal\n> buffers. But it does reduce the write amplification, because it combines\n> changes to leaf pages.\n\nI think I've seen that discussion, and it's very interesting, but also\nI think it's still orthogonal to this.\n\n> > In some workloads we've been testing I believe such an approach could\n> > plausibly improve table insert (and update) performance by multiple\n> > hundreds of percent.\n> >\n> > I don't have any code at the moment to show here, but I wanted to get\n> > the idea out there to see if there were any immediate reactions or other\n> > thoughts on the topic.\n> >\n> > Thoughts?\n> >\n>\n> I think you're right indexes may be a serious bottleneck in some cases,\n> so exploring ways to improve that seems useful. Ultimately I think we\n> should be looking for ways to reduce the amount of work we need to do,\n> but parallelizing it (i.e. doing the same amount of work but in multiple\n> processes) is a valid approach too.\n\nThanks for the feedback.\n\nJames\n\n\n",
"msg_date": "Fri, 7 May 2021 14:11:23 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Processing btree walks as a batch to parallelize IO"
},
{
"msg_contents": "On Fri, 9 Apr 2021 at 16:58, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 4/9/21 7:33 PM, James Coleman wrote:\n\n> > A specific area where this is particularly painful is btree index reads.\n> > Walking the tree to leaf pages isn't naturally prefetchable, and so for\n> > each level you pay the random page cost. Of course higher levels in the\n> > tree will almost certainly exhibit emergent behavior such that they\n> > (just by fact of the LRU caching) will be in the buffer cache, but for a\n> > large index lower levels likely won't be.\n\nWe've talked before about buffering inserts even just for disk-based\nindexes. Much like how GIN buffers inserts and periodically flushes\nthem out. We talked about doing a local buffer in each session since\nno other session even needs to see these buffered inserts until commit\nanyways. And we can more efficiently merge in multiple keys at once\nthan doing them one by one.\n\nBut that was just for disk i/o. For something longer-latency it would\nbe an even bigger win. Buffer the inserted keys in local memory in\ncase you do lookups in this same session and start the i/o to insert\nthe rows into the index but handle that in the background or in a\nseparate process without blocking the transaction until commit.\n\n> What do you consider a large index level?\n>\n> Consider a 1TB table, with just a single UUID column - that's ~25B rows,\n> give or take. Real tables will have more columns, so this seems like a\n> reasonable model of the largest number of rows per relation. With ~32B\n> per index tuple, that's about 100M leaf pages, and with ~256 branches\n> per internal page, that's still only ~5 levels. I think it's quite rare\n> to see indexes with more than 6 or 7 levels.\n\nThat's a good model for a well-designed schema with an efficient\nindex. There are plenty of less-than-optimal schemas with indexes on\nlonger column lists or fairly large text fields....\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 7 May 2021 18:33:19 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Processing btree walks as a batch to parallelize IO"
},
{
"msg_contents": "On Fri, May 7, 2021 at 3:34 PM Greg Stark <stark@mit.edu> wrote:\n> We've talked before about buffering inserts even just for disk-based\n> indexes. Much like how GIN buffers inserts and periodically flushes\n> them out. We talked about doing a local buffer in each session since\n> no other session even needs to see these buffered inserts until commit\n> anyways. And we can more efficiently merge in multiple keys at once\n> than doing them one by one.\n\nMark Callaghan's high level analysis of the trade-offs here is worth a\nread, too.\n\n> That's a good model for a well-designed schema with an efficient\n> index. There are plenty of less-than-optimal schemas with indexes on\n> longer column lists or fairly large text fields....\n\nSuffix truncation can take care of this -- all you really need is a\nminimally distinguishing separator key to delineate which values\nbelong on which page one level down. It is almost always possible for\nleaf page splits to find a way to make the new high key (also the key\nto be inserted in the parent level) much smaller than your typical\nkey. Granted, we don't have what I've called \"classic\" suffix\ntruncation (within text column truncation) yet, so this analysis isn't\ngoing to work with long text keys (we only truncate at the attribute\ngranularity currently).\n\nEven if we're pessimistic about suffix truncation, the logarithmic\nrate of growth still wins -- Tomas' analysis is sound. You cannot\nrealistically make a Postgres B-Tree have more than about 1% of all\npages as internal pages, unless you make the indexed keys ludicrously\nlarge -- as in several hundred bytes each (~0.5% is typical in\npractice). I think that 6 levels is very pessimistic, even with a\nmassive B-Tree with weirdly large keys. 
My mental model for internal\npages is that they are practically guaranteed to be in shared_buffers\nat all times, which is about as accurate as any generalization like\nthat ever can be.\n\nI once wrote a test harness that deliberately created a B-Tree that\nwas as tall as possible -- something with the largest possible index\ntuples on the leaf level (had to disable TOAST for this). I think that\nit was about 7 or 8 levels deep. The CPU overhead of the test case\nmade it excruciatingly slow, but it wasn't I/O bound at all (pretty\nsure it all fitted in shared_buffers).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 7 May 2021 15:56:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Processing btree walks as a batch to parallelize IO"
}
] |
[
{
"msg_contents": "Buildfarm members spurfowl[1] and thorntail[2] have each shown $SUBJECT\nonce in the past two days. The circumstances are not quite the same;\nspurfowl's failure is in autovacuum while thorntail's is in a manual\nVACUUM command. Still, it seems clear that there's a recently-introduced\nbug here somewhere. I don't see any obvious candidate for the culprit,\nthough. Any ideas?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=spurfowl&dt=2021-04-08%2010%3A22%3A08\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-04-09%2021%3A28%3A10\n\n\n",
"msg_date": "Fri, 09 Apr 2021 18:40:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 3:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Buildfarm members spurfowl[1] and thorntail[2] have each shown $SUBJECT\n> once in the past two days. The circumstances are not quite the same;\n> spurfowl's failure is in autovacuum while thorntail's is in a manual\n> VACUUM command. Still, it seems clear that there's a recently-introduced\n> bug here somewhere. I don't see any obvious candidate for the culprit,\n> though. Any ideas?\n\nThey're both VACUUM ANALYZE. They must be, because the calls to\nvisibilitymap_clear PANIC (they don't ERROR) -- the failing\nvisibilitymap_clear() call must occur inside a critical section, and\nall such calls are made within heapam.c (only VACUUM ANALYZE uses a\ntransaction and does writes). It cannot be the two calls to\nvisibilitymap_clear() inside vacuumlazy.c.\n\nI suspect that you've figured this much already. Just pointing it out.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:27:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-09 18:40:27 -0400, Tom Lane wrote:\n> Buildfarm members spurfowl[1] and thorntail[2] have each shown $SUBJECT\n> once in the past two days. The circumstances are not quite the same;\n> spurfowl's failure is in autovacuum while thorntail's is in a manual\n> VACUUM command. Still, it seems clear that there's a recently-introduced\n> bug here somewhere. I don't see any obvious candidate for the culprit,\n> though. Any ideas?\n\ncommit 7ab96cf6b312cfcd79cdc1a69c6bdb75de0ed30f\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: 2021-04-06 07:49:39 -0700\n\n Refactor lazy_scan_heap() loop.\n\nor some of the other changes in the vicinity could be related. There's\nsome changes when pages are marked as AllVisible, when their free space\nis tracked etc.\n\n\nJust looking at the code in heap_update: I'm a bit confused about\nRelationGetBufferForTuple()'s vmbuffer and vmbuffer_other\narguments. It looks like it's not at all clear which of the two\narguments will have the vmbuffer for which of the pages?\n\n\t\tif (otherBuffer == InvalidBuffer || targetBlock <= otherBlock)\n\t\t\tGetVisibilityMapPins(relation, buffer, otherBuffer,\n\t\t\t\t\t\t\t\t targetBlock, otherBlock, vmbuffer,\n\t\t\t\t\t\t\t\t vmbuffer_other);\n\t\telse\n\t\t\tGetVisibilityMapPins(relation, otherBuffer, buffer,\n\t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n\t\t\t\t\t\t\t\t vmbuffer);\n\nWhich then would make any subsequent use of vmbuffer vs vmbuffer_new in\nheap_update() bogus? 
Because clearly that code associates vmbuffer /\nvmbuffer_new with the respective page?\n\n\t/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */\n\tif (PageIsAllVisible(BufferGetPage(buffer)))\n\t{\n\t\tall_visible_cleared = true;\n\t\tPageClearAllVisible(BufferGetPage(buffer));\n\t\tvisibilitymap_clear(relation, BufferGetBlockNumber(buffer),\n\t\t\t\t\t\t\tvmbuffer, VISIBILITYMAP_VALID_BITS);\n\t}\n\tif (newbuf != buffer && PageIsAllVisible(BufferGetPage(newbuf)))\n\t{\n\t\tall_visible_cleared_new = true;\n\t\tPageClearAllVisible(BufferGetPage(newbuf));\n\t\tvisibilitymap_clear(relation, BufferGetBlockNumber(newbuf),\n\t\t\t\t\t\t\tvmbuffer_new, VISIBILITYMAP_VALID_BITS);\n\t}\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:27:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-09 16:27:12 -0700, Peter Geoghegan wrote:\n> They're both VACUUM ANALYZE. They must be, because the calls to\n> visibilitymap_clear PANIC (they don't ERROR) -- the failing\n> visibilitymap_clear() call must occur inside a critical section, and\n> all such calls are made within heapam.c (only VACUUM ANALYZE uses a\n> transaction and does writes). It cannot be the two calls to\n> visibilitymap_clear() inside vacuumlazy.c.\n\nThere's a stacktrace at the bottom of the spurfowl report:\n\n======-=-====== stack trace: pgsql.build/src/test/regress/tmp_check/data/core ======-=-======\n[New LWP 24172]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: autovacuum worker regression '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 0x00007f77a7967428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54\n54\t../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n#0 0x00007f77a7967428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54\n#1 0x00007f77a796902a in __GI_abort () at abort.c:89\n#2 0x000000000095cf8d in errfinish (filename=<optimized out>, filename@entry=0x9c3fa0 \"visibilitymap.c\", lineno=lineno@entry=155, funcname=funcname@entry=0x9c41c0 <__func__.13853> \"visibilitymap_clear\") at elog.c:680\n#3 0x0000000000501498 in visibilitymap_clear (rel=rel@entry=0x7f77a96d2d28, heapBlk=<optimized out>, buf=buf@entry=0, flags=flags@entry=3 '\\\\003') at visibilitymap.c:155\n#4 0x00000000004e6380 in heap_update (relation=relation@entry=0x7f77a96d2d28, otid=otid@entry=0x2c0394c, newtup=newtup@entry=0x2c03948, cid=0, crosscheck=crosscheck@entry=0x0, wait=wait@entry=true, tmfd=0x7ffe119d2c20, lockmode=0x7ffe119d2c1c) at heapam.c:3993\n#5 0x00000000004e7d70 in simple_heap_update (relation=relation@entry=0x7f77a96d2d28, otid=otid@entry=0x2c0394c, tup=tup@entry=0x2c03948) at 
heapam.c:4211\n#6 0x00000000005811a9 in CatalogTupleUpdate (heapRel=0x7f77a96d2d28, otid=0x2c0394c, tup=0x2c03948) at indexing.c:309\n#7 0x00000000005efc32 in update_attstats (relid=16928, inh=inh@entry=false, natts=natts@entry=1, vacattrstats=vacattrstats@entry=0x2b3c030) at analyze.c:1746\n#8 0x00000000005f264a in update_attstats (vacattrstats=0x2b3c030, natts=1, inh=false, relid=<optimized out>) at analyze.c:589\n#9 do_analyze_rel (onerel=onerel@entry=0x7f77a95c1070, params=params@entry=0x2aba36c, va_cols=va_cols@entry=0x0, acquirefunc=<optimized out>, relpages=33, inh=inh@entry=false, in_outer_xact=false, elevel=13) at analyze.c:589\n#10 0x00000000005f2d8d in analyze_rel (relid=<optimized out>, relation=<optimized out>, params=params@entry=0x2aba36c, va_cols=0x0, in_outer_xact=<optimized out>, bstrategy=<optimized out>) at analyze.c:261\n#11 0x0000000000671721 in vacuum (relations=0x2b492b8, params=params@entry=0x2aba36c, bstrategy=<optimized out>, bstrategy@entry=0x2aba4e8, isTopLevel=isTopLevel@entry=true) at vacuum.c:478\n#12 0x000000000048f02d in autovacuum_do_vac_analyze (bstrategy=0x2aba4e8, tab=0x2aba368) at autovacuum.c:3316\n#13 do_autovacuum () at autovacuum.c:2537\n#14 0x0000000000779d76 in AutoVacWorkerMain (argv=0x0, argc=0) at autovacuum.c:1715\n#15 0x0000000000779e79 in StartAutoVacWorker () at autovacuum.c:1500\n#16 0x0000000000788324 in StartAutovacuumWorker () at postmaster.c:5539\n#17 sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5243\n#18 <signal handler called>\n#19 0x00007f77a7a2f5b3 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:84\n#20 0x0000000000788668 in ServerLoop () at postmaster.c:1701\n#21 0x000000000078a187 in PostmasterMain (argc=argc@entry=8, argv=argv@entry=0x2a408c0) at postmaster.c:1409\n#22 0x0000000000490e48 in main (argc=8, argv=0x2a408c0) at main.c:209\n$1 = {si_signo = 6, si_errno = 0, si_code = -6, _sifields = {_pad = {24172, 1001, 0 <repeats 26 times>}, _kill = {si_pid = 24172, 
si_uid = 1001}, _timer = {si_tid = 24172, si_overrun = 1001, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 24172, si_uid = 1001, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 24172, si_uid = 1001, si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0x3e900005e6c, _addr_lsb = 0, _addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band = 4299262287468, si_fd = 0}}}\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:30:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On 2021-04-09 16:27:39 -0700, Andres Freund wrote:\n> Just looking at the code in heap_update: I'm a bit confused about\n> RelationGetBufferForTuple()'s vmbuffer and vmbuffer_other\n> arguments. It looks like it's not at all clear which of the two\n> arguments will have the vmbuffer for which of the pages?\n> \n> \t\tif (otherBuffer == InvalidBuffer || targetBlock <= otherBlock)\n> \t\t\tGetVisibilityMapPins(relation, buffer, otherBuffer,\n> \t\t\t\t\t\t\t\t targetBlock, otherBlock, vmbuffer,\n> \t\t\t\t\t\t\t\t vmbuffer_other);\n> \t\telse\n> \t\t\tGetVisibilityMapPins(relation, otherBuffer, buffer,\n> \t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n> \t\t\t\t\t\t\t\t vmbuffer);\n\nOh, I missed that the arguments to GetVisibilityMapPins are\nappropriately swapped too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:32:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "I've managed to reproduce this locally, by dint of running the\nsrc/bin/scripts tests over and over and tweaking the timing by\ntrying different \"taskset\" parameters to vary the number of CPUs\navailable. I find that I duplicated the report from spurfowl,\nparticularly\n\n(gdb) bt\n#0 0x00007f67bb6807d5 in raise () from /lib64/libc.so.6\n#1 0x00007f67bb669895 in abort () from /lib64/libc.so.6\n#2 0x000000000094ce37 in errfinish (filename=<optimized out>, \n lineno=<optimized out>, \n funcname=0x9ac120 <__func__.1> \"visibilitymap_clear\") at elog.c:680\n#3 0x0000000000488b8c in visibilitymap_clear (rel=rel@entry=0x7f67b2837330, \n heapBlk=<optimized out>, buf=buf@entry=0, flags=flags@entry=3 '\\003')\n ^^^^^^^^^^^^^^^\n at visibilitymap.c:155\n#4 0x000000000055cd87 in heap_update (relation=0x7f67b2837330, \n otid=0x7f67b274744c, newtup=0x7f67b2747448, cid=<optimized out>, \n crosscheck=<optimized out>, wait=<optimized out>, tmfd=0x7ffecf4d5700, \n lockmode=0x7ffecf4d56fc) at heapam.c:3993\n#5 0x000000000055dd61 in simple_heap_update (\n relation=relation@entry=0x7f67b2837330, otid=otid@entry=0x7f67b274744c, \n tup=tup@entry=0x7f67b2747448) at heapam.c:4211\n#6 0x00000000005e531c in CatalogTupleUpdate (heapRel=0x7f67b2837330, \n otid=0x7f67b274744c, tup=0x7f67b2747448) at indexing.c:309\n#7 0x00000000006420f9 in update_attstats (relid=1255, inh=false, \n natts=natts@entry=30, vacattrstats=vacattrstats@entry=0x19c9fc0)\n at analyze.c:1758\n#8 0x00000000006430dd in update_attstats (vacattrstats=0x19c9fc0, natts=30, \n inh=false, relid=<optimized out>) at analyze.c:1646\n#9 do_analyze_rel (onerel=<optimized out>, params=0x7ffecf4d5e50, \n va_cols=0x0, acquirefunc=<optimized out>, relpages=86, \n inh=<optimized out>, in_outer_xact=false, elevel=13) at analyze.c:589\n#10 0x00000000006447a1 in analyze_rel (relid=<optimized out>, \n relation=<optimized out>, params=params@entry=0x7ffecf4d5e50, va_cols=0x0, \n in_outer_xact=<optimized out>, 
bstrategy=<optimized out>) at analyze.c:261\n#11 0x00000000006a5718 in vacuum (relations=0x19c8158, params=0x7ffecf4d5e50, \n bstrategy=<optimized out>, isTopLevel=<optimized out>) at vacuum.c:478\n#12 0x00000000006a5c94 in ExecVacuum (pstate=pstate@entry=0x1915970, \n vacstmt=vacstmt@entry=0x18ed5c8, isTopLevel=isTopLevel@entry=true)\n at vacuum.c:254\n#13 0x000000000083c32c in standard_ProcessUtility (pstmt=0x18ed918, \n queryString=0x18eca20 \"ANALYZE pg_catalog.pg_proc;\", \n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \n dest=0x18eda08, qc=0x7ffecf4d61c0) at utility.c:826\n\nI'd not paid much attention to that point before, but now it\nseems there is no question that heap_update is reaching line 3993\n\n visibilitymap_clear(relation, BufferGetBlockNumber(buffer),\n vmbuffer, VISIBILITYMAP_VALID_BITS);\n\nwithout having changed \"vmbuffer\" from its initial value of\nInvalidBuffer. It looks that way both at frame 3 and frame 4:\n\n(gdb) f 4\n#4 0x000000000055cd87 in heap_update (relation=0x7f67b2837330, \n otid=0x7f67b274744c, newtup=0x7f67b2747448, cid=<optimized out>, \n crosscheck=<optimized out>, wait=<optimized out>, tmfd=0x7ffecf4d5700, \n lockmode=0x7ffecf4d56fc) at heapam.c:3993\n3993 visibilitymap_clear(relation, BufferGetBlockNumber(buffer),\n(gdb) i locals\n...\nvmbuffer = 0\nvmbuffer_new = 0\n...\n\nIt is also hard to doubt that somebody broke this in the last-minute\ncommit blizzard. There are no reports of this PANIC in the buildfarm for\nthe last month, but we're now up to four (last I checked) since Thursday.\n\nWhile the first thing that comes to mind is a logic bug in heap_update\nitself, that code doesn't seem to have changed much in the last few days.\nMoreover, why is it that this only seems to be happening within\ndo_analyze_rel -> update_attstats? 
(We only have two stack traces\npositively showing that, but all the buildfarm reports look like the\nfailure is happening within manual or auto analyze of a system catalog.\nFishy as heck.)\n\nJust eyeing the evidence on hand, I'm wondering if something has decided\nit can start setting the page-all-visible bit without adequate lock,\nperhaps only in system catalogs. heap_update is clearly assuming that\nthat flag won't change underneath it, and if it did, it's clear how this\nsymptom would ensue.\n\nToo tired to take it further tonight ... discuss among yourselves.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 01:04:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Sat, Apr 10, 2021 at 10:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Just eyeing the evidence on hand, I'm wondering if something has decided\n> it can start setting the page-all-visible bit without adequate lock,\n> perhaps only in system catalogs. heap_update is clearly assuming that\n> that flag won't change underneath it, and if it did, it's clear how this\n> symptom would ensue.\n\nDoes this patch seem to fix the problem?\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 11 Apr 2021 08:47:16 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sat, Apr 10, 2021 at 10:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Just eyeing the evidence on hand, I'm wondering if something has decided\n>> it can start setting the page-all-visible bit without adequate lock,\n>> perhaps only in system catalogs. heap_update is clearly assuming that\n>> that flag won't change underneath it, and if it did, it's clear how this\n>> symptom would ensue.\n\n> Does this patch seem to fix the problem?\n\nHmm ... that looks pretty suspicious, I agree, but why wouldn't an\nexclusive buffer lock be enough to prevent concurrency with heap_update?\n\n(I have zero faith in being able to show that this patch fixes the\nproblem by testing, given how hard it is to reproduce. We need to\nconvince ourselves that this is a fix by logic.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 11:57:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Does this patch seem to fix the problem?\n>\n> Hmm ... that looks pretty suspicious, I agree, but why wouldn't an\n> exclusive buffer lock be enough to prevent concurrency with heap_update?\n\nI don't have any reason to believe that using a super-exclusive lock\nduring heap page vacuuming is necessary. My guess is that returning to\ndoing it that way might make the buildfarm green again. That would at\nleast confirm my suspicion that this code is relevant. The\nsuper-exclusive lock might have been masking the problem for a long\ntime.\n\nHow about temporarily committing this patch, just to review how it\naffects the buildfarm?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 11 Apr 2021 09:10:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 9:10 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't have any reason to believe that using a super-exclusive lock\n> during heap page vacuuming is necessary. My guess is that returning to\n> doing it that way might make the buildfarm green again. That would at\n> least confirm my suspicion that this code is relevant. The\n> super-exclusive lock might have been masking the problem for a long\n> time.\n\nThis isn't just any super-exclusive lock, either -- we were calling\nConditionalLockBufferForCleanup() at this point.\n\nI now think that there is a good chance that we are seeing these\nsymptoms because the \"conditional-ness\" went away -- we accidentally\nrelied on that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 11 Apr 2021 09:43:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> This isn't just any super-exclusive lock, either -- we were calling\n> ConditionalLockBufferForCleanup() at this point.\n\n> I now think that there is a good chance that we are seeing these\n> symptoms because the \"conditional-ness\" went away -- we accidentally\n> relied on that.\n\nAh, I see it. In the fragment of heap_update where we have to do some\nTOAST work (starting at line 3815) we transiently *release our lock*\non the old tuple's page. Unlike the earlier fragments that did that,\nthis code path has no provision for rechecking whether the page has\nbecome all-visible, so if that does happen while we're without the\nlock then we lose. (It does look like RelationGetBufferForTuple\nknows about updating vmbuffer, but there's one code path through the\nif-nest at 3850ff that doesn't call that.)\n\nSo the previous coding in vacuumlazy didn't tickle this because it would\nonly set the all-visible bit on a page it had superexclusive lock on;\nthat is, continuing to hold the pin was sufficient. Nonetheless, if\nfour out of five paths through heap_update take care of this matter,\nI'd say it's heap_update's bug not vacuumlazy's bug that the fifth path\ndoesn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 13:13:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "I wrote:\n> (It does look like RelationGetBufferForTuple\n> knows about updating vmbuffer, but there's one code path through the\n> if-nest at 3850ff that doesn't call that.)\n\nAlthough ... isn't RelationGetBufferForTuple dropping the ball on this\npoint too, in the code path at the end where it has to extend the relation?\n\nI'm now inclined to think that we should toss every single line of that\ncode, take RelationGetBufferForTuple out of the equation, and have just\n*one* place that rechecks for PageAllVisible having just become set.\nIt's a rare enough case that optimizing it is completely not worth the\ncode complexity and risk (er, reality) of hard-to-locate bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 13:41:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "I wrote:\n> I'm now inclined to think that we should toss every single line of that\n> code, take RelationGetBufferForTuple out of the equation, and have just\n> *one* place that rechecks for PageAllVisible having just become set.\n> It's a rare enough case that optimizing it is completely not worth the\n> code complexity and risk (er, reality) of hard-to-locate bugs.\n\nAlternatively, we could do what you suggested and redefine things\nso that one is only allowed to set the all-visible bit while holding\nsuperexclusive lock; which again would allow an enormous simplification\nin heap_update and cohorts. Either way, it's hard to argue that\nheap_update hasn't crossed the complexity threshold where it's\nimpossible to maintain safely. We need to simplify it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 13:55:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 10:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alternatively, we could do what you suggested and redefine things\n> so that one is only allowed to set the all-visible bit while holding\n> superexclusive lock; which again would allow an enormous simplification\n> in heap_update and cohorts.\n\nGreat detective work.\n\nI would rather not go back to requiring a superexclusive lock in\nvacuumlazy.c (outside of pruning), actually -- I was only pointing out\nthat that had changed, and was likely to be relevant. It wasn't a real\nproposal.\n\nI think that it would be hard to justify requiring a super-exclusive\nlock just to call PageSetAllVisible(). PD_ALL_VISIBLE is fundamentally\nredundant information, so somehow it feels like the wrong design.\n\n> Either way, it's hard to argue that\n> heap_update hasn't crossed the complexity threshold where it's\n> impossible to maintain safely. We need to simplify it.\n\nIt is way too complicated. I don't think that I quite understand your\nfirst proposal right now, so I'll need to go think about it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 11 Apr 2021 11:07:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Apr 11, 2021 at 10:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Either way, it's hard to argue that\n>> heap_update hasn't crossed the complexity threshold where it's\n>> impossible to maintain safely. We need to simplify it.\n\n> It is way too complicated. I don't think that I quite understand your\n> first proposal right now, so I'll need to go think about it.\n\nIt wasn't very clear, because I hadn't thought it through very much;\nbut what I'm imagining is that we discard most of the thrashing around\nall-visible rechecks and have just one such test somewhere very late\nin heap_update, after we've successfully acquired a target buffer for\nthe update and are no longer going to possibly need to release any\nbuffer lock. If at that one point we see the page is all-visible\nand we don't have the vmbuffer, then we have to release all our locks\nand go back to \"l2\". Which is less efficient than some of the existing\ncode paths, but given how hard this problem is to reproduce, it seems\nclear that optimizing for the occurrence is just not worth it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 14:16:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It wasn't very clear, because I hadn't thought it through very much;\n> but what I'm imagining is that we discard most of the thrashing around\n> all-visible rechecks and have just one such test somewhere very late\n> in heap_update, after we've successfully acquired a target buffer for\n> the update and are no longer going to possibly need to release any\n> buffer lock. If at that one point we see the page is all-visible\n> and we don't have the vmbuffer, then we have to release all our locks\n> and go back to \"l2\". Which is less efficient than some of the existing\n> code paths, but given how hard this problem is to reproduce, it seems\n> clear that optimizing for the occurrence is just not worth it.\n\nOh! That sounds way better.\n\nThis reminds me of the tupgone case that I exorcised from vacuumlazy.c\n(in the same commit that stopped using a superexclusive lock). It was\nalso described as an optimization that wasn't quite worth it. But I\ndon't quite buy that. ISTM that there is a better explanation: it\nevolved the appearance of being an optimization that might make sense.\nWhich was just camouflage.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 11 Apr 2021 11:28:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Apr 11, 2021 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It wasn't very clear, because I hadn't thought it through very much;\n>> but what I'm imagining is that we discard most of the thrashing around\n>> all-visible rechecks and have just one such test somewhere very late\n>> in heap_update, after we've successfully acquired a target buffer for\n>> the update and are no longer going to possibly need to release any\n>> buffer lock. If at that one point we see the page is all-visible\n>> and we don't have the vmbuffer, then we have to release all our locks\n>> and go back to \"l2\". Which is less efficient than some of the existing\n>> code paths, but given how hard this problem is to reproduce, it seems\n>> clear that optimizing for the occurrence is just not worth it.\n\n> Oh! That sounds way better.\n\nAfter poking at this for awhile, it seems like it won't work very nicely.\nThe problem is that once we've invoked the toaster, we really don't want\nto just abandon that work; we'd leak any toasted out-of-line data that\nwas created.\n\nSo I think we have to stick with the current basic design, and just\ntighten things up to make sure that visibility pins are accounted for\nin the places that are missing it.\n\nHence, I propose the attached. It passes check-world, but that proves\nabsolutely nothing of course :-(. I wonder if there is any way to\nexercise these code paths deterministically.\n\n(I have realized BTW that I was exceedingly fortunate to reproduce\nthe buildfarm report here --- I have run hundreds of additional\ncycles of the same test scenario without getting a second failure.)\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 12 Apr 2021 12:19:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 9:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I think we have to stick with the current basic design, and just\n> tighten things up to make sure that visibility pins are accounted for\n> in the places that are missing it.\n>\n> Hence, I propose the attached. It passes check-world, but that proves\n> absolutely nothing of course :-(. I wonder if there is any way to\n> exercise these code paths deterministically.\n\nThis approach seems reasonable to me. At least you've managed to\nstructure the visibility map page pin check as concomitant with the\nexisting space recheck.\n\n> (I have realized BTW that I was exceedingly fortunate to reproduce\n> the buildfarm report here --- I have run hundreds of additional\n> cycles of the same test scenario without getting a second failure.)\n\nIn the past I've had luck with RR's chaos mode (most notably with the\nJepsen SSI bug). That didn't work for me here, though I might just\nhave not persisted with it for long enough. I should probably come up\nwith a shell script that runs the same thing hundreds of times or more\nin chaos mode, while making sure that useless recordings don't\naccumulate.\n\nThe feature is described here:\n\nhttps://robert.ocallahan.org/2016/02/introducing-rr-chaos-mode.html\n\nYou only have to be lucky once. Once that happens, you're left with a\nrecording to review and re-review at your leisure. This includes all\nPostgres backends, maybe even pg_regress and other scaffolding (if\nthat's what you're after).\n\nBut that's for debugging, not testing. The only way that we'll ever be\nable to test stuff like this is with something like Alexander\nKorotkov's stop events patch [1]. That infrastructure should be added\nsooner rather than later.\n\n[1] https://postgr.es/m/CAPpHfdtSEOHX8dSk9Qp+Z++i4BGQoffKip6JDWngEA+g7Z-XmQ@mail.gmail.com\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Apr 2021 11:03:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-11 13:55:30 -0400, Tom Lane wrote:\n> Either way, it's hard to argue that heap_update hasn't crossed the\n> complexity threshold where it's impossible to maintain safely. We\n> need to simplify it.\n\nYea, I think we're well beyond that point. I can see a few possible\nsteps to wrangle the existing complexity into an easier to understand\nshape:\n\n- Rename heapam.c goto labels, they're useless to understand what is\n happening.\n\n- Move HeapTupleSatisfiesUpdate() call and the related branches\n afterwards into its own function.\n\n- Move \"temporarily mark it locked\" branch into its own function. It's a\n minimal implementation of tuple locking, so it seems more than\n separate enough.\n\n- Move the \"store the new tuple\" part into its own function (pretty much\n the critical section).\n\n- Probably worth unifying the exit paths - there's a fair bit of\n duplication by now...\n\nHalf related:\n\n- I think we might also need to do something about the proliferation of\n bitmaps in heap_update(). We now separately allocate 5 bitmapsets -\n that strikes me as fairly insane.\n\n\nHowever, these would not really address the complexity in itself, just\nmake it easier to manage.\n\nISTM that a lot of the complexity is related to needing to retry (and\navoiding doing so unnecessarily), which in turn is related to avoiding\ndeadlocks. We actually know how to not need that to the same degree -\nthe (need_toast || newtupsize > pagefree) locks the tuple and afterwards\nhas a lot more freedom. We obviously can't just always do that, due to\nthe WAL logging overhead.\n\nI wonder if we could make that path avoid the WAL logging overhead. We\ndon't actually need a full blown tuple lock, potentially even with its\nown multixact, here.\n\nThe relevant comment (in heap_lock_tuple()) says:\n\t/*\n\t * XLOG stuff. You might think that we don't need an XLOG record because\n\t * there is no state change worth restoring after a crash. 
You would be\n\t * wrong however: we have just written either a TransactionId or a\n\t * MultiXactId that may never have been seen on disk before, and we need\n\t * to make sure that there are XLOG entries covering those ID numbers.\n\t * Else the same IDs might be re-used after a crash, which would be\n\t * disastrous if this page made it to disk before the crash. Essentially\n\t * we have to enforce the WAL log-before-data rule even in this case.\n\t * (Also, in a PITR log-shipping or 2PC environment, we have to have XLOG\n\t * entries for everything anyway.)\n\t */\n\nBut I don't really think that doing full-blown WAL tuple-locking WAL\nlogging is really the right solution.\n\n- The \"next xid\" concerns are at least as easily solved by WAL logging a\n distinct \"highest xid assigned\" record when necessary. Either by\n having a shared memory variable saying \"latestLoggedXid\" or such, or\n by having end-of-recovery advance nextXid to beyond what ExtendCLOG()\n extended to. That reduces the overhead to at most once-per-xact (and\n commonly smaller) or nothing, respectively.\n\n- While there's obviously a good bit of simplicity ensuring that a\n replica is exactly the same (\"Also, in a PITR log-shipping or 2PC\n environment ...\"), we don't actually enforce that strictly anyway -\n so I am not sure why it's necessary to pay the price here.\n\nBut maybe I'm all wet here, I certainly haven't had enough coffee.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 13:40:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Apr 12, 2021 at 9:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hence, I propose the attached. It passes check-world, but that proves\n>> absolutely nothing of course :-(. I wonder if there is any way to\n>> exercise these code paths deterministically.\n\n> This approach seems reasonable to me. At least you've managed to\n> structure the visibility map page pin check as concomitant with the\n> existing space recheck.\n\nThanks for looking it over. Do you have an opinion on whether or not\nto back-patch? As far as we know, these bugs aren't exposed in the\nback branches for lack of code that would set the all-visible flag\nwithout superexclusive lock. But I'd still say that heap_update\nis failing to honor its API contract in these edge cases, and that\nseems like something that could bite us after future back-patches.\nOr there might be third-party code that can set all-visible flags.\nSo I'm kind of tempted to back-patch, but it's a judgment call.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 21:33:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 6:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thanks for looking it over. Do you have an opinion on whether or not\n> to back-patch? As far as we know, these bugs aren't exposed in the\n> back branches for lack of code that would set the all-visible flag\n> without superexclusive lock. But I'd still say that heap_update\n> is failing to honor its API contract in these edge cases, and that\n> seems like something that could bite us after future back-patches.\n\nIf we assume that a scenario like the one we've been debugging can\nnever happen in the backbranches, then we must also assume that your\nfix has negligible risk in the backbranches, because of how it is\nstructured. And so I agree -- I lean towards backpatching.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Apr 2021 18:54:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm reading the code for vacuum/analyze and it looks like currently we\ncall vacuum_rel/analyze_rel for each relation specified. Which means\nthat if a relation is specified more than once, then we simply\nvacuum/analyze it that many times. Do we gain any advantage by\nvacuuming/analyzing a relation back-to-back within a single command? I\nstrongly feel no. I'm thinking we could do a simple optimization here,\nby transforming following VACUUM/ANALYZE commands to:\n1) VACUUM t1, t2, t1, t2, t1; transform to -->\nVACUUM t1, t2;\n2) VACUUM ANALYZE t1(a1), t2(a2), t1(b1), t2(b2), t1(c1);\ntransform to --> VACUUM ANALYZE t1(a1, b1, c1), t2(a2, b2)\n3) ANALYZE t1, t2, t1, t2, t1; transform to -->\nANALYZE t1, t2;\n4) ANALYZE t1(a1), t2(a2), t1(b1), t2(b2), t1(c1);\ntransform to --> ANALYZE t1(a1, b1, c1), t2(a2, b2)\n\nAbove use cases may look pretty much unsound and we could think of\ndisallowing with an error for the use cases (1) and 3(), but the use\ncases (2) and (4) are quite possible in customer scenarios(??). Please\nfeel free to add any other use cases you may think of.\n\nThe main advantage of the above said optimization is that the commands\ncan become a bit faster because we will avoid extra processing. I\nwould like to hear opinions on this. I'm not sure if this optimization\nwas already given a thought and not done because of some specific\nreasons. If so, it will be great if someone can point me to those\ndiscussions. Or it could be that I'm badly missing in my understanding\nof current vacuum/analyze code, feel free to correct me.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Apr 2021 13:13:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is it worth to optimize VACUUM/ANALYZE by combining duplicate rel\n instances into single rel instance?"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> I'm reading the code for vacuum/analyze and it looks like currently we\n> call vacuum_rel/analyze_rel for each relation specified. Which means\n> that if a relation is specified more than once, then we simply\n> vacuum/analyze it that many times. Do we gain any advantage by\n> vacuuming/analyzing a relation back-to-back within a single command? I\n> strongly feel no. I'm thinking we could do a simple optimization here,\n\nThis really is not something to expend cycles and code complexity on.\nIf the user wrote the same table more than once, that's their choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Apr 2021 10:33:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is it worth to optimize VACUUM/ANALYZE by combining duplicate rel\n instances into single rel instance?"
},
{
"msg_contents": "On Sat, Apr 10, 2021 at 8:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > I'm reading the code for vacuum/analyze and it looks like currently we\n> > call vacuum_rel/analyze_rel for each relation specified. Which means\n> > that if a relation is specified more than once, then we simply\n> > vacuum/analyze it that many times. Do we gain any advantage by\n> > vacuuming/analyzing a relation back-to-back within a single command? I\n> > strongly feel no. I'm thinking we could do a simple optimization here,\n>\n> This really is not something to expend cycles and code complexity on.\n> If the user wrote the same table more than once, that's their choice.\n\nThanks! I think we could avoid extra processing costs for cases like\nVACUUM/ANALYZE foo, foo; when no explicit columns are specified. The\navoided costs can be lock acquire, relation open, vacuum/analyze,\nrelation close, starting new xact command, command counter increment\nin case of analyze etc. This can be done with a simple patch like the\nattached. When explicit columns are specified along with relations\ni.e. VACUUM/ANALYZE foo(c1), foo(c2); we don't want to do the extra\ncomplex processing to optimize the cases when c1 = c2.\n\nNote that the TRUNCATE command currently skips processing repeated\nrelations (see ExecuteTruncate). For example, TRUNCATE foo, foo; and\nTRUNCATE foo, ONLY foo, foo; first instance of relation foo is taken\ninto consideration for processing and other relation instances\n(options specified if any) are ignored.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Apr 2021 07:34:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it worth to optimize VACUUM/ANALYZE by combining duplicate rel\n instances into single rel instance?"
},
{
"msg_contents": "At Wed, 21 Apr 2021 07:34:40 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Sat, Apr 10, 2021 at 8:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > I'm reading the code for vacuum/analyze and it looks like currently we\n> > > call vacuum_rel/analyze_rel for each relation specified. Which means\n> > > that if a relation is specified more than once, then we simply\n> > > vacuum/analyze it that many times. Do we gain any advantage by\n> > > vacuuming/analyzing a relation back-to-back within a single command? I\n> > > strongly feel no. I'm thinking we could do a simple optimization here,\n> >\n> > This really is not something to expend cycles and code complexity on.\n> > If the user wrote the same table more than once, that's their choice.\n> \n> Thanks! I think we could avoid extra processing costs for cases like\n> VACUUM/ANALYZE foo, foo; when no explicit columns are specified. The\n> avoided costs can be lock acquire, relation open, vacuum/analyze,\n> relation close, starting new xact command, command counter increment\n> in case of analyze etc. This can be done with a simple patch like the\n> attached. When explicit columns are specified along with relations\n> i.e. VACUUM/ANALYZE foo(c1), foo(c2); we don't want to do the extra\n> complex processing to optimize the cases when c1 = c2.\n> \n> Note that the TRUNCATE command currently skips processing repeated\n> relations (see ExecuteTruncate). For example, TRUNCATE foo, foo; and\n> TRUNCATE foo, ONLY foo, foo; first instance of relation foo is taken\n> into consideration for processing and other relation instances\n> (options specified if any) are ignored.\n> \n> Thoughts?\n\nAlthough I don't strongly oppose to check that, the check of truncate\nis natural and required. 
The relation list is anyway used afterwards,\nand we cannot truncate the same relation twice or more since a\nrelation under \"use\" cannot be truncated. (Truncation is one form of\nuse). In short, TRUNCATE runs no checking just for the check's own\nsake.\n\nOn the other hand the patch creates a relation list just for this\npurpose, which is not needed to run VACUUM/ANALYZE, and VACUUM/ANALYE\nworks well with duplicates in target relations.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 21 Apr 2021 11:32:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is it worth to optimize VACUUM/ANALYZE by combining duplicate\n rel instances into single rel instance?"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 11:32:49AM +0900, Kyotaro Horiguchi wrote:\n> On the other hand the patch creates a relation list just for this\n> purpose, which is not needed to run VACUUM/ANALYZE, and VACUUM/ANALYE\n> works well with duplicates in target relations.\n\nYeah, I don't think either that this is worth spending cycles on, not\nto mention that the current behavior could be handy as VACUUM uses\nseparate transactions for each relation vacuumed if more than one\nrelation is listed in the set.\n--\nMichael",
"msg_date": "Wed, 21 Apr 2021 11:50:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Is it worth to optimize VACUUM/ANALYZE by combining duplicate\n rel instances into single rel instance?"
},
{
"msg_contents": "On Wed, Apr 21, 2021 at 8:02 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > Thanks! I think we could avoid extra processing costs for cases like\n> > VACUUM/ANALYZE foo, foo; when no explicit columns are specified. The\n> > avoided costs can be lock acquire, relation open, vacuum/analyze,\n> > relation close, starting new xact command, command counter increment\n> > in case of analyze etc. This can be done with a simple patch like the\n> > attached. When explicit columns are specified along with relations\n> > i.e. VACUUM/ANALYZE foo(c1), foo(c2); we don't want to do the extra\n> > complex processing to optimize the cases when c1 = c2.\n> >\n> > Note that the TRUNCATE command currently skips processing repeated\n> > relations (see ExecuteTruncate). For example, TRUNCATE foo, foo; and\n> > TRUNCATE foo, ONLY foo, foo; first instance of relation foo is taken\n> > into consideration for processing and other relation instances\n> > (options specified if any) are ignored.\n> >\n> > Thoughts?\n>\n> Although I don't strongly oppose to check that, the check of truncate\n> is natural and required. The relation list is anyway used afterwards,\n> and we cannot truncate the same relation twice or more since a\n> relation under \"use\" cannot be truncated. (Truncation is one form of\n> use). In short, TRUNCATE runs no checking just for the check's own\n> sake.\n\nThanks for the point. Yes, if we don't skip repeated instances we do\nget below error:\npostgres=# truncate t1, t1;\nERROR: cannot TRUNCATE \"t1\" because it is being used by active\nqueries in this session\n\n> On the other hand the patch creates a relation list just for this\n> purpose, which is not needed to run VACUUM/ANALYZE, and VACUUM/ANALYE\n> works well with duplicates in target relations.\n\nYeah, the relids list is only used to skip the duplicates. 
I feel\nthat's okay given the negligible extra processing (searching for the\nrelids in the list) we add with it versus the extra processing we\navoid with skipping duplicates, see [1].\n\nAlthough VACUUM/ANALYZE works well with duplicate relations without\nany error (unlike TRUNCATE), is there any benefit if we run\nback-to-back VACUUM/ANALYZE within a single command? I assume that\nthere's no benefit. My only point was that even if somebody specifies\nduplicate relations, we could avoid some processing effort see [1] for\nthe gain. For ANALYZE, we can avoid doing extra\nStartTransactionCommand, CommitTransactionCommand and\nCommandCounterIncrement as well.\n\nI know the use cases that I'm trying to optimize with the patch are\nworthless and unrealistic (may be written by someone like me). Since\nwe generally don't optimize for rare and unrecommended scenarios, I'm\nokay if we drop this patch. But I would like to mention [1] the gain\nwe get with the patch.\n\n[1] tested on my dev system, with default postgresql.conf, t1 is\nhaving 10mn rows:\nHEAD:\npostgres=# analyze t1;\nTime: 363.580 ms\npostgres=# analyze t1;\nTime: 384.760 ms\n\npostgres=# analyze t1, t1;\nTime: 687.976 ms\npostgres=# analyze t1, t1;\nTime: 664.420 ms\n\npostgres=# analyze t1, t1, t1;\nTime: 1010.855 ms (00:01.011)\npostgres=# analyze t1, t1, t1;\nTime: 1119.970 ms (00:01.120)\n\npostgres=# analyze t1, t1, t1, t1;\nTime: 1350.345 ms (00:01.350)\npostgres=# analyze t1, t1, t1, t1;\nTime: 1316.738 ms (00:01.317)\n\npostgres=# analyze t1, t1, t1, t1, t1;\nTime: 1651.780 ms (00:01.652)\npostgres=# analyze t1, t1, t1, t1, t1, t1;\nTime: 1983.163 ms (00:01.983)\n\nPATCHed:\npostgres=# analyze t1;\nTime: 356.709 ms\npostgres=# analyze t1;\nTime: 360.780 ms\n\npostgres=# analyze t1, t1;\nTime: 377.193 ms\npostgres=# analyze t1, t1;\nTime: 370.636 ms\n\npostgres=# analyze t1, t1, t1;\nTime: 364.271 ms\npostgres=# analyze t1, t1, t1;\nTime: 349.988 ms\n\npostgres=# analyze t1, t1, t1, t1;\nTime: 
362.567 ms\npostgres=# analyze t1, t1, t1, t1;\nTime: 383.292 ms\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Apr 2021 10:24:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it worth to optimize VACUUM/ANALYZE by combining duplicate rel\n instances into single rel instance?"
}
] |
[
{
"msg_contents": "One of our tests purposely throws an error which returns\n\n\"ERROR: R interpreter parse error\" on linux\nand\n\n\"WARNING: R interpreter parse error\" on windows.\n\nI'm hoping someone can point me to the code that may be responsible? Was\nthere a change in the error handling that might be attributed to this?\n\nDave Cramer\n\nOne of our tests purposely throws an error which returns \"ERROR: R interpreter parse error\" on linux and \"WARNING: R interpreter parse error\" on windows.I'm hoping someone can point me to the code that may be responsible? Was there a change in the error handling that might be attributed to this?Dave Cramer",
"msg_date": "Sat, 10 Apr 2021 19:53:44 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> One of our tests purposely throws an error which returns\n> \"ERROR: R interpreter parse error\" on linux\n> and\n> \"WARNING: R interpreter parse error\" on windows.\n\nThat's quite bizarre. What is the actual error level according to\nthe source code, and where is the error being thrown exactly?\n\nI recall that elog.c has some code to force ERROR up to FATAL or\nPANIC in some cases, but it shouldn't ever promote a non-error to\nan ERROR.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Apr 2021 20:24:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > One of our tests purposely throws an error which returns\n> > \"ERROR: R interpreter parse error\" on linux\n> > and\n> > \"WARNING: R interpreter parse error\" on windows.\n>\n> That's quite bizarre. What is the actual error level according to\n> the source code, and where is the error being thrown exactly?\n>\n> I recall that elog.c has some code to force ERROR up to FATAL or\n> PANIC in some cases, but it shouldn't ever promote a non-error to\n> an ERROR.\n>\n\nWell it really is an ERROR, and is being downgraded on windows to WARNING.\n\nI was hoping someone familiar with the code could remember something before\nI dig into this tomorrow.\n\nThanks,\nDave\n\n>\n> regards, tom lane\n>\n\nOn Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@gmail.com> writes:\n> One of our tests purposely throws an error which returns\n> \"ERROR: R interpreter parse error\" on linux\n> and\n> \"WARNING: R interpreter parse error\" on windows.\n\nThat's quite bizarre. What is the actual error level according to\nthe source code, and where is the error being thrown exactly?\n\nI recall that elog.c has some code to force ERROR up to FATAL or\nPANIC in some cases, but it shouldn't ever promote a non-error to\nan ERROR.Well it really is an ERROR, and is being downgraded on windows to WARNING.I was hoping someone familiar with the code could remember something before I dig into this tomorrow.Thanks,Dave \n\n regards, tom lane",
"msg_date": "Sat, 10 Apr 2021 20:27:53 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That's quite bizarre. What is the actual error level according to\n>> the source code, and where is the error being thrown exactly?\n\n> Well it really is an ERROR, and is being downgraded on windows to WARNING.\n\nThat seems quite awful.\n\nHowever, now that I think about it, the elog.h error-level constants\nwere renumbered not so long ago. Maybe you've failed to recompile\neverything for v14?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Apr 2021 20:34:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On Sat, 10 Apr 2021 at 20:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That's quite bizarre. What is the actual error level according to\n> >> the source code, and where is the error being thrown exactly?\n>\n> > Well it really is an ERROR, and is being downgraded on windows to\n> WARNING.\n>\n> That seems quite awful.\n>\n> However, now that I think about it, the elog.h error-level constants\n> were renumbered not so long ago. Maybe you've failed to recompile\n> everything for v14?\n>\n\nWe see this on a CI with a fresh pull from master.\n\nAs I said I will dig into it and figure it out.\n\nCheers,\n\nDave\n\n>\n> regards, tom lane\n>\n\nOn Sat, 10 Apr 2021 at 20:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@gmail.com> writes:\n> On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That's quite bizarre. What is the actual error level according to\n>> the source code, and where is the error being thrown exactly?\n\n> Well it really is an ERROR, and is being downgraded on windows to WARNING.\n\nThat seems quite awful.\n\nHowever, now that I think about it, the elog.h error-level constants\nwere renumbered not so long ago. Maybe you've failed to recompile\neverything for v14?We see this on a CI with a fresh pull from master.As I said I will dig into it and figure it out. Cheers,Dave \n\n regards, tom lane",
"msg_date": "Sat, 10 Apr 2021 20:38:17 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On 4/11/21 2:38 AM, Dave Cramer wrote:\n> \n> \n> \n> \n> On Sat, 10 Apr 2021 at 20:34, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> Dave Cramer <davecramer@gmail.com <mailto:davecramer@gmail.com>> writes:\n> > On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> >> That's quite bizarre. What is the actual error level according to\n> >> the source code, and where is the error being thrown exactly?\n> \n> > Well it really is an ERROR, and is being downgraded on windows to\n> WARNING.\n> \n> That seems quite awful.\n> \n> However, now that I think about it, the elog.h error-level constants\n> were renumbered not so long ago. Maybe you've failed to recompile\n> everything for v14?\n> \n> \n> We see this on a CI with a fresh pull from master.\n> \n> As I said I will dig into it and figure it out. \n> \n\nWell, plr.h does this:\n\n#define WARNING\t\t19\n#define ERROR\t\t20\n\nwhich seems a bit weird, because elog.h does this (since 1f9158ba481):\n\n#define WARNING\t\t19\n#define WARNING_CLIENT_ONLY\t20\n#define ERROR\t\t21\n\nNot sure why this would break Windows but not Linux, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 11 Apr 2021 02:56:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On Sat, 10 Apr 2021 at 20:56, Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 4/11/21 2:38 AM, Dave Cramer wrote:\n> >\n> >\n> >\n> >\n> > On Sat, 10 Apr 2021 at 20:34, Tom Lane <tgl@sss.pgh.pa.us\n> > <mailto:tgl@sss.pgh.pa.us>> wrote:\n> >\n> > Dave Cramer <davecramer@gmail.com <mailto:davecramer@gmail.com>>\n> writes:\n> > > On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us\n> > <mailto:tgl@sss.pgh.pa.us>> wrote:\n> > >> That's quite bizarre. What is the actual error level according to\n> > >> the source code, and where is the error being thrown exactly?\n> >\n> > > Well it really is an ERROR, and is being downgraded on windows to\n> > WARNING.\n> >\n> > That seems quite awful.\n> >\n> > However, now that I think about it, the elog.h error-level constants\n> > were renumbered not so long ago. Maybe you've failed to recompile\n> > everything for v14?\n> >\n> >\n> > We see this on a CI with a fresh pull from master.\n> >\n> > As I said I will dig into it and figure it out.\n> >\n>\n> Well, plr.h does this:\n>\n> #define WARNING 19\n> #define ERROR 20\n>\n> which seems a bit weird, because elog.h does this (since 1f9158ba481):\n>\n> #define WARNING 19\n> #define WARNING_CLIENT_ONLY 20\n> #define ERROR 21\n>\n> Not sure why this would break Windows but not Linux, though.\n>\n>\nThanks, I think ERROR is redefined in Windows as well for some strange\nreason.\n\nDave\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nOn Sat, 10 Apr 2021 at 20:56, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:On 4/11/21 2:38 AM, Dave Cramer wrote:\n> \n> \n> \n> \n> On Sat, 10 Apr 2021 at 20:34, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> Dave Cramer <davecramer@gmail.com <mailto:davecramer@gmail.com>> writes:\n> > On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> >> That's quite bizarre. 
What is the actual error level according to\n> >> the source code, and where is the error being thrown exactly?\n> \n> > Well it really is an ERROR, and is being downgraded on windows to\n> WARNING.\n> \n> That seems quite awful.\n> \n> However, now that I think about it, the elog.h error-level constants\n> were renumbered not so long ago. Maybe you've failed to recompile\n> everything for v14?\n> \n> \n> We see this on a CI with a fresh pull from master.\n> \n> As I said I will dig into it and figure it out. \n> \n\nWell, plr.h does this:\n\n#define WARNING 19\n#define ERROR 20\n\nwhich seems a bit weird, because elog.h does this (since 1f9158ba481):\n\n#define WARNING 19\n#define WARNING_CLIENT_ONLY 20\n#define ERROR 21\n\nNot sure why this would break Windows but not Linux, though.\nThanks, I think ERROR is redefined in Windows as well for some strange reason.Dave \n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 10 Apr 2021 21:11:01 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "\nOn 4/10/21 8:56 PM, Tomas Vondra wrote:\n> On 4/11/21 2:38 AM, Dave Cramer wrote:\n>>\n>>\n>>\n>> On Sat, 10 Apr 2021 at 20:34, Tom Lane <tgl@sss.pgh.pa.us\n>> <mailto:tgl@sss.pgh.pa.us>> wrote:\n>>\n>> Dave Cramer <davecramer@gmail.com <mailto:davecramer@gmail.com>> writes:\n>> > On Sat, 10 Apr 2021 at 20:24, Tom Lane <tgl@sss.pgh.pa.us\n>> <mailto:tgl@sss.pgh.pa.us>> wrote:\n>> >> That's quite bizarre. What is the actual error level according to\n>> >> the source code, and where is the error being thrown exactly?\n>>\n>> > Well it really is an ERROR, and is being downgraded on windows to\n>> WARNING.\n>>\n>> That seems quite awful.\n>>\n>> However, now that I think about it, the elog.h error-level constants\n>> were renumbered not so long ago. Maybe you've failed to recompile\n>> everything for v14?\n>>\n>>\n>> We see this on a CI with a fresh pull from master.\n>>\n>> As I said I will dig into it and figure it out. \n>>\n> Well, plr.h does this:\n>\n> #define WARNING\t\t19\n> #define ERROR\t\t20\n>\n> which seems a bit weird, because elog.h does this (since 1f9158ba481):\n>\n> #define WARNING\t\t19\n> #define WARNING_CLIENT_ONLY\t20\n> #define ERROR\t\t21\n>\n> Not sure why this would break Windows but not Linux, though.\n>\n>\n\n\nThe coding pattern in plr.h looks quite breakable. Instead of hard\ncoding values like this they should save the value from the postgres\nheaders in another variable before undefining it and then restore that\nvalue after inclusion of the R headers. That would work across versions\neven if we renumber them.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 11 Apr 2021 08:04:51 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n>> Well, plr.h does this:\n>> \n>> #define WARNING\t\t19\n>> #define ERROR\t\t20\n\n> The coding pattern in plr.h looks quite breakable. Instead of hard\n> coding values like this they should save the value from the postgres\n> headers in another variable before undefining it and then restore that\n> value after inclusion of the R headers.\n\nIndeed. elog.h already provides a \"PGERROR\" macro to use for restoring\nthe value of ERROR. We have not heard of a need to do anything special\nfor WARNING though --- maybe that's R-specific?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 10:13:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On 4/11/21 10:13 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Well, plr.h does this:\n>>> \n>>> #define WARNING\t\t19\n>>> #define ERROR\t\t20\n> \n>> The coding pattern in plr.h looks quite breakable.\n\nMeh -- that code has gone 18+ years before breaking.\n\n> Indeed. elog.h already provides a \"PGERROR\" macro to use for restoring\n> the value of ERROR. We have not heard of a need to do anything special\n> for WARNING though --- maybe that's R-specific?\n\nR also defines WARNING in its headers. If I remember correctly there are (or at \nleast were, it *has* been 18+ years since I looked at this particular thing) \nsome odd differences in the R headers under Windows and Linux.\n\nIn any case we would be happy to use \"PGERROR\".\n\nWould an equivalent \"PGWARNING\" be something we are open to adding and \nback-patching?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Sun, 11 Apr 2021 11:01:54 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 4/11/21 10:13 AM, Tom Lane wrote:\n>> Indeed. elog.h already provides a \"PGERROR\" macro to use for restoring\n>> the value of ERROR. We have not heard of a need to do anything special\n>> for WARNING though --- maybe that's R-specific?\n\n> R also defines WARNING in its headers.\n\nAh.\n\n> Would an equivalent \"PGWARNING\" be something we are open to adding and \n> back-patching?\n\nIt's not real obvious how pl/r could solve this in a reliable way\notherwise, so adding that would be OK with me, but I wonder whether\nback-patching is going to help you any. You'd still need to compile\nagainst older headers I should think. So I'd suggest\n\n(1) add PGWARNING in HEAD only\n\n(2) in pl/r, do something like\n\t#ifndef PGWARNING\n\t#define PGWARNING 19\n\t#endif\nwhich should be safe since it's that in all previous supported\nversions.\n\nAlso, I notice that elog.h is wrapping PGERROR in #ifdef WIN32,\nwhich might be an overly constricted view of when it's helpful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 11:34:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "I wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> Would an equivalent \"PGWARNING\" be something we are open to adding and \n>> back-patching?\n\n> It's not real obvious how pl/r could solve this in a reliable way\n> otherwise, so adding that would be OK with me, but I wonder whether\n> back-patching is going to help you any. You'd still need to compile\n> against older headers I should think. So I'd suggest\n> (1) add PGWARNING in HEAD only\n\nConcretely, maybe like the attached?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 11 Apr 2021 12:42:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On Sun, 11 Apr 2021 at 12:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> >> Would an equivalent \"PGWARNING\" be something we are open to adding and\n> >> back-patching?\n>\n> > It's not real obvious how pl/r could solve this in a reliable way\n> > otherwise, so adding that would be OK with me, but I wonder whether\n> > back-patching is going to help you any. You'd still need to compile\n> > against older headers I should think. So I'd suggest\n> > (1) add PGWARNING in HEAD only\n>\n> Concretely, maybe like the attached?\n>\n\n+1 from me.\nI especially like the changes to the comments as it's more apparent what\nthey should be used for.\n\nDave Cramer\n\n>\n> regards, tom lane\n>\n>\n\nOn Sun, 11 Apr 2021 at 12:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:I wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> Would an equivalent \"PGWARNING\" be something we are open to adding and \n>> back-patching?\n\n> It's not real obvious how pl/r could solve this in a reliable way\n> otherwise, so adding that would be OK with me, but I wonder whether\n> back-patching is going to help you any. You'd still need to compile\n> against older headers I should think. So I'd suggest\n> (1) add PGWARNING in HEAD only\n\nConcretely, maybe like the attached?+1 from me. I especially like the changes to the comments as it's more apparent what they should be used for.Dave Cramer \n\n regards, tom lane",
"msg_date": "Sun, 11 Apr 2021 12:51:27 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "On 4/11/21 12:51 PM, Dave Cramer wrote:\n> \n> \n> On Sun, 11 Apr 2021 at 12:43, Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> I wrote:\n> > Joe Conway <mail@joeconway.com <mailto:mail@joeconway.com>> writes:\n> >> Would an equivalent \"PGWARNING\" be something we are open to adding and\n> >> back-patching?\n> \n> > It's not real obvious how pl/r could solve this in a reliable way\n> > otherwise, so adding that would be OK with me, but I wonder whether\n> > back-patching is going to help you any. You'd still need to compile\n> > against older headers I should think. So I'd suggest\n> > (1) add PGWARNING in HEAD only\n> \n> Concretely, maybe like the attached?\n> \n> \n> +1 from me.\n> I especially like the changes to the comments as it's more apparent what they \n> should be used for.\n\n+1\n\nLooks great to me.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Sun, 11 Apr 2021 12:55:08 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 4/11/21 12:51 PM, Dave Cramer wrote:\n>> On Sun, 11 Apr 2021 at 12:43, Tom Lane <tgl@sss.pgh.pa.us \n>> <mailto:tgl@sss.pgh.pa.us>> wrote:\n>>> Concretely, maybe like the attached?\n\n>> +1 from me.\n>> I especially like the changes to the comments as it's more apparent what they \n>> should be used for.\n\n> +1\n> Looks great to me.\n\nOK, pushed to HEAD only.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 13:23:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/R regression on windows, but not linux with master."
}
] |
[
{
"msg_contents": "While re-reading heap_update() in connection with that PANIC we're\nchasing, my attention was drawn to this comment:\n\n /*\n * Note: beyond this point, use oldtup not otid to refer to old tuple.\n * otid may very well point at newtup->t_self, which we will overwrite\n * with the new tuple's location, so there's great risk of confusion if we\n * use otid anymore.\n */\n\nThis seemingly sage advice is being ignored in one place:\n\n\tCheckForSerializableConflictIn(relation, otid, BufferGetBlockNumber(buffer));\n\nI wonder whether that's a mistake. There'd be only a low probability\nof our detecting it through testing, I fear.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Apr 2021 12:54:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Possible SSI bug in heap_update"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 4:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While re-reading heap_update() in connection with that PANIC we're\n> chasing, my attention was drawn to this comment:\n>\n> /*\n> * Note: beyond this point, use oldtup not otid to refer to old tuple.\n> * otid may very well point at newtup->t_self, which we will overwrite\n> * with the new tuple's location, so there's great risk of confusion if we\n> * use otid anymore.\n> */\n>\n> This seemingly sage advice is being ignored in one place:\n>\n> CheckForSerializableConflictIn(relation, otid, BufferGetBlockNumber(buffer));\n>\n> I wonder whether that's a mistake. There'd be only a low probability\n> of our detecting it through testing, I fear.\n\nYeah. Patch attached.\n\nI did a bit of printf debugging, and while it's common that otid ==\n&newtup->t_self, neither our regression tests nor our isolation tests\nreach a case where ItemPointerEquals(otid, &oldtup.t_self) is false at\nthe place where that check runs. Obviously those tests don't exercise\nall the branches and concurrency scenarios where we goto l2, so I'm\nnot at all sure about this, but hmm... at first glance, perhaps there\nis no live bug here because the use of *otid comes before\nRelationPutHeapTuple() which is where newtup->t_self is really\nupdated?",
"msg_date": "Mon, 12 Apr 2021 10:36:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible SSI bug in heap_update"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:36 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah. Patch attached.\n\nPushed.\n\n\n",
"msg_date": "Tue, 13 Apr 2021 13:05:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible SSI bug in heap_update"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nIt seems to me that some recent commit has failed to properly initialize a\nstructure,\nin extended_stats.c, when is passed to heap_copytuple.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 11 Apr 2021 15:38:10 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 03:38:10PM -0300, Ranier Vilela wrote:\n> Per Coverity.\n> \n> It seems to me that some recent commit has failed to properly initialize a\n> structure, in extended_stats.c, when is passed to heap_copytuple.\n\nI think you're right. You can look in the commit history to find the relevant\ncommit and copy the committer.\n\nI think it's cleanest to write:\n|HeapTupleData tmptup = {0};\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 11 Apr 2021 14:25:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "Hi Justin, sorry for the delay.\n\nNothing against it, but I looked for similar codes and this is the usual\nway to initialize HeapTupleData.\nPerhaps InvalidOid makes a difference.\n\nregards,\nRanier Vilela\n\n\nEm dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Sun, Apr 11, 2021 at 03:38:10PM -0300, Ranier Vilela wrote:\n> > Per Coverity.\n> >\n> > It seems to me that some recent commit has failed to properly initialize\n> a\n> > structure, in extended_stats.c, when is passed to heap_copytuple.\n>\n> I think you're right. You can look in the commit history to find the\n> relevant\n> commit and copy the committer.\n>\n> I think it's cleanest to write:\n> |HeapTupleData tmptup = {0};\n>\n> --\n> Justin\n>\n\nHi Justin, sorry for the delay.Nothing against it, but I looked for similar codes and this is the usual way to initialize HeapTupleData.Perhaps InvalidOid makes a difference.regards,Ranier VilelaEm dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <pryzby@telsasoft.com> escreveu:On Sun, Apr 11, 2021 at 03:38:10PM -0300, Ranier Vilela wrote:\n> Per Coverity.\n> \n> It seems to me that some recent commit has failed to properly initialize a\n> structure, in extended_stats.c, when is passed to heap_copytuple.\n\nI think you're right. You can look in the commit history to find the relevant\ncommit and copy the committer.\n\nI think it's cleanest to write:\n|HeapTupleData tmptup = {0};\n\n-- \nJustin",
"msg_date": "Sun, 11 Apr 2021 19:42:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "On Sun, Apr 11, 2021 at 07:42:20PM -0300, Ranier Vilela wrote:\n> Em dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <pryzby@telsasoft.com>\n> escreveu:\n>> I think you're right. You can look in the commit history to find the\n>> relevant\n>> commit and copy the committer.\n\nIn this case that's a4d75c8, for Tomas.\n\n>> I think it's cleanest to write:\n>> |HeapTupleData tmptup = {0};\n\nI agree that this would be cleaner.\n\nWhile on it, if you could not top-post..\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 14:07:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Apr 11, 2021 at 07:42:20PM -0300, Ranier Vilela wrote:\n>> Em dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <pryzby@telsasoft.com>\n>>> I think it's cleanest to write:\n>>> |HeapTupleData tmptup = {0};\n\n> I agree that this would be cleaner.\n\nIt would be wrong, though, or at least not have the same effect.\nItemPointerSetInvalid does not set the target to all-zeroes.\n\n(Regardless of that detail, it's generally best to accomplish\nobjective X in the same way that existing code does. Deciding\nthat you have a better way is often wrong, and even if you\nare right, you should then submit a patch to change all the\nexisting cases.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 02:04:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "Em seg., 12 de abr. de 2021 às 03:04, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Sun, Apr 11, 2021 at 07:42:20PM -0300, Ranier Vilela wrote:\n> >> Em dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <\n> pryzby@telsasoft.com>\n> >>> I think it's cleanest to write:\n> >>> |HeapTupleData tmptup = {0};\n>\n> > I agree that this would be cleaner.\n>\n> It would be wrong, though, or at least not have the same effect.\n>\nI think that you speak about fill pointers with 0 is not the same as fill\npointers with NULL.\n\n\n> ItemPointerSetInvalid does not set the target to all-zeroes.\n>\nItemPointerSetInvalid set or not set the target to all-zeroes?\n\n\n> (Regardless of that detail, it's generally best to accomplish\n> objective X in the same way that existing code does. Deciding\n> that you have a better way is often wrong, and even if you\n> are right, you should then submit a patch to change all the\n> existing cases.)\n>\nI was confused here, does the patch follow the pattern and fix the problem\nor not?\n\nregards,\nRanier Vilela\n\nEm seg., 12 de abr. de 2021 às 03:04, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Apr 11, 2021 at 07:42:20PM -0300, Ranier Vilela wrote:\n>> Em dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <pryzby@telsasoft.com>\n>>> I think it's cleanest to write:\n>>> |HeapTupleData tmptup = {0};\n\n> I agree that this would be cleaner.\n\nIt would be wrong, though, or at least not have the same effect.I think that you speak about fill pointers with 0 is not the same as fill pointers with NULL. \nItemPointerSetInvalid does not set the target to all-zeroes.ItemPointerSetInvalid set or not set \nthe target to all-zeroes?\n\n(Regardless of that detail, it's generally best to accomplish\nobjective X in the same way that existing code does. 
Deciding\nthat you have a better way is often wrong, and even if you\nare right, you should then submit a patch to change all the\nexisting cases.)I was confused here, does the patch follow the pattern and fix the problem or not? regards,Ranier Vilela",
"msg_date": "Mon, 12 Apr 2021 13:55:13 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "\n\nOn 4/12/21 6:55 PM, Ranier Vilela wrote:\n> \n> \n> Em seg., 12 de abr. de 2021 às 03:04, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> escreveu:\n> \n> Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>>\n> writes:\n> > On Sun, Apr 11, 2021 at 07:42:20PM -0300, Ranier Vilela wrote:\n> >> Em dom., 11 de abr. de 2021 às 16:25, Justin Pryzby\n> <pryzby@telsasoft.com <mailto:pryzby@telsasoft.com>>\n> >>> I think it's cleanest to write:\n> >>> |HeapTupleData tmptup = {0};\n> \n> > I agree that this would be cleaner.\n> \n> It would be wrong, though, or at least not have the same effect.\n> \n> I think that you speak about fill pointers with 0 is not the same as\n> fill pointers with NULL.\n> \n> \n> ItemPointerSetInvalid does not set the target to all-zeroes.\n> \n> ItemPointerSetInvalid set or not set the target to all-zeroes?\n> \n\nNot sure what exactly are you asking about? What Tom said is that if you\ndo 'struct = {0}' it sets all fields to 0, but we only want/need to set\nthe t_self/t_tableOid fields to 0.\n\n> \n> (Regardless of that detail, it's generally best to accomplish\n> objective X in the same way that existing code does. Deciding\n> that you have a better way is often wrong, and even if you\n> are right, you should then submit a patch to change all the\n> existing cases.)\n> \n> I was confused here, does the patch follow the pattern and fix the\n> problem or not?\n> \n\nI believe it does, and it's doing it in the same way as most other\nsimilar places.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Apr 2021 19:03:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em seg., 12 de abr. de 2021 às 03:04, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> It would be wrong, though, or at least not have the same effect.\n\n> I think that you speak about fill pointers with 0 is not the same as fill\n> pointers with NULL.\n\nNo, I mean that InvalidBlockNumber isn't 0.\n\n> I was confused here, does the patch follow the pattern and fix the problem\n> or not?\n\nYour patch seems fine. Justin's proposed improvement isn't.\n\n(I'm not real sure whether there's any *actual* bug here --- would we\nreally be looking at either ctid or tableoid of this temporary tuple?\nBut it's probably best to ensure that they're valid anyway.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 13:04:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 01:55:13PM -0300, Ranier Vilela wrote:\n> Em seg., 12 de abr. de 2021 às 03:04, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> > > On Sun, Apr 11, 2021 at 07:42:20PM -0300, Ranier Vilela wrote:\n> > >> Em dom., 11 de abr. de 2021 às 16:25, Justin Pryzby <\n> > pryzby@telsasoft.com>\n> > >>> I think it's cleanest to write:\n> > >>> |HeapTupleData tmptup = {0};\n> >\n> > > I agree that this would be cleaner.\n> >\n> > It would be wrong, though, or at least not have the same effect.\n> >\n> I think that you speak about fill pointers with 0 is not the same as fill\n> pointers with NULL.\n> \n> \n> > ItemPointerSetInvalid does not set the target to all-zeroes.\n> >\n> ItemPointerSetInvalid set or not set the target to all-zeroes?\n\nI think Tom means that it does:\nBlockIdSet(&((pointer)->ip_blkid), InvalidBlockNumber),\n(pointer)->ip_posid = InvalidOffsetNumber\n\nbut it's not zero, as I thought:\n\nsrc/include/storage/block.h:#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)\n\n> > (Regardless of that detail, it's generally best to accomplish\n> > objective X in the same way that existing code does. Deciding\n> > that you have a better way is often wrong, and even if you\n> > are right, you should then submit a patch to change all the\n> > existing cases.)\n\nFYI, I'd gotten the idea from here:\n\n$ git grep 'HeapTupleData.*='\nsrc/backend/executor/execTuples.c: HeapTupleData tuple = {0};\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 12 Apr 2021 12:05:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
},
{
"msg_contents": "\n\nOn 4/12/21 7:04 PM, Tom Lane wrote:\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> Em seg., 12 de abr. de 2021 às 03:04, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>>> It would be wrong, though, or at least not have the same effect.\n> \n>> I think that you speak about fill pointers with 0 is not the same as fill\n>> pointers with NULL.\n> \n> No, I mean that InvalidBlockNumber isn't 0.\n> \n>> I was confused here, does the patch follow the pattern and fix the problem\n>> or not?\n> \n> Your patch seems fine. Justin's proposed improvement isn't.\n> \n\nPushed.\n\n> (I'm not real sure whether there's any *actual* bug here --- would we\n> really be looking at either ctid or tableoid of this temporary tuple?\n> But it's probably best to ensure that they're valid anyway.)>\n\nYeah, the tuple is only built so that we can pass it to the various\nselectivity estimators. I don't think anything will be actually looking\nat those fields, but initializing them seems easy enough.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Apr 2021 00:55:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Uninitialized scalar variable (UNINIT)\n (src/backend/statistics/extended_stats.c)"
}
] |
[
{
"msg_contents": "This is another try of [1].\n\n\nBACKGROUND\n========================================\n\nWe want to realize parallel INSERT SELECT in the following steps:\n1) INSERT + parallel SELECT\n2) Parallel INSERT + parallel SELECT\n\nBelow are example use cases. We don't expect high concurrency or an empty data source.\n* Data loading (ETL or ELT) into an analytics database, typically a data ware house.\n* Batch processing in an OLTP database.\n\n\nPROBLEMS\n========================================\n\n(1) The overhead of checking parallel-safety could be large\nWe have to check the target table and its child partitions for parallel safety. That is, we make sure those relations don't have parallel-unsafe domains, constraints, indexes, or triggers.\n\nWhat we should check is the relations into which the statement actually inserts data. However, the planner does not know which relations will be actually inserted into. So, the planner has to check all descendant partitions of a target table. When the target table has many partitions, this overhead could be unacceptable when compared to the benefit gained from parallelism.\n\n\n(2) There's no mechanism for parallel workers to assign an XID\nParallel workers need an XID of the current (sub)transaction when actually inserting a tuple (i.e., calling heap_insert()). When the leader has not got the XID yet, the worker may have to assign a new XID and communicate it to the leader and other workers so that all parallel processes use the same XID.\n\n\nSOLUTION TO (1)\n========================================\n\nThe candidate ideas are:\n\n1) Caching the result of parallel-safety check\nThe planner stores the result of checking parallel safety for each relation in relcache, or some purpose-built hash table in shared memory.\n\nThe problems are:\n\n* Even if the target relation turns out to be parallel safe by looking at those data structures, we cannot assume it remains true until the SQL statement finishes. 
For instance, other sessions might add a parallel-unsafe index to its descendant relations. Other examples include that when the user changes the parallel safety of indexes or triggers by running ALTER FUNCTION on the underlying index AM function or trigger function, the relcache entry of the table or index is not invalidated, so the correct parallel safety is not maintained in the cache.\nIn that case, when the executor encounters a parallel-unsafe object, it can change the cached state as being parallel-unsafe and error out.\n\n* Can't ensure fast access. With relcache, the first access in each session has to undergo the overhead of parallel-safety check. With a hash table in shared memory, the number of relations stored there would be limited, so the first access after database startup or the hash table entry eviction similarly experiences slowness.\n\n* With a new hash table, some lwlock for concurrent access must be added, which can have an adverse effect on performance.\n\n\n2) Enabling users to declare that the table allows parallel data modification\nAdd a table property that represents parallel safety of the table for DML statement execution. Users specify it as follows:\n \nCREATE TABLE table_name (...) PARALLEL { UNSAFE | RESTRICTED | SAFE };\n ALTER TABLE table_name PARALLEL { UNSAFE | RESTRICTED | SAFE };\n\nThis property is recorded in pg_class's relparallel column as 'u', 'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.\n\nThe planner assumes that all of the table, its descendant partitions, and their ancillary objects have the specified parallel safety or safer one. The user is responsible for its correctness. 
If the parallel processes find an object that is less safe than the assumed parallel safety during statement execution, it throws an ERROR and aborts the statement execution.\n\nThe objects that relate to the parallel safety of a DML target table are as follows:\n\n * Column default expression\n * DOMAIN type CHECK expression\n * CHECK constraints on column\n * Partition key\n * Partition key support function\n * Index expression\n * Index predicate\n * Index AM function\n * Operator function\n * Trigger function\n\nWhen the parallel safety of some of these objects is changed, it's costly to reflect it on the parallel safety of tables that depend on them. So, we don't do it. Instead, we provide a utility function pg_get_parallel_safety('table_name') that returns records of (objid, classid, parallel_safety) that represent the parallel safety of objects that determine the parallel safety of the specified table. The function only outputs objects that are not parallel safe. Otherwise, it will consume excessive memory while accumulating the output. The user can use this function to identify problematic objects when a parallel DML fails or is not parallelized in an expected manner.\n\nHow does the executor detect parallel unsafe objects? There are two ways:\n\n1) At loading time\nWhen the executor loads the definition of objects (tables, constraints, index, triggers, etc.) during the first access to them after session start or their eviction by sinval message, it checks the parallel safety.\n\nThis is a legitimate way, but may need a lot of code. Also, it might overlook necessary code changes without careful inspection.\n\n\n2) At function execution time\nAll related objects come down to some function execution. So, add a parallel safety check there when in a parallel worker. If the current process is a parallel worker and the function is parallel unsafe, error out with ereport(ERROR).
This approach eliminates the risk of overlooking a parallel safety check, with the additional bonus of a tiny code change!\n\nThe place would be FunctionCallInvoke(). It's a macro in fmgr.h now. Perhaps we should make it a function in fmgr.c, so that fmgr.h does not have to include header files for parallelism-related definitions.\n\nWe have to evaluate the performance effect of converting FunctionCallInvoke() into a function and adding an if statement there, because it's a relatively low-level function.\n\n\n\n\nSOLUTION TO (2)\n========================================\n\n1) Make it possible for workers to assign an XID and share it among the parallel processes\nThe problems are:\n\n* Tuple visibility\nIf the worker that acquires the XID writes some row and another worker reads that row before it gets to see the XID information, the latter worker won't treat such a row as written by its own transaction.\n\nFor instance, the worker (w-1) that acquires the XID (501) deletes the tuple (CTID: 0, 2). Now, when another worker (w-2) reads that tuple (CTID: 0, 2), it would consider that the tuple is still visible to its snapshot, but if w-2 knew that 501 is its own XID, it would have considered it as deleted (not visible). I think this can happen when multiple updates to the same row happen and new rows get added to the new page.\n\n* The implementation seems complex\nWhen the DML is run inside a deeply nested subtransaction and the parent transactions have not allocated their XIDs yet, the worker needs to allocate the XIDs for its parents. That indeterminate number of XIDs must be stored in shared memory. The stack of TransactionState structures must also be passed.\n\nAlso, TransactionIdIsCurrentTransactionId() uses an array ParallelCurrentXids where parallel workers receive sub-committed XIDs from the leader.
This needs to be reconsidered.\n\n\n2) The leader assigns an XID before entering parallel mode and passes it to workers\nThis is what was done in [1].\n\nThe problem is that the XID would not be used if the data source (SELECT query) returns no valid rows. This wastes an XID.\n\nHowever, the data source should rarely be empty when this feature is used. As the following Oracle manual says, parallel DML will be used in data analytics and OLTP batch jobs. There should be plenty of source data in those scenarios.\n\nWhen to Use Parallel DML\nhttps://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/types-parallelism.html#GUID-18B2AF09-C548-48DE-A794-86224111549F\n--------------------------------------------------\nSeveral scenarios where parallel DML is used include:\n\nRefreshing Tables in a Data Warehouse System\n\nCreating Intermediate Summary Tables\n\nUsing Scoring Tables\n\nUpdating Historical Tables\n\nRunning Batch Jobs\n--------------------------------------------------\n\n\n\nCONCLUSION\n========================================\n\n(1) The overhead of checking parallel-safety could be large\nWe're inclined to go with solution 2, because it doesn't have a big problem. However, we'd like to try to present some more analysis on solution 1 in this thread.\n\nRegarding how to check parallel safety in the executor, I prefer the simpler way of adding a check in function execution. If it turns out to have an intolerable performance problem, we can choose the other approach.\n\n(2) There's no mechanism for parallel workers to assign an XID\nWe'd like to adopt solution 2 because it really does not have a big issue in the assumed use cases. The implementation is very easy and does not look strange.\n\n\nOf course, any better-looking idea would be much appreciated. (But a simple, or at least not unnecessarily complex, one is desired.)\n\n\n\n[1]\nParallel INSERT (INTO ...
SELECT ...)\nhttps://www.postgresql.org/message-id/flat/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV=qpFJrR3AcrTS3g@mail.gmail.com\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 01:21:57 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> BACKGROUND\n> ========================================\n> \n> We want to realize parallel INSERT SELECT in the following steps:\n> 1) INSERT + parallel SELECT\n> 2) Parallel INSERT + parallel SELECT\n> \n> Below are example use cases. We don't expect high concurrency or an empty\n> data source.\n> * Data loading (ETL or ELT) into an analytics database, typically a data ware\n> house.\n> * Batch processing in an OLTP database.\n> 2) Enabling users to declare that the table allows parallel data modification Add\n> a table property that represents parallel safety of the table for DML statement\n> execution. Users specify it as follows:\n> \n> CREATE TABLE table_name (...) PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> ALTER TABLE table_name PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> \n> This property is recorded in pg_class's relparallel column as 'u', 'r', or 's', just\n> like pg_proc's proparallel. The default is UNSAFE.\n> \n> The planner assumes that all of the table, its descendant partitions, and their\n> ancillary objects have the specified parallel safety or safer one. The user is\n> responsible for its correctness. If the parallel processes find an object that is\n> less safer than the assumed parallel safety during statement execution, it\n> throws an ERROR and abort the statement execution.\n> \n> When the parallel safety of some of these objects is changed, it's costly to\n> reflect it on the parallel safety of tables that depend on them. So, we don't do\n> it. Instead, we provide a utility function pg_get_parallel_safety('table_name')\n> that returns records of (objid, classid, parallel_safety) that represent the\n> parallel safety of objects that determine the parallel safety of the specified\n> table. The function only outputs objects that are not parallel safe. Otherwise,\n> it will consume excessive memory while accumulating the output. 
The user\n> can use this function to identify problematic objects when a parallel DML fails\n> or is not parallelized in an expected manner.\n> \n> How does the executor detect parallel unsafe objects? There are two ways:\n> \n> 1) At loading time\n> ...\n> 2) At function execution time\n> All related objects come down to some function execution. So, add a parallel\n> safety check there when in a parallel worker. If the current process is a parallel\n> worker and the function is parallel unsafe, error out with ereport(ERROR). This\n> approach eliminates the oversight of parallel safety check with the additional\n> bonus of tiny code change!\n> \n> The place would be FunctionCallInvoke(). It's a macro in fmgr.h now. Perhaps\n> we should make it a function in fmgr.c, so that fmgr.h does not have to include\n> header files for parallelism-related definitions.\n> \n> We have to evaluate the performance effect of converting FunctionCallInvoke()\n> into a function and adding an if statement there, because it's a relatively\n> low-level function.\n\nBased on the above, we plan to move forward with approach 2) (the declarative idea).\n\nAttaching the POC patch set, which includes the following:\n\n0001: provide a utility function pg_get_parallel_safety('table_name').\n\n The function returns records of (objid, classid, parallel_safety) that represent\n the parallel safety of objects that determine the parallel safety of the specified table.\n Note: The function only outputs objects that are not parallel safe.\n (Thanks a lot for Greg's previous work; most of the safety check code here is based on it)\n\n0002: allow users to use \"ALTER TABLE PARALLEL SAFE/UNSAFE/RESTRICTED\".\n\n Add a proparallel column in pg_class and allow users to change it.\n\n0003: detect parallel unsafe objects in the executor.\n \n Currently we choose to check a function's parallel safety at function execution time.\n We add a safety check at FunctionCallInvoke(), but it may be better to check in
fmgr_info_cxt_security.\n We are still discussing it in another thread [1].\n\n TODO: we currently skip checking built-in functions' parallel safety, because we lack information about built-in\n functions' parallel safety; we cannot access pg_proc.proparallel at a low level because it could result in infinite recursion.\n Adding a parallel property in fmgrBuiltin will enlarge the frequently accessed fmgr_builtins and lock down the value of the\n parallel-safety flag. The solution is still under discussion. Suggestions and comments are welcome.\n \n0004: fix some mislabeled functions in test cases\n\n Since we check parallel safety of functions at a low level, we found that some functions marked as parallel unsafe will be\n executed in parallel mode in the regression tests when setting force_parallel_mode=regress. After checking, these functions\n are parallel safe, so we plan to fix these functions' parallel labels.\n Note: we plan to take 0004 as a separate patch, see [2]; I post 0004 here just to prevent some test case failures.\n\nThe above are the POC patches; they could be imperfect for now and I am still working on improving them.\nSuggestions and comments about the design or code are very welcome and appreciated.\n\nBest regards,\nhouzj",
"msg_date": "Thu, 22 Apr 2021 11:20:59 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > BACKGROUND\n> > ========================================\n> >\n> > We want to realize parallel INSERT SELECT in the following steps:\n> > 1) INSERT + parallel SELECT\n> > 2) Parallel INSERT + parallel SELECT\n> >\n> > Below are example use cases. We don't expect high concurrency or an\n> > empty data source.\n> > * Data loading (ETL or ELT) into an analytics database, typically a\n> > data ware house.\n> > * Batch processing in an OLTP database.\n> > 2) Enabling users to declare that the table allows parallel data\n> > modification Add a table property that represents parallel safety of\n> > the table for DML statement execution. Users specify it as follows:\n> >\n> > CREATE TABLE table_name (...) PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> > ALTER TABLE table_name PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> >\n> > This property is recorded in pg_class's relparallel column as 'u',\n> > 'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.\n> >\n> > The planner assumes that all of the table, its descendant partitions,\n> > and their ancillary objects have the specified parallel safety or\n> > safer one. The user is responsible for its correctness. If the\n> > parallel processes find an object that is less safer than the assumed\n> > parallel safety during statement execution, it throws an ERROR and abort the\n> statement execution.\n> >\n> > When the parallel safety of some of these objects is changed, it's\n> > costly to reflect it on the parallel safety of tables that depend on\n> > them. So, we don't do it. Instead, we provide a utility function\n> > pg_get_parallel_safety('table_name')\n> > that returns records of (objid, classid, parallel_safety) that\n> > represent the parallel safety of objects that determine the parallel\n> > safety of the specified table. The function only outputs objects that\n> > are not parallel safe. Otherwise, it will consume excessive memory\n> > while accumulating the output. 
The user can use this function to\n> > identify problematic objects when a parallel DML fails or is not parallelized in\n> an expected manner.\n> >\n> > How does the executor detect parallel unsafe objects? There are two ways:\n> >\n> > 1) At loading time\n> > ...\n> > 2) At function execution time\n> > All related objects come down to some function execution. So, add a\n> > parallel safety check there when in a parallel worker. If the current\n> > process is a parallel worker and the function is parallel unsafe,\n> > error out with ereport(ERROR). This approach eliminates the oversight\n> > of parallel safety check with the additional bonus of tiny code change!\n> >\n> > The place would be FunctionCallInvoke(). It's a macro in fmgr.h now.\n> > Perhaps we should make it a function in fmgr.c, so that fmgr.h does\n> > not have to include header files for parallelism-related definitions.\n> >\n> > We have to evaluate the performance effect of converting\n> > FunctionCallInvoke() into a function and adding an if statement there,\n> > because it's a relatively low-level function.\n> \n> Based on above, we plan to move forward with the apporache 2) (declarative\n> idea).\n> \n> Attatching the POC patchset which including the following:\n> \n> 0001: provide a utility function pg_get_parallel_safety('table_name').\n> \n> The function returns records of (objid, classid, parallel_safety) that represent\n> the parallel safety of objects that determine the parallel safety of the\n> specified table.\n> Note: The function only outputs objects that are not parallel safe.\n> (Thanks a lot for greg's previous work, most of the safety check code here is\n> based on it)\n> \n> 0002: allow user use \"ALTER TABLE PARALLEL SAFE/UNSAFE/RESTRICTED\".\n> \n> Add proparallel column in pg_class and allow use to change its.\n> \n> 0003: detect parallel unsafe objects in executor.\n> \n> Currently we choose to check function's parallel safety at function execution\n> time.\n> We add safety 
check at FunctionCallInvoke(), but it may be better to check in\n> fmgr_info_cxt_security.\n> we are still discussing it in another thread[1].\n> \n> TODO: we currently skip checking built-in function's parallel safety, because\n> we lack the information about built-in\n> function's parallel safety, we cannot access pg_proc.proparallel in a low level\n> because it could result in infinite recursion.\n> Adding parallel property in fmgrBuiltin will enlarge the frequently accessed\n> fmgr_builtins and lock down the value of the\n> parallel-safety flag. The solution is still under discussion. Suggestions and\n> comments are welcome.\n> \n> 0004: fix some mislabeled function in testcase\n> \n> Since we check parallel safety of function at a low level, we found some\n> functions marked as parallel unsafe will be\n> executed in parallel mode in regression test when setting\n> force_parallel_mode=regress. After checking, these functions\n> are parallel safe, So , we plan to fix these function's parallel label.\n> Note: we plan to take 0004 as a separate patch , see[2], I post 0004 here just\n> to prevent some testcase failures.\n> \n> The above are the POC patches, it could be imperfect for now and I am still\n> working on improving it.\n> Suggestions and comments about the design or code are very welcome and\n> appreciated.\n\nSorry, I forgot to attach the discussion link about [1] and [2].\n\n[1]\nhttps://www.postgresql.org/message-id/756027.1619012086%40sss.pgh.pa.us\n\n[2]\nhttps://www.postgresql.org/message-id/OS0PR01MB571637085C0D3AFC3AB3600194479%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nBest regards,\nhouzj\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 00:38:52 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 4:51 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > BACKGROUND\n> > ========================================\n> >\n> > We want to realize parallel INSERT SELECT in the following steps:\n> > 1) INSERT + parallel SELECT\n> > 2) Parallel INSERT + parallel SELECT\n> >\n> > Below are example use cases. We don't expect high concurrency or an empty\n> > data source.\n> > * Data loading (ETL or ELT) into an analytics database, typically a data ware\n> > house.\n> > * Batch processing in an OLTP database.\n> > 2) Enabling users to declare that the table allows parallel data modification Add\n> > a table property that represents parallel safety of the table for DML statement\n> > execution. Users specify it as follows:\n> >\n> > CREATE TABLE table_name (...) PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> > ALTER TABLE table_name PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> >\n> > This property is recorded in pg_class's relparallel column as 'u', 'r', or 's', just\n> > like pg_proc's proparallel. The default is UNSAFE.\n> >\n> > The planner assumes that all of the table, its descendant partitions, and their\n> > ancillary objects have the specified parallel safety or safer one. The user is\n> > responsible for its correctness. If the parallel processes find an object that is\n> > less safer than the assumed parallel safety during statement execution, it\n> > throws an ERROR and abort the statement execution.\n> >\n> > When the parallel safety of some of these objects is changed, it's costly to\n> > reflect it on the parallel safety of tables that depend on them. So, we don't do\n> > it. Instead, we provide a utility function pg_get_parallel_safety('table_name')\n> > that returns records of (objid, classid, parallel_safety) that represent the\n> > parallel safety of objects that determine the parallel safety of the specified\n> > table. The function only outputs objects that are not parallel safe. 
Otherwise,\n> > it will consume excessive memory while accumulating the output. The user\n> > can use this function to identify problematic objects when a parallel DML fails\n> > or is not parallelized in an expected manner.\n> >\n> > How does the executor detect parallel unsafe objects? There are two ways:\n> >\n> > 1) At loading time\n> > ...\n> > 2) At function execution time\n> > All related objects come down to some function execution. So, add a parallel\n> > safety check there when in a parallel worker. If the current process is a parallel\n> > worker and the function is parallel unsafe, error out with ereport(ERROR). This\n> > approach eliminates the oversight of parallel safety check with the additional\n> > bonus of tiny code change!\n> >\n> > The place would be FunctionCallInvoke(). It's a macro in fmgr.h now. Perhaps\n> > we should make it a function in fmgr.c, so that fmgr.h does not have to include\n> > header files for parallelism-related definitions.\n> >\n> > We have to evaluate the performance effect of converting FunctionCallInvoke()\n> > into a function and adding an if statement there, because it's a relatively\n> > low-level function.\n>\n> Based on above, we plan to move forward with the apporache 2) (declarative idea).\n\nIIUC, the declarative behaviour idea attributes parallel\nsafe/unsafe/restricted tags to each table with default being the\nunsafe. Does it mean for a parallel unsafe table, no parallel selects,\ninserts (may be updates) will be picked up? Or is it only the parallel\ninserts? 
If both parallel inserts and selects will be picked, then do the\nexisting tables need to be adjusted to set the parallel safety tags\nwhile migrating?\n\nAnother point: what does it mean for a table to be parallel restricted?\nWhat should happen if it is present in a query of other parallel safe\ntables?\n\nI may be wrong here: IIUC, the main problem we are trying to solve\nwith the declarative approach is to let the user decide parallel\nsafety for partition tables as it may be costlier for postgres to\ndetermine it. And for the normal tables we can perform parallel safety\nchecks without incurring much cost. So, I think we should restrict the\ndeclarative approach to only partitioned tables?\n\nWhile reading the design, I came across this \"erroring out during\nexecution of a query when a parallel unsafe function is detected\". If\nthis is correct, doesn't it force users to run\npg_get_parallel_safety to know the parallel unsafe objects, set\nparallel safety to all of them if possible, otherwise disable\nparallelism to run the query? Isn't this burdensome? Instead, how\nabout postgres retrying the query upon detecting the error that came\nfrom a parallel unsafe function during execution, disabling parallelism\nand rerunning the query? I think this kind of retry query feature can be\nbuilt outside of the core postgres, but IMO it will be good to have\ninside (of course configurable). IIRC, the Teradata database has a\nQuery Retry feature.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Apr 2021 21:13:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > Based on above, we plan to move forward with the apporache 2) (declarative\r\n> idea).\r\n> \r\n> IIUC, the declarative behaviour idea attributes parallel safe/unsafe/restricted\r\n> tags to each table with default being the unsafe. Does it mean for a parallel\r\n> unsafe table, no parallel selects, inserts (may be updates) will be picked up? Or\r\n> is it only the parallel inserts? If both parallel inserts, selects will be picked, then\r\n> the existing tables need to be adjusted to set the parallel safety tags while\r\n> migrating?\r\n\r\nThanks for looking into this.\r\n\r\nThe parallel attributes in table means the parallel safety when user does some data-modification operations on it.\r\nSo, It only limit the use of parallel plan when using INSERT/UPDATE/DELETE.\r\n\r\n> Another point, what does it mean a table being parallel restricted?\r\n> What should happen if it is present in a query of other parallel safe tables?\r\n\r\nIf a table is parallel restricted, it means the table contains some parallel restricted objects(such as: parallel restricted functions in index expressions).\r\nAnd in planner, it means parallel insert plan will not be chosen, but it can use parallel select(with serial insert).\r\n\r\n> I may be wrong here: IIUC, the main problem we are trying to solve with the\r\n> declarative approach is to let the user decide parallel safety for partition tables\r\n> as it may be costlier for postgres to determine it. And for the normal tables we\r\n> can perform parallel safety checks without incurring much cost. So, I think we\r\n> should restrict the declarative approach to only partitioned tables?\r\n\r\nYes, we are tring to avoid overhead when checking parallel safety.\r\nThe cost to check all the partition's parallel safety is the biggest one.\r\nAnother is the safety check of index's expression.\r\nCurrently, for INSERT, the planner does not open the target table's indexinfo and does not\r\nparse the expression of the index. 
We need to parse the expression in the planner if we want\r\nto do a parallel safety check for it, which can bring some overhead (it will open the index and do the parse in the executor again).\r\nSo, we plan to skip all of the extra checks and let the user take responsibility for the safety.\r\n\r\nOf course, maybe we can try to pass the indexinfo to the executor, but it needs some further refactoring and I will take a look into it.\r\n\r\n> While reading the design, I came across this \"erroring out during execution of a\r\n> query when a parallel unsafe function is detected\". If this is correct, isn't it\r\n> warranting users to run pg_get_parallel_safety to know the parallel unsafe\r\n> objects, set parallel safety to all of them if possible, otherwise disable\r\n> parallelism to run the query? Isn't this burdensome? \r\n\r\nHow about:\r\nIf parallel unsafe objects are detected in the executor, then alter the table to parallel unsafe internally.\r\nSo, users do not need to alter it manually.\r\n\r\n> Instead, how about\r\n> postgres retries the query upon detecting the error that came from a parallel\r\n> unsafe function during execution, disable parallelism and run the query? I think\r\n> this kind of retry query feature can be built outside of the core postgres, but\r\n> IMO it will be good to have inside (of course configurable). IIRC, the Teradata\r\n> database has a Query Retry feature.\r\n> \r\n\r\nThanks for the suggestion.\r\nThe retry query feature sounds like a good idea to me.\r\nOTOH, it sounds more like an independent feature from which parallel select can also benefit.\r\nI think maybe we can try to achieve it after we commit the parallel insert?\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Mon, 26 Apr 2021 01:30:18 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 7:00 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > Instead, how about\n> > postgres retries the query upon detecting the error that came from a parallel\n> > unsafe function during execution, disable parallelism and run the query? I think\n> > this kind of retry query feature can be built outside of the core postgres, but\n> > IMO it will be good to have inside (of course configurable). IIRC, the Teradata\n> > database has a Query Retry feature.\n> >\n>\n> Thanks for the suggestion.\n> The retry query feature sounds like a good idea to me.\n> OTOH, it sounds more like an independent feature which parallel select can also benefit from it.\n>\n\n+1. I also think retrying a query on an error is not related to this\nfeature and should be built separately if required.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Apr 2021 11:47:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 7:00 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > > Based on above, we plan to move forward with the apporache 2) (declarative\n> > idea).\n> >\n> > IIUC, the declarative behaviour idea attributes parallel safe/unsafe/restricted\n> > tags to each table with default being the unsafe. Does it mean for a parallel\n> > unsafe table, no parallel selects, inserts (may be updates) will be picked up? Or\n> > is it only the parallel inserts? If both parallel inserts, selects will be picked, then\n> > the existing tables need to be adjusted to set the parallel safety tags while\n> > migrating?\n>\n> Thanks for looking into this.\n\nThanks for the responses.\n\n> The parallel attributes in table means the parallel safety when user does some data-modification operations on it.\n> So, It only limit the use of parallel plan when using INSERT/UPDATE/DELETE.\n\nIn that case, isn't it better to use the terminology \"PARALLEL DML\nSAFE/UNSAFE/RESTRICTED\" in the code and docs? This way, it will be\nclear that these tags don't affect parallel selects.\n\n> > Another point, what does it mean a table being parallel restricted?\n> > What should happen if it is present in a query of other parallel safe tables?\n>\n> If a table is parallel restricted, it means the table contains some parallel restricted objects(such as: parallel restricted functions in index expressions).\n> And in planner, it means parallel insert plan will not be chosen, but it can use parallel select(with serial insert).\n\nMakes sense. I assume that when there is a parallel restricted\nfunction associated with a table, the current design doesn't force\nthe planner to choose parallel select and it is left up to it.\n\n> > I may be wrong here: IIUC, the main problem we are trying to solve with the\n> > declarative approach is to let the user decide parallel safety for partition tables\n> > as it may be costlier for postgres to determine it. 
And for the normal tables we\n> > can perform parallel safety checks without incurring much cost. So, I think we\n> > should restrict the declarative approach to only partitioned tables?\n>\n> Yes, we are tring to avoid overhead when checking parallel safety.\n> The cost to check all the partition's parallel safety is the biggest one.\n> Another is the safety check of index's expression.\n> Currently, for INSERT, the planner does not open the target table's indexinfo and does not\n> parse the expression of the index. We need to parse the expression in planner if we want\n> to do parallel safety check for it which can bring some overhead(it will open the index the do the parse in executor again).\n> So, we plan to skip all of the extra check and let user take responsibility for the safety.\n> Of course, maybe we can try to pass the indexinfo to the executor but it need some further refactor and I will take a look into it.\n\nWill the planner parse and check parallel safety of index (where\nclause) expressions in case of SELECTs? I'm not sure of this. But if\nit does, maybe we could do the same thing for parallel DML as well for\nnormal tables? What is the overhead of parsing index expressions? If\nthe cost is heavy for checking index expressions' parallel safety in\ncase of normal tables, then the current design, i.e. attributing a\nparallel safety tag to all the tables, makes sense.\n\nI was actually thinking that we will have the declarative approach\nonly for partitioned tables as it is the main problem we are trying to\nsolve with this design. Something like: users will run\npg_get_parallel_safety to see the parallel unsafe objects associated\nwith a partitioned table by looking at all of its partitions and be\nable to set a parallel dml safety tag on only partitioned tables.\n\n> While reading the design, I came across this \"erroring out during execution of a\n> query when a parallel unsafe function is detected\". 
If this is correct, isn't it\n> > warranting users to run pg_get_parallel_safety to know the parallel unsafe\n> > objects, set parallel safety to all of them if possible, otherwise disable\n> > parallelism to run the query? Isn't this burdensome?\n>\n> How about:\n> If detecting parallel unsafe objects in executor, then, alter the table to parallel unsafe internally.\n> So, user do not need to alter it manually.\n\nI don't think this is a good idea, because if there are multiple\ntables involved in the query, do you alter all the tables? Usually, we\nerror out on finding the first such unsafe object.\n\n> > Instead, how about\n> > postgres retries the query upon detecting the error that came from a parallel\n> > unsafe function during execution, disable parallelism and run the query? I think\n> > this kind of retry query feature can be built outside of the core postgres, but\n> > IMO it will be good to have inside (of course configurable). IIRC, the Teradata\n> > database has a Query Retry feature.\n> >\n>\n> Thanks for the suggestion.\n> The retry query feature sounds like a good idea to me.\n> OTOH, it sounds more like an independent feature which parallel select can also benefit from it.\n> I think maybe we can try to achieve it after we commit the parallel insert ?\n\nYeah, it will be a separate thing altogether.\n\n>0001: provide a utility function pg_get_parallel_safety('table_name').\n>\n>The function returns records of (objid, classid, parallel_safety) that represent\n>the parallel safety of objects that determine the parallel safety of the specified table.\n>Note: The function only outputs objects that are not parallel safe.\n\nIf it returns only parallel \"unsafe\" objects and not \"safe\" or\n\"restricted\" objects, how about naming it\npg_get_table_parallel_unsafe_objects(\"table_name\")? This way we could\nget rid of parallel_safety in the output record? 
If at all users want\nto see parallel restricted or safe objects, we can also have the\ncounterparts pg_get_table_parallel_safe_objects and\npg_get_table_parallel_restricted_objects. Of course, we can caution\nthe user that execution of these functions might take longer and\n\"might consume excessive memory while accumulating the output\".\n\nOtherwise, we can have a single function\npg_get_parallel_safety(\"table_name\" IN, \"parallel_safety\" IN, \"objid\"\nOUT, \"classid\" OUT)? If required, we could name it\npg_get_parallel_safety_of_table_objects.\n\nThoughts?\n\nAlthough I have not looked at the patches, a few questions on the\npg_get_parallel_safety function:\n1) Will it parse all the expressions for the objects that are listed\nunder \"The objects that relate to the parallel safety of a DML target\ntable are as follows:\" upthread?\n2) How will it behave if a partitioned table is passed to it? Will it\nrecurse for all the partitions?\n3) How will it behave if a foreign table is passed to it? Will it error out?\n\nIn general:\n1) Is ALTER SET PARALLEL SAFETY on a partitioned table allowed? If\nyes, will it be set based on all the partitions' parallel safety?\n2) How will users have to decide on parallel safety of a foreign table\nor a partitioned table with foreign partitions? Or is it that we set\nthese tables parallel unsafe and don't do parallel inserts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Apr 2021 14:56:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > The parallel attributes in table means the parallel safety when user does some\r\n> data-modification operations on it.\r\n> > So, It only limit the use of parallel plan when using INSERT/UPDATE/DELETE.\r\n> \r\n> In that case, isn't it better to use the terminology \"PARALLEL DML\r\n> SAFE/UNSAFE/RESTRICTED\" in the code and docs? This way, it will be clear that\r\n> these tags don't affect parallel selects.\r\n\r\nMakes sense, I recall I have heard a similar suggestion before.\r\nIf there are no other objections, I plan to change the command to\r\nPARALLEL DML in my next version of the patches.\r\n\r\n> > > I may be wrong here: IIUC, the main problem we are trying to solve\r\n> > > with the declarative approach is to let the user decide parallel\r\n> > > safety for partition tables as it may be costlier for postgres to\r\n> > > determine it. And for the normal tables we can perform parallel\r\n> > > safety checks without incurring much cost. So, I think we should restrict the\r\n> declarative approach to only partitioned tables?\r\n> >\r\n> > Yes, we are tring to avoid overhead when checking parallel safety.\r\n> > The cost to check all the partition's parallel safety is the biggest one.\r\n> > Another is the safety check of index's expression.\r\n> > Currently, for INSERT, the planner does not open the target table's\r\n> > indexinfo and does not parse the expression of the index. We need to\r\n> > parse the expression in planner if we want to do parallel safety check for it\r\n> which can bring some overhead(it will open the index the do the parse in\r\n> executor again).\r\n> > So, we plan to skip all of the extra check and let user take responsibility for\r\n> the safety.\r\n> > Of course, maybe we can try to pass the indexinfo to the executor but it need\r\n> some further refactor and I will take a look into it.\r\n> \r\n> Will the planner parse and check parallel safety of index((where\r\n> clause) expressions in case of SELECTs? 
I'm not sure of this. But if it does, maybe\r\n> we could do the same thing for parallel DML as well for normal tables? \r\n\r\nThe planner does not check the index expression, because the expression will not be used in SELECT.\r\nI think the expression is only used when a tuple is inserted into the index.\r\n\r\n> the overhead of parsing index expressions? If the cost is heavy for checking\r\n> index expressions parallel safety in case of normal tables, then the current\r\n> design i.e. attributing parallel safety tag to all the tables makes sense.\r\n\r\nCurrently, index expressions and predicates are stored in text format.\r\nWe need to use stringToNode(expression/predicate) to parse them.\r\nSome committers think doing this twice does not look good,\r\nunless we find some way to pass the parsed info to the executor to avoid the second parse.\r\n\r\n> I was actually thinking that we will have the declarative approach only for\r\n> partitioned tables as it is the main problem we are trying to solve with this\r\n> design. 
Something like: users will run pg_get_parallel_safety to see the parallel\r\n> unsafe objects associated with a partitioned table by looking at all of its\r\n> partitions and be able to set a parallel dml safety tag to only partitioned tables.\r\n\r\nWe originally wanted to take the declarative approach for both normal and partitioned tables.\r\nIn this way, it will not bring any overhead to the planner and looks consistent.\r\nBut, do you think we should put some really cheap safety check in the planner?\r\n\r\n> \r\n> >0001: provide a utility function pg_get_parallel_safety('table_name').\r\n> >\r\n> >The function returns records of (objid, classid, parallel_safety) that\r\n> >represent the parallel safety of objects that determine the parallel safety of\r\n> the specified table.\r\n> >Note: The function only outputs objects that are not parallel safe.\r\n> \r\n> If it returns only parallel \"unsafe\" objects and not \"safe\" or \"restricted\" objects,\r\n> how about naming it to pg_get_table_parallel_unsafe_objects(\"table_name\")?\r\n\r\nCurrently, the function returns both unsafe and restricted objects (I thought restricted is also not safe),\r\nbecause we thought users only care about the objects that affect the use of a parallel plan.\r\n\r\n> This way we could get rid of parallel_safety in the output record? If at all users\r\n> want to see parallel restricted or safe objects, we can also have the\r\n> counterparts pg_get_table_parallel_safe_objects and\r\n> pg_get_table_parallel_restricted_objects. Of course, we can caution the user\r\n> that execution of these functions might take longer and \"might consume\r\n> excessive memory while accumulating the output\".\r\n> \r\n> Otherwise, we can have a single function pg_get_parallel_safety(\"table_name\"\r\n> IN, \"parallel_safety\" IN, \"objid\"\r\n> OUT, \"classid\" OUT)? 
If required, we could name it\r\n> pg_get_parallel_safety_of_table_objects.\r\n> \r\n> Thoughts?\r\n\r\nI am not sure users will want to get the safe objects; do you have some use cases?\r\nIf users do not care about the safe objects, I think they can use\r\n\"SELECT * FROM pg_get_parallel_safety() where parallel_safety = 'specified safety' \" to get the specified objects.\r\n\r\n> Although, I have not looked at the patches, few questions on\r\n> pg_get_parallel_safety function:\r\n> 1) Will it parse all the expressions for the objects that are listed under \"The\r\n> objects that relate to the parallel safety of a DML target table are as follows:\" in\r\n> the upthread?\r\n\r\nYes.\r\nBut some parsed expressions (such as a domain type's expression) can be found in the type cache,\r\nso we just check the safety of these already-parsed expressions.\r\n\r\n> 2) How will it behave if a partitioned table is passed to it? Will it recurse for all\r\n> the partitions?\r\n\r\nYes, because both the parent table and the child tables will be inserted into, and the\r\nparallel-related objects on them will be executed. If users want to make sure the parallel insert succeeds,\r\nthey need to check all the objects.\r\n\r\n> 3) How will it behave if a foreign table is passed to it? Will it error out?\r\n\r\nIt currently does not error out.\r\nIt will also check the objects on it and return the ones that are not safe.\r\nNote: I consider a foreign table itself a parallel restricted object, because it does not support a parallel-insert FDW API for now.\r\n\r\n> In general:\r\n> 1) Is ALTER SET PARALLEL SAFETY on a partitioned table allowed?\r\n\r\nYes.\r\n\r\n> If yes, will it be\r\n> set based on all the partitions parallel safety?\r\n\r\nYes,\r\nbut the ALTER PARALLEL command itself does not check all the partitions' safety flags.\r\nThe function pg_get_parallel_safety can return all the partitions' unsafe objects, and the\r\nuser should set the parallel safety based on the function's result. 
\r\n\r\n> 2) How will users have to decide on parallel safety of a foreign table or a\r\n> partitioned table with foreign partitions? Or is it that we set these tables\r\n> parallel unsafe and don't do parallel inserts?\r\n\r\nA foreign table itself is considered parallel restricted,\r\nbecause we do not support a parallel-insert FDW API for now.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Mon, 26 Apr 2021 11:26:29 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 4:56 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> > > The parallel attributes in table means the parallel safety when user does some\n> > data-modification operations on it.\n> > > So, It only limit the use of parallel plan when using INSERT/UPDATE/DELETE.\n> >\n> > In that case, isn't it better to use the terminology \"PARALLEL DML\n> > SAFE/UNSAFE/RESTRICTED\" in the code and docs? This way, it will be clear that\n> > these tags don't affect parallel selects.\n>\n> Makes sense, I recalled I have heart similar suggestion before.\n> If there are no other objections, I plan to change command to\n> PARALLEL DML in my next version patches.\n\n+1 from me. Let's hear from others.\n\n> > Will the planner parse and check parallel safety of index((where\n> > clause) expressions in case of SELECTs? I'm not sure of this. But if it does, maybe\n> > we could do the same thing for parallel DML as well for normal tables?\n>\n> The planner does not check the index expression, because the expression will not be used in SELECT.\n> I think the expression is only used when a tuple inserted into the index.\n\nOh.\n\n> > the overhead of parsing index expressions? If the cost is heavy for checking\n> > index expressions parallel safety in case of normal tables, then the current\n> > design i.e. 
attributing parallel safety tag to all the tables makes sense.\n>\n> Currently, index expression and predicate are stored in text format.\n> We need to use stringToNode(expression/predicate) to parse it.\n> Some committers think doing this twice does not look good,\n> unless we found some ways to pass parsed info to the executor to avoid the second parse.\n\nHow much is the extra cost that's added if we do stringToNode twice?\nMaybe, we can check for a few sample test cases by just having\nstringToNode(expression/predicate) in the planner and see if it adds\nmuch to the total execution time of the query.\n\n> > I was actually thinking that we will have the declarative approach only for\n> > partitioned tables as it is the main problem we are trying to solve with this\n> > design. Something like: users will run pg_get_parallel_safety to see the parallel\n> > unsafe objects associated with a partitioned table by looking at all of its\n> > partitions and be able to set a parallel dml safety tag to only partitioned tables.\n>\n> We originally want to make the declarative approach for both normal and partitioned table.\n> In this way, it will not bring any overhead to planner and looks consistency.\n> But, do you think we should put some really cheap safety check to the planner ?\n\nI still feel: why shouldn't we limit the declarative approach to\nonly partitioned tables? And for normal tables, possibly with a\nminimal cost(??), the server can do the safety checking. I know this\nfeels a little inconsistent. 
In the planner we will have different\npaths like: if (partitioned_table) { check the parallel safety tag\nassociated with the table } else { check the parallel safety of the\nassociated objects }.\n\nOthers may have better thoughts on this.\n\n> > >0001: provide a utility function pg_get_parallel_safety('table_name').\n> > >\n> > >The function returns records of (objid, classid, parallel_safety) that\n> > >represent the parallel safety of objects that determine the parallel safety of\n> > the specified table.\n> > >Note: The function only outputs objects that are not parallel safe.\n> >\n> > If it returns only parallel \"unsafe\" objects and not \"safe\" or \"restricted\" objects,\n> > how about naming it to pg_get_table_parallel_unsafe_objects(\"table_name\")?\n>\n> Currently, the function returns both unsafe and restricted objects(I thought restricted is also not safe),\n> because we thought users only care about the objects that affect the use of parallel plan.\n\nHm.\n\n> > This way we could get rid of parallel_safety in the output record? If at all users\n> > want to see parallel restricted or safe objects, we can also have the\n> > counterparts pg_get_table_parallel_safe_objects and\n> > pg_get_table_parallel_restricted_objects. Of course, we can caution the user\n> > that execution of these functions might take longer and \"might consume\n> > excessive memory while accumulating the output\".\n> >\n> > Otherwise, we can have a single function pg_get_parallel_safety(\"table_name\"\n> > IN, \"parallel_safety\" IN, \"objid\"\n> > OUT, \"classid\" OUT)? 
If required, we could name it\n> > pg_get_parallel_safety_of_table_objects.\n> >\n> > Thoughts?\n>\n> I am not sure users will want to get safe objects; do you have some use cases?\n> If users do not care about the safe objects, I think they can use\n> \"SELECT * FROM pg_get_parallel_safety() where parallel_safety = 'specified safety' \" to get the specified objects.\n\nI don't know any practical scenarios, but if I'm a user, at least I\nwill be tempted to see the parallel safe objects associated with a\nparticular table along with unsafe and restricted ones. Others may\nhave better thoughts on this.\n\n> > Although, I have not looked at the patches, few questions on\n> > pg_get_parallel_safety function:\n> > 1) Will it parse all the expressions for the objects that are listed under \"The\n> > objects that relate to the parallel safety of a DML target table are as follows:\" in\n> > the upthread?\n>\n> Yes.\n> But some parsed expression(such as domain type's expression) can be found in typecache,\n> we just check the safety for these already parsed expression.\n\nOh.\n\n> > 2) How will it behave if a partitioned table is passed to it? Will it recurse for all\n> > the partitions?\n>\n> Yes, because both parent table and child table will be inserted and the parallel\n> related objects on them will be executed. If users want to make sure the parallel insert succeed,\n> they need to check all the objects.\n\nThen, running pg_get_parallel_safety will have some overhead if\nthere are many partitions associated with a table. And, this is the\noverhead the planner would have had to incur without the declarative\napproach, which we are trying to avoid with this design.\n\n> > 3) How will it behave if a foreign table is passed to it? 
Will it error out?\n>\n> It currently does not error out.\n> It will also check the objects on it and return not safe objects.\n> Note: I consider foreign table itself as a parallel restricted object, because it does not support parallel insert fdw api for now.\n>\n> > 2) How will users have to decide on parallel safety of a foreign table or a\n> > partitioned table with foreign partitions? Or is it that we set these tables\n> > parallel unsafe and don't do parallel inserts?\n>\n> Foreign table itself is considered as parallel restricted,\n> because we do not support parallel insert fdw api for now.\n\nMaybe, the ALTER TABLE ... SET PARALLEL on a foreign table should\nalways default to parallel restricted and emit a warning giving the reason?\n\n> > In general:\n> > 1) Is ALTER SET PARALLEL SAFETY on a partitioned table allowed?\n>\n> Yes.\n>\n> > If yes, will it be\n> > set based on all the partitions parallel safety?\n>\n> Yes,\n> But the ALTER PARALLEL command itself does not check all the partition's safety flag.\n> The function pg_get_parallel_safety can return all the partition's not safe objects,\n> user should set the parallel safety based the function's result.\n\nI'm thinking that when users say ALTER TABLE partitioned_table SET\nPARALLEL TO 'safe';, we check all the partitions' and their associated\nobjects' parallel safety? If all are parallel safe, only then do we set\npartitioned_table as parallel safe. What should happen if the parallel\nsafety of any of the associated objects/partitions changes after\nsetting the partitioned_table safety?\n\nMy understanding was that: the command ALTER TABLE ... SET PARALLEL TO\n'safe' will check the parallel safety of all the objects\nassociated with the table. If the objects are all parallel safe, then\nthe table will be set to safe. If at least one object is parallel\nunsafe or restricted, then the command will fail. 
I was also wondering\nhow the design will cope with situations such as the parallel\nsafety of any of the associated objects changing after setting the\ntable to parallel safe. The planner would have relied on the outdated\nparallel safety of the table and chosen parallel inserts, and the\nexecutor would catch such situations. Looks like my understanding was\nwrong.\n\nSo, the ALTER TABLE ... SET PARALLEL TO command just sets the target\ntable safety and doesn't bother about what the associated objects' safety is.\nIt just believes the user. If at all there are any parallel unsafe\nobjects, they will be caught by the executor. Just like when setting the parallel\nsafety of functions/aggregates, the docs caution users about\naccidentally/intentionally tagging parallel unsafe\nfunctions/aggregates as parallel safe.\n\nNote: I meant the table objects are the ones that are listed under\n\"The objects that relate to the parallel safety of a DML target table\nare as follows:\" upthread.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Apr 2021 06:20:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > Currently, index expression and predicate are stored in text format.\r\n> > We need to use stringToNode(expression/predicate) to parse it.\r\n> > Some committers think doing this twice does not look good, unless we\r\n> > found some ways to pass parsed info to the executor to avoid the second\r\n> parse.\r\n> \r\n> How much is the extra cost that's added if we do stringToNode twiice?\r\n> Maybe, we can check for a few sample test cases by just having\r\n> stringToNode(expression/predicate) in the planner and see if it adds much to\r\n> the total execution time of the query.\r\n\r\n\r\nOK, I will do some testing on it.\r\n\r\n> > Yes, because both parent table and child table will be inserted and\r\n> > the parallel related objects on them will be executed. If users want\r\n> > to make sure the parallel insert succeed, they need to check all the objects.\r\n> \r\n> Then, running the pg_get_parallel_safety will have some overhead if there are\r\n> many partitions associated with a table. And, this is the overhead planner\r\n> would have had to incur without the declarative approach which we are trying\r\n> to avoid with this design.\r\n\r\nYes, I think putting such overhead in a separate function is better than in a common path (the planner).\r\n\r\n> > Foreign table itself is considered as parallel restricted, because we\r\n> > do not support parallel insert fdw api for now.\r\n> \r\n> Maybe, the ALTER TABLE ... 
SET PARALLEL on a foreign table should default to\r\n> parallel restricted always and emit a warning saying the reason?\r\n\r\nThanks for the comment, I agree.\r\nI will change this in the next version of the patches.\r\n\r\n> > But the ALTER PARALLEL command itself does not check all the partition's\r\n> safety flag.\r\n> > The function pg_get_parallel_safety can return all the partition's not\r\n> > safe objects, user should set the parallel safety based the function's result.\r\n> \r\n> I'm thinking that when users say ALTER TABLE partioned_table SET PARALLEL\r\n> TO 'safe';, we check all the partitions' and their associated objects' parallel\r\n> safety? If all are parallel safe, then only we set partitioned_table as parallel safe.\r\n> What should happen if the parallel safety of any of the associated\r\n> objects/partitions changes after setting the partitioned_table safety?\r\n\r\nCurrently, nothing happens if any of the associated objects/partitions changes after setting the partitioned_table safety.\r\nBecause we do not have a really cheap way to catch the change. The existing relcache does not work because ALTER FUNCTION\r\ndoes not invalidate the relcache entries of the tables the function belongs to. And it will bring some other overhead (locking, systable scans, ...)\r\nto find the tables the objects belong to.\r\n\r\n> My understanding was that: the command ALTER TABLE ... SET PARALLEL TO\r\n> 'safe' work will check the parallel safety of all the objects associated with the\r\n> table. If the objects are all parallel safe, then the table will be set to safe. If at\r\n> least one object is parallel unsafe or restricted, then the command will fail.\r\n\r\nI think this idea makes sense. 
Some details of the design can be improved.\r\nI agree with you that we can try to check all the partitions' and their associated objects' parallel safety when ALTER PARALLEL.\r\nBecause it's a separate new command, adding some overhead to it seems not too bad.\r\nIf there are no other objections, I plan to add a safety check to the ALTER PARALLEL command.\r\n\r\n> also thinking that how will the design cope with situations such as the parallel\r\n> safety of any of the associated objects changing after setting the table to\r\n> parallel safe. The planner would have relied on the outdated parallel safety of\r\n> the table and chosen parallel inserts and the executor will catch such situations.\r\n> Looks like my understanding was wrong.\r\n\r\nCurrently, we assume the user is responsible for its correctness.\r\nBecause, from our research, when the parallel safety of some of these objects is changed,\r\nit's costly to reflect it on the parallel safety of the tables that depend on them.\r\n(we need to scan pg_depend, pg_inherits, pg_index, ... to find the target tables)\r\n\r\n> So, the ALTER TABLE ... SET PARALLEL TO command just sets the target table\r\n> safety, doesn't bother what the associated objects' safety is.\r\n> It just believes the user. If at all there are any parallel unsafe objects it will be\r\n> caught by the executor. Just like, setting parallel safety of the\r\n> functions/aggregates, the docs caution users about accidentally/intentionally\r\n> tagging parallel unsafe functions/aggregates as parallel safe.\r\n\r\nYes, thanks for looking into this.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Tue, 27 Apr 2021 02:09:01 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 10:51 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>\n> I still feel that why we shouldn't limit the declarative approach to\n> only partitioned tables? And for normal tables, possibly with a\n> minimal cost(??), the server can do the safety checking. I know this\n> feels a little inconsistent. In the planner we will have different\n> paths like: if (partitioned_table) { check the parallel safety tag\n> associated with the table } else { perform the parallel safety of the\n> associated objects }.\n>\n\nPersonally I think the simplest and best approach is just do it\nconsistently, using the declarative approach across all table types.\n\n>\n> Then, running the pg_get_parallel_safety will have some overhead if\n> there are many partitions associated with a table. And, this is the\n> overhead planner would have had to incur without the declarative\n> approach which we are trying to avoid with this design.\n>\n\nThe big difference is that pg_get_parallel_safety() is intended to be\nused during development, not as part of runtime parallel-safety checks\n(which are avoided using the declarative approach).\nSo there is no runtime overhead associated with pg_get_parallel_safety().\n\n>\n> I'm thinking that when users say ALTER TABLE partioned_table SET\n> PARALLEL TO 'safe';, we check all the partitions' and their associated\n> objects' parallel safety? If all are parallel safe, then only we set\n> partitioned_table as parallel safe. What should happen if the parallel\n> safety of any of the associated objects/partitions changes after\n> setting the partitioned_table safety?\n>\n\nWith the declarative approach, there is no parallel-safety checking on\neither the CREATE/ALTER when the parallel-safety is declared/set.\nIt's up to the user to get it right. If it's actually wrong, it will\nbe detected at runtime.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 27 Apr 2021 12:15:36 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 7:45 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 10:51 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> >\n> > I still feel that why we shouldn't limit the declarative approach to\n> > only partitioned tables? And for normal tables, possibly with a\n> > minimal cost(??), the server can do the safety checking. I know this\n> > feels a little inconsistent. In the planner we will have different\n> > paths like: if (partitioned_table) { check the parallel safety tag\n> > associated with the table } else { perform the parallel safety of the\n> > associated objects }.\n> >\n>\n> Personally I think the simplest and best approach is just do it\n> consistently, using the declarative approach across all table types.\n>\n\nYeah, if we decide to go with a declarative approach then that sounds\nlike a better approach.\n\n> >\n> > Then, running the pg_get_parallel_safety will have some overhead if\n> > there are many partitions associated with a table. And, this is the\n> > overhead planner would have had to incur without the declarative\n> > approach which we are trying to avoid with this design.\n> >\n>\n> The big difference is that pg_get_parallel_safety() is intended to be\n> used during development, not as part of runtime parallel-safety checks\n> (which are avoided using the declarative approach).\n> So there is no runtime overhead associated with pg_get_parallel_safety().\n>\n> >\n> > I'm thinking that when users say ALTER TABLE partioned_table SET\n> > PARALLEL TO 'safe';, we check all the partitions' and their associated\n> > objects' parallel safety? If all are parallel safe, then only we set\n> > partitioned_table as parallel safe. What should happen if the parallel\n> > safety of any of the associated objects/partitions changes after\n> > setting the partitioned_table safety?\n> >\n>\n> With the declarative approach, there is no parallel-safety checking on\n> either the CREATE/ALTER when the parallel-safety is declared/set.\n> It's up to the user to get it right. If it's actually wrong, it will\n> be detected at runtime.\n>\n\nOTOH, even if we want to verify at DDL time, we won't be able to\nmaintain it at the later point of time say if user changed the\nparallel-safety of some function used in check constraint. I think the\nfunction pg_get_parallel_safety() will help the user to decide whether\nit can declare table parallel-safe. Now, it is quite possible that the\nuser can later change the parallel-safe property of some function then\nthat should be caught at runtime.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Apr 2021 08:12:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 7:39 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> > I'm thinking that when users say ALTER TABLE partioned_table SET PARALLEL\n> > TO 'safe';, we check all the partitions' and their associated objects' parallel\n> > safety? If all are parallel safe, then only we set partitioned_table as parallel safe.\n> > What should happen if the parallel safety of any of the associated\n> > objects/partitions changes after setting the partitioned_table safety?\n>\n> Currently, nothing happened if any of the associated objects/partitions changes after setting the partitioned_table safety.\n> Because , we do not have a really cheap way to catch the change. The existing relcache does not work because alter function\n> does not invalid the relcache which the function belongs to. And it will bring some other overhead(locking, systable scan,...)\n> to find the table the objects belong to.\n\nMakes sense. Anyways, the user is responsible for such changes and\notherwise the executor can catch them at run time, if not, the users\nwill see unintended consequences.\n\n> > My understanding was that: the command ALTER TABLE ... SET PARALLEL TO\n> > 'safe' work will check the parallel safety of all the objects associated with the\n> > table. If the objects are all parallel safe, then the table will be set to safe. If at\n> > least one object is parallel unsafe or restricted, then the command will fail.\n>\n> I think this idea makes sense. Some detail of the designed can be improved.\n> I agree with you that we can try to check check all the partitions' and their associated objects' parallel safety when ALTER PARALLEL.\n> Because it's a separate new command, add some overhead to it seems not too bad.\n> If there are no other objections, I plan to add safety check in the ALTER PARALLEL command.\n\nMaybe we can make the parallel safety check of the associated\nobjects/partitions optional for CREATE/ALTER DDLs, with the default\nbeing no checks performed. Both Greg and Amit agree that we don't have\nto perform any parallel safety checks during CREATE/ALTER DDLs.\n\n> > also thinking that how will the design cope with situations such as the parallel\n> > safety of any of the associated objects changing after setting the table to\n> > parallel safe. The planner would have relied on the outdated parallel safety of\n> > the table and chosen parallel inserts and the executor will catch such situations.\n> > Looks like my understanding was wrong.\n>\n> Currently, we assume user is responsible for its correctness.\n> Because, from our research, when the parallel safety of some of these objects is changed,\n> it's costly to reflect it on the parallel safety of tables that depend on them.\n> (we need to scan the pg_depend,pg_inherit,pg_index.... to find the target table)\n\nAgree.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Apr 2021 09:04:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 7:45 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 10:51 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> >\n> > I still feel that why we shouldn't limit the declarative approach to\n> > only partitioned tables? And for normal tables, possibly with a\n> > minimal cost(??), the server can do the safety checking. I know this\n> > feels a little inconsistent. In the planner we will have different\n> > paths like: if (partitioned_table) { check the parallel safety tag\n> > associated with the table } else { perform the parallel safety of the\n> > associated objects }.\n> >\n>\n> Personally I think the simplest and best approach is just do it\n> consistently, using the declarative approach across all table types.\n\nAgree.\n\n> > Then, running the pg_get_parallel_safety will have some overhead if\n> > there are many partitions associated with a table. And, this is the\n> > overhead planner would have had to incur without the declarative\n> > approach which we are trying to avoid with this design.\n> >\n>\n> The big difference is that pg_get_parallel_safety() is intended to be\n> used during development, not as part of runtime parallel-safety checks\n> (which are avoided using the declarative approach).\n> So there is no runtime overhead associated with pg_get_parallel_safety().\n\nYes, while we avoid runtime overhead, but we run the risk of changed\nparallel safety of any of the underlying objects/functions/partitions.\nThis risk will anyways be unavoidable with declarative approach.\n\n> > I'm thinking that when users say ALTER TABLE partioned_table SET\n> > PARALLEL TO 'safe';, we check all the partitions' and their associated\n> > objects' parallel safety? If all are parallel safe, then only we set\n> > partitioned_table as parallel safe. What should happen if the parallel\n> > safety of any of the associated objects/partitions changes after\n> > setting the partitioned_table safety?\n> >\n>\n> With the declarative approach, there is no parallel-safety checking on\n> either the CREATE/ALTER when the parallel-safety is declared/set.\n> It's up to the user to get it right. If it's actually wrong, it will\n> be detected at runtime.\n\nAs I said upthread, we can provide the parallel safety check of\nassociated objects/partitions as an option with default as false. I'm\nnot sure if this is a good thing to do at all. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Apr 2021 09:10:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 8:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Apr 27, 2021 at 7:45 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 10:51 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > >\n> > > I still feel that why we shouldn't limit the declarative approach to\n> > > only partitioned tables? And for normal tables, possibly with a\n> > > minimal cost(??), the server can do the safety checking. I know this\n> > > feels a little inconsistent. In the planner we will have different\n> > > paths like: if (partitioned_table) { check the parallel safety tag\n> > > associated with the table } else { perform the parallel safety of the\n> > > associated objects }.\n> > >\n> >\n> > Personally I think the simplest and best approach is just do it\n> > consistently, using the declarative approach across all table types.\n> >\n>\n> Yeah, if we decide to go with a declarative approach then that sounds\n> like a better approach.\n\nAgree.\n\n> > > Then, running the pg_get_parallel_safety will have some overhead if\n> > > there are many partitions associated with a table. And, this is the\n> > > overhead planner would have had to incur without the declarative\n> > > approach which we are trying to avoid with this design.\n> > >\n> >\n> > The big difference is that pg_get_parallel_safety() is intended to be\n> > used during development, not as part of runtime parallel-safety checks\n> > (which are avoided using the declarative approach).\n> > So there is no runtime overhead associated with pg_get_parallel_safety().\n> >\n> > >\n> > > I'm thinking that when users say ALTER TABLE partioned_table SET\n> > > PARALLEL TO 'safe';, we check all the partitions' and their associated\n> > > objects' parallel safety? If all are parallel safe, then only we set\n> > > partitioned_table as parallel safe. What should happen if the parallel\n> > > safety of any of the associated objects/partitions changes after\n> > > setting the partitioned_table safety?\n> > >\n> >\n> > With the declarative approach, there is no parallel-safety checking on\n> > either the CREATE/ALTER when the parallel-safety is declared/set.\n> > It's up to the user to get it right. If it's actually wrong, it will\n> > be detected at runtime.\n> >\n>\n> OTOH, even if we want to verify at DDL time, we won't be able to\n> maintain it at the later point of time say if user changed the\n> parallel-safety of some function used in check constraint. I think the\n> function pg_get_parallel_safety() will help the user to decide whether\n> it can declare table parallel-safe. Now, it is quite possible that the\n> user can later change the parallel-safe property of some function then\n> that should be caught at runtime.\n\nYeah, this is an unavoidable problem with the declarative approach.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Apr 2021 09:11:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "Hi,\r\n\r\nAttaching new version patches with the following change:\r\n\r\n0002:\r\n1): Change \"ALTER/CREATE TABLE PARALLEL SAFE\" to \"ALTER/CREATE TABLE PARALLEL DML SAFE\"\r\n2): disallow temp/foreign table to be parallel safe.\r\n\r\n0003:\r\n1) Temporarily, add the check of built-in function by adding a member for proparallel in FmgrBuiltin.\r\nTo avoid enlarging FmgrBuiltin struct , change the existing bool members strict and and retset into\r\none member of type char, and represent the original values with some bit flags.\r\n\r\nNote: this will lock down the parallel property of built-in function, but, I think the parallel safety of built-in function\r\nis related to the C code, user should not change the property of it unless they change its code. So, I think it might be\r\nbetter to disallow changing parallel safety for built-in functions, Thoughts ?\r\n\r\nI have not added the parallel safety check in ALTER/CREATE table PARALLEL DML SAFE command.\r\nI think it seems better to add it after some more discussion.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Wed, 28 Apr 2021 02:44:18 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Wed, Apr 28, 2021 at 12:44 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> 0003:\n> 1) Temporarily, add the check of built-in function by adding a member for proparallel in FmgrBuiltin.\n> To avoid enlarging FmgrBuiltin struct , change the existing bool members strict and and retset into\n> one member of type char, and represent the original values with some bit flags.\n>\n\nI was thinking that it would be better to replace the two bool members\nwith one \"unsigned char\" member for the bitflags for strict and\nretset, and another \"char\" member for parallel.\nThe struct would still remain the same size as it originally was, and\nyou wouldn't need to convert between bit settings and char\n('u'/'r'/'s') each time a built-in function was checked for\nparallel-safety in fmgr_info().\n\n> Note: this will lock down the parallel property of built-in function, but, I think the parallel safety of built-in function\n> is related to the C code, user should not change the property of it unless they change its code. So, I think it might be\n> better to disallow changing parallel safety for built-in functions, Thoughts ?\n>\n\nI'd vote for disallowing it (unless someone can justify why it\ncurrently is allowed).\n\n> I have not added the parallel safety check in ALTER/CREATE table PARALLEL DML SAFE command.\n> I think it seems better to add it after some more discussion.\n>\n\nI'd vote for not adding such a check (as this is a declaration).\n\n\nSome additional comments:\n\n1) In patch 0002 comment, it says:\nThis property is recorded in pg_class's relparallel column as 'u', 'r', or 's',\njust like pg_proc's proparallel. The default is UNSAFE.\nIt should say \"relparalleldml\" column.\n\n2) With the patches applied, I seem to be getting a couple of errors\nwhen running \"make installcheck-world\" with\nforce_parallel_mode=regress in effect.\nCan you please try it?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 28 Apr 2021 20:26:22 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 6:52 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n>\n> SOLUTION TO (1)\n> ========================================\n>\n> The candidate ideas are:\n>\n> 1) Caching the result of parallel-safety check\n> The planner stores the result of checking parallel safety for each relation in relcache, or some purpose-built hash table in shared memory.\n>\n> The problems are:\n>\n> * Even if the target relation turns out to be parallel safe by looking at those data structures, we cannot assume it remains true until the SQL statement finishes. For instance, other sessions might add a parallel-unsafe index to its descendant relations. Other examples include that when the user changes the parallel safety of indexes or triggers by running ALTER FUNCTION on the underlying index AM function or trigger function, the relcache entry of the table or index is not invalidated, so the correct parallel safety is not maintained in the cache.\n> In that case, when the executor encounters a parallel-unsafe object, it can change the cached state as being parallel-unsafe and error out.\n>\n> * Can't ensure fast access. With relcache, the first access in each session has to undergo the overhead of parallel-safety check. With a hash table in shared memory, the number of relations stored there would be limited, so the first access after database startup or the hash table entry eviction similarly experiences slowness.\n>\n> * With a new hash table, some lwlock for concurrent access must be added, which can have an adverse effect on performance.\n>\n>\n> 2) Enabling users to declare that the table allows parallel data modification\n> Add a table property that represents parallel safety of the table for DML statement execution. Users specify it as follows:\n>\n> CREATE TABLE table_name (...) PARALLEL { UNSAFE | RESTRICTED | SAFE };\n> ALTER TABLE table_name PARALLEL { UNSAFE | RESTRICTED | SAFE };\n>\n> This property is recorded in pg_class's relparallel column as 'u', 'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.\n>\n\nSo, in short, if we need to go with any sort of solution with caching,\nwe can't avoid\n(a) locking all the partitions\n(b) getting an error while execution because at a later point user has\naltered the parallel-safe property of a relation.\n\nWe can't avoid locking all the partitions because while we are\nexecuting the statement, the user can change the parallel-safety for\none of the partitions by changing a particular partition and if we\ndidn't have a lock on that partition, it will lead to an error during\nexecution. Now, here, one option could be that we document this point\nand then don't take lock on any of the partitions except for root\ntable. So, the design would be simpler, that we either cache the\nparallel-safe in relcache or shared hash table and just lock the\nparent table and perform all parallel-safety checks for the first\ntime.\n\nI think if we want to go with the behavior that we will error out\nduring statement execution if any parallel-safe property is changed at\nrun-time, it is better to go with the declarative approach. In the\ndeclarative approach, at least the user will be responsible for taking\nany such decision and the chances of toggling the parallel-safe\nproperty will be less. To aid users, as suggested, we can provide a\nfunction to determine parallel-safety of relation for DML operations.\n\nNow, in the declarative approach, we can either go with whatever the\nuser has mentioned or we can do some checks during DDL to determine\nthe actual parallel-safety. I think even if try to determine\nparallel-safety during DDL it will be quite tricky in some cases, like\nwhen a user performs Alter Function to change parallel-safety of the\nfunction used in some constraint for the table or if the user changes\nparallel-safety of one of the partition then we need to traverse the\npartition hierarchy upwards which doesn't seem advisable. So, I guess\nit is better to go with whatever the user has mentioned but if you or\nothers feel we can have some sort of parallel-safety checks during DDL\nas well.\n\n> The planner assumes that all of the table, its descendant partitions, and their ancillary objects have the specified parallel safety or safer one. The user is responsible for its correctness. If the parallel processes find an object that is less safer than the assumed parallel safety during statement execution, it throws an ERROR and abort the statement execution.\n>\n> The objects that relate to the parallel safety of a DML target table are as follows:\n>\n> * Column default expression\n> * DOMAIN type CHECK expression\n> * CHECK constraints on column\n> * Partition key\n> * Partition key support function\n> * Index expression\n> * Index predicate\n> * Index AM function\n> * Operator function\n> * Trigger function\n>\n> When the parallel safety of some of these objects is changed, it's costly to reflect it on the parallel safety of tables that depend on them. So, we don't do it. Instead, we provide a utility function pg_get_parallel_safety('table_name') that returns records of (objid, classid, parallel_safety) that represent the parallel safety of objects that determine the parallel safety of the specified table. The function only outputs objects that are not parallel safe.\n>\n\nSo, users need to check count(*) for this to determine\nparallel-safety? How about if we provide a wrapper function on top of\nthis function or a separate function that returns char to indicate\nwhether it is safe, unsafe, or restricted to perform a DML operation\non the table?\n\n> How does the executor detect parallel unsafe objects? There are two ways:\n>\n> 1) At loading time\n> When the executor loads the definition of objects (tables, constraints, index, triggers, etc.) during the first access to them after session start or their eviction by sinval message, it checks the parallel safety.\n>\n> This is a legitimate way, but may need much code. Also, it might overlook necessary code changes without careful inspection.\n>\n\nIf we want to go with a declarative approach, then I think we should\ntry to do this because it will be almost free in some cases and we can\ndetect error early. For example, when we decide to insert in a\npartition that is declared unsafe whereas the root (partitioned) table\nis declared safe.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 May 2021 11:20:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, May 10, 2021 at 11:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Now, in the declarative approach, we can either go with whatever the\n> user has mentioned or we can do some checks during DDL to determine\n> the actual parallel-safety. I think even if try to determine\n> parallel-safety during DDL it will be quite tricky in some cases, like\n> when a user performs Alter Function to change parallel-safety of the\n> function used in some constraint for the table or if the user changes\n> parallel-safety of one of the partition then we need to traverse the\n> partition hierarchy upwards which doesn't seem advisable. So, I guess\n> it is better to go with whatever the user has mentioned but if you or\n> others feel we can have some sort of parallel-safety checks during DDL\n> as well.\n\nIMHO, it makes sense to go with what the user has declared to avoid\ncomplexity. And, I don't see any problem with that.\n\n> > The planner assumes that all of the table, its descendant partitions, and their ancillary objects have the specified parallel safety or safer one. The user is responsible for its correctness. If the parallel processes find an object that is less safer than the assumed parallel safety during statement execution, it throws an ERROR and abort the statement execution.\n> >\n> > The objects that relate to the parallel safety of a DML target table are as follows:\n> >\n> > * Column default expression\n> > * DOMAIN type CHECK expression\n> > * CHECK constraints on column\n> > * Partition key\n> > * Partition key support function\n> > * Index expression\n> > * Index predicate\n> > * Index AM function\n> > * Operator function\n> > * Trigger function\n> >\n> > When the parallel safety of some of these objects is changed, it's costly to reflect it on the parallel safety of tables that depend on them. So, we don't do it. Instead, we provide a utility function pg_get_parallel_safety('table_name') that returns records of (objid, classid, parallel_safety) that represent the parallel safety of objects that determine the parallel safety of the specified table. The function only outputs objects that are not parallel safe.\n> >\n>\n> So, users need to check count(*) for this to determine\n> parallel-safety? How about if we provide a wrapper function on top of\n> this function or a separate function that returns char to indicate\n> whether it is safe, unsafe, or restricted to perform a DML operation\n> on the table?\n\nSuch wrapper function make sense.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 May 2021 11:31:58 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > > When the parallel safety of some of these objects is changed, it's costly to\r\n> reflect it on the parallel safety of tables that depend on them. So, we don't do\r\n> it. Instead, we provide a utility function pg_get_parallel_safety('table_name')\r\n> that returns records of (objid, classid, parallel_safety) that represent the\r\n> parallel safety of objects that determine the parallel safety of the specified\r\n> table. The function only outputs objects that are not parallel safe.\r\n> > >\r\n> >\r\n> > So, users need to check count(*) for this to determine\r\n> > parallel-safety? How about if we provide a wrapper function on top of\r\n> > this function or a separate function that returns char to indicate\r\n> > whether it is safe, unsafe, or restricted to perform a DML operation\r\n> > on the table?\r\n> \r\n> Such wrapper function make sense.\r\n\r\nThanks for the suggestion, and I agree.\r\nI will add another wrapper function and post new version patches soon.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Mon, 10 May 2021 06:46:54 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > > So, users need to check count(*) for this to determine\r\n> > > parallel-safety? How about if we provide a wrapper function on top\r\n> > > of this function or a separate function that returns char to\r\n> > > indicate whether it is safe, unsafe, or restricted to perform a DML\r\n> > > operation on the table?\r\n> >\r\n> > Such wrapper function make sense.\r\n> \r\n> Thanks for the suggestion, and I agree.\r\n> I will add another wrapper function and post new version patches soon.\r\n\r\nAttaching new version patches with the following changes:\r\n\r\n0001\r\nAdd a new function pg_get_max_parallel_hazard('table_name') returns char('s', 'u', 'r')\r\nwhich indicate whether it is safe, unsafe, or restricted to perform a DML.\r\n\r\n0003\r\nTemporarily, I removed the safety check for function in the executor.\r\nBecause we are trying to post the safety check as a separate patch which\r\ncan help detect parallel unsafe function in parallel mode, and the approach\r\nis still in discussion[1].\r\nComments and suggestions are welcome either in that thread[1] or here.\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB571646637784DAF1DD4C8BE994539%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Tue, 11 May 2021 12:40:53 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, May 11, 2021 at 6:11 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > > > So, users need to check count(*) for this to determine\n> > > > parallel-safety? How about if we provide a wrapper function on top\n> > > > of this function or a separate function that returns char to\n> > > > indicate whether it is safe, unsafe, or restricted to perform a DML\n> > > > operation on the table?\n> > >\n> > > Such wrapper function make sense.\n> >\n> > Thanks for the suggestion, and I agree.\n> > I will add another wrapper function and post new version patches soon.\n>\n> Attaching new version patches with the following changes:\n>\n> 0001\n> Add a new function pg_get_max_parallel_hazard('table_name') returns char('s', 'u', 'r')\n> which indicate whether it is safe, unsafe, or restricted to perform a DML.\n\nThanks for the patches. I think we should have the table name as\nregclass type for pg_get_max_parallel_hazard? See, pg_relation_size,\npg_table_size, pg_filenode_relation and so on.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 21:45:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, May 11, 2021 at 10:41 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attaching new version patches with the following changes:\n>\n> 0001\n> Add a new function pg_get_max_parallel_hazard('table_name') returns char('s', 'u', 'r')\n> which indicate whether it is safe, unsafe, or restricted to perform a DML.\n>\n\nCurrently the 1st patch actually also contains the changes for the\nParallel-SELECT-for-INSERT functionality. This is not obvious.\nSo I think this should be split out into a separate patch (i.e. the\nminor planner update and related planner comment changes,\nis_parallel_allowed_for_modify() function, max_parallel_hazard()\nupdate, XID changes).\nAlso, the regression tests' \"serial_schedule\" file has been removed\nsince you posted the v3-POC patches, so you need to remove updates for\nthat from your 3rd patch.\n\nHow about reorganisation of the patches like the following?\n\n0001: CREATE ALTER TABLE PARALLEL DML\n0002: parallel-SELECT-for-INSERT (planner changes,\nmax_parallel_hazard() update, XID changes)\n0003: pg_get_parallel_safety()\n0004: regression test updates\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 12 May 2021 17:01:43 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> \r\n> Currently the 1st patch actually also contains the changes for the\r\n> Parallel-SELECT-for-INSERT functionality. This is not obvious.\r\n> So I think this should be split out into a separate patch (i.e. the minor planner\r\n> update and related planner comment changes,\r\n> is_parallel_allowed_for_modify() function, max_parallel_hazard() update, XID\r\n> changes).\r\n> Also, the regression tests' \"serial_schedule\" file has been removed since you\r\n> posted the v3-POC patches, so you need to remove updates for that from your\r\n> 3rd patch.\r\n\r\nThanks for the comments, I have posted new version patches with this change. \r\n\r\n> How about reorganisation of the patches like the following?\r\n> 0001: CREATE ALTER TABLE PARALLEL DML\r\n> 0002: parallel-SELECT-for-INSERT (planner changes,\r\n> max_parallel_hazard() update, XID changes)\r\n> 0003: pg_get_parallel_safety()\r\n> 0004: regression test updates\r\n\r\nThanks, it looks good and I reorganized the latest patchset in this way.\r\n\r\nAttaching new version patches with the following change.\r\n\r\n0003\r\nChange functions arg type to regclass.\r\n\r\n0004\r\nremove updates for \"serial_schedule\".\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Fri, 14 May 2021 08:24:30 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "> > > > > So, users need to check count(*) for this to determine\r\n> > > > > parallel-safety? How about if we provide a wrapper function on\r\n> > > > > top of this function or a separate function that returns char to\r\n> > > > > indicate whether it is safe, unsafe, or restricted to perform a\r\n> > > > > DML operation on the table?\r\n> > > >\r\n> > > > Such wrapper function make sense.\r\n> > >\r\n> > > Thanks for the suggestion, and I agree.\r\n> > > I will add another wrapper function and post new version patches soon.\r\n> >\r\n> > Attaching new version patches with the following changes:\r\n> >\r\n> > 0001\r\n> > Add a new function pg_get_max_parallel_hazard('table_name') returns\r\n> > char('s', 'u', 'r') which indicate whether it is safe, unsafe, or restricted to\r\n> perform a DML.\r\n> \r\n> Thanks for the patches. I think we should have the table name as regclass type\r\n> for pg_get_max_parallel_hazard? See, pg_relation_size, pg_table_size,\r\n> pg_filenode_relation and so on.\r\n\r\nThanks for the comment.\r\nI have changed the type to regclass in the latest patchset.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Fri, 14 May 2021 08:24:43 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Fri, May 14, 2021 at 6:24 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Thanks for the comments, I have posted new version patches with this change.\n>\n> > How about reorganisation of the patches like the following?\n> > 0001: CREATE ALTER TABLE PARALLEL DML\n> > 0002: parallel-SELECT-for-INSERT (planner changes,\n> > max_parallel_hazard() update, XID changes)\n> > 0003: pg_get_parallel_safety()\n> > 0004: regression test updates\n>\n> Thanks, it looks good and I reorganized the latest patchset in this way.\n>\n> Attaching new version patches with the following change.\n>\n> 0003\n> Change functions arg type to regclass.\n>\n> 0004\n> remove updates for \"serial_schedule\".\n>\n\nI've got some comments for the V4 set of patches:\n\n(0001)\n\n(i) Patch comment needs a little updating (suggested change is below):\n\nEnable users to declare a table's parallel data-modification safety\n(SAFE/RESTRICTED/UNSAFE).\n\nAdd a table property that represents parallel safety of a table for\nDML statement execution.\nIt may be specified as follows:\n\nCREATE TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\nALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\n\nThis property is recorded in pg_class's relparallel column as 'u',\n'r', or 's', just like pg_proc's proparallel.\nThe default is UNSAFE.\n\nThe planner assumes that all of the table, its descendant partitions,\nand their ancillary objects have,\nat worst, the specified parallel safety. The user is responsible for\nits correctness.\n\n---\n\nNOTE: The following sentence was removed from the original V4 0001\npatch comment (since this version of the patch is not doing runtime\nparallel-safety checks on functions):.\n\nIf the parallel processes\nfind an object that is less safer than the assumed parallel safety during\nstatement execution, it throws an ERROR and abort the statement execution.\n\n\n(ii) Update message to say \"a foreign ...\":\n\nBEFORE:\n+ errmsg(\"cannot support parallel data modification on foreign or\ntemporary table\")));\n\nAFTER:\n+ errmsg(\"cannot support parallel data modification on a foreign or\ntemporary table\")));\n\n\n(iii) strVal() macro already casts to \"Value *\", so the cast can be\nremoved from the following:\n\n+ char *parallel = strVal((Value *) def);\n\n\n(0003)\n\n(i) Suggested updates to the patch comment:\n\nProvide a utility function \"pg_get_parallel_safety(regclass)\" that\nreturns records of\n(objid, classid, parallel_safety) for all parallel unsafe/restricted\ntable-related objects\nfrom which the table's parallel DML safety is determined. The user can\nuse this information\nduring development in order to accurately declare a table's parallel\nDML safety. or to\nidentify any problematic objects if a parallel DML fails or behaves\nunexpectedly.\n\nWhen the use of an index-related parallel unsafe/restricted function\nis detected, both the\nfunction oid and the index oid are returned.\n\nProvide a utility function \"pg_get_max_parallel_hazard(regclass)\" that\nreturns the worst\nparallel DML safety hazard that can be found in the given relation.\nUsers can use this\nfunction to do a quick check without caring about specific\nparallel-related objects.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 19 May 2021 21:55:24 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "From: Greg Nancarrow <gregn4422@gmail.com>\r\nSent: Wednesday, May 19, 2021 7:55 PM\r\n> \r\n> On Fri, May 14, 2021 at 6:24 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Thanks for the comments, I have posted new version patches with this\r\n> change.\r\n> >\r\n> > > How about reorganisation of the patches like the following?\r\n> > > 0001: CREATE ALTER TABLE PARALLEL DML\r\n> > > 0002: parallel-SELECT-for-INSERT (planner changes,\r\n> > > max_parallel_hazard() update, XID changes)\r\n> > > 0003: pg_get_parallel_safety()\r\n> > > 0004: regression test updates\r\n> >\r\n> > Thanks, it looks good and I reorganized the latest patchset in this way.\r\n> >\r\n> > Attaching new version patches with the following change.\r\n> >\r\n> > 0003\r\n> > Change functions arg type to regclass.\r\n> >\r\n> > 0004\r\n> > remove updates for \"serial_schedule\".\r\n> >\r\n> \r\n> I've got some comments for the V4 set of patches:\r\n> \r\n> (0001)\r\n> \r\n> (i) Patch comment needs a little updating (suggested change is below):\r\n> \r\n> Enable users to declare a table's parallel data-modification safety\r\n> (SAFE/RESTRICTED/UNSAFE).\r\n> \r\n> Add a table property that represents parallel safety of a table for\r\n> DML statement execution.\r\n> It may be specified as follows:\r\n> \r\n> CREATE TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\r\n> ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\r\n> \r\n> This property is recorded in pg_class's relparallel column as 'u',\r\n> 'r', or 's', just like pg_proc's proparallel.\r\n> The default is UNSAFE.\r\n> \r\n> The planner assumes that all of the table, its descendant partitions,\r\n> and their ancillary objects have,\r\n> at worst, the specified parallel safety. 
The user is responsible for\r\n> its correctness.\r\n> \r\n> ---\r\n> \r\n> NOTE: The following sentence was removed from the original V4 0001\r\n> patch comment (since this version of the patch is not doing runtime\r\n> parallel-safety checks on functions):.\r\n> \r\n> If the parallel processes\r\n> find an object that is less safer than the assumed parallel safety during\r\n> statement execution, it throws an ERROR and abort the statement execution.\r\n> \r\n> \r\n> (ii) Update message to say \"a foreign ...\":\r\n> \r\n> BEFORE:\r\n> + errmsg(\"cannot support parallel data modification on foreign or\r\n> temporary table\")));\r\n> \r\n> AFTER:\r\n> + errmsg(\"cannot support parallel data modification on a foreign or\r\n> temporary table\")));\r\n> \r\n> \r\n> (iii) strVal() macro already casts to \"Value *\", so the cast can be\r\n> removed from the following:\r\n> \r\n> + char *parallel = strVal((Value *) def);\r\n> \r\n> \r\n> (0003)\r\n> \r\n> (i) Suggested updates to the patch comment:\r\n> \r\n> Provide a utility function \"pg_get_parallel_safety(regclass)\" that\r\n> returns records of\r\n> (objid, classid, parallel_safety) for all parallel unsafe/restricted\r\n> table-related objects\r\n> from which the table's parallel DML safety is determined. The user can\r\n> use this information\r\n> during development in order to accurately declare a table's parallel\r\n> DML safety. 
or to\r\n> identify any problematic objects if a parallel DML fails or behaves\r\n> unexpectedly.\r\n> \r\n> When the use of an index-related parallel unsafe/restricted function\r\n> is detected, both the\r\n> function oid and the index oid are returned.\r\n> \r\n> Provide a utility function \"pg_get_max_parallel_hazard(regclass)\" that\r\n> returns the worst\r\n> parallel DML safety hazard that can be found in the given relation.\r\n> Users can use this\r\n> function to do a quick check without caring about specific\r\n> parallel-related objects.\r\n\r\nThanks for the comments and your descriptions looks good.\r\nAttaching v5 patchset with all these changes.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 24 May 2021 05:15:46 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, May 24, 2021 at 3:15 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> Thanks for the comments and your descriptions looks good.\n> Attaching v5 patchset with all these changes.\n>\n\nA few other minor things I noticed:\n\n(1) error message wording when declaring a table SAFE for parallel DML\n\nsrc/backend/commands/tablecmds.c\n\nSince data modification for the RELKIND_FOREIGN_TABLE and\nRELPERSISTENCE_TEMP types are allowed in the parallel-restricted case\n(i.e. leader may modify in parallel mode)\nI'm thinking it may be better to use wording like:\n\n \"cannot support foreign or temporary table data modification by\nparallel workers\"\n\ninstead of\n\n \"cannot support parallel data modification on a foreign or temporary table\"\n\nThere are TWO places where this error message is used.\n\n(What do you think?)\n\n(2) Minor formatting issue\n\nsrc/backend/optimizer/util/clauses.c\n\n static safety_object *make_safety_object(Oid objid, Oid classid,\nchar proparallel)\n\nshould be:\n\n static safety_object *\n make_safety_object(Oid objid, Oid classid, char proparallel)\n\n(3) Minor formatting issue\n\nsrc/backend/utils/cache/typcache.c\n\n\n List *GetDomainConstraints(Oid type_id)\n\nshould be:\n\n List *\n GetDomainConstraints(Oid type_id)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 28 May 2021 18:41:42 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "From: Greg Nancarrow <gregn4422@gmail.com>\r\nSent: Friday, May 28, 2021 4:42 PM\r\n> On Mon, May 24, 2021 at 3:15 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> >\r\n> > Thanks for the comments and your descriptions looks good.\r\n> > Attaching v5 patchset with all these changes.\r\n> >\r\n> \r\n> A few other minor things I noticed:\r\n> \r\n> (1) error message wording when declaring a table SAFE for parallel DML\r\n> \r\n> src/backend/commands/tablecmds.c\r\n> \r\n> Since data modification for the RELKIND_FOREIGN_TABLE and\r\n> RELPERSISTENCE_TEMP types are allowed in the parallel-restricted case (i.e.\r\n> leader may modify in parallel mode) I'm thinking it may be better to use\r\n> wording like:\r\n> \r\n> \"cannot support foreign or temporary table data modification by parallel\r\n> workers\"\r\n> \r\n> instead of\r\n> \r\n> \"cannot support parallel data modification on a foreign or temporary table\"\r\n> \r\n> There are TWO places where this error message is used.\r\n> \r\n> (What do you think?)\r\n\r\nI think your change looks good.\r\nI used your msg in the latest patchset.\r\n\r\n> (2) Minor formatting issue\r\n> \r\n> src/backend/optimizer/util/clauses.c\r\n> \r\n> static safety_object *make_safety_object(Oid objid, Oid classid, char\r\n> proparallel)\r\n> \r\n> should be:\r\n> \r\n> static safety_object *\r\n> make_safety_object(Oid objid, Oid classid, char proparallel)\r\n\r\nChanged.\r\n \r\n> (3) Minor formatting issue\r\n>\r\n> src/backend/utils/cache/typcache.c\r\n> \r\n> \r\n> List *GetDomainConstraints(Oid type_id)\r\n> \r\n> should be:\r\n> \r\n> List *\r\n> GetDomainConstraints(Oid type_id)\r\n\r\nChanged.\r\n\r\nAttaching v6 patchset.\r\nAnd I registered it in CF https://commitfest.postgresql.org/33/3143/,\r\ncomments are welcome.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 31 May 2021 05:34:09 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Mon, May 31, 2021 at 3:34 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> Attaching v6 patchset.\n> And I registered it in CF https://commitfest.postgresql.org/33/3143/,\n> comments are welcome.\n>\n\nThe latest patchset has some documentation updates. I'd like to\nsuggest a couple of documentation tweaks (this is mainly just minor\nword changes and some extra details):\n\n(1)\ndoc/src/sgml/ref/create_foreign_table.sgml\ndoc/src/sgml/ref/create_table.sgml\n\nPARALLEL DML UNSAFE indicates that the data in the table can't be\nmodified in parallel mode, and this forces a serial execution plan for\nDML statements operating on the table. This is the default. PARALLEL\nDML RESTRICTED indicates that the data in the table can be modified in\nparallel mode, but the modification is restricted to the parallel\ngroup leader. PARALLEL DML SAFE indicates that the data in the table\ncan be modified in parallel mode without restriction. Note that\nPostgreSQL currently does not support data modification by parallel\nworkers.\n\nTables should be labeled parallel dml unsafe/restricted if any\nparallel unsafe/restricted function could be executed when modifying\nthe data in the table (e.g., functions in triggers/index\nexpressions/constraints etc.).\n\nTo assist in correctly labeling the parallel DML safety level of a\ntable, PostgreSQL provides some utility functions that may be used\nduring application development. Refer to pg_get_parallel_safety() and\npg_get_max_parallel_hazard() for more information.\n\n\n(2) doc/src/sgml/func.sgml\n\n(i) pg_get_parallel_safety\nReturns a row containing enough information to uniquely identify the\nparallel unsafe/restricted table-related objects from which the\ntable's parallel DML safety is determined. The user can use this\ninformation during development in order to accurately declare a\ntable's parallel DML safety, or to identify any problematic objects if\nparallel DML fails or behaves unexpectedly. 
Note that when the use of\nan object-related parallel unsafe/restricted function is detected,\nboth the function OID and the object OID are returned. classid is the\nOID of the system catalog containing the object; objid is the OID of\nthe object itself.\n\n(ii) pg_get_max_parallel_hazard\nReturns the worst parallel DML safety hazard that can be found in the\ngiven relation:\n s safe\n r restricted\n u unsafe\nUsers can use this function to do a quick check without caring about\nspecific parallel-related objects.\n\n ---\n\nAlso, shouldn't support for \"Parallel\" be added for table output in\nPSQL? (e.g. \\dt+)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 1 Jun 2021 19:32:24 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "From: Greg Nancarrow <gregn4422@gmail.com>\r\n> On Mon, May 31, 2021 at 3:34 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> >\r\n> > Attaching v6 patchset.\r\n> > And I registered it in CF https://commitfest.postgresql.org/33/3143/,\r\n> > comments are welcome.\r\n> >\r\n> \r\n> The latest patchset has some documentation updates. I'd like to suggest a\r\n> couple of documentation tweaks (this is mainly just minor word changes and\r\n> some extra details):\r\n> \r\n> (1)\r\n> doc/src/sgml/ref/create_foreign_table.sgml\r\n> doc/src/sgml/ref/create_table.sgml\r\n> \r\n> PARALLEL DML UNSAFE indicates that the data in the table can't be modified in\r\n> parallel mode, and this forces a serial execution plan for DML statements\r\n> operating on the table. This is the default. PARALLEL DML RESTRICTED\r\n> indicates that the data in the table can be modified in parallel mode, but the\r\n> modification is restricted to the parallel group leader. PARALLEL DML SAFE\r\n> indicates that the data in the table can be modified in parallel mode without\r\n> restriction. Note that PostgreSQL currently does not support data\r\n> modification by parallel workers.\r\n> \r\n> Tables should be labeled parallel dml unsafe/restricted if any parallel\r\n> unsafe/restricted function could be executed when modifying the data in the\r\n> table (e.g., functions in triggers/index expressions/constraints etc.).\r\n> \r\n> To assist in correctly labeling the parallel DML safety level of a table,\r\n> PostgreSQL provides some utility functions that may be used during\r\n> application development. 
Refer to pg_get_parallel_safety() and\r\n> pg_get_max_parallel_hazard() for more information.\r\n> \r\n> \r\n> (2) doc/src/sgml/func.sgml\r\n> \r\n> (i) pg_get_parallel_safety\r\n> Returns a row containing enough information to uniquely identify the parallel\r\n> unsafe/restricted table-related objects from which the table's parallel DML\r\n> safety is determined. The user can use this information during development in\r\n> order to accurately declare a table's parallel DML safety, or to identify any\r\n> problematic objects if parallel DML fails or behaves unexpectedly. Note that\r\n> when the use of an object-related parallel unsafe/restricted function is\r\n> detected, both the function OID and the object OID are returned. classid is the\r\n> OID of the system catalog containing the object; objid is the OID of the object\r\n> itself.\r\n> \r\n> (ii) pg_get_max_parallel_hazard\r\n> Returns the worst parallel DML safety hazard that can be found in the given\r\n> relation:\r\n> s safe\r\n> r restricted\r\n> u unsafe\r\n> Users can use this function to do a quick check without caring about specific\r\n> parallel-related objects.\r\n\r\nThanks for looking into the doc change, I think your change looks better and\r\nhave merged it in the attached patch.\r\n\r\n> Also, shouldn't support for \"Parallel\" be added for table output in PSQL? (e.g.\r\n> \\dt+)\r\n\r\nYeah, I think we should add it and I added it in the attached 0001 patch.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Wed, 2 Jun 2021 07:30:32 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> Thanks for looking into the doc change, I think your change looks better and\r\n> have merged it in the attached patch.\r\n> \r\n> > Also, shouldn't support for \"Parallel\" be added for table output in PSQL? (e.g.\r\n> > \\dt+)\r\n> \r\n> Yeah, I think we should add it and I added it in the attached 0001 patch.\r\n\r\nOops, forgot to update the regression test in contrib/.\r\nAttaching new version patchset with this fix.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Wed, 2 Jun 2021 09:33:26 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "Hi,\r\n\r\nWhen testing the patch, I found some issues in the 0003,0004 patch.\r\nAttaching new version patchset which fix these issues.\r\n\r\n0003\r\n* don't check parallel safety of partition's default column expression.\r\n* rename some function/variable names to be consistent with existing code.\r\n* remove some unused function parameters.\r\n* fix a max_hazard overwrite issue.\r\n* add some code comments and adjust some code forms.\r\n\r\n0004\r\n* Remove some unrelated comments in the regression test.\r\n* add the 'begin;', 'rollback;' in insert_parallel.sql.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Tue, 8 Jun 2021 09:12:31 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> Hi,\r\n> \r\n> When testing the patch, I found some issues in the 0003,0004 patch.\r\n> Attaching new version patchset which fix these issues.\r\n> \r\n> 0003\r\n> * don't check parallel safety of partition's default column expression.\r\n> * rename some function/variable names to be consistent with existing code.\r\n> * remove some unused function parameters.\r\n> * fix a max_hazard overwrite issue.\r\n> * add some code comments and adjust some code forms.\r\n> \r\n> 0004\r\n> * Remove some unrelated comments in the regression test.\r\n> * add the 'begin;', 'rollback;' in insert_parallel.sql.\r\n\r\nThrough further review and thanks for greg-san's suggestions,\r\nI attached a new version patchset with some minor change in 0001,0003 and 0004.\r\n\r\n0001.\r\n* fix a typo in variable name.\r\n* add a TODO in patch comment about updating the version number when branch PG15.\r\n\r\n0003\r\n* fix a 'git apply white space' warning.\r\n* Remove some unnecessary if condition.\r\n* add some code comments above the safety check function.\r\n* Fix some typo.\r\n\r\n0004\r\n* add a testcase to test ALTER PARALLEL DML UNSAFE/RESTRICTED.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Thu, 10 Jun 2021 01:26:38 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 11:26 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Through further review and thanks for greg-san's suggestions,\n> I attached a new version patchset with some minor change in 0001,0003 and 0004.\n>\n> 0001.\n> * fix a typo in variable name.\n> * add a TODO in patch comment about updating the version number when branch PG15.\n>\n> 0003\n> * fix a 'git apply white space' warning.\n> * Remove some unnecessary if condition.\n> * add some code comments above the safety check function.\n> * Fix some typo.\n>\n> 0004\n> * add a testcase to test ALTER PARALLEL DML UNSAFE/RESTRICTED.\n>\n\nThanks, those updates addressed most of what I was going to comment\non for the v9 patches.\n\nSome additional comments on the v10 patches:\n\n(1) I noticed some functions in the 0003 patch have no function header:\n\n make_safety_object\n parallel_hazard_walker\n target_rel_all_parallel_hazard_recurse\n\n(2) I found the \"recurse_partition\" parameter of the\ntarget_rel_all_parallel_hazard_recurse() function a bit confusing,\nbecause the function recursively checks partitions without looking at\nthat flag. 
How about naming it \"is_partition\"?\n\n(3) The names of the utility functions don't convey that they operate on tables.\n\nHow about:\n\n pg_get_parallel_safety() -> pg_get_table_parallel_safety()\n pg_get_max_parallel_hazard() -> pg_get_table_max_parallel_hazard()\n\nor pg_get_rel_xxxxx()?\n\nWhat do you think?\n\n(4) I think that some of the tests need parallel dml settings to match\ntheir expected output:\n\n(i)\n+-- Test INSERT with underlying query - and RETURNING (no projection)\n+-- (should create a parallel plan; parallel SELECT)\n\n-> but creates a serial plan (so needs to set parallel dml safe, so a\nparallel plan is created)\n\n(ii)\n+-- Parallel INSERT with unsafe column default, should not use a parallel plan\n+--\n+alter table testdef parallel dml safe;\n\n-> should set \"unsafe\" not \"safe\"\n\n(iii)\n+-- Parallel INSERT with restricted column default, should use parallel SELECT\n+--\n+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;\n\n-> should use \"alter table testdef parallel dml restricted;\" before the explain\n\n(iv)\n+--\n+-- Parallel INSERT with restricted and unsafe column defaults, should\nnot use a parallel plan\n+--\n+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;\n\n-> should use \"alter table testdef parallel dml unsafe;\" before the explain\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 10 Jun 2021 15:39:20 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Thursday, June 10, 2021 1:39 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Thu, Jun 10, 2021 at 11:26 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Through further review and thanks for greg-san's suggestions, I\r\n> > attached a new version patchset with some minor change in 0001,0003 and\r\n> 0004.\r\n> >\r\n> > 0001.\r\n> > * fix a typo in variable name.\r\n> > * add a TODO in patch comment about updating the version number when\r\n> branch PG15.\r\n> >\r\n> > 0003\r\n> > * fix a 'git apply white space' warning.\r\n> > * Remove some unnecessary if condition.\r\n> > * add some code comments above the safety check function.\r\n> > * Fix some typo.\r\n> >\r\n> > 0004\r\n> > * add a testcase to test ALTER PARALLEL DML UNSAFE/RESTRICTED.\r\n> >\r\n> \r\n> Thanks, those updates addressed most of what I was going to comment on\r\n> for the v9 patches.\r\n> \r\n> Some additional comments on the v10 patches:\r\n> \r\n> (1) I noticed some functions in the 0003 patch have no function header:\r\n> \r\n> make_safety_object\r\n> parallel_hazard_walker\r\n> target_rel_all_parallel_hazard_recurse\r\n\r\nThanks, added.\r\n\r\n> (2) I found the \"recurse_partition\" parameter of the\r\n> target_rel_all_parallel_hazard_recurse() function a bit confusing, because the\r\n> function recursively checks partitions without looking at that flag. How about\r\n> naming it \"is_partition\"?\r\n\r\nYeah, it looks better. 
Changed.\r\n\r\n> (3) The names of the utility functions don't convey that they operate on tables.\r\n> \r\n> How about:\r\n> \r\n> pg_get_parallel_safety() -> pg_get_table_parallel_safety()\r\n> pg_get_max_parallel_hazard() -> pg_get_table_max_parallel_hazard()\r\n> \r\n> or pg_get_rel_xxxxx()?\r\n> \r\n> What do you think?\r\n\r\nI changed it like the following:\r\npg_get_parallel_safety -> pg_get_table_parallel_dml_safety\r\npg_get_max_parallel_hazard -> pg_get_table_max_parallel_dml_hazard\r\n\r\n> (4) I think that some of the tests need parallel dml settings to match their\r\n> expected output:\r\n> \r\n> (i)\r\n> +-- Test INSERT with underlying query - and RETURNING (no projection)\r\n> +-- (should create a parallel plan; parallel SELECT)\r\n> \r\n> -> but creates a serial plan (so needs to set parallel dml safe, so a\r\n> parallel plan is created)\r\n\r\nChanged.\r\n\r\n> (ii)\r\n> +-- Parallel INSERT with unsafe column default, should not use a\r\n> +parallel plan\r\n> +--\r\n> +alter table testdef parallel dml safe;\r\n> \r\n> -> should set \"unsafe\" not \"safe\"\r\n\r\nI thought the testcase about table 'testdef' is to test if the planner is able to\r\ncheck whether it has parallel unsafe or restricted column default expression,\r\nbecause column default expression will be merged into select part in planner.\r\nSo, It seems we don't need to change the table's parallel safety for these cases ?\r\n\r\n> (iii)\r\n> +-- Parallel INSERT with restricted column default, should use parallel\r\n> +SELECT\r\n> +--\r\n> +explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from\r\n> +test_data;\r\n> \r\n> -> should use \"alter table testdef parallel dml restricted;\" before the\r\n> -> explain\r\n> \r\n> (iv)\r\n> +--\r\n> +-- Parallel INSERT with restricted and unsafe column defaults, should\r\n> not use a parallel plan\r\n> +--\r\n> +explain (costs off) insert into testdef(a,d) select a,a*8 from\r\n> +test_data;\r\n> \r\n> -> should use \"alter table 
testdef parallel dml unsafe;\" before the\r\n> -> explain\r\n\r\nI addressed most of the comments and rebased the patch.\r\nBesides, I changed the following things:\r\n* I removed the safety check for index-am function as discussed[1].\r\n* change version 140000 to 150000\r\n\r\nAttach new version patchset for further review.\r\n\r\n[1] https://www.postgresql.org/message-id/164474.1623652853%40sss.pgh.pa.us\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Tue, 6 Jul 2021 03:42:28 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tuesday, July 6, 2021 11:42 AM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> I addressed most of the comments and rebased the patch.\r\n> Besides, I changed the following things:\r\n> * I removed the safety check for index-am function as discussed[1].\r\n> * change version 140000 to 150000\r\n> \r\n> Attach new version patchset for further review.\r\n> \r\n> [1]\r\n> https://www.postgresql.org/message-id/164474.1623652853%40sss.pgh.pa.us\r\n\r\nAttach a rebased patchset which fix some compile warnings\r\nand errors due to recent commit 2ed532.\r\n\r\nBest regards,\r\nHou zhijie",
"msg_date": "Mon, 12 Jul 2021 08:00:52 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Monday, July 12, 2021 4:01 PM <houzj.fnst@fujitsu.com> wrote:\r\n> Attach a rebased patchset which fix some compile warnings and errors due to\r\n> recent commit 2ed532.\r\n\r\nAttach rebased patches.\r\n\r\nBest regards\r\nHouzj",
"msg_date": "Tue, 20 Jul 2021 01:47:37 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 11:47 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach rebased patches.\n>\n\nJust letting you know that CRLFs are in the patch comments for the\n0001 and 0003 patches.\n(It doesn't affect patch application)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 20 Jul 2021 12:40:48 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 11:47 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> Attach rebased patches.\n>\n\nThere's a failure in the \"triggers\" tests and the cfbot is not happy.\nAttaching an updated set of patches with a minor update to the\nexpected test results to fix this.\nAlso removed some CRLFs in some of the patch comments. No other changes.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Fri, 23 Jul 2021 15:00:07 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel INSERT SELECT take 2"
},
{
"msg_contents": "Hi,\r\n\r\nBased on the discussion in another thread[1], we plan to change the design like\r\nthe following:\r\n\r\nallow users to specify a parallel-safety option for both partitioned and\r\nnon-partitioned relations but for non-partitioned relations if users didn't\r\nspecify, it would be computed automatically? If the user has specified\r\nparallel-safety option for non-partitioned relation then we would consider that\r\ninstead of computing the value by ourselves.\r\n\r\nIn this approach, it will be more convenient for users to use and get the\r\nbenefit of parallel select for insert.\r\n\r\nSince most of the technical discussion happened in another thread, I\r\nposted the new version patch including the new design to that thread[2].\r\nComments are also welcome in that thread.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BMQnm6RkqooHA7R-y7riRa84qsh5j3FZDScw71m_n4OA%40mail.gmail.com\r\n\r\n[2] https://www.postgresql.org/message-id/OS0PR01MB5716DB1E3F723F86314D080094F09%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Thu, 5 Aug 2021 09:41:10 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Parallel INSERT SELECT take 2"
}
]
[
{
"msg_contents": "Hi all,\n\nStarting a new thread as the one that has introduced compute_query_id\nis already long enough.\n\nFujii-san has reported on Twitter that enabling the computation of\nquery IDs does not work properly with log_statement as the query ID is\ncalculated at parse analyze time and the query is logged before that.\nAs far as I can see, that's really a problem as any queries logged are\ncombined with a query ID of 0, and log parsers would not really be\nable to use that, even if the information provided by for example\nlog_duration gives the computed query ID prevent in pg_stat_activity.\n\nWhile making the feature run on some test server, I have noticed that\n%Q would log some garbage query ID for autovacuum workers that's kept\naround. That looks wrong.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 15:12:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Problems around compute_query_id"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 03:12:40PM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> Starting a new thread as the one that has introduced compute_query_id\n> is already long enough.\n> \n> Fujii-san has reported on Twitter that enabling the computation of\n> query IDs does not work properly with log_statement as the query ID is\n> calculated at parse analyze time and the query is logged before that.\n> As far as I can see, that's really a problem as any queries logged are\n> combined with a query ID of 0, and log parsers would not really be\n> able to use that, even if the information provided by for example\n> log_duration gives the computed query ID prevent in pg_stat_activity.\n\nI don't see any way around that. The goal of log_statements is to log all\nsyntactically valid queries, including otherwise invalid queries. For\ninstance, it would log\n\nSELECT notacolumn;\n\nand there's no way to compute a queryid in that case. I think that\nlog_statements already causes some issues with log parsers. At least pgbadger\nrecommends to avoid using that:\n\n\"Do not enable log_statement as its log format will not be parsed by pgBadger.\"\n\nI think we should simply document that %Q is not compatible with\nlog_statements.\n\n> While making the feature run on some test server, I have noticed that\n> %Q would log some garbage query ID for autovacuum workers that's kept\n> around. That looks wrong.\n\nI've not been able to reproduce it, do you have some hint on how to do it?\n\nMaybe setting a zero queryid at the beginning of AutoVacWorkerMain() could fix\nthe problem?\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:56:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
},
{
"msg_contents": "Hi,\n\nOn Mon, Apr 12, 2021 at 02:56:59PM +0800, Julien Rouhaud wrote:\n> On Mon, Apr 12, 2021 at 03:12:40PM +0900, Michael Paquier wrote:\n> > Fujii-san has reported on Twitter that enabling the computation of\n> > query IDs does not work properly with log_statement as the query ID is\n> > calculated at parse analyze time and the query is logged before that.\n> > As far as I can see, that's really a problem as any queries logged are\n> > combined with a query ID of 0, and log parsers would not really be\n> > able to use that, even if the information provided by for example\n> > log_duration gives the computed query ID prevent in pg_stat_activity.\n> \n> I don't see any way around that. The goal of log_statements is to log all\n> syntactically valid queries, including otherwise invalid queries. For\n> instance, it would log\n> \n> SELECT notacolumn;\n> \n> and there's no way to compute a queryid in that case. I think that\n> log_statements already causes some issues with log parsers. At least pgbadger\n> recommends to avoid using that:\n> \n> \"Do not enable log_statement as its log format will not be parsed by pgBadger.\"\n> \n> I think we should simply document that %Q is not compatible with\n> log_statements.\n\nWhat about log_statement_sample_rate ? Does compute_query_id have the\nsame problem with that?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB M�nchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 M�nchengladbach\nGesch�ftsf�hrung: Dr. Michael Meskes, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Mon, 12 Apr 2021 09:20:07 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 09:20:07AM +0200, Michael Banck wrote:\n> \n> What about log_statement_sample_rate ? Does compute_query_id have the\n> same problem with that?\n\nNo, log_statement_sample_rate samples log_min_duration_statements, not\nlog_statements so it works as expected.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 15:26:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 03:26:33PM +0800, Julien Rouhaud wrote:\n> On Mon, Apr 12, 2021 at 09:20:07AM +0200, Michael Banck wrote:\n> > \n> > What about log_statement_sample_rate ? Does compute_query_id have the\n> > same problem with that?\n> \n> No, log_statement_sample_rate samples log_min_duration_statements, not\n> log_statements so it works as expected.\n\nWhile on that topic, it's probably worth mentioning that log_duration is now\nway more useful if you have the queryid in you log_line_prefix. It avoids to\nlog the full query text while still being able to know what was the underlying\nnormalized query by dumping the content of pg_stat_statements.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:43:24 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 02:56:59PM +0800, Julien Rouhaud wrote:\n> I think we should simply document that %Q is not compatible with\n> log_statements.\n\nHearing no objection I documented that limitation.\n\n> \n> > While making the feature run on some test server, I have noticed that\n> > %Q would log some garbage query ID for autovacuum workers that's kept\n> > around. That looks wrong.\n> \n> I've not been able to reproduce it, do you have some hint on how to do it?\n> \n> Maybe setting a zero queryid at the beginning of AutoVacWorkerMain() could fix\n> the problem?\n\nIt turns out that the problem was simply that some process can inherit a\nPgBackendStatus for which a previous backend reported a queryid. For processes\nlike autovacuum process, they will never report a new identifier so they\nreported the previous one. Resetting the field like the other ones in\npgstat_bestart() fixes the problem for autovacuum and any similar process.",
"msg_date": "Thu, 15 Apr 2021 15:43:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 03:43:59PM +0800, Julien Rouhaud wrote:\n> On Mon, Apr 12, 2021 at 02:56:59PM +0800, Julien Rouhaud wrote:\n> > I think we should simply document that %Q is not compatible with\n> > log_statements.\n> \n> Hearing no objection I documented that limitation.\n> \n> > \n> > > While making the feature run on some test server, I have noticed that\n> > > %Q would log some garbage query ID for autovacuum workers that's kept\n> > > around. That looks wrong.\n> > \n> > I've not been able to reproduce it, do you have some hint on how to do it?\n> > \n> > Maybe setting a zero queryid at the beginning of AutoVacWorkerMain() could fix\n> > the problem?\n> \n> It turns out that the problem was simply that some process can inherit a\n> PgBackendStatus for which a previous backend reported a queryid. For processes\n> like autovacuum process, they will never report a new identifier so they\n> reported the previous one. Resetting the field like the other ones in\n> pgstat_bestart() fixes the problem for autovacuum and any similar process.\n\nI slightly adjusted the patch and applied it. Thanks. I think this\ncloses all the open issues around query_id. :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Tue, 20 Apr 2021 12:59:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 12:59:10PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 15, 2021 at 03:43:59PM +0800, Julien Rouhaud wrote:\n> > On Mon, Apr 12, 2021 at 02:56:59PM +0800, Julien Rouhaud wrote:\n> > > I think we should simply document that %Q is not compatible with\n> > > log_statements.\n> > \n> > Hearing no objection I documented that limitation.\n> > \n> > > \n> > > > While making the feature run on some test server, I have noticed that\n> > > > %Q would log some garbage query ID for autovacuum workers that's kept\n> > > > around. That looks wrong.\n> > > \n> > > I've not been able to reproduce it, do you have some hint on how to do it?\n> > > \n> > > Maybe setting a zero queryid at the beginning of AutoVacWorkerMain() could fix\n> > > the problem?\n> > \n> > It turns out that the problem was simply that some process can inherit a\n> > PgBackendStatus for which a previous backend reported a queryid. For processes\n> > like autovacuum process, they will never report a new identifier so they\n> > reported the previous one. Resetting the field like the other ones in\n> > pgstat_bestart() fixes the problem for autovacuum and any similar process.\n> \n> I slightly adjusted the patch and applied it. Thanks. I think this\n> closes all the open issues around query_id. :-)\n\nThanks a lot Bruce! There was also [1], but Michael already committed the\nproposed fix, so I also think that all open issues for query_id are not taken\ncare of!\n\n[1]: https://postgr.es/m/CAJcOf-fXyb2QiDbwftD813UF70w-+BsK-03bFp1GrijXU9GQYQ@mail.gmail.com\n\n\n\n",
"msg_date": "Thu, 22 Apr 2021 16:37:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems around compute_query_id"
}
] |
[
{
"msg_contents": "Hi Postgres Community,\n\nRegarding anti wraparound vacuums (to freeze tuples), I see it has to scan\nall the pages which are not frozen-all (looking at visibility map). That\nmeans even if we want to freeze less transactions only (For ex - by\nincreasing parameter vacuum_freeze_min_age to 1B), still it will scan all\nthe pages in the visibility map and a time taking process.\n\nCan there be any improvement on this process so VACUUM knows the\ntuple/pages of those transactions which need to freeze up.\n\nBenefit of such an improvement is that if we are reaching transaction id\nclose to 2B (and downtime), that time we can quickly recover the database\nwith vacuuming freeze only a few millions rows with quick lookup rather\nthan going all the pages from visibility map.\n\nFor Ex - A Binary Tree structure where it gets all the rows corresponding\nto a table including transaction ids. So whenever we say free all tuples\nhaving transaction id greater than x and less than y. Yes that makes extra\noverhead on data load and lots of other things to consider.\n\n\nThanks,\nVirender\n\nHi Postgres Community,Regarding anti wraparound vacuums (to freeze tuples), I see it has to scan all the pages which are not frozen-all (looking at visibility map). That means even if we want to freeze less transactions only (For ex - by increasing parameter vacuum_freeze_min_age to 1B), still it will scan all the pages in the visibility map and a time taking process.Can there be any improvement on this process so VACUUM knows the tuple/pages of those transactions which need to freeze up.Benefit of such an improvement is that if we are reaching transaction id close to 2B (and downtime), that time we can quickly recover the database with vacuuming freeze only a few millions rows with quick lookup rather than going all the pages from visibility map.For Ex - A Binary Tree structure where it gets all the rows corresponding to a table including transaction ids. 
So whenever we say free all tuples having transaction id greater than x and less than y. Yes that makes extra overhead on data load and lots of other things to consider.Thanks,Virender",
"msg_date": "Mon, 12 Apr 2021 13:49:04 +0530",
"msg_from": "Virender Singla <virender.cse@gmail.com>",
"msg_from_op": true,
"msg_subject": "vacuum freeze - possible improvements"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 5:38 PM Virender Singla <virender.cse@gmail.com> wrote:\n>\n> Hi Postgres Community,\n>\n> Regarding anti wraparound vacuums (to freeze tuples), I see it has to scan all the pages which are not frozen-all (looking at visibility map). That means even if we want to freeze less transactions only (For ex - by increasing parameter vacuum_freeze_min_age to 1B), still it will scan all the pages in the visibility map and a time taking process.\n\n If vacuum_freeze_min_age is 1 billion, autovacuum_freeze_max_age is 2\nbillion (vacuum_freeze_min_age is limited to the half of\nautovacuum_freeze_max_age). So vacuum freeze will still have to\nprocess tuples that are inserted/modified during consuming 1 billion\ntransactions. It seems to me that it’s not fewer transactions. What is\nthe use case where users want to freeze fewer transactions, meaning\ninvoking anti-wraparound frequently?\n\n>\n> Can there be any improvement on this process so VACUUM knows the tuple/pages of those transactions which need to freeze up.\n>\n> Benefit of such an improvement is that if we are reaching transaction id close to 2B (and downtime), that time we can quickly recover the database with vacuuming freeze only a few millions rows with quick lookup rather than going all the pages from visibility map.\n\nApart from this idea, in terms of speeding up vacuum,\nvacuum_failsafe_age parameter, introduced to PG14[1], would also be\nhelpful. When the failsafe is triggered, cost-based delay is no longer\nbe applied, and index vacuuming is bypassed in order to finish vacuum\nwork and advance relfrozenxid as quickly as possible.\n\nRegards\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1e55e7d1755cefbb44982fbacc7da461fa8684e6\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 13 Apr 2021 11:22:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuum freeze - possible improvements"
},
{
"msg_contents": "Thanks Masahiko for the response.\n\n\"What is\nthe use case where users want to freeze fewer transactions, meaning\ninvoking anti-wraparound frequently?\"\n\nMy overall focus here is anti wraparound vacuum on huge tables in emergency\nsituations (where we reached very close to 2B transactions or already in\noutage window). In this situation we want to recover ASAP instead of having\nmany hours of outage.The Purpose of increasing \"vacuum_freeze_min_age\" to\nhigh value is that anti wraparound vacuum will have to do less work because\nwe are asking less transactions/tuples to freeze (Of Course subsequent\nvacuum has to do the remaining work).\n\n\"So the vacuum freeze will still have to\nprocess tuples that are inserted/modified during consuming 1 billion\ntransactions. It seems to me that it’s not fewer transactions.\"\n\nYes another thing here is anti wraparound vacuum also cleans dead tuples\nbut i am not sure what we can do to avoid that.\nThere can be vacuum to only freeze the tulpes?\n\nThanks for sharing PG14 improvements, those are nice to have. But still the\nanti wraparound vacuum will have to scan all the pages (from visibility\nmap) even if we are freezing fewer transactions because currently there is\nno way to know what block/tuple contains which transaction id. 
If there is\na way then it would be easier to directly freeze those tuples quickly and\nadvance the relfrozenxid for the table.\n\n\nOn Tue, Apr 13, 2021 at 7:52 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Apr 12, 2021 at 5:38 PM Virender Singla <virender.cse@gmail.com>\n> wrote:\n> >\n> > Hi Postgres Community,\n> >\n> > Regarding anti wraparound vacuums (to freeze tuples), I see it has to\n> scan all the pages which are not frozen-all (looking at visibility map).\n> That means even if we want to freeze less transactions only (For ex - by\n> increasing parameter vacuum_freeze_min_age to 1B), still it will scan all\n> the pages in the visibility map and a time taking process.\n>\n> If vacuum_freeze_min_age is 1 billion, autovacuum_freeze_max_age is 2\n> billion (vacuum_freeze_min_age is limited to the half of\n> autovacuum_freeze_max_age). So vacuum freeze will still have to\n> process tuples that are inserted/modified during consuming 1 billion\n> transactions. It seems to me that it’s not fewer transactions. What is\n> the use case where users want to freeze fewer transactions, meaning\n> invoking anti-wraparound frequently?\n>\n> >\n> > Can there be any improvement on this process so VACUUM knows the\n> tuple/pages of those transactions which need to freeze up.\n> >\n> > Benefit of such an improvement is that if we are reaching transaction id\n> close to 2B (and downtime), that time we can quickly recover the database\n> with vacuuming freeze only a few millions rows with quick lookup rather\n> than going all the pages from visibility map.\n>\n> Apart from this idea, in terms of speeding up vacuum,\n> vacuum_failsafe_age parameter, introduced to PG14[1], would also be\n> helpful. 
When the failsafe is triggered, cost-based delay is no longer\n> be applied, and index vacuuming is bypassed in order to finish vacuum\n> work and advance relfrozenxid as quickly as possible.\n>\n> Regards\n>\n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1e55e7d1755cefbb44982fbacc7da461fa8684e6\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\nThanks Masahiko for the response.\"What is\nthe use case where users want to freeze fewer transactions, meaning\ninvoking anti-wraparound frequently?\"My overall focus here is anti wraparound vacuum on huge tables in emergency situations (where we reached very close to 2B transactions or already in outage window). In this situation we want to recover ASAP instead of having many hours of outage.The Purpose of increasing \"vacuum_freeze_min_age\" to high value is that anti wraparound vacuum will have to do less work because we are asking less transactions/tuples to freeze (Of Course subsequent vacuum has to do the remaining work). \"So the vacuum freeze will still have to\nprocess tuples that are inserted/modified during consuming 1 billion\ntransactions. It seems to me that it’s not fewer transactions.\"Yes another thing here is anti wraparound vacuum also cleans dead tuples but i am not sure what we can do to avoid that. There can be vacuum to only freeze the tulpes?Thanks for sharing PG14 improvements, those are nice to have. But still the anti wraparound vacuum will have to scan all the pages (from visibility map) even if we are freezing fewer transactions because currently there is no way to know what block/tuple contains which transaction id. 
If there is a way then it would be easier to directly freeze those tuples quickly and advance the relfrozenxid for the table.On Tue, Apr 13, 2021 at 7:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:On Mon, Apr 12, 2021 at 5:38 PM Virender Singla <virender.cse@gmail.com> wrote:\n>\n> Hi Postgres Community,\n>\n> Regarding anti wraparound vacuums (to freeze tuples), I see it has to scan all the pages which are not frozen-all (looking at visibility map). That means even if we want to freeze less transactions only (For ex - by increasing parameter vacuum_freeze_min_age to 1B), still it will scan all the pages in the visibility map and a time taking process.\n\n If vacuum_freeze_min_age is 1 billion, autovacuum_freeze_max_age is 2\nbillion (vacuum_freeze_min_age is limited to the half of\nautovacuum_freeze_max_age). So vacuum freeze will still have to\nprocess tuples that are inserted/modified during consuming 1 billion\ntransactions. It seems to me that it’s not fewer transactions. What is\nthe use case where users want to freeze fewer transactions, meaning\ninvoking anti-wraparound frequently?\n\n>\n> Can there be any improvement on this process so VACUUM knows the tuple/pages of those transactions which need to freeze up.\n>\n> Benefit of such an improvement is that if we are reaching transaction id close to 2B (and downtime), that time we can quickly recover the database with vacuuming freeze only a few millions rows with quick lookup rather than going all the pages from visibility map.\n\nApart from this idea, in terms of speeding up vacuum,\nvacuum_failsafe_age parameter, introduced to PG14[1], would also be\nhelpful. 
When the failsafe is triggered, cost-based delay is no longer\nbe applied, and index vacuuming is bypassed in order to finish vacuum\nwork and advance relfrozenxid as quickly as possible.\n\nRegards\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1e55e7d1755cefbb44982fbacc7da461fa8684e6\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 13 Apr 2021 10:21:03 +0530",
"msg_from": "Virender Singla <virender.cse@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuum freeze - possible improvements"
},
{
"msg_contents": "On Tue, 13 Apr 2021 at 19:48, Virender Singla <virender.cse@gmail.com> wrote:\n> Yes another thing here is anti wraparound vacuum also cleans dead tuples but i am not sure what we can do to avoid that.\n> There can be vacuum to only freeze the tulpes?\n\nYou might want to have a look at [1], which was just pushed for PG14.\n\nIn particular:\n\n> When the failsafe triggers, VACUUM takes extraordinary measures to\n> finish as quickly as possible so that relfrozenxid and/or relminmxid can\n> be advanced. VACUUM will stop applying any cost-based delay that may be\n> in effect. VACUUM will also bypass any further index vacuuming and heap\n> vacuuming -- it only completes whatever remaining pruning and freezing\n> is required. Bypassing index/heap vacuuming is enabled by commit\n> 8523492d, which made it possible to dynamically trigger the mechanism\n> already used within VACUUM when it is run with INDEX_CLEANUP off.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1e55e7d1755cefbb44982fbacc7da461fa8684e6\n\n\n",
"msg_date": "Tue, 13 Apr 2021 23:05:26 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuum freeze - possible improvements"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 1:51 PM Virender Singla <virender.cse@gmail.com> wrote:\n>\n> Thanks Masahiko for the response.\n>\n> \"What is\n> the use case where users want to freeze fewer transactions, meaning\n> invoking anti-wraparound frequently?\"\n>\n> My overall focus here is anti wraparound vacuum on huge tables in emergency situations (where we reached very close to 2B transactions or already in outage window). In this situation we want to recover ASAP instead of having many hours of outage.The Purpose of increasing \"vacuum_freeze_min_age\" to high value is that anti wraparound vacuum will have to do less work because we are asking less transactions/tuples to freeze (Of Course subsequent vacuum has to do the remaining work).\n\nI think I understood your proposal. For example, if we insert 500GB\ntuples during the first 1 billion transactions and then insert more\n500GB tuples into another 500GB blocks during the next 1 billion\ntransactions, vacuum freeze scans 1TB whereas we scans only 500GB that\nare modified by the first insertions if we’re able to freeze directly\ntuples that are older than the cut-off. Is that right?\n\n>\n> \"So the vacuum freeze will still have to\n> process tuples that are inserted/modified during consuming 1 billion\n> transactions. It seems to me that it’s not fewer transactions.\"\n>\n> Yes another thing here is anti wraparound vacuum also cleans dead tuples but i am not sure what we can do to avoid that.\n> There can be vacuum to only freeze the tulpes?\n\nI think it's a good idea to skip all work except for freezing tuples\nin emergency cases. Thanks to vacuum_failsafe_age we can avoid index\nvacuuming, index cleanup, and heap vacuuming.\n\n>\n> Thanks for sharing PG14 improvements, those are nice to have. 
But still the anti wraparound vacuum will have to scan all the pages (from visibility map) even if we are freezing fewer transactions because currently there is no way to know what block/tuple contains which transaction id.\n\nYes, that feature is to speed up vacuum by dynamically disabling both\ncost-based delay and some cleanup work whereas your idea is to do that\nby speeding up heap scan.\n\n> If there is a way then it would be easier to directly freeze those tuples quickly and advance the relfrozenxid for the table.\n\nMaybe we can track the oldest xid per page in a map like visiblity map\nor integrate it with visibility map. We need to freeze only pages that\nare all-visible and whose oldest xid is older than the cut-off xid. I\nthink we need to track both xid and multi xid.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 13 Apr 2021 21:32:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuum freeze - possible improvements"
},
{
"msg_contents": "exactly my point, want to scan only 500GB data instead of 1TB. That can be\nhandy for vacuum freeze at a dangerous stage (reaching towards 2B).\n\n\"Maybe we can track the oldest xid per page in a map like visiblity map\nor integrate it with visibility map. We need to freeze only pages that\nare all-visible and whose oldest xid is older than the cut-off xid. I\nthink we need to track both xid and multi xid.\"\n\nYes I thought of that (keep track of olderst xid per page instead of per\ntuple), only thing here is every time there is some modification on the\npage, that oldest xid needs to be recalculated for respective page. Still\nthat makes sense with kind of BRIN type structure to keep the xid per page.\nWith Binary Tree Index structure, new transaction/tuple will fit right\nside (as that would be news transaction until 2B) and then other side leaf\nblocks can be removed with every vacuum freeze.\n\n\n\n\nOn Tue, Apr 13, 2021 at 6:02 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Tue, Apr 13, 2021 at 1:51 PM Virender Singla <virender.cse@gmail.com>\n> wrote:\n> >\n> > Thanks Masahiko for the response.\n> >\n> > \"What is\n> > the use case where users want to freeze fewer transactions, meaning\n> > invoking anti-wraparound frequently?\"\n> >\n> > My overall focus here is anti wraparound vacuum on huge tables in\n> emergency situations (where we reached very close to 2B transactions or\n> already in outage window). In this situation we want to recover ASAP\n> instead of having many hours of outage.The Purpose of increasing\n> \"vacuum_freeze_min_age\" to high value is that anti wraparound vacuum will\n> have to do less work because we are asking less transactions/tuples to\n> freeze (Of Course subsequent vacuum has to do the remaining work).\n>\n> I think I understood your proposal. 
For example, if we insert 500GB\n> tuples during the first 1 billion transactions and then insert more\n> 500GB tuples into another 500GB blocks during the next 1 billion\n> transactions, vacuum freeze scans 1TB whereas we scans only 500GB that\n> are modified by the first insertions if we’re able to freeze directly\n> tuples that are older than the cut-off. Is that right?\n>\n> >\n> > \"So the vacuum freeze will still have to\n> > process tuples that are inserted/modified during consuming 1 billion\n> > transactions. It seems to me that it’s not fewer transactions.\"\n> >\n> > Yes another thing here is anti wraparound vacuum also cleans dead tuples\n> but i am not sure what we can do to avoid that.\n> > There can be vacuum to only freeze the tulpes?\n>\n> I think it's a good idea to skip all work except for freezing tuples\n> in emergency cases. Thanks to vacuum_failsafe_age we can avoid index\n> vacuuming, index cleanup, and heap vacuuming.\n>\n> >\n> > Thanks for sharing PG14 improvements, those are nice to have. But still\n> the anti wraparound vacuum will have to scan all the pages (from visibility\n> map) even if we are freezing fewer transactions because currently there is\n> no way to know what block/tuple contains which transaction id.\n>\n> Yes, that feature is to speed up vacuum by dynamically disabling both\n> cost-based delay and some cleanup work whereas your idea is to do that\n> by speeding up heap scan.\n>\n> > If there is a way then it would be easier to directly freeze those\n> tuples quickly and advance the relfrozenxid for the table.\n>\n> Maybe we can track the oldest xid per page in a map like visiblity map\n> or integrate it with visibility map. We need to freeze only pages that\n> are all-visible and whose oldest xid is older than the cut-off xid. 
I\n> think we need to track both xid and multi xid.\n>\n> Regards,\n>\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\nexactly my point, want to scan only 500GB data instead of 1TB. That can be handy for vacuum freeze at a dangerous stage (reaching towards 2B).\"Maybe we can track the oldest xid per page in a map like visiblity map\nor integrate it with visibility map. We need to freeze only pages that\nare all-visible and whose oldest xid is older than the cut-off xid. I\nthink we need to track both xid and multi xid.\"Yes I thought of that (keep track of olderst xid per page instead of per tuple), only thing here is every time there is some modification on the page, that oldest xid needs to be recalculated for respective page. Still that makes sense with kind of BRIN type structure to keep the xid per page.With Binary Tree Index structure, new transaction/tuple will fit right side (as that would be news transaction until 2B) and then other side leaf blocks can be removed with every vacuum freeze.On Tue, Apr 13, 2021 at 6:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:On Tue, Apr 13, 2021 at 1:51 PM Virender Singla <virender.cse@gmail.com> wrote:\n>\n> Thanks Masahiko for the response.\n>\n> \"What is\n> the use case where users want to freeze fewer transactions, meaning\n> invoking anti-wraparound frequently?\"\n>\n> My overall focus here is anti wraparound vacuum on huge tables in emergency situations (where we reached very close to 2B transactions or already in outage window). In this situation we want to recover ASAP instead of having many hours of outage.The Purpose of increasing \"vacuum_freeze_min_age\" to high value is that anti wraparound vacuum will have to do less work because we are asking less transactions/tuples to freeze (Of Course subsequent vacuum has to do the remaining work).\n\nI think I understood your proposal. 
For example, if we insert 500GB\ntuples during the first 1 billion transactions and then insert more\n500GB tuples into another 500GB blocks during the next 1 billion\ntransactions, vacuum freeze scans 1TB whereas we scans only 500GB that\nare modified by the first insertions if we’re able to freeze directly\ntuples that are older than the cut-off. Is that right?\n\n>\n> \"So the vacuum freeze will still have to\n> process tuples that are inserted/modified during consuming 1 billion\n> transactions. It seems to me that it’s not fewer transactions.\"\n>\n> Yes another thing here is anti wraparound vacuum also cleans dead tuples but i am not sure what we can do to avoid that.\n> There can be vacuum to only freeze the tulpes?\n\nI think it's a good idea to skip all work except for freezing tuples\nin emergency cases. Thanks to vacuum_failsafe_age we can avoid index\nvacuuming, index cleanup, and heap vacuuming.\n\n>\n> Thanks for sharing PG14 improvements, those are nice to have. But still the anti wraparound vacuum will have to scan all the pages (from visibility map) even if we are freezing fewer transactions because currently there is no way to know what block/tuple contains which transaction id.\n\nYes, that feature is to speed up vacuum by dynamically disabling both\ncost-based delay and some cleanup work whereas your idea is to do that\nby speeding up heap scan.\n\n> If there is a way then it would be easier to directly freeze those tuples quickly and advance the relfrozenxid for the table.\n\nMaybe we can track the oldest xid per page in a map like visiblity map\nor integrate it with visibility map. We need to freeze only pages that\nare all-visible and whose oldest xid is older than the cut-off xid. I\nthink we need to track both xid and multi xid.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 13 Apr 2021 19:19:42 +0530",
"msg_from": "Virender Singla <virender.cse@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuum freeze - possible improvements"
}
] |
[
{
"msg_contents": "> Then I get timeout error occurs and the subscriber worker keep re-launching\n> over and over (you did not mention see such errors?)\nI test again and get errors, too. I didn't check log after timeout in the previous test. \n\nRegards,\nTang\n\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 10:50:54 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Could you help testing logical replication?"
},
{
"msg_contents": "Sorry for sending a wrong mail. Please ignore it.\n\n> -----Original Message-----\n> From: Shi, Yu/侍 雨\n> Sent: Monday, April 12, 2021 6:51 PM\n> To: Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com>\n> Cc: pgsql-hackers@lists.postgresql.org\n> Subject: RE: Could you help testing logical replication?\n> \n> > Then I get timeout error occurs and the subscriber worker keep\n> > re-launching over and over (you did not mention see such errors?)\n> I test again and get errors, too. I didn't check log after timeout in the previous\n> test.\n> \n> Regards,\n> Tang\n> \n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 10:55:37 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Could you help testing logical replication?"
}
] |
[
{
"msg_contents": "Shi, Yu/侍 雨 would like to recall the message “Could you help testing logical replication?”.",
"msg_date": "Mon, 12 Apr 2021 11:40:54 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Recall: Could you help testing logical replication?"
}
] |
[
{
"msg_contents": "Shi, Yu/侍 雨 would like to recall the message “Could you help testing logical replication?”.",
"msg_date": "Mon, 12 Apr 2021 11:48:38 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Recall: Could you help testing logical replication?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen trying to run on master (but afaik also PG-13) TPC-DS queries 94, \n95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\nWhen I disable enable_gathermerge the problem goes away and then the \nplan for query 94 looks like below. I tried figuring out what the \nproblem is but to be honest I would need some pointers as the code that \ntries to matching equivalence members in prepare_sort_from_pathkeys is \nsomething i'm really not familiar with.\n\nTo reproduce you can either ingest and test using the toolkit I used too \n(see https://github.com/swarm64/s64da-benchmark-toolkit/), or \nalternatively just use the schema (see \nhttps://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n\nBest,\nLuc\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=229655.62..229655.63 rows=1 width=72)\n -> Sort (cost=229655.62..229655.63 rows=1 width=72)\n Sort Key: (count(DISTINCT ws1.ws_order_number))\n -> Aggregate (cost=229655.60..229655.61 rows=1 width=72)\n -> Nested Loop Semi Join (cost=1012.65..229655.59 \nrows=1 width=16)\n -> Nested Loop (cost=1012.22..229653.73 rows=1 \nwidth=20)\n Join Filter: (ws1.ws_web_site_sk = \nweb_site.web_site_sk)\n -> Nested Loop (cost=1012.22..229651.08 \nrows=1 width=24)\n -> Gather (cost=1011.80..229650.64 \nrows=1 width=28)\n Workers Planned: 2\n -> Nested Loop Anti Join \n(cost=11.80..228650.54 rows=1 width=28)\n -> Hash Join \n(cost=11.37..227438.35 rows=2629 width=28)\n Hash Cond: \n(ws1.ws_ship_date_sk = date_dim.d_date_sk)\n -> Parallel Seq \nScan on web_sales ws1 (cost=0.00..219548.92 rows=3000992 width=32)\n -> Hash \n(cost=10.57..10.57 rows=64 width=4)\n -> Index Scan \nusing idx_d_date on date_dim (cost=0.29..10.57 rows=64 width=4)\n Index \nCond: ((d_date >= '2000-03-01'::date) AND (d_date <= '2000-04-30'::date))\n -> Index Only 
Scan using \nidx_wr_order_number on web_returns wr1 (cost=0.42..0.46 rows=2 width=4)\n Index Cond: \n(wr_order_number = ws1.ws_order_number)\n -> Index Scan using \ncustomer_address_pkey on customer_address (cost=0.42..0.44 rows=1 width=4)\n Index Cond: (ca_address_sk = \nws1.ws_ship_addr_sk)\n Filter: ((ca_state)::text = \n'GA'::text)\n -> Seq Scan on web_site (cost=0.00..2.52 \nrows=10 width=4)\n Filter: ((web_company_name)::text = \n'pri'::text)\n -> Index Scan using idx_ws_order_number on \nweb_sales ws2 (cost=0.43..1.84 rows=59 width=8)\n Index Cond: (ws_order_number = \nws1.ws_order_number)\n Filter: (ws1.ws_warehouse_sk <> ws_warehouse_sk)\n\nThe top of the stacktrace is:\n#0 errfinish (filename=0x5562dc1a5125 \"createplan.c\", lineno=6186, \nfuncname=0x5562dc1a54d0 <__func__.14> \"prepare_sort_from_pathkeys\") at \nelog.c:514\n#1 0x00005562dbc2d7de in prepare_sort_from_pathkeys \n(lefttree=0x5562dc5a2f58, pathkeys=0x5562dc4eabc8, relids=0x0, \nreqColIdx=0x0, adjust_tlist_in_place=<optimized out>, \np_numsortkeys=0x7ffc0b8cda84, p_sortColIdx=0x7ffc0b8cda88, \np_sortOperators=0x7ffc0b8cda90, p_collations=0x7ffc0b8cda98, \np_nullsFirst=0x7ffc0b8cdaa0) at createplan.c:6186\n#2 0x00005562dbe8d695 in make_sort_from_pathkeys (lefttree=<optimized \nout>, pathkeys=<optimized out>, relids=<optimized out>) at createplan.c:6313\n#3 0x00005562dbe8eba3 in create_sort_plan (flags=1, \nbest_path=0x5562dc548d68, root=0x5562dc508cf8) at createplan.c:2118\n#4 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc548d68, \nflags=1) at createplan.c:489\n#5 0x00005562dbe8f315 in create_gather_merge_plan \n(best_path=0x5562dc5782f8, root=0x5562dc508cf8) at createplan.c:1885\n#6 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc5782f8, \nflags=<optimized out>) at createplan.c:541\n#7 0x00005562dbe8ddad in create_nestloop_plan \n(best_path=0x5562dc585668, root=0x5562dc508cf8) at createplan.c:4237\n#8 create_join_plan (best_path=0x5562dc585668, root=0x5562dc508cf8) 
at \ncreateplan.c:1062\n#9 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc585668, \nflags=<optimized out>) at createplan.c:418\n#10 0x00005562dbe8ddad in create_nestloop_plan \n(best_path=0x5562dc5c4428, root=0x5562dc508cf8) at createplan.c:4237\n#11 create_join_plan (best_path=0x5562dc5c4428, root=0x5562dc508cf8) at \ncreateplan.c:1062\n#12 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc5c4428, \nflags=<optimized out>) at createplan.c:418\n#13 0x00005562dbe8ddad in create_nestloop_plan \n(best_path=0x5562dc5d3bd8, root=0x5562dc508cf8) at createplan.c:4237\n#14 create_join_plan (best_path=0x5562dc5d3bd8, root=0x5562dc508cf8) at \ncreateplan.c:1062\n#15 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc5d3bd8, \nflags=<optimized out>) at createplan.c:418\n#16 0x00005562dbe8e428 in create_agg_plan (best_path=0x5562dc5d6f08, \nroot=0x5562dc508cf8) at createplan.c:2238\n#17 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc5d6f08, \nflags=3) at createplan.c:509\n#18 0x00005562dbe8eb73 in create_sort_plan (flags=1, \nbest_path=0x5562dc5d7378, root=0x5562dc508cf8) at createplan.c:2109\n#19 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc5d7378, \nflags=1) at createplan.c:489\n#20 0x00005562dbe8e7e8 in create_limit_plan (flags=1, \nbest_path=0x5562dc5d7a08, root=0x5562dc508cf8) at createplan.c:2784\n#21 create_plan_recurse (root=0x5562dc508cf8, best_path=0x5562dc5d7a08, \nflags=1) at createplan.c:536\n#22 0x00005562dbe914ae in create_plan (root=root@entry=0x5562dc508cf8, \nbest_path=<optimized out>) at createplan.c:349\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:24:32 +0200",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": true,
"msg_subject": "\"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On 4/12/21 2:24 PM, Luc Vlaming wrote:\n> Hi,\n> \n> When trying to run on master (but afaik also PG-13) TPC-DS queries 94,\n> 95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\n> When I disable enable_gathermerge the problem goes away and then the\n> plan for query 94 looks like below. I tried figuring out what the\n> problem is but to be honest I would need some pointers as the code that\n> tries to matching equivalence members in prepare_sort_from_pathkeys is\n> something i'm really not familiar with.\n> \n\nCould be related to incremental sort, which allowed some gather merge\npaths that were impossible before. We had a couple issues related to\nthat fixed in November, IIRC.\n\n> To reproduce you can either ingest and test using the toolkit I used too\n> (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n> alternatively just use the schema (see\n> https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n> \n\nThanks, I'll see if I can reproduce that with your schema.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:36:58 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/12/21 2:24 PM, Luc Vlaming wrote:\n> > Hi,\n> >\n> > When trying to run on master (but afaik also PG-13) TPC-DS queries 94,\n> > 95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\n> > When I disable enable_gathermerge the problem goes away and then the\n> > plan for query 94 looks like below. I tried figuring out what the\n> > problem is but to be honest I would need some pointers as the code that\n> > tries to matching equivalence members in prepare_sort_from_pathkeys is\n> > something i'm really not familiar with.\n> >\n>\n> Could be related to incremental sort, which allowed some gather merge\n> paths that were impossible before. We had a couple issues related to\n> that fixed in November, IIRC.\n>\n> > To reproduce you can either ingest and test using the toolkit I used too\n> > (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n> > alternatively just use the schema (see\n> > https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n> >\n>\n> Thanks, I'll see if I can reproduce that with your schema.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nThe query in question is:\n\nselect count(*)\n from store_sales\n ,household_demographics\n ,time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n and ss_hdemo_sk = household_demographics.hd_demo_sk\n and ss_store_sk = s_store_sk\n and time_dim.t_hour = 15\n and time_dim.t_minute >= 30\n and household_demographics.hd_dep_count = 7\n and store.s_store_name = 'ese'\n order by count(*)\n limit 100;\n\n From debugging output it looks like this is the plan being chosen\n(cheapest total path):\n Gather(store_sales household_demographics time_dim) rows=60626\ncost=3145.73..699910.15\n HashJoin(store_sales household_demographics 
time_dim)\nrows=25261 cost=2145.73..692847.55\n clauses: store_sales.ss_hdemo_sk =\nhousehold_demographics.hd_demo_sk\n HashJoin(store_sales time_dim) rows=252609\ncost=1989.73..692028.08\n clauses: store_sales.ss_sold_time_sk =\ntime_dim.t_time_sk\n SeqScan(store_sales) rows=11998564\ncost=0.00..658540.64\n SeqScan(time_dim) rows=1070\ncost=0.00..1976.35\n SeqScan(household_demographics) rows=720\ncost=0.00..147.00\n\nprepare_sort_from_pathkeys fails to find a pathkey because\ntlist_member_ignore_relabel returns null -- which seemed weird because\nthe sortexpr is an Aggref (in a single member equivalence class) and\nthe tlist contains a single member that's also an Aggref. It turns out\nthat the only difference between the two Aggrefs is that the tlist\nentry has \"aggsplit = AGGSPLIT_INITIAL_SERIAL\" while the sortexpr has\naggsplit = AGGSPLIT_SIMPLE.\n\nThat's as far as I've gotten so far, but I figured I'd get that info\nout to see if it means anything obvious to anyone else.\n\nJames\n\n\n",
"msg_date": "Wed, 14 Apr 2021 17:42:49 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Could be related to incremental sort, which allowed some gather merge\n> paths that were impossible before. We had a couple issues related to\n> that fixed in November, IIRC.\n\nHmm, could be. Although, the stack trace at issue doesn't seem to show\na call to create_incrementalsort_plan().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 20:16:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 8:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > Could be related to incremental sort, which allowed some gather merge\n> > paths that were impossible before. We had a couple issues related to\n> > that fixed in November, IIRC.\n>\n> Hmm, could be. Although, the stack trace at issue doesn't seem to show\n> a call to create_incrementalsort_plan().\n\nThe changes to gather merge path generation made it possible to use\nthose paths in more cases for both incremental sort and regular sort,\nso by \"incremental sort\" I read Tomas as saying \"the patches that\nbrought in incremental sort\" not specifically \"incremental sort\nitself\".\n\nJames\n\n\n",
"msg_date": "Wed, 14 Apr 2021 20:19:54 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 5:43 PM James Coleman <jtc331@gmail.com> wrote:\n> The query in question is:\n> select count(*)\n> from store_sales\n> ,household_demographics\n> ,time_dim, store\n> where ss_sold_time_sk = time_dim.t_time_sk\n> and ss_hdemo_sk = household_demographics.hd_demo_sk\n> and ss_store_sk = s_store_sk\n> and time_dim.t_hour = 15\n> and time_dim.t_minute >= 30\n> and household_demographics.hd_dep_count = 7\n> and store.s_store_name = 'ese'\n> order by count(*)\n> limit 100;\n>\n> From debugging output it looks like this is the plan being chosen\n> (cheapest total path):\n> Gather(store_sales household_demographics time_dim) rows=60626\n> cost=3145.73..699910.15\n> HashJoin(store_sales household_demographics time_dim)\n> rows=25261 cost=2145.73..692847.55\n> clauses: store_sales.ss_hdemo_sk =\n> household_demographics.hd_demo_sk\n> HashJoin(store_sales time_dim) rows=252609\n> cost=1989.73..692028.08\n> clauses: store_sales.ss_sold_time_sk =\n> time_dim.t_time_sk\n> SeqScan(store_sales) rows=11998564\n> cost=0.00..658540.64\n> SeqScan(time_dim) rows=1070\n> cost=0.00..1976.35\n> SeqScan(household_demographics) rows=720\n> cost=0.00..147.00\n\nThis doesn't really make sense to me given the strack trace in the OP.\nThat seems to go Limit -> Sort -> Agg -> NestLoop -> NestLoop ->\nNestLoop -> GatherMerge -> Sort. If the plan were as you have it here,\nthere would be no Sort and no Gather Merge, so where would be getting\na failure related to pathkeys?\n\nI think if we can get the correct plan the thing to look at would be\nthe tlists at the relevant levels of the plan.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 20:21:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 8:20 PM James Coleman <jtc331@gmail.com> wrote:\n> > Hmm, could be. Although, the stack trace at issue doesn't seem to show\n> > a call to create_incrementalsort_plan().\n>\n> The changes to gather merge path generation made it possible to use\n> those paths in more cases for both incremental sort and regular sort,\n> so by \"incremental sort\" I read Tomas as saying \"the patches that\n> brought in incremental sort\" not specifically \"incremental sort\n> itself\".\n\nI agree. That's why I said \"hmm, could be\" even though the plan\ndoesn't involve one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 20:21:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 8:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Apr 14, 2021 at 5:43 PM James Coleman <jtc331@gmail.com> wrote:\n> > The query in question is:\n> > select count(*)\n> > from store_sales\n> > ,household_demographics\n> > ,time_dim, store\n> > where ss_sold_time_sk = time_dim.t_time_sk\n> > and ss_hdemo_sk = household_demographics.hd_demo_sk\n> > and ss_store_sk = s_store_sk\n> > and time_dim.t_hour = 15\n> > and time_dim.t_minute >= 30\n> > and household_demographics.hd_dep_count = 7\n> > and store.s_store_name = 'ese'\n> > order by count(*)\n> > limit 100;\n> >\n> > From debugging output it looks like this is the plan being chosen\n> > (cheapest total path):\n> > Gather(store_sales household_demographics time_dim) rows=60626\n> > cost=3145.73..699910.15\n> > HashJoin(store_sales household_demographics time_dim)\n> > rows=25261 cost=2145.73..692847.55\n> > clauses: store_sales.ss_hdemo_sk =\n> > household_demographics.hd_demo_sk\n> > HashJoin(store_sales time_dim) rows=252609\n> > cost=1989.73..692028.08\n> > clauses: store_sales.ss_sold_time_sk =\n> > time_dim.t_time_sk\n> > SeqScan(store_sales) rows=11998564\n> > cost=0.00..658540.64\n> > SeqScan(time_dim) rows=1070\n> > cost=0.00..1976.35\n> > SeqScan(household_demographics) rows=720\n> > cost=0.00..147.00\n>\n> This doesn't really make sense to me given the strack trace in the OP.\n> That seems to go Limit -> Sort -> Agg -> NestLoop -> NestLoop ->\n> NestLoop -> GatherMerge -> Sort. 
If the plan were as you have it here,\n> there would be no Sort and no Gather Merge, so where would be getting\n> a failure related to pathkeys?\n\nAh, yeah, I'm not sure where the original stacktrace came from, but\nhere's the stack for the query I reproduced it with (perhaps it does\nso on different queries or there was some other GUC change in the\nreported plan):\n\n#0 errfinish (filename=filename@entry=0x56416eefa845 \"createplan.c\",\nlineno=lineno@entry=6186,\n funcname=funcname@entry=0x56416eefb660 <__func__.24872>\n\"prepare_sort_from_pathkeys\") at elog.c:514\n#1 0x000056416eb6ed52 in prepare_sort_from_pathkeys\n(lefttree=0x564170552658, pathkeys=0x5641704f2640, relids=0x0,\nreqColIdx=reqColIdx@entry=0x0,\n adjust_tlist_in_place=adjust_tlist_in_place@entry=false,\np_numsortkeys=p_numsortkeys@entry=0x7fff1252817c,\np_sortColIdx=0x7fff12528170,\n p_sortOperators=0x7fff12528168, p_collations=0x7fff12528160,\np_nullsFirst=0x7fff12528158) at createplan.c:6186\n#2 0x000056416eb6ee69 in make_sort_from_pathkeys (lefttree=<optimized\nout>, pathkeys=<optimized out>, relids=<optimized out>) at\ncreateplan.c:6313\n#3 0x000056416eb71fc7 in create_sort_plan\n(root=root@entry=0x564170511a70,\nbest_path=best_path@entry=0x56417054f650, flags=flags@entry=1)\n at createplan.c:2118\n#4 0x000056416eb6f638 in create_plan_recurse\n(root=root@entry=0x564170511a70, best_path=0x56417054f650,\nflags=flags@entry=1) at createplan.c:489\n#5 0x000056416eb72e06 in create_gather_merge_plan\n(root=root@entry=0x564170511a70,\nbest_path=best_path@entry=0x56417054f6e8) at createplan.c:1885\n#6 0x000056416eb6f723 in create_plan_recurse\n(root=root@entry=0x564170511a70, best_path=0x56417054f6e8,\nflags=flags@entry=4) at createplan.c:541\n#7 0x000056416eb726fb in create_agg_plan\n(root=root@entry=0x564170511a70,\nbest_path=best_path@entry=0x56417054f8c8) at createplan.c:2238\n#8 0x000056416eb6f67b in create_plan_recurse\n(root=root@entry=0x564170511a70, 
best_path=0x56417054f8c8,\nflags=flags@entry=3) at createplan.c:509\n#9 0x000056416eb71f8e in create_sort_plan\n(root=root@entry=0x564170511a70,\nbest_path=best_path@entry=0x56417054f560, flags=flags@entry=1)\n at createplan.c:2109\n#10 0x000056416eb6f638 in create_plan_recurse\n(root=root@entry=0x564170511a70, best_path=0x56417054f560,\nflags=flags@entry=1) at createplan.c:489\n#11 0x000056416eb72c83 in create_limit_plan\n(root=root@entry=0x564170511a70,\nbest_path=best_path@entry=0x56417054ffa0, flags=flags@entry=1)\n at createplan.c:2784\n#12 0x000056416eb6f713 in create_plan_recurse\n(root=root@entry=0x564170511a70, best_path=0x56417054ffa0,\nflags=flags@entry=1) at createplan.c:536\n#13 0x000056416eb6f79d in create_plan (root=root@entry=0x564170511a70,\nbest_path=<optimized out>) at createplan.c:349\n#14 0x000056416eb7fe93 in standard_planner (parse=0x564170437268,\nquery_string=<optimized out>, cursorOptions=2048,\nboundParams=<optimized out>)\n at planner.c:407\n\n> I think if we can get the correct plan the thing to look at would be\n> the tlists at the relevant levels of the plan.\n\nDoes the information in [1] help at all? The tlist does have an\nAggref, as expected, but its aggsplit value doesn't match the\npathkey's Aggref's aggsplit value.\n\nJames\n\n1: https://www.postgresql.org/message-id/CAAaqYe_NU4hO9COoJdcXWqjtH%3DdGMknYdsSdJjZ%3DJOHPTea-Nw%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 20:39:17 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 8:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Apr 14, 2021 at 5:43 PM James Coleman <jtc331@gmail.com> wrote:\n> > The query in question is:\n> > select count(*)\n> > from store_sales\n> > ,household_demographics\n> > ,time_dim, store\n> > where ss_sold_time_sk = time_dim.t_time_sk\n> > and ss_hdemo_sk = household_demographics.hd_demo_sk\n> > and ss_store_sk = s_store_sk\n> > and time_dim.t_hour = 15\n> > and time_dim.t_minute >= 30\n> > and household_demographics.hd_dep_count = 7\n> > and store.s_store_name = 'ese'\n> > order by count(*)\n> > limit 100;\n> >\n> > From debugging output it looks like this is the plan being chosen\n> > (cheapest total path):\n> > Gather(store_sales household_demographics time_dim) rows=60626\n> > cost=3145.73..699910.15\n> > HashJoin(store_sales household_demographics time_dim)\n> > rows=25261 cost=2145.73..692847.55\n> > clauses: store_sales.ss_hdemo_sk =\n> > household_demographics.hd_demo_sk\n> > HashJoin(store_sales time_dim) rows=252609\n> > cost=1989.73..692028.08\n> > clauses: store_sales.ss_sold_time_sk =\n> > time_dim.t_time_sk\n> > SeqScan(store_sales) rows=11998564\n> > cost=0.00..658540.64\n> > SeqScan(time_dim) rows=1070\n> > cost=0.00..1976.35\n> > SeqScan(household_demographics) rows=720\n> > cost=0.00..147.00\n>\n> This doesn't really make sense to me given the strack trace in the OP.\n> That seems to go Limit -> Sort -> Agg -> NestLoop -> NestLoop ->\n> NestLoop -> GatherMerge -> Sort. If the plan were as you have it here,\n> there would be no Sort and no Gather Merge, so where would be getting\n> a failure related to pathkeys?\n\nAlso I just realized why this didn't make sense -- I'm not sure what\nthe above path is. It'd gotten logged as part of the debug options I\nhave configured, but it must be 1.) incomplete (perhaps at a lower\nlevel of path generation) and/or not the final path selected.\n\nMy apologies for the confusion.\n\nJames\n\n\n",
"msg_date": "Wed, 14 Apr 2021 20:45:42 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 5:42 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 4/12/21 2:24 PM, Luc Vlaming wrote:\n> > > Hi,\n> > >\n> > > When trying to run on master (but afaik also PG-13) TPC-DS queries 94,\n> > > 95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\n> > > When I disable enable_gathermerge the problem goes away and then the\n> > > plan for query 94 looks like below. I tried figuring out what the\n> > > problem is but to be honest I would need some pointers as the code that\n> > > tries to matching equivalence members in prepare_sort_from_pathkeys is\n> > > something i'm really not familiar with.\n> > >\n> >\n> > Could be related to incremental sort, which allowed some gather merge\n> > paths that were impossible before. We had a couple issues related to\n> > that fixed in November, IIRC.\n> >\n> > > To reproduce you can either ingest and test using the toolkit I used too\n> > > (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n> > > alternatively just use the schema (see\n> > > https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n> > >\n> >\n> > Thanks, I'll see if I can reproduce that with your schema.\n> >\n> >\n> > regards\n> >\n> > --\n> > Tomas Vondra\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n>\n> The query in question is:\n>\n> select count(*)\n> from store_sales\n> ,household_demographics\n> ,time_dim, store\n> where ss_sold_time_sk = time_dim.t_time_sk\n> and ss_hdemo_sk = household_demographics.hd_demo_sk\n> and ss_store_sk = s_store_sk\n> and time_dim.t_hour = 15\n> and time_dim.t_minute >= 30\n> and household_demographics.hd_dep_count = 7\n> and store.s_store_name = 'ese'\n> order by count(*)\n> limit 100;\n>\n> From debugging output it looks like this is the plan being chosen\n> (cheapest 
total path):\n> Gather(store_sales household_demographics time_dim) rows=60626\n> cost=3145.73..699910.15\n> HashJoin(store_sales household_demographics time_dim)\n> rows=25261 cost=2145.73..692847.55\n> clauses: store_sales.ss_hdemo_sk =\n> household_demographics.hd_demo_sk\n> HashJoin(store_sales time_dim) rows=252609\n> cost=1989.73..692028.08\n> clauses: store_sales.ss_sold_time_sk =\n> time_dim.t_time_sk\n> SeqScan(store_sales) rows=11998564\n> cost=0.00..658540.64\n> SeqScan(time_dim) rows=1070\n> cost=0.00..1976.35\n> SeqScan(household_demographics) rows=720\n> cost=0.00..147.00\n>\n> prepare_sort_from_pathkeys fails to find a pathkey because\n> tlist_member_ignore_relabel returns null -- which seemed weird because\n> the sortexpr is an Aggref (in a single member equivalence class) and\n> the tlist contains a single member that's also an Aggref. It turns out\n> that the only difference between the two Aggrefs is that the tlist\n> entry has \"aggsplit = AGGSPLIT_INITIAL_SERIAL\" while the sortexpr has\n> aggsplit = AGGSPLIT_SIMPLE.\n>\n> That's as far as I've gotten so far, but I figured I'd get that info\n> out to see if it means anything obvious to anyone else.\n\nThis really goes back to [1] where we fixed a similar issue by making\nfind_em_expr_usable_for_sorting_rel parallel the rules in\nprepare_sort_from_pathkeys.\n\nMost of those conditions got copied, and the case we were trying to\nhandle is the fact that prepare_sort_from_pathkeys can generate a\ntarget list entry under those conditions if one doesn't exist. However\nthere's a further restriction there I don't remember looking at: it\nuses pull_var_clause and tlist_member_ignore_relabel to ensure that\nall of the vars that feed into the sort expression are found in the\ntarget list. 
As I understand it, that is: it will build a target list\nentry for something like \"md5(column)\" if \"column\" (and that was one\nof our test cases for the previous fix) is in the target list already.\n\nBut there's an additional detail here: the call to pull_var_clause\nrequests aggregates, window functions, and placeholders be treated as\nvars. That means for our Aggref case it would require that the two\nAggrefs be fully equal, so the differing aggsplit member would cause a\ntarget list entry not to be built, hence our error here.\n\nI've attached a quick and dirty patch that encodes that final rule\nfrom prepare_sort_from_pathkeys into\nfind_em_expr_usable_for_sorting_rel. I can't help but think that\nthere's a cleaner way to do with this with less code duplication, but\nhindering that is that prepare_sort_from_pathkeys is working with a\nTargetList while find_em_expr_usable_for_sorting_rel is working with a\nlist of expressions.\n\nJames\n\n1: https://www.postgresql.org/message-id/CAAaqYe9C3f6A_tZCRfr9Dm7hPpgGwpp4i-K_%3DNS9GWXuNiFANg%40mail.gmail.com",
"msg_date": "Wed, 14 Apr 2021 22:01:11 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On 15-04-2021 04:01, James Coleman wrote:\n> On Wed, Apr 14, 2021 at 5:42 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 4/12/21 2:24 PM, Luc Vlaming wrote:\n>>>> Hi,\n>>>>\n>>>> When trying to run on master (but afaik also PG-13) TPC-DS queries 94,\n>>>> 95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\n>>>> When I disable enable_gathermerge the problem goes away and then the\n>>>> plan for query 94 looks like below. I tried figuring out what the\n>>>> problem is but to be honest I would need some pointers as the code that\n>>>> tries to matching equivalence members in prepare_sort_from_pathkeys is\n>>>> something i'm really not familiar with.\n>>>>\n>>>\n>>> Could be related to incremental sort, which allowed some gather merge\n>>> paths that were impossible before. We had a couple issues related to\n>>> that fixed in November, IIRC.\n>>>\n>>>> To reproduce you can either ingest and test using the toolkit I used too\n>>>> (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n>>>> alternatively just use the schema (see\n>>>> https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n>>>>\n>>>\n>>> Thanks, I'll see if I can reproduce that with your schema.\n>>>\n>>>\n>>> regards\n>>>\n>>> --\n>>> Tomas Vondra\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>> The Enterprise PostgreSQL Company\n>>\n>> The query in question is:\n>>\n>> select count(*)\n>> from store_sales\n>> ,household_demographics\n>> ,time_dim, store\n>> where ss_sold_time_sk = time_dim.t_time_sk\n>> and ss_hdemo_sk = household_demographics.hd_demo_sk\n>> and ss_store_sk = s_store_sk\n>> and time_dim.t_hour = 15\n>> and time_dim.t_minute >= 30\n>> and household_demographics.hd_dep_count = 7\n>> and store.s_store_name = 'ese'\n>> order by count(*)\n>> limit 100;\n>>\n>> From debugging output it 
looks like this is the plan being chosen\n>> (cheapest total path):\n>> Gather(store_sales household_demographics time_dim) rows=60626\n>> cost=3145.73..699910.15\n>> HashJoin(store_sales household_demographics time_dim)\n>> rows=25261 cost=2145.73..692847.55\n>> clauses: store_sales.ss_hdemo_sk =\n>> household_demographics.hd_demo_sk\n>> HashJoin(store_sales time_dim) rows=252609\n>> cost=1989.73..692028.08\n>> clauses: store_sales.ss_sold_time_sk =\n>> time_dim.t_time_sk\n>> SeqScan(store_sales) rows=11998564\n>> cost=0.00..658540.64\n>> SeqScan(time_dim) rows=1070\n>> cost=0.00..1976.35\n>> SeqScan(household_demographics) rows=720\n>> cost=0.00..147.00\n>>\n>> prepare_sort_from_pathkeys fails to find a pathkey because\n>> tlist_member_ignore_relabel returns null -- which seemed weird because\n>> the sortexpr is an Aggref (in a single member equivalence class) and\n>> the tlist contains a single member that's also an Aggref. It turns out\n>> that the only difference between the two Aggrefs is that the tlist\n>> entry has \"aggsplit = AGGSPLIT_INITIAL_SERIAL\" while the sortexpr has\n>> aggsplit = AGGSPLIT_SIMPLE.\n>>\n>> That's as far as I've gotten so far, but I figured I'd get that info\n>> out to see if it means anything obvious to anyone else.\n> \n> This really goes back to [1] where we fixed a similar issue by making\n> find_em_expr_usable_for_sorting_rel parallel the rules in\n> prepare_sort_from_pathkeys.\n> \n> Most of those conditions got copied, and the case we were trying to\n> handle is the fact that prepare_sort_from_pathkeys can generate a\n> target list entry under those conditions if one doesn't exist. However\n> there's a further restriction there I don't remember looking at: it\n> uses pull_var_clause and tlist_member_ignore_relabel to ensure that\n> all of the vars that feed into the sort expression are found in the\n> target list. 
As I understand it, that is: it will build a target list\n> entry for something like \"md5(column)\" if \"column\" (and that was one\n> of our test cases for the previous fix) is in the target list already.\n> \n> But there's an additional detail here: the call to pull_var_clause\n> requests aggregates, window functions, and placeholders be treated as\n> vars. That means for our Aggref case it would require that the two\n> Aggrefs be fully equal, so the differing aggsplit member would cause a\n> target list entry not to be built, hence our error here.\n> \n> I've attached a quick and dirty patch that encodes that final rule\n> from prepare_sort_from_pathkeys into\n> find_em_expr_usable_for_sorting_rel. I can't help but think that\n> there's a cleaner way to do with this with less code duplication, but\n> hindering that is that prepare_sort_from_pathkeys is working with a\n> TargetList while find_em_expr_usable_for_sorting_rel is working with a\n> list of expressions.\n> \n> James\n> \n> 1: https://www.postgresql.org/message-id/CAAaqYe9C3f6A_tZCRfr9Dm7hPpgGwpp4i-K_%3DNS9GWXuNiFANg%40mail.gmail.com\n> \n\nHi,\n\nThe patch seems to make the planner proceed and not error out anymore. \nCannot judge if it's doing the right thing however or if its enough :) \nIt works for me for all reported queries however (queries 94-96).\n\nAnd sorry for the confusion wrt the stacktrace and plan. I tried to \nproduce a plan to possibly help with debugging but that would ofc then \nnot have the problem of the missing sortkey as otherwise i cannot \npresent a plan :) The stacktrace was however correct, and the plan \nconsidered involved a gather-merge with a sort. Unfortunately I could \nnot (easily) get the plan outputted in the end; even when setting the \ncosts to 0 somehow...\n\nRegards,\nLuc\n\n\n",
"msg_date": "Thu, 15 Apr 2021 11:33:56 +0200",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": true,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 5:33 AM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> On 15-04-2021 04:01, James Coleman wrote:\n> > On Wed, Apr 14, 2021 at 5:42 PM James Coleman <jtc331@gmail.com> wrote:\n> >>\n> >> On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n> >> <tomas.vondra@enterprisedb.com> wrote:\n> >>>\n> >>> On 4/12/21 2:24 PM, Luc Vlaming wrote:\n> >>>> Hi,\n> >>>>\n> >>>> When trying to run on master (but afaik also PG-13) TPC-DS queries 94,\n> >>>> 95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\n> >>>> When I disable enable_gathermerge the problem goes away and then the\n> >>>> plan for query 94 looks like below. I tried figuring out what the\n> >>>> problem is but to be honest I would need some pointers as the code that\n> >>>> tries to matching equivalence members in prepare_sort_from_pathkeys is\n> >>>> something i'm really not familiar with.\n> >>>>\n> >>>\n> >>> Could be related to incremental sort, which allowed some gather merge\n> >>> paths that were impossible before. 
We had a couple issues related to\n> >>> that fixed in November, IIRC.\n> >>>\n> >>>> To reproduce you can either ingest and test using the toolkit I used too\n> >>>> (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n> >>>> alternatively just use the schema (see\n> >>>> https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n> >>>>\n> >>>\n> >>> Thanks, I'll see if I can reproduce that with your schema.\n> >>>\n> >>>\n> >>> regards\n> >>>\n> >>> --\n> >>> Tomas Vondra\n> >>> EnterpriseDB: http://www.enterprisedb.com\n> >>> The Enterprise PostgreSQL Company\n> >>\n> >> The query in question is:\n> >>\n> >> select count(*)\n> >> from store_sales\n> >> ,household_demographics\n> >> ,time_dim, store\n> >> where ss_sold_time_sk = time_dim.t_time_sk\n> >> and ss_hdemo_sk = household_demographics.hd_demo_sk\n> >> and ss_store_sk = s_store_sk\n> >> and time_dim.t_hour = 15\n> >> and time_dim.t_minute >= 30\n> >> and household_demographics.hd_dep_count = 7\n> >> and store.s_store_name = 'ese'\n> >> order by count(*)\n> >> limit 100;\n> >>\n> >> From debugging output it looks like this is the plan being chosen\n> >> (cheapest total path):\n> >> Gather(store_sales household_demographics time_dim) rows=60626\n> >> cost=3145.73..699910.15\n> >> HashJoin(store_sales household_demographics time_dim)\n> >> rows=25261 cost=2145.73..692847.55\n> >> clauses: store_sales.ss_hdemo_sk =\n> >> household_demographics.hd_demo_sk\n> >> HashJoin(store_sales time_dim) rows=252609\n> >> cost=1989.73..692028.08\n> >> clauses: store_sales.ss_sold_time_sk =\n> >> time_dim.t_time_sk\n> >> SeqScan(store_sales) rows=11998564\n> >> cost=0.00..658540.64\n> >> SeqScan(time_dim) rows=1070\n> >> cost=0.00..1976.35\n> >> SeqScan(household_demographics) rows=720\n> >> cost=0.00..147.00\n> >>\n> >> prepare_sort_from_pathkeys fails to find a pathkey because\n> >> tlist_member_ignore_relabel returns null -- which seemed weird because\n> >> the 
sortexpr is an Aggref (in a single member equivalence class) and\n> >> the tlist contains a single member that's also an Aggref. It turns out\n> >> that the only difference between the two Aggrefs is that the tlist\n> >> entry has \"aggsplit = AGGSPLIT_INITIAL_SERIAL\" while the sortexpr has\n> >> aggsplit = AGGSPLIT_SIMPLE.\n> >>\n> >> That's as far as I've gotten so far, but I figured I'd get that info\n> >> out to see if it means anything obvious to anyone else.\n> >\n> > This really goes back to [1] where we fixed a similar issue by making\n> > find_em_expr_usable_for_sorting_rel parallel the rules in\n> > prepare_sort_from_pathkeys.\n> >\n> > Most of those conditions got copied, and the case we were trying to\n> > handle is the fact that prepare_sort_from_pathkeys can generate a\n> > target list entry under those conditions if one doesn't exist. However\n> > there's a further restriction there I don't remember looking at: it\n> > uses pull_var_clause and tlist_member_ignore_relabel to ensure that\n> > all of the vars that feed into the sort expression are found in the\n> > target list. As I understand it, that is: it will build a target list\n> > entry for something like \"md5(column)\" if \"column\" (and that was one\n> > of our test cases for the previous fix) is in the target list already.\n> >\n> > But there's an additional detail here: the call to pull_var_clause\n> > requests aggregates, window functions, and placeholders be treated as\n> > vars. That means for our Aggref case it would require that the two\n> > Aggrefs be fully equal, so the differing aggsplit member would cause a\n> > target list entry not to be built, hence our error here.\n> >\n> > I've attached a quick and dirty patch that encodes that final rule\n> > from prepare_sort_from_pathkeys into\n> > find_em_expr_usable_for_sorting_rel. 
I can't help but think that\n> > there's a cleaner way to do with this with less code duplication, but\n> > hindering that is that prepare_sort_from_pathkeys is working with a\n> > TargetList while find_em_expr_usable_for_sorting_rel is working with a\n> > list of expressions.\n> >\n> > James\n> >\n> > 1: https://www.postgresql.org/message-id/CAAaqYe9C3f6A_tZCRfr9Dm7hPpgGwpp4i-K_%3DNS9GWXuNiFANg%40mail.gmail.com\n> >\n>\n> Hi,\n>\n> The patch seems to make the planner proceed and not error out anymore.\n> Cannot judge if it's doing the right thing however or if its enough :)\n> It works for me for all reported queries however (queries 94-96).\n>\n> And sorry for the confusion wrt the stacktrace and plan. I tried to\n> produce a plan to possibly help with debugging but that would ofc then\n> not have the problem of the missing sortkey as otherwise i cannot\n> present a plan :) The stacktrace was however correct, and the plan\n> considered involved a gather-merge with a sort. Unfortunately I could\n> not (easily) get the plan outputted in the end; even when setting the\n> costs to 0 somehow...\n>\n> Regards,\n> Luc\n\nSame patch, but with a test case now.\n\nJames",
"msg_date": "Thu, 15 Apr 2021 13:35:59 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "\n\nOn 4/15/21 2:21 AM, Robert Haas wrote:\n> On Wed, Apr 14, 2021 at 8:20 PM James Coleman <jtc331@gmail.com> wrote:\n>>> Hmm, could be. Although, the stack trace at issue doesn't seem to show\n>>> a call to create_incrementalsort_plan().\n>>\n>> The changes to gather merge path generation made it possible to use\n>> those paths in more cases for both incremental sort and regular sort,\n>> so by \"incremental sort\" I read Tomas as saying \"the patches that\n>> brought in incremental sort\" not specifically \"incremental sort\n>> itself\".\n> \n> I agree. That's why I said \"hmm, could be\" even though the plan\n> doesn't involve one.\n> \n\nYeah, that's what I meant. The difference from pre-13 behavior is that\nwe now call generate_useful_gather_paths, which also considers adding an\nextra sort (unlike plain generate_gather_paths).\n\nSo now we can end up with \"Gather Merge -> Sort\" paths that would not be\nconsidered before.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Apr 2021 22:18:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On 4/15/21 7:35 PM, James Coleman wrote:\n> On Thu, Apr 15, 2021 at 5:33 AM Luc Vlaming <luc@swarm64.com> wrote:\n>>\n>> On 15-04-2021 04:01, James Coleman wrote:\n>>> On Wed, Apr 14, 2021 at 5:42 PM James Coleman <jtc331@gmail.com> wrote:\n>>>>\n>>>> On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>> On 4/12/21 2:24 PM, Luc Vlaming wrote:\n>>>>>> Hi,\n>>>>>>\n>>>>>> When trying to run on master (but afaik also PG-13) TPC-DS queries 94,\n>>>>>> 95 and 96 on a SF10 I get the error \"could not find pathkey item to sort\".\n>>>>>> When I disable enable_gathermerge the problem goes away and then the\n>>>>>> plan for query 94 looks like below. I tried figuring out what the\n>>>>>> problem is but to be honest I would need some pointers as the code that\n>>>>>> tries to matching equivalence members in prepare_sort_from_pathkeys is\n>>>>>> something i'm really not familiar with.\n>>>>>>\n>>>>>\n>>>>> Could be related to incremental sort, which allowed some gather merge\n>>>>> paths that were impossible before. 
We had a couple issues related to\n>>>>> that fixed in November, IIRC.\n>>>>>\n>>>>>> To reproduce you can either ingest and test using the toolkit I used too\n>>>>>> (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n>>>>>> alternatively just use the schema (see\n>>>>>> https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native)\n>>>>>>\n>>>>>\n>>>>> Thanks, I'll see if I can reproduce that with your schema.\n>>>>>\n>>>>>\n>>>>> regards\n>>>>>\n>>>>> --\n>>>>> Tomas Vondra\n>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>> The Enterprise PostgreSQL Company\n>>>>\n>>>> The query in question is:\n>>>>\n>>>> select count(*)\n>>>> from store_sales\n>>>> ,household_demographics\n>>>> ,time_dim, store\n>>>> where ss_sold_time_sk = time_dim.t_time_sk\n>>>> and ss_hdemo_sk = household_demographics.hd_demo_sk\n>>>> and ss_store_sk = s_store_sk\n>>>> and time_dim.t_hour = 15\n>>>> and time_dim.t_minute >= 30\n>>>> and household_demographics.hd_dep_count = 7\n>>>> and store.s_store_name = 'ese'\n>>>> order by count(*)\n>>>> limit 100;\n>>>>\n>>>> From debugging output it looks like this is the plan being chosen\n>>>> (cheapest total path):\n>>>> Gather(store_sales household_demographics time_dim) rows=60626\n>>>> cost=3145.73..699910.15\n>>>> HashJoin(store_sales household_demographics time_dim)\n>>>> rows=25261 cost=2145.73..692847.55\n>>>> clauses: store_sales.ss_hdemo_sk =\n>>>> household_demographics.hd_demo_sk\n>>>> HashJoin(store_sales time_dim) rows=252609\n>>>> cost=1989.73..692028.08\n>>>> clauses: store_sales.ss_sold_time_sk =\n>>>> time_dim.t_time_sk\n>>>> SeqScan(store_sales) rows=11998564\n>>>> cost=0.00..658540.64\n>>>> SeqScan(time_dim) rows=1070\n>>>> cost=0.00..1976.35\n>>>> SeqScan(household_demographics) rows=720\n>>>> cost=0.00..147.00\n>>>>\n>>>> prepare_sort_from_pathkeys fails to find a pathkey because\n>>>> tlist_member_ignore_relabel returns null -- which seemed weird because\n>>>> the 
sortexpr is an Aggref (in a single member equivalence class) and\n>>>> the tlist contains a single member that's also an Aggref. It turns out\n>>>> that the only difference between the two Aggrefs is that the tlist\n>>>> entry has \"aggsplit = AGGSPLIT_INITIAL_SERIAL\" while the sortexpr has\n>>>> aggsplit = AGGSPLIT_SIMPLE.\n>>>>\n>>>> That's as far as I've gotten so far, but I figured I'd get that info\n>>>> out to see if it means anything obvious to anyone else.\n>>>\n>>> This really goes back to [1] where we fixed a similar issue by making\n>>> find_em_expr_usable_for_sorting_rel parallel the rules in\n>>> prepare_sort_from_pathkeys.\n>>>\n>>> Most of those conditions got copied, and the case we were trying to\n>>> handle is the fact that prepare_sort_from_pathkeys can generate a\n>>> target list entry under those conditions if one doesn't exist. However\n>>> there's a further restriction there I don't remember looking at: it\n>>> uses pull_var_clause and tlist_member_ignore_relabel to ensure that\n>>> all of the vars that feed into the sort expression are found in the\n>>> target list. As I understand it, that is: it will build a target list\n>>> entry for something like \"md5(column)\" if \"column\" (and that was one\n>>> of our test cases for the previous fix) is in the target list already.\n>>>\n>>> But there's an additional detail here: the call to pull_var_clause\n>>> requests aggregates, window functions, and placeholders be treated as\n>>> vars. That means for our Aggref case it would require that the two\n>>> Aggrefs be fully equal, so the differing aggsplit member would cause a\n>>> target list entry not to be built, hence our error here.\n>>>\n>>> I've attached a quick and dirty patch that encodes that final rule\n>>> from prepare_sort_from_pathkeys into\n>>> find_em_expr_usable_for_sorting_rel. 
I can't help but think that\n>>> there's a cleaner way to do with this with less code duplication, but\n>>> hindering that is that prepare_sort_from_pathkeys is working with a\n>>> TargetList while find_em_expr_usable_for_sorting_rel is working with a\n>>> list of expressions.\n>>>\n\nYeah, I think it'll be difficult to reuse code from later planner stages\nexactly because it operates on a different representation. So something\nlike your patch is likely necessary.\n\nAs for the patch, I have a couple of comments:\n\n1) expr_list_member_ignore_relabel would deserve a better comment, and\nmaybe a reference to tlist_member_ignore_relabel which it copies\n\n2) I suppose the comment before \"if (ec->ec_has_volatile)\" needs\nupdating, because now it says we're done as long as the expression is\nnot volatile (but we're doing more stuff).\n\n3) Shouldn't find_em_expr_usable_for_sorting_rel now mostly mimic what\nprepare_sort_from_pathkeys does? That is, try to match the entries\ndirectly first, before the new pull_var_clause() business?\n\n4) I've simplified the foreach() loop a bit. prepare_sort_from_pathkeys\ndoes it differently, but that's because there are multiple foreach\nlevels, I think. Yes, we'll not free the list, but I think that's what\nmost other places in the planner do ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 16 Apr 2021 03:27:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 6:27 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 4/15/21 7:35 PM, James Coleman wrote:\n> > On Thu, Apr 15, 2021 at 5:33 AM Luc Vlaming <luc@swarm64.com> wrote:\n> >>\n> >> On 15-04-2021 04:01, James Coleman wrote:\n> >>> On Wed, Apr 14, 2021 at 5:42 PM James Coleman <jtc331@gmail.com>\n> wrote:\n> >>>>\n> >>>> On Mon, Apr 12, 2021 at 8:37 AM Tomas Vondra\n> >>>> <tomas.vondra@enterprisedb.com> wrote:\n> >>>>>\n> >>>>> On 4/12/21 2:24 PM, Luc Vlaming wrote:\n> >>>>>> Hi,\n> >>>>>>\n> >>>>>> When trying to run on master (but afaik also PG-13) TPC-DS queries\n> 94,\n> >>>>>> 95 and 96 on a SF10 I get the error \"could not find pathkey item to\n> sort\".\n> >>>>>> When I disable enable_gathermerge the problem goes away and then the\n> >>>>>> plan for query 94 looks like below. I tried figuring out what the\n> >>>>>> problem is but to be honest I would need some pointers as the code\n> that\n> >>>>>> tries to matching equivalence members in prepare_sort_from_pathkeys\n> is\n> >>>>>> something i'm really not familiar with.\n> >>>>>>\n> >>>>>\n> >>>>> Could be related to incremental sort, which allowed some gather merge\n> >>>>> paths that were impossible before. 
We had a couple issues related to\n> >>>>> that fixed in November, IIRC.\n> >>>>>\n> >>>>>> To reproduce you can either ingest and test using the toolkit I\n> used too\n> >>>>>> (see https://github.com/swarm64/s64da-benchmark-toolkit/), or\n> >>>>>> alternatively just use the schema (see\n> >>>>>>\n> https://github.com/swarm64/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds/schemas/psql_native\n> )\n> >>>>>>\n> >>>>>\n> >>>>> Thanks, I'll see if I can reproduce that with your schema.\n> >>>>>\n> >>>>>\n> >>>>> regards\n> >>>>>\n> >>>>> --\n> >>>>> Tomas Vondra\n> >>>>> EnterpriseDB: http://www.enterprisedb.com\n> >>>>> The Enterprise PostgreSQL Company\n> >>>>\n> >>>> The query in question is:\n> >>>>\n> >>>> select count(*)\n> >>>> from store_sales\n> >>>> ,household_demographics\n> >>>> ,time_dim, store\n> >>>> where ss_sold_time_sk = time_dim.t_time_sk\n> >>>> and ss_hdemo_sk = household_demographics.hd_demo_sk\n> >>>> and ss_store_sk = s_store_sk\n> >>>> and time_dim.t_hour = 15\n> >>>> and time_dim.t_minute >= 30\n> >>>> and household_demographics.hd_dep_count = 7\n> >>>> and store.s_store_name = 'ese'\n> >>>> order by count(*)\n> >>>> limit 100;\n> >>>>\n> >>>> From debugging output it looks like this is the plan being chosen\n> >>>> (cheapest total path):\n> >>>> Gather(store_sales household_demographics time_dim)\n> rows=60626\n> >>>> cost=3145.73..699910.15\n> >>>> HashJoin(store_sales household_demographics time_dim)\n> >>>> rows=25261 cost=2145.73..692847.55\n> >>>> clauses: store_sales.ss_hdemo_sk =\n> >>>> household_demographics.hd_demo_sk\n> >>>> HashJoin(store_sales time_dim) rows=252609\n> >>>> cost=1989.73..692028.08\n> >>>> clauses: store_sales.ss_sold_time_sk =\n> >>>> time_dim.t_time_sk\n> >>>> SeqScan(store_sales) rows=11998564\n> >>>> cost=0.00..658540.64\n> >>>> SeqScan(time_dim) rows=1070\n> >>>> cost=0.00..1976.35\n> >>>> SeqScan(household_demographics) rows=720\n> >>>> cost=0.00..147.00\n> >>>>\n> >>>> prepare_sort_from_pathkeys fails 
to find a pathkey because\n> >>>> tlist_member_ignore_relabel returns null -- which seemed weird because\n> >>>> the sortexpr is an Aggref (in a single member equivalence class) and\n> >>>> the tlist contains a single member that's also an Aggref. It turns out\n> >>>> that the only difference between the two Aggrefs is that the tlist\n> >>>> entry has \"aggsplit = AGGSPLIT_INITIAL_SERIAL\" while the sortexpr has\n> >>>> aggsplit = AGGSPLIT_SIMPLE.\n> >>>>\n> >>>> That's as far as I've gotten so far, but I figured I'd get that info\n> >>>> out to see if it means anything obvious to anyone else.\n> >>>\n> >>> This really goes back to [1] where we fixed a similar issue by making\n> >>> find_em_expr_usable_for_sorting_rel parallel the rules in\n> >>> prepare_sort_from_pathkeys.\n> >>>\n> >>> Most of those conditions got copied, and the case we were trying to\n> >>> handle is the fact that prepare_sort_from_pathkeys can generate a\n> >>> target list entry under those conditions if one doesn't exist. However\n> >>> there's a further restriction there I don't remember looking at: it\n> >>> uses pull_var_clause and tlist_member_ignore_relabel to ensure that\n> >>> all of the vars that feed into the sort expression are found in the\n> >>> target list. As I understand it, that is: it will build a target list\n> >>> entry for something like \"md5(column)\" if \"column\" (and that was one\n> >>> of our test cases for the previous fix) is in the target list already.\n> >>>\n> >>> But there's an additional detail here: the call to pull_var_clause\n> >>> requests aggregates, window functions, and placeholders be treated as\n> >>> vars. 
That means for our Aggref case it would require that the two\n> >>> Aggrefs be fully equal, so the differing aggsplit member would cause a\n> >>> target list entry not to be built, hence our error here.\n> >>>\n> >>> I've attached a quick and dirty patch that encodes that final rule\n> >>> from prepare_sort_from_pathkeys into\n> >>> find_em_expr_usable_for_sorting_rel. I can't help but think that\n> >>> there's a cleaner way to do with this with less code duplication, but\n> >>> hindering that is that prepare_sort_from_pathkeys is working with a\n> >>> TargetList while find_em_expr_usable_for_sorting_rel is working with a\n> >>> list of expressions.\n> >>>\n>\n> Yeah, I think it'll be difficult to reuse code from later planner stages\n> exactly because it operates on different representation. So something\n> like your patch is likely necessary.\n>\n> As for the patch, I have a couple comments:\n>\n> 1) expr_list_member_ignore_relabel would deserve a better comment, and\n> maybe a reference to tlist_member_ignore_relabel which it copies\n>\n> 2) I suppose the comment before \"if (ec->ec_has_volatile)\" needs\n> updating, because now it says we're done as long as the expression is\n> not volatile (but we're doing more stuff).\n>\n> 3) Shouldn't find_em_expr_usable_for_sorting_rel now mostly mimic what\n> prepare_sort_from_pathkeys does? That is, try to match the entries\n> directly first, before the new pull_vars() business?\n>\n> 4) I've simplified the foreach() loop a bit. prepare_sort_from_pathkeys\n> does it differently, but that's because there are multiple foreach\n> levels, I think. 
Yes, we'll not free the list, but I that's what most\n> other places in planner do ...\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi,\n\n if (!expr_list_member_ignore_relabel(lfirst(k), target->exprs))\n- break;\n+ return NULL;\n\nI think it would be better if list_free(exprvars) is called before the\nreturn.\n\nCheers",
"msg_date": "Thu, 15 Apr 2021 19:27:17 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "[ sorry for not getting to this thread till now ]\n\nTomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> 3) Shouldn't find_em_expr_usable_for_sorting_rel now mostly mimic what\n> prepare_sort_from_pathkeys does? That is, try to match the entries\n> directly first, before the new pull_vars() business?\n\nYeah. I concur that the problem here is that\nfind_em_expr_usable_for_sorting_rel isn't fully accounting for what\nprepare_sort_from_pathkeys can and can't do. However, I don't like this\npatch much:\n\n* As written, I think it may just move the pain somewhere else. The point\nof the logic in prepare_sort_from_pathkeys is to handle either full\nexpression matches (e.g. sort by \"A+B\" when \"A+B\" is an expression in\nthe input tlist) or computable expressions (sort by \"A+B\" when A and B\nare individually available). I think you've fixed the second case and\nbroken the first one. Now it's possible that the case never arises,\nand certainly failing to generate an early sort isn't catastrophic anyway.\nBut we ought to get it right.\n\n* If the goal is to match what prepare_sort_from_pathkeys can do, I\nthink that doubling down on the strategy of having a duplicate copy\nis not the path to a maintainable fix.\n\nI think it's time for some refactoring of this code so that we can\nactually share the logic. Accordingly, I propose the attached.\nIt's really not that hard to share, as long as you accept the idea\nthat the list passed to the shared subroutine can be either a list of\nTargetEntries or of bare expressions.\n\nAlso, I don't much care for either the name or API of\nfind_em_expr_usable_for_sorting_rel. The sole current caller only\nreally needs a boolean result, and if it did need more than that\nit'd likely need the whole EquivalenceMember not just the em_expr\n(certainly createplan.c does). So 0002 attached is some bikeshedding\non that. 
I kept that separate because it might be wise to do it only\nin HEAD, just in case somebody out there is calling the function from\nan extension.\n\n(BTW, responding to an upthread question: I think the looping to\nremove multiple levels of RelabelType is probably now redundant,\nbut I didn't remove it. If we want to do that there are more\nplaces to touch than just this code.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 17 Apr 2021 15:39:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "I wrote:\n> I think it's time for some refactoring of this code so that we can\n> actually share the logic. Accordingly, I propose the attached.\n\nAfter sleeping on it, here's an improved version that gets rid of\nan unnecessary assumption about ECs usually not containing both\nparallel-safe and parallel-unsafe members. I'd tried to do this\nyesterday but didn't like the amount of side-effects on createplan.c\n(caused by the direct call sites not being passed the \"root\" pointer).\nHowever, we can avoid refactoring createplan.c APIs by saying that it's\nokay to pass root = NULL to find_computable_ec_member if you're not\nasking it to check parallel safety. And there's not really a need to\nput a parallel-safety check into find_ec_member_matching_expr at all;\nthat task can be left with the one caller that cares.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 18 Apr 2021 13:21:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Sun, Apr 18, 2021 at 1:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I think it's time for some refactoring of this code so that we can\n> > actually share the logic. Accordingly, I propose the attached.\n>\n> After sleeping on it, here's an improved version that gets rid of\n> an unnecessary assumption about ECs usually not containing both\n> parallel-safe and parallel-unsafe members. I'd tried to do this\n> yesterday but didn't like the amount of side-effects on createplan.c\n> (caused by the direct call sites not being passed the \"root\" pointer).\n> However, we can avoid refactoring createplan.c APIs by saying that it's\n> okay to pass root = NULL to find_computable_ec_member if you're not\n> asking it to check parallel safety. And there's not really a need to\n> put a parallel-safety check into find_ec_member_matching_expr at all;\n> that task can be left with the one caller that cares.\n\nI like the refactoring here.\n\nTwo things I wonder:\n1. Should we add tests for the relabel code path?\n2. It'd be nice not to have the IS_SRF_CALL duplicated, but that might\nadd enough complexity that it's not worth it.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:44:49 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Sat, Apr 17, 2021 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ...\n> Also, I don't much care for either the name or API of\n> find_em_expr_usable_for_sorting_rel. The sole current caller only\n> really needs a boolean result, and if it did need more than that\n> it'd likely need the whole EquivalenceMember not just the em_expr\n> (certainly createplan.c does). So 0002 attached is some bikeshedding\n> on that. I kept that separate because it might be wise to do it only\n> in HEAD, just in case somebody out there is calling the function from\n> an extension.\n\nI forgot to comment on this in my previous email, but it seems to me\nthat relation_has_safe_ec_member, while less wordy, isn't quite\ndescriptive enough. Perhaps something like\nrelation_has_sort_safe_ec_member?\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:49:05 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> I forgot to comment on this in my previous email, but it seems to me\n> that relation_has_safe_ec_member, while less wordy, isn't quite\n> descriptive enough. Perhaps something like\n> relation_has_sort_safe_ec_member?\n\nI'm not wedded to that name, certainly, but it seems like neither\nof these is quite getting at the issue. An EC can be sorted on,\nby definition, but there are some things we don't want to sort\non till the final output step. I was trying to think of something\nusing the terminology \"early sort\", but didn't much like\n\"relation_has_early_sortable_ec_member\" or obvious variants of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:37:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "I wrote:\n> I'm not wedded to that name, certainly, but it seems like neither\n> of these is quite getting at the issue. An EC can be sorted on,\n> by definition, but there are some things we don't want to sort\n> on till the final output step. I was trying to think of something\n> using the terminology \"early sort\", but didn't much like\n> \"relation_has_early_sortable_ec_member\" or obvious variants of that.\n\n... or, as long as it's returning a boolean, maybe it could be\n\"relation_can_be_sorted_early\" ?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:42:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> Two things I wonder:\n> 1. Should we add tests for the relabel code path?\n\nAs far as that goes, the Relabel-stripping loops in\nfind_ec_member_matching_expr are already exercised in the core\nregression tests (I didn't bother to discover exactly where, but\na quick coverage test run says that they're hit). The ones in\nexprlist_member_ignore_relabel are not iterated though. On\nreflection, the first loop stripping the input node is visibly\nunreachable by the sole caller, since everything in the exprvars\nlist will be a Var, Aggref, WindowFunc, or PlaceHolderVar. I'm\nless sure about what is possible in the targetlist that we're\nreferencing, but it strikes me that ignoring relabel on that\nside is probably functionally wrong: if we have say \"f(textcol)\"\nas an expression to be sorted on, but what is in the tlist is\ntextcol::varchar or the like, I do not think setrefs.c will\nconsider that an acceptable match. So now that's seeming like\nan actual bug --- although the lack of field reports suggests\nthat it's unreachable, most likely because if we do have\n\"f(textcol)\" as a sort candidate, we'll have made sure to emit\nplain \"textcol\" from the source relation, regardless of whether\nthere might also be a reason to emit textcol::varchar.\n\nAnyway I'm now inclined to remove that behavior from\nfind_computable_ec_member, and adjust comments accordingly.\n\n> 2. It'd be nice not to have the IS_SRF_CALL duplicated, but that might\n> add enough complexity that it's not worth it.\n\nYeah, I'd messed around with variants that put more smarts\ninto the bottom-level functions, and decided that it wasn't\na net improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 18:09:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "I wrote:\n> Anyway I'm now inclined to remove that behavior from\n> find_computable_ec_member, and adjust comments accordingly.\n\nAfter some more testing, that seems like a good thing to do,\nso here's a v4.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 19 Apr 2021 19:10:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 7:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Anyway I'm now inclined to remove that behavior from\n> > find_computable_ec_member, and adjust comments accordingly.\n>\n> After some more testing, that seems like a good thing to do,\n> so here's a v4.\n\nThis all looks good to me.\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 19 Apr 2021 20:56:19 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> +\t/* We ignore binary-compatible relabeling on both ends */\n> +\twhile (expr && IsA(expr, RelabelType))\n> +\t\texpr = ((RelabelType *) expr)->arg;\n\nThere are 10 instances of this exact loop scattered around the codebase.\nIs it worth it turning it into a static inline function?\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n",
"msg_date": "Tue, 20 Apr 2021 11:01:28 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n>> +\t/* We ignore binary-compatible relabeling on both ends */\n>> +\twhile (expr && IsA(expr, RelabelType))\n>> +\t\texpr = ((RelabelType *) expr)->arg;\n>\n> There are 10 instances of this exact loop scattered around the codebase.\n> Is it worth it turning it into a static inline function?\n\nSomething like the attached, maybe?\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen",
"msg_date": "Tue, 20 Apr 2021 12:11:31 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 7:11 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n>\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >\n> >> + /* We ignore binary-compatible relabeling on both ends */\n> >> + while (expr && IsA(expr, RelabelType))\n> >> + expr = ((RelabelType *) expr)->arg;\n> >\n> > There are 10 instances of this exact loop scattered around the codebase.\n> > Is it worth it turning it into a static inline function?\n>\n> Something like the attached, maybe?\n\nI'm not opposed to this, but I think it should go in a separate thread\nsince it's orthogonal to the bugfix there and also will confuse cfbot.\n\nJames\n\n\n",
"msg_date": "Tue, 20 Apr 2021 08:01:47 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>> There are 10 instances of this exact loop scattered around the codebase.\n>> Is it worth it turning it into a static inline function?\n\n> Something like the attached, maybe?\n\nMeh. The trouble with this is that the call sites don't all declare\nthe pointer variable the same way. While the written-out loops can\nlook the same regardless, a function can only accommodate one choice\nwithout messy casts. For my money, the notational savings here is\nsmall enough that the casts really discourage doing anything.\n\nSo if we wanted to do this, I'd think about using a macro:\n\n#define strip_relabeltype(nodeptr) \\\n\twhile (nodeptr && IsA(nodeptr, RelabelType))\n\t\tnodeptr = ((RelabelType *) nodeptr)->arg\n\n...\n\n\tstrip_relabeltype(em_expr);\n\n...\n\nSince the argument would have to be a variable, the usual\nmultiple-reference hazards of using a macro don't seem to apply.\n\n(Probably the macro could do with \"do ... while\" decoration\nto discourage any syntactic oddities, but you get the idea.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Apr 2021 10:42:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Mon, Apr 19, 2021 at 7:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After some more testing, that seems like a good thing to do,\n>> so here's a v4.\n\n> This all looks good to me.\n\nPushed, thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Apr 2021 11:38:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"could not find pathkey item to sort\" for TPC-DS queries 94-96"
}
] |
[
{
"msg_contents": "Hi,\n\nWhilst developing a CSP that potentially sits (directly) above e.g. any \nunion or anything with a dummy tlist we observed some problems as the \nset_customscan_references cannot handle any dummy tlists and will give \ninvalid varno errors. I was wondering how we can fix this, and I was \nwondering what the reason is that there is actually no callback in the \ncsp interface for the set_customscan_references. Can someone maybe \nclarify this for me?\n\nThanks!\n\nRegards,\nLuc\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:31:53 +0200",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": true,
"msg_subject": "interaction between csps with dummy tlists and\n set_customscan_references"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile troubleshooting a failed upgrade from v11 -> v12 I realised I had\nencountered a bug previously reported on the pgsql-bugs mailing list:\n\n#14242 Role with a setconfig \"role\" setting to a nonexistent role causes\npg_upgrade to fail\n\nhttps://www.postgresql.org/message-id/20160711223641.1426.86096%40wrigleys.postgresql.org\n\nTo quote the previous report:\n\n> It is possible to modify the \"role\" setting in setconfig in the\n> pg_db_role_setting table such that it points to a nonexistent role. When\n> this is the case, restoring the output of pg_dumpall will fail due to the\n> missing role.\n\n> Steps to reproduce:\n\n> 1. As superuser, execute \"create role foo with login password 'test'\"\n> 2. As foo, execute \"alter role foo set role = 'foo'\"\n> 3. As superuser, execute \"alter role foo rename to bar\"\n> a. At this point, the setconfig entry in pg_db_role_setting for\n> 'bar' will contain '{role=foo}', which no longer exists\n> 4. Execute pg_upgrade with the recommended steps in\n> https://www.postgresql.org/docs/current/static/pgupgrade.html\n\n> During pg_upgrade (more specifically, during the restore of the output from\n> pg_dumpall), the \"ALTER ROLE \"bar\" SET \"role\" TO 'foo'\" command generated\n> will fail with \"ERROR: role \"foo\" does not exist\".\n\n> This issue was identified by Jordan Lange and Nathan Bossart.\n\nThe steps in the original report reproduce the problem on all currently\nsupported pg versions. I appreciate that the invalid role-specific default \nsettings are ultimately self-inflicted by the user, but as a third-party \nperforming the upgrade this caught me by surprise.\n\nSince it is possible to write a query to identify these cases, would there\nbe appetite for me to submit a patch to add a check for this to \npg_upgrade?\n\nFirst time mailing list user here so many apologies for any missteps I have\nmade in this message.\n\nBest regards,\nCharlie Hornsby\n\n",
"msg_date": "Mon, 12 Apr 2021 13:28:19 +0000",
"msg_from": "Charlie Hornsby <charlie.hornsby@hotmail.co.uk>",
"msg_from_op": true,
"msg_subject": "pg_upgrade check for invalid role-specific default config"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 01:28:19PM +0000, Charlie Hornsby wrote:\n> Hi all,\n> \n> While troubleshooting a failed upgrade from v11 -> v12 I realised I had\n> encountered a bug previously reported on the pgsql-bugs mailing list:\n> \n> #14242 Role with a setconfig \"role\" setting to a nonexistent role causes\n> pg_upgrade to fail\n> \n> https://www.postgresql.org/message-id/20160711223641.1426.86096%40wrigleys.postgresql.org\n\n...\n\n> Since it is possible to write a query to identify these cases, would there\n> be appetite for me to submit a patch to add a check for this to \n> pg_upgrade?\n> \n> First time mailing list user here so many apologies for any missteps I have\n> made in this message.\n\nYes, I think a patch would be good, but the fix might be for pg_dump\ninstead, which pg_upgrade uses.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:16:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check for invalid role-specific default config"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Apr 12, 2021 at 01:28:19PM +0000, Charlie Hornsby wrote:\n>> While troubleshooting a failed upgrade from v11 -> v12 I realised I had\n>> encountered a bug previously reported on the pgsql-bugs mailing list:\n>> #14242 Role with a setconfig \"role\" setting to a nonexistent role causes\n>> pg_upgrade to fail\n>> https://www.postgresql.org/message-id/20160711223641.1426.86096%40wrigleys.postgresql.org\n>> Since it is possible to write a query to identify these cases, would there\n>> be appetite for me to submit a patch to add a check for this to \n>> pg_upgrade?\n\n> Yes, I think a patch would be good, but the fix might be for pg_dump\n> instead, which pg_upgrade uses.\n\nI'm not sure I buy the premise that \"it is possible to write a query\nto identify these cases\". It seems to me that the general problem is\nthat ALTER ROLE/DATABASE SET values might have become incorrect since\nthey were installed and would thus fail when reloaded in dump/restore.\nWe're not going to be able to prevent that in the general case, and\nit's not obvious to me what special case might be worth going after.\n\nI do find it interesting that we now have two reports of somebody\ndoing \"ALTER ROLE SET role = something\". In the older thread,\nI was skeptical that that had any real use-case, so I wonder if\nCharlie has a rationale for having done that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:29:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check for invalid role-specific default config"
},
{
"msg_contents": "I wrote:\n> I'm not sure I buy the premise that \"it is possible to write a query\n> to identify these cases\". It seems to me that the general problem is\n> that ALTER ROLE/DATABASE SET values might have become incorrect since\n> they were installed and would thus fail when reloaded in dump/restore.\n> We're not going to be able to prevent that in the general case, and\n> it's not obvious to me what special case might be worth going after.\n\nActually, after thinking about that a bit more, this is a whole lot\nlike the issues we have with reloading function bodies and function\nSET clauses.  The solution we've adopted for that is to allow dumps\nto turn off validation via the check_function_bodies GUC.  Maybe there\nshould be a GUC to disable validation of ALTER ROLE/DATABASE SET values.\nIf you fat-finger a setting, you might not be able to log in, but you\ncouldn't log in in the old database either.\n\nAnother answer is that maybe the processing of the \"role\" case\nin particular is just broken. Compare the behavior here:\n\nregression=# create role joe;\nCREATE ROLE\nregression=# alter role joe set role = 'notthere';\nERROR:  role \"notthere\" does not exist\nregression=# alter role joe set default_text_search_config to 'notthere';\nNOTICE:  text search configuration \"notthere\" does not exist\nALTER ROLE\n# \\drds\n              List of settings\n Role | Database |              Settings               \n------+------------+-------------------------------------\n joe  |          | default_text_search_config=notthere\n\ndespite the fact that a direct SET fails:\n\nregression=# set default_text_search_config to 'notthere';\nERROR:  invalid value for parameter \"default_text_search_config\": \"notthere\"\n\nIt's intentional that we don't throw a hard error for\ndefault_text_search_config, because that would create\na problematic ordering dependency for pg_dump: the\ndesired config might just not have been reloaded yet.\nMaybe the right answer here is that the processing of\n\"set role\" in particular failed to get that memo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:46:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check for invalid role-specific default config"
},
{
"msg_contents": "I wrote:\n> Another answer is that maybe the processing of the \"role\" case\n> in particular is just broken.\n\nAfter digging around a bit more, I think that that is indeed the\nright answer. Most of the GUC check functions that have\ndatabase-state-dependent behavior are programmed to behave specially\nwhen checking a proposed ALTER USER/DATABASE setting; but check_role\nand check_session_authorization did not get that memo. I also\nnoted that check_temp_buffers would throw an error for no very good\nreason. There don't look to be any other troublesome cases.\nSo I ended up with the attached.\n\nIt feels a bit unsatisfactory to have these various check functions\nknow about this explicitly. However, I'm not sure that there's a\ngood way to centralize it. Only the check function knows whether\nthe check it's making is immutable or dependent on DB state --- and\nin the former case, not throwing an error wouldn't be an improvement.\n\nAnyway, I'm inclined to think that we should not only apply this\nbut back-patch it. Two complaints is enough to suggest we have\nan issue. Plus, I think there is a dump/reload ordering problem\nhere: as things stand, \"alter user joe set role = bob\" would work\nor not work depending on the order the roles are created in\nand/or the order privileges are granted in.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 12 Apr 2021 19:33:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check for invalid role-specific default config"
},
{
"msg_contents": "Tom wrote:\n> I do find it interesting that we now have two reports of somebody\n> doing \"ALTER ROLE SET role = something\". In the older thread,\n> I was skeptical that that had any real use-case, so I wonder if\n> Charlie has a rationale for having done that.\n\nUnfortunately I haven't heard back from the original developer\nwho set up this role configuration, but if I do then I will share\ntheir intentions. In any case the invalid configuration had been\nremoved from every other role except one (certainly by mistake)\nwhich lead to me rediscovering this issue.\n\nI tested the above patch with the invalid data locally and it avoids\nthe restore error that we ran into previously. Also it requires no\nintervention to progress with pg_upgrade unlike my initial idea of\nadding an check, so it is definitely simpler from a user perspective.\n\nThank you for taking a deep look into this and finding a better\nsolution.\n\nBest regards,\nCharlie Hornsby\n\n",
"msg_date": "Tue, 13 Apr 2021 17:28:50 +0000",
"msg_from": "Charlie Hornsby <charlie.hornsby@hotmail.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade check for invalid role-specific default config"
},
{
"msg_contents": "Charlie Hornsby <charlie.hornsby@hotmail.co.uk> writes:\n> I tested the above patch with the invalid data locally and it avoids\n> the restore error that we ran into previously. Also it requires no\n> intervention to progress with pg_upgrade unlike my initial idea of\n> adding an check, so it is definitely simpler from a user perspective.\n\nThanks for testing! I've pushed this, so it will be in the May\nminor releases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 15:11:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check for invalid role-specific default config"
}
] |
[
{
"msg_contents": "Hello community,\n\nI’m Konstantina, a GSoC candidate for the project “Create Procedural\nlanguage extension for the Julia programming language”. The mentors have\nalready looked at my proposal and I’m attaching the finalized document.\nThere is still some time for corrections, in case anyone would like to\noffer an opinion.\n\nBest regards,\n\nKonstantina Skovola",
"msg_date": "Mon, 12 Apr 2021 16:56:20 +0300",
"msg_from": "Konstantina Skovola <konskov@gmail.com>",
"msg_from_op": true,
"msg_subject": "[GSoC 2021 proposal] pl/julia extension"
}
] |
[
{
"msg_contents": "Hi all!\n\nI would like to contribute my time and efforts to the PostgreSQL project\ndevelopment. I have some [hope not too bad] experience in software\ndevelopment primarily for Linux/BSD/Windows platforms with C/C++ though\nalmost no experience in RDBMS internals. I have read the \"Development\nInformation\" section in wiki and checked the official TODO list. It's\nreally great but it's a bit huge and \"all is so tasty\" so I don't know\nwhere to start. Can you please advise me how to proceed? Maybe there are\nactive tasks or bugs or issues that require man power? I would appreciate\nany advice or mentorship. Thank you!\n\n-- \nBest Regards,\nIan Zagorskikh\n\nHi all!I would like to contribute my time and efforts to the PostgreSQL project development. I have some [hope not too bad] experience in software development primarily for Linux/BSD/Windows platforms with C/C++ though almost no experience in RDBMS internals. I have read the \"Development Information\" section in wiki and checked the official TODO list. It's really great but it's a bit huge and \"all is so tasty\" so I don't know where to start. Can you please advise me how to proceed? Maybe there are active tasks or bugs or issues that require man power? I would appreciate any advice or mentorship. Thank you!-- Best Regards,Ian Zagorskikh",
"msg_date": "Mon, 12 Apr 2021 15:21:41 +0000",
"msg_from": "Ian Zagorskikh <ianzag@gmail.com>",
"msg_from_op": true,
"msg_subject": "Contribution to PostgreSQL - please give an advice"
},
{
"msg_contents": "Hi\n\npo 12. 4. 2021 v 17:22 odesílatel Ian Zagorskikh <ianzag@gmail.com> napsal:\n\n> Hi all!\n>\n> I would like to contribute my time and efforts to the PostgreSQL project\n> development. I have some [hope not too bad] experience in software\n> development primarily for Linux/BSD/Windows platforms with C/C++ though\n> almost no experience in RDBMS internals. I have read the \"Development\n> Information\" section in wiki and checked the official TODO list. It's\n> really great but it's a bit huge and \"all is so tasty\" so I don't know\n> where to start. Can you please advise me how to proceed? Maybe there are\n> active tasks or bugs or issues that require man power? I would appreciate\n> any advice or mentorship. Thank you!\n>\n\nIt is great - any hands and eyes are welcome.\n\nI think so the best start now (when you have not own topic) is review of\none or more patches from commitfest application\n\nhttps://commitfest.postgresql.org/33/\n\nRegards\n\nPavel\n\n\n\n> --\n> Best Regards,\n> Ian Zagorskikh\n>\n\nHipo 12. 4. 2021 v 17:22 odesílatel Ian Zagorskikh <ianzag@gmail.com> napsal:Hi all!I would like to contribute my time and efforts to the PostgreSQL project development. I have some [hope not too bad] experience in software development primarily for Linux/BSD/Windows platforms with C/C++ though almost no experience in RDBMS internals. I have read the \"Development Information\" section in wiki and checked the official TODO list. It's really great but it's a bit huge and \"all is so tasty\" so I don't know where to start. Can you please advise me how to proceed? Maybe there are active tasks or bugs or issues that require man power? I would appreciate any advice or mentorship. Thank you!It is great - any hands and eyes are welcome.I think so the best start now (when you have not own topic) is review of one or more patches from commitfest applicationhttps://commitfest.postgresql.org/33/RegardsPavel-- Best Regards,Ian Zagorskikh",
"msg_date": "Mon, 12 Apr 2021 18:11:22 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to PostgreSQL - please give an advice"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 03:21:41PM +0000, Ian Zagorskikh wrote:\n> I would like to contribute my time and efforts to the PostgreSQL project\n> development. I have some [hope not too bad] experience in software\n> development primarily for Linux/BSD/Windows platforms with C/C++ though\n> almost no experience in RDBMS internals. I have read the \"Development\n> Information\" section in wiki and checked the official TODO list. It's\n> really great but it's a bit huge and \"all is so tasty\" so I don't know\n> where to start. Can you please advise me how to proceed? Maybe there are\n> active tasks or bugs or issues that require man power? I would appreciate\n> any advice or mentorship. Thank you!\n\nI think the best way is to start following this mailing list.\nSince it's very \"busy\", it may be easier to follow the web archives.\nhttps://www.postgresql.org/list/pgsql-hackers/2021-04/\n\nWhen someone reports a problem, you can try to reproduce it to see if they've\nprovided enough information to confirm the issue, or test any proposed patch.\n\nYou can see the patches being proposed for future release:\nhttps://commitfest.postgresql.org/\n\nHowever, right now, we've just passed the \"feature freeze\" for v14, so new\ndevelopment is on hold for awhile. What's most useful is probably testing the\nchanges that have been committed. You can check that everything works as\ndescribed, that the implemented behavior doesn't have any rough edges, that the\nfeatures work together well, and work under your own use-cases/workloads.\n\nYou can see a list of commits for v14 like so:\n> git log --cherry-pick origin/REL_13_STABLE...origin/master\n(Any commits that show up twice are also in v13, so aren't actually \"new in\nv14\", but the patches differ so GIT couldn't figure that out)\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 12 Apr 2021 11:38:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to PostgreSQL - please give an advice"
},
{
"msg_contents": "All,\n\nGuys, thank you all for your advice! As a starting point I definitely\nshould take a look at the current Commitfest and try to help in review as\nbest I can. Thanks!\n\nRegards,\n\n\n\nпн, 12 апр. 2021 г. в 16:38, Justin Pryzby <pryzby@telsasoft.com>:\n\n> On Mon, Apr 12, 2021 at 03:21:41PM +0000, Ian Zagorskikh wrote:\n> > I would like to contribute my time and efforts to the PostgreSQL project\n> > development. I have some [hope not too bad] experience in software\n> > development primarily for Linux/BSD/Windows platforms with C/C++ though\n> > almost no experience in RDBMS internals. I have read the \"Development\n> > Information\" section in wiki and checked the official TODO list. It's\n> > really great but it's a bit huge and \"all is so tasty\" so I don't know\n> > where to start. Can you please advise me how to proceed? Maybe there are\n> > active tasks or bugs or issues that require man power? I would appreciate\n> > any advice or mentorship. Thank you!\n>\n> I think the best way is to start following this mailing list.\n> Since it's very \"busy\", it may be easier to follow the web archives.\n> https://www.postgresql.org/list/pgsql-hackers/2021-04/\n>\n> When someone reports a problem, you can try to reproduce it to see if\n> they've\n> provided enough information to confirm the issue, or test any proposed\n> patch.\n>\n> You can see the patches being proposed for future release:\n> https://commitfest.postgresql.org/\n>\n> However, right now, we've just passed the \"feature freeze\" for v14, so new\n> development is on hold for awhile.  What's most useful is probably testing\n> the\n> changes that have been committed.  You can check that everything works as\n> described, that the implemented behavior doesn't have any rough edges,\n> that the\n> features work together well, and work under your own use-cases/workloads.\n>\n> You can see a list of commits for v14 like so:\n> > git log --cherry-pick origin/REL_13_STABLE...origin/master\n> (Any commits that show up twice are also in v13, so aren't actually \"new in\n> v14\", but the patches differ so GIT couldn't figure that out)\n>\n> --\n> Justin\n>\n\n\n-- \nBest Regards,\nIan Zagorskikh\n\nAll,Guys, thank you all for your advice! As a starting point I definitely should take a look at the current Commitfest and try to help in review as best I can. Thanks!Regards,пн, 12 апр. 2021 г. в 16:38, Justin Pryzby <pryzby@telsasoft.com>:On Mon, Apr 12, 2021 at 03:21:41PM +0000, Ian Zagorskikh wrote:\n> I would like to contribute my time and efforts to the PostgreSQL project\n> development. I have some [hope not too bad] experience in software\n> development primarily for Linux/BSD/Windows platforms with C/C++ though\n> almost no experience in RDBMS internals. I have read the \"Development\n> Information\" section in wiki and checked the official TODO list. It's\n> really great but it's a bit huge and \"all is so tasty\" so I don't know\n> where to start. Can you please advise me how to proceed? Maybe there are\n> active tasks or bugs or issues that require man power? I would appreciate\n> any advice or mentorship. Thank you!\n\nI think the best way is to start following this mailing list.\nSince it's very \"busy\", it may be easier to follow the web archives.\nhttps://www.postgresql.org/list/pgsql-hackers/2021-04/\n\nWhen someone reports a problem, you can try to reproduce it to see if they've\nprovided enough information to confirm the issue, or test any proposed patch.\n\nYou can see the patches being proposed for future release:\nhttps://commitfest.postgresql.org/\n\nHowever, right now, we've just passed the \"feature freeze\" for v14, so new\ndevelopment is on hold for awhile.  What's most useful is probably testing the\nchanges that have been committed.  You can check that everything works as\ndescribed, that the implemented behavior doesn't have any rough edges, that the\nfeatures work together well, and work under your own use-cases/workloads.\n\nYou can see a list of commits for v14 like so:\n> git log --cherry-pick origin/REL_13_STABLE...origin/master\n(Any commits that show up twice are also in v13, so aren't actually \"new in\nv14\", but the patches differ so GIT couldn't figure that out)\n\n-- \nJustin\n-- Best Regards,Ian Zagorskikh",
"msg_date": "Tue, 13 Apr 2021 06:35:51 +0000",
"msg_from": "Ian Zagorskikh <ianzag@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Contribution to PostgreSQL - please give an advice"
}
] |
[
{
"msg_contents": "HI hackers,\r\n I found it could cause a crash when executing sql statement: `CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10)); ` in postgres 13.2 release.\r\n\r\n The crash happens at view.c:89 and I did some analysis:\r\n\r\n```\r\n\r\nColumnDef *def = makeColumnDef(tle->resname,\r\n exprType((Node *) tle->expr),\r\n exprTypmod((Node *) tle->expr),\r\n exprCollation((Node *) tle->expr));\r\n\r\n\r\n\r\n/*\r\n * It's possible that the column is of a collatable type but the\r\n * collation could not be resolved, so double-check.\r\n */\r\n\r\n// Here is the analysis:\r\n\r\n//example : ('4' COLLATE \"C\")::INT\r\n\r\n//exprCollation((Node *) tle->expr) is the oid of collate \"COLLATE 'C'\" so def->collOid is valid\r\n//exprType((Node *) tle->expr)) is 23 which is the oid of type int4.\r\n//We know that int4 is not collatable by calling type_is_collatable()\r\n\r\nif (type_is_collatable(exprType((Node *) tle->expr)))\r\n{\r\n if (!OidIsValid(def->collOid))\r\n ereport(ERROR,\r\n (errcode(ERRCODE_INDETERMINATE_COLLATION),\r\n errmsg(\"could not determine which collation to use for view column \\\"%s\\\"\",\r\n def->colname),\r\n errhint(\"Use the COLLATE clause to set the collation explicitly.\")));\r\n}\r\nelse\r\n\r\n // So we are here! int is not collatable and def->collOid is valid.\r\n Assert(!OidIsValid(def->collOid));\r\n\r\n```\r\n\r\nI am not sure whether to fix this bug in function DefineVirtualRelation or to fix this bug in parse tree and analyze procedure, so maybe we can discuss.\r\n\r\n\r\n\r\n\r\nBest Regard!\r\nYulin PEI\r\n",
"msg_date": "Mon, 12 Apr 2021 15:39:38 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4'\n COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "Yulin PEI <ypeiae@connect.ust.hk> writes:\n> I found it could cause a crash when executing sql statement: `CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10)); ` in postgres 13.2 release.\n\nNice catch. I don't think the code in DefineVirtualRelation is wrong:\nexprCollation shouldn't report any collation for an expression of a\nnon-collatable type. Rather the problem is with an old kluge in\ncoerce_type(), which will push a type coercion underneath a CollateExpr\n... without any mind for the possibility that the coercion result isn't\ncollatable. So the right fix is more or less the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 12 Apr 2021 12:59:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT\n ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "After reading the code and the patch, I think the patch is good. If the type is non-collatable, we do not add a CollateExpr node as a 'parent' node to the coerced node.\r\n\r\n________________________________\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\nSent: April 13, 2021 0:59\r\nTo: Yulin PEI <ypeiae@connect.ust.hk>\r\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));\r\n\r\nYulin PEI <ypeiae@connect.ust.hk> writes:\r\n> I found it could cause a crash when executing sql statement: `CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10)); ` in postgres 13.2 release.\r\n\r\nNice catch. I don't think the code in DefineVirtualRelation is wrong:\r\nexprCollation shouldn't report any collation for an expression of a\r\nnon-collatable type. Rather the problem is with an old kluge in\r\ncoerce_type(), which will push a type coercion underneath a CollateExpr\r\n... without any mind for the possibility that the coercion result isn't\r\ncollatable. So the right fix is more or less the attached.\r\n\r\n regards, tom lane\r\n",
"msg_date": "Tue, 13 Apr 2021 08:27:20 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "I think it is better to add this test case to regress.\r\n________________________________\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\nSent: April 13, 2021 0:59\r\nTo: Yulin PEI <ypeiae@connect.ust.hk>\r\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));\r\n\r\nYulin PEI <ypeiae@connect.ust.hk> writes:\r\n> I found it could cause a crash when executing sql statement: `CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10)); ` in postgres 13.2 release.\r\n\r\nNice catch. I don't think the code in DefineVirtualRelation is wrong:\r\nexprCollation shouldn't report any collation for an expression of a\r\nnon-collatable type. Rather the problem is with an old kluge in\r\ncoerce_type(), which will push a type coercion underneath a CollateExpr\r\n... without any mind for the possibility that the coercion result isn't\r\ncollatable. So the right fix is more or less the attached.\r\n\r\n regards, tom lane",
"msg_date": "Tue, 13 Apr 2021 09:27:08 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "After several tests, I found that this patch do not fix the bug well.\r\n\r\n I think we should use the same logic to treat parent CollateExpr and child CollateExpr. In your patch, if the parent node is CollateExpr and the target type is non-collatable, we coerce CollateExpr->arg. If the child node is CollateExpr and the target type is non-collatable, we just skip.\r\n Some types can be casted to another type even if type_is_collatable returns false. Like bytea to int (It depends on the content of the string). If we simply skip, bytea will never be casted to int even if the content is all digits.\r\n\r\nSo the attachment is my patch and it works well as far as I tested.\r\n\r\n\r\n\r\n________________________________\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\nSent: April 13, 2021 0:59\r\nTo: Yulin PEI <ypeiae@connect.ust.hk>\r\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));\r\n\r\nYulin PEI <ypeiae@connect.ust.hk> writes:\r\n> I found it could cause a crash when executing sql statement: `CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10)); ` in postgres 13.2 release.\r\n\r\nNice catch. I don't think the code in DefineVirtualRelation is wrong:\r\nexprCollation shouldn't report any collation for an expression of a\r\nnon-collatable type. Rather the problem is with an old kluge in\r\ncoerce_type(), which will push a type coercion underneath a CollateExpr\r\n... without any mind for the possibility that the coercion result isn't\r\ncollatable. So the right fix is more or less the attached.\r\n\r\n regards, tom lane",
"msg_date": "Sun, 18 Apr 2021 16:31:07 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "Yulin PEI <ypeiae@connect.ust.hk> writes:\n> After several tests, I found that this patch do not fix the bug well.\n\nWhat do you think is wrong with it?\n\n> So the attachment is my patch and it works well as far as I tested.\n\nThis seems equivalent to the already-committed patch [1] except that\nit wastes a makeNode call in the coerce-to-uncollatable-type case.\n\n\t\t\tregards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c402b02b9fb53aee2a26876de90a8f95f9a9be92\n\n\n",
"msg_date": "Sun, 18 Apr 2021 13:46:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "Consider the SQL statement 'SELECT (('1' COLLATE \"C\") ||(B'1'));' . Intuitively, the result will be '11' and the result is '11' in pg 13.2 release as well.\r\n\r\nThe function stack is make_fn_arguments -> coerce_type, which means that the param \"Node *node\" of function coerce_type could be a CollateExpr Node.\r\nLet's look at your patch:\r\n\r\n```\r\n// node is ('1' COLLATE \"C\")\r\n// targetType is varbit and it is non-collatable\r\nif (IsA(node, CollateExpr) && type_is_collatable(targetTypeId))\r\n{\r\n\r\n// we will not reach here.\r\n\r\nCollateExpr *coll = (CollateExpr *) node;\r\nCollateExpr *newcoll = makeNode(CollateExpr);\r\n\r\n....\r\n\r\n// An error will be generated. \"failed to find conversion function\"\r\n\r\n}\r\n\r\n```\r\n\r\nSo I suggest:\r\n\r\n```\r\n// node is ('1' COLLATE \"C\")\r\n\r\nif (IsA(node, CollateExpr))\r\n {\r\n CollateExpr *coll = (CollateExpr *) node;\r\n CollateExpr *newcoll = makeNode(CollateExpr);\r\n\r\n\r\n //targetType is varbit and it is non-collatable\r\n\r\n if (!type_is_collatable(targetTypeId)) {\r\n\r\n // try to convert '1'(string) to varbit\r\n\r\n // We do not make a new CollateExpr here, but don't forget to coerce coll->arg.\r\n\r\n return coerce_type(pstate, (Node *) coll->arg,\r\n inputTypeId, targetTypeId, targetTypeMod,\r\n ccontext, cformat, location);\r\n }\r\n ...\r\n }\r\n\r\n```\r\n\r\n\r\n\r\n\r\n\r\n________________________________\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\nSent: April 19, 2021 1:46\r\nTo: Yulin PEI <ypeiae@connect.ust.hk>\r\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));\r\n\r\nYulin PEI <ypeiae@connect.ust.hk> writes:\r\n> After several tests, I found that this patch do not fix the bug well.\r\n\r\nWhat do you think is wrong with it?\r\n\r\n> So the attachment is my patch and it works well as far as I tested.\r\n\r\nThis seems equivalent to the already-committed patch [1] except that\r\nit wastes a makeNode call in the coerce-to-uncollatable-type case.\r\n\r\n regards, tom lane\r\n\r\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c402b02b9fb53aee2a26876de90a8f95f9a9be92\r\n",
"msg_date": "Mon, 19 Apr 2021 15:19:29 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "Re: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
},
{
"msg_contents": "Yulin PEI <ypeiae@connect.ust.hk> writes:\n> Let's look at your patch:\n\n> ```\n> // node is ('1' COLLATE \"C\")\n> // targetType is varbit and it is non-collatable\n> if (IsA(node, CollateExpr) && type_is_collatable(targetTypeId))\n> {\n\n> // we will not reach here.\n\nThat's not the committed patch, though. I realized after posting\nit that it didn't maintain the same behavior in coerce_type as\ncoerce_to_target_type. But the actually-committed fix does, and\nas I said, what you're suggesting seems equivalent though a bit\nmessier.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 12:47:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Core dump happens when execute sql CREATE VIEW v1(c1) AS (SELECT ('4' COLLATE \"C\")::INT FROM generate_series(1, 10));"
}
] |
[
{
"msg_contents": "Hello Sir/Madam,\nI'm Nandni Mehla, a sophomore currently pursuing B.Tech in IT from Indira\nGandhi Delhi Technical University for Women, Delhi. I've recently started\nworking on open source and I think I will be a positive addition to\nyour organization for working on projects using C and SQL, as I have\nexperience in these, and I am willing to learn more from you.\nI am attaching my proposal in this email for your reference, please guide\nme through this.\nRegards.\n\nhttps://docs.google.com/document/d/1H84WmzZbMERPrjsnXbvoQ7W2AaKsM8eJU02SNw7vQBk/edit?usp=sharing\n",
"msg_date": "Mon, 12 Apr 2021 23:26:48 +0530",
"msg_from": "Nandni Mehla <nandnimehlawat16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal for working on open source with PostgreSQL"
},
{
"msg_contents": "On Mon, 2021-04-12 at 23:26 +0530, Nandni Mehla wrote:\n> I'm Nandni Mehla, a sophomore currently pursuing B.Tech in IT from Indira Gandhi\n> Delhi Technical University for Women, Delhi. I've recently started working on\n> open source and I think I will be a positive addition to your organization for\n> working on projects using C and SQL, as I have experience in these, and I am\n> willing to learn more from you.\n> I am attaching my proposal in this email for your reference, please guide me through this.\n> Regards.\n> \n> https://docs.google.com/document/d/1H84WmzZbMERPrjsnXbvoQ7W2AaKsM8eJU02SNw7vQBk/edit?usp=sharing\n\nThanks for your willingness to help with PostgreSQL!\n\nI couldn't see any detail information about the project in your proposal, except\nthat the project is called \"plsample\". Is there more information somewhere?\n\nIf it is a procedural language as the name suggests, you probably don't have\nto modify PostgreSQL core code to make it work.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 12 Apr 2021 21:08:43 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for working on open source with PostgreSQL"
},
{
"msg_contents": "On 2021-Apr-12, Laurenz Albe wrote:\n\n> I couldn't see any detail information about the project in your proposal, except\n> that the project is called \"plsample\". Is there more information somewhere?\n> \n> If it is a procedural language as the name suggests, you probably don't have\n> to modify PostgreSQL core code to make it work.\n\nplsample is in src/test/modules/plsample. Possible improvements are\nhandling of DO blocks, support for trigger functions, routine\nvalidation.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Mon, 12 Apr 2021 15:36:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for working on open source with PostgreSQL"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nThis thread continues discussion of allowing something to non-superuser, AFAIK previous was [0].\n\nCurrently only superuser is allowed to create LEAKPROOF functions because leakproof functions can see tuples which have not yet been filtered out by security barrier views or row level security policies.\n\nBut managed cloud services typically do not provide superuser roles. I'm thinking about allowing the database owner or someone with BYPASSRLS flag to create these functions. Or, perhaps, pg_read_all_data.\n\nAnd I'm trying to figure out if there are any security implications. Consider a user who already has access to all user data in a DB and the ability to create LEAKPROOF functions. Can they gain a superuser role or access something else that is available only to a superuser?\nIs it possible to relax requirements for the creator of LEAKPROOF functions in upstream Postgres?\n\nI'll appreciate any comments. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/CACqFVBbx6PDq%2B%3DvHM0n78kHzn8tvOM-kGO_2q_q0zNAMT%2BTzdA%40mail.gmail.com\n\n",
"msg_date": "Mon, 12 Apr 2021 23:31:30 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> Currently only superuser is allowed to create LEAKPROOF functions because leakproof functions can see tuples which have not yet been filtered out by security barrier views or row level security policies.\n\nYeah.\n\n> But managed cloud services typically do not provide superuser roles.\n\nThis is not a good argument for relaxing superuser requirements.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:37:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "\nOn 4/12/21 10:37 PM, Tom Lane wrote:\n> Andrey Borodin <x4mmm@yandex-team.ru> writes:\n>> Currently only superuser is allowed to create LEAKPROOF functions\n>> because leakproof functions can see tuples which have not yet been\n>> filtered out by security barrier views or row level security\n>> policies.\n> \n> Yeah.\n> \n>> But managed cloud services typically do not provide superuser\n>> roles.\n> \n> This is not a good argument for relaxing superuser requirements.\n> \n\nI guess for the cloud services it's not an issue - they're mostly\nconcerned about manageability and restricting access to the OS. It's\nunfortunate that we tie the this capability to being superuser, so maybe\nthe right solution would be to introduce a separate role with this\nprivilege?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Apr 2021 22:42:03 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Thanks for such a quick response, Tom!\n\n> On 12 Apr 2021, at 23:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> But managed cloud services typically do not provide superuser roles.\n> \n> This is not a good argument for relaxing superuser requirements.\nOk, let's put aside the question of relaxing requirements in upstream.\n\nDo I risk having some extra superusers in my installation if I allow everyone to create LEAKPROOF functions?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 12 Apr 2021 23:51:02 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 16:37:01 -0400, Tom Lane wrote:\n> Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> > Currently only superuser is allowed to create LEAKPROOF functions\n> > because leakproof functions can see tuples which have not yet been\n> > filtered out by security barrier views or row level security\n> > policies.\n>\n> Yeah.\n>\n> > But managed cloud services typically do not provide superuser roles.\n>\n> This is not a good argument for relaxing superuser requirements.\n\nIDK. I may have been adjacent to people operating database-as-a-service\nfor too long, but ISTM there's decent reasons for (and also against) not\nproviding full superuser access. Even outside of managed services it\nseems like a decent idea to split the \"can execute native code\" role\nfrom the \"administers an application\" role. That reduces the impact a\nbug in the application can incur.\n\nThere's certain things that are pretty intrinsically \"can execute native\ncode\", like defining new 'C' functions, arbitrary ALTER SYSTEM,\narbitrary file reads/writes, etc. Splitting them off from superuser is a\nfools errand. But it's not at all clear why adding LEAKPROOF to\nfunctions falls into that category?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 13:51:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 22:42:03 +0200, Tomas Vondra wrote:\n> It's unfortunate that we tie the this capability to being superuser,\n> so maybe the right solution would be to introduce a separate role with\n> this privilege?\n\nPerhaps DB owner + BYPASSRLS would be enough?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 13:54:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Thanks, Tomas!\n\n> On 12 Apr 2021, at 23:42, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> I guess for the cloud services it's not an issue - they're mostly\n> concerned about manageability and restricting access to the OS.\nIn fact, we would happily give a client access to the OS too. It's a client's VM after all, and all the software is open source. But it opens a way to attack the control plane, which in turn opens a way for clients to attack each other. And we really do not want that.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 12 Apr 2021 23:59:53 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 23:51:02 +0300, Andrey Borodin wrote:\n> Do I risk having some extra superusers in my installation if I allow\n> everyone to create LEAKPROOF functions?\n\nI think that depends on what you define \"superuser\" to exactly\nbe. Defining it as \"has a path to executing arbitrary native code\", I\ndon't think, if implemented sensibly, allowing to set LEAKPROOF on new\nfunctions would equate superuser permissions. But you soon after might\nhit further limitations where lifting them would have such a risk,\ne.g. defining new types with in/out functions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:01:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "\n\n> On 13 Apr 2021, at 00:01, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2021-04-12 23:51:02 +0300, Andrey Borodin wrote:\n>> Do I risk having some extra superusers in my installation if I allow\n>> everyone to create LEAKPROOF functions?\n> \n> I think that depends on what you define \"superuser\" to exactly\n> be. Defining it as \"has a path to executing arbitrary native code\", I\n> don't think, if implemented sensibly, allowing to set LEAKPROOF on new\n> functions would equate superuser permissions.\nThanks!\n\n\n> But you soon after might\n> hit further limitations where lifting them would have such a risk,\n> e.g. defining new types with in/out functions.\n\nI think real extensibility of a managed DB service is a very distant challenge.\nCurrently we just allow-list extensions.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 13 Apr 2021 00:10:35 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-12 23:51:02 +0300, Andrey Borodin wrote:\n>> Do I risk having some extra superusers in my installation if I allow\n>> everyone to create LEAKPROOF functions?\n\n> I think that depends on what you define \"superuser\" to exactly\n> be. Defining it as \"has a path to executing arbitrary native code\", I\n> don't think, if implemented sensibly, allowing to set LEAKPROOF on new\n> functions would equate superuser permissions. But you soon after might\n> hit further limitations where lifting them would have such a risk,\n> e.g. defining new types with in/out functions.\n\nI think the issue here is more that superuser = \"able to break the\nsecurity guarantees of the database\". I doubt that falsely labeling\na function LEAKPROOF can get you more than the ability to read data\nyou're not supposed to be able to read ... but that ability is then\navailable to all users, or at least all users who can execute the\nfunction in question. So it definitely is a fairly serious security\nhazard, and one that's not well modeled by role labels. If you\ngive somebody e.g. pg_read_all_data privileges, you don't expect\nthat that means they can give it to other users.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:14:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 17:14:20 -0400, Tom Lane wrote:\n> I doubt that falsely labeling a function LEAKPROOF can get you more\n> than the ability to read data you're not supposed to be able to read\n> ... but that ability is then available to all users, or at least all\n> users who can execute the function in question. So it definitely is a\n> fairly serious security hazard, and one that's not well modeled by\n> role labels. If you give somebody e.g. pg_read_all_data privileges,\n> you don't expect that that means they can give it to other users.\n\nA user with BYPASSRLS can create public security definer functions\nreturning data. If the concern is a BYPASSRLS user intentionally\nexposing data, then there's not a meaningful increase to allow defining\nLEAKPROOF functions.\n\nTo me the more relevant concern is that it's hard to determine\nLEAKPROOF-ness and that many use-cases for BYPASSRLS do not require the\ntarget to have the technical chops to determine if a function actually\nis leakproof. But that seems more an argument for providing a separate\ncontrol over allowing to specify LEAKPROOF than against separating it\nfrom superuser.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:35:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 02:35:27PM -0700, Andres Freund wrote:\n> On 2021-04-12 17:14:20 -0400, Tom Lane wrote:\n> > I doubt that falsely labeling a function LEAKPROOF can get you more\n> > than the ability to read data you're not supposed to be able to read\n> > ... but that ability is then available to all users, or at least all\n> > users who can execute the function in question. So it definitely is a\n> > fairly serious security hazard, and one that's not well modeled by\n> > role labels. If you give somebody e.g. pg_read_all_data privileges,\n> > you don't expect that that means they can give it to other users.\n\nI do expect that, essentially. Like Andres describes for BYPASSRLS, they can\ncreate and GRANT a SECURITY DEFINER function that performs an arbitrary query\nand returns a refcursor (or stores the data to a table of the caller's\nchoosing, etc.). Unlike BYPASSRLS, they can even make pg_read_all_data own\nthe function, making the situation persist after one drops the actor's role\nand that role's objects.\n\n> A user with BYPASSRLS can create public security definer functions\n> returning data. If the concern is a BYPASSRLS user intentionally\n> exposing data, then there's not a meaningful increase to allow defining\n> LEAKPROOF functions.\n\nHence, I do find it reasonable to let pg_read_all_data be sufficient for\nsetting LEAKPROOF. I would not consult datdba, because datdba currently has\nno special read abilities. It feels too weird to let BYPASSRLS start\naffecting non-RLS access controls. A reasonable person may assume that\nBYPASSRLS has no consequences until someone uses CREATE POLICY. That said, I\nwouldn't be horrified if BYPASSRLS played a part. BYPASSRLS, like\npg_read_all_data, clearly isn't something to grant lightly.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 00:56:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n> On Mon, Apr 12, 2021 at 02:35:27PM -0700, Andres Freund wrote:\n> > On 2021-04-12 17:14:20 -0400, Tom Lane wrote:\n> > > I doubt that falsely labeling a function LEAKPROOF can get you more\n> > > than the ability to read data you're not supposed to be able to read\n> > > ... but that ability is then available to all users, or at least all\n> > > users who can execute the function in question. So it definitely is a\n> > > fairly serious security hazard, and one that's not well modeled by\n> > > role labels. If you give somebody e.g. pg_read_all_data privileges,\n> > > you don't expect that that means they can give it to other users.\n>\n> I do expect that, essentially. Like Andres describes for BYPASSRLS, they can\n> create and GRANT a SECURITY DEFINER function that performs an arbitrary query\n> and returns a refcursor (or stores the data to a table of the caller's\n> choosing, etc.). Unlike BYPASSRLS, they can even make pg_read_all_data own\n> the function, making the situation persist after one drops the actor's role\n> and that role's objects.\n\nYes. I think that if someone can read all the data, it's unworkable to\nsuppose that they can't find a way to delegate that ability to others.\nIf nothing else, a station wagon full of tapes has a lot of bandwidth.\n\n> > A user with BYPASSRLS can create public security definer functions\n> > returning data. If the concern is a BYPASSRLS user intentionally\n> > exposing data, then there's not a meaningful increase to allow defining\n> > LEAKPROOF functions.\n>\n> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n> setting LEAKPROOF. I would not consult datdba, because datdba currently has\n> no special read abilities. It feels too weird to let BYPASSRLS start\n> affecting non-RLS access controls. A reasonable person may assume that\n> BYPASSRLS has no consequences until someone uses CREATE POLICY. 
That said, I\n> wouldn't be horrified if BYPASSRLS played a part. BYPASSRLS, like\n> pg_read_all_data, clearly isn't something to grant lightly.\n\nI agree that datdba doesn't seem like quite the right thing, but I'm\nnot sure I agree with the rest. How can we say that leakproof is a\nnon-RLS access control? Its only purpose is to keep RLS secure, so I\nguess I'd be inclined to think that of the two, BYPASSRLS is more\nclosely related to the topic at hand.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:25:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n>> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n>> setting LEAKPROOF. I would not consult datdba, because datdba currently has\n>> no special read abilities. It feels too weird to let BYPASSRLS start\n>> affecting non-RLS access controls. A reasonable person may assume that\n>> BYPASSRLS has no consequences until someone uses CREATE POLICY. That said, I\n>> wouldn't be horrified if BYPASSRLS played a part. BYPASSRLS, like\n>> pg_read_all_data, clearly isn't something to grant lightly.\n\n> I agree that datdba doesn't seem like quite the right thing, but I'm\n> not sure I agree with the rest. How can we say that leakproof is a\n> non-RLS access control? Its only purpose is to keep RLS secure, so I\n> guess I'd be inclined to think that of the two, BYPASSRLS is more\n> closely related to the topic at hand.\n\nUmm ... I'm pretty sure LEAKPROOF also affects optimization around\n\"security barrier\" views, which I wouldn't call RLS. Out of these\noptions, I'd prefer granting the ability to pg_read_all_data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:32:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 4:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n> >> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n> >> setting LEAKPROOF. I would not consult datdba, because datdba currently has\n> >> no special read abilities. It feels too weird to let BYPASSRLS start\n> >> affecting non-RLS access controls. A reasonable person may assume that\n> >> BYPASSRLS has no consequences until someone uses CREATE POLICY. That said, I\n> >> wouldn't be horrified if BYPASSRLS played a part. BYPASSRLS, like\n> >> pg_read_all_data, clearly isn't something to grant lightly.\n>\n> > I agree that datdba doesn't seem like quite the right thing, but I'm\n> > not sure I agree with the rest. How can we say that leakproof is a\n> > non-RLS access control? Its only purpose is to keep RLS secure, so I\n> > guess I'd be inclined to think that of the two, BYPASSRLS is more\n> > closely related to the topic at hand.\n>\n> Umm ... I'm pretty sure LEAKPROOF also affects optimization around\n> \"security barrier\" views, which I wouldn't call RLS. Out of these\n> options, I'd prefer granting the ability to pg_read_all_data.\n\nOops, I forgot about security_barrier views, which is rather\nembarrassing since I committed them. So, yeah, I agree:\npg_read_all_data is better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:08:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 19, 2021 at 4:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n> > >> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n> > >> setting LEAKPROOF. I would not consult datdba, because datdba currently has\n> > >> no special read abilities. It feels too weird to let BYPASSRLS start\n> > >> affecting non-RLS access controls. A reasonable person may assume that\n> > >> BYPASSRLS has no consequences until someone uses CREATE POLICY. That said, I\n> > >> wouldn't be horrified if BYPASSRLS played a part. BYPASSRLS, like\n> > >> pg_read_all_data, clearly isn't something to grant lightly.\n> >\n> > > I agree that datdba doesn't seem like quite the right thing, but I'm\n> > > not sure I agree with the rest. How can we say that leakproof is a\n> > > non-RLS access control? Its only purpose is to keep RLS secure, so I\n> > > guess I'd be inclined to think that of the two, BYPASSRLS is more\n> > > closely related to the topic at hand.\n> >\n> > Umm ... I'm pretty sure LEAKPROOF also affects optimization around\n> > \"security barrier\" views, which I wouldn't call RLS. Out of these\n> > options, I'd prefer granting the ability to pg_read_all_data.\n> \n> Oops, I forgot about security_barrier views, which is rather\n> embarrassing since I committed them. 
So, yeah, I agree:\n> pg_read_all_data is better.\n\nI'm not really sure that attaching it to pg_read_all_data makes sense..\n\nIn general, I've been frustrated by the places where we lump privileges\ntogether rather than having a way to distinctly GRANT capabilities\nindependently- that's more-or-less exactly what lead me to work on\nimplementing the role system in the first place, and later the\npredefined roles.\n\nI do think it's good to reduce the number of places that require\nsuperuser, in general, but I'm not sure that marking functions as\nleakproof as a non-superuser makes sense.\n\nHere's what I'd ask Andrey- what's the actual use-case here? Are these\ncases where users are actually adding new functions which they believe\nare leakproof where those functions don't require superuser already to\nbe created? Clearly if they're in a untrusted language and you have to\nbe a superuser to install them in the first place then they should just\nmark the function as leakproof when they install it. If these are\ntrusted language functions, I'd be curious to actually see them as I\nhave doubts about if they're actually leakproof..\n\nOr is the actual use-case here that they just want to mark functions we\nknow aren't leakproof as leakproof anyway because they aren't getting\nthe performance they want?\n\nIf it's the latter, as I suspect it is, then I don't really think the\nuse-case justifies any change on our part- the right answer is to make\nthose functions actually leakproof, or write ones which are.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 19 Apr 2021 17:38:43 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 05:38:43PM -0400, Stephen Frost wrote:\n> > > > On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n> > > >> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n> > > >> setting LEAKPROOF.\n\n> I'm not really sure that attaching it to pg_read_all_data makes sense..\n> \n> In general, I've been frustrated by the places where we lump privileges\n> together rather than having a way to distinctly GRANT capabilities\n> independently- that's more-or-less exactly what lead me to work on\n> implementing the role system in the first place, and later the\n> predefined roles.\n\nThis would be no more lumpy than e.g. pg_read_all_stats. However, I could\nlive with a separate pg_change_leakproof (or whatever name).\n\n> Here's what I'd ask Andrey- what's the actual use-case here? Are these\n> cases where users are actually adding new functions which they believe\n> are leakproof where those functions don't require superuser already to\n> be created? Clearly if they're in a untrusted language and you have to\n> be a superuser to install them in the first place then they should just\n> mark the function as leakproof when they install it. If these are\n> trusted language functions, I'd be curious to actually see them as I\n> have doubts about if they're actually leakproof..\n> \n> Or is the actual use-case here that they just want to mark functions we\n> know aren't leakproof as leakproof anyway because they aren't getting\n> the performance they want?\n\nHearing those answers would be interesting, but it shouldn't change decisions.\nA reasonable person can write an actually-leakproof function and wish to mark\nit LEAKPROOF, independent of whether that happened in the case that prompted\nthis thread.\n\n\n",
"msg_date": "Sun, 25 Apr 2021 03:24:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "\n\n> 20 апр. 2021 г., в 02:38, Stephen Frost <sfrost@snowman.net> написал(а):\n> \n> Here's what I'd ask Andrey- what's the actual use-case here? Are these\n> cases where users are actually adding new functions which they believe\n> are leakproof where those functions don't require superuser already to\n> be created? Clearly if they're in a untrusted language and you have to\n> be a superuser to install them in the first place then they should just\n> mark the function as leakproof when they install it. If these are\n> trusted language functions, I'd be curious to actually see them as I\n> have doubts about if they're actually leakproof..\n> \n> Or is the actual use-case here that they just want to mark functions we\n> know aren't leakproof as leakproof anyway because they aren't getting\n> the performance they want?\n> \n> If it's the latter, as I suspect it is, then I don't really think the\n> use-case justifies any change on our part- the right answer is to make\n> those functions actually leakproof, or write ones which are.\n\nCustomer was restoring pg_dump of on-premise ERP known as 1C (something like TurboTax) with this add-on [0]\n\nCREATE FUNCTION simple1c.date_from_guid(varchar(36)) RETURNS timestamp LANGUAGE plpgsql IMMUTABLE LEAKPROOF STRICT\n\nI'm not 1C-expert (programmed it a bit to get few bucks when I was a student), but seems like this function simple1c.date_from_guid() can be used in DSL queries. It have no obvious side effects. Maybe we could hack it by exploiting timestamp overflow, but I doubt it's practically usable.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/ivan816/simple-1c/blob/f2e5ce78b98f70f30039fd3de79308a59d432fc2/Simple1C/Impl/Sql/SchemaMapping/Simple1cSchemaCreator.cs#L74\n\n\n\n\n",
"msg_date": "Sun, 25 Apr 2021 16:10:35 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Greetings,\n\n* Andrey Borodin (x4mmm@yandex-team.ru) wrote:\n> > 20 апр. 2021 г., в 02:38, Stephen Frost <sfrost@snowman.net> написал(а):\n> > Here's what I'd ask Andrey- what's the actual use-case here? Are these\n> > cases where users are actually adding new functions which they believe\n> > are leakproof where those functions don't require superuser already to\n> > be created? Clearly if they're in a untrusted language and you have to\n> > be a superuser to install them in the first place then they should just\n> > mark the function as leakproof when they install it. If these are\n> > trusted language functions, I'd be curious to actually see them as I\n> > have doubts about if they're actually leakproof..\n> > \n> > Or is the actual use-case here that they just want to mark functions we\n> > know aren't leakproof as leakproof anyway because they aren't getting\n> > the performance they want?\n> > \n> > If it's the latter, as I suspect it is, then I don't really think the\n> > use-case justifies any change on our part- the right answer is to make\n> > those functions actually leakproof, or write ones which are.\n> \n> Customer was restoring pg_dump of on-premise ERP known as 1C (something like TurboTax) with this add-on [0]\n> \n> CREATE FUNCTION simple1c.date_from_guid(varchar(36)) RETURNS timestamp LANGUAGE plpgsql IMMUTABLE LEAKPROOF STRICT\n> \n> I'm not 1C-expert (programmed it a bit to get few bucks when I was a student), but seems like this function simple1c.date_from_guid() can be used in DSL queries. It have no obvious side effects. Maybe we could hack it by exploiting timestamp overflow, but I doubt it's practically usable.\n\nErm, it's very clearly not leakproof and will happily return information\nabout the value passed in during some error cases...\n\nThanks,\n\nStephen",
"msg_date": "Sun, 25 Apr 2021 14:33:43 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Greetings,\n\n* Noah Misch (noah@leadboat.com) wrote:\n> On Mon, Apr 19, 2021 at 05:38:43PM -0400, Stephen Frost wrote:\n> > > > > On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > >> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n> > > > >> setting LEAKPROOF.\n> \n> > I'm not really sure that attaching it to pg_read_all_data makes sense..\n> > \n> > In general, I've been frustrated by the places where we lump privileges\n> > together rather than having a way to distinctly GRANT capabilities\n> > independently- that's more-or-less exactly what lead me to work on\n> > implementing the role system in the first place, and later the\n> > predefined roles.\n> \n> This would be no more lumpy than e.g. pg_read_all_stats. However, I could\n> live with a separate pg_change_leakproof (or whatever name).\n\nThere's been already some disagreements about pg_read_all_stats, so I\ndon't think that is actually a good model to look at.\n\nI have doubts about users generally being able to write actually\nleakproof functions (this case being an example of someone writing a\nfunction that certainly wasn't leakproof but marking it as such\nanyway...), though I suppose it's unlikely that it's any worse than the\ncases of people writing SECURITY DEFINER functions that aren't careful\nenough, of which I've seen plenty of.\n\nI would think the role/capability would be 'pg_mark_function_leakproof'\nor similar though, and allow a user who had that role to either create\nleakproof functions (provided they have access to create the function in\nthe first place) or to mark an existing function as leakproof (but\nrequiring them to be a member of the role which owns the function).\n\nThanks,\n\nStephen",
"msg_date": "Sun, 25 Apr 2021 14:40:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Andrey Borodin (x4mmm@yandex-team.ru) wrote:\n>> Customer was restoring pg_dump of on-premise ERP known as 1C (something like TurboTax) with this add-on [0]\n\n> Erm, it's very clearly not leakproof and will happily return information\n> about the value passed in during some error cases...\n\nYeah, that's pretty much a poster-child example for NOT letting\nrandom users fool with leakproofness settings. \n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Apr 2021 15:13:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
},
{
"msg_contents": "On Sun, Apr 25, 2021 at 02:40:54PM -0400, Stephen Frost wrote:\n> * Noah Misch (noah@leadboat.com) wrote:\n> > On Mon, Apr 19, 2021 at 05:38:43PM -0400, Stephen Frost wrote:\n> > > > > > On Fri, Apr 16, 2021 at 3:57 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > > >> Hence, I do find it reasonable to let pg_read_all_data be sufficient for\n> > > > > >> setting LEAKPROOF.\n> > \n> > > I'm not really sure that attaching it to pg_read_all_data makes sense..\n> > > \n> > > In general, I've been frustrated by the places where we lump privileges\n> > > together rather than having a way to distinctly GRANT capabilities\n> > > independently- that's more-or-less exactly what lead me to work on\n> > > implementing the role system in the first place, and later the\n> > > predefined roles.\n> > \n> > This would be no more lumpy than e.g. pg_read_all_stats. However, I could\n> > live with a separate pg_change_leakproof (or whatever name).\n> \n> There's been already some disagreements about pg_read_all_stats, so I\n> don't think that is actually a good model to look at.\n> \n> I have doubts about users generally being able to write actually\n> leakproof functions (this case being an example of someone writing a\n> function that certainly wasn't leakproof but marking it as such\n> anyway...), though I suppose it's unlikely that it's any worse than the\n> cases of people writing SECURITY DEFINER functions that aren't careful\n> enough, of which I've seen plenty of.\n\nMaking \"it's hard to do well\" imply \"only superusers get to try\" doesn't\nmitigate a risk; it multiplies risks.\n\n> I would think the role/capability would be 'pg_mark_function_leakproof'\n> or similar though, and allow a user who had that role to either create\n> leakproof functions (provided they have access to create the function in\n> the first place) or to mark an existing function as leakproof (but\n> requiring them to be a member of the role which owns the function).\n\nThat's 
fine.\n\n\n",
"msg_date": "Mon, 31 May 2021 14:39:22 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing to create LEAKPROOF functions to non-superuser"
}
] |
[
{
"msg_contents": "I am wondering what was the intent of this test case added by commit\n257836a75:\n\nCREATE INDEX icuidx16_mood ON collate_test(id) WHERE mood > 'ok' COLLATE \"fr-x-icu\";\n\nwhere \"mood\" is of an enum type, which surely does not respond to\ncollations.\n\nThe reason I ask is that this case started failing after I fixed\na parse_coerce.c bug that allowed a CollateExpr node to survive\nin this WHERE expression, which by rights it should not. I'm\ninclined to think that the test case is wrong and should be removed,\nbut maybe there's some reason to have a variant of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:59:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Curious test case added by collation version tracking patch"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 8:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I am wondering what was the intent of this test case added by commit\n> 257836a75:\n>\n> CREATE INDEX icuidx16_mood ON collate_test(id) WHERE mood > 'ok' COLLATE \"fr-x-icu\";\n>\n> where \"mood\" is of an enum type, which surely does not respond to\n> collations.\n>\n> The reason I ask is that this case started failing after I fixed\n> a parse_coerce.c bug that allowed a CollateExpr node to survive\n> in this WHERE expression, which by rights it should not. I'm\n> inclined to think that the test case is wrong and should be removed,\n> but maybe there's some reason to have a variant of it.\n\nIndeed, this doesn't do anything useful, other than prove that we\nrecord a collation dependency where it is (uselessly) allowed in an\nexpression. Since you're not going to allow that anymore, it should\nbe dropped.\n\n\n",
"msg_date": "Tue, 13 Apr 2021 10:08:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Curious test case added by collation version tracking patch"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Apr 13, 2021 at 8:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The reason I ask is that this case started failing after I fixed\n>> a parse_coerce.c bug that allowed a CollateExpr node to survive\n>> in this WHERE expression, which by rights it should not. I'm\n>> inclined to think that the test case is wrong and should be removed,\n>> but maybe there's some reason to have a variant of it.\n\n> Indeed, this doesn't do anything useful, other than prove that we\n> record a collation dependency where it is (uselessly) allowed in an\n> expression. Since you're not going to allow that anymore, it should\n> be dropped.\n\nOK, I'll go clean it up. Thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 18:47:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Curious test case added by collation version tracking patch"
}
] |
[
{
"msg_contents": "Recent work from commit 5100010e taught VACUUM that it doesn't have to\ndo index vacuuming in cases where there are practically zero (not\nnecessarily exactly zero) tuples to delete from indexes. It also\nsurfaces the information used to decide whether or not we skip index\nvacuuming in the logs, via the log_autovacuum_min_duration mechanism.\nThis log output can be used to get a sense of how effective HOT is\nover time.\n\nThere is one number of particular interest: the proportion of heap\npages that have one or more LP_DEAD items across successive VACUUMs\n(this is expressed as a percentage of the table). The immediate reason\nto expose this is that it is crucial to the skipping behavior from\ncommit 5100010e -- the threshold for skipping is 2% of all heap pages.\nBut that's certainly not the only reason to pay attention to the\npercentage. It can also be used to understand HOT in general. It can\nbe correlated with workload spikes and stressors that tend to make HOT\nless effective.\n\nA number of interesting workload-specific patterns seem to emerge by\nfocussing on how this number changes/grows over time. I think that\nthis should be pointed out directly in the docs. What's more, it seems\nlike a good vehicle for discussing how HOT works in general. Why did\nwe never really get around to documenting HOT? There should at least\nbe some handling of how DBAs can get the most out of HOT through\nmonitoring and through tuning -- especially by lowering heap\nfillfactor.\n\nIt's very hard to get all UPDATEs to use HOT. It's much easier to get\nUPDATEs to mostly use HOT most of the time. How things change over\ntime seems crucially important.\n\nI'll show one realistic example, just to give some idea of what it\nmight look like. 
This is output for 3 successive autovacuums against\nthe largest TPC-C table:\n\nautomatic vacuum of table \"postgres.public.bmsql_order_line\": index scans: 0\npages: 0 removed, 4668405 remain, 0 skipped due to pins, 696997 skipped frozen\ntuples: 324571 removed, 186702373 remain, 333888 are dead but not yet\nremovable, oldest xmin: 7624965\nbuffer usage: 3969937 hits, 3931997 misses, 1883255 dirtied\nindex scan bypassed: 42634 pages from table (0.91% of total) have\n324364 dead item identifiers\navg read rate: 62.469 MB/s, avg write rate: 29.920 MB/s\nI/O Timings: read=42359.501 write=11867.903\nsystem usage: CPU: user: 43.62 s, system: 38.17 s, elapsed: 491.74 s\nWAL usage: 4586766 records, 1850599 full page images, 8499388881 bytes\n\nautomatic vacuum of table \"postgres.public.bmsql_order_line\": index scans: 0\npages: 0 removed, 5976391 remain, 0 skipped due to pins, 2516643 skipped frozen\ntuples: 759956 removed, 239787517 remain, 1848431 are dead but not yet\nremovable, oldest xmin: 18489326\nbuffer usage: 3432019 hits, 3385757 misses, 2426571 dirtied\nindex scan bypassed: 103941 pages from table (1.74% of total) have\n790233 dead item identifiers\navg read rate: 50.338 MB/s, avg write rate: 36.077 MB/s\nI/O Timings: read=49252.721 write=17003.987\nsystem usage: CPU: user: 45.86 s, system: 34.47 s, elapsed: 525.47 s\nWAL usage: 5598040 records, 2274556 full page images, 10510281959 bytes\n\nautomatic vacuum of table \"postgres.public.bmsql_order_line\": index scans: 1\npages: 0 removed, 7483804 remain, 1 skipped due to pins, 4208295 skipped frozen\ntuples: 972778 removed, 299295048 remain, 1970910 are dead but not yet\nremovable, oldest xmin: 30987445\nbuffer usage: 3384994 hits, 4593727 misses, 2891003 dirtied\nindex scan needed: 174243 pages from table (2.33% of total) had\n1325752 dead item identifiers removed\nindex \"bmsql_order_line_pkey\": pages: 1250660 in total, 0 newly\ndeleted, 0 currently deleted, 0 reusable\navg read rate: 60.505 MB/s, avg 
write rate: 38.078 MB/s\nI/O Timings: read=72881.986 write=21872.615\nsystem usage: CPU: user: 65.24 s, system: 42.24 s, elapsed: 593.14 s\nWAL usage: 6668353 records, 2684040 full page images, 12374536939 bytes\n\nThese autovacuums occur every 60-90 minutes with the workload in\nquestion (with pretty aggressive autovacuum settings). We see that HOT\nworks rather well here -- but not so well that index vacuuming can be\navoided consistently, which happens in the final autovacuum (it has\n\"index scans: 1\"). There was slow but steady growth in the percentage\nof LP_DEAD-containing heap pages over time here, which is common\nenough.\n\nThe point of HOT is not to avoid having to do index vacuuming, of\ncourse -- that has it backwards. But framing HOT as doing work in\nbackends so autovacuum doesn't have to do similar work later on is a\ngood mental model to encourage users to adopt. There are also\nsignificant advantages to reducing the effectiveness of HOT to this\none number -- HOT must be working well if it's close to 0%, almost\nalways below 2%, with the occasional aberration that sees it go up to\nmaybe 5%. But if it ever goes too high (in the absence of DELETEs),\nyou might have trouble on your hands. It might not go down again.\n\nThere are other interesting patterns from other tables within the same\ndatabase -- including on tables with no UPDATEs, and tables that\ngenerally cannot use HOT due to a need to modify indexed columns. The\nparticulars with these other tables hint at problems with heap\nfragmentation, which is something that users can think of as a\ndegenerative process -- something that gets progressively worse in\nextreme cases (i.e. cases where it matters).\n\nThis new percentage metric isn't about HOT per se. It's actually about\nthe broader question of how effective the system is at keeping the\nphysical location of each logical row stable over time, for a given\nworkload. 
So maybe that's what any new documentation should address.\nThe documentation would still have plenty to say about HOT, though. It\nwould also have something to say about bottom-up index deletion, which\ncan be thought of as avoiding problems when HOT doesn't or can't be\napplied very often.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:11:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Teaching users how they can get the most out of HOT in Postgres 14"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 16:11:59 -0700, Peter Geoghegan wrote:\n> Recent work from commit 5100010e taught VACUUM that it doesn't have to\n> do index vacuuming in cases where there are practically zero (not\n> necessarily exactly zero) tuples to delete from indexes.\n\nFWIW, I'd not at all be surprised if this causes some issues. Consider\ncases where all index lookups are via bitmap scans (which does not\nsupport killtuples) - if value ranges are looked up often the repeated\nheap fetches can absolutely kill query performance. I've definitely had\nto make autovacuum more aggressive for cases like this or schedule\nmanual vacuums, and now that's silently not good enough anymore. Yes, 2%\nof the table isn't all that much, but often enough all the updates and\nlookups concentrate in one value range.\n\nAs far as I can see there's no reasonable way to disable this\n\"optimization\", which scares me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:30:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> As far as I can see there's no reasonable way to disable this\n> \"optimization\", which scares me.\n\nI'm fine with adding a simple 'off' switch. What I'd like to avoid\ndoing is making the behavior tunable, since it's likely to change in\nPostgres 15 and Postgres 16 anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:35:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 04:35:13PM -0700, Peter Geoghegan wrote:\n> On Mon, Apr 12, 2021 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > As far as I can see there's no reasonable way to disable this\n> > \"optimization\", which scares me.\n> \n> I'm fine with adding a simple 'off' switch. What I'd like to avoid\n> doing is making the behavior tunable, since it's likely to change in\n> Postgres 15 and Postgres 16 anyway.\n\nWhile going through this commit a couple of days ago, I really got to\nwonder why you are controlling this stuff with a hardcoded value and I\nfound that scary, while what you should be using are two GUCs with the\nreloptions that come with the feature (?):\n- A threshold, as an integer, to define a number of pages.\n- A scale factor to define a percentage of pages.\n\nAlso, I am a bit confused with the choice of BYPASS_THRESHOLD_PAGES as\nparameter name. For all the other parameters of autovacuum, we use\n\"threshold\" for a fixed number of items, not a percentage of a given\nitem.\n--\nMichael",
"msg_date": "Tue, 13 Apr 2021 08:52:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 4:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> While going through this commit a couple of days ago, I really got to\n> wonder why you are controlling this stuff with a hardcoded value and I\n> found that scary, while what you should be using are two GUCs with the\n> reloptions that come with the feature (?):\n> - A threshold, as an integer, to define a number of pages.\n> - A scale factor to define a percentage of pages.\n\nWhy?\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:53:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 16:53:47 -0700, Peter Geoghegan wrote:\n> On Mon, Apr 12, 2021 at 4:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > While going through this commit a couple of days ago, I really got to\n> > wonder why you are controlling this stuff with a hardcoded value and I\n> > found that scary, while what you should be using are two GUCs with the\n> > reloptions that come with the feature (?):\n> > - A threshold, as an integer, to define a number of pages.\n> > - A scale factor to define a percentage of pages.\n> \n> Why?\n\nWell, one argument is that you made a fairly significant behavioural\nchange, with hard-coded logic for when the optimization kicks in. It's\nnot at all clear that your constants are the right ones for every\nworkload. We'll likely only get to know whether they're right in > 1 year\n- not having a real out at that point imo is somewhat scary.\n\nThat said, adding more and more reloptions has a significant cost, so I\ndon't think it's clear cut that it's the right decision to add\none. Perhaps vacuum_cleanup_index_scale_factor should just be reused for\nBYPASS_THRESHOLD_PAGES?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 17:37:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 5:37 PM Andres Freund <andres@anarazel.de> wrote:\n> Well, one argument is that you made a fairly significant behavioural\n> change, with hard-coded logic for when the optimization kicks in. It's\n> not at all clear that your constants are the right ones for every\n> workload.\n\n(Apparently nobody wants to talk about HOT and the documentation.)\n\nThe BYPASS_THRESHOLD_PAGES cutoff was chosen conservatively, so that\nit would avoid index vacuuming in truly marginal cases -- and it tends\nto only delay it there.\n\nA table-level threshold is not the best way of constraining the\nproblem. In the future, the table threshold should be treated as only\none factor among several. Plus there will be more than a simple yes/no\nquestion to consider. We should eventually be able to do index\nvacuuming for some indexes but not others. Bottom-up index deletion\nhas totally changed things here, because roughly speaking it makes\nindex bloat proportionate to the number of logical changes to indexed\ncolumns -- you could have one super-bloated index on the table, but\nseveral others that perfectly retain their original size. You still\nneed to do heap vacuuming eventually, which necessitates vacuuming\nindexes too, but the right strategy is probably to vacuum much more\nfrequently, vacuuming the bloated index each time. You only do a full\nround of index vacuuming when the table starts to accumulate way too\nmany LP_DEAD items. You need a much more sophisticated model for this.\nIt might also need to hook into autovacuums scheduling.\n\nOne of the dangers of high BYPASS_THRESHOLD_PAGES settings is that\nit'll work well for some indexes but not others. To a dramatic degree,\neven.\n\nThat said, nbtree isn't the only index AM, and it is hard to be\ncompletely sure that you've caught everything. 
So an off switch seems\nlike a good idea now.\n\n> We'll likely on get to know whether they're right in > 1 year\n> - not having a real out at that point imo is somewhat scary.\n>\n> That said, adding more and more reloptions has a significant cost, so I\n> don't think it's clear cut that it's the right decision to add\n> one. Perhaps vacuum_cleanup_index_scale_factor should just be reused for\n> BYPASS_THRESHOLD_PAGES?\n\nI think that the right way to do this is to generalize INDEX_CLEANUP\nto support a mode of operation that disallows vacuumlazy.c from\napplying this optimization, as well as any similar optimizations which\nwill be added in the future.\n\nEven if you don't buy my argument about directly parameterizing\nBYPASS_THRESHOLD_PAGES undermining future work, allowing it to be set\nmuch higher than 5% - 10% would be a pretty big footgun. It might\nappear to help at first, but risks destabilizing things much later on.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Apr 2021 18:12:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 06:12:18PM -0700, Peter Geoghegan wrote:\n> One of the dangers of high BYPASS_THRESHOLD_PAGES settings is that\n> it'll work well for some indexes but not others. To a dramatic degree,\n> even.\n> \n> That said, nbtree isn't the only index AM, and it is hard to be\n> completely sure that you've caught everything. So an off switch seems\n> like a good idea now.\n\nWhatever the solution chosen, the thing I can see we agree on here is\nthat we need to do something, at least in the shape of an on/off\nswitch to have an escape path in case of problems. Peter, could we\nget something by beta1 for that? FWIW, I would use a float GUC to\ncontrol that, and not a boolean switch, but I am just one voice here,\nand that's not a feature I worked on.\n--\nMichael",
"msg_date": "Tue, 11 May 2021 16:42:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Tue, May 11, 2021 at 04:42:27PM +0900, Michael Paquier wrote:\n> Whatever the solution chosen, the thing I can see we agree on here is\n> that we need to do something, at least in the shape of an on/off\n> switch to have an escape path in case of problems. Peter, could we\n> get something by beta1 for that? FWIW, I would use a float GUC to\n> control that, and not a boolean switch, but I am just one voice here,\n> and that's not a feature I worked on.\n\nSo, I have been thinking more about this item, and a boolean switch\nstill sounded weird to me, so attached is a patch to have two GUCs,\none for manual VACUUM and autovacuum like any other parameters, to \ncontrol this behavior, with a default set at 2% of the number of\nrelation pages with dead items needed to do the index cleanup work.\n\nEven if we switch the parameter type used here, the easiest and most\nconsistent way to pass down the parameter is just to use VacuumParams\nset within ExecVacuum() and the autovacuum code path. The docs need\nmore work, I guess.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 13 May 2021 16:27:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, May 13, 2021 at 04:27:47PM +0900, Michael Paquier wrote:\n> On Tue, May 11, 2021 at 04:42:27PM +0900, Michael Paquier wrote:\n> > Whatever the solution chosen, the thing I can see we agree on here is\n> > that we need to do something, at least in the shape of an on/off\n> > switch to have an escape path in case of problems. Peter, could we\n> > get something by beta1 for that? FWIW, I would use a float GUC to\n> > control that, and not a boolean switch, but I am just one voice here,\n> > and that's not a feature I worked on.\n> \n> So, I have been thinking more about this item, and a boolean switch\n> still sounded weird to me, so attached is a patch to have two GUCs,\n> one for manual VACUUM and autovacuum like any other parameters, to \n> control this behavior, with a default set at 2% of the number of\n> relation pages with dead items needed to do the index cleanup work.\n> \n> Even if we switch the parameter type used here, the easiest and most\n> consistent way to pass down the parameter is just to use VacuumParams\n> set within ExecVacuum() and the autovacuum code path. 
The docs need\n> more work, I guess.\n> \n> Thoughts?\n\n> +\t\tcleanup_index_scale_factor = autovacuum_cleanup_index_scale >= 0 ?\n> +\t\t\tautovacuum_cleanup_index_scale : VacuumCostDelay;\n\nCostDelay is surely not what you meant.\n\n> + <title>Vacuum parameters for Indexes</title>\n> + <para>\n> + During the execution of <xref linkend=\"sql-vacuum\"/>\n> + and <xref linkend=\"sql-analyze\"/>\n\n\"and analyze\" is wrong?\n\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line.\n\nIt's SIGHUP\n\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line.\n\nSame\n\n+ { \n+ {\"vacuum_cleanup_index_scale_factor\", PGC_SIGHUP, VACUUM_INDEX, \n+ gettext_noop(\"Fraction of relation pages, with at least one dead item, required to clean up indexes.\"), \n+ NULL \n+ }, \n+ &VacuumCleanupIndexScale, \n+ 0.02, 0.0, 0.05, \n+ NULL, NULL, NULL \n+ }, \n\nWhy is the allowed range from 0 to 0.05? Why not 0.10 or 1.0 ?\nThe old GUC of the same name had max 1e10.\nI think a reduced range and a redefinition of the GUC would need to be called\nout as an incompatibility.\n\nAlso, the old GUC (removed at 9f3665fbf) had:\n- {\"vacuum_cleanup_index_scale_factor\", PGC_USERSET, CLIENT_CONN_STATEMENT,\n\nI think USERSET and STATEMENT were right ?\n\nAlternately, what if this were in the DEVELOPER category, which makes this\neasier to remove in v15.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 13 May 2021 07:06:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, May 13, 2021 at 5:06 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Why is the allowed range from 0 to 0.05? Why not 0.10 or 1.0 ?\n> The old GUC of the same name had max 1e10.\n\nIt also had a completely different purpose and default.\n\n> I think a reduced range and a redefinition of the GUC would need to be called\n> out as an incompatibility.\n\nThe justification from Michael for this approach is that not having\nthis level of control would be weird, at least to him. But that\njustification itself seems weird to me; why start from the premise\nthat you need a knob (as opposed to an off switch) at all? Why not\nstart with the way the mechanism works (or is intended to work) in\npractice? Most individual tables will *never* have VACUUM apply the\noptimization with *any* reasonable threshold value, so we only need to\nconsider the subset of tables/workloads where it *might* make sense to\nskip index vacuuming. This is more qualitative than quantitative.\n\nIt makes zero sense to treat the threshold as a universal scale --\nthis is one reason why I don't want to expose a true tunable knob to\nusers. Though the threshold-driven/BYPASS_THRESHOLD_PAGES design is\nnot exactly something with stable behavior for a given table, it\nalmost works like that in practice: tables tend to usually skip index\nvacuuming, or never skip it even once. There is a clear bifurcation\nalong this line when you view how VACUUM behaves with a variety of\ndifferent tables using the new autovacuum logging stuff.\n\nAlmost all of the benefit of the optimization is available with the\ncurrent BYPASS_THRESHOLD_PAGES threshold (2% of heap pages have\nLP_DEAD items), which has less risk than a higher threshold. I don't\nthink it matters much if we have the occasional unnecessary round of\nindex vacuuming on account of not applying the optimization. 
The truly\nimportant benefit of the optimization is to not do unnecessary index\nvacuuming all the time in the extreme cases where it's really hard to\njustify. There is currently zero evidence that anything higher than 2%\nwill ever help anybody to an appreciable degree. (There is also no\nevidence that the optimization will ever need to be disabled, but I\naccept the need to be conservative and offer an off switch -- the\nprecautionary principle applies when talking about new harms.)\n\nNot having to scan every index on every VACUUM, but only every 5th or\nso VACUUM is a huge improvement. But going from every 5th VACUUM to\nevery 10th VACUUM? That's at best a tiny additional improvement in\nexchange for what I'd guess is a roughly linear increase in risk\n(maybe a greater-than-linear increase, even). That's an awful deal.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 13 May 2021 13:27:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, May 13, 2021 at 01:27:44PM -0700, Peter Geoghegan wrote:\n> Almost all of the benefit of the optimization is available with the\n> current BYPASS_THRESHOLD_PAGES threshold (2% of heap pages have\n> LP_DEAD items), which has less risk than a higher threshold. I don't\n> think it matters much if we have the occasional unnecessary round of\n> index vacuuming on account of not applying the optimization. The truly\n> important benefit of the optimization is to not do unnecessary index\n> vacuuming all the time in the extreme cases where it's really hard to\n> justify. There is currently zero evidence that anything higher than 2%\n> will ever help anybody to an appreciably degree. (There is also no\n> evidence that the optimization will ever need to be disabled, but I\n> accept the need to be conservative and offer an off switch -- the\n> precautionary principle applies when talking about new harms.)\n> \n> Not having to scan every index on every VACUUM, but only every 5th or\n> so VACUUM is a huge improvement. But going from every 5th VACUUM to\n> every 10th VACUUM? That's at best a tiny additional improvement in\n> exchange for what I'd guess is a roughly linear increase in risk\n> (maybe a greater-than-linear increase, even). That's an awful deal.\n\nPerhaps that's an awful deal, but based on which facts can you really\nsay that this new behavior of needing at least 2% of relation pages\nwith some dead items to clean up indexes is not a worse deal in some\ncases? This may cause more problems for the in-core index AMs, as\nmuch as it could impact any out-of-core index AM, no? What about\nother values like 1%, or even 5%? My guess is that there would be an\nask to have more control on that, though that stands as my opinion.\n\nSaying that, as long as there is a way to disable that for the users\nwith autovacuum and manual vacuums, I'd be fine. 
It is worth noting\nthat adding a GUC to control this optimization would make the code\nmore confusing, as there is already do_index_cleanup, a\nvacuum_index_cleanup reloption, and specifying vacuum_index_cleanup to\nTRUE may cause the index cleanup to not actually kick in if the 2% bar is\nnot reached.\n--\nMichael",
"msg_date": "Fri, 14 May 2021 11:14:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, May 13, 2021 at 7:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Perhaps that's an awful deal, but based on which facts can you really\n> say that this new behavior of needing at least 2% of relation pages\n> with some dead items to clean up indexes is not a worse deal in some\n> cases?\n\nIf I thought that it simply wasn't possible then I wouldn't have\naccepted the need to make it possible to disable. This is a\ncost/benefit decision problem, which must be made based on imperfect\ninformation -- there are no absolute certainties. But I'm certain\nabout one thing: there is a large practical difference between the\noptimization causing terrible performance in certain scenarios and the\noptimization causing slightly suboptimal performance in certain\nscenarios. A tiny risk of the former scenario is *much* worse than a\nrelatively large risk of the latter scenario. There needs to be a\nsense of proportion about risk.\n\n> This may cause more problems for the in-core index AMs, as\n> much as it could impact any out-of-core index AM, no?\n\nI don't understand what you mean here.\n\n> What about\n> other values like 1%, or even 5%? My guess is that there would be an\n> ask to have more control on that, though that stands as my opinion.\n\nHow did you arrive at that guess? Why do you believe that? This is the\nsecond time I've asked.\n\n> Saying that, as long as there is a way to disable that for the users\n> with autovacuum and manual vacuums, I'd be fine. It is worth noting\n> that adding an GUC to control this optimization would make the code\n> more confusing, as there is already do_index_cleanup, a\n> vacuum_index_cleanup reloption, and specifying vacuum_index_cleanup to\n> TRUE may cause the index cleanup to not actually kick if the 2% bar is\n> not reached.\n\nI don't intend to add a GUC. 
A reloption should suffice.\n\nYour interpretation of what specifying vacuum_index_cleanup (the\nVACUUM command option) represents doesn't seem particularly justified\nto me. To me it just means \"index cleanup and vacuuming are not\nexplicitly disabled, the default behavior\". It's an option largely\nintended for emergencies, and largely superseded by the failsafe\nmechanism. This interpretation is justified by well established\nprecedent: it has long been possible for VACUUM to skip heap page\npruning and even heap page vacuuming just because a super-exclusive\nlock could not be acquired (though the latter case no longer happens\ndue to the same work inside vacuumlazy.c) -- which also implies\nskipping some index vacuuming, without it ever being apparent to the\nuser.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 13 May 2021 19:56:05 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "(I had missed this discussion due to the mismatched thread subject..)\n\nOn Fri, May 14, 2021 at 11:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 13, 2021 at 01:27:44PM -0700, Peter Geoghegan wrote:\n> > Almost all of the benefit of the optimization is available with the\n> > current BYPASS_THRESHOLD_PAGES threshold (2% of heap pages have\n> > LP_DEAD items), which has less risk than a higher threshold. I don't\n> > think it matters much if we have the occasional unnecessary round of\n> > index vacuuming on account of not applying the optimization. The truly\n> > important benefit of the optimization is to not do unnecessary index\n> > vacuuming all the time in the extreme cases where it's really hard to\n> > justify. There is currently zero evidence that anything higher than 2%\n> > will ever help anybody to an appreciably degree. (There is also no\n> > evidence that the optimization will ever need to be disabled, but I\n> > accept the need to be conservative and offer an off switch -- the\n> > precautionary principle applies when talking about new harms.)\n> >\n> > Not having to scan every index on every VACUUM, but only every 5th or\n> > so VACUUM is a huge improvement. But going from every 5th VACUUM to\n> > every 10th VACUUM? That's at best a tiny additional improvement in\n> > exchange for what I'd guess is a roughly linear increase in risk\n> > (maybe a greater-than-linear increase, even). That's an awful deal.\n>\n> Perhaps that's an awful deal, but based on which facts can you really\n> say that this new behavior of needing at least 2% of relation pages\n> with some dead items to clean up indexes is not a worse deal in some\n> cases? This may cause more problems for the in-core index AMs, as\n> much as it could impact any out-of-core index AM, no? What about\n> other values like 1%, or even 5%? 
My guess is that there would be an\n> ask to have more control on that, though that stands as my opinion.\n\nI'm concerned about how users can tune that scale-type parameter, which\nis configurable between 0.0 and 0.05. I think that users basically don't\npay attention to how many blocks are updated by UPDATE/DELETE. Unlike\nthe old vacuum_cleanup_index_scale_factor, increasing this parameter would\ndirectly affect index bloat. If the user can accept more index bloat\nto speed up (auto)vacuum, they can use vacuum_index_cleanup instead.\n\nI prefer to have an on/off switch just in case. I remember I also\ncommented the same thing before. We’ve discussed a way to control\nwhether or not to enable the skipping optimization by adding a new\nmode to the INDEX_CLEANUP option, as Peter mentioned. For example, we can\nuse the new “auto” (or “smart”) mode by default, enabling all\nskipping optimizations, and specifying “on” disables them. Or we can\nadd a “force” mode to disable all skipping optimizations while leaving\nthe existing modes as they are. Anyway, I think it’s not a good idea\nto add a new GUC parameter that we’re not sure how to tune.\n\nIIUC skipping index vacuum when less than 2% of relation pages have at\nleast one LP_DEAD item is a table’s optimization rather than a btree\nindex’s optimization. Since we’re not likely to set many pages\nall-visible or collect many dead tuples in that case, we can skip\nindex vacuuming and table vacuuming. IIUC this case, fortunately, goes\nwell together with btree indexes’ bottom-up deletion. If this skipping\nbehavior badly affects other index AMs, this optimization should be\nconsidered within btree indexes, although we will need a way for index\nAMs to consider and tell the vacuum strategy. But I guess this works\nwell in most cases so having an on/off switch suffices.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 18 May 2021 23:28:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Tue, May 18, 2021 at 7:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I prefer to have an on/off switch just in case. I remember I also\n> commented the same thing before. We’ve discussed a way to control\n> whether or not to enable the skipping optimization by adding a new\n> mode to INDEX_CLEANUP option, as Peter mentioned. For example, we can\n> use the new mode “auto” (or “smart”) mode by default, enabling all\n> skipping optimizations, and specifying “on” disables them. Or we can\n> add “force” mode to disable all skipping optimizations while leaving\n> the existing modes as they are. Anyway, I think it’s not a good idea\n> to add a new GUC parameter that we’re not sure how to tune.\n>\n> IIUC skipping index vacuum when less than 2% of relation pages with at\n> least one LP_DEAD is a table’s optimization rather than a btree\n> index’s optimization.\n\nRight. There *is* an excellent way to tune this behavior: by adjusting\nheap fillfactor to make HOT more effective. That was why I started\nthis thread!\n\nIf you leave heap fillfactor at the default of 100, and have lots of\nupdates (that don't modify indexed columns) and no deletes, then\nyou're almost certainly not going to have VACUUM skip indexes anyway\n-- in practice you're bound to exceed having 2% of pages with an\nLP_DEAD item before very long. Tuning heap fillfactor is practically\nessential to see a real benefit, regardless of the exact\nBYPASS_THRESHOLD_PAGES. (There may be some rare exceptions, but for\nthe most part this mechanism helps with tables that get many updates\nthat are expected to use HOT, and will use HOT barring a tiny number\nof cases where the new tuple won't' quite fit, etc.)\n\nThe idea of tuning the behavior directly (e.g. with a reloption that\nlets the user specify a BYPASS_THRESHOLD_PAGES style threshold) is\nexactly backwards. The point for the user should not be to skip\nindexes during VACUUM. 
The point for the user is to get lots of\nnon-HOT updates to *avoid heap fragmentation*, guided by the new\nautovacuum instrumentation. That also means that there will be much\nless index vacuuming. But that's a pretty minor side-benefit. Why\nshould the user *expect* largely unnecessary index vacuuming to take\nplace?\n\nTo put it another way, the index bypass mechanism added to\nvacuumlazy.c was not intended to add a new good behavior. It was\nactually intended to subtract an old bad behavior. The patch is mostly\nuseful because it allows the user to make VACUUM *more* aggressive\nwith freezing and VM bit setting (not less aggressive with indexes).\nThe BYPASS_THRESHOLD_PAGES threshold of 0.02 is a little arbitrary --\nbut only a little.\n\n> Since we’re not likely to set many pages\n> all-visible or collect many dead tuples in that case, we can skip\n> index vacuuming and table vacuuming. IIUC this case, fortunately, goes\n> well together btree indexes’ bottom-up deletion.\n\nIt's true that bottom-up index deletion provides additional insurance\nagainst problems, but I don't think that that insurance is strictly\nnecessary. It's nice to have insurance, though.\n\n> If this skipping\n> behavior badly affects other indexes AMs, this optimization should be\n> considered within btree indexes, although we will need a way for index\n> AMs to consider and tell the vacuum strategy. But I guess this works\n> well in most cases so having an on/off switch suffice.\n\nRight. I doubt that it will actually turn out to be necessary to have\nsuch a switch. But I try to be modest when it comes to predicting what\nwill be important to some user workload -- it's way too complicated to\nhave total confidence about something like that. It is a risk to be\nmanaged.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 May 2021 14:08:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Wed, May 19, 2021 at 6:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, May 18, 2021 at 7:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > If this skipping\n> > behavior badly affects other indexes AMs, this optimization should be\n> > considered within btree indexes, although we will need a way for index\n> > AMs to consider and tell the vacuum strategy. But I guess this works\n> > well in most cases so having an on/off switch suffice.\n>\n> Right. I doubt that it will actually turn out to be necessary to have\n> such a switch. But I try to be modest when it comes to predicting what\n> will be important to some user workload -- it's way too complicated to\n> have total confidence about something like that. It is a risk to be\n> managed.\n\nI think the possible side effect of this hard-coded\nBYPASS_THRESHOLD_PAGES would be that by default, bulkdelete is not\ncalled for a long time and the index becomes bloated. IOW, we will\nforce users to have index bloat corresponding to 2% of table pages.\nThe bloat could be serious depending on the index tuple size (e.g., an\nindex including many columns). The user may have been running\nautovacuums aggressively on that table to prevent index bloat, but\nthat's no longer possible and there is no choice. So I think that for\nthose (relatively) rare use cases, it's good to provide a way to\nsomehow control it. Fortunately, an on/off switch is likely to be\nuseful for controlling other optimizations that could be added in the\nfuture.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 May 2021 15:33:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Sun, May 23, 2021 at 11:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think the possible side effect of this hard-coded\n> BYPASS_THRESHOLD_PAGES would be that by default, bulkdelete is not\n> called for a long term and the index becomes bloat.\n\nWhat do you think of the approach taken in the attached POC patch?\n\nThe patch makes it possible to disable the optimization by\ngeneralizing the INDEX_CLEANUP reloption to be an enum that looks like\na trinary boolean (not just a plain boolean). INDEX_CLEANUP now accepts\nthe values 'auto', 'on', and 'off' (plus a variety of alternative\nspellings, the usual ones for booleans in Postgres). Now 'auto' is the\ndefault, and 'on' forces the previous behavior inside vacuumlazy.c. It\ndoes not disable the failsafe, though -- INDEX_CLEANUP remains a\nfairly mechanical thing.\n\nThis approach seems good to me because INDEX_CLEANUP remains\nconsistent with the original purpose and design of INDEX_CLEANUP --\nthat was always an option that forced VACUUM to do something special\nwith indexes. I don't see much downside to this approach, either. As\nthings stand, INDEX_CLEANUP is mostly superseded by the failsafe, so\nwe don't really need to talk about wraparound emergencies in the docs\nfor INDEX_CLEANUP anymore. This seems much more elegant than either\nrepurposing/reviving cleanup_index_scale_factor (which makes no sense\nto me at all) or inventing a new reloption (which would itself be in\ntension with INDEX_CLEANUP).\n\nThere are some practical issues that make this patch surprisingly\ncomplicated for such a simple problem. For example, I hope that I\nhaven't missed any subtlety in generalizing a boolean reloption like\nthis. We've done similar things with GUCs in the past, but this may be\na little different. Another concern with this approach is what it\nmeans for the VACUUM command itself. I haven't added an 'auto'\nspelling that is accepted by the VACUUM command in this POC version.\nBut do I need to at all? Can that just be implied by not having any\nINDEX_CLEANUP option? And does StdRdOptions.vacuum_truncate now need\nto become a VacOptTernaryValue field too, for consistency with the new\ndefinition of StdRdOptions.vacuum_index_cleanup?\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 27 May 2021 17:52:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Fri, May 28, 2021 at 9:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, May 23, 2021 at 11:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I think the possible side effect of this hard-coded\n> > BYPASS_THRESHOLD_PAGES would be that by default, bulkdelete is not\n> > called for a long term and the index becomes bloat.\n>\n> What do you think of the approach taken in the attached POC patch?\n>\n> The patch makes it possible to disable the optimization by\n> generalizing the INDEX_CLEANUP reloption to be an enum that looks like\n> a ternary boolean (not just a plain boolean). INDEX_CLEANUP now accepts\n> the values 'auto', 'on', and 'off' (plus a variety of alternative\n> spellings, the usual ones for booleans in Postgres). Now 'auto' is the\n> default, and 'on' forces the previous behavior inside vacuumlazy.c. It\n> does not disable the failsafe, though -- INDEX_CLEANUP remains a\n> fairly mechanical thing.\n\n+1\n\n>\n> This approach seems good to me because INDEX_CLEANUP remains\n> consistent with the original purpose and design of INDEX_CLEANUP --\n> that was always an option that forced VACUUM to do something special\n> with indexes. I don't see much downside to this approach, either. As\n> things stand, INDEX_CLEANUP is mostly superseded by the failsafe, so\n> we don't really need to talk about wraparound emergencies in the docs\n> for INDEX_CLEANUP anymore. This seems much more elegant than either\n> repurposing/reviving cleanup_index_scale_factor (which makes no sense\n> to me at all) or inventing a new reloption (which would itself be in\n> tension with INDEX_CLEANUP).\n\n+1\n\n>\n> There are some practical issues that make this patch surprisingly\n> complicated for such a simple problem. For example, I hope that I\n> haven't missed any subtlety in generalizing a boolean reloption like\n> this. 
We've done similar things with GUCs in the past, but this may be\n> a little different.\n\n+/* values from HeapOptIndexCleanupMode */\n+relopt_enum_elt_def HeapOptIndexCleanupOptValues[] =\n+{\n+ {\"auto\", VACOPT_TERNARY_DEFAULT},\n+ {\"on\", VACOPT_TERNARY_ENABLED},\n+ {\"off\", VACOPT_TERNARY_DISABLED},\n+ {\"true\", VACOPT_TERNARY_ENABLED},\n+ {\"false\", VACOPT_TERNARY_DISABLED},\n+ {\"1\", VACOPT_TERNARY_ENABLED},\n+ {\"0\", VACOPT_TERNARY_DISABLED},\n+ {(const char *) NULL} /* list terminator */\n+};\n\nWe need to accept \"yes\" and \"no\" too? Currently, the parsing of a\nboolean type reloption accepts those words.\n\n> Another concern with this approach is what it\n> means for the VACUUM command itself. I haven't added an 'auto'\n> spelling that is accepted by the VACUUM command in this POC version.\n> But do I need to at all? Can that just be implied by not having any\n> INDEX_CLEANUP option?\n\nIt seems to me that it's better to have INDEX_CLEANUP option of VACUUM\ncommand support AUTO for consistency. Do you have any concerns about\nsupporting it?\n\n> And does StdRdOptions.vacuum_truncate now need\n> to become a VacOptTernaryValue field too, for consistency with the new\n> definition of StdRdOptions.vacuum_index_cleanup?\n\nWe don't have the bypass optimization for heap truncation, unlike\nindex vacuuming. So I think we can leave both vacuum_truncate\nreloption and TRUNCATE option as boolean parameters.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 31 May 2021 10:30:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, May 31, 2021 at 10:30:08AM +0900, Masahiko Sawada wrote:\n> On Fri, May 28, 2021 at 9:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Another concern with this approach is what it\n>> means for the VACUUM command itself. I haven't added an 'auto'\n>> spelling that is accepted by the VACUUM command in this POC version.\n>> But do I need to at all? Can that just be implied by not having any\n>> INDEX_CLEANUP option?\n>\n> It seems to me that it's better to have INDEX_CLEANUP option of VACUUM\n> command support AUTO for consistency. Do you have any concerns about\n> supporting it?\n\nI have read through the patch, and I am surprised to see that this\nonly makes it possible to control the optimization at the relation\nlevel. The origin of the complaints is that this index cleanup\noptimization has been introduced as a new rule that gets enforced at\n*system* level, so I think that we should have an equivalent GUC to\ncontrol the behavior for the whole system. With what you are\npresenting here, one could only disable the optimization for each\nrelation, one-by-one. If this optimization proves to be a problem,\nit's just going to be harder for users to go through all the relations\nand re-tune autovacuum. Am I missing something?\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 15:15:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 3:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 31, 2021 at 10:30:08AM +0900, Masahiko Sawada wrote:\n> > On Fri, May 28, 2021 at 9:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >> Another concern with this approach is what it\n> >> means for the VACUUM command itself. I haven't added an 'auto'\n> >> spelling that is accepted by the VACUUM command in this POC version.\n> >> But do I need to at all? Can that just be implied by not having any\n> >> INDEX_CLEANUP option?\n> >\n> > It seems to me that it's better to have INDEX_CLEANUP option of VACUUM\n> > command support AUTO for consistency. Do you have any concerns about\n> > supporting it?\n>\n> I have read through the patch, and I am surprised to see that this\n> only makes possible to control the optimization at relation level.\n> The origin of the complaints is that this index cleanup optimization\n> has been introduced as a new rule that gets enforced at *system*\n> level, so I think that we should have an equivalent with a GUC to\n> control the behavior for the whole system. With what you are\n> presenting here, one could only disable the optimization for each\n> relation, one-by-one. If this optimization proves to be a problem,\n> it's just going to be harder to users to go through all the relations\n> and re-tune autovacuum. Am I missing something?\n\nI hadn't thought about disabling that optimization at the system\nlevel. I think we can use this option for particular tables whose\nindexes become unexpectedly bloated due to this optimization.\nSimilarly, we have the DISABLE_PAGE_SKIPPING option but don’t have a\nway to disable lazy vacuum’s page skipping behavior at the system\nlevel.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Jun 2021 20:12:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 11:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I have read through the patch, and I am surprised to see that this\n> only makes possible to control the optimization at relation level.\n> The origin of the complaints is that this index cleanup optimization\n> has been introduced as a new rule that gets enforced at *system*\n> level, so I think that we should have an equivalent with a GUC to\n> control the behavior for the whole system.\n\n*Why* does it have to work at the system level? I don't understand\nwhat you mean about the system level.\n\nAs Masahiko pointed out, adding a GUC isn't what we've done in other\nsimilar cases -- that's how DISABLE_PAGE_SKIPPING works, which was a\ndefensive option that seems similar enough to what we want to add now.\nTo give another example, the TRUNCATE VACUUM option (or the related\nreloption) can be used to disable relation truncation, a behavior that\nsometimes causes *big* issues in production. The truncate behavior is\ndetermined dynamically in most situations -- which is another\nsimilarity to the optimization we've added here.\n\nWhy is this fundamentally different to those two things?\n\n> With what you are\n> presenting here, one could only disable the optimization for each\n> relation, one-by-one. If this optimization proves to be a problem,\n> it's just going to be harder to users to go through all the relations\n> and re-tune autovacuum. Am I missing something?\n\nWhy would you expect autovacuum to run even when the optimization is\nunavailable (e.g. with Postgres 13)? After all, the specifics of when\nthe bypass optimization kicks in make it very unlikely that ANALYZE\nwill ever be able to notice enough dead tuples to trigger an\nautovacuum (barring antiwraparound and insert-driven autovacuums).\nThere will probably be very few LP_DEAD items remaining. 
Occasionally\nthere will be somewhat more LP_DEAD items, that happen to be\nconcentrated in less than 2% of the table's blocks -- but block-based\nsampling by ANALYZE is likely to miss most of them and underestimate\nthe total number. The sampling approach taken by acquire_sample_rows()\nensures this with larger tables. With small tables the chances of the\noptimization kicking in are very low, unless perhaps fillfactor has\nbeen tuned very aggressively.\n\nThere has never been a guarantee that autovacuum will be triggered\n(and do index vacuuming) in cases that have very few LP_DEAD items, no\nmatter how the system has been tuned. The main reason why making the\noptimization behavior controllable is for the VACUUM command.\nPrincipally for hackers. I can imagine myself using the VACUUM option\nto disable the optimization when I was interested in testing VACUUM or\nspace utilization in some specific, narrow way.\n\nOf course it's true that there is still some uncertainty about the\noptimization harming production workloads -- that is to be expected\nwith an enhancement like this one. But there is still no actual\nexample or test case that shows the optimization doing the wrong\nthing, or anything like it. Anything is possible, but I am not\nexpecting there to be even one user complaint about the feature.\nNaturally I don't want to add something as heavyweight as a GUC, given\nall that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Jun 2021 14:46:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Sun, May 30, 2021 at 6:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Another concern with this approach is what it\n> > means for the VACUUM command itself. I haven't added an 'auto'\n> > spelling that is accepted by the VACUUM command in this POC version.\n> > But do I need to at all? Can that just be implied by not having any\n> > INDEX_CLEANUP option?\n>\n> It seems to me that it's better to have INDEX_CLEANUP option of VACUUM\n> command support AUTO for consistency. Do you have any concerns about\n> supporting it?\n\nI suppose we should have it. But now we have to teach vacuumdb about\nthis new boolean-like enum too. It's a lot more new code than I would\nhave preferred, but I suppose that it makes sense.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Jun 2021 15:28:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 02:46:20PM -0700, Peter Geoghegan wrote:\n> On Thu, Jun 3, 2021 at 11:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I have read through the patch, and I am surprised to see that this\n>> only makes possible to control the optimization at relation level.\n>> The origin of the complaints is that this index cleanup optimization\n>> has been introduced as a new rule that gets enforced at *system*\n>> level, so I think that we should have an equivalent with a GUC to\n>> control the behavior for the whole system.\n> \n> *Why* does it have to work at the system level? I don't understand\n> what you mean about the system level.\n\nI mean that you lack a GUC that allows one to enforce *not* using this\noptimization for all relations, for all processes.\n\n> As Masahiko pointed out, adding a GUC isn't what we've done in other\n> similar cases -- that's how DISABLE_PAGE_SKIPPING works, which was a\n> defensive option that seems similar enough to what we want to add now.\n> To give another example, the TRUNCATE VACUUM option (or the related\n> reloption) can be used to disable relation truncation, a behavior that\n> sometimes causes *big* issues in production. The truncate behavior is\n> determined dynamically in most situations -- which is another\n> similarity to the optimization we've added here.\n\n> Why is this fundamentally different to those two things?\n\nBecause the situation looks completely different to me here. TRUNCATE\nis thought of as an option to be able to avoid an exclusive lock when\ntruncating the relation file size at the end of VACUUM. More\nimportantly the default of TRUNCATE is *false*, meaning that we are\nnever going to skip the truncation unless one specifies it at the\nrelation level. \n\nHere, what we have is a decision that is enforced to happen by\ndefault, all the time, with the user not knowing about it. 
If there\nis a bug or an issue with it, users, based on your proposal, would be\nforced to change it for each *relation*. If they miss some of those\nrelations, they may still run into problems without knowing about it.\nThe change of default behavior and having no way to control it in\na simple way look incompatible to me.\n\n>> With what you are\n>> presenting here, one could only disable the optimization for each\n>> relation, one-by-one. If this optimization proves to be a problem,\n>> it's just going to be harder to users to go through all the relations\n>> and re-tune autovacuum. Am I missing something?\n> \n> Why would you expect autovacuum to run even when the optimization is\n> unavailable (e.g. with Postgres 13)? After all, the specifics of when\n> the bypass optimization kicks in make it very unlikely that ANALYZE\n> will ever be able to notice enough dead tuples to trigger an\n> autovacuum (barring antiwraparound and insert-driven autovacuums).\n>\n> There will probably be very few LP_DEAD items remaining. Occasionally\n> there will be somewhat more LP_DEAD items, that happen to be\n> concentrated in less than 2% of the table's blocks -- but block-based\n> sampling by ANALYZE is likely to miss most of them and underestimate\n> the total number. The sampling approach taken by acquire_sample_rows()\n> ensures this with larger tables. With small tables the chances of the\n> optimization kicking in are very low, unless perhaps fillfactor has\n> been tuned very aggressively.\n> \n> There has never been a guarantee that autovacuum will be triggered\n> (and do index vacuuming) in cases that have very few LP_DEAD items, no\n> matter how the system has been tuned. The main reason why making the\n> optimization behavior controllable is for the VACUUM command.\n> Principally for hackers. 
I can imagine myself using the VACUUM option\n> to disable the optimization when I was interested in testing VACUUM or\n> space utilization in some specific, narrow way.\n> \n> Of course it's true that there is still some uncertainty about the\n> optimization harming production workloads -- that is to be expected\n> with an enhancement like this one. But there is still no actual\n> example or test case that shows the optimization doing the wrong\n> thing, or anything like it. Anything is possible, but I am not\n> expecting there to be even one user complaint about the feature.\n> Naturally I don't want to add something as heavyweight as a GUC, given\n> all that.\n\nPerhaps. What I am really scared about is that you are assuming that\nenforcing this decision will *always* be fine. What I am trying to\nsay here is that it *may not* be fine for everybody, and that there\nshould be an easy way to turn it off if that proves to be a problem.\nI don't quite see how that's an implementation problem; we already\nhave many reloptions that are controlled with GUCs if the\nreloptions have no default.\n\nI think that a more careful implementation choice would have been to\nturn this optimization off by default, while having an option to allow\none to turn it on at will.\n--\nMichael",
"msg_date": "Tue, 15 Jun 2021 09:23:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Mon, Jun 14, 2021 at 5:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > *Why* does it have to work at the system level? I don't understand\n> > what you mean about the system level.\n>\n> I mean that you lack a GUC that allows to enforce to *not* use this\n> optimization for all relations, for all processes.\n\nYou've just explained what a GUC is. This is not helpful.\n\n> > Why is this fundamentally different to those two things?\n>\n> Because the situation looks completely different to me here. TRUNCATE\n> is thought as a option to be able to avoid an exclusive lock when\n> truncating the relation file size at the end of VACUUM. More\n> importantly the default of TRUNCATE is *false*, meaning that we are\n> never going to skip the truncation unless one specifies it at the\n> relation level.\n\nMaybe it looks different to you because that's not actually true;\nVACUUM *does* skip the truncation when it feels like it, regardless of\nthe value of the reloption. In general there is no strict guarantee of\ntruncation ever happening -- see lazy_truncate_heap().\n\nAgain: Why is this fundamentally different?\n\n> Here, what we have is a decision that is enforced to happen by\n> default, all the time, with the user not knowing about it. If there\n> is a bug of an issue with it, users, based on your proposal, would be\n> forced to change it for each *relation*. If they miss some of those\n> relations, they may still run into problems without knowing about it.\n> The change of default behavior and having no way to control it in\n> a simple way look incompatible to me.\n\nYou've just explained what a reloption is. Again, this is not helping.\n\n> Perhaps. What I am really scared about is that you are assuming that\n> enforcing this decision will be *always* fine.\n\nI very clearly and repeatedly said that there was uncertainty about\ncausing issues in rare real world cases. 
Are you always 100% certain\nthat your code has no bugs before you commit it?\n\nShould I now add a GUC for every single feature that I commit? You are\njust asserting that we must need to add a GUC, without giving any real\nreasons -- you're just giving generic reasons that work just as well\nin most situations. I'm baffled by this.\n\n> What I am trying to\n> say here is that it *may not* be fine for everybody, and that there\n> should be an easy way to turn it off if that proves to be a problem.\n\nAs I said, I think that the reloption is both necessary and\nsufficient. A GUC is a heavyweight solution that seems quite\nunnecessary.\n\n> I don't quite see how that's an implementation problem, we have\n> already many reloptions that are controlled with GUCs if the\n> reloptions have no default.\n\nI never said that there was an implementation problem with a GUC. Just\nthat it was unnecessary, and not consistent with existing practice.\n\nDoes anyone else have an opinion on this? Of course I can easily add a\nGUC. But I won't do so in the absence of any real argument in favor of\nit.\n\n> I think that a more careful choice implementation would have been to\n> turn this optimization off by default, while having an option to allow\n> one to turn it on at will.\n\nYou have yet to say anything about the implementation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Jun 2021 19:46:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Sun, May 30, 2021 at 6:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> We need to accept \"yes\" and \"no\" too? Currently, the parsing of a\n> boolean type reloption accepts those words.\n\nAdded those in the attached revision, version 2. This is much closer\nto being commitable than v1 was. I plan on committing this in the next\nseveral days.\n\nI probably need to polish the documentation some more, though.\n\n> It seems to me that it's better to have INDEX_CLEANUP option of VACUUM\n> command support AUTO for consistency. Do you have any concerns about\n> supporting it?\n\nv2 sorts out the mess with VacOptTernaryValue by just adding a new\nenum constant to VacOptTernaryValue, called VACOPT_TERNARY_AUTO -- the\nenum still has a distinct VACOPT_TERNARY_DEFAULT value. v2 also adds a\nnew reloption-specific enum, StdRdOptIndexCleanup, which is the\ndatatype that we actually use inside the StdRdOptions struct. So we\nare now able to specify \"VACUUM (INDEX_CLEANUP AUTO)\" in v2 of the\npatch.\n\nv2 also adds a new option to vacuumdb, --force-index-cleanup. This\nseemed to make sense because we already have a --no-index-cleanup\noption.\n\n> > And does StdRdOptions.vacuum_truncate now need\n> > to become a VacOptTernaryValue field too, for consistency with the new\n> > definition of StdRdOptions.vacuum_index_cleanup?\n>\n> We don't have the bypass optimization for heap truncation, unlike\n> index vacuuming. So I think we can leave both vacuum_truncate\n> reloption and TRUNCATE option as boolean parameters.\n\nActually FWIW we do have a bypass optimization for TRUNCATE -- it too\nhas an always-on dynamic behavior -- so it really is like the index\nvacuuming thing. In theory it might make sense to have the same \"auto,\non, off\" thing, just like with index vacuuming in the patch. 
However,\nI haven't done that in the patch because in practice it's a bad idea.\nIf we offered users the option of truly forcing truncation, then\nlazy_truncate_heap() could just insist on truncating. It would have to\njust wait for an AEL, no matter how long it took. That would probably\nbe dangerous because waiting for an AEL without backing out in VACUUM\njust isn't a great idea.\n\nThanks\n\n--\nPeter Geoghegan",
"msg_date": "Wed, 16 Jun 2021 18:53:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 10:54 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, May 30, 2021 at 6:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > We need to accept \"yes\" and \"no\" too? Currently, the parsing of a\n> > boolean type reloption accepts those words.\n>\n> Added those in the attached revision, version 2. This is much closer\n> to being commitable than v1 was. I plan on committing this in the next\n> several days.\n>\n> I probably need to polish the documentation some more, though.\n>\n> > It seems to me that it's better to have INDEX_CLEANUP option of VACUUM\n> > command support AUTO for consistency. Do you have any concerns about\n> > supporting it?\n>\n> v2 sorts out the mess with VacOptTernaryValue by just adding a new\n> enum constant to VacOptTernaryValue, called VACOPT_TERNARY_AUTO -- the\n> enum still has a distinct VACOPT_TERNARY_DEFAULT value. v2 also adds a\n> new reloption-specific enum, StdRdOptIndexCleanup, which is the\n> datatype that we actually use inside the StdRdOptions struct. So we\n> are now able to specify \"VACUUM (INDEX_CLEANUP AUTO)\" in v2 of the\n> patch.\n>\n> v2 also adds a new option to vacuumdb, --force-index-cleanup. This\n> seemed to make sense because we already have a --no-index-cleanup\n> option.\n>\n> > > And does StdRdOptions.vacuum_truncate now need\n> > > to become a VacOptTernaryValue field too, for consistency with the new\n> > > definition of StdRdOptions.vacuum_index_cleanup?\n> >\n> > We don't have the bypass optimization for heap truncation, unlike\n> > index vacuuming. So I think we can leave both vacuum_truncate\n> > reloption and TRUNCATE option as boolean parameters.\n>\n> Actually FWIW we do have a bypass optimization for TRUNCATE -- it too\n> has an always-on dynamic behavior -- so it really is like the index\n> vacuuming thing. In theory it might make sense to have the same \"auto,\n> on, off\" thing, just like with index vacuuming in the patch. 
However,\n> I haven't done that in the patch because in practice it's a bad idea.\n> If we offered users the option of truly forcing truncation, then\n> lazy_truncate_heap() could just insist on truncating. It would have to\n> just wait for an AEL, no matter how long it took. That would probably\n> be dangerous because waiting for an AEL without backing out in VACUUM\n> just isn't a great idea.\n\nI agree that it doesn't make sense to force heap truncation.\n\nThank you for updating the patch! Here are comments on the v2 patch:\n\n typedef enum VacOptTernaryValue\n {\n VACOPT_TERNARY_DEFAULT = 0,\n+ VACOPT_TERNARY_AUTO,\n VACOPT_TERNARY_DISABLED,\n VACOPT_TERNARY_ENABLED,\n } VacOptTernaryValue;\n\nVacOptTernaryValue is no longer a ternary value. Can we rename it\nsomething like VacOptValue?\n\n---\n+ if (vacopts->force_index_cleanup)\n {\n- /* INDEX_CLEANUP is supported since v12 */\n+ /*\n+ * \"INDEX_CLEANUP TRUE\" has been supported since v12. Though\n+ * the --force-index-cleanup vacuumdb option was only added in\n+ * v14, it still works in the same way on v12+.\n+ */\n Assert(serverVersion >= 120000);\n+ Assert(!vacopts->no_index_cleanup);\n appendPQExpBuffer(sql, \"%sINDEX_CLEANUP FALSE\", sep);\n sep = comma;\n }\n\nWe should specify TRUE instead.\n\n---\n--force-index-cleanup option isn't shown in the help message.\n\n---\nI think we can also improve the tab completion for the INDEX_CLEANUP option.\n\n---\n@@ -32,7 +32,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [\n<replaceable class=\"paramet\n ANALYZE [ <replaceable class=\"parameter\">boolean</replaceable> ]\n DISABLE_PAGE_SKIPPING [ <replaceable\nclass=\"parameter\">boolean</replaceable> ]\n SKIP_LOCKED [ <replaceable class=\"parameter\">boolean</replaceable> ]\n- INDEX_CLEANUP [ <replaceable class=\"parameter\">boolean</replaceable> ]\n+ INDEX_CLEANUP [ <replaceable class=\"parameter\">enum</replaceable> ]\n PROCESS_TOAST [ <replaceable class=\"parameter\">boolean</replaceable> ]\n TRUNCATE [ 
<replaceable class=\"parameter\">boolean</replaceable> ]\n PARALLEL <replaceable class=\"parameter\">integer</replaceable>\n\nHow about listing the available values of INDEX_CLEANUP here instead\nof enum? For example, we do a similar thing in the description of\nFORMAT option of EXPLAIN command. It would be easier to perceive all\navailable values.\n\n---\n+ <varlistentry>\n+ <term><option>--no-index-cleanup</option></term>\n+ <listitem>\n+ <para>\n\nIt should be --force-index-cleanup.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 17 Jun 2021 18:14:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "+ <literal>AUTO</literal>. With <literal>OFF</literal> index\n+ cleanup is disabled, with <literal>ON</literal> it is enabled,\n\nOFF comma\n\n+ bypassing index cleanup in cases where there is more than zero\n+ dead tuples.\n\n*are* more than zero\nOr (preferably): \"cases when there are no dead tuples at all\"\n\n+ If <literal>INDEX_CLEANUP</literal> is set to\n+ <literal>OFF</literal> performance may suffer, because as the\n\nOFF comma\n\n+ removed until index cleanup is completed. This option has no\n+ effect for tables that do not have an index and is ignored if\n+ the <literal>FULL</literal> option is used.\n\nI'd say \"tables that have no indexes,\"\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 17 Jun 2021 07:55:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 2:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Thank you for updating the patch! Here are comments on v2 patch:\n\nThanks for the review!\n\nAttached is v3, which has all the changes that you suggested (plus the\ndoc stuff from Justin).\n\nI also renamed the \"default\" VacOptTernaryValue (actually now called\nVacOptValue) value -- it seems clearer to call this \"unspecified\".\nBecause it represents a state that has nothing to do with the default\nof the reloption or GUC. Really, it means \"VACUUM command didn't have\nthis specified explicitly\" (note that this means that it always starts\nout \"default\" in an autovacuum worker). Unspecified seems much clearer\nbecause it directly expresses \"fall back on the reloption, and then\nfall back on the reloption's default\". I find this much clearer -- it\nis unspecified, but will have to *become* specified later, so that\nvacuumlazy.c has a truly usable value (\"unspecified\" is never usable\nin vacuumlazy.c).\n\n> VacOptTernaryValue is no longer a ternary value. Can we rename it\n> something like VacOptValue?\n\nAs I said, done that way.\n\n> We should specify TRUE instead.\n\nOoops. Fixed.\n\n> --force-index-cleanup option isn't shown in the help message.\n\nFixed.\n\n> ---\n> I think we also improve the tab completion for INDEX_CLEANUP option.\n\nFixed.\n\n> How about listing the available values of INDEX_CLEANUP here instead\n> of enum? For example, we do a similar thing in the description of\n> FORMAT option of EXPLAIN command. It would be easier to perceive all\n> available values.\n\nThat looks much better. Fixed.\n\n> It should be --force-index-cleanup.\n\nFixed.\n\n--\nPeter Geoghegan",
"msg_date": "Thu, 17 Jun 2021 19:26:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 5:55 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> (Various sgml typos)\n\nFixed in the v3 I just posted.\n\n> + removed until index cleanup is completed. This option has no\n> + effect for tables that do not have an index and is ignored if\n> + the <literal>FULL</literal> option is used.\n>\n> I'd say \"tables that have no indexes,\"\n\nThat wording wasn't mine (it just happened to be moved around by\nreformatting), but I think you're right. I went with your suggestion.\n\nThanks for taking a look\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 17 Jun 2021 19:27:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 7:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Thanks for the review!\n>\n> Attached is v3, which has all the changes that you suggested (plus the\n> doc stuff from Justin).\n\nJust pushed a version of that with much improved documentation.\n\nThanks again\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 18 Jun 2021 20:05:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "\n\n> On Jun 14, 2021, at 7:46 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> Does anyone else have an opinion on this? Of course I can easily add a\n> GUC. But I won't do so in the absence of any real argument in favor of\n> it.\n\nI'd want to see some evidence that the GUC is necessary. (For that matter, why is a per relation setting necessary?) Is there a reproducible pathological case, perhaps with a pgbench script, to demonstrate the need? I'm not asking whether there might be some regression, but rather whether somebody wants to construct a worst-case pathological case and publish quantitative results about how bad it is.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 20 Jun 2021 09:22:06 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
},
{
"msg_contents": "On Sun, Jun 20, 2021 at 9:22 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I'd want to see some evidence that the GUC is necessary. (For that matter, why is a per relation setting necessary?) Is there a reproducible pathological case, perhaps with a pgbench script, to demonstrate the need? I'm not asking whether there might be some regression, but rather whether somebody wants to construct a worst-case pathological case and publish quantitative results about how bad it is.\n\nOne clear argument in favor of the VACUUM option (not so much the\nreloption) is that it enables certain testing scenarios.\n\nFor example, I was recently using pg_visibility to do a low-level\nanalysis of how visibility map bits were getting set with a test case\nthat built on the BenchmarkSQL fair-use TPC-C implementation. The\noptimization was something that I noticed in certain scenarios -- I\ncould have used the option of disabling it at the VACUUM command level\njust to get a perfectly clean slate. A small fraction of the pages in\nthe table to not be set all-visible, which would be inconsequential to\nusers but was annoying in the context of this particular test\nscenario.\n\nThe way the optimization works will only ever leave an affected table\nin a state where the LP_DEAD items left behind would be highly\nunlikely to be counted by ANALYZE. They would not be counted\naccurately anyway, either because they're extremely few in number or\nbecause there are relatively many that are concentrated in just a few\nheap blocks -- that's how block-based sampling by ANALYZE works.\n\nIn short, even if there really was a performance problem implied by\nthe bypass indexes optimization, it seems unlikely that autovacuum\nwould run in the first place to take care of it, with or without the\noptimization. Even if autovacuum_vacuum_scale_factor were set very\naggressively. VACUUM (really autovacuum) just doesn't tend to work at\nthat level of precision.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 20 Jun 2021 09:55:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Teaching users how they can get the most out of HOT in Postgres\n 14"
}
] |
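The "bypass indexes" behaviour Peter describes in the thread above can be modelled as a simple threshold test: index vacuuming is skipped when the LP_DEAD items left behind are confined to a very small fraction of the table's heap pages. The sketch below is illustrative only; the 2% cutoff is an assumed value for illustration, not necessarily the exact constant used in vacuumlazy.c.

```python
def should_bypass_index_vacuum(rel_pages, lpdead_item_pages, bypass_threshold=0.02):
    """Decide whether index vacuuming can be skipped.

    Sketch of the idea from the thread: when pages holding LP_DEAD
    items make up only a tiny fraction of the table, skipping index
    vacuuming leaves little behind. The 2% default is an assumption
    for illustration, not the exact vacuumlazy.c constant.
    """
    if rel_pages == 0:
        return True  # empty table: nothing for index vacuuming to do
    return lpdead_item_pages < rel_pages * bypass_threshold
```

This also matches Peter's point about autovacuum: a table in this state is unlikely to cross autovacuum's own thresholds in the first place.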
[
{
"msg_contents": "PSA a patch to fix a typo found on this page [1],\n\n\"preapre_end_lsn\" -> \"prepare_end_lsn\"\n\n------\n[1] https://www.postgresql.org/docs/devel/logicaldecoding-output-plugin.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 13 Apr 2021 14:23:23 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "PG Docs - logical decoding output plugins - fix typo"
},
{
"msg_contents": "\n\nOn 2021/04/13 13:23, Peter Smith wrote:\n> PSA a patch to fix a typo found on this page [1],\n> \n> \"preapre_end_lsn\" -> \"prepare_end_lsn\"\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 13 Apr 2021 14:23:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: PG Docs - logical decoding output plugins - fix typo"
}
] |
[
{
"msg_contents": "Currently standard pgbench scenario produces transaction serialize\nerrors \"could not serialize access due to concurrent update\" if\nPostgreSQL runs in REPEATABLE READ or SERIALIZABLE level, and the\nsession aborts. In order to achieve meaningful results even in these\ntransaction isolation levels, I would like to propose an automatic\nretry feature if \"could not serialize access due to concurrent update\"\nerror occurs.\n\nProbably just adding a switch to retry is not enough, maybe retry\nmethod (random interval etc.) and max retry number are needed to be\nadded.\n\nI would like to hear your thoughts,\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 13 Apr 2021 14:51:48 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Retry in pgbench"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 5:51 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> Currently standard pgbench scenario produces transaction serialize\n> errors \"could not serialize access due to concurrent update\" if\n> PostgreSQL runs in REPEATABLE READ or SERIALIZABLE level, and the\n> session aborts. In order to achieve meaningful results even in these\n> transaction isolation levels, I would like to propose an automatic\n> retry feature if \"could not serialize access due to concurrent update\"\n> error occurs.\n>\n> Probably just adding a switch to retry is not enough, maybe retry\n> method (random interval etc.) and max retry number are needed to be\n> added.\n>\n> I would like to hear your thoughts,\n\nSee also:\n\nhttps://www.postgresql.org/message-id/flat/72a0d590d6ba06f242d75c2e641820ec%40postgrespro.ru\n\n\n",
"msg_date": "Tue, 13 Apr 2021 19:02:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Retry in pgbench"
},
{
"msg_contents": "> On Tue, Apr 13, 2021 at 5:51 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> Currently standard pgbench scenario produces transaction serialize\n>> errors \"could not serialize access due to concurrent update\" if\n>> PostgreSQL runs in REPEATABLE READ or SERIALIZABLE level, and the\n>> session aborts. In order to achieve meaningful results even in these\n>> transaction isolation levels, I would like to propose an automatic\n>> retry feature if \"could not serialize access due to concurrent update\"\n>> error occurs.\n>>\n>> Probably just adding a switch to retry is not enough, maybe retry\n>> method (random interval etc.) and max retry number are needed to be\n>> added.\n>>\n>> I would like to hear your thoughts,\n> \n> See also:\n> \n> https://www.postgresql.org/message-id/flat/72a0d590d6ba06f242d75c2e641820ec%40postgrespro.ru\n\nThanks for the pointer. It seems we need to resume the discussion.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 13 Apr 2021 16:12:59 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Retry in pgbench"
},
{
"msg_contents": "Hi,\n\nOn Tue, 13 Apr 2021 16:12:59 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> [...] \n> [...] \n> [...] \n> \n> Thanks for the pointer. It seems we need to resume the discussion.\n\nBy the way, I've been playing with the idea of failing gracefully and retry\nindefinitely (or until given -T) on SQL error AND connection issue.\n\nIt would be useful to test replicating clusters with a (switch|fail)over\nprocedure.\n\nRegards,\n\n\n",
"msg_date": "Tue, 13 Apr 2021 22:57:40 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Retry in pgbench"
},
{
"msg_contents": "> By the way, I've been playing with the idea of failing gracefully and retry\n> indefinitely (or until given -T) on SQL error AND connection issue.\n> \n> It would be useful to test replicating clusters with a (switch|fail)over\n> procedure.\n\nInteresting idea but in general a failover takes sometime (like a few\nminutes), and it will strongly affect TPS. I think in the end it just\ncompares the failover time.\n\nOr are you suggesting to ignore the time spent in failover?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 16 Apr 2021 10:28:48 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Retry in pgbench"
},
{
"msg_contents": ">> It would be useful to test replicating clusters with a (switch|fail)over\n>> procedure.\n>\n> Interesting idea but in general a failover takes sometime (like a few\n> minutes), and it will strongly affect TPS. I think in the end it just\n> compares the failover time.\n>\n> Or are you suggesting to ignore the time spent in failover?\n\nOr simply to be able to measure it simply from a client perspective? How \nmuch delay is introduced, how long is endured to go back to the previous \ntps level…\n\nMy recollection of Marina patch is that it was non trivial, adding such a \nnew and interesting feature suggests a set of patches, not just one patch.\n\n-- \nFabien.",
"msg_date": "Fri, 16 Apr 2021 07:11:26 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Retry in pgbench"
},
{
"msg_contents": "On Fri, 16 Apr 2021 10:28:48 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > By the way, I've been playing with the idea of failing gracefully and retry\n> > indefinitely (or until given -T) on SQL error AND connection issue.\n> > \n> > It would be useful to test replicating clusters with a (switch|fail)over\n> > procedure. \n> \n> Interesting idea but in general a failover takes sometime (like a few\n> minutes), and it will strongly affect TPS. I think in the end it just\n> compares the failover time.\n\nThis usecase is not about benchmarking. It's about generating constant trafic\nto be able to practice/train some [auto]switchover procedures while being close\nto production activity.\n\nIn this contexte, a max-saturated TPS of one node is not relevant. But being\nable to add some stats about downtime might be a good addition.\n\nRegards,\n\n\n",
"msg_date": "Fri, 16 Apr 2021 15:09:36 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Retry in pgbench"
},
{
"msg_contents": "> This usecase is not about benchmarking. It's about generating constant trafic\n> to be able to practice/train some [auto]switchover procedures while being close\n> to production activity.\n> \n> In this contexte, a max-saturated TPS of one node is not relevant. But being\n> able to add some stats about downtime might be a good addition.\n\nOh I see. That makes sense.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 16 Apr 2021 22:15:08 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Retry in pgbench"
}
] |
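The retry scheme proposed in the thread above (retry a transaction that fails with SQLSTATE 40001, "could not serialize access due to concurrent update", using a random interval and a bounded retry count) can be sketched as a generic client-side loop. This is an illustrative sketch only, not pgbench's implementation; `SerializationFailure` and `flaky_txn` are hypothetical stand-ins for a real driver error and a real transaction.

```python
import random
import time


class SerializationFailure(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""


def run_with_retries(run_transaction, max_tries=5, max_sleep=0.1, sleep=time.sleep):
    """Run a transaction, retrying on serialization failure.

    Retries up to max_tries times, sleeping a random interval
    (the "random interval etc." suggested in the thread) between
    attempts; the final failure is re-raised.
    """
    for attempt in range(1, max_tries + 1):
        try:
            return run_transaction()
        except SerializationFailure:
            if attempt == max_tries:
                raise
            sleep(random.uniform(0, max_sleep))


# Hypothetical transaction that conflicts twice before succeeding.
attempts = []


def flaky_txn():
    attempts.append(1)
    if len(attempts) < 3:
        raise SerializationFailure("could not serialize access due to concurrent update")
    return "committed"
```

For benchmarking, the retry count and the time spent sleeping would also need to be reported separately, as discussed in the linked pgsql-hackers thread.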
[
{
"msg_contents": "Hello,\nAfter upgrading the cluster from 10.x to 13.1 we've started getting a problem describe pgsql-general:\nhttps://www.postgresql.org/message-id/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\nWe've noticed similar issue being described on this list in\nhttps://www.postgresql-archive.org/Logical-replication-CPU-bound-with-TRUNCATE-DROP-CREATE-many-tables-tt6155123.html\nwith a fix being rolled out in 13.2.\n\nAfter the 13.2 release, we've upgraded to it and unfortunately this did not solve the issue - the replication still stalls just as described in the original issue.\nPlease advise, how to debug and solve this issue.",
"msg_date": "Tue, 13 Apr 2021 07:36:33 +0000",
"msg_from": "Krzysztof Kois <krzysztof.kois@unitygroup.com>",
"msg_from_op": true,
"msg_subject": "Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 1:18 PM Krzysztof Kois\n<krzysztof.kois@unitygroup.com> wrote:\n>\n> Hello,\n> After upgrading the cluster from 10.x to 13.1 we've started getting a problem describe pgsql-general:\n> https://www.postgresql.org/message-id/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n> We've noticed similar issue being described on this list in\n> https://www.postgresql-archive.org/Logical-replication-CPU-bound-with-TRUNCATE-DROP-CREATE-many-tables-tt6155123.html\n> with a fix being rolled out in 13.2.\n>\n\nThe fix for the problem discussed in the above threads is committed\nonly in PG-14, see [1]. I don't know what makes you think it is fixed\nin 13.2. Also, it is not easy to back-patch that because this fix\ndepends on some of the infrastructure introduced in PG-14.\n\n[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d7eb52d7181d83cf2363570f7a205b8eb1008dbc\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 15:49:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On 2021-Apr-14, Amit Kapila wrote:\n\n> On Tue, Apr 13, 2021 at 1:18 PM Krzysztof Kois\n> <krzysztof.kois@unitygroup.com> wrote:\n> >\n> > Hello,\n> > After upgrading the cluster from 10.x to 13.1 we've started getting a problem describe pgsql-general:\n> > https://www.postgresql.org/message-id/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n> > We've noticed similar issue being described on this list in\n> > https://www.postgresql-archive.org/Logical-replication-CPU-bound-with-TRUNCATE-DROP-CREATE-many-tables-tt6155123.html\n> > with a fix being rolled out in 13.2.\n> \n> The fix for the problem discussed in the above threads is committed\n> only in PG-14, see [1]. I don't know what makes you think it is fixed\n> in 13.2. Also, it is not easy to back-patch that because this fix\n> depends on some of the infrastructure introduced in PG-14.\n> \n> [1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d7eb52d7181d83cf2363570f7a205b8eb1008dbc\n\nHmm ... On what does it depend (other than plain git conflicts, which\nare aplenty)? On a quick look to the commit, it's clear that we need to\nbe careful in order not to cause an ABI break, but that doesn't seem\nimpossible to solve, but I'm wondering if there is more to it than that.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Tue, 27 Apr 2021 21:18:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Wed, Apr 28, 2021 at 6:48 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-14, Amit Kapila wrote:\n>\n> > On Tue, Apr 13, 2021 at 1:18 PM Krzysztof Kois\n> > <krzysztof.kois@unitygroup.com> wrote:\n> > >\n> > > Hello,\n> > > After upgrading the cluster from 10.x to 13.1 we've started getting a problem describe pgsql-general:\n> > > https://www.postgresql.org/message-id/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n> > > We've noticed similar issue being described on this list in\n> > > https://www.postgresql-archive.org/Logical-replication-CPU-bound-with-TRUNCATE-DROP-CREATE-many-tables-tt6155123.html\n> > > with a fix being rolled out in 13.2.\n> >\n> > The fix for the problem discussed in the above threads is committed\n> > only in PG-14, see [1]. I don't know what makes you think it is fixed\n> > in 13.2. Also, it is not easy to back-patch that because this fix\n> > depends on some of the infrastructure introduced in PG-14.\n> >\n> > [1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d7eb52d7181d83cf2363570f7a205b8eb1008dbc\n>\n> Hmm ... On what does it depend (other than plain git conflicts, which\n> are aplenty)? On a quick look to the commit, it's clear that we need to\n> be careful in order not to cause an ABI break, but that doesn't seem\n> impossible to solve, but I'm wondering if there is more to it than that.\n>\n\nAs mentioned in the commit message, we need another commit [1] change\nto make this work.\n\n[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c55040ccd0\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Apr 2021 08:42:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On 2021-Apr-28, Amit Kapila wrote:\n\n> On Wed, Apr 28, 2021 at 6:48 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Hmm ... On what does it depend (other than plain git conflicts, which\n> > are aplenty)? On a quick look to the commit, it's clear that we need to\n> > be careful in order not to cause an ABI break, but that doesn't seem\n> > impossible to solve, but I'm wondering if there is more to it than that.\n> \n> As mentioned in the commit message, we need another commit [1] change\n> to make this work.\n> \n> [1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c55040ccd0\n\nOh, yeah, that looks tougher. (Still not impossible: it adds a new WAL\nmessage type, but we have added such on a minor release before.)\n\n... It's strange that replication worked for them on pg10 though and\nbroke on 13. What did we change anything to make it so? (I don't have\nany fish to fry on this topic at present, but it seems a bit\nconcerning.)\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Wed, 28 Apr 2021 10:06:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Wed, Apr 28, 2021 at 7:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-28, Amit Kapila wrote:\n>\n> > On Wed, Apr 28, 2021 at 6:48 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > Hmm ... On what does it depend (other than plain git conflicts, which\n> > > are aplenty)? On a quick look to the commit, it's clear that we need to\n> > > be careful in order not to cause an ABI break, but that doesn't seem\n> > > impossible to solve, but I'm wondering if there is more to it than that.\n> >\n> > As mentioned in the commit message, we need another commit [1] change\n> > to make this work.\n> >\n> > [1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c55040ccd0\n>\n> Oh, yeah, that looks tougher. (Still not impossible: it adds a new WAL\n> message type, but we have added such on a minor release before.)\n>\n\nYeah, we can try to make it possible if it is really a pressing issue\nbut I guess even in that case it is better to do it after we release\nPG14 so that it can get some more testing.\n\n> ... It's strange that replication worked for them on pg10 though and\n> broke on 13. What did we change anything to make it so?\n>\n\nNo idea but probably if the other person can share the exact test case\nwhich he sees working fine on PG10 but not on PG13 then it might be a\nbit easier to investigate.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 29 Apr 2021 10:44:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "Cc'ing Lukasz Biegaj because of the pgsql-general thread.\n\nOn 2021-Apr-29, Amit Kapila wrote:\n\n> On Wed, Apr 28, 2021 at 7:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > ... It's strange that replication worked for them on pg10 though and\n> > broke on 13. What did we change anything to make it so?\n> \n> No idea but probably if the other person can share the exact test case\n> which he sees working fine on PG10 but not on PG13 then it might be a\n> bit easier to investigate.\n\nAh, noticed now that Krzysztof posted links to these older threads,\nwhere a problem is described:\n\nhttps://www.postgresql.org/message-id/flat/CANDwggKYveEtXjXjqHA6RL3AKSHMsQyfRY6bK%2BNqhAWJyw8psQ%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n\nKrzysztof said \"after upgrading to pg13 we started having problems\",\nwhich implicitly indicates that the same thing worked well in pg10 ---\nbut if the problem has been correctly identified, then this wouldn't\nhave worked in pg10 either. So something in the story doesn't quite\nmatch up. Maybe it's not the same problem after all, or maybe they\nweren't doing X in pg10 which they are attempting in pg13.\n\nKrzysztof, Lukasz, maybe you can describe more?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 29 Apr 2021 09:55:43 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "Hey, thanks for reaching out and sorry for the late reply - we had few \ndays of national holidays.\n\nOn 29.04.2021 15:55, Alvaro Herrera wrote:\n> https://www.postgresql.org/message-id/flat/CANDwggKYveEtXjXjqHA6RL3AKSHMsQyfRY6bK%2BNqhAWJyw8psQ%40mail.gmail.com\n> https://www.postgresql.org/message-id/flat/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n> \n> Krzysztof said \"after upgrading to pg13 we started having problems\",\n> which implicitly indicates that the same thing worked well in pg10 ---\n> but if the problem has been correctly identified, then this wouldn't\n> have worked in pg10 either. So something in the story doesn't quite\n> match up. Maybe it's not the same problem after all, or maybe they\n> weren't doing X in pg10 which they are attempting in pg13.\n> \n\nThe problem started occurring after upgrade from pg10 to pg13. No other \nchanges were performed, especially not within the database structure nor \nperformed operations.\n\nThe problem is as described in \nhttps://www.postgresql.org/message-id/flat/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n\nIt does occur on two separate production clusters and one test cluster - \nall belonging to the same customer, although processing slightly \ndifferent data (it's an e-commerce store with multiple languages and \nseparate production databases for each language).\n\nWe've tried recreating the database from dump, and recreating the \nreplication, but without any positive effect - the problem persists.\n\nWe did not rollback the databases to pg10, instead we've stayed with \npg13 and implemented a shell script to kill the walsender process if it \nseems stuck in `hash_seq_search`. It's ugly, but it works and we backup \nand monitor the data integrity anyway.\n\nI'd be happy to help in debugging the issue had I known how to do it \n:-). If you'd like then we can also try to rollback the installation \nback to pg10 to get certainty that this was not caused by schema changes.\n\n\n-- \nLukasz Biegaj | Unity Group | https://www.unitygroup.com/\nSystem Architect, AWS Certified Solutions Architect\n\n\n",
"msg_date": "Tue, 4 May 2021 15:21:07 +0200",
"msg_from": "Lukasz Biegaj <lukasz.biegaj@unitygroup.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "Hi Lukasz, thanks for following up.\n\nOn 2021-May-04, Lukasz Biegaj wrote:\n\n> The problem is as described in https://www.postgresql.org/message-id/flat/8bf8785c-f47d-245c-b6af-80dc1eed40db%40unitygroup.com\n> \n> It does occur on two separate production clusters and one test cluster - all\n> belonging to the same customer, although processing slightly different data\n> (it's an e-commerce store with multiple languages and separate production\n> databases for each language).\n\nI think the best next move would be to make certain that the problem is\nwhat we think it is, so that we can discuss if Amit's commit is an\nappropriate fix. I would suggest to do that by running the problematic\nworkload in the test system under \"perf record -g\" and then get a report\nwith \"perf report -g\" which should hopefully give enough of a clue.\n(Sometimes the reports are much better if you use a binary that was\ncompiled with -fno-omit-frame-pointer, so if you're in a position to try\nthat, it might be useful -- or apparently you could try \"perf record\n--call-graph dwarf\" or \"perf record --call-graph lbr\", depending.)\n\nAlso I would be much more comfortable about proposing to backpatch such\nan invasive change if you could ensure that in pg10 the same workload\ndoes not cause the problem. If it does, then it'd be clear we're\ntalking about a regression.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"I'm always right, but sometimes I'm more right than other times.\"\n (Linus Torvalds)\n\n\n",
"msg_date": "Tue, 4 May 2021 10:35:05 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On 04.05.2021 16:35, Alvaro Herrera wrote:\n> I would suggest to do that by running the problematic\n> workload in the test system under \"perf record -g\"\n > [..]\n > you could ensure that in pg10 the same workload\n > does not cause the problem.\n\nWe'll go with both propositions. I expect to come back to you with \nresults in about a week or two.\n\n-- \nLukasz Biegaj | Unity Group | https://www.unitygroup.com/\nSystem Architect, AWS Certified Solutions Architect\n\n\n",
"msg_date": "Thu, 6 May 2021 10:35:16 +0200",
"msg_from": "Lukasz Biegaj <lukasz.biegaj@unitygroup.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "śr., 16 cze 2021 o 12:31 Lukasz Biegaj <lukasz.biegaj@unitygroup.com>\nnapisał(a):\n> On 04.05.2021 16:35, Alvaro Herrera wrote:\n> > I would suggest to do that by running the problematic\n> > workload in the test system under \"perf record -g\"\n> We'll go with both propositions. I expect to come back to you with\n> results in about a week or two.\n\nHi Alvaro,\nWe reproduced the replication issue and recorded the walsender process\nusing perf.\nBelow you can find the data for broken and working replication:\n\nhttps://easyupload.io/kxcovg\n\npassword to the zip file: johS5jeewo\n\nPlease let me know if you would like us to proceed with the downgrade.\n\n-- \nHubert Klasa\n\n\n",
"msg_date": "Wed, 16 Jun 2021 12:53:00 +0200",
"msg_from": "Ha Ka <klasahubert@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On 2021-Jun-16, Ha Ka wrote:\n\n> śr., 16 cze 2021 o 12:31 Lukasz Biegaj <lukasz.biegaj@unitygroup.com>\n> napisał(a):\n> > On 04.05.2021 16:35, Alvaro Herrera wrote:\n> > > I would suggest to do that by running the problematic\n> > > workload in the test system under \"perf record -g\"\n> > We'll go with both propositions. I expect to come back to you with\n> > results in about a week or two.\n> \n> Hi Alvaro,\n> We reproduced the replication issue and recorded the walsender process\n> using perf.\n\nHello, thanks, I downloaded the files but since you sent the perf.data\nfiles there's not much I can do to usefully interpret them. Can you\nplease do \"perf report -g > perf_report.txt\" on each subdir with a\nperf.data file and upload those text files? (You don't need to rerun\nthe test cases.)\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n",
"msg_date": "Wed, 16 Jun 2021 09:33:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "> Hello, thanks, I downloaded the files but since you sent the perf.data\n> files there's not much I can do to usefully interpret them. Can you\n> please do \"perf report -g > perf_report.txt\" on each subdir with a\n> perf.data file and upload those text files? (You don't need to rerun\n> the test cases.)\n> Thanks\n\nHi,\nHere is the upload with generated reports: https://easyupload.io/p38izx\npasswd: johS5jeewo\n\nRegards\n\n-- \nHubert Klasa\n\n\n",
"msg_date": "Wed, 16 Jun 2021 17:15:04 +0200",
"msg_from": "Ha Ka <klasahubert@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On 2021-Jun-16, Ha Ka wrote:\n\n> Here is the upload with generated reports: https://easyupload.io/p38izx\n> passwd: johS5jeewo\n\nOK, so I downloaded that and this is the interesting entry in the\nprofile for the broken case:\n\n# Samples: 5K of event 'cpu-clock'\n# Event count (approx.): 59989898390\n#\n# Children Self Command Shared Object Symbol \n# ........ ........ ........ ............. ..................................\n#\n 100.00% 0.00% postgres postgres [.] exec_replication_command\n |\n ---exec_replication_command\n WalSndLoop\n XLogSendLogical\n LogicalDecodingProcessRecord\n | \n --99.51%--ReorderBufferQueueChange\n | \n |--96.06%--hash_seq_search\n | \n |--1.78%--ReorderBufferSerializeTXN\n | | \n | --0.52%--errstart\n | \n --0.76%--deregister_seq_scan\n\nWhat this tells me is that ReorderBufferQueueChange is spending a lot of\ntime doing hash_seq_search, which probably is the one in\nReorderBufferTXNByXid.\n\nI have, as yet, no idea what this means.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Wed, 16 Jun 2021 18:28:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "At Wed, 16 Jun 2021 18:28:28 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2021-Jun-16, Ha Ka wrote:\n> # Children Self Command Shared Object Symbol \n> # ........ ........ ........ ............. ..................................\n> #\n> 100.00% 0.00% postgres postgres [.] exec_replication_command\n> |\n> ---exec_replication_command\n> WalSndLoop\n> XLogSendLogical\n> LogicalDecodingProcessRecord\n> | \n> --99.51%--ReorderBufferQueueChange\n> | \n> |--96.06%--hash_seq_search\n> | \n> |--1.78%--ReorderBufferSerializeTXN\n> | | \n> | --0.52%--errstart\n> | \n> --0.76%--deregister_seq_scan\n> \n> What this tells me is that ReorderBufferQueueChange is spending a lot of\n> time doing hash_seq_search, which probably is the one in\n> ReorderBufferTXNByXid.\n\nI don't see a call to hash_*seq*_search there. Instead, I see one in\nReorderBufferCheckMemoryLimit().\n\nIf added an elog line in hash_seq_search that is visited only when it\nis called under ReorderBufferQueueChange, then set\nlogical_decoding_work_mem to 64kB.\n\nRunning the following query calls hash_seq_search (relatively) frequently.\n\npub=# create table t1 (a int primary key);\npub=# create publication p1 for table t1;\nsub=# create table t1 (a int primary key);\nsub=# create subscription s1 connection 'host=/tmp port=5432' publication p1;\npub=# insert into t1 (select a from generate_series(0, 9999) a);\n\nThe insert above makes 20 calls to ReorderBufferLargestTXN() (via\nReorderBufferCheckmemoryLimit()), which loops over hash_seq_search.\n\n/*\n * Find the largest transaction (toplevel or subxact) to evict (spill to disk).\n *\n * XXX With many subtransactions this might be quite slow, because we'll have\n * to walk through all of them. There are some options how we could improve\n * that: (a) maintain some secondary structure with transactions sorted by\n * amount of changes, (b) not looking for the entirely largest transaction,\n * but e.g. 
for transaction using at least some fraction of the memory limit,\n * and (c) evicting multiple transactions at once, e.g. to free a given portion\n * of the memory limit (e.g. 50%).\n */\nstatic ReorderBufferTXN *\nReorderBufferLargestTXN(ReorderBuffer *rb)\n\nThis looks like a candidate for the culprit. The perf line for\n\"ReorderBufferSerializeTXN\" supports this hypothesis.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Jun 2021 10:58:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 7:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 16 Jun 2021 18:28:28 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2021-Jun-16, Ha Ka wrote:\n> > # Children Self Command Shared Object Symbol\n> > # ........ ........ ........ ............. ..................................\n> > #\n> > 100.00% 0.00% postgres postgres [.] exec_replication_command\n> > |\n> > ---exec_replication_command\n> > WalSndLoop\n> > XLogSendLogical\n> > LogicalDecodingProcessRecord\n> > |\n> > --99.51%--ReorderBufferQueueChange\n> > |\n> > |--96.06%--hash_seq_search\n> > |\n> > |--1.78%--ReorderBufferSerializeTXN\n> > | |\n> > | --0.52%--errstart\n> > |\n> > --0.76%--deregister_seq_scan\n> >\n> > What this tells me is that ReorderBufferQueueChange is spending a lot of\n> > time doing hash_seq_search, which probably is the one in\n> > ReorderBufferTXNByXid.\n>\n> I don't see a call to hash_*seq*_search there. Instead, I see one in\n> ReorderBufferCheckMemoryLimit().\n>\n> If added an elog line in hash_seq_search that is visited only when it\n> is called under ReorderBufferQueueChange, then set\n> logical_decoding_work_mem to 64kB.\n>\n> Running the following query calls hash_seq_search (relatively) frequently.\n>\n> pub=# create table t1 (a int primary key);\n> pub=# create publication p1 for table t1;\n> sub=# create table t1 (a int primary key);\n> sub=# create subscription s1 connection 'host=/tmp port=5432' publication p1;\n> pub=# insert into t1 (select a from generate_series(0, 9999) a);\n>\n> The insert above makes 20 calls to ReorderBufferLargestTXN() (via\n> ReorderBufferCheckmemoryLimit()), which loops over hash_seq_search.\n>\n\nIf there are large transactions then someone can probably set\nlogical_decoding_work_mem to a higher value.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Jun 2021 17:07:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On 2021-Jun-17, Kyotaro Horiguchi wrote:\n\n> I don't see a call to hash_*seq*_search there. Instead, I see one in\n> ReorderBufferCheckMemoryLimit().\n\nDoh, of course -- I misread.\n\nReorderBufferCheckMemoryLimit is new in pg13 (cec2edfa7859) so now at\nleast we have a reason why this workload regresses in pg13 compared to\nearlier releases.\n\nLooking at the code, it does seem that increasing the memory limit as\nAmit suggests might solve the issue. Is that a practical workaround?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 17 Jun 2021 12:56:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "At Thu, 17 Jun 2021 12:56:42 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2021-Jun-17, Kyotaro Horiguchi wrote:\n> \n> > I don't see a call to hash_*seq*_search there. Instead, I see one in\n> > ReorderBufferCheckMemoryLimit().\n> \n> Doh, of course -- I misread.\n> \n> ReorderBufferCheckMemoryLimit is new in pg13 (cec2edfa7859) so now at\n> least we have a reason why this workload regresses in pg13 compared to\n> earlier releases.\n> \n> Looking at the code, it does seem that increasing the memory limit as\n> Amit suggests might solve the issue. Is that a practical workaround?\n\nI believe so generally. I'm not sure about the OP, though.\n\nJust increasing the working memory to a certain size would solve the\nproblem. There might be a potential issue that it could be hard, as in\nthis case, for users to find out that the GUC helps with their issue (if\nit actually does). pg_stat_replication_slots.spill_count / spill_txns\ncould be a guide for setting logical_decoding_work_mem. Is it worth\nhaving an additional explanation like that for the GUC variable in the\ndocumentation?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 18 Jun 2021 14:52:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 11:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 17 Jun 2021 12:56:42 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2021-Jun-17, Kyotaro Horiguchi wrote:\n> >\n> > > I don't see a call to hash_*seq*_search there. Instead, I see one in\n> > > ReorderBufferCheckMemoryLimit().\n> >\n> > Doh, of course -- I misread.\n> >\n> > ReorderBufferCheckMemoryLimit is new in pg13 (cec2edfa7859) so now at\n> > least we have a reason why this workload regresses in pg13 compared to\n> > earlier releases.\n> >\n> > Looking at the code, it does seem that increasing the memory limit as\n> > Amit suggests might solve the issue. Is that a practical workaround?\n>\n> I believe so generally. I'm not sure about the op, though.\n>\n> Just increasing the working memory to certain size would solve the\n> problem. There might be a potential issue that it might be hard like\n> this case for users to find out that the GUC works for their issue (if\n> actually works). pg_stat_replicatoin_slots.spill_count / spill_txns\n> could be a guide for setting logical_decoding_work_mem. Is it worth\n> having additional explanation like that for the GUC variable in the\n> documentation?\n>\n\nI don't see any harm in doing that but note that we can update it only\nfor PG-14. The view pg_stat_replication_slots was introduced in PG-14.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Jun 2021 15:44:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Sat, 19 Jun 2021 at 12:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 18, 2021 at 11:22 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 17 Jun 2021 12:56:42 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > On 2021-Jun-17, Kyotaro Horiguchi wrote:\n> > >\n> > > > I don't see a call to hash_*seq*_search there. Instead, I see one in\n> > > > ReorderBufferCheckMemoryLimit().\n> > >\n> > > Doh, of course -- I misread.\n> > >\n> > > ReorderBufferCheckMemoryLimit is new in pg13 (cec2edfa7859) so now at\n> > > least we have a reason why this workload regresses in pg13 compared to\n> > > earlier releases.\n> > >\n> > > Looking at the code, it does seem that increasing the memory limit as\n> > > Amit suggests might solve the issue. Is that a practical workaround?\n> >\n> > I believe so generally. I'm not sure about the op, though.\n> >\n> > Just increasing the working memory to certain size would solve the\n> > problem. There might be a potential issue that it might be hard like\n> > this case for users to find out that the GUC works for their issue (if\n> > actually works). pg_stat_replicatoin_slots.spill_count / spill_txns\n> > could be a guide for setting logical_decoding_work_mem. Is it worth\n> > having additional explanation like that for the GUC variable in the\n> > documentation?\n> >\n>\n> I don't see any harm in doing that but note that we can update it only\n> for PG-14. The view pg_stat_replicatoin_slots is introduced in PG-14.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\nWe increased logical_decoding_work_mem for our production database\nfrom 64 to 192 MB and it looks like the issue still persists. The\nfrequency with which replication hangs has remained the same. Do you\nneed any additional perf reports after our change?\n\n--\nRegards,\nHubert Klasa.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 16:45:23 +0200",
"msg_from": "Ha Ka <klasahubert@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 8:15 PM Ha Ka <klasahubert@gmail.com> wrote:\n>\n> On Sat, 19 Jun 2021 at 12:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> We increased logical_decoding_work_mem for our production database\n> from 64 to 192 MB and it looks like the issue still persists. The\n> frequency with which replication hangs has remained the same.\n>\n\nSounds strange. I think one thing to identify is whether there is a\nvery large number of in-progress transactions at the time the slowdown\nhappens. Because the profile shared last time seems to be spending more\ntime in hash_seq_search than in actually serializing the xact. Another\npossibility to try out for your case is to just always serialize the\ncurrent xact and see what happens; this might not be an actual solution\nbut can help in diagnosing the problem.\n\n> Do you\n> need any additional perf reports after our change?\n>\n\nIt might be good if you can share the WALSender portion of perf as\nshared in one of the emails above?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 12:04:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unresolved repliaction hang and stop problem."
}
]
[
{
"msg_contents": "Hello,\n\nRecently, the result cache feature was committed to PostgreSQL. I\ntested its performance by executing TPC-DS. As a result, I found that\nthere were some regressions in the query performance.\n\nI used the TPC-DS scale factor 100 in the evaluation. I executed all\nof the 99 queries in the TPC-DS, and the result cache worked in the 21\nqueries of them. However, some queries took too much time, so I\nskipped their execution. I set work_mem to 256MB, and\nmax_parallel_workers_per_gather to 0.\n\nEvaluation results are as follows. The negative speedup ratio\nindicates that the execution time increased by the result cache.\n\nQuery No | Execution time with result cache | Execution time\nwithout result cache | Speedup ratio\n7 142.1 142.2 0.03%\n8 144.0 142.8 -0.82%\n13 164.6 162.0 -1.65%\n27 138.9 138.7 -0.16%\n34 135.7 137.1 1.02%\n43 209.5 207.2 -1.10%\n48 181.5 170.7 -6.32%\n55 130.6 123.8 -5.48%\n61 0.014 0.037 62.06%\n62 66.7 59.9 -11.36%\n68 131.3 127.2 -3.17%\n72 567.0 563.4 -0.65%\n73 130.1 129.8 -0.29%\n88 1044.5 1048.7 0.40%\n91 1.2 1.1 -7.93%\n96 132.2 131.7 -0.37%\n\nAs you can see from these results, many queries have a negative\nspeedup ratio, which means that there are negative impacts on the\nquery performance. In query 62, the execution time increased by\n11.36%. I guess these regressions are due to the misestimation of the\ncost in the planner. I attached the execution plan of query 62.\n\nThe result cache is currently enabled by default. However, if this\nkind of performance regression is common, we have to change its\ndefault behavior.\n\nBest regards,\nYuya Watari",
"msg_date": "Tue, 13 Apr 2021 18:29:57 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "On Tue, 13 Apr 2021 at 21:29, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I used the TPC-DS scale factor 100 in the evaluation. I executed all\n> of the 99 queries in the TPC-DS, and the result cache worked in the 21\n> queries of them. However, some queries took too much time, so I\n> skipped their execution. I set work_mem to 256MB, and\n> max_parallel_workers_per_gather to 0.\n\nMany thanks for testing this.\n\n> As you can see from these results, many queries have a negative\n> speedup ratio, which means that there are negative impacts on the\n> query performance. In query 62, the execution time increased by\n> 11.36%. I guess these regressions are due to the misestimation of the\n> cost in the planner. I attached the execution plan of query 62.\n\nCan you share if these times were to run EXPLAIN ANALYZE or if they\nwere just the queries being executed normally?\n\nThe times in the two files you attached do look very similar to the\ntimes in your table, so I suspect either TIMING ON is not that high an\noverhead on your machine, or the results are that of EXPLAIN ANALYZE.\n\nIt would be really great if you could show the EXPLAIN (ANALYZE,\nTIMING OFF) for query 62. There's a chance that the slowdown comes\nfrom the additional EXPLAIN ANALYZE timing overhead with the Result\nCache version.\n\n> The result cache is currently enabled by default. However, if this\n> kind of performance regression is common, we have to change its\n> default behavior.\n\nYes, the feedback we get during the beta period will help drive that\ndecision or if the costing needs to be adjusted.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Apr 2021 22:13:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "Hello David,\n\nThank you for your reply.\n\n> Can you share if these times were to run EXPLAIN ANALYZE or if they\n> were just the queries being executed normally?\n\nThese times were to run EXPLAIN ANALYZE. I executed each query twice,\nand the **average** execution time was shown in the table of the last\ne-mail. Therefore, the result of the table is not simply equal to that\nof the attached file. I'm sorry for the insufficient explanation.\n\n> It would be really great if you could show the EXPLAIN (ANALYZE,\n> TIMING OFF) for query 62. There's a chance that the slowdown comes\n> from the additional EXPLAIN ANALYZE timing overhead with the Result\n> Cache version.\n\nI ran query 62 by \"EXPLAIN (ANALYZE, TIMING OFF)\" and normally. I\nattached these execution results to this e-mail. At this time, I\nexecuted each query only once (not twice). The results are as follows.\n\nMethod | Execution time with result cache (s) | Execution time\nwithout result cache (s) | Speedup ratio\nEXPLAIN (ANALYZE, TIMING ON) 67.161 59.615 -12.66%\nEXPLAIN (ANALYZE, TIMING OFF) 66.142 60.660 -9.04%\nNormal 66.611 60.955 -9.28%\n\nAlthough there is variation in the execution time, the speedup ratio\nis around -10%. So, the result cache has a 10% regression in query 62.\nThe overhead of EXPLAIN ANALYZE and TIMING ON do not seem to be high.\n\nBest regards,\nYuya Watari\n\nOn Tue, Apr 13, 2021 at 7:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 13 Apr 2021 at 21:29, Yuya Watari <watari.yuya@gmail.com> wrote:\n> > I used the TPC-DS scale factor 100 in the evaluation. I executed all\n> > of the 99 queries in the TPC-DS, and the result cache worked in the 21\n> > queries of them. However, some queries took too much time, so I\n> > skipped their execution. 
I set work_mem to 256MB, and\n> > max_parallel_workers_per_gather to 0.\n>\n> Many thanks for testing this.\n>\n> > As you can see from these results, many queries have a negative\n> > speedup ratio, which means that there are negative impacts on the\n> > query performance. In query 62, the execution time increased by\n> > 11.36%. I guess these regressions are due to the misestimation of the\n> > cost in the planner. I attached the execution plan of query 62.\n>\n> Can you share if these times were to run EXPLAIN ANALYZE or if they\n> were just the queries being executed normally?\n>\n> The times in the two files you attached do look very similar to the\n> times in your table, so I suspect either TIMING ON is not that high an\n> overhead on your machine, or the results are that of EXPLAIN ANALYZE.\n>\n> It would be really great if you could show the EXPLAIN (ANALYZE,\n> TIMING OFF) for query 62. There's a chance that the slowdown comes\n> from the additional EXPLAIN ANALYZE timing overhead with the Result\n> Cache version.\n>\n> > The result cache is currently enabled by default. However, if this\n> > kind of performance regression is common, we have to change its\n> > default behavior.\n>\n> Yes, the feedback we get during the beta period will help drive that\n> decision or if the costing needs to be adjusted.\n>\n> David",
"msg_date": "Wed, 14 Apr 2021 14:11:45 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "On Wed, 14 Apr 2021 at 17:11, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I ran query 62 by \"EXPLAIN (ANALYZE, TIMING OFF)\" and normally. I\n> attached these execution results to this e-mail. At this time, I\n> executed each query only once (not twice). The results are as follows.\n\nThanks for running that again. I see from the EXPLAIN ANALYZE output\nthat the planner did cost the Result Cache plan slightly more\nexpensive than the Hash Join plan. It's likely that add_path() did\nnot consider the Hash Join plan to be worth keeping because it was not\nmore than 1% better than the Result Cache plan. STD_FUZZ_FACTOR is set\nso new paths need to be at least 1% better than existing paths for\nthem to be kept. That's pretty unfortunate and that alone does not\nmean the costs are incorrect. It would be good to know if that's the\ncase for the other queries too.\n\nTo test that, I've set up TPC-DS locally, however, it would be good if\nyou could send me the list of indexes that you've created. I see the\ntool from the transaction processing council for TPC-DS only comes\nwith the list of tables.\n\nCan you share the output of:\n\nselect pg_get_indexdef(indexrelid) from pg_index where indrelid::regclass in (\n'call_center',\n'catalog_page',\n'catalog_returns',\n'catalog_sales',\n'customer',\n'customer_address',\n'customer_demographics',\n'date_dim',\n'dbgen_version',\n'household_demographics',\n'income_band',\n'inventory',\n'item',\n'promotion',\n'reason',\n'ship_mode',\n'store',\n'store_returns',\n'store_sales',\n'time_dim')\norder by indrelid;\n\nfrom your TPC-DS database?\n\nDavid\n\n\n",
"msg_date": "Mon, 19 Apr 2021 19:08:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "Hello David,\n\nThank you for your reply.\n\n> Thanks for running that again. I see from the EXPLAIN ANALYZE output\n> that the planner did cost the Result Cache plan slightly more\n> expensive than the Hash Join plan. It's likely that add_path() did\n> not consider the Hash Join plan to be worth keeping because it was not\n> more than 1% better than the Result Cache plan. STD_FUZZ_FACTOR is set\n> so new paths need to be at least 1% better than existing paths for\n> them to be kept. That's pretty unfortunate and that alone does not\n> mean the costs are incorrect. It would be good to know if that's the\n> case for the other queries too.\n\nThanks for your analysis. I understood why HashJoin was not selected\nin this query plan.\n\n> To test that, I've set up TPC-DS locally, however, it would be good if\n> you could send me the list of indexes that you've created. I see the\n> tool from the transaction processing council for TPC-DS only comes\n> with the list of tables.\n>\n> Can you share the output of:\n\nI listed all indexes on my machine by executing your query. I attached\nthe result to this e-mail. I hope it will help you.\n\nBest regards,\nYuya Watari",
"msg_date": "Tue, 20 Apr 2021 13:43:28 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "On Tue, 20 Apr 2021 at 16:43, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I listed all indexes on my machine by executing your query. I attached\n> the result to this e-mail. I hope it will help you.\n\nThanks for sending that.\n\nI've now run some benchmarks of TPC-DS both with enable_resultcache on\nand off. I think I've used the same scale of test as you did. -SCALE\n10.\n\ntpcds=# \\l+ tpcds\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access\nprivileges | Size | Tablespace | Description\n-------+---------+----------+-------------+-------------+-------------------+-------+------------+-------------\n tpcds | drowley | UTF8 | en_NZ.UTF-8 | en_NZ.UTF-8 |\n | 28 GB | pg_default |\n(1 row)\n\nThe following settings were non-standard:\n\ntpcds=# select name,setting from pg_Settings where setting <> boot_val;\n name | setting\n----------------------------------+--------------------\n application_name | psql\n archive_command | (disabled)\n client_encoding | UTF8\n data_directory_mode | 0700\n DateStyle | ISO, DMY\n default_text_search_config | pg_catalog.english\n enable_resultcache | off\n fsync | off\n jit | off\n lc_collate | en_NZ.UTF-8\n lc_ctype | en_NZ.UTF-8\n lc_messages | en_NZ.UTF-8\n lc_monetary | en_NZ.UTF-8\n lc_numeric | en_NZ.UTF-8\n lc_time | en_NZ.UTF-8\n log_file_mode | 0600\n log_timezone | Pacific/Auckland\n max_parallel_maintenance_workers | 10\n max_parallel_workers_per_gather | 0\n max_stack_depth | 2048\n server_encoding | UTF8\n shared_buffers | 2621440\n TimeZone | Pacific/Auckland\n unix_socket_permissions | 0777\n wal_buffers | 2048\n work_mem | 262144\n(26 rows)\n\nThis is an AMD 3990x CPU with 64GB of RAM.\n\nI didn't run all of the queries. 
To reduce the benchmark times and to\nmake the analysis easier, I just ran the queries where EXPLAIN shows\nat least 1 Result Cache node.\n\nThe queries in question are: 1 2 6 7 15 16 21 23 24 27 34 43 44 45 66\n69 73 79 88 89 91 94 99.\n\nThe one exception here is query 58. It did use a Result Cache node\nwhen enable_resultcache=on, but the query took more than 6 hours to\nrun. This slowness is not due to Result Cache. It's due to the\nfollowing correlated subquery.\n\n and i.i_current_price > 1.2 *\n (select avg(j.i_current_price)\n from item j\n where j.i_category = i.i_category)\n\nThat results in:\n\nSubPlan 2\n -> Aggregate\n(cost=8264.44..8264.45 rows=1 width=32) (actual time=87.592..87.592\nrows=1 loops=255774)\n\n87.592 * 255774 is 6.22 hours. So 6.22 hours of executing that\nsubplan. The query took 6.23 hours in total. (A Result Cache on the\nsubplan would help here! :-) there are only 10 distinct categories)\n\nResults\n======\n\nOut of the 23 queries that used Result Cache, only 7 executed more\nquickly than with enable_resultcache = off. However, with 15 of the\n23 queries, the Result Cache plan was not cheaper. This means the\nplanner rejected some other join method that would have made a cheaper\nplan in 15 out of 23 queries. This is likely due to the add_path()\nfuzziness not keeping the cheaper plan.\n\nIn only 5 of 23 queries, the Result Cache plan was both cheaper and\nslower to execute. These are queries 1, 6, 27, 88 and 99. These cost\n0.55%, 0.04%, 0.25%, 0.25% and 0.01% more than the plan that was\npicked when enable_resultcache=off. None of those costs seem\nsignificantly cheaper than the alternative plan.\n\nSo, in summary, I'd say there are two separate problems here:\n\n1. The planner does not always pick the cheapest plan due to add_path\nfuzziness. (15 of 23 queries have this problem, however, 4 of these\n15 queries were faster with result cache, despite costing more)\n2. 
Sometimes the Result Cache plan is cheaper and slower than the plan\nthat is picked with enable_resultcache = off. (5 of 23 queries have\nthis problem)\n\nOverall with result cache enabled, the benchmark ran 1.15% faster.\nThis is mostly due to query 69 which ran over 40 seconds more quickly\nwith result cache enabled. Unfortunately, 16 of the 23 queries became\nslower due to result cache with only the remaining 7 becoming faster.\nThat's not a good track record. I never expected that we'd use a\nResult Cache node correctly in every planning problem we ever try to\nsolve, but only getting that right 30.4% of the time is not quite as\nclose to that 100% mark as I'd have liked. However, maybe that's\noverly harsh on the Result Cache code as it was only 5 queries that we\ncosted cheaper and were slower. So 18 of 23 seem to have more\nrealistic costs, which is 78% of queries.\n\nWhat can be done?\n===============\n\nI'm not quite sure. The biggest problem is add_path's fuzziness. I\ncould go and add some penalty cost to Result Cache paths so that\nthey're picked less often. If I make that penalty more than 1% of the\ncost, then that should get around add_path rejecting the other join\nmethod that is not fuzzily good enough. Adding some sort of penalty\nmight also help the 5 of 23 queries that were cheaper and slower than\nthe alternative.\n\nI've attached a spreadsheet with all of the results and also the\nEXPLAIN / EXPLAIN ANALYZE and times from both runs.\n\nThe query times in the spreadsheet are to run the query once with\npgbench (i.e -t 1). Not the EXPLAIN ANALYZE time.\n\nI've also zipped the entire benchmark results and attached as results.tar.bz2.\n\nDavid",
"msg_date": "Wed, 21 Apr 2021 19:02:04 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "Hello David,\n\nThank you for running experiments on your machine and I really\nappreciate your deep analysis.\n\nYour results are very interesting. In 5 queries, the result cache is\ncheaper but slower. In particular, in query 88, although the cost with\nresult cache is cheaper, there is a 34.23% degradation in query execution\ntime. This is a big regression.\n\n> What can be done?\n> ===============\n>\n> I'm not quite sure. The biggest problem is add_path's fuzziness. I\n> could go and add some penalty cost to Result Cache paths so that\n> they're picked less often. If I make that penalty more than 1% of the\n> cost, then that should get around add_path rejecting the other join\n> method that is not fuzzily good enough. Adding some sort of penalty\n> might also help the 5 of 23 queries that were cheaper and slower than\n> the alternative.\n\nBased on your idea, I have implemented a penalty for the cost of the\nresult cache. I attached the patch to this e-mail. Please note\nthat this patch is experimental, so it lacks comments, documents,\ntests, etc. This patch adds a new GUC, resultcache_cost_factor. The\ncost of the result cache is multiplied by this factor. If the factor\nis greater than 1, we impose a penalty on the result cache.\n\nThe cost calculation has been modified as follows.\n\n=====\n@@ -2541,6 +2542,13 @@ cost_resultcache_rescan(PlannerInfo *root,\nResultCachePath *rcpath,\n */\n startup_cost += cpu_tuple_cost;\n\n+ /*\n+ * We multiply the costs by resultcache_cost_factor to control the\n+ * aggressiveness of result cache.\n+ */\n+ startup_cost *= resultcache_cost_factor;\n+ total_cost *= resultcache_cost_factor;\n=====\n@@ -1618,9 +1618,14 @@ create_resultcache_path(PlannerInfo *root,\nRelOptInfo *rel, Path *subpath,\n * Add a small additional charge for caching the first entry. 
All the\n * harder calculations for rescans are performed in\n * cost_resultcache_rescan().\n+ *\n+ * We multiply the costs by resultcache_cost_factor to control the\n+ * aggressiveness of result cache.\n */\n- pathnode->path.startup_cost = subpath->startup_cost + cpu_tuple_cost;\n- pathnode->path.total_cost = subpath->total_cost + cpu_tuple_cost;\n+ pathnode->path.startup_cost =\n+ (subpath->startup_cost + cpu_tuple_cost) *\nresultcache_cost_factor;\n+ pathnode->path.total_cost =\n+ (subpath->total_cost + cpu_tuple_cost) *\nresultcache_cost_factor;\n pathnode->path.rows = subpath->rows;\n\n return pathnode;\n=====\n\nAs this factor increases, the result cache becomes less and less\nlikely to be adopted. I conducted an experiment to clarify the\nthreshold of the factor. I ran EXPLAIN (not EXPLAIN ANALYZE) command\nwith different factors. The threshold is defined as the factor at\nwhich the result cache no longer appears in the query plan. The factor\nmore than the threshold indicates the planner does not select the\nresult cache.\n\nThis experiment was conducted on my machine, so the results may differ\nfrom those on your machine.\n\nI attached the thresholds as Excel and PDF files. The thresholds vary\nfrom 1.1 to 9.6. The threshold of 9.6 indicates that a penalty of 860%\nmust be imposed to avoid the result cache.\n\nThe Excel and PDF files also contain the chart showing the\nrelationship between speedup ratio and threshold. Unfortunately, there\nis no clear correlation. If we set the factor to 5, we can avoid 11%\ndegradation of query 62 because the threshold of the query is 4.7.\nHowever, we cannot gain a 62% speedup of query 61 with this factor.\nTherefore, this factor does not work well and should be reconsidered.\n\nIn this patch, I impose a penalty on the result cache node. An\nalternative way is to increase the cost of a nested loop that contains\na result cache. 
If so, there is no need to impose a penalty of 860%,\nbut a penalty of about 1% is enough.\n\nThis approach of introducing resultcache_cost_factor is not a\nfundamental solution. However, it is reasonable to offer a way of\ncontrolling the aggressiveness of the result cache.\n\nAgain, this patch is experimental, so it needs feedback and modifications.\n\nBest regards,\nYuya Watari",
"msg_date": "Mon, 26 Apr 2021 17:32:17 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "Thanks for doing further analysis on this.\n\nOn Mon, 26 Apr 2021 at 20:31, Yuya Watari <watari.yuya@gmail.com> wrote:\n> Thank you for running experiments on your machine and I really\n> appreciate your deep analysis.\n>\n> Your results are very interesting. In 5 queries, the result cache is\n> cheaper but slower. Especially, in query 88, although the cost with\n> result cache is cheaper, it has 34.23% degradation in query execution\n> time. This is big regression.\n\nThat's certainly one side of it. On the other side, it's pretty\nimportant to also note that in 4 of 23 queries the result cache plan\nexecuted faster but the planner costed it as more expensive.\n\nI'm not saying the costing is perfect, but what I am saying is, as you\nnoted above, in 5 of 23 queries the result cache was cheaper and\nslower, and, as I just noted, in 4 of 23 queries, result cache was\nmore expensive and faster. We know that costing is never going to be\na perfect representation of what the execution time will be. However,\nin these examples, we've just happened to get quite a good balance. If\nwe add a penalty to result cache then it'll just subtract from one\nproblem group and add to the other.\n\nOverall, in my tests execution was 1.15% faster with result cache\nenabled than it was without.\n\nI could maybe get on board with adding a small fixed cost penalty. I'm\nnot sure exactly what it would be, maybe a cpu_tuple_cost instead of a\ncpu_operator_cost and count it in for forming/deforming cached tuples.\nI think the patch you wrote to add the resultcache_cost_factor is only\nsuitable for running experiments with.\n\nThe bigger concerns I have with the costing are around what happens\nwhen an n_distinct estimate is far too low on one of the join columns.\nI think it is more likely to be concerns like that one which would\ncause us to default enable_resultcache to off.\n\nDavid\n\n\n",
"msg_date": "Tue, 4 May 2021 11:02:22 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
},
{
"msg_contents": "Hello David,\n\nThank you for your reply.\n\n> That's certainly one side of it. On the other side, it's pretty\n> important to also note that in 4 of 23 queries the result cache plan\n> executed faster but the planner costed it as more expensive.\n>\n> I'm not saying the costing is perfect, but what I am saying is, as you\n> noted above, in 5 of 23 queries the result cache was cheaper and\n> slower, and, as I just noted, in 4 of 23 queries, result cache was\n> more expensive and faster. We know that costing is never going to be\n> a perfect representation of what the execution time will be However,\n> in these examples, we've just happened to get quite a good balance. If\n> we add a penalty to result cache then it'll just subtract from one\n> problem group and add to the other.\n>\n> Overall, in my tests execution was 1.15% faster with result cache\n> enabled than it was without.\n\nThank you for your analysis. I agree with your opinion.\n\n> I think it is more likely to be concerns like that one which would\n> cause us to default enable_resultcache to off.\n\nI am not sure whether this kind of degradation is common, but setting\ndefault behavior to off is one of the realistic solutions.\n\nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 12 May 2021 14:08:20 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance Evaluation of Result Cache by using TPC-DS"
}
] |
[
{
"msg_contents": "Hi,\n\nFew of the statistics description in monitoring_stats.sgml doc is not\nconsistent. Made all the descriptions consistent by including\npunctuation marks at the end of each description.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Tue, 13 Apr 2021 18:08:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Monitoring stats docs inconsistency"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 6:08 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Few of the statistics description in monitoring_stats.sgml doc is not\n> consistent. Made all the descriptions consistent by including\n> punctuation marks at the end of each description.\n> Thoughts?\n>\n\nI think monitoring.sgml uses a similar pattern as we use for system\ncatalogs. I am not sure of the rules in this regard but it appears\nthat normally for single line descriptions (for fields like OID, name,\netc.), we don't use a full stop at the end.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 17:21:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring stats docs inconsistency"
}
] |
[
{
"msg_contents": "Hi.\n\nCurrently PostgreSQL supports CTE push down for SELECT statements, but \nit is implemented as turning each CTE reference into subquery.\n\nWhen CTE is referenced multiple times, we have choice - to materialize \nCTE (and disable quals distribution to the CTE query) or inline it (and \nso run CTE query multiple times,\nwhich can be inefficient, for example, when CTE references foreign \ntables).\n\nI was looking if it is possible to collect quals referencing CTE, \ncombine in OR qual and add them to CTE query.\n\nSo far I consider the following changes.\n\n1) Modify SS_process_ctes() to add a list of RestrictInfo* to \nPlannerInfo - one NULL RestrictInfo pointer per CTE (let's call this \nlist cte_restrictinfos for now)/\n2) In distribute_restrictinfo_to_rels(), when we get rel of RTE_CTE \nrelkind and sure that can safely pushdown restrictinfo, preserve \nrestrictinfo in cte_restrictinfos, converting multiple restrictions to \n\"OR\" RestrictInfos.\n3) In the end of subquery_planner() (after inheritance_planner() or \ngrouping_planner()) we can check if cte_restrictinfos contain some \nnon-null RestrictInfo pointers and recreate plan for corresponding CTEs, \ndistributing quals to relations inside CTE queries.\n\nFor now I'm not sure how to handle vars mapping when we push \nrestrictinfos to the level of cte root or when we push it down to the \ncte plan, but properly mapping vars seems seems to be doable.\n\nIs there something else I miss?\nDoes somebody work on alternative solution or see issues in such \napproach?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Tue, 13 Apr 2021 16:28:40 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "CTE push down"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 6:58 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n>\n> Hi.\n>\n> Currently PostgreSQL supports CTE push down for SELECT statements, but\n> it is implemented as turning each CTE reference into subquery.\n>\n> When CTE is referenced multiple times, we have choice - to materialize\n> CTE (and disable quals distribution to the CTE query) or inline it (and\n> so run CTE query multiple times,\n> which can be inefficient, for example, when CTE references foreign\n> tables).\n>\n> I was looking if it is possible to collect quals referencing CTE,\n> combine in OR qual and add them to CTE query.\n>\n> So far I consider the following changes.\n>\n> 1) Modify SS_process_ctes() to add a list of RestrictInfo* to\n> PlannerInfo - one NULL RestrictInfo pointer per CTE (let's call this\n> list cte_restrictinfos for now)/\n> 2) In distribute_restrictinfo_to_rels(), when we get rel of RTE_CTE\n> relkind and sure that can safely pushdown restrictinfo, preserve\n> restrictinfo in cte_restrictinfos, converting multiple restrictions to\n> \"OR\" RestrictInfos.\n> 3) In the end of subquery_planner() (after inheritance_planner() or\n> grouping_planner()) we can check if cte_restrictinfos contain some\n> non-null RestrictInfo pointers and recreate plan for corresponding CTEs,\n> distributing quals to relations inside CTE queries.\n>\n> For now I'm not sure how to handle vars mapping when we push\n> restrictinfos to the level of cte root or when we push it down to the\n> cte plan, but properly mapping vars seems seems to be doable.\n\nI think similar mapping happens when we push quals that reference a\nnamed JOIN down to join rels. I didn't take a look at it, but I think\nit happens before planning time. But some similar machinary might help\nin this case.\n\nI believe step2 is needed to avoid materializing rows which will never\nbe selected. That would be a good improvement. However, care needs to\nbe taken for volatile quals. 
I think, the quals on CTE will be\nevaluated twice, once when materializing the CTE result and second\ntime when scanning the materialized result. volatile quals may produce\ndifferent results when run multiple times.\n\n>\n> Is there something else I miss?\n> Does somebody work on alternative solution or see issues in such\n> approach?\n\nIMO, a POC patch will help understand your idea.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 14 Apr 2021 18:31:46 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CTE push down"
},
{
"msg_contents": "Ashutosh Bapat писал 2021-04-14 16:01:\n> On Tue, Apr 13, 2021 at 6:58 PM Alexander Pyhalov\n> <a.pyhalov@postgrespro.ru> wrote:\n\n> I believe step2 is needed to avoid materializing rows which will never\n> be selected. That would be a good improvement. However, care needs to\n> be taken for volatile quals. I think, the quals on CTE will be\n> evaluated twice, once when materializing the CTE result and second\n> time when scanning the materialized result. volatile quals may produce\n> different results when run multiple times.\n> \n>> \n>> Is there something else I miss?\n>> Does somebody work on alternative solution or see issues in such\n>> approach?\n> \n> IMO, a POC patch will help understand your idea.\n\nHi.\n\nI have a POC patch, which allows to distribute restrictinfos inside \nCTEs.\nHowever, I found I can't efficiently do partition pruning.\nWhen CTE replan stage happens, plans are already done. I can create \nalternative paths for relations,\nfor example, like in Try-prune-partitions patch.\n\nHowever, new paths are not propagated to finalrel (UPPER_REL).\nI'm not sure how to achieve this and need some advice.\nShould we redo part of work, done by grouping_planner(), in the end of \nSS_replan_ctes()?\nShould we rely on executor partition pruning (with current patches it \ndoesn't work)?\nShould we create init plans for ctes after grouping_planner(), not \nbefore?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Fri, 23 Apr 2021 16:29:07 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: CTE push down"
}
] |
[
{
"msg_contents": "Hi,\n\nWhilst trying to debug a deadlock in some tpc-ds query I noticed \nsomething that could cause problems in the hashjoin implementation and \ncause potentially deadlocks (if my analysis is right).\n\nWhilst building the inner hash table, the whole time the grow barriers \nare attached (the PHJ_BUILD_HASHING_INNER phase).\nUsually this is not a problem, however if one of the nodes blocks \nsomewhere further down in the plan whilst trying to fill the inner hash \ntable whilst the others are trying to e.g. extend the number of buckets \nusing ExecParallelHashIncreaseNumBuckets, they would all wait until the \nblocked process comes back to the hashjoin node and also joins the effort.\nWouldn't this give potential deadlock situations? Or why would a worker \nthat is hashing the inner be required to come back and join the effort \nin growing the hashbuckets?\n\nWith very skewed workloads (one node providing all data) I was at least \nable to have e.g. 3 out of 4 workers waiting in \nExecParallelHashIncreaseNumBuckets, whilst one was in the \nexecprocnode(outernode). I tried to detatch and reattach the barrier but \nthis proved to be a bad idea :)\n\nRegards,\nLuc\n\n\n",
"msg_date": "Tue, 13 Apr 2021 15:34:07 +0200",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": true,
"msg_subject": "potential deadlock in parallel hashjoin grow-buckets-barrier and\n blocking nodes?"
}
] |
[
{
"msg_contents": "On a system with selinux and sepgsql configured, search path resolution\nappears to fail if sepgsql is in enforcing mode, but selinux is in\npermissive mode (which, as I understand it, should cause sepgsql to behave\nas if it's in permissive mode anyway - and does for other operations).\nRegardless of whether my understanding of the interaction of the two\npermissive modes is correct, I don't believe the following should happen:\n\nmls=# SELECT current_user;\n\n current_user\n\n--------------\n\n postgres\n\n(1 row)\n\n\nmls=# SHOW search_path;\n\n search_path\n\n-----------------\n\n \"$user\", public\n\n(1 row)\n\n\nmls=# \\dn+ public\n\n List of schemas\n\n Name | Owner | Access privileges | Description\n\n--------+----------+----------------------+------------------------\n\n public | postgres | postgres=UC/postgres+| standard public schema\n\n | | =UC/postgres |\n\n(1 row)\n\n\nmls=# CREATE TABLE tb_users(uid int primary key, name text, mail text,\naddress text, salt text, phash text);\n\nERROR: no schema has been selected to create in\n\nLINE 1: CREATE TABLE tb_users(uid int primary key, name text, mail t...\n\n ^\n\nmls=# CREATE TABLE public.tb_users(uid int primary key, name text, mail\ntext, address text, salt text, phash text);\n\nCREATE TABLE\n\nmls=# drop table tb_users;\n\nERROR: table \"tb_users\" does not exist\n\nmls=# drop table public.tb_users;\n\nDROP TABLE\n\nThis is on head, pulled yesterday.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn a system with selinux and sepgsql configured, search path resolution appears to fail if sepgsql is in enforcing mode, but selinux is in permissive mode (which, as I understand it, should cause sepgsql to behave as if it's in permissive mode anyway - and does for other operations). 
Regardless of whether my understanding of the interaction of the two permissive modes is correct, I don't believe the following should happen:\nmls=# SELECT current_user;\n current_user \n--------------\n postgres\n(1 row)\n\nmls=# SHOW search_path;\n search_path \n-----------------\n \"$user\", public\n(1 row)\n\nmls=# \\dn+ public\n List of schemas\n Name | Owner | Access privileges | Description \n--------+----------+----------------------+------------------------\n public | postgres | postgres=UC/postgres+| standard public schema\n | | =UC/postgres | \n(1 row)\n\nmls=# CREATE TABLE tb_users(uid int primary key, name text, mail text, address text, salt text, phash text);\nERROR: no schema has been selected to create in\nLINE 1: CREATE TABLE tb_users(uid int primary key, name text, mail t...\n ^\nmls=# CREATE TABLE public.tb_users(uid int primary key, name text, mail text, address text, salt text, phash text);\nCREATE TABLE\nmls=# drop table tb_users;\nERROR: table \"tb_users\" does not exist\nmls=# drop table public.tb_users;\nDROP TABLEThis is on head, pulled yesterday.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Apr 2021 15:33:23 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "More sepgsql weirdness"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 10:33 AM Dave Page <dpage@pgadmin.org> wrote:\n> On a system with selinux and sepgsql configured, search path resolution appears to fail if sepgsql is in enforcing mode, but selinux is in permissive mode (which, as I understand it, should cause sepgsql to behave as if it's in permissive mode anyway - and does for other operations). Regardless of whether my understanding of the interaction of the two permissive modes is correct, I don't believe the following should happen:\n\nI agree that this sounds like something which shouldn't happen if the\nsystem is in permissive mode, but I think the behavior itself is\ndeliberate. See OAT_NAMESPACE_SEARCH and commit\ne965e6344cfaff0708a032721b56f61eea777bc5.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Apr 2021 13:21:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More sepgsql weirdness"
},
{
"msg_contents": "Hi\n\nOn Tue, Apr 13, 2021 at 6:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 13, 2021 at 10:33 AM Dave Page <dpage@pgadmin.org> wrote:\n> > On a system with selinux and sepgsql configured, search path resolution\n> appears to fail if sepgsql is in enforcing mode, but selinux is in\n> permissive mode (which, as I understand it, should cause sepgsql to behave\n> as if it's in permissive mode anyway - and does for other operations).\n> Regardless of whether my understanding of the interaction of the two\n> permissive modes is correct, I don't believe the following should happen:\n>\n> I agree that this sounds like something which shouldn't happen if the\n> system is in permissive mode,\n\n\nI realised that my test database hadn't had the sepgsql SQL script run in\nit (I must have created it before running it on template1). I guess the\nerror was caused by lack of proper labelling.\n\nSo, clearly my fault, but I think there are a couple of things we need to\ndo here:\n\n1) Improve the docs for sepgsql. The *only* vaguely useful source of info\nI've found on using this is \"SELinux System Administration\", a Packt book\nby Sven Vermeulen. Our own docs don't even list the supported object\nclasses (e.g. db_table) or types (e.g. sepgsql_ro_table_t) for example.\n\n2) Improve the way we handle cases like the one I ran into. I only realised\nwhat was going on when I tried to run sepgsql_getcon() to confirm I was\nrunning in undefined_t. Clearly very weird things can happen if labelling\nhasn't been run; perhaps we could raise a notice if the sepgsql module is\nloaded but sepgsql_getcon() isn't present (though that seems flakey at\nbest)? 
I'd hesitate to try to check for the presence of one or more labels\nas the admin could have intentionally removed them or changed them of\ncourse.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nHiOn Tue, Apr 13, 2021 at 6:22 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Apr 13, 2021 at 10:33 AM Dave Page <dpage@pgadmin.org> wrote:\n> On a system with selinux and sepgsql configured, search path resolution appears to fail if sepgsql is in enforcing mode, but selinux is in permissive mode (which, as I understand it, should cause sepgsql to behave as if it's in permissive mode anyway - and does for other operations). Regardless of whether my understanding of the interaction of the two permissive modes is correct, I don't believe the following should happen:\n\nI agree that this sounds like something which shouldn't happen if the\nsystem is in permissive mode, I realised that my test database hadn't had the sepgsql SQL script run in it (I must have created it before running it on template1). I guess the error was caused by lack of proper labelling.So, clearly my fault, but I think there are a couple of things we need to do here:1) Improve the docs for sepgsql. The *only* vaguely useful source of info I've found on using this is \"SELinux System Administration\", a Packt book by Sven Vermeulen. Our own docs don't even list the supported object classes (e.g. db_table) or types (e.g. sepgsql_ro_table_t) for example.2) Improve the way we handle cases like the one I ran into. I only realised what was going on when I tried to run sepgsql_getcon() to confirm I was running in undefined_t. Clearly very weird things can happen if labelling hasn't been run; perhaps we could raise a notice if the sepgsql module is loaded but sepgsql_getcon() isn't present (though that seems flakey at best)? 
I'd hesitate to try to check for the presence of one or more labels as the admin could have intentionally removed them or changed them of course.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Apr 2021 09:40:06 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: More sepgsql weirdness"
}
] |
[
{
"msg_contents": "Avoid improbable PANIC during heap_update.\n\nheap_update needs to clear any existing \"all visible\" flag on\nthe old tuple's page (and on the new page too, if different).\nPer coding rules, to do this it must acquire pin on the appropriate\nvisibility-map page while not holding exclusive buffer lock;\nwhich creates a race condition since someone else could set the\nflag whenever we're not holding the buffer lock. The code is\nsupposed to handle that by re-checking the flag after acquiring\nbuffer lock and retrying if it became set. However, one code\npath through heap_update itself, as well as one in its subroutine\nRelationGetBufferForTuple, failed to do this. The end result,\nin the unlikely event that a concurrent VACUUM did set the flag\nwhile we're transiently not holding lock, is a non-recurring\n\"PANIC: wrong buffer passed to visibilitymap_clear\" failure.\n\nThis has been seen a few times in the buildfarm since recent VACUUM\nchanges that added code paths that could set the all-visible flag\nwhile holding only exclusive buffer lock. Previously, the flag\nwas (usually?) set only after doing LockBufferForCleanup, which\nwould insist on buffer pin count zero, thus preventing the flag\nfrom becoming set partway through heap_update. However, it's\nclear that it's heap_update not VACUUM that's at fault here.\n\nWhat's less clear is whether there is any hazard from these bugs\nin released branches. heap_update is certainly violating API\nexpectations, but if there is no code path that can set all-visible\nwithout a cleanup lock then it's only a latent bug. That's not\n100% certain though, besides which we should worry about extensions\nor future back-patch fixes that could introduce such code paths.\n\nI chose to back-patch to v12. 
Fixing RelationGetBufferForTuple\nbefore that would require also back-patching portions of older\nfixes (notably 0d1fe9f74), which is more code churn than seems\nprudent to fix a hypothetical issue.\n\nDiscussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/34f581c39e97e2ea237255cf75cccebccc02d477\n\nModified Files\n--------------\nsrc/backend/access/heap/heapam.c | 44 ++++++++++++++++++++++++----------------\nsrc/backend/access/heap/hio.c | 24 ++++++++++++++++------\n2 files changed, 45 insertions(+), 23 deletions(-)",
"msg_date": "Tue, 13 Apr 2021 16:17:39 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 04:17:39PM +0000, Tom Lane wrote:\n> Avoid improbable PANIC during heap_update.\n> \n> heap_update needs to clear any existing \"all visible\" flag on\n> the old tuple's page (and on the new page too, if different).\n> Per coding rules, to do this it must acquire pin on the appropriate\n> visibility-map page while not holding exclusive buffer lock;\n> which creates a race condition since someone else could set the\n> flag whenever we're not holding the buffer lock. The code is\n> supposed to handle that by re-checking the flag after acquiring\n> buffer lock and retrying if it became set. However, one code\n> path through heap_update itself, as well as one in its subroutine\n> RelationGetBufferForTuple, failed to do this. The end result,\n> in the unlikely event that a concurrent VACUUM did set the flag\n> while we're transiently not holding lock, is a non-recurring\n> \"PANIC: wrong buffer passed to visibilitymap_clear\" failure.\n> \n\nHi,\n\nThis doesn't look as improbable because I saw it at least 3 times with\nv15beta4.\n\nThe first time I thought it was my fault, then I tried with a commit on\nseptember 25 (didn't remember which exactly but that doesn't seems too\nrelevant).\nFinally I saw it again in a build with TRACE_VISIBILITYMAP defined (the\nsame commit).\n\nBut I haven't see it anymore on rc1. Anyway I'm attaching the backtrace\n(this is from the build with TRACE_VISIBILITYMAP), the query that was \nrunning at the time was (no changes were made to quad_poly_tbl table \nnor any indexes were added to this table):\n\n\"\"\"\nupdate public.quad_poly_tbl set\n id = public.quad_poly_tbl.id\nreturning\n public.quad_poly_tbl.id as c0\n\"\"\"\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Thu, 29 Sep 2022 02:55:40 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 02:55:40AM -0500, Jaime Casanova wrote:\n> On Tue, Apr 13, 2021 at 04:17:39PM +0000, Tom Lane wrote:\n> > Avoid improbable PANIC during heap_update.\n> > \n> > heap_update needs to clear any existing \"all visible\" flag on\n> > the old tuple's page (and on the new page too, if different).\n> > Per coding rules, to do this it must acquire pin on the appropriate\n> > visibility-map page while not holding exclusive buffer lock;\n> > which creates a race condition since someone else could set the\n> > flag whenever we're not holding the buffer lock. The code is\n> > supposed to handle that by re-checking the flag after acquiring\n> > buffer lock and retrying if it became set. However, one code\n> > path through heap_update itself, as well as one in its subroutine\n> > RelationGetBufferForTuple, failed to do this. The end result,\n> > in the unlikely event that a concurrent VACUUM did set the flag\n> > while we're transiently not holding lock, is a non-recurring\n> > \"PANIC: wrong buffer passed to visibilitymap_clear\" failure.\n> > \n> \n> Hi,\n> \n> This doesn't look as improbable because I saw it at least 3 times with\n> v15beta4.\n> \n> The first time I thought it was my fault, then I tried with a commit on\n> september 25 (didn't remember which exactly but that doesn't seems too\n> relevant).\n> Finally I saw it again in a build with TRACE_VISIBILITYMAP defined (the\n> same commit).\n> \n> But I haven't see it anymore on rc1. 
Anyway I'm attaching the backtrace\n> (this is from the build with TRACE_VISIBILITYMAP), the query that was \n> running at the time was (no changes were made to quad_poly_tbl table \n> nor any indexes were added to this table):\n> \n> \"\"\"\n> update public.quad_poly_tbl set\n> id = public.quad_poly_tbl.id\n> returning\n> public.quad_poly_tbl.id as c0\n> \"\"\"\n> \n\nJust to confirm I saw this on RC1\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Fri, 30 Sep 2022 15:44:22 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> Just to confirm I saw this on RC1\n\nWhat test case are you using?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 16:51:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> Just to confirm I saw this on RC1\n\nUgh ... I think I see the problem. There's still one path through\nRelationGetBufferForTuple that fails to guarantee that it's acquired\na vmbuffer pin if the all-visible flag becomes set in the otherBuffer.\nNamely, if we're forced to extend the relation, then we deal with\nvm pins when ConditionalLockBuffer(otherBuffer) fails ... but not\nwhen it succeeds. I think the fix is just to move the last\nGetVisibilityMapPins call out of the \"if\n(unlikely(!ConditionalLockBuffer(otherBuffer)))\" stanza.\n\nIt'd still be good to have a test case for this ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 17:28:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 2:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> > Just to confirm I saw this on RC1\n>\n> Ugh ... I think I see the problem. There's still one path through\n> RelationGetBufferForTuple that fails to guarantee that it's acquired\n> a vmbuffer pin if the all-visible flag becomes set in the otherBuffer.\n\n> It'd still be good to have a test case for this ...\n\nFWIW it seems possible that the Postgres 15 vacuumlazy.c work that\nadded lazy_scan_noprune() made this scenario more likely in practice\n-- even compared to Postgres 14.\n\nNote that VACUUM will collect preexisting LP_DEAD items in heap pages\nthat cannot be cleanup locked during VACUUM's first heap pass in\nPostgres 15 (in lazy_scan_noprune). There is no need for a cleanup\nlock in the second heap pass, either (that details is the same as 14,\nbut not earlier versions). So 15 is the first version that doesn't\nneed a cleanup lock in either the first heap pass or the second heap\npass to be able to set the heap page all-visible. That difference\nseems like it could be \"protective\" on 14, especially when vacuuming\nsmaller tables.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 14:39:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "I wrote:\n> Ugh ... I think I see the problem. There's still one path through\n> RelationGetBufferForTuple that fails to guarantee that it's acquired\n> a vmbuffer pin if the all-visible flag becomes set in the otherBuffer.\n> Namely, if we're forced to extend the relation, then we deal with\n> vm pins when ConditionalLockBuffer(otherBuffer) fails ... but not\n> when it succeeds. I think the fix is just to move the last\n> GetVisibilityMapPins call out of the \"if\n> (unlikely(!ConditionalLockBuffer(otherBuffer)))\" stanza.\n\nConcretely, about like this.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 30 Sep 2022 17:52:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Sep 30, 2022 at 2:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ugh ... I think I see the problem. There's still one path through\n>> RelationGetBufferForTuple that fails to guarantee that it's acquired\n>> a vmbuffer pin if the all-visible flag becomes set in the otherBuffer.\n\n> FWIW it seems possible that the Postgres 15 vacuumlazy.c work that\n> added lazy_scan_noprune() made this scenario more likely in practice\n> -- even compared to Postgres 14.\n\nCould be, because we haven't seen field reports of this in v14 yet.\nAnd there's still no hard evidence of a bug pre-14. Nonetheless,\nI'm inclined to backpatch to v12 as 34f581c39 was.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 17:56:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 2:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > FWIW it seems possible that the Postgres 15 vacuumlazy.c work that\n> > added lazy_scan_noprune() made this scenario more likely in practice\n> > -- even compared to Postgres 14.\n>\n> Could be, because we haven't seen field reports of this in v14 yet.\n\nI would be more confident here were it not for the recent\nheap_delete() issue reported by one of my AWS colleagues (and fixed by\nanother, Jeff Davis). See commit 163b0993 if you missed it before now.\n\n> And there's still no hard evidence of a bug pre-14. Nonetheless,\n> I'm inclined to backpatch to v12 as 34f581c39 was.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 15:03:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Sep 30, 2022 at 2:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Could be, because we haven't seen field reports of this in v14 yet.\n\n> I would be more confident here were it not for the recent\n> heap_delete() issue reported by one of my AWS colleagues (and fixed by\n> another, Jeff Davis). See commit 163b0993 if you missed it before now.\n\nHmm, okay, though that's really a distinct bug of the same ilk.\nYou're right that I'd not paid close attention to that thread after\nJeff diagnosed the problem. It does seem like Robins' report\nshows that there's some way that v13 will set the AV bit without\na cleanup lock ... does that constitute a bug in itself?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 19:52:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> >\nI would be more confident here were it not for the recent\n> > heap_delete() issue reported by one of my AWS colleagues (and fixed by\n> > another, Jeff Davis). See commit 163b0993 if you missed it before now.\n>\n> Hmm, okay, though that's really a distinct bug of the same ilk.\n> You're right that I'd not paid close attention to that thread after\n> Jeff diagnosed the problem.\n\nI just meant that I don't feel particularly confident about what might\nbe possible or likely in Postgres 14 with this new issue in\nheap_update() on point releases without today's bugfix. My theory\nabout lazy_scan_noprune() might be correct, but take it with a grain\nof salt.\n\n> It does seem like Robins' report\n> shows that there's some way that v13 will set the AV bit without\n> a cleanup lock ... does that constitute a bug in itself?\n\nWe never got to the bottom of that part, strangely enough. I can ask again.\n\nIn any case we cannot really treat the information that we have about\nthat as a bug report -- not as things stand. Why should the question\nof whether or not we ever set a page PD_ALL_VISIBLE without a cleanup\nlock on v13 be a mystery at all? Why wouldn't a simple grep get to the\nbottom of it? I have to imagine that the true explanation is very\nsimple and boring.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 17:09:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 5:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> In any case we cannot really treat the information that we have about\n> that as a bug report -- not as things stand. Why should the question\n> of whether or not we ever set a page PD_ALL_VISIBLE without a cleanup\n> lock on v13 be a mystery at all? Why wouldn't a simple grep get to the\n> bottom of it? I have to imagine that the true explanation is very\n> simple and boring.\n\nI talked to Robins about this privately. I was wrong; there isn't a\nsimple or boring explanation.\n\nRobins set out to find bugs like this in Postgres via stress-testing.\nHe used a lab environment for this, and was quite methodical. So there\nis no reason to doubt that a PANIC happened on v13 at least once.\nThere must be some relatively complicated explanation for that, but\nright now I can only speculate.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 18:29:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 6:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I talked to Robins about this privately. I was wrong; there isn't a\n> simple or boring explanation.\n\nI think that I figured it out. With or without bugfix commit 163b0993,\nwe do these steps early in heap_delete() (this is 13 code as of\ntoday):\n\n2490 page = BufferGetPage(buffer);\n2491\n2492 /*\n2493 * Before locking the buffer, pin the visibility map page if\nit appears to\n2494 * be necessary. Since we haven't got the lock yet, someone\nelse might be\n2495 * in the middle of changing this, so we'll need to recheck\nafter we have\n2496 * the lock.\n2497 */\n2498 if (PageIsAllVisible(page))\n2499 visibilitymap_pin(relation, block, &vmbuffer);\n\nSo we're calling visibilitymap_pin() having just acquired a buffer pin\non a heap page buffer for the first time, and without acquiring a\nbuffer lock on the same heap page (we don't hold one now, and we've\nnever held one at some earlier point).\n\nWithout Jeff's bugfix, nothing stops heap_delete() from getting a pin\non a heap page that happens to have already been cleanup locked by\nanother session running VACUUM. The same session could then\n(correctly) observe that the page does not have PD_ALL_VISIBLE set --\nnot yet. VACUUM might then set PD_ALL_VISIBLE, without having to\nacquire any kind of interlock that heap_delete() will reliably notice.\nAfter all, VACUUM had a cleanup lock before the other session's call\nto heap_delete() even began -- so the responsibility has to lie with\nheap_delete().\n\nJeff's bugfix will fix the bug on 13 too. The bugfix doesn't take the\naggressive/conservative approach of simply getting an exclusive lock\nto check PageIsAllVisible() at the same point, on performance grounds\n(no change there). The bugfix does make this old heap_delete()\nno-buffer-lock behavior safe by teaching heap_delete() to not assume\nthat a page that didn't have PD_ALL_VISIBLE initially set cannot have\nit set concurrently.\n\nSo 13 is only different to 14 in that there are fewer ways for\nessentially the same race to happen. This is probably only true for\nthe heap_delete() issue, not either of the similar heap_update()\nissues.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 21:23:05 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> ... nothing stops heap_delete() from getting a pin\n> on a heap page that happens to have already been cleanup locked by\n> another session running VACUUM. The same session could then\n> (correctly) observe that the page does not have PD_ALL_VISIBLE set --\n> not yet. VACUUM might then set PD_ALL_VISIBLE, without having to\n> acquire any kind of interlock that heap_delete() will reliably notice.\n\nI'm too tired to think this through completely clearly, but this\nsounds right, and what it seems to imply is that this race condition\nexists in all PG versions. Which would imply that we need to do the\nwork to back-patch these three fixes into v11/v10. I would rather\nnot do that, because then we'd have to also back-patch some other\nmore-invasive changes, and the net risk of introducing new bugs\nseems uncomfortably high. (Especially for v10, where there will\nbe no second chance after the November releases.)\n\nSo what is bothering me about this line of thought is: how come\nthere have not been reports of these failures in older branches?\nIs there some aspect we're not thinking about that masks the bug?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 01 Oct 2022 00:38:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 9:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm too tired to think this through completely clearly, but this\n> sounds right, and what it seems to imply is that this race condition\n> exists in all PG versions.\n\nI think that the heap_delete() issue is probably in all PG versions.\n\n> Which would imply that we need to do the\n> work to back-patch these three fixes into v11/v10.\n\nI am not aware of any reason why we should need the heap_update()\nfixes to be backpatched any further. Though I will need to think about\nit some more.\n\n> So what is bothering me about this line of thought is: how come\n> there have not been reports of these failures in older branches?\n> Is there some aspect we're not thinking about that masks the bug?\n\nThe likely explanation is that Robins was able to find the\nheap_delete() bug by throwing lots of resources (human effort and\nmachine time) into it. It literally took weeks of adversarial\nstress-testing to find the bug. It's entirely possible and perhaps\nlikely that this isn't representative of real world conditions in some\ncrucial way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 22:09:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I think that the heap_delete() issue is probably in all PG versions.\n\nYeah, that's what I'm afraid of ...\n\n> I am not aware of any reason why we should need the heap_update()\n> fixes to be backpatched any further.\n\nHow so? AFAICS these are exactly the same oversight, ie failure\nto deal with the all-visible bit getting set partway through the\noperation. You've explained how that can happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 01 Oct 2022 01:13:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 10:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> How so? AFAICS these are exactly the same oversight, ie failure\n> to deal with the all-visible bit getting set partway through the\n> operation. You've explained how that can happen.\n\nI thought that there might have been something protective about how\nthe loop would work in heap_update(), but perhaps that's not true. It\nmight just be that heap_update() does lots of stuff in between, so\nit's less likely to be affected by this particular race (the race\nwhich seems to be present in all versions).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Sep 2022 22:25:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, 2022-09-30 at 21:23 -0700, Peter Geoghegan wrote:\n> 2490 page = BufferGetPage(buffer);\n> 2491\n> 2492 /*\n> 2493 * Before locking the buffer, pin the visibility map page if\n> it appears to\n> 2494 * be necessary. Since we haven't got the lock yet, someone\n> else might be\n> 2495 * in the middle of changing this, so we'll need to recheck\n> after we have\n> 2496 * the lock.\n> 2497 */\n> 2498 if (PageIsAllVisible(page))\n> 2499 visibilitymap_pin(relation, block, &vmbuffer);\n> \n> So we're calling visibilitymap_pin() having just acquired a buffer\n> pin\n> on a heap page buffer for the first time, and without acquiring a\n> buffer lock on the same heap page (we don't hold one now, and we've\n> never held one at some earlier point).\n> \n> Without Jeff's bugfix, nothing stops heap_delete() from getting a pin\n> on a heap page that happens to have already been cleanup locked by\n> another session running VACUUM. The same session could then\n> (correctly) observe that the page does not have PD_ALL_VISIBLE set --\n> not yet. \n\nWith you so far; I had considered this code path and was still unable\nto repro.\n\n> VACUUM might then set PD_ALL_VISIBLE, without having to\n> acquire any kind of interlock that heap_delete() will reliably\n> notice.\n> After all, VACUUM had a cleanup lock before the other session's call\n> to heap_delete() even began -- so the responsibility has to lie with\n> heap_delete().\n\nDirectly after the code you reference above, there is (in 5f9dda4c06,\nright before my patch):\n\n 2501 LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);\n 2502 \n 2503 /* \n 2504 * If we didn't pin the visibility map page and the page has\nbecome all \n 2505 * visible while we were busy locking the buffer, we'll have\nto unlock and \n 2506 * re-lock, to avoid holding the buffer lock across an I/O. \nThat's a bit \n 2507 * unfortunate, but hopefully shouldn't happen often. \n 2508 */\n 2509 if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))\n 2510 {\n 2511 LockBuffer(buffer, BUFFER_LOCK_UNLOCK);\n 2512 visibilitymap_pin(relation, block, &vmbuffer);\n 2513 LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);\n 2514 }\n\nDoesn't that deal with the case you brought up directly? heap_delete()\ncan't get the exclusive lock until VACUUM releases its cleanup lock, at\nwhich point all-visible will be set. Then heap_delete() should notice\nin line 2509 and then pin the VM buffer. Right?\n\nAnd I don't think a similar issue exists when the locks are briefly\nreleased on lines 2563 or 2606. The pin is held until after the VM bit\nis cleared (aside from an error path and an early return):\n\n 2489 buffer = ReadBuffer(relation, block);\n ...\n 2717 if (PageIsAllVisible(page))\n 2718 {\n 2719 all_visible_cleared = true;\n 2720 PageClearAllVisible(page);\n ...\n 2835 ReleaseBuffer(buffer);\n\nIf VACUUM hasn't acquired the cleanup lock before 2489, it can't get\none until heap_delete() is done looking at the all-visible bit anyway.\nSo I don't see how my patch would have fixed it even if that was the\nproblem.\n\nOf course, I must be wrong somewhere, because the bug seems to exist.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Sat, 01 Oct 2022 09:35:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Sat, Oct 1, 2022 at 9:35 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Doesn't that deal with the case you brought up directly? heap_delete()\n> can't get the exclusive lock until VACUUM releases its cleanup lock, at\n> which point all-visible will be set. Then heap_delete() should notice\n> in line 2509 and then pin the VM buffer. Right?\n\nI now believe that you're right. I don't think that this code was ever\ndesigned to rely on cleanup locks in any way; that was kind of an\naccident all along. Even still, I'm not sure how I missed such an\nobvious thing. Sorry for the misdirection.\n\nStill, there has to be *some* reason why the bug could repro on 13.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 1 Oct 2022 09:53:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 04:51:20PM -0400, Tom Lane wrote:\n> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> > Just to confirm I saw this on RC1\n> \n> What test case are you using?\n> \n\nHi,\n\nCurrently the way I have to reproduce it is:\n\n- install the regression database\n- drop all tables but: \n\thash_i4_heap, hash_name_heap, hash_txt_heap,\n \tquad_poly_tbl, road\n- run 10 sqlsmith processes... normally in an hour or less the problem\n appears\n\nI have the logs from the last time it happened so maybe I can trace the\nexact pattern to reproduce it at will... at least to keep a test\nsomewhere.\n\nBTW, so far so good with your last fix (about 12 hours now)...\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Mon, 3 Oct 2022 14:34:19 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
},
{
"msg_contents": "On 2022-Sep-29, Jaime Casanova wrote:\n\n> This doesn't look as improbable because I saw it at least 3 times with\n> v15beta4.\n\nTo further the case of the not-so-low-probability, we have customers\nthat are hitting this about once per day, with Postgres 14 ... so their\nsystems are crashing all the time :-( We've collected a bunch of\nbacktraces, and while I didn't analyze all of them, I hear that they all\nlook related to this fix.\n\nNot good.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n",
"msg_date": "Thu, 6 Oct 2022 18:28:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid improbable PANIC during heap_update."
}
] |
[
{
"msg_contents": "Hello Sir/Madam,\nI'm Nandni Mehla, a sophomore currently pursuing B.Tech in IT from Indira\nGandhi Delhi Technical University for Women, Delhi. I've recently started\nworking on open source and I think I will be a positive addition to\nyour organization for working on projects using C and SQL, as I have\nexperience in these, and I am willing to learn more from you.\nI am attaching my proposal in this email for your reference, please guide\nme through this.\nRegards.\n\nProposal Link:\nhttps://docs.google.com/document/d/1H84WmzZbMERPrjsnXbvoQ7W2AaKsM8eJU02SNw7vQBk/edit?usp=sharing\n\n Hello Sir/Madam,I'm Nandni Mehla, a sophomore currently pursuing B.Tech in IT from Indira Gandhi Delhi Technical University for Women, Delhi. I've recently started working on open source and I think I will be a positive addition to your organization for working on projects using C and SQL, as I have experience in these, and I am willing to learn more from you.I am attaching my proposal in this email for your reference, please guide me through this.Regards.Proposal Link: https://docs.google.com/document/d/1H84WmzZbMERPrjsnXbvoQ7W2AaKsM8eJU02SNw7vQBk/edit?usp=sharing",
"msg_date": "Tue, 13 Apr 2021 22:01:13 +0530",
"msg_from": "Nandni Mehla <nandnimehlawat16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal for working open source with PostgreSQL"
}
] |
[
{
"msg_contents": "On Sat, Apr 10, 2021 at 01:42:26PM -0500, Justin Pryzby wrote:\n> On Sun, Mar 21, 2021 at 03:01:15PM -0300, Alvaro Herrera wrote:\n> > > But note that it doesn't check if an existing constraint \"implies\" the new\n> > > constraint - maybe it should.\n> > \n> > Hm, I'm not sure I want to do that, because that means that if I later\n> > have to attach the partition again with the same partition bounds, then\n> > I might have to incur a scan to recheck the constraint. I think we want\n> > to make the new constraint be as tight as possible ...\n> \n> If it *implies* the partition constraint, then it's at least as tight (and\n> maybe tighter), yes ?\n> \n> I think you're concerned with the case that someone has a partition with\n> \"tight\" bounds like (a>=200 and a<300) and a check constraint that's \"less\n> tight\" like (a>=100 AND a<400). In that case, the loose check constraint\n> doesn't imply the tighter partition constraint, so your patch would add a\n> non-redundant constraint.\n> \n> I'm interested in the case that someone has a check constraint that almost but\n> not exactly matches the partition constraint, like (a<300 AND a>=200). In that\n> case, your patch adds a redundant constraint.\n> \n> I wrote a patch which seems to effect my preferred behavior - please check.\n\nOn Sat, Apr 10, 2021 at 02:13:26PM -0500, Justin Pryzby wrote:\n> I suppose the docs should be updated for the exact behavior, maybe even without\n> this patch:\n> \n> |A <literal>CHECK</literal> constraint\n> |that duplicates the partition constraint is added to the partition.\n\nI added this as an Opened Item, since it affects user-visible behavior:\nwhether or not a redundant, non-equal constraint is added.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 13 Apr 2021 12:58:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY"
}
] |
[
{
"msg_contents": "",
"msg_date": "Tue, 13 Apr 2021 11:01:16 -0700",
"msg_from": "Nhi Dang <nhidangsd@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC 2021 Proposal Document"
},
{
"msg_contents": "Hello,\n\nOn Sat, Apr 17, 2021 at 8:42 PM Nhi Dang <nhidangsd@gmail.com> wrote:\n\n>\n>\nThank you for this document.\n\nIt looks like there are a couple problems with this - at least if this was\nintended\nto be a submission for GSoC 2021:\n\n - The deadline for submissions was April 13th (\n https://summerofcode.withgoogle.com/dashboard/timeline/ )\n - Submissions need to be made using the GSoC website\n\n\nIf you still want to work on this - outside of GSoC - I suggest also reading\nthis thread here:\n\nhttps://www.postgresql.org/message-id/CAFwT4nAVEFJQF8nQ%2BmWc6M2Eh8nG2uEhV0bdY7wfaT2aLERUAQ%40mail.gmail.com\n\nThis discusses a number of suggested changes, and might be useful for your\nproposal.\n\n\nRegards,\n\n-- \nAndreas 'ads' Scherbaum\nGerman PostgreSQL User Group\nEuropean PostgreSQL User Group - Board of Directors\nVolunteer Regional Contact, Germany - PostgreSQL Project\n\nHello,On Sat, Apr 17, 2021 at 8:42 PM Nhi Dang <nhidangsd@gmail.com> wrote:\nThank you for this document.It looks like there are a couple problems with this - at least if this was intendedto be a submission for GSoC 2021:The deadline for submissions was April 13th ( https://summerofcode.withgoogle.com/dashboard/timeline/ )Submissions need to be made using the GSoC websiteIf you still want to work on this - outside of GSoC - I suggest also readingthis thread here:https://www.postgresql.org/message-id/CAFwT4nAVEFJQF8nQ%2BmWc6M2Eh8nG2uEhV0bdY7wfaT2aLERUAQ%40mail.gmail.comThis discusses a number of suggested changes, and might be useful for your proposal.Regards,-- Andreas 'ads' ScherbaumGerman PostgreSQL User GroupEuropean PostgreSQL User Group - Board of DirectorsVolunteer Regional Contact, Germany - PostgreSQL Project",
"msg_date": "Tue, 20 Apr 2021 03:38:33 +0200",
"msg_from": "\"Andreas 'ads' Scherbaum\" <ads@pgug.de>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2021 Proposal Document"
},
{
"msg_contents": "Hi there !\n\nWhat's about GSoC 2022 ?\n\nBest regards,\nOleg\n\nOn Tue, Apr 20, 2021 at 4:38 AM Andreas 'ads' Scherbaum <ads@pgug.de> wrote:\n\n>\n> Hello,\n>\n> On Sat, Apr 17, 2021 at 8:42 PM Nhi Dang <nhidangsd@gmail.com> wrote:\n>\n>>\n>>\n> Thank you for this document.\n>\n> It looks like there are a couple problems with this - at least if this was\n> intended\n> to be a submission for GSoC 2021:\n>\n> - The deadline for submissions was April 13th (\n> https://summerofcode.withgoogle.com/dashboard/timeline/ )\n> - Submissions need to be made using the GSoC website\n>\n>\n> If you still want to work on this - outside of GSoC - I suggest also\n> reading\n> this thread here:\n>\n>\n> https://www.postgresql.org/message-id/CAFwT4nAVEFJQF8nQ%2BmWc6M2Eh8nG2uEhV0bdY7wfaT2aLERUAQ%40mail.gmail.com\n>\n> This discusses a number of suggested changes, and might be useful for your\n> proposal.\n>\n>\n> Regards,\n>\n> --\n> Andreas 'ads' Scherbaum\n> German PostgreSQL User Group\n> European PostgreSQL User Group - Board of Directors\n> Volunteer Regional Contact, Germany - PostgreSQL Project\n>\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\nHi there !What's about GSoC 2022 ?Best regards,OlegOn Tue, Apr 20, 2021 at 4:38 AM Andreas 'ads' Scherbaum <ads@pgug.de> wrote:Hello,On Sat, Apr 17, 2021 at 8:42 PM Nhi Dang <nhidangsd@gmail.com> wrote:\nThank you for this document.It looks like there are a couple problems with this - at least if this was intendedto be a submission for GSoC 2021:The deadline for submissions was April 13th ( https://summerofcode.withgoogle.com/dashboard/timeline/ )Submissions need to be made using the GSoC websiteIf you still want to work on this - outside of GSoC - I suggest also readingthis thread here:https://www.postgresql.org/message-id/CAFwT4nAVEFJQF8nQ%2BmWc6M2Eh8nG2uEhV0bdY7wfaT2aLERUAQ%40mail.gmail.comThis discusses a number of suggested changes, and might be useful for your proposal.Regards,-- Andreas 'ads' ScherbaumGerman PostgreSQL User GroupEuropean PostgreSQL User Group - Board of DirectorsVolunteer Regional Contact, Germany - PostgreSQL Project\n-- Postgres Professional: http://www.postgrespro.comThe Russian Postgres Company",
"msg_date": "Wed, 27 Oct 2021 16:07:45 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2021 Proposal Document"
}
] |
[
{
"msg_contents": "Attached are some draft patches to convert almost all of the\ncontrib modules' SQL functions to use SQL-standard function bodies.\nThe point of this is to remove the residual search_path security\nhazards that we couldn't fix in commits 7eeb1d986 et al. Since\na SQL-style function body is fully parsed at creation time,\nits object references are not subject to capture by the run-time\nsearch path. Possibly there are small performance benefits too,\nthough I've not tried to measure that.\n\nI've not touched the documentation yet. I suppose that we can\ntone down the warnings added by 7eeb1d986 quite a bit, maybe\nreplacing them with just \"be sure to use version x.y or later\".\nHowever I think we may still need an assumption that earthdistance\nand cube are in the same schema --- any comments on that?\n\nI'd like to propose squeezing these changes into v14, even though\nwe're past feature freeze. Reason one is that this is less a\nnew feature than a security fix; reason two is that this provides\nsome non-artificial test coverage for the SQL-function-body feature.\n\nBTW, there still remain a couple of old-style SQL functions in\ncontrib/adminpack and contrib/lo. AFAICS those are unconditionally\nsecure, so I didn't bother with them.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 13 Apr 2021 18:26:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Converting contrib SQL functions to new style"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 06:26:34PM -0400, Tom Lane wrote:\n> Attached are some draft patches to convert almost all of the\n> contrib modules' SQL functions to use SQL-standard function bodies.\n> The point of this is to remove the residual search_path security\n> hazards that we couldn't fix in commits 7eeb1d986 et al. Since\n> a SQL-style function body is fully parsed at creation time,\n> its object references are not subject to capture by the run-time\n> search path.\n\nAre there any inexact matches in those function/operator calls? Will that\nmatter more or less than it does today?\n\n> However I think we may still need an assumption that earthdistance\n> and cube are in the same schema --- any comments on that?\n\nThat part doesn't change, indeed.\n\n> I'd like to propose squeezing these changes into v14, even though\n> we're past feature freeze. Reason one is that this is less a\n> new feature than a security fix; reason two is that this provides\n> some non-artificial test coverage for the SQL-function-body feature.\n\nDogfooding like this is good. What about the SQL-language functions that\ninitdb creates?\n\n\n",
"msg_date": "Tue, 13 Apr 2021 19:08:27 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Tue, Apr 13, 2021 at 06:26:34PM -0400, Tom Lane wrote:\n>> Attached are some draft patches to convert almost all of the\n>> contrib modules' SQL functions to use SQL-standard function bodies.\n>> The point of this is to remove the residual search_path security\n>> hazards that we couldn't fix in commits 7eeb1d986 et al. Since\n>> a SQL-style function body is fully parsed at creation time,\n>> its object references are not subject to capture by the run-time\n>> search path.\n\n> Are there any inexact matches in those function/operator calls? Will that\n> matter more or less than it does today?\n\nI can't claim to have looked closely for inexact matches. It should\nmatter less than today, since there's a hazard only during creation\n(with a somewhat-controlled search path) and not during use. But\nthat doesn't automatically eliminate the issue.\n\n>> I'd like to propose squeezing these changes into v14, even though\n>> we're past feature freeze. Reason one is that this is less a\n>> new feature than a security fix; reason two is that this provides\n>> some non-artificial test coverage for the SQL-function-body feature.\n\n> Dogfooding like this is good. What about the SQL-language functions that\n> initdb creates?\n\nHadn't thought about those, but converting them seems like a good idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 23:11:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 11:11:13PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Tue, Apr 13, 2021 at 06:26:34PM -0400, Tom Lane wrote:\n> >> Attached are some draft patches to convert almost all of the\n> >> contrib modules' SQL functions to use SQL-standard function bodies.\n> >> The point of this is to remove the residual search_path security\n> >> hazards that we couldn't fix in commits 7eeb1d986 et al. Since\n> >> a SQL-style function body is fully parsed at creation time,\n> >> its object references are not subject to capture by the run-time\n> >> search path.\n> \n> > Are there any inexact matches in those function/operator calls? Will that\n> > matter more or less than it does today?\n> \n> I can't claim to have looked closely for inexact matches. It should\n> matter less than today, since there's a hazard only during creation\n> (with a somewhat-controlled search path) and not during use. But\n> that doesn't automatically eliminate the issue.\n\nOnce CREATE EXTENSION is over, things are a great deal safer under this\nproposal, as you say. I suspect it makes CREATE EXTENSION more hazardous.\nToday, typical SQL commands in extension creation scripts don't activate\ninexact argument type matching. You were careful to make each script clear\nthe search_path around commands deviating from that (commit 7eeb1d9). I think\n\"CREATE FUNCTION plus1dot1(int) RETURNS numeric LANGUAGE SQL RETURN $1 + 1.1;\"\nin a trusted extension script would constitute a security vulnerability, since\nit can lock in the wrong operator.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 05:58:11 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 8:58 AM Noah Misch <noah@leadboat.com> wrote:\n> Once CREATE EXTENSION is over, things are a great deal safer under this\n> proposal, as you say. I suspect it makes CREATE EXTENSION more hazardous.\n> Today, typical SQL commands in extension creation scripts don't activate\n> inexact argument type matching. You were careful to make each script clear\n> the search_path around commands deviating from that (commit 7eeb1d9). I think\n> \"CREATE FUNCTION plus1dot1(int) RETURNS numeric LANGUAGE SQL RETURN $1 + 1.1;\"\n> in a trusted extension script would constitute a security vulnerability, since\n> it can lock in the wrong operator.\n\nI don't understand how that can happen, unless we've failed to secure\nthe search_path. And, if we've failed to secure the search_path, I\nthink we are in a lot of trouble no matter what else we do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 09:55:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 14, 2021 at 8:58 AM Noah Misch <noah@leadboat.com> wrote:\n>> Once CREATE EXTENSION is over, things are a great deal safer under this\n>> proposal, as you say. I suspect it makes CREATE EXTENSION more hazardous.\n>> Today, typical SQL commands in extension creation scripts don't activate\n>> inexact argument type matching. You were careful to make each script clear\n>> the search_path around commands deviating from that (commit 7eeb1d9). I think\n>> \"CREATE FUNCTION plus1dot1(int) RETURNS numeric LANGUAGE SQL RETURN $1 + 1.1;\"\n>> in a trusted extension script would constitute a security vulnerability, since\n>> it can lock in the wrong operator.\n\n> I don't understand how that can happen, unless we've failed to secure\n> the search_path. And, if we've failed to secure the search_path, I\n> think we are in a lot of trouble no matter what else we do.\n\nThe situation of interest is where you are trying to install an extension\ninto a schema that also contains malicious objects. We've managed to make\nmost of the commands you might use in an extension script secure against\nthat situation, and Noah wants to hold SQL-function creation to that same\nstandard.\n\nMy concern in this patch is rendering SQL functions safe against untrusted\nsearch_path at *time of use*, which is really an independent security\nconcern.\n\nIf you're willing to assume there's nothing untrustworthy in your\nsearch_path, then there's no issue and nothing to fix. Unfortunately,\nthat seems like a rather head-in-the-sand standpoint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:49:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "\n\n> On Apr 13, 2021, at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> However I think we may still need an assumption that earthdistance\n> and cube are in the same schema --- any comments on that?\n\nThis is probably not worth doing, and we are already past feature freeze, but adding syntax to look up the namespace of an extension might help. The problem seems to be that we can't syntactically refer to the schema of an extension. We have to instead query pg_catalog.pg_extension joined against pg_catalog.pg_namespace and then interpolate the namespace name into strings that get executed, which is ugly.\n\nThis syntax is perhaps a non-starter, but conceptually something like:\n\n-CREATE DOMAIN earth AS cube\n+CREATE DOMAIN earthdistance::->earth AS cube::->cube\n\nThen we'd perhaps extend RangeVar with an extensionname field and have either a schemaname or an extensionname be looked up in places where we currently lookup schemas, adding a catcache for extensions. (Like I said, probably not worth doing.)\n\n\nWe could get something like this working just inside the CREATE EXTENSION command if we expanded on the @extschema@ idea a bit. At first I thought this idea would suffer race conditions with concurrent modifications of pg_extension or pg_namespace, but it looks like we already have a snapshot when processing the script file, so:\n\n-CREATE DOMAIN earth AS cube\n+CREATE DOMAIN @@earthdistance@@::earth AS @@cube@@::cube\n\nor such, with @@foo@@ being parsed out, looked up in pg_extension join pg_namespace, and substituted back in.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:18:30 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The situation of interest is where you are trying to install an extension\n> into a schema that also contains malicious objects. We've managed to make\n> most of the commands you might use in an extension script secure against\n> that situation, and Noah wants to hold SQL-function creation to that same\n> standard.\n\nOh, I was forgetting that the creation schema has to be first in your\nsearch path. :-(\n\nDoes the idea of allowing the creation schema to be set separately\nhave any legs? Because it seems like that would help here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 13:19:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Apr 13, 2021, at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However I think we may still need an assumption that earthdistance\n>> and cube are in the same schema --- any comments on that?\n\n> This is probably not worth doing, and we are already past feature\n> freeze, but adding syntax to look up the namespace of an extension might\n> help.\n\nYeah, that idea was discussed before (perhaps only in private\nsecurity-team threads, though). We didn't do anything about it because\nat the time there didn't seem to be pressing need, but in the context\nof SQL function bodies there's an obvious use-case.\n\n> We could get something like this working just inside the CREATE EXTENSION command if we expanded on the @extschema@ idea a bit. At first I thought this idea would suffer race conditions with concurrent modifications of pg_extension or pg_namespace, but it looks like we already have a snapshot when processing the script file, so:\n\n> -CREATE DOMAIN earth AS cube\n> +CREATE DOMAIN @@earthdistance@@::earth AS @@cube@@::cube\n\nRight, extending the @extschema@ mechanism is what was discussed,\nthough I think I'd lean towards something like @extschema:cube@\nto denote the schema of a referenced extension \"cube\".\n\nI'm not sure this is useful enough to break feature freeze for,\nbut I'm +1 for investigating it for v15.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Apr 2021 13:36:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 14, 2021 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The situation of interest is where you are trying to install an extension\n>> into a schema that also contains malicious objects. We've managed to make\n>> most of the commands you might use in an extension script secure against\n>> that situation, and Noah wants to hold SQL-function creation to that same\n>> standard.\n\n> Oh, I was forgetting that the creation schema has to be first in your\n> search path. :-(\n\n> Does the idea of allowing the creation schema to be set separately\n> have any legs? Because it seems like that would help here.\n\nDoesn't help that much, because you still have to reference objects\nalready created by your own extension, so it's hard to see how the\ntarget schema won't need to be in the path.\n\n[ thinks for awhile ... ]\n\nCould we hack things so that extension scripts are only allowed to\nreference objects created (a) by the system, (b) earlier in the\nsame script, or (c) owned by one of the declared prerequisite\nextensions? Seems like that might provide a pretty bulletproof\ndefense against trojan-horse objects, though I'm not sure how much\nof a pain it'd be to implement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Apr 2021 13:41:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Doesn't help that much, because you still have to reference objects\n> already created by your own extension, so it's hard to see how the\n> target schema won't need to be in the path.\n\nOh, woops.\n\n> Could we hack things so that extension scripts are only allowed to\n> reference objects created (a) by the system, (b) earlier in the\n> same script, or (c) owned by one of the declared prerequisite\n> extensions? Seems like that might provide a pretty bulletproof\n> defense against trojan-horse objects, though I'm not sure how much\n> of a pain it'd be to implement.\n\nThat doesn't seem like a crazy idea, but the previous idea of having\nsome magic syntax that means \"the schema where extension FOO is\" seems\nlike it might be easier to implement and more generally useful. If we\ntaught the core system that %!!**&^%?(earthdistance) means \"the schema\nwhere the earthdistance is located\" that syntax might get some use\neven outside of extension creation scripts, which seems like it could\nbe a good thing, just because code that is used more widely is more\nlikely to have been debugged to the point where it actually works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 13:56:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 14, 2021 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Could we hack things so that extension scripts are only allowed to\n>> reference objects created (a) by the system, (b) earlier in the\n>> same script, or (c) owned by one of the declared prerequisite\n>> extensions? Seems like that might provide a pretty bulletproof\n>> defense against trojan-horse objects, though I'm not sure how much\n>> of a pain it'd be to implement.\n\n> That doesn't seem like a crazy idea, but the previous idea of having\n> some magic syntax that means \"the schema where extension FOO is\" seems\n> like it might be easier to implement and more generally useful.\n\nI think that's definitely useful, but it's not a fix for the\nreference-capture problem unless you care to assume that the other\nextension's schema is free of trojan-horse objects. So I'm thinking\nthat we really ought to pursue both ideas.\n\nThis may mean that squeezing these contrib changes into v14 is a lost\ncause. We certainly shouldn't try to do what I suggest above for\nv14; but without it, these changes are just moving the security\nissue to a different place rather than eradicating it completely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Apr 2021 14:03:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "\nOn 4/14/21 2:03 PM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Wed, Apr 14, 2021 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Could we hack things so that extension scripts are only allowed to\n>>> reference objects created (a) by the system, (b) earlier in the\n>>> same script, or (c) owned by one of the declared prerequisite\n>>> extensions? Seems like that might provide a pretty bulletproof\n>>> defense against trojan-horse objects, though I'm not sure how much\n>>> of a pain it'd be to implement.\n>> That doesn't seem like a crazy idea, but the previous idea of having\n>> some magic syntax that means \"the schema where extension FOO is\" seems\n>> like it might be easier to implement and more generally useful.\n> I think that's definitely useful, but it's not a fix for the\n> reference-capture problem unless you care to assume that the other\n> extension's schema is free of trojan-horse objects. So I'm thinking\n> that we really ought to pursue both ideas.\n>\n> This may mean that squeezing these contrib changes into v14 is a lost\n> cause. We certainly shouldn't try to do what I suggest above for\n> v14; but without it, these changes are just moving the security\n> issue to a different place rather than eradicating it completely.\n>\n> \t\t\t\n\n\n\nIs there anything else we should be doing along the eat-your-own-dogfood\nline that doesn't have these security implications?\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 14 Apr 2021 15:32:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/14/21 2:03 PM, Tom Lane wrote:\n>> This may mean that squeezing these contrib changes into v14 is a lost\n>> cause. We certainly shouldn't try to do what I suggest above for\n>> v14; but without it, these changes are just moving the security\n>> issue to a different place rather than eradicating it completely.\n\n> Is there anything else we should be doing along the eat your own dogfood\n> line that don't have these security implications?\n\nWe can still convert the initdb-created SQL functions to new style,\nsince there's no security threat during initdb. I'll make a patch\nfor that soon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Apr 2021 16:13:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On 4/14/21 7:36 PM, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Apr 13, 2021, at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> However I think we may still need an assumption that earthdistance\n>>> and cube are in the same schema --- any comments on that?\n> \n>> This is probably not worth doing, and we are already past feature\n>> freeze, but adding syntax to look up the namespace of an extension might\n>> help.\n> \n> Yeah, that idea was discussed before (perhaps only in private\n> security-team threads, though). We didn't do anything about it because\n> at the time there didn't seem to be pressing need, but in the context\n> of SQL function bodies there's an obvious use-case.\n> \n>> We could get something like this working just inside the CREATE EXTENSION command if we expanded on the @extschema@ idea a bit. At first I thought this idea would suffer race conditions with concurrent modifications of pg_extension or pg_namespace, but it looks like we already have a snapshot when processing the script file, so:\n> \n>> -CREATE DOMAIN earth AS cube\n>> +CREATE DOMAIN @@earthdistance@@::earth AS @@cube@@::cube\n> \n> Right, extending the @extschema@ mechanism is what was discussed,\n> though I think I'd lean towards something like @extschema:cube@\n> to denote the schema of a referenced extension \"cube\".\n> \n> I'm not sure this is useful enough to break feature freeze for,\n> but I'm +1 for investigating it for v15.\nJust like we have a pseudo \"$user\" schema, could we have a pseudo\n\"$extension\" catalog? That should avoid changing grammar rules too much.\n\nCREATE TABLE unaccented_words (\n word \"$extension\".citext.citext,\n CHECK (word = \"$extension\".unaccent.unaccent(word))\n);\n\n-- \nVik Fearing\n\n\n",
"msg_date": "Wed, 14 Apr 2021 23:47:53 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "\n\n> On Apr 14, 2021, at 2:47 PM, Vik Fearing <vik@postgresfriends.org> wrote:\n> \n> On 4/14/21 7:36 PM, Tom Lane wrote:\n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>>> On Apr 13, 2021, at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> However I think we may still need an assumption that earthdistance\n>>>> and cube are in the same schema --- any comments on that?\n>> \n>>> This is probably not worth doing, and we are already past feature\n>>> freeze, but adding syntax to look up the namespace of an extension might\n>>> help.\n>> \n>> Yeah, that idea was discussed before (perhaps only in private\n>> security-team threads, though). We didn't do anything about it because\n>> at the time there didn't seem to be pressing need, but in the context\n>> of SQL function bodies there's an obvious use-case.\n>> \n>>> We could get something like this working just inside the CREATE EXTENSION command if we expanded on the @extschema@ idea a bit. At first I thought this idea would suffer race conditions with concurrent modifications of pg_extension or pg_namespace, but it looks like we already have a snapshot when processing the script file, so:\n>> \n>>> -CREATE DOMAIN earth AS cube\n>>> +CREATE DOMAIN @@earthdistance@@::earth AS @@cube@@::cube\n>> \n>> Right, extending the @extschema@ mechanism is what was discussed,\n>> though I think I'd lean towards something like @extschema:cube@\n>> to denote the schema of a referenced extension \"cube\".\n>> \n>> I'm not sure this is useful enough to break feature freeze for,\n>> but I'm +1 for investigating it for v15.\n> Just like we have a pseudo \"$user\" schema, could we have a pseudo\n> \"$extension\" catalog? That should avoid changing grammar rules too much.\n> \n> CREATE TABLE unaccented_words (\n> word \"$extension\".citext.citext,\n> CHECK (word = \"$extension\".unaccent.unaccent(word)\n> );\n\nHaving a single variable $extension might help in many cases, but I don't see how to use it to handle the remaining cross-extension references, such as earthdistance needing to reference cube.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 14 Apr 2021 15:18:48 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On 4/15/21 12:18 AM, Mark Dilger wrote:\n> \n> \n>> On Apr 14, 2021, at 2:47 PM, Vik Fearing <vik@postgresfriends.org> wrote:\n>>\n>> On 4/14/21 7:36 PM, Tom Lane wrote:\n>>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>>>> On Apr 13, 2021, at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> However I think we may still need an assumption that earthdistance\n>>>>> and cube are in the same schema --- any comments on that?\n>>>\n>>>> This is probably not worth doing, and we are already past feature\n>>>> freeze, but adding syntax to look up the namespace of an extension might\n>>>> help.\n>>>\n>>> Yeah, that idea was discussed before (perhaps only in private\n>>> security-team threads, though). We didn't do anything about it because\n>>> at the time there didn't seem to be pressing need, but in the context\n>>> of SQL function bodies there's an obvious use-case.\n>>>\n>>>> We could get something like this working just inside the CREATE EXTENSION command if we expanded on the @extschema@ idea a bit. At first I thought this idea would suffer race conditions with concurrent modifications of pg_extension or pg_namespace, but it looks like we already have a snapshot when processing the script file, so:\n>>>\n>>>> -CREATE DOMAIN earth AS cube\n>>>> +CREATE DOMAIN @@earthdistance@@::earth AS @@cube@@::cube\n>>>\n>>> Right, extending the @extschema@ mechanism is what was discussed,\n>>> though I think I'd lean towards something like @extschema:cube@\n>>> to denote the schema of a referenced extension \"cube\".\n>>>\n>>> I'm not sure this is useful enough to break feature freeze for,\n>>> but I'm +1 for investigating it for v15.\n>> Just like we have a pseudo \"$user\" schema, could we have a pseudo\n>> \"$extension\" catalog? That should avoid changing grammar rules too much.\n>>\n>> CREATE TABLE unaccented_words (\n>> word \"$extension\".citext.citext,\n>> CHECK (word = \"$extension\".unaccent.unaccent(word)\n>> );\n> \n> Having a single variable $extension might help in many cases, but I don't see how to use it to handle the remaining cross-extension references, such as earthdistance needing to reference cube.\n\n\nSorry, I hadn't realized that was a real example so I made up my own.\n\nBasically my idea is to use the fully qualified catalog.schema.object\nsyntax where the catalog is a special \"$extension\" value (meaning we\nwould have to forbid that as an actual database name) and the schema is\nthe name of the extension whose schema we want. The object is then just\nthe object.\n\n\nCREATE DOMAIN earth AS \"$extension\".cube.cube\n CONSTRAINT not_point check(\"$extension\".cube.cube_is_point(value))\n CONSTRAINT not_3d check(\"$extension\".cube.cube_dim(value) <= 3)\n ...;\n\n\nCREATE FUNCTION earth_box(earth, float8)\n RETURNS \"$extension\".cube.cube\n LANGUAGE sql\n IMMUTABLE PARALLEL SAFE STRICT\nRETURN \"$extension\".cube.cube_enlarge($1, gc_to_sec($2), 3);\n\n\nIf I had my druthers, we would spell it pg_extension instead of\n\"$extension\" because I hate double-quoting identifiers, but that's just\nbikeshedding and has little to do with the concept itself.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 15 Apr 2021 02:18:56 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On 2021-Apr-15, Vik Fearing wrote:\n\n> CREATE DOMAIN earth AS \"$extension\".cube.cube\n> CONSTRAINT not_point check(\"$extension\".cube.cube_is_point(value))\n> CONSTRAINT not_3d check(\"$extension\".cube.cube_dim(value <= 3)\n> ...;\n\nI find this syntax pretty weird -- here, the \".cube.\" part of the\nidentifier is acting as an argument of sorts for the preceding\n$extension thingy. This looks very surprising.\n\nSomething similar to OPERATOR() syntax may be more palatable:\n\n CREATE DOMAIN earth AS PG_EXTENSION_SCHEMA(cube).cube\n CONSTRAINT not_point check(PG_EXTENSION_SCHEMA(cube).cube_is_point(value))\n CONSTRAINT not_3d check(PG_EXTENSION_SCHEMA(cube).cube_dim(value <= 3)\n ...;\n\nHere, the PG_EXTENSION_SCHEMA() construct expands into the schema of the\ngiven extension. This looks more natural to me, since the extension\nthat acts as argument to PG_EXTENSION_SCHEMA() does look like an\nargument.\n\nI don't know if the parser would like this, though.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 15 Apr 2021 13:23:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 02:03:56PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Apr 14, 2021 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Could we hack things so that extension scripts are only allowed to\n> >> reference objects created (a) by the system, (b) earlier in the\n> >> same script, or (c) owned by one of the declared prerequisite\n> >> extensions? Seems like that might provide a pretty bulletproof\n> >> defense against trojan-horse objects, though I'm not sure how much\n> >> of a pain it'd be to implement.\n\nGood idea.\n\n> > That doesn't seem like a crazy idea, but the previous idea of having\n> > some magic syntax that means \"the schema where extension FOO is\" seems\n> > like it might be easier to implement and more generally useful.\n> \n> I think that's definitely useful, but it's not a fix for the\n> reference-capture problem unless you care to assume that the other\n> extension's schema is free of trojan-horse objects.\n\nI could see using that, perhaps in a non-SQL-language function. I agree it\nsolves different problems.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 18:40:04 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "On 14.04.21 00:26, Tom Lane wrote:\n> Attached are some draft patches to convert almost all of the\n> contrib modules' SQL functions to use SQL-standard function bodies.\n\nThis first patch is still the patch of record in CF 2021-09, but from \nthe subsequent discussion, it seems more work is being contemplated.\n\n\n",
"msg_date": "Wed, 1 Sep 2021 09:26:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Converting contrib SQL functions to new style"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 14.04.21 00:26, Tom Lane wrote:\n>> Attached are some draft patches to convert almost all of the\n>> contrib modules' SQL functions to use SQL-standard function bodies.\n\n> This first patch is still the patch of record in CF 2021-09, but from \n> the subsequent discussion, it seems more work is being contemplated.\n\nYeah, it looks like we already did the unconditionally-safe part\n(i.e. making initdb-created SQL functions use new style, cf 767982e36).\n\nThe rest of this is stuck pending investigation of the ideas about\nmaking new-style function creation safer when the creation-time path\nisn't secure, so I suppose we should mark it RWF rather than leaving\nit in the queue. Will go do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Sep 2021 13:27:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Converting contrib SQL functions to new style"
}
]
[
{
"msg_contents": "Hi,\n\ncommit 676887a3 added support for jsonb subscripting.\n\nMany thanks for working on this. I really like the improved syntax.\n\nI was also hoping for some performance benefits,\nbut my testing shows that\n\n jsonb_value['existing_key'] = new_value;\n\ntakes just as long as\n\n jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'], new_value);\n\nwhich is a bit surprising to me. Shouldn't subscripting be a lot faster, since it could modify the existing data structure in-place? What am I missing here?\n\nI came to think of this new functionality when trying to optimize some\nPL/pgSQL code where the bottleneck turned out to be lots of calls to jsonb_set() for large jsonb objects.\n\nHere is the output from attached bench:\n\nn=10000\n00:00:00.002628 jsonb := jsonb_set(jsonb, ARRAY[existing key], value);\n00:00:00.002778 jsonb := jsonb_set(jsonb, ARRAY[new key], value);\n00:00:00.002332 jsonb[existing key] := value;\n00:00:00.002794 jsonb[new key] := value;\nn=100000\n00:00:00.042843 jsonb := jsonb_set(jsonb, ARRAY[existing key], value);\n00:00:00.046515 jsonb := jsonb_set(jsonb, ARRAY[new key], value);\n00:00:00.044974 jsonb[existing key] := value;\n00:00:00.075429 jsonb[new key] := value;\nn=1000000\n00:00:00.420808 jsonb := jsonb_set(jsonb, ARRAY[existing key], value);\n00:00:00.449622 jsonb := jsonb_set(jsonb, ARRAY[new key], value);\n00:00:00.31834 jsonb[existing key] := value;\n00:00:00.527904 jsonb[new key] := value;\n\nMany thanks for clarifying.\n\nBest regards,\n\nJoel",
"msg_date": "Wed, 14 Apr 2021 07:39:23 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "jsonb subscripting assignment performance"
},
{
"msg_contents": "st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> Hi,\n>\n> commit 676887a3 added support for jsonb subscripting.\n>\n> Many thanks for working on this. I really like the improved syntax.\n>\n> I was also hoping for some performance benefits,\n> but my testing shows that\n>\n> jsonb_value['existing_key'] = new_value;\n>\n> takes just as long time as\n>\n> jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'], new_value);\n>\n> which is a bit surprising to me. Shouldn't subscripting be a lot faster,\n> since it could modify the existing data structure in-place? What am I\n> missing here?\n>\n\nno - it doesn't support in-place modification. Only arrays and records\nsupport it.\n\n\n> I came to think of the this new functionality when trying to optimize some\n> PL/pgSQL code where the bottle-neck turned out to be lots of calls\n> to jsonb_set() for large jsonb objects.\n>\n\nsure - there is big room for optimization. But this patch was big enough\nwithout its optimization. And it was not clear whether it would be committed\n(it waited in the commitfest application for 4 years). So I accepted the\nimplemented behaviour (without inplace update). Now, this patch is in core,\nand anybody can work on other possible optimizations.\n\nRegards\n\nPavel\n\n\n>\n> Here is the output from attached bench:\n>\n> n=10000\n> 00:00:00.002628 jsonb := jsonb_set(jsonb, ARRAY[existing key], value);\n> 00:00:00.002778 jsonb := jsonb_set(jsonb, ARRAY[new key], value);\n> 00:00:00.002332 jsonb[existing key] := value;\n> 00:00:00.002794 jsonb[new key] := value;\n> n=100000\n> 00:00:00.042843 jsonb := jsonb_set(jsonb, ARRAY[existing key], value);\n> 00:00:00.046515 jsonb := jsonb_set(jsonb, ARRAY[new key], value);\n> 00:00:00.044974 jsonb[existing key] := value;\n> 00:00:00.075429 jsonb[new key] := value;\n> n=1000000\n> 00:00:00.420808 jsonb := jsonb_set(jsonb, ARRAY[existing key], value);\n> 00:00:00.449622 jsonb := jsonb_set(jsonb, ARRAY[new key], value);\n> 00:00:00.31834 jsonb[existing key] := value;\n> 00:00:00.527904 jsonb[new key] := value;\n>\n> Many thanks for clarifying.\n>\n> Best regards,\n>\n> Joel\n>",
"msg_date": "Wed, 14 Apr 2021 09:20:08 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "On Wed, Apr 14, 2021, at 09:20, Pavel Stehule wrote:\n> sure - there is big room for optimization. But this patch was big enough without its optimization. And it was not clean, if I will be committed or not (it waited in commitfest application for 4 years). So I accepted implemented behaviour (without inplace update). Now, this patch is in core, and anybody can work on others possible optimizations.\n\nThanks for explaining.\n\nDo we have a rough idea of how in-place modification could be implemented in a non-invasive, non-controversial way that ought to be accepted by the project, if done right? Or are there other complicated problems that need to be solved first?\n\nI'm asking because I could be interested in working on this, but I know my limitations when it comes to C, so I want to get an idea of whether it should be more or less straightforward, or if we already know beforehand that it would require committer-level expertise of the PostgreSQL code base for any realistic chance of being successful.\n\n/Joel",
"msg_date": "Wed, 14 Apr 2021 09:50:02 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "> On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n> st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> > Hi,\n> >\n> > commit 676887a3 added support for jsonb subscripting.\n> >\n> > Many thanks for working on this. I really like the improved syntax.\n> >\n> > I was also hoping for some performance benefits,\n> > but my testing shows that\n> >\n> > jsonb_value['existing_key'] = new_value;\n> >\n> > takes just as long time as\n> >\n> > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'], new_value);\n> >\n> > which is a bit surprising to me. Shouldn't subscripting be a lot faster,\n> > since it could modify the existing data structure in-place? What am I\n> > missing here?\n> >\n>\n> no - it doesn't support in-place modification. Only arrays and records\n> support it.\n>\n>\n> > I came to think of the this new functionality when trying to optimize some\n> > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n> > to jsonb_set() for large jsonb objects.\n> >\n>\n> sure - there is big room for optimization. But this patch was big enough\n> without its optimization. And it was not clean, if I will be committed or\n> not (it waited in commitfest application for 4 years). So I accepted\n> implemented behaviour (without inplace update). Now, this patch is in core,\n> and anybody can work on others possible optimizations.\n\nRight, jsonb subscripting deals mostly with the syntax part and doesn't\nchange internal jsonb behaviour. If I understand the original question\ncorrectly, \"in-place\" here means updating of e.g. just one particular\nkey within a jsonb object, since jsonb_set looks like an overwrite of\nthe whole jsonb. If so, then update will still cause the whole jsonb to\nbe updated, there is no partial update functionality for the on-disk\nformat. Although there is work going on to optimize this in case when\njsonb is big enough to be put into a toast table (partial toast\ndecompression thread, or bytea appendable toast).\n\n\n",
"msg_date": "Wed, 14 Apr 2021 09:57:33 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "st 14. 4. 2021 v 9:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n> > st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org>\n> napsal:\n> >\n> > > Hi,\n> > >\n> > > commit 676887a3 added support for jsonb subscripting.\n> > >\n> > > Many thanks for working on this. I really like the improved syntax.\n> > >\n> > > I was also hoping for some performance benefits,\n> > > but my testing shows that\n> > >\n> > > jsonb_value['existing_key'] = new_value;\n> > >\n> > > takes just as long time as\n> > >\n> > > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'],\n> new_value);\n> > >\n> > > which is a bit surprising to me. Shouldn't subscripting be a lot\n> faster,\n> > > since it could modify the existing data structure in-place? What am I\n> > > missing here?\n> > >\n> >\n> > no - it doesn't support in-place modification. Only arrays and records\n> > support it.\n> >\n> >\n> > > I came to think of the this new functionality when trying to optimize\n> some\n> > > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n> > > to jsonb_set() for large jsonb objects.\n> > >\n> >\n> > sure - there is big room for optimization. But this patch was big enough\n> > without its optimization. And it was not clean, if I will be committed or\n> > not (it waited in commitfest application for 4 years). So I accepted\n> > implemented behaviour (without inplace update). Now, this patch is in\n> core,\n> > and anybody can work on others possible optimizations.\n>\n> Right, jsonb subscripting deals mostly with the syntax part and doesn't\n> change internal jsonb behaviour. If I understand the original question\n> correctly, \"in-place\" here means updating of e.g. just one particular\n> key within a jsonb object, since jsonb_set looks like an overwrite of\n> the whole jsonb. 
If so, then update will still cause the whole jsonb to\n> be updated, there is no partial update functionality for the on-disk\n> format. Although there is work going on to optimize this in case when\n> jsonb is big enough to be put into a toast table (partial toast\n> decompression thread, or bytea appendable toast).\n>\n\nAlmost all and almost everywhere Postgres's values are immutable. There is\nonly one exception - runtime plpgsql. \"local variables\" can hold values of\ncomplex values unboxed. Then the repeated update is significantly cheaper.\nNormal non repeated updates have the same speed, because the value should\nbe unboxed and boxed. Outside plpgsql the values are immutable. I think\nthis is a very hard problem, how to update big toasted values effectively,\nand I am not sure if there is a solution. TOAST value is immutable. It\nneeds to introduce some alternative to TOAST. The benefits are clear - it\ncan be nice to have fast append arrays for time series. But this is a very\ndifferent topic.\n\nRegards\n\nPavel\n\nst 14. 4. 2021 v 9:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com> napsal:> On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n> st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> > Hi,\n> >\n> > commit 676887a3 added support for jsonb subscripting.\n> >\n> > Many thanks for working on this. I really like the improved syntax.\n> >\n> > I was also hoping for some performance benefits,\n> > but my testing shows that\n> >\n> > jsonb_value['existing_key'] = new_value;\n> >\n> > takes just as long time as\n> >\n> > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'], new_value);\n> >\n> > which is a bit surprising to me. Shouldn't subscripting be a lot faster,\n> > since it could modify the existing data structure in-place? What am I\n> > missing here?\n> >\n>\n> no - it doesn't support in-place modification. 
Only arrays and records\n> support it.\n>\n>\n> > I came to think of the this new functionality when trying to optimize some\n> > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n> > to jsonb_set() for large jsonb objects.\n> >\n>\n> sure - there is big room for optimization. But this patch was big enough\n> without its optimization. And it was not clean, if I will be committed or\n> not (it waited in commitfest application for 4 years). So I accepted\n> implemented behaviour (without inplace update). Now, this patch is in core,\n> and anybody can work on others possible optimizations.\n\nRight, jsonb subscripting deals mostly with the syntax part and doesn't\nchange internal jsonb behaviour. If I understand the original question\ncorrectly, \"in-place\" here means updating of e.g. just one particular\nkey within a jsonb object, since jsonb_set looks like an overwrite of\nthe whole jsonb. If so, then update will still cause the whole jsonb to\nbe updated, there is no partial update functionality for the on-disk\nformat. Although there is work going on to optimize this in case when\njsonb is big enough to be put into a toast table (partial toast\ndecompression thread, or bytea appendable toast).Almost all and almost everywhere Postgres's values are immutable. There is only one exception - runtime plpgsql. \"local variables\" can hold values of complex values unboxed. Then the repeated update is significantly cheaper. Normal non repeated updates have the same speed, because the value should be unboxed and boxed. Outside plpgsql the values are immutable. I think this is a very hard problem, how to update big toasted values effectively, and I am not sure if there is a solution. TOAST value is immutable. It needs to introduce some alternative to TOAST. The benefits are clear - it can be nice to have fast append arrays for time series. But this is a very different topic.RegardsPavel",
"msg_date": "Wed, 14 Apr 2021 10:09:00 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 11:09 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> st 14. 4. 2021 v 9:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n>> > On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n>> > st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org>\n>> napsal:\n>> >\n>> > > Hi,\n>> > >\n>> > > commit 676887a3 added support for jsonb subscripting.\n>> > >\n>> > > Many thanks for working on this. I really like the improved syntax.\n>> > >\n>> > > I was also hoping for some performance benefits,\n>> > > but my testing shows that\n>> > >\n>> > > jsonb_value['existing_key'] = new_value;\n>> > >\n>> > > takes just as long time as\n>> > >\n>> > > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'],\n>> new_value);\n>> > >\n>> > > which is a bit surprising to me. Shouldn't subscripting be a lot\n>> faster,\n>> > > since it could modify the existing data structure in-place? What am I\n>> > > missing here?\n>> > >\n>> >\n>> > no - it doesn't support in-place modification. Only arrays and records\n>> > support it.\n>> >\n>> >\n>> > > I came to think of the this new functionality when trying to optimize\n>> some\n>> > > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n>> > > to jsonb_set() for large jsonb objects.\n>> > >\n>> >\n>> > sure - there is big room for optimization. But this patch was big enough\n>> > without its optimization. And it was not clean, if I will be committed\n>> or\n>> > not (it waited in commitfest application for 4 years). So I accepted\n>> > implemented behaviour (without inplace update). Now, this patch is in\n>> core,\n>> > and anybody can work on others possible optimizations.\n>>\n>> Right, jsonb subscripting deals mostly with the syntax part and doesn't\n>> change internal jsonb behaviour. If I understand the original question\n>> correctly, \"in-place\" here means updating of e.g. 
just one particular\n>> key within a jsonb object, since jsonb_set looks like an overwrite of\n>> the whole jsonb. If so, then update will still cause the whole jsonb to\n>> be updated, there is no partial update functionality for the on-disk\n>> format. Although there is work going on to optimize this in case when\n>> jsonb is big enough to be put into a toast table (partial toast\n>> decompression thread, or bytea appendable toast).\n>>\n>\n> Almost all and almost everywhere Postgres's values are immutable. There is\n> only one exception - runtime plpgsql. \"local variables\" can hold values of\n> complex values unboxed. Then the repeated update is significantly cheaper.\n> Normal non repeated updates have the same speed, because the value should\n> be unboxed and boxed. Outside plpgsql the values are immutable. I think\n> this is a very hard problem, how to update big toasted values effectively,\n> and I am not sure if there is a solution. TOAST value is immutable. It\n> needs to introduce some alternative to TOAST. The benefits are clear - it\n> can be nice to have fast append arrays for time series. But this is a very\n> different topic.\n>\n\nI and Nikita are working on OLTP jsonb\nhttp://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>\n>\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\nOn Wed, Apr 14, 2021 at 11:09 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:st 14. 4. 2021 v 9:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com> napsal:> On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n> st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> > Hi,\n> >\n> > commit 676887a3 added support for jsonb subscripting.\n> >\n> > Many thanks for working on this. 
I really like the improved syntax.\n> >\n> > I was also hoping for some performance benefits,\n> > but my testing shows that\n> >\n> > jsonb_value['existing_key'] = new_value;\n> >\n> > takes just as long time as\n> >\n> > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'], new_value);\n> >\n> > which is a bit surprising to me. Shouldn't subscripting be a lot faster,\n> > since it could modify the existing data structure in-place? What am I\n> > missing here?\n> >\n>\n> no - it doesn't support in-place modification. Only arrays and records\n> support it.\n>\n>\n> > I came to think of the this new functionality when trying to optimize some\n> > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n> > to jsonb_set() for large jsonb objects.\n> >\n>\n> sure - there is big room for optimization. But this patch was big enough\n> without its optimization. And it was not clean, if I will be committed or\n> not (it waited in commitfest application for 4 years). So I accepted\n> implemented behaviour (without inplace update). Now, this patch is in core,\n> and anybody can work on others possible optimizations.\n\nRight, jsonb subscripting deals mostly with the syntax part and doesn't\nchange internal jsonb behaviour. If I understand the original question\ncorrectly, \"in-place\" here means updating of e.g. just one particular\nkey within a jsonb object, since jsonb_set looks like an overwrite of\nthe whole jsonb. If so, then update will still cause the whole jsonb to\nbe updated, there is no partial update functionality for the on-disk\nformat. Although there is work going on to optimize this in case when\njsonb is big enough to be put into a toast table (partial toast\ndecompression thread, or bytea appendable toast).Almost all and almost everywhere Postgres's values are immutable. There is only one exception - runtime plpgsql. \"local variables\" can hold values of complex values unboxed. Then the repeated update is significantly cheaper. 
Normal non repeated updates have the same speed, because the value should be unboxed and boxed. Outside plpgsql the values are immutable. I think this is a very hard problem, how to update big toasted values effectively, and I am not sure if there is a solution. TOAST value is immutable. It needs to introduce some alternative to TOAST. The benefits are clear - it can be nice to have fast append arrays for time series. But this is a very different topic.I and Nikita are working on OLTP jsonb http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf RegardsPavel \n-- Postgres Professional: http://www.postgrespro.comThe Russian Postgres Company",
"msg_date": "Wed, 14 Apr 2021 12:07:19 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "st 14. 4. 2021 v 11:07 odesílatel Oleg Bartunov <obartunov@postgrespro.ru>\nnapsal:\n\n>\n>\n> On Wed, Apr 14, 2021 at 11:09 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> st 14. 4. 2021 v 9:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n>> napsal:\n>>\n>>> > On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n>>> > st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org>\n>>> napsal:\n>>> >\n>>> > > Hi,\n>>> > >\n>>> > > commit 676887a3 added support for jsonb subscripting.\n>>> > >\n>>> > > Many thanks for working on this. I really like the improved syntax.\n>>> > >\n>>> > > I was also hoping for some performance benefits,\n>>> > > but my testing shows that\n>>> > >\n>>> > > jsonb_value['existing_key'] = new_value;\n>>> > >\n>>> > > takes just as long time as\n>>> > >\n>>> > > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'],\n>>> new_value);\n>>> > >\n>>> > > which is a bit surprising to me. Shouldn't subscripting be a lot\n>>> faster,\n>>> > > since it could modify the existing data structure in-place? What am I\n>>> > > missing here?\n>>> > >\n>>> >\n>>> > no - it doesn't support in-place modification. Only arrays and records\n>>> > support it.\n>>> >\n>>> >\n>>> > > I came to think of the this new functionality when trying to\n>>> optimize some\n>>> > > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n>>> > > to jsonb_set() for large jsonb objects.\n>>> > >\n>>> >\n>>> > sure - there is big room for optimization. But this patch was big\n>>> enough\n>>> > without its optimization. And it was not clean, if I will be committed\n>>> or\n>>> > not (it waited in commitfest application for 4 years). So I accepted\n>>> > implemented behaviour (without inplace update). 
Now, this patch is in\n>>> core,\n>>> > and anybody can work on others possible optimizations.\n>>>\n>>> Right, jsonb subscripting deals mostly with the syntax part and doesn't\n>>> change internal jsonb behaviour. If I understand the original question\n>>> correctly, \"in-place\" here means updating of e.g. just one particular\n>>> key within a jsonb object, since jsonb_set looks like an overwrite of\n>>> the whole jsonb. If so, then update will still cause the whole jsonb to\n>>> be updated, there is no partial update functionality for the on-disk\n>>> format. Although there is work going on to optimize this in case when\n>>> jsonb is big enough to be put into a toast table (partial toast\n>>> decompression thread, or bytea appendable toast).\n>>>\n>>\n>> Almost all and almost everywhere Postgres's values are immutable. There\n>> is only one exception - runtime plpgsql. \"local variables\" can hold values\n>> of complex values unboxed. Then the repeated update is significantly\n>> cheaper. Normal non repeated updates have the same speed, because the value\n>> should be unboxed and boxed. Outside plpgsql the values are immutable. I\n>> think this is a very hard problem, how to update big toasted values\n>> effectively, and I am not sure if there is a solution. TOAST value is\n>> immutable. It needs to introduce some alternative to TOAST. The benefits\n>> are clear - it can be nice to have fast append arrays for time series. But\n>> this is a very different topic.\n>>\n>\n> I and Nikita are working on OLTP jsonb\n> http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf\n>\n\n+1\n\nPavel\n\n\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>\n> --\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n\nst 14. 4. 2021 v 11:07 odesílatel Oleg Bartunov <obartunov@postgrespro.ru> napsal:On Wed, Apr 14, 2021 at 11:09 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:st 14. 4. 
2021 v 9:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com> napsal:> On Wed, Apr 14, 2021 at 09:20:08AM +0200, Pavel Stehule wrote:\n> st 14. 4. 2021 v 7:39 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> > Hi,\n> >\n> > commit 676887a3 added support for jsonb subscripting.\n> >\n> > Many thanks for working on this. I really like the improved syntax.\n> >\n> > I was also hoping for some performance benefits,\n> > but my testing shows that\n> >\n> > jsonb_value['existing_key'] = new_value;\n> >\n> > takes just as long time as\n> >\n> > jsonb_value := jsonb_set(jsonb_value, ARRAY['existing_key'], new_value);\n> >\n> > which is a bit surprising to me. Shouldn't subscripting be a lot faster,\n> > since it could modify the existing data structure in-place? What am I\n> > missing here?\n> >\n>\n> no - it doesn't support in-place modification. Only arrays and records\n> support it.\n>\n>\n> > I came to think of the this new functionality when trying to optimize some\n> > PL/pgSQL code where the bottle-neck turned out to be lots of calls\n> > to jsonb_set() for large jsonb objects.\n> >\n>\n> sure - there is big room for optimization. But this patch was big enough\n> without its optimization. And it was not clean, if I will be committed or\n> not (it waited in commitfest application for 4 years). So I accepted\n> implemented behaviour (without inplace update). Now, this patch is in core,\n> and anybody can work on others possible optimizations.\n\nRight, jsonb subscripting deals mostly with the syntax part and doesn't\nchange internal jsonb behaviour. If I understand the original question\ncorrectly, \"in-place\" here means updating of e.g. just one particular\nkey within a jsonb object, since jsonb_set looks like an overwrite of\nthe whole jsonb. If so, then update will still cause the whole jsonb to\nbe updated, there is no partial update functionality for the on-disk\nformat. 
Although there is work going on to optimize this in case when\njsonb is big enough to be put into a toast table (partial toast\ndecompression thread, or bytea appendable toast).Almost all and almost everywhere Postgres's values are immutable. There is only one exception - runtime plpgsql. \"local variables\" can hold values of complex values unboxed. Then the repeated update is significantly cheaper. Normal non repeated updates have the same speed, because the value should be unboxed and boxed. Outside plpgsql the values are immutable. I think this is a very hard problem, how to update big toasted values effectively, and I am not sure if there is a solution. TOAST value is immutable. It needs to introduce some alternative to TOAST. The benefits are clear - it can be nice to have fast append arrays for time series. But this is a very different topic.I and Nikita are working on OLTP jsonb http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf+1Pavel RegardsPavel \n-- Postgres Professional: http://www.postgrespro.comThe Russian Postgres Company",
"msg_date": "Wed, 14 Apr 2021 11:12:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "On Wed, Apr 14, 2021, at 11:07, Oleg Bartunov wrote:\n> I and Nikita are working on OLTP jsonb http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf\n> \n\nPage 49/55 in the PDF:\n\"UPDATE test_toast SET jb = jsonb_set(jb, {keyN,0}, ?);\"\n\nWould you get similar improvements if updating jsonb variables in PL/pgSQL?\nIf not, could the infrastructure somehow be reused to improve the PL/pgSQL use-case as well?\n\nI would be happy to help out if there is something I can do, such as testing.\n\n/Joel\nOn Wed, Apr 14, 2021, at 11:07, Oleg Bartunov wrote:I and Nikita are working on OLTP jsonb http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf Page 49/55 in the PDF:\"UPDATE test_toast SET jb = jsonb_set(jb, {keyN,0}, ?);\"Would you get similar improvements if updating jsonb variables in PL/pgSQL?If not, could the infrastructure somehow be reused to improve the PL/pgSQL use-case as well?I would be happy to help out if there is something I can do, such as testing./Joel",
"msg_date": "Wed, 14 Apr 2021 12:53:49 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: jsonb subscripting assignment performance"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 10:57 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > sure - there is big room for optimization. But this patch was big enough\n> > without its optimization. And it was not clean, if I will be committed or\n> > not (it waited in commitfest application for 4 years). So I accepted\n> > implemented behaviour (without inplace update). Now, this patch is in core,\n> > and anybody can work on others possible optimizations.\n>\n> Right, jsonb subscripting deals mostly with the syntax part and doesn't\n> change internal jsonb behaviour. If I understand the original question\n> correctly, \"in-place\" here means updating of e.g. just one particular\n> key within a jsonb object, since jsonb_set looks like an overwrite of\n> the whole jsonb. If so, then update will still cause the whole jsonb to\n> be updated, there is no partial update functionality for the on-disk\n> format. Although there is work going on to optimize this in case when\n> jsonb is big enough to be put into a toast table (partial toast\n> decompression thread, or bytea appendable toast).\n\n+1\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 14 Apr 2021 19:52:59 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb subscripting assignment performance"
}
] |
[
{
"msg_contents": "Hello guys!\nIn Postgres we can create view with view owner privileges only. What’s the reason that there is no option to create view with invoker privileges? Is there any technical or security subtleties related to absence of this feature?\n\n",
"msg_date": "Wed, 14 Apr 2021 10:25:08 +0300",
"msg_from": "Ivan Ivanov <m7onov@gmail.com>",
"msg_from_op": true,
"msg_subject": "View invoker privileges"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 10:25:08AM +0300, Ivan Ivanov wrote:\n> In Postgres we can create view with view owner privileges only. What’s the\n> reason that there is no option to create view with invoker privileges? Is\n> there any technical or security subtleties related to absence of this\n> feature?\n\nThe SQL standard calls for the owner privileges behavior, and nobody has\nimplemented an invoker privileges option. I know of no particular subtlety.\nAn SQL-language function can behave like an invoker-privileges view, but a\nview would allow more optimizer freedom. It would be a good option to have.\n\n\n",
"msg_date": "Fri, 14 May 2021 01:11:31 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: View invoker privileges"
},
{
"msg_contents": "On 5/14/21 4:11 AM, Noah Misch wrote:\n> On Wed, Apr 14, 2021 at 10:25:08AM +0300, Ivan Ivanov wrote:\n>> In Postgres we can create view with view owner privileges only. What’s the\n>> reason that there is no option to create view with invoker privileges? Is\n>> there any technical or security subtleties related to absence of this\n>> feature?\n> \n> The SQL standard calls for the owner privileges behavior, and nobody has\n> implemented an invoker privileges option. I know of no particular subtlety.\n> An SQL-language function can behave like an invoker-privileges view, but a\n> view would allow more optimizer freedom. It would be a good option to have.\n\n+1\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 14 May 2021 09:54:26 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: View invoker privileges"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs fairywren has proved a couple of days ago, it is not really a good\nidea to rely on a file truncation to check for patterns in the logs of\nthe backend:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-04-07%2013%3A29%3A28\n\nVisibly, a logic based on the log file truncation fails on Windows\nbecause of the concurrent access of the backend that outputs its logs\nthere. In PostgresNode.pm, connect_ok() and connect_access() enforce\na rotation of the log file before restarting the server on Windows to\nmake sure that a given step does not find logs generated by a previous\ntest, but that's not the case of issues_sql_like(). Looking at the\nexisting tests using this routine (src/bin/scripts/), I have found on\ntest in 090_reindexdb.pl that could lead to a false positive. The\ntest is marked in the patch attached, just for awareness.\n\nWould there be any objections to change this routine so as we avoid\nthe file truncation on Windows? The patch attached achieves that.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Wed, 14 Apr 2021 17:13:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "\nOn 4/14/21 4:13 AM, Michael Paquier wrote:\n> Hi all,\n>\n> As fairywren has proved a couple of days ago, it is not really a good\n> idea to rely on a file truncation to check for patterns in the logs of\n> the backend:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-04-07%2013%3A29%3A28\n>\n> Visibly, a logic based on the log file truncation fails on Windows\n> because of the concurrent access of the backend that outputs its logs\n> there. In PostgresNode.pm, connect_ok() and connect_access() enforce\n> a rotation of the log file before restarting the server on Windows to\n> make sure that a given step does not find logs generated by a previous\n> test, but that's not the case of issues_sql_like(). Looking at the\n> existing tests using this routine (src/bin/scripts/), I have found on\n> test in 090_reindexdb.pl that could lead to a false positive. The\n> test is marked in the patch attached, just for awareness.\n>\n> Would there be any objections to change this routine so as we avoid\n> the file truncation on Windows? The patch attached achieves that.\n>\n> Any thoughts?\n\n\nThat seems rather heavy-handed. The buildfarm's approach is a bit\ndifferent. Essentially it seeks to the previous position of the log file\nbefore reading contents. 
Here is its equivalent of slurp_file:\n\n\n use Fcntl qw(:seek);\n sub file_lines\n {\n    my $filename = shift;\n    my $filepos  = shift;\n    my $handle;\n    open($handle, '<', $filename) || croak \"opening $filename: $!\";\n    seek($handle, $filepos, SEEK_SET) if $filepos;\n    my @lines = <$handle>;\n    close $handle;\n    return @lines;\n }\n\n\n\nA client wanting what's done in PostgresNode would do something like:\n\n\n my $logpos = -s $logfile;\n do_some_stuff();\n my @lines = file_lines($logfile, $logpos);\n\n\nThis has the benefit of working the same on all platforms, and no\ntruncation, rotation, or restart is required.\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 14 Apr 2021 17:10:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 05:10:41PM -0400, Andrew Dunstan wrote:\n> That seems rather heavy-handed. The buildfarm's approach is a bit\n> different. Essentially it seeks to the previous position of the log file\n> before reading contents. Here is its equivalent of slurp_file:\n> \n> use Fcntl qw(:seek);\n> sub file_lines\n> {\n> my $filename = shift;\n> my $filepos = shift;\n> my $handle;\n> open($handle, '<', $filename) || croak \"opening $filename: $!\";\n> seek($handle, $filepos, SEEK_SET) if $filepos;\n> my @lines = <$handle>;\n> close $handle;\n> return @lines;\n> }\n\nThat's a bit surprising to see that you can safely open a file handle\nwith perl like that without using Win32API::File, and I would have\nassumed that this would have conflicted with the backend redirecting\nits output to stderr the same way as a truncation on Windows.\n\n> A client wanting what's done in PostgresNode would do something like:\n> \n> my $logpos = -s $logfile;\n> do_some_stuff();\n> my @lines = file_lines($logfile, $logpos);\n> \n> This has the benefit of working the same on all platforms, and no\n> truncation, rotation, or restart is required.\n\nJacob has suggested something like that a couple of days ago, but all\nthis code was not centralized yet in a single place.\n\nFor this code, the cleanest approach would be to extend slurp_file()\nwith an extra argument to seek the file before fetching its contents\nbased on a location given by the caller? Looking at the docs of\nWin32API::File, we'd need to use SetFilePointer() instead of seek().\n--\nMichael",
"msg_date": "Thu, 15 Apr 2021 09:10:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "\nOn 4/14/21 8:10 PM, Michael Paquier wrote:\n> On Wed, Apr 14, 2021 at 05:10:41PM -0400, Andrew Dunstan wrote:\n>> That seems rather heavy-handed. The buildfarm's approach is a bit\n>> different. Essentially it seeks to the previous position of the log file\n>> before reading contents. Here is its equivalent of slurp_file:\n>>\n>> use Fcntl qw(:seek);\n>> sub file_lines\n>> {\n>> ��� my $filename = shift;\n>> ��� my $filepos� = shift;\n>> ��� my $handle;\n>> ��� open($handle, '<', $filename) || croak \"opening $filename: $!\";\n>> ��� seek($handle, $filepos, SEEK_SET) if $filepos;\n>> ��� my @lines = <$handle>;\n>> ��� close $handle;\n>> ��� return @lines;\n>> }\n> That's a bit surprising to see that you can safely open a file handle\n> with perl like that without using Win32API::File, and I would have\n> assumed that this would have conflicted with the backend redirecting\n> its output to stderr the same way as a truncation on Windows.\n>\n>> A client wanting what's done in PostgresNode would do something like:\n>>\n>> my $logpos� = -s $logfile;\n>> do_some_stuff();\n>> my @lines = file_lines($logfile, $logpos);\n>>\n>> This has the benefit of working the same on all platforms, and no\n>> truncation, rotation, or restart is required.\n> Jacob has suggested something like that a couple of days ago, but all\n> this code was not centralized yet in a single place.\n>\n> For this code, the cleanest approach would be to extend slurp_file()\n> with an extra argument to seek the file before fetching its contents\n> based on a location given by the caller? Looking at the docs of\n> Win32API::File, we'd need to use SetFilePointer() instead of seek().\n\n\n\nWell, let me try it on fairywren tomorrow. 
Since we manage this on the\nbuildfarm without any use at all of Win32API::File it might not be\nnecessary in TAP code either, particularly if we're not truncating the file.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 14 Apr 2021 21:26:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
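The seek-to-a-saved-position technique in the quoted file_lines helper is not Perl-specific; the same pattern works with plain C stdio, which is what makes it attractive as a portable alternative to truncating the log. A minimal sketch (illustrative only — the helper name and fixed-size buffer are assumptions, not PostgreSQL or buildfarm code):

```c
#include <stdio.h>
#include <string.h>

/* Read everything appended to `path` since byte `offset` into `buf`.
 * Illustrative helper: returns the number of bytes read, or -1 on error. */
static long read_since(const char *path, long offset, char *buf, size_t bufsize)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    /* Symbolic SEEK_SET rather than a magic 0, as discussed downthread. */
    if (fseek(f, offset, SEEK_SET) != 0)
    {
        fclose(f);
        return -1;
    }
    size_t n = fread(buf, 1, bufsize - 1, f);
    buf[n] = '\0';
    fclose(f);
    return (long) n;
}
```

A caller mirrors the quoted Perl: record the log file's size before the action under test, run it, then read only what was appended — no truncation, rotation, or restart required.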
{
"msg_contents": "On Wed, Apr 14, 2021 at 09:26:19PM -0400, Andrew Dunstan wrote:\n> Well, let me try it on fairywren tomorrow. Since we manage this on the\n> buildfarm without any use at all of Win32API::File it might not be\n> necessary in TAP code either, particularly if we're not truncating the file.\n\nThanks. If that proves to not be necessary, +1 to remove this code.\nI have been playing with this stuff, and the attached patch seems to\nwork properly on Windows. On top of that, I have also tested the\nnon-Win32 path on an MSVC box to see that it was working, but my\nenvironment is not really noisy usually with such compatibility\nissues.\n--\nMichael",
"msg_date": "Thu, 15 Apr 2021 13:57:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "\nOn 4/15/21 12:57 AM, Michael Paquier wrote:\n> On Wed, Apr 14, 2021 at 09:26:19PM -0400, Andrew Dunstan wrote:\n>> Well, let me try it on fairywren tomorrow. Since we manage this on the\n>> buildfarm without any use at all of Win32API::File it might not be\n>> necessary in TAP code either, particularly if we're not truncating the file.\n> Thanks. If that proves to not be necessary, +1 to remove this code.\n> I have been playing with this stuff, and the attached patch seems to\n> work properly on Windows. On top of that, I have also tested the\n> non-Win32 path on an MSVC box to see that it was working, but my\n> environment is not really noisy usually with such compatibility\n> issues.\n\n\nReviewing the history, I don't want to undo 114541d58e5. So I'm trying\nyour patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 15 Apr 2021 07:16:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 07:16:05AM -0400, Andrew Dunstan wrote:\n> Reviewing the history, I don't want to undo 114541d58e5.\n\nMaybe we could remove it, but that may be better as a separate\ndiscussion if it is proving to not improve the situation, and I don't \nreally want to take any risks in destabilizing the buildfarm these\ndays. \n\n> So I'm trying your patch.\n\nThanks! If you need any help, please feel free to ping me.\n--\nMichael",
"msg_date": "Fri, 16 Apr 2021 09:36:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "\nOn 4/15/21 8:36 PM, Michael Paquier wrote:\n> On Thu, Apr 15, 2021 at 07:16:05AM -0400, Andrew Dunstan wrote:\n>> Reviewing the history, I don't want to undo 114541d58e5.\n> Maybe we could remove it, but that may be better as a separate\n> discussion if it is proving to not improve the situation, and I don't \n> really want to take any risks in destabilizing the buildfarm these\n> days. \n>\n>> So I'm trying your patch.\n> Thanks! If you need any help, please feel free to ping me.\n\n\n\nIt's worked on fairywren, I will double check on drongo and if all is\nwell will commit.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 15 Apr 2021 21:12:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 09:12:52PM -0400, Andrew Dunstan wrote:\n> It's worked on fairywren, I will double check on drongo and if all is\n> well will commit.\n\nThanks Andrew. For the archive's sake, this has been committed as of\n3c5b068.\n\nWhile reading the commit, I have noticed that you used SEEK_SET\ninstead of 0 as I did in my own patch. That makes the code easier to\nunderstand. Could it be better to apply the same style to all the\nperl scripts doing some seek() calls? Please see the attached.\n--\nMichael",
"msg_date": "Sat, 17 Apr 2021 22:04:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "\nOn 4/17/21 9:04 AM, Michael Paquier wrote:\n> On Thu, Apr 15, 2021 at 09:12:52PM -0400, Andrew Dunstan wrote:\n>> It's worked on fairywren, I will double check on drongo and if all is\n>> well will commit.\n> Thanks Andrew. For the archive's sake, this has been committed as of\n> 3c5b068.\n>\n> While reading the commit, I have noticed that you used SEEK_SET\n> instead of 0 as I did in my own patch. That makes the code easier to\n> understand. Could it be better to apply the same style to all the\n> perl scripts doing some seek() calls? Please see the attached.\n\n\n\nYes please, much better to use a symbolic name rather than a magic\nnumber. I wouldn't bother backpatching it though.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 17 Apr 2021 09:55:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
},
{
"msg_contents": "On Sat, Apr 17, 2021 at 09:55:47AM -0400, Andrew Dunstan wrote:\n> Yes please, much better to use a symbolic name rather than a magic\n> number. I wouldn't bother backpatching it though.\n\nOkay, done this way then.\n--\nMichael",
"msg_date": "Mon, 19 Apr 2021 10:24:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: File truncation within PostgresNode::issues_sql_like() wrong on\n Windows"
}
] |
[
{
"msg_contents": "Hi,\n\nIn my dev system(Ubuntu) when the postmaster is killed with SIGKILL,\nSIGPWR is being sent to its child processes (backends/any other bg\nprocess). If a child process is waiting with pg_usleep, it looks like\nit is not detecting the postmaster's death and it doesn't exit\nimmediately but stays forever until it gets killed explicitly. For\nthis experiment, I did 2 things to simulate the scenario i.e. a\nbackend waiting in pg_usleep and killing the postmaster. 1) I wrote a\nwait function that uses pg_usleep and called it in a backend. This\nbackend doesn't exit on postmaster death. 2) I set PostAuthDelay to\n100 seconds and started the postmaster. Then, the \"auotvacuum\nlauncher\" process still stays around (as it has pg_usleep in its main\nfunction), even after postmaster death.\n\nQuestions:\n1) Is it really harmful to use pg_usleep in a postmaster child process\nas it doesn't let the child process detect postmaster death?\n\n2) Can pg_usleep() always detect signals? I see the caution in the\npg_usleep function definition in pgsleep.c, saying the signal handling\nis platform dependent. We have code blocks like below in the code. Do\nwe actually process interrupts before going to sleep with pg_usleep()?\nwhile/for loop\n{\n......\n......\n CHECK_FOR_INTERRUPTS();\n pg_usleep();\n}\nand\nif (PostAuthDelay)\n pg_usleep();\n\n3) Is it intentional to use pg_usleep in some places in the code? If\nyes, what are they? At least, I see one place where it's intentional\nin the wait_pid function which is used while running the regression\ntests.\n\n4) Are there any places where we need to replace pg_usleep with\nWaitLatch/equivalent of pg_sleep to detect the postmaster death\nproperly?\n\nCorrect me if I'm missing something or if my observation/understanding\nof the pg_usleep() is wrong.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 19:36:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "Hi Bharath,\n\nOn Thu, Apr 15, 2021 at 2:06 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> 1) Is it really harmful to use pg_usleep in a postmaster child process\n> as it doesn't let the child process detect postmaster death?\n\nYeah, that's a bad idea. Any long-term waiting (including short waits\nin a loop) should ideally be done with the latch infrastructure.\n\nOne interesting and unusual case is recovery: it can run for a very\nlong time without reaching a waiting primitive of any kind (other than\nLWLock, which doesn't count), because it can be busy applying records\nfor hours at a time. In that case, we take special measures and\nexplicitly check if the postmaster is dead in the redo loop. In\ntheory, you could do the same in any loop containing pg_usleep() (we\nused to have several loops doing that, especially around replication\ncode), but it'd be better to use the existing wait-event-multiplexing\ntechnology we have, and keep improving that.\n\nSome people have argued that long running queries should *also* react\nfaster when the PM exits, a bit like recovery ... which leads to the\nnext point...\n\n> 2) Can pg_usleep() always detect signals? I see the caution in the\n> pg_usleep function definition in pgsleep.c, saying the signal handling\n> is platform dependent. We have code blocks like below in the code. Do\n> we actually process interrupts before going to sleep with pg_usleep()?\n> while/for loop\n> {\n> ......\n> ......\n> CHECK_FOR_INTERRUPTS();\n> pg_usleep();\n> }\n> and\n> if (PostAuthDelay)\n> pg_usleep();\n\nCHECK_FOR_INTERRUPTS() has nothing to do with postmaster death\ndetection, currently, so that'd be for dealing with interrupts, not\nfor that. Also, there would be a race: a signal on its own isn't\nenough on systems where we have them and where select() is guaranteed\nto wake up, because the signal might arrive between CFI() and\npg_usleep(100 years). 
latch.c knows how to avoid such problems.\n\nThere may be an argument that CFI() *should* be a potential\npostmaster-death-exit point, instead of having WaitLatch() (or its\ncaller) handle it directly, but it's complicated. At the time the\npostmaster pipe system was invented we didn't have a signal for this\nso it wasn't even a candidate for treatment as an \"interrupt\". On\nsystems that have postmaster death signals today (Linux + FreeBSD, but\nI suspect we can extend this to every Unix we support, see CF #3066,\nand a solution for Windows has been mentioned too), clearly the signal\nhandler could set a new interrupt flag PostmasterLost +\nInterruptPending, and then CHECK_FOR_INTERRUPTS() could see it and\nexit. The argument against this is that exiting isn't always the\nright thing! In a couple of places, we do something special, such as\nprinting a special error message (examples: sync rep and the main FEBE\nclient read). Look for WL_POSTMASTER_DEATH (as opposed to\nWL_EXIT_ON_PM_DEATH). So I guess you'd need to reverse those\ndecisions and standardise on \"exit immediately, no message\", or\ninvent a way to suppress that behaviour in code regions.\n\n> 3) Is it intentional to use pg_usleep in some places in the code? If\n> yes, what are they? At least, I see one place where it's intentional\n> in the wait_pid function which is used while running the regression\n> tests.\n\nThere are plenty of places that do a short sleep for various reasons,\nmore like a deliberate stall or backoff or auth thing, and it's\nprobably OK if they're shortish and not really a condition polling\nloop with an obvious latch/CV-based replacement. Note also that\nLWLock waits are similar.\n\n> 4) Are there any places where we need to replace pg_usleep with\n> WaitLatch/equivalent of pg_usleep to detect the postmaster death\n> properly?\n\nWe definitely have replaced a lot of sleeps with latch.c primitives\nover the past few years, since we got WL_EXIT_ON_PM_DEATH and\ncondition variables. There may be many more to improve... You\nmentioned autovacuum... yeah, Stephen fixed one of these with commit\n4753ef37, but yeah it's not great to have those others in there...\n\n\n",
"msg_date": "Thu, 15 Apr 2021 11:58:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
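The postmaster-death detection Thomas refers to rests on a simple POSIX primitive: every child inherits the read end of a pipe whose write end only the postmaster keeps open, so the read end reaches EOF exactly when the postmaster exits, however it died. A self-contained sketch of that mechanism (plain POSIX, not the actual latch.c code; the function name is illustrative):

```c
#include <poll.h>
#include <unistd.h>

/* Return 1 if the process holding the pipe's write end is gone,
 * 0 if timeout_ms elapsed first -- an illustrative analogue of the
 * postmaster death pipe a child would monitor. */
static int parent_is_dead(int death_fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = death_fd, .events = POLLIN };

    if (poll(&pfd, 1, timeout_ms) <= 0)
        return 0;                       /* timeout (or EINTR): still alive */

    /* Readable or POLLHUP: nothing is ever written to the real death
     * pipe, so a zero-byte read means all write ends are closed. */
    char c;
    return read(death_fd, &c, 1) == 0;
}
```

Closing the write end stands in for the postmaster exiting; roughly speaking, the real server multiplexes this fd together with latches and sockets rather than polling it in isolation.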
{
"msg_contents": "On Thu, Apr 15, 2021 at 5:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\nThanks a lot for the detailed explanation.\n\n> On Thu, Apr 15, 2021 at 2:06 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > 1) Is it really harmful to use pg_usleep in a postmaster child process\n> > as it doesn't let the child process detect postmaster death?\n>\n> Yeah, that's a bad idea. Any long-term waiting (including short waits\n> in a loop) should ideally be done with the latch infrastructure.\n\nAgree. Along with short waits in a loop, I think we also should\nreplace pg_usleep with WaitLatch that has a user configurable\nparameter like below:\n\npg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);\npg_usleep(PostAuthDelay * 1000000L);\npg_usleep(CommitDelay);\n\n> 4) Are there any places where we need to replace pg_usleep with\n> > WaitLatch/equivalent of pg_sleep to detect the postmaster death\n> > properly?\n>\n> We definitely have replaced a lot of sleeps with latch.c primitives\n> over the past few years, since we got WL_EXIT_ON_PM_DEATH and\n> condition variables. There may be many more to improve... You\n> mentioned autovacuum... yeah, Stephen fixed one of these with commit\n> 4753ef37, but yeah it's not great to have those others in there...\n\nI have not looked at the commit 4753ef37 previously, but it\nessentially addresses the problem with pg_usleep for vacuum delay. I'm\nthinking we can also replace pg_usleep in below places based on the\nfact that pg_usleep should be avoided in 1) short waits in a loop 2)\nwhen wait time is dependent on user configurable parameters. 
And using\nWaitLatch may require us to add wait event types to WaitEventTimeout\nenum, but that's okay.\n\n1) pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L); in lazy_truncate_heap\n2) pg_usleep(CommitDelay); in XLogFlush\n3) pg_usleep(10000L); in CreateCheckPoint\n4) pg_usleep(1000000L); in do_pg_stop_backup\n5) pg_usleep(1000L); in read_local_xlog_page\n6) pg_usleep(PostAuthDelay * 1000000L); in AutoVacLauncherMain,\nAutoVacWorkerMain, StartBackgroundWorker, InitPostgres\n7) pg_usleep(100000L); in RequestCheckpoint\n8) pg_usleep(1000000L); in pgarch_ArchiverCopyLoop\n9) pg_usleep(PGSTAT_RETRY_DELAY * 1000L); in backend_read_statsfile\n10) pg_usleep(PreAuthDelay * 1000000L); in BackendInitialize\n11) pg_usleep(10000L); in WalSndWaitStopping\n12) pg_usleep(standbyWait_us); in WaitExceedsMaxStandbyDelay\n13) pg_usleep(10000L); in RegisterSyncRequest\n\nI'm sure we won't be changing in all of the above places. It will be\ngood to review and correct the above list.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Apr 2021 11:48:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 11:48 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > We definitely have replaced a lot of sleeps with latch.c primitives\n> > over the past few years, since we got WL_EXIT_ON_PM_DEATH and\n> > condition variables. There may be many more to improve... You\n> > mentioned autovacuum... yeah, Stephen fixed one of these with commit\n> > 4753ef37, but yeah it's not great to have those others in there...\n>\n> I have not looked at the commit 4753ef37 previously, but it\n> essentially addresses the problem with pg_usleep for vacuum delay. I'm\n> thinking we can also replace pg_usleep in below places based on the\n> fact that pg_usleep should be avoided in 1) short waits in a loop 2)\n> when wait time is dependent on user configurable parameters. And using\n> WaitLatch may require us to add wait event types to WaitEventTimeout\n> enum, but that's okay.\n\nI'm attaching 3 patches that replace pg_usleep with WaitLatch: 0001 in\nlazy_truncate_heap, 0002 in do_pg_stop_backup and 0003 for Pre and\nPost Auth Delay. Regression tests pass with these patches. Please\nreview them.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 20 Apr 2021 07:36:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 7:36 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 11:48 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > We definitely have replaced a lot of sleeps with latch.c primitives\n> > > over the past few years, since we got WL_EXIT_ON_PM_DEATH and\n> > > condition variables. There may be many more to improve... You\n> > > mentioned autovacuum... yeah, Stephen fixed one of these with commit\n> > > 4753ef37, but yeah it's not great to have those others in there...\n> >\n> > I have not looked at the commit 4753ef37 previously, but it\n> > essentially addresses the problem with pg_usleep for vacuum delay. I'm\n> > thinking we can also replace pg_usleep in below places based on the\n> > fact that pg_usleep should be avoided in 1) short waits in a loop 2)\n> > when wait time is dependent on user configurable parameters. And using\n> > WaitLatch may require us to add wait event types to WaitEventTimeout\n> > enum, but that's okay.\n>\n> I'm attaching 3 patches that replace pg_usleep with WaitLatch: 0001 in\n> lazy_truncate_heap, 0002 in do_pg_stop_backup and 0003 for Pre and\n> Post Auth Delay. Regression tests pass with these patches. Please\n> review them.\n\nI made a CF entry [1] so that it may get a chance for review.\n\n[1] https://commitfest.postgresql.org/33/3085/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Apr 2021 08:56:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThis patch looks fine. Tested on MacOS Catalina; master 09ae3299\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 14 May 2021 12:15:14 +0000",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 07:36:39AM +0530, Bharath Rupireddy wrote:\n> I'm attaching 3 patches that replace pg_usleep with WaitLatch: 0001 in\n> lazy_truncate_heap, 0002 in do_pg_stop_backup and 0003 for Pre and\n> Post Auth Delay. Regression tests pass with these patches. Please\n> review them.\n\n+ if (backup_started_in_recovery)\n+ latch = &XLogCtl->recoveryWakeupLatch;\n+ else\n+ latch = MyLatch;\nrecoveryWakeupLatch is used by the startup process, but it has nothing\nto do with do_pg_stop_backup(). Why are you doing that?\n\nI can get behind the change for the truncation lock when finishing a\nVACUUM as that helps with monitoring. Now, I am not sure I get the\npoint of changing anything for {post,pre}_auth_delay that are\ndeveloper options. Please note that at this stage we don't know the\nbackend activity in pg_stat_activity, so the use of wait events is not\nreally interesting. On top of that, not reacting on signals can be\ninteresting to keep as a behavior for developers?\n--\nMichael",
"msg_date": "Thu, 24 Jun 2021 15:34:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Thu, Jun 24, 2021 at 12:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Apr 20, 2021 at 07:36:39AM +0530, Bharath Rupireddy wrote:\n> > I'm attaching 3 patches that replace pg_usleep with WaitLatch: 0001 in\n> > lazy_truncate_heap, 0002 in do_pg_stop_backup and 0003 for Pre and\n> > Post Auth Delay. Regression tests pass with these patches. Please\n> > review them.\n>\n> + if (backup_started_in_recovery)\n> + latch = &XLogCtl->recoveryWakeupLatch;\n> + else\n> + latch = MyLatch;\n> recoveryWakeupLatch is used by the startup process, but it has nothing\n> to do with do_pg_stop_backup(). Why are you doing that?\n\nThe recoveryWakeupLatch and procLatch/MyLatch are being used for WAL\nreplay and recovery conflict, respectively. Actually, I was earlier\nusing procLatch/MyLatch, but came across the commit 00f690a23 which\nsays that the two latches are reserved for specific purposes. I'm not\nquite sure which one to use when do_pg_stop_backup is called by the\nstartup process. Any thoughts?\n\n> I can get behind the change for the truncation lock when finishing a\n> VACUUM as that helps with monitoring.\n\nThanks. Please let me know if there are any comments on\nv1-0001-Use-a-WaitLatch-for-lock-waiting-in-lazy_truncate.patch.\n\n> Now, I am not sure I get the\n> point of changing anything for {post,pre}_auth_delay that are\n> developer options. Please note that at this stage we don't know the\n> backend activity in pg_stat_activity, so the use of wait events is not\n> really interesting.\n\nHm. I was earlier thinking from the perspective that the processes\nshould be able to detect the postmaster death if the\n{post,pre}_auth_delay are set to higher values. 
Now, I agree that the\nauth delays are happening at the initial stages of the processes and\nif the developers(not common users) set the higher values for the\nGUCs, let them deal with the problem of the processes not detecting\nthe postmaster death.\n\n> On top of that, not reacting on signals can be\n> interesting to keep as a behavior for developers?\n\nYeah, it can be useful at times as it enables debugging even when the\npostmaster dies.\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 28 Jun 2021 20:21:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Thu, Jun 24, 2021 at 12:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On top of that, not reacting on signals can be\n>> interesting to keep as a behavior for developers?\n\n> Yeah, it can be useful at times as it enables debugging even when the\n> postmaster dies.\n\nDunno ... I cannot recall ever having had that as a debugging requirement\nin a couple of decades worth of PG bug-chasing. If the postmaster is\ndying, you generally want to deal with that before bothering with child\nprocesses. Moreover, child processes that don't go awy when the\npostmaster does are a very nasty problem, because they could screw up\nsubsequent debugging work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Jun 2021 11:01:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 11:01:57AM -0400, Tom Lane wrote:\n> Dunno ... I cannot recall ever having had that as a debugging requirement\n> in a couple of decades worth of PG bug-chasing. If the postmaster is\n> dying, you generally want to deal with that before bothering with child\n> processes. Moreover, child processes that don't go awy when the\n> postmaster does are a very nasty problem, because they could screw up\n> subsequent debugging work.\n\nAt the same time, nobody has really complained about this being an\nissue for developer options. I would tend to wait for more opinions\nbefore doing anything with the auth_delay GUCs.\n--\nMichael",
"msg_date": "Fri, 2 Jul 2021 10:27:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 08:21:06PM +0530, Bharath Rupireddy wrote:\n> The recoveryWakeupLatch and procLatch/MyLatch are being used for WAL\n> replay and recovery conflict, respectively. Actually, I was earlier\n> using procLatch/MyLatch, but came across the commit 00f690a23 which\n> says that the two latches are reserved for specific purposes. I'm not\n> quite sure which one to use when do_pg_stop_backup is called by the\n> startup process. Any thoughts?\n\nCould you explain why you think dp_pg_stop_backup() can be called by\nthe startup process? AFAIK, this code path applies to two categories\nof sessions:\n- backend sessions, with the SQL functions calling this routine.\n- WAL senders, aka anything that connects with replication=1 able to\nuse the BASE_BACKUP with the replication protocol.\n\n> Thanks. Please let me know if there are any comments on\n> v1-0001-Use-a-WaitLatch-for-lock-waiting-in-lazy_truncate.patch.\n\nApplied this one as that's clearly a win. The event name has been\nrenamed to VacuumTruncate.\n--\nMichael",
"msg_date": "Fri, 2 Jul 2021 13:22:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Fri, Jul 2, 2021 at 9:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jun 28, 2021 at 08:21:06PM +0530, Bharath Rupireddy wrote:\n> > The recoveryWakeupLatch and procLatch/MyLatch are being used for WAL\n> > replay and recovery conflict, respectively. Actually, I was earlier\n> > using procLatch/MyLatch, but came across the commit 00f690a23 which\n> > says that the two latches are reserved for specific purposes. I'm not\n> > quite sure which one to use when do_pg_stop_backup is called by the\n> > startup process. Any thoughts?\n>\n> Could you explain why you think dp_pg_stop_backup() can be called by\n> the startup process? AFAIK, this code path applies to two categories\n> of sessions:\n> - backend sessions, with the SQL functions calling this routine.\n> - WAL senders, aka anything that connects with replication=1 able to\n> use the BASE_BACKUP with the replication protocol.\n\nMy bad. I was talking about the cases when do_pg_stop_backup is called\nwhile the server is in recovery mode i.e. backup_started_in_recovery =\nRecoveryInProgress(); evaluates to true. I'm not sure in these cases\nwhether we should replace pg_usleep with WaitLatch. If yes, whether we\nshould use procLatch/MyLatch or recoveryWakeupLatch as they are\ncurrently serving different purposes.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 2 Jul 2021 12:03:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Fri, Jul 02, 2021 at 12:03:07PM +0530, Bharath Rupireddy wrote:\n> My bad. I was talking about the cases when do_pg_stop_backup is called\n> while the server is in recovery mode i.e. backup_started_in_recovery =\n> RecoveryInProgress(); evaluates to true. I'm not sure in these cases\n> whether we should replace pg_usleep with WaitLatch. If yes, whether we\n> should use procLatch/MyLatch or recoveryWakeupLatch as they are\n> currently serving different purposes.\n\nIt seems to me that you should re-read the description of\nrecoveryWakeupLatch at the top of xlog.c and check for which purpose\nit exists, which is, in this case, to wake up the startup process to\naccelerate WAL replay. So do_pg_stop_backup() has no business with\nit.\n\nSwitching pg_stop_backup() to use a latch rather than pg_usleep() has\nbenefits:\n- It simplifies the wait event handling.\n- The process waiting for the last WAL segment to be archived will be\nmore responsive on signals like SIGHUP and on postmaster death.\n\nThese don't sound bad to me to apply here, so 0002 could be simplified\nas attached.\n--\nMichael",
"msg_date": "Mon, 5 Jul 2021 11:03:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
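The behavioural difference Michael lists — pg_usleep() sleeps blindly, while a WaitLatch() call with WL_TIMEOUT | WL_EXIT_ON_PM_DEATH sleeps at most the timeout yet wakes instantly when the postmaster is gone — can be reduced to a single poll() over the death pipe. A hedged, self-contained analogue (plain POSIX; the names and return codes are illustrative, not the latch.c API):

```c
#include <poll.h>
#include <unistd.h>

enum { WAKE_TIMEOUT, WAKE_DEATH };

/* Sleep up to timeout_ms, but return early if death_fd reports the
 * other side is gone -- roughly what WaitLatch(WL_TIMEOUT |
 * WL_EXIT_ON_PM_DEATH) buys over a bare pg_usleep(). */
static int sleep_or_death(int death_fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = death_fd, .events = POLLIN };
    int rc = poll(&pfd, 1, timeout_ms);

    /* Nothing is ever written to the real death pipe, so any readable
     * or hangup state means the write-end holder has exited. */
    if (rc > 0 && (pfd.revents & (POLLIN | POLLHUP | POLLERR)))
        return WAKE_DEATH;
    return WAKE_TIMEOUT;
}
```

A production version would also loop on EINTR and recheck the deadline; the point here is only that the bounded sleep and the death check happen in one system call, rather than the process being unreachable for the whole pg_usleep().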
{
"msg_contents": "At Fri, 2 Jul 2021 10:27:21 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Jun 28, 2021 at 11:01:57AM -0400, Tom Lane wrote:\n> > Dunno ... I cannot recall ever having had that as a debugging requirement\n> > in a couple of decades worth of PG bug-chasing. If the postmaster is\n> > dying, you generally want to deal with that before bothering with child\n> > processes. Moreover, child processes that don't go awy when the\n> > postmaster does are a very nasty problem, because they could screw up\n> > subsequent debugging work.\n> \n> At the same time, nobody has really complained about this being an\n> issue for developer options. I would tend to wait for more opinions\n> before doing anything with the auth_delay GUCs.\n\nI'm not sure the current behavior is especially useful for debugging,\nhowever, I don't think it is especially useful that children\nimmediately respond to postmaster's death while the debug-delays,\nbecause anyway children don't respond while debugging (until the\ncontrol (or code-pointer) reaches to the point of checking\npostmaster's death), and the delays must be very short even if someone\nabuses it on production systems. On the other hand, there could be a\ndiscussion as a convention that any user-definable sleep requires to\nrespond to signals, maybe as Thomas mentioned.\n\nSo, I don't object either way we will go. But if we don't change the\nbehavior we instead would need a comment that explains the reason for\nthe pg_usleep.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Jul 2021 14:52:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Fri, Jul 02, 2021 at 12:03:07PM +0530, Bharath Rupireddy wrote:\n> > My bad. I was talking about the cases when do_pg_stop_backup is called\n> > while the server is in recovery mode i.e. backup_started_in_recovery =\n> > RecoveryInProgress(); evaluates to true. I'm not sure in these cases\n> > whether we should replace pg_usleep with WaitLatch. If yes, whether we\n> > should use procLatch/MyLatch or recoveryWakeupLatch as they are\n> > currently serving different purposes.\n> \n> It seems to me that you should re-read the description of\n> recoveryWakeupLatch at the top of xlog.c and check for which purpose\n> it exists, which is, in this case, to wake up the startup process to\n> accelerate WAL replay. So do_pg_stop_backup() has no business with\n> it.\n> \n> Switching pg_stop_backup() to use a latch rather than pg_usleep() has\n> benefits:\n> - It simplifies the wait event handling.\n> - The process waiting for the last WAL segment to be archived will be\n> more responsive on signals like SIGHUP and on postmaster death.\n\nYes, agreed.\n\n> These don't sound bad to me to apply here, so 0002 could be simplified\n> as attached.\n\nTook a quick look and the patch looks good to me.\n\nIn general, I agree with Tom's up-thread comment about children hanging\naround after postmaster death making things more difficult for debugging\nand just in general, so I'm in favor of trying to eliminate as many\ncases where that's happening as we reasonably can without impacting\nperformance by checking too often.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Jul 2021 11:55:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 7:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 02, 2021 at 12:03:07PM +0530, Bharath Rupireddy wrote:\n> > My bad. I was talking about the cases when do_pg_stop_backup is called\n> > while the server is in recovery mode i.e. backup_started_in_recovery =\n> > RecoveryInProgress(); evaluates to true. I'm not sure in these cases\n> > whether we should replace pg_usleep with WaitLatch. If yes, whether we\n> > should use procLatch/MyLatch or recoveryWakeupLatch as they are\n> > currently serving different purposes.\n>\n> It seems to me that you should re-read the description of\n> recoveryWakeupLatch at the top of xlog.c and check for which purpose\n> it exists, which is, in this case, to wake up the startup process to\n> accelerate WAL replay. So do_pg_stop_backup() has no business with\n> it.\n\nHm. The shared recoveryWakeupLatch is being owned by the startup\nprocess to wait and other backends/processes are using it to wake up\nthe startup process.\n\n> Switching pg_stop_backup() to use a latch rather than pg_usleep() has\n> benefits:\n> - It simplifies the wait event handling.\n> - The process waiting for the last WAL segment to be archived will be\n> more responsive on signals like SIGHUP and on postmaster death.\n>\n> These don't sound bad to me to apply here, so 0002 could be simplified\n> as attached.\n\nThe attached stop-backup-latch-v2.patch looks good to me.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 5 Jul 2021 21:34:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 9:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> In general, I agree with Tom's up-thread comment about children hanging\n> around after postmaster death making things more difficult for debugging\n> and just in general, so I'm in favor of trying to eliminate as many\n> cases where that's happening as we reasonably can without impacting\n> performance by checking too often.\n\nI agree. I'm attaching the patch that replaces pg_usleep with\nWaitLatch for {pre, post}_auth_delay. I'm also attaching Michael's\nlatest patch stop-backup-latch-v2.patch, just for the sake of cfbot.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 5 Jul 2021 21:42:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Mon, Jul 05, 2021 at 09:42:29PM +0530, Bharath Rupireddy wrote:\n> I agree. I'm attaching the patch that replaces pg_usleep with\n> WaitLatch for {pre, post}_auth_delay. I'm also attaching Michael's\n> latest patch stop-backup-latch-v2.patch, just for the sake of cfbot.\n\nI don't object to the argument that switching to a latch for this code\npath could be good for responsiveness, but switching it is less\nattractive than the others as wait events are not available in\npg_stat_activity at authentication startup. That's the case of normal\nbackends and WAL senders, not the case of autovacuum workers using\npost_auth_delay if I read the code correctly.\n\nAnyway, it is worth noting that the patch as proposed breaks\npost_auth_delay. MyLatch is set when reaching WaitLatch() for\npost_auth_delay after loading the options, so the use of WL_LATCH_SET\nis not right. I think that this comes from SwitchToSharedLatch() in\nInitProcess(). And it does not seem quite right to me to just blindly\nreset the latch before doing the wait in this code path. Perhaps we\ncould just use (WL_TIMEOUT | WL_EXIT_ON_PM_DEATH) to do the job.\n\nThe one for pg_stop_backup() has been applied, no objections to that.\n--\nMichael",
"msg_date": "Tue, 6 Jul 2021 09:45:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 6:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Jul 05, 2021 at 09:42:29PM +0530, Bharath Rupireddy wrote:\n> > I agree. I'm attaching the patch that replaces pg_usleep with\n> > WaitLatch for {pre, post}_auth_delay. I'm also attaching Michael's\n> > latest patch stop-backup-latch-v2.patch, just for the sake of cfbot.\n>\n> I don't object to the argument that switching to a latch for this code\n> path could be good for responsiveness, but switching it is less\n> attractive than the others as wait events are not available in\n> pg_stat_activity at authentication startup. That's the case of normal\n> backends and WAL senders, not the case of autovacuum workers using\n> post_auth_delay if I read the code correctly.\n\nWe may not see anything in pg_stat_activity for {post,\npre}_auth_delay, but the processes can detect the postmaster death\nwith WaitLatch. I think we should focus on that.\n\n> Anyway, it is worth noting that the patch as proposed breaks\n> post_auth_delay. MyLatch is set when reaching WaitLatch() for\n> post_auth_delay after loading the options, so the use of WL_LATCH_SET\n> is not right. I think that this comes from SwitchToSharedLatch() in\n> InitProcess(). And it does not seem quite right to me to just blindly\n> reset the latch before doing the wait in this code path. Perhaps we\n> could just use (WL_TIMEOUT | WL_EXIT_ON_PM_DEATH) to do the job.\n\nI'm sorry to say that I didn't get what was said above. We reset the\nlatch after we come out of WaitLatch but not before going to wait. And\nthe reason to have WL_LATCH_SET is to exit the wait loop if MyLatch\nis set for that process because of other SetLatch events. Am I missing\nsomething here?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 12:42:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Jul 06, 2021 at 12:42:21PM +0530, Bharath Rupireddy wrote:\n> I'm sorry to say that I didn't get what was said above. We reset the\n> latch after we come out of WaitLatch but not before going to wait. And\n> the reason to have WL_LATCH_SET, is to exit the wait loop if MyLatch\n> is set for that process because of other SetLatch events. Am I missing\n> something here?\n\nDid you test the patch with post_auth_delay and a backend connection,\nmaking sure that the delay gets correctly applied? I did, and that\nwas not working here.\n--\nMichael",
"msg_date": "Tue, 6 Jul 2021 17:07:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 1:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 06, 2021 at 12:42:21PM +0530, Bharath Rupireddy wrote:\n> > I'm sorry to say that I didn't get what was said above. We reset the\n> > latch after we come out of WaitLatch but not before going to wait. And\n> > the reason to have WL_LATCH_SET, is to exit the wait loop if MyLatch\n> > is set for that process because of other SetLatch events. Am I missing\n> > something here?\n>\n> Did you test the patch with post_auth_delay and a backend connection,\n> making sure that the delay gets correctly applied? I did, and that\n> was not working here.\n\nThanks. You are right. The issue is due to the MyLatch being set by\nSwitchToSharedLatch before WaitLatch. If we use (WL_TIMEOUT |\nWL_EXIT_ON_PM_DEATH), then the backends will honour the\npost_auth_delay as well as detect the postmaster death. Since we are\nnot using WL_LATCH_SET, I removed ResetLatch. Also, added some\ncomments around why we are not using WL_LATCH_SET.\n\nFor PreAuthDelay, there's no problem to use WL_LATCH_SET as MyLatch\nstill points to the local latch(which is not set) in\nBackendInitialize().\n\nPSA v2 patch.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 6 Jul 2021 15:54:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Jul 06, 2021 at 03:54:07PM +0530, Bharath Rupireddy wrote:\n> Thanks. You are right. The issue is due to the MyLatch being set by\n> SwitchToSharedLatch before WaitLatch. If we use (WL_TIMEOUT |\n> WL_EXIT_ON_PM_DEATH), then the backends will honour the\n> post_auth_delay as well as detect the postmaster death. Since we are\n> not using WL_LATCH_SET, I removed ResetLatch. Also, added some\n> comments around why we are not using WL_LATCH_SET.\n> \n> For PreAuthDelay, there's no problem to use WL_LATCH_SET as MyLatch\n> still points to the local latch(which is not set) in\n> BackendInitialize().\n\nFWIW, I think that it could be a good idea to use the same set of\nflags for all the pre/post_auth_delay paths for consistency. That's\nuseful when grepping for one. Please note that I don't plan to look\nmore at this patch set for this CF as I am not really excited by the\nupdates involving developer options, and I suspect more issues like\nthe one I found upthread so this needs a close lookup.\n\nIf somebody else wishes to look at it, please feel free, of course.\n--\nMichael",
"msg_date": "Tue, 6 Jul 2021 20:03:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 4:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 06, 2021 at 03:54:07PM +0530, Bharath Rupireddy wrote:\n> > Thanks. You are right. The issue is due to the MyLatch being set by\n> > SwitchToSharedLatch before WaitLatch. If we use (WL_TIMEOUT |\n> > WL_EXIT_ON_PM_DEATH), then the backends will honour the\n> > post_auth_delay as well as detect the postmaster death. Since we are\n> > not using WL_LATCH_SET, I removed ResetLatch. Also, added some\n> > comments around why we are not using WL_LATCH_SET.\n> >\n> > For PreAuthDelay, there's no problem to use WL_LATCH_SET as MyLatch\n> > still points to the local latch(which is not set) in\n> > BackendInitialize().\n>\n> FWIW, I think that it could be a good idea to use the same set of\n> flags for all the pre/post_auth_delay paths for consistency. That's\n> useful when grepping for one. Please note that I don't plan to look\n> more at this patch set for this CF as I am not really excited by the\n> updates involving developer options, and I suspect more issues like\n> the one I found upthread so this needs a close lookup.\n>\n> If somebody else wishes to look at it, please feel free, of course.\n\nThanks. Anyways, I removed WL_LATCH_SET for PreAuthDelay as well. PSA v4 patch.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 6 Jul 2021 17:07:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
},
{
"msg_contents": "On Tue, Jul 06, 2021 at 05:07:04PM +0530, Bharath Rupireddy wrote:\n> Thanks. Anyways, I removed WL_LATCH_SET for PreAuthDelay as\n> well. PSA v4 patch.\n\nFor the moment, please note that I have marked the patch as committed\nin the CF app. It may be better to start a new thread with the\nremaining bits for a separate evaluation.\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 17:25:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Can a child process detect postmaster death when in pg_usleep?"
}
] |
[
{
"msg_contents": "Hi all,\n\nIf we create a table with vacuum_index_cleanup = off or execute VACUUM\nwith INDEX_CLEANUP = off, vacuum updates pg_stat_all_tables.n_dead_tup\nto the number of HEAPTUPLE_RECENTLY_DEAD tuples. Whereas analyze\nupdates it to the sum of the number of HEAPTUPLE_DEAD/RECENTLY_DEAD\ntuples and LP_DEAD line pointers. So if the table has many LP_DEAD\nline pointers due to skipping index cleanup, autovacuum is triggered\nevery time after analyze/autoanalyze. This issue seems to happen also\non back branches, probably from 12 where the INDEX_CLEANUP option was\nintroduced.\n\nI think we can have heapam_scan_analyze_next_tuple() not count LP_DEAD\nline pointers, as lazy_scan_prune() does. Attached is a patch for that.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 14 Apr 2021 23:10:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "ANALYZE counts LP_DEAD line pointers as n_dead_tup"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 7:11 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> If we create a table with vacuum_index_cleanup = off or execute VACUUM\n> with INDEX_CLEANUP = off, vacuum updates pg_stat_all_tables.n_dead_tup\n> to the number of HEAPTUPLE_RECENTLY_DEAD tuples. Whereas analyze\n> updates it to the sum of the number of HEAPTUPLE_DEAD/RECENTLY_DEAD\n> tuples and LP_DEAD line pointers. So if the table has many LP_DEAD\n> line pointers due to skipping index cleanup, autovacuum is triggered\n> every time after analyze/autoanalyze. This issue seems to happen also\n> on back branches, probably from 12 where INDEX_CLEANUP option was\n> introduced.\n\nHmm.\n\n> I think we can have heapam_scan_analyze_next_tuple() not count LP_DEAD\n> line pointer as lazy_scan_prune() does. Attached the patch for that.\n\nlazy_scan_prune() is concerned about what the state of the table *will\nbe* when VACUUM finishes, based on its working assumption that index\nvacuuming and heap vacuuming always go ahead. This is exactly the same\nreason why lazy_scan_prune() will set LVPagePruneState.hastup to\n'false' in the presence of an LP_DEAD item -- this is not how\ncount_nondeletable_pages() considers whether the same page 'hastup' much\nlater on, right at the end of the VACUUM (it will only consider the\npage safe to truncate away if it now only contains LP_UNUSED items --\nLP_DEAD items make heap/table truncation unsafe).\n\nIn general, accounting rules like this may need to work slightly\ndifferently across near-identical functions because of \"being versus\nbecoming\" issues. It is necessary to distinguish between \"being\" code\n(e.g., this ANALYZE code, count_nondeletable_pages() and its hastup\nissue) and \"becoming\" code (e.g., lazy_scan_prune() and its approach\nto counting \"remaining\" dead tuples as well as hastup-ness). I tend to\ndoubt that your patch is the right approach because the two code paths\nalready \"agree\" once you assume that the LP_DEAD items that\nlazy_scan_prune() sees will be gone at the end of the VACUUM. I do\nagree that this is a problem, though.\n\nGenerally speaking, the \"becoming\" code from lazy_scan_prune() is not\n100% sure that it will be correct in each case, for a large variety of\nreasons. But I think that we should expect it to be mostly correct. We\ndefinitely cannot allow it to be quite wrong all the time with some\nworkloads. And so I agree that this is a problem for the INDEX_CLEANUP\n= off case, though it's equally an issue for the recently added\nfailsafe mechanism. I do not believe that it is a problem for the\nbypass-indexes optimization, though, because that is designed to only\nbe applied when there are practically zero LP_DEAD items. The\noptimization can make VACUUM record that there are zero dead tuples\nafter the VACUUM finishes, even though there were in fact a very small\nnon-zero number of dead tuples -- but that's not appreciably different\nfrom any of the other ways that the count of dead tuples could be\ninaccurate (e.g. concurrent opportunistic pruning). The specific tests\nthat we apply inside lazy_vacuum() should make sure that autovacuum\nscheduling is never affected. The autovacuum scheduling code can\nsafely \"believe\" that the indexes were vacuumed, because it really is\nthe same as if there were precisely zero LP_DEAD items (or the same\nfor all practical purposes).\n\nI'm not sure what to do, though. Both the INDEX_CLEANUP = off case and\nthe failsafe case are only intended for emergencies. And it's hard to\nknow what to do in a code path that is designed to rarely or never be\nused.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 16 Apr 2021 13:16:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE counts LP_DEAD line pointers as n_dead_tup"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 1:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm not sure what to do, though. Both the INDEX_CLEANUP = off case and\n> the failsafe case are only intended for emergencies. And it's hard to\n> know what to do in a code path that is designed to rarely or never be\n> used.\n\nHow about just documenting it in comments, as in the attached patch? I\ntried to address all of the issues with LP_DEAD accounting together.\nBoth the issue raised by Masahiko, and one or two others that were\nalso discussed recently on other threads. They all seem kind of\nrelated to me.\n\nI didn't address the INDEX_CLEANUP = off case in the comments directly\n(I just addressed the failsafe case). There is no good reason to think\nthat the situation will resolve with INDEX_CLEANUP = off, so it didn't\nseem wise to mention it too. But that should still be okay --\nINDEX_CLEANUP = off has practically been superseded by the failsafe,\nsince it is much more flexible. And, anybody that uses INDEX_CLEANUP =\noff cannot expect to never do index cleanup without seriously bad\nconsequences all over the place.\n\n\n--\nPeter Geoghegan",
"msg_date": "Fri, 16 Apr 2021 18:54:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE counts LP_DEAD line pointers as n_dead_tup"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 6:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> How about just documenting it in comments, as in the attached patch? I\n> tried to address all of the issues with LP_DEAD accounting together.\n> Both the issue raised by Masahiko, and one or two others that were\n> also discussed recently on other threads. They all seem kind of\n> related to me.\n\nI pushed a version of this just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 19 Apr 2021 18:58:04 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE counts LP_DEAD line pointers as n_dead_tup"
}
] |